A containerized full-stack chatbot application powered by Docker, built for seamless integration into any website. Designed with scalability in mind, it connects a lightweight front-end to a Python back-end and an Ollama LLM server. Perfect for embedding a responsive, smart chat assistant into existing websites — hosted on a Raspberry Pi or any Docker-compatible environment.
Future updates will include RAG (Retrieval-Augmented Generation) capabilities, allowing the chatbot to serve domain-specific answers from uploaded documents or manuals.
| File | Description |
|---|---|
| `index.html` | Chat UI; can be embedded into any website |
| `style.css` | Handles layout and responsiveness |
| `chatbot.js` | Sends and receives chat messages using the `fetch()` API |
- Requires a running backend (Apache proxy + Ollama) to function.
- Includes chat toggling, smooth scrolling, and animated message handling.
- Built for ease of customization — simply copy to your website.
This project uses a containerized microservice-style setup with clear separation between the chatbot frontend and the LLM engine.
| Component | Role |
|---|---|
| `chatbot` | Web interface container that serves the frontend and proxies chat requests to the LLM backend |
| `ollama` | LLM backend container running Ollama, exposing models on port 11434 |

- The `chatbot` container runs a web app (currently static HTML + JavaScript) and connects to the Ollama API in the `ollama` container.
- Apache2 is configured with a custom `httpd.conf` to handle proxy routing.
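For illustration, the proxy rule inside `httpd.conf` might look like the following sketch. The service name `ollama` and the `/api/chat` path are assumptions based on the compose setup described in this README; the actual file in the repository may differ:

```apache
# Requires mod_proxy and mod_proxy_http to be enabled.
# Forward chat API calls from the chatbot container to the ollama service.
ProxyPass        /api/chat http://ollama:11434/api/chat
ProxyPassReverse /api/chat http://ollama:11434/api/chat
```

Because both containers share the Compose network, Apache can reach the LLM backend by service name rather than a hard-coded IP.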
| Component | Port | Description |
|---|---|---|
| Ollama API | 11434 | Local LLM model server (e.g., `tinyllama`) |

- Supports pulling and running open LLMs like `tinyllama`, `mistral`, and others via Ollama.
- Models are downloaded once and stored in a Docker volume (`ollama_models`).
- You can interact with the LLM via simple HTTP requests to `http://localhost:11434/api/chat`.
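As a sketch of such a request (the model name and prompt are illustrative; `tinyllama` must already be pulled into the `ollama` container), the body for Ollama's `/api/chat` endpoint can be built and sent with any HTTP client, for example Python's standard library:

```python
import json
import urllib.request

# Illustrative payload for Ollama's /api/chat endpoint.
payload = {
    "model": "tinyllama",
    "messages": [{"role": "user", "content": "Hello, who are you?"}],
    "stream": False,  # request a single JSON response instead of a stream
}

# Build the POST request; method becomes POST because data is set.
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the ollama container is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["message"]["content"])
print(req.full_url, req.get_method())
```

Setting `"stream": False` keeps the example simple; by default Ollama streams the reply as newline-delimited JSON chunks.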
Clone the repository:

```bash
git clone https://github.com/Janos11/chatBot.git
cd open-webui
```

Start the stack in detached mode:

```bash
docker compose up -d
```

Or build and run with Docker Compose:

```bash
docker compose up --build
```

Then open your browser: 👉 http://localhost:85 or http://<your-pi-ip>:85
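For reference, a minimal `docker-compose.yml` matching the setup described above might look like this sketch. The build context and model mount path are assumptions; consult the file in the repository for the authoritative version:

```yaml
services:
  chatbot:
    build: ./chatbot        # assumed build context for the Apache + frontend image
    ports:
      - "85:80"             # chat UI on http://localhost:85
    depends_on:
      - ollama
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"       # Ollama API
    volumes:
      - ollama_models:/root/.ollama   # models persist across restarts

volumes:
  ollama_models:
```

The named volume `ollama_models` is what lets models be downloaded once and reused across container restarts.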
| Technology | Purpose | Link |
|---|---|---|
| Docker | Containerization | docker.com |
| Flask | Lightweight backend API | flask.palletsprojects.com |
| Ollama | LLM API and model hosting | ollama.com |
| Apache2 | Web server and reverse proxy | httpd.apache.org |
| HTML/CSS/JS | Frontend interface | - |
| Raspberry Pi | Target embedded deployment platform | raspberrypi.com |
| Section | Status / Link |
|---|---|
| 🔧 Resolving CORS issue | resolving_cors_issue_ollama_api_integration.md |
| 📚 How to Add Documents for RAG | Coming soon |
| 🧪 Testing Instructions | Coming soon |
| ✅ Full Stack Summary | Coming soon |
| 🗂️ Git Commands | git_cheat_sheet.md |
| 🦙 Ollama Commands | ollama_commands.md |
This project is a containerized, local LLM chatbot system with a complete frontend-to-backend pipeline:
| Component | Technology/Description |
|---|---|
| Frontend | Embeddable chat UI built with HTML, CSS, and JavaScript |
| Proxy Layer | Apache2 (inside chatbot container) proxies to Ollama API |
| AI Layer | Ollama running local LLMs (e.g., tinyllama, mistral) |
| Deployment | Docker Compose orchestrates both chatbot and ollama services |
| Hosting | Designed for local deployment (e.g., Raspberry Pi) or cloud VPS |
**János Rostás**
- 👨‍💻 Electronic & Computer Engineer
- 🧠 Passionate about AI, LLMs, and RAG systems
- 🐳 Docker & Linux Power User
- 🔧 Raspberry Pi Builder | Automation Fanatic
- 💻 Git & GitHub DevOps Explorer
- 📦 Loves tinkering with Ollama, containerized models, and APIs
- 🌐 janosrostas.co.uk
- 🐙 GitHub | 🐋 Docker Hub

**ChatGPT**
- 🤖 AI Pair Programmer by OpenAI
- 💡 Supports brainstorming, prototyping, and debugging
- 📚 Backed by years of programming knowledge and best practices

**Grok**
- 🤖 AI Assistant by xAI
- 🚀 Accelerates human scientific discovery
- 💬 Provides helpful and truthful answers
- 🌐 Accessible on grok.com and X platforms
