
A modular chatbot project with separate frontend and backend components designed for Raspberry Pi deployment. Features a lightweight web UI (HTML/CSS/JS) and a Python backend running inside a Docker container, served via Apache2.




🤖 AI-Enhanced ChatBot

A containerized full-stack chatbot application powered by Docker, built for seamless integration into any website. Designed with scalability in mind, it connects a lightweight front-end to a Python back-end and an Ollama LLM server. Perfect for embedding a responsive, smart chat assistant into existing websites — hosted on a Raspberry Pi or any Docker-compatible environment.

Future updates will include RAG (Retrieval-Augmented Generation) capabilities, allowing the chatbot to serve domain-specific answers from uploaded documents or manuals.


Screenshot: chatBot running on a test website.

🌐 Frontend

Feature Description
index.html Chat UI, can be embedded into any website
style.css Handles layout and responsiveness
chatbot.js Sends and receives chat messages using the fetch() API
  • Requires a running backend (Apache proxy + Ollama) to function; see the request sketch after this list.
  • Includes chat toggling, smooth scrolling, and animated message handling.
  • Built for easy customization: simply copy the files into your website.
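
As a rough illustration, a request from the widget might look like the sketch below. The helper name, the /api/chat path exposed by the Apache proxy, and the response handling are assumptions; adapt them to your own setup.

```javascript
// Hypothetical helper: post one user message through the Apache proxy to Ollama.
// Assumes the proxy forwards /api/chat to http://ollama:11434/api/chat.
async function sendMessage(userText) {
  const response = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "tinyllama",                               // any model pulled into Ollama
      messages: [{ role: "user", content: userText }],  // chat history goes here
      stream: false                                     // ask for a single JSON reply
    })
  });
  const data = await response.json();
  return data.message.content;                          // assistant's reply text
}
```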

🧠 Backend Architecture

This project uses a containerized microservice-style setup with clear separation between the chatbot frontend and the LLM engine.

🔧 Components

Component Role
chatbot Web interface container that serves the frontend and proxies chat requests to the LLM backend
ollama LLM backend container running Ollama, exposing models on port 11434
  • The chatbot container runs a web app (currently static HTML + JavaScript) and connects to the Ollama API in the ollama container.
  • Apache2 is configured with a custom httpd.conf to handle proxy routing.
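
For orientation, a minimal sketch of what such proxy rules could look like, assuming mod_proxy and mod_proxy_http are enabled and the Ollama service is reachable as ollama on the Compose network (the /api/ path is illustrative, not taken from the repository's httpd.conf):

```apache
# Minimal reverse-proxy sketch (not the repository's actual httpd.conf)
ProxyRequests Off
ProxyPreserveHost On

# Forward chat requests from the web UI to the Ollama container
ProxyPass        /api/ http://ollama:11434/api/
ProxyPassReverse /api/ http://ollama:11434/api/
```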

🧪 LLM Integration with Ollama

Component Port Description
Ollama API 11434 Local LLM model server (e.g., tinyllama)
  • Supports pulling and running open LLMs like tinyllama, mistral, and others via Ollama.
  • Models are downloaded once and stored in a Docker volume (ollama_models).
  • You can interact with the LLM via simple HTTP requests to http://localhost:11434/api/chat.
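
For example, assuming port 11434 is published on the host and tinyllama has already been pulled, a quick test could look like this:

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "tinyllama",
  "messages": [{ "role": "user", "content": "Hello!" }],
  "stream": false
}'
```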

🐳 Docker Setup

docker-compose.yml (orchestrates the chatbot and ollama services)
Dockerfile (builds the chatbot container image)
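
For orientation, a simplified compose sketch reflecting the components described in this README; the service names, ports, and volume follow the descriptions above, but the actual file in the repository may differ:

```yaml
# Illustrative sketch only, not the repository's actual docker-compose.yml
services:
  chatbot:
    build: .                 # Apache2 + static frontend
    ports:
      - "85:80"              # chat UI on host port 85
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama     # official Ollama image
    volumes:
      - ollama_models:/root/.ollama   # models persisted in a named volume
    ports:
      - "11434:11434"        # Ollama API

volumes:
  ollama_models:
```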


🚀 Getting Started

git clone https://github.com/Janos11/chatBot.git
cd chatBot
docker compose up -d

Build and run with Docker Compose:

docker compose up --build

Then open the chat interface in your browser: 👉 http://localhost:85 or http://<your-pi-ip>:85
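
Before the first chat will work, at least one model needs to be pulled into the Ollama container, for example (assuming the service is named ollama, as in the sketch above):

```bash
docker compose exec ollama ollama pull tinyllama
```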


🧾 Technologies Used

Technology Purpose Link
Docker Containerization docker.com
Flask Lightweight backend API flask.palletsprojects.com
Ollama LLM API and model hosting ollama.com
Apache2 Web server and reverse proxy httpd.apache.org
HTML/CSS/JS Frontend interface -
Raspberry Pi Target embedded deployment platform raspberrypi.com

📌 Documentation

Section Status
🔧 Resolving CORS issue resolving_cors_issue_ollama_api_integration.md
📚 How to Add Documents for RAG Coming soon
🧪 Testing Instructions Coming soon
✅ Full Stack Summary Coming soon
🗂️ Git Commands git_cheat_sheet.md
🦙 Ollama Commands ollama_commands.md

✅ Full Stack Summary

This project is a containerized, local LLM chatbot system with a complete frontend-to-backend pipeline:

Component Technology/Description
Frontend Embeddable chat UI built with HTML, CSS, and JavaScript
Proxy Layer Apache2 (inside chatbot container) proxies to Ollama API
AI Layer Ollama running local LLMs (e.g., tinyllama, mistral)
Deployment Docker Compose orchestrates both chatbot and ollama services
Hosting Designed for local deployment (e.g., Raspberry Pi) or cloud VPS

🤝 Contributors

János Rostás 👨‍💻 Electronic & Computer Engineer
🧠 Passionate about AI, LLMs, and RAG systems
🐳 Docker & Linux Power User
🔧 Raspberry Pi Builder | Automation Fanatic
💻 Git & GitHub DevOps Explorer
📦 Loves tinkering with Ollama, containerized models, and APIs
🌐 janosrostas.co.uk
🔗 LinkedIn
🐙 GitHub | 🐋 Docker Hub
ChatGPT 🤖 AI Pair Programmer by OpenAI
💡 Supports brainstorming, prototyping, and debugging
📚 Backed by years of programming knowledge and best practices
Grok 🤖 AI Assistant by xAI
🚀 Accelerates human scientific discovery
💬 Provides helpful and truthful answers
🌐 Accessible on grok.com and X platforms
