Intervux AI runs resume-aware interviews, evaluates answers, tracks model experiments, and provides recruiter decision support.
Hiring and interview preparation workflows today are still:
- ⚡️ Static & unrealistic - weak simulation of real interview pressure and rigid Q&A scripts.
- 📝 Text-only - missing multimodal signals, speech cadence, and real-time stress testing.
- 📊 Hard to compare - inconsistent human evaluation across candidates due to inherent bias.
Modern hiring requires structured evaluation, adaptive questioning, and reliable analytics.
Intervux introduces a real-time AI interview runtime combined with recruiter intelligence tools.
- 🧠 Context Awareness: Resume parsing builds deep interview context and skill profiles automatically.
- 🎯 Adaptive Interview Engine: Dynamic questioning adjusts live based on candidate responses.
- 📈 Evaluation Pipeline: Answers are scored using structured, unbiased evaluation signals.
- 📋 Recruiter Intelligence: Dashboards provide experiment tracking, telemetry, and decision support algorithms.
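To make the "structured, unbiased evaluation signals" idea concrete, here is a minimal sketch of weighted rubric scoring. The dimensions and weights (`relevance`, `depth`, `clarity`) are illustrative assumptions, not the actual signals Intervux computes internally:

```python
from dataclasses import dataclass

# Hypothetical rubric: dimensions and weights are illustrative only.
@dataclass
class AnswerSignals:
    relevance: float  # 0.0-1.0, topical match with the question
    depth: float      # 0.0-1.0, technical detail and reasoning
    clarity: float    # 0.0-1.0, structure and communication

WEIGHTS = {"relevance": 0.5, "depth": 0.3, "clarity": 0.2}

def score_answer(signals: AnswerSignals) -> float:
    """Combine per-dimension signals into a single 0-100 score."""
    raw = (signals.relevance * WEIGHTS["relevance"]
           + signals.depth * WEIGHTS["depth"]
           + signals.clarity * WEIGHTS["clarity"])
    return round(raw * 100, 1)

print(score_answer(AnswerSignals(relevance=0.9, depth=0.7, clarity=0.8)))
```

Scoring every candidate against the same fixed rubric is what makes cross-candidate comparison consistent, in contrast to the ad-hoc human evaluation described above.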
- (Placeholder) Watch the Recruiter Dashboard Demo Video (YouTube) 🎥
- (Placeholder) Watch the Live Candidate Interview Runtime (YouTube) 🎥
- (Placeholder) Live Demo Application Link 🔗
If you are looking to understand the underlying system design, navigate to our docs/ folder:
- 🏗️ System Architecture (ARCHITECTURE.md): Deep dive into the backend services and the React frontend structure.
- 🔄 System Flow (SYSTEM_FLOW.md): The data flow pipelines (LLM interactions, Telemetry cycles).
- 📐 High-Level Design (HLD_Intervux_AI.docx): Broad platform layouts for enterprise evaluation.
- 🔧 Low-Level Design (LLD_Intervux_AI.docx): Component-level class diagrams and state logic.
If you are not familiar with setting up web applications locally, Docker is the easiest and recommended way to run this application. It removes the need to install Python, Node, Postgres, or Redis manually on your computer.
All you need installed on your machine are Git and Docker Desktop.

```bash
git clone https://github.com/YourUsername/intervux-ai.git
cd intervux-ai
```

Inside the root folder, copy the example Docker variables into a new .env.docker file.
```bash
# Windows (PowerShell)
Copy-Item .env.docker.example .env.docker

# Mac / Linux
cp .env.docker.example .env.docker
```

Open .env.docker in your text editor and add your AI credentials (e.g., set GOOGLE_API_KEY=your_gemini_api_key_here). The remaining settings are already configured for Docker.
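In the running containers a library such as python-dotenv typically loads this file for you; purely to illustrate the KEY=VALUE format the file uses, here is a minimal parser sketch (not the project's actual loading code):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore comments and empty lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """# AI credentials
GOOGLE_API_KEY=your_gemini_api_key_here
REDIS_URL=redis://redis:6379/0
"""
print(parse_env(sample)["GOOGLE_API_KEY"])
```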
Open Docker Desktop, then run the build command in your terminal:

```bash
docker-compose up --build
```

Note: the first time you run this, it may take 3-5 minutes to download Linux packages and build the databases.
Once the terminal logs calm down, the servers are running!
- Frontend Dashboard: http://localhost:5173
- Backend API Docs: http://localhost:8000/docs
- Celery Task Monitor (Flower): http://localhost:5555
Use these default credentials to log in to the Frontend Dashboard:
| Role | Email | Password |
|---|---|---|
| Admin | admin@intervux.ai | admin123 |
| Recruiter | recruiter@intervux.ai | recruiter123 |
If you are actively developing code and want terminal-level debug control without running containers, follow these hybrid steps.
- 🐍 Python 3.10+
- 📦 Node.js 18+
- 🗄️ PostgreSQL Database (Running locally)
- 🛑 Redis Server (Running locally)
```bash
python -m venv myenv
.\myenv\Scripts\activate    # Mac / Linux: source myenv/bin/activate
pip install -r backend/requirements.txt
```

Windows developers only: also run `pip install -r requirements/windows.txt`.
Create a local .env copy and point it at your local Postgres/Redis instances:

```bash
GOOGLE_API_KEY=your_key_here
DATABASE_URL=postgresql://postgres:password@localhost:5432/intervux
REDIS_URL=redis://localhost:6379/0
LLM_PROVIDER=gemini
```

Start the FastAPI backend:

```bash
uvicorn backend.main:app --reload
```

In a new terminal:
```bash
cd frontend
npm install
npm run dev
```

Our application uses a strict domain-driven modular architecture, split cleanly between the React interface and the Python services.
```
intervux-ai/
|- backend/                     # FastAPI Python runtime
|  |- main.py                   # App entrypoint + ASGI hook
|  |- ai/                       # AI namespaces (models, STT, engines)
|  |- api/routes/               # Granular REST API endpoints
|  |- auth/                     # JWT, RBAC security
|  |- background/               # Background task orchestration
|  |- core/                     # LLM brains and configurations
|  |- db/                       # SQLAlchemy engine + models
|  |- services/                 # Evaluation telemetry routers
|  `- sockets/                  # Real-time WebSockets logic
|- frontend/                    # React / Vite SPA
|  |- src/components/dashboard/ # Shared React widgets
|  |- src/pages/                # Top-level routable views
|  `- src/types/                # Global TypeScript interfaces
|- tests/                       # Pytest suites
|- requirements/                # Python pip requirement files
|- docs/                        # Project documentation & architecture
|- docker-compose.yaml          # Local Docker orchestration
|- Dockerfile                   # Container manifest
`- .env.docker.example          # Container secrets template
```
Intervux connects a modern browser UX to powerful multi-agent AI pipelines processed by background Celery workers.
```mermaid
flowchart TD
    User[Candidate User] -->|UI + Audio| Frontend[React Frontend]
    Frontend -->|HTTP / WebSocket| Backend[FastAPI Backend]
    Backend -->|Async Tasking| Redis[(Redis Broker)]
    Redis -->|Evaluation Jobs| Worker[Celery Worker]
    Worker -->|LLM Prompts| Provider[LLM Provider]
    Worker -->|Persist Evaluations| PostgreSQL[(PostgreSQL)]
    Backend -->|Stream Results| Dashboard[Recruiter Dashboard]
```
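The broker/worker hand-off in the diagram can be sketched in miniature. In the real stack Redis is the broker and Celery runs the worker; here, purely as an illustration, a `queue.Queue` and a thread stand in for both, and the "evaluation" is a placeholder string:

```python
import queue
import threading

# In-memory stand-ins for the Redis broker and the results store.
jobs: "queue.Queue[dict | None]" = queue.Queue()
results: dict[str, str] = {}

def worker() -> None:
    """Consume jobs until a None sentinel arrives, like a Celery worker loop."""
    while True:
        job = jobs.get()
        if job is None:
            break  # sentinel: shut the worker down
        # Stand-in for the LLM evaluation call the real Celery task would make.
        results[job["interview_id"]] = f"evaluated:{job['answer']}"

t = threading.Thread(target=worker)
t.start()
jobs.put({"interview_id": "abc123", "answer": "I would use an index."})
jobs.put(None)  # signal shutdown after the job
t.join()
print(results["abc123"])
```

The point of the indirection is the same as in the full system: the backend returns immediately after enqueueing, and slow LLM work happens off the request path.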
- Context Awareness: Natural Language Processing (NLP) parses resumes to build interview context parameters.
- Adaptive Interview Engine: Next-question logic is determined probabilistically by evaluating immediate prior answers from candidates.
- Recruiter Intelligence: Endpoints such as `/api/interview/{id}/decision` yield explicit, data-backed recruiter recommendations.
- Experiment Tracking: Allows administrators to trace and compare LLM latency (`p99`), response quality, and throughput per deployed model.
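As a concrete illustration of the `p99` latency comparison, here is a nearest-rank percentile sketch over made-up latency samples (the model names and numbers are hypothetical, not measurements from Intervux):

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile, e.g. p=0.99 for p99 latency."""
    ranked = sorted(samples)
    k = max(0, math.ceil(p * len(ranked)) - 1)  # nearest-rank index
    return ranked[k]

# Illustrative latencies (ms) for two hypothetical model deployments.
latencies = {
    "model-a": [120, 135, 150, 140, 980, 130, 125, 145, 138, 142],
    "model-b": [310, 295, 305, 320, 300, 315, 290, 1250, 298, 302],
}
for model, ms in latencies.items():
    print(model, "p99:", percentile(ms, 0.99))
```

Tail percentiles like p99 surface the occasional slow LLM call that an average would hide, which is why they are the headline metric for comparing deployments.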
To validate system reliability before a deployment, run the unified test suite.
```bash
pip install -r requirements/test.txt
pytest -v
```

CI (Continuous Integration) also runs these assertions automatically on every push via .github/workflows/tests.yml.
Distributed under the MIT License. Built with ❤️ by Vishal Gorule.