Continuum

An AI That Actually Knows You

Continuum is an AI-powered personal context platform that builds a comprehensive, evidence-backed understanding of who you are — your relationships, memories, preferences, and history.

Unlike generic AI assistants that start every conversation from zero, Continuum maintains persistent knowledge of your life and gets smarter over time.

Core Invariant

Every claim the system makes must be traceable to lived evidence or explicitly labeled as inference. No hallucinated facts. No silent upgrades from inference to certainty.
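In data terms, this invariant suggests that every claim object carries its evidence references plus an explicit basis flag, so an inference can never silently pass as an observed fact. The following is a hypothetical sketch in Pydantic, not the repo's actual schema:

```python
from typing import Literal

from pydantic import BaseModel, Field


class Claim(BaseModel):
    """Hypothetical shape for an evidence-backed claim (illustration only)."""

    text: str                                   # the assertion itself
    basis: Literal["evidence", "inference"]     # never silently upgraded
    evidence_ids: list[str] = Field(default_factory=list)  # provenance pointers
    confidence: float = Field(ge=0.0, le=1.0)   # confidence score

    def is_grounded(self) -> bool:
        # A claim counts as grounded only if it cites lived evidence.
        return self.basis == "evidence" and bool(self.evidence_ids)
```

Keeping basis as a separate field, rather than deriving it from the confidence score, makes the "no silent upgrades" rule mechanically checkable.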

Features (MVP)

  • Data Integrations — Connect Google (Gmail, Calendar, Contacts) to pull in the raw material of your professional life
  • Entity Extraction — Automatically identify people, relationships, events, and patterns from your data
  • Conversational Interface — Ask anything about your life and get evidence-backed answers
  • Provenance System — Every claim is traceable to source data with confidence scores

Tech Stack

| Component | Technology |
| --- | --- |
| Backend | Python + FastAPI |
| Database | Supabase PostgreSQL (session data) |
| Memory Storage | Supabase Storage (canonical JSON) |
| Knowledge Graph | Neo4j (optional, semantic search) |
| Auth | Supabase Auth |
| LLM | OpenAI GPT-4 + text-embedding-3-small |
| Frontend | React + Vite |
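For reference, generating an embedding with text-embedding-3-small is a single call in the OpenAI Python SDK (v1+). This is a generic illustration, not code from this repo:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Embed a snippet of text, e.g. for semantic search over memories.
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Dinner with Alex at the harbor last Friday",
)
vector = response.data[0].embedding  # 1536-dimensional list of floats
print(len(vector))
```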

Getting Started

Prerequisites

  • Python 3.11+
  • uv (Python package manager)
  • A Supabase project
  • Google Cloud Console project with OAuth credentials
  • OpenAI API key

Setup

  1. Clone and install dependencies

     cd continuum
     uv sync

  2. Configure environment

     cp .env.example .env
     # Edit .env with your credentials

  3. Set up Supabase database

     Run the migration script in your Supabase SQL Editor:

     cat supabase/migrations/001_initial_schema.sql
     # Copy and paste into Supabase SQL Editor

  4. Start the development server

     uv run uvicorn backend.main:app --reload

  5. Visit the API docs

     Open http://localhost:8000/docs
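To sanity-check the running server, you can hit the health endpoint (a minimal sketch, assuming the default local port):

```python
import requests

# Smoke test: the dev server should answer on /health.
resp = requests.get("http://localhost:8000/health")
resp.raise_for_status()
print(resp.json())
```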

Project Structure

continuum/
├── backend/                 # Python FastAPI backend
│   ├── main.py             # FastAPI app entry
│   ├── config.py           # Settings management
│   ├── api/v1/             # API endpoints
│   │   └── schemas/        # Pydantic request/response models
│   ├── db/
│   │   ├── models/         # SQLAlchemy models
│   │   └── repositories/   # Data access layer
│   ├── agents/             # AI agents for ingestion
│   │   └── providers/      # LLM provider abstractions
│   ├── integrations/       # External API connectors
│   │   └── google/         # Gmail, Calendar, Contacts
│   ├── memory/             # Cognitive memory schemas
│   │   └── schemas/        # JSON-LD canonical records
│   ├── graph/              # Neo4j graph service
│   ├── storage/            # Storage abstraction layer
│   ├── services/           # Business logic services
│   ├── core/               # Core domain logic
│   └── workers/            # Background task workers
├── frontend/               # React + Vite frontend
│   └── src/
│       ├── components/     # UI components (shadcn/ui)
│       ├── hooks/          # React hooks
│       ├── lib/            # Utilities (supabase client)
│       ├── pages/          # Page components
│       └── types/          # TypeScript types
├── supabase/
│   └── migrations/         # SQL migrations for Supabase
└── tests/                  # Python test suite
    ├── api/v1/             # API v1 endpoint tests
    ├── extractors/         # Entity extractor tests
    ├── integrations/       # External integration tests
    ├── query/              # Query/retrieval module tests
    ├── test_api/           # API endpoint tests
    ├── test_agents/        # Agent tests
    ├── test_core/          # Core domain logic tests
    ├── test_db/            # Database tests
    ├── test_graph/         # Neo4j graph service tests
    ├── test_memory/        # Memory service tests
    ├── test_services/      # Business logic service tests
    └── test_storage/       # Storage layer tests

API Endpoints

| Endpoint | Description |
| --- | --- |
| GET /health | Health check |
| GET /api/v1/auth/me | Get current user |
| POST /api/v1/auth/logout | Logout current user |
| GET /api/v1/integrations | List connected data sources |
| GET /api/v1/integrations/google/auth-url | Start Google OAuth |
| POST /api/v1/chat | Chat with evidence-backed responses |
| GET /api/v1/entities | List extracted entities |
| GET /api/v1/entities/{id}/evidence | Get evidence for an entity |
| POST /api/v1/sync | Trigger manual sync |
| GET /api/v1/sync/status | Get sync status |
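As an illustration, a chat call might look like the sketch below. The payload field name, auth header, and response shape are assumptions; the real request and response models live in backend/api/v1/schemas:

```python
import requests

# Hypothetical request body; see backend/api/v1/schemas for the real model.
payload = {"message": "Who did I meet with last Tuesday?"}

resp = requests.post(
    "http://localhost:8000/api/v1/chat",
    json=payload,
    headers={"Authorization": "Bearer <supabase-jwt>"},  # Supabase Auth session token
)
resp.raise_for_status()
# An evidence-backed response pairs the answer with the evidence
# records and confidence scores that support it.
print(resp.json())
```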

Architecture

Data Architecture (Technical Spec v3)

Continuum uses a hybrid architecture with clear separation of concerns:

| Layer | Storage | Purpose |
| --- | --- | --- |
| Session Data | PostgreSQL | User accounts, OAuth tokens, chat history |
| Memory Data | Supabase Storage | Canonical JSON records (persons, episodes, evidence) |
| Search Index | Neo4j (optional) | Graph queries, semantic search, relationship traversal |

Data Flow

External Data → Sync Pipeline → MemoryService
                                     ↓
                    ┌────────────────┴────────────────┐
                    ↓                                 ↓
             Supabase Storage              Neo4j (optional)
             (Source of Truth)             (Search Index)
                    ↓                                 ↓
                    └────────────────┬────────────────┘
                                     ↓
                          RetrievalOrchestrator
                                     ↓
                        Evidence-Backed Response

MemoryService

The MemoryService is the primary interface for all memory operations:

  • Person CRUD: Create, read, update, list persons
  • Episode Management: Store events and interactions
  • Relationship Tracking: Model connections between people
  • Evidence Storage: Maintain provenance for all claims
  • Semantic Search: Vector similarity via Neo4j (optional)
  • Index Rebuilding: Reconstruct Neo4j from Storage

See backend/memory/README.md for detailed documentation.
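For orientation, the interface might look something like the following sketch. The method names and signatures are illustrative assumptions, not the actual API:

```python
from typing import Any, Protocol


class MemoryServiceSketch(Protocol):
    """Illustrative only; see backend/memory/README.md for the real interface."""

    # Person CRUD
    def create_person(self, person: dict[str, Any]) -> str: ...
    def get_person(self, person_id: str) -> dict[str, Any]: ...
    def update_person(self, person_id: str, fields: dict[str, Any]) -> None: ...
    def list_persons(self) -> list[dict[str, Any]]: ...

    # Episodes and relationships
    def add_episode(self, episode: dict[str, Any]) -> str: ...
    def add_relationship(
        self, source_id: str, target_id: str, kind: str, confidence: float
    ) -> str: ...

    # Provenance: every stored claim points back to evidence records
    def attach_evidence(self, claim_id: str, evidence_ids: list[str]) -> None: ...

    # Semantic search delegates to Neo4j when configured
    def semantic_search(self, query: str, limit: int = 10) -> list[dict[str, Any]]: ...

    # Reconstruct the Neo4j index from canonical records in Storage
    def rebuild_index(self) -> None: ...
```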

AI Agents

| Agent | Purpose |
| --- | --- |
| Ingest Agent | Normalizes data from integrations |
| Entity Agent | Extracts and resolves entities |
| Relationship Agent | Infers relationships with confidence |
| Narrative Agent | Generates summaries with citations |
| Provenance Agent | Enforces evidence requirements |
| Retrieval Orchestrator | Assembles evidence-backed responses |

Development

Running Tests

uv run pytest

Code Formatting

uv run ruff check .
uv run ruff format .

Type Checking

uv run mypy backend

Environment Variables

| Variable | Description |
| --- | --- |
| SUPABASE_URL | Your Supabase project URL |
| SUPABASE_ANON_KEY | Supabase anonymous key |
| SUPABASE_SERVICE_ROLE_KEY | Supabase service role key |
| DATABASE_URL | Direct PostgreSQL connection string |
| STORAGE_BUCKET | Supabase Storage bucket for memories (default: canonical) |
| NEO4J_URI | Neo4j connection URI (optional) |
| NEO4J_USER | Neo4j username (optional) |
| NEO4J_PASSWORD | Neo4j password (optional) |
| GOOGLE_CLIENT_ID | Google OAuth client ID |
| GOOGLE_CLIENT_SECRET | Google OAuth client secret |
| OPENAI_API_KEY | OpenAI API key |
| ENCRYPTION_KEY | Fernet key for encrypting OAuth tokens |
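Note that ENCRYPTION_KEY must be a valid Fernet key. One way to generate one uses the cryptography package:

```python
from cryptography.fernet import Fernet

# Prints a URL-safe base64-encoded 32-byte key, suitable for ENCRYPTION_KEY.
print(Fernet.generate_key().decode())
```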

Roadmap

  • Phase 1: Foundation (project setup, database, auth)
  • Phase 2: Google Integration (OAuth, Gmail, Calendar, Contacts sync)
  • Phase 3: Entity System (extraction, resolution, evidence tracking)
  • Phase 4: Knowledge Graph (Neo4j integration, vector search, relationships)
  • Phase 5: Conversational Interface (chat with provenance, citations)
  • Phase 6: Demo Frontend (React + Vite UI with shadcn/ui)
  • Phase 7: Cognitive Memory (preferences, decisions, autobiographical narratives)
  • Phase 8: Production Hardening (performance optimization, monitoring, scaling)

License

MIT
