
LangGraph Agents

A portfolio project demonstrating production-ready LLM agent development using LangChain and LangGraph. Showcases test-driven development, clean architecture, multi-provider support (OpenAI/Gemini), and industry best practices for building reliable agent-based systems.

Features

  • Multi-LLM Support: Unified interface for OpenAI and Google Gemini
  • LangGraph Agents: ReAct pattern implementation with tool calling
  • Type Safety: Full Pydantic validation and type hints
  • Structured Logging: Production-ready logging with structlog
  • Test-Driven: Comprehensive unit and integration test suite
  • Clean Architecture: Clear separation of concerns (domain, services, agents)

Tech Stack

  • Python 3.12+: Modern Python with type hints
  • LangChain: LLM framework and abstractions
  • LangGraph: State machine for agent workflows
  • Pydantic: Settings management and validation
  • pytest: Testing framework with async support
  • uv: Fast Python package manager

Prerequisites

  • Python 3.12 or higher
  • uv package manager
  • API keys for OpenAI and/or Google Gemini

Installation

  1. Clone the repository

    git clone <repository-url>
    cd langgraph-agents
  2. Install dependencies

    uv sync
  3. Set up environment variables

    Create a .env file in the project root:

    OPENAI_API_KEY=your-openai-api-key
    GEMINI_API_KEY=your-gemini-api-key

Usage

CLI Interface

The project includes a simple CLI for running the math agent:

# Basic usage
uv run python src/main.py "What is 25 * 4 + 10?"

# More examples
uv run python src/main.py "Calculate the sum of 5, 10, and 15"
uv run python src/main.py "What is the square root of 144?"

Programmatic Usage

import asyncio

from agents.math import execute_math_agent
from domain import LLMProvider

async def main() -> None:
    # Execute the math agent
    result = await execute_math_agent(
        query="What is 25 * 4 + 10?",
        llm_provider=LLMProvider.OPENAI,
        temperature=0.7,
    )

    print(result["response_text"])  # Final answer
    print(result["tools_used"])     # List of tools called
    print(result["iterations"])     # Number of reasoning cycles

asyncio.run(main())

Testing

The project uses folder-based test organization with comprehensive unit and integration tests.

Test Structure

tests/
├── unit/          # Fast tests, mocked dependencies
├── integration/   # Real API calls
└── conftest.py    # Shared fixtures

Running Tests

Unit Tests (Fast, No API Keys Required)

# Run all unit tests
uv run pytest tests/unit/

# With coverage report
uv run pytest tests/unit/ --cov=src --cov-report=html
open htmlcov/index.html

Integration Tests (Requires API Keys)

# Run integration tests
uv run pytest tests/integration/

All Tests

# Run everything
uv run pytest

# Run specific test file
uv run pytest tests/unit/test_math_tools.py

# Run specific test function
uv run pytest tests/unit/test_math_tools.py::test_function_name

# Run tests matching a pattern
uv run pytest -k "math"

Test Types

  • Unit Tests (tests/unit/)

    • Execute in < 1 second per test
    • No external dependencies (mocked LLM calls)
    • Default for CI/CD pipelines
  • Integration Tests (tests/integration/)

    • Make real API calls to OpenAI/Gemini
    • Require valid API keys in .env
    • Test actual LLM behavior
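As an illustration of the unit-test style (mocked LLM, no API keys), here is a self-contained sketch; `execute_math_agent_stubbed` is a stand-in, not the project's real `execute_math_agent`:

```python
# Sketch of a unit test that stubs the LLM with unittest.mock.AsyncMock,
# so the test runs fast and needs no API keys.
import asyncio
from unittest.mock import AsyncMock

async def execute_math_agent_stubbed(query: str, llm) -> dict:
    # Stand-in for the real agent: awaits the (mocked) LLM once.
    answer = await llm.ainvoke(query)
    return {"response_text": answer, "tools_used": [], "iterations": 1}

def test_agent_returns_llm_answer():
    llm = AsyncMock()
    llm.ainvoke.return_value = "110"
    result = asyncio.run(execute_math_agent_stubbed("What is 25 * 4 + 10?", llm))
    assert result["response_text"] == "110"
    llm.ainvoke.assert_awaited_once()

test_agent_returns_llm_answer()
```

The same pattern scales to the real agent by patching the LLM factory in the services layer, which is what keeps unit tests under a second each.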

Project Structure

langgraph-agents/
├── src/
│   ├── agents/           # LangGraph agent implementations
│   │   └── tools/        # LangChain tools (@tool decorator)
│   ├── core/             # Core utilities (logging)
│   ├── domain/           # Domain models and enums
│   ├── services/         # External service abstractions
│   ├── config.py         # Pydantic settings
│   └── main.py           # CLI entry point
├── tests/
│   ├── unit/             # Unit tests (mocked)
│   ├── integration/      # Integration tests (real APIs)
│   └── conftest.py       # Shared test fixtures
├── .env                  # Environment variables (create this)
├── pyproject.toml        # Project configuration
└── README.md             # This file

Development

Code Quality

Format code

uv run black src/ tests/

Lint

uv run ruff check src/ tests/

Type checking

uv run mypy src/

Adding New Agents

  1. Create tools in src/agents/tools/{agent_name}.py:

    from langchain_core.tools import tool
    
    @tool
    def my_tool(text: str) -> str:
        """Clear description for the LLM."""
        return "result"
    
    MY_AGENT_TOOLS = [my_tool]
  2. Create agent in src/agents/{agent_name}.py:

    from typing import Annotated
    
    from langchain_core.messages import BaseMessage
    from langgraph.graph import StateGraph
    from langgraph.graph.message import add_messages
    from typing_extensions import TypedDict
    
    from domain import LLMProvider
    
    class MyAgentState(TypedDict):
        messages: Annotated[list[BaseMessage], add_messages]
    
    def create_my_agent_graph(llm_provider: LLMProvider):
        # Build StateGraph with agent and tools nodes
        pass
    
    async def execute_my_agent(query: str):
        # Convenience execution function
        pass
  3. Write tests following TDD approach

Architecture Guidelines

  • Domain Layer: Core models, no external dependencies
  • Services: Abstract external dependencies (LLMs, APIs)
  • Agents: LangGraph implementations using services
  • Tools: Pure functions with @tool decorator
  • Imports: Always use absolute imports from src/

Configuration

Settings are managed via Pydantic and loaded from .env:

from config import get_settings

settings = get_settings()  # Cached singleton
print(settings.openai_model)  # "gpt-4o-mini" (default)

Available Settings:

  • OPENAI_API_KEY: OpenAI API key (required)
  • GEMINI_API_KEY: Google Gemini API key (required)
  • OPENAI_MODEL: OpenAI model name (default: "gpt-4o-mini")
  • GEMINI_MODEL: Gemini model name (default: "gemini-2.5-flash")
  • LLM_TEMPERATURE: Generation temperature (default: 0.7)
  • LLM_MAX_TOKENS: Max output tokens (default: 2048)
  • ENVIRONMENT: Environment name (default: "development")

License

MIT

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Write tests first (TDD)
  4. Implement feature
  5. Run tests and linting
  6. Submit pull request
