Annotated file tree with the four layers we use on every production agent: containerization, domain, application, infrastructure, plus tests.
awesome-agent-api
├── .github/                        # CI workflows, issue templates
├── notebooks/                      # Exploration, data analysis
├── data/                           # Sample data, fixtures
├── Makefile                        # One-line task runners
├── docker-compose.yaml             [Containerization]
├── Dockerfile                      [Containerization]
├── pyproject.toml                  # Deps, lint, build config
├── src/
│   └── awesome_agent_api/
│       ├── __init__.py
│       ├── config.py               # Env vars, settings
│       ├── domain/                 [Domain Layer]
│       │   ├── __init__.py
│       │   ├── exceptions.py       # Typed errors
│       │   ├── memory/             # Memory abstractions
│       │   ├── prompts/            # Prompt templates (versioned)
│       │   ├── tools/              # Tool definitions (MCP-friendly)
│       │   └── utils.py
│       ├── application/            [Application Layer]
│       │   ├── __init__.py
│       │   ├── chat_service.py
│       │   ├── evaluation_service.py
│       │   ├── ingest_documents_service.py
│       │   └── reset_memory_service.py
│       └── infrastructure/         [Infrastructure Layer]
│           ├── __init__.py
│           ├── api/                # HTTP surface
│           │   ├── __init__.py
│           │   ├── main.py         # FastAPI / Hono entrypoint
│           │   └── models.py       # Request / response schemas
│           ├── db/                 # Migrations, repositories
│           ├── llm_providers/      # Anthropic, OpenAI, Groq...
│           ├── mcp_clients/        # MCP server connectors
│           ├── monitoring/         # Logs, traces, metrics
│           └── static/             # Assets, public files
└── tests/                          [Tests]
    ├── __init__.py
    ├── conftest.py                 # Shared fixtures
    └── test_example_agent_api.py
Dockerfile + docker-compose.yaml
Reproducible local dev and one-command production deploys. Pin every dependency, expose only what's needed, and use a multi-stage build for slim images.
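A minimal sketch of what that multi-stage Dockerfile could look like for this tree. The module path and the uvicorn entrypoint are assumptions based on the FastAPI layout above, not part of the cheatsheet:

```dockerfile
# Build stage: compile wheels so the final image ships no build tooling.
FROM python:3.12-slim AS builder
WORKDIR /app
COPY pyproject.toml ./
COPY src/ src/
RUN pip wheel --no-cache-dir --wheel-dir /wheels .

# Runtime stage: only the installed package and its pinned deps.
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
EXPOSE 8000
# Entrypoint path assumed from the tree (infrastructure/api/main.py).
CMD ["uvicorn", "awesome_agent_api.infrastructure.api.main:app", \
     "--host", "0.0.0.0", "--port", "8000"]
```

Only port 8000 is exposed, and the builder stage never reaches the final image.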
Pure business logic, zero I/O
Prompts, tool definitions, memory abstractions, typed exceptions. No HTTP, no DB, no LLM provider. Easy to unit test.
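A sketch of what "pure business logic, zero I/O" means in practice inside `domain/`. Names like `ToolSpec` and `ToolNotFoundError` are illustrative, not taken from the repo:

```python
from dataclasses import dataclass
from typing import Callable

# domain/exceptions.py -- typed errors: callers catch by type, never by
# matching on message strings.
class AgentError(Exception):
    """Base class for all domain errors."""

class ToolNotFoundError(AgentError):
    def __init__(self, name: str) -> None:
        super().__init__(f"unknown tool: {name}")
        self.name = name

# domain/tools/ -- a tool is metadata plus a pure function; no SDK imports.
@dataclass(frozen=True)
class ToolSpec:
    name: str
    description: str
    handler: Callable[[str], str]

# domain/prompts/ -- prompt rendering is pure string work, trivially testable.
def render_prompt(template: str, **values: str) -> str:
    return template.format(**values)
```

Because nothing here touches HTTP, a database, or an LLM provider, every piece is unit-testable with plain asserts.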
Use cases / services
One service per use case (chat, ingest, evaluate, reset). Orchestrates the domain and infrastructure. Becomes your API surface.
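One way to sketch such a service (a hypothetical `chat_service.py`): the service depends only on small interfaces, and infrastructure injects the concrete adapters. Interface and field names here are assumptions:

```python
from dataclasses import dataclass
from typing import Protocol

# Ports the use case needs; infrastructure supplies the implementations.
class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class Memory(Protocol):
    def load(self, session_id: str) -> list[str]: ...
    def append(self, session_id: str, message: str) -> None: ...

@dataclass
class ChatService:
    llm: LLMProvider
    memory: Memory

    def chat(self, session_id: str, user_message: str) -> str:
        # Orchestration only: load history, build prompt, call LLM, persist.
        history = self.memory.load(session_id)
        prompt = "\n".join([*history, f"user: {user_message}"])
        reply = self.llm.complete(prompt)
        self.memory.append(session_id, f"user: {user_message}")
        self.memory.append(session_id, f"assistant: {reply}")
        return reply
```

The same `ChatService` can sit behind a FastAPI route, a CLI command, or a queue worker, because it never imports any of them.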
API, DB, LLMs, MCP clients, monitoring
All external systems live here: the HTTP framework, database, LLM provider clients, MCP servers, and telemetry. Each is a swappable adapter behind an interface the application layer owns.
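A sketch of the swappable-adapter idea for `llm_providers/`. The registry, class names, and `get_provider` helper are hypothetical; real vendor SDK wiring is deliberately omitted:

```python
from typing import Protocol

class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class FakeProvider:
    """Deterministic stand-in for tests and local dev -- no network."""
    def complete(self, prompt: str) -> str:
        return f"[fake] {len(prompt)} chars in"

class AnthropicProvider:
    """Shape of a real adapter; the client call is vendor-specific."""
    def __init__(self, client) -> None:
        self.client = client
    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor SDK via self.client.
        raise NotImplementedError

def get_provider(name: str) -> LLMProvider:
    # Config-driven selection: swapping providers is a one-line change.
    registry: dict[str, type] = {"fake": FakeProvider}
    return registry[name]()
```

Application code only ever sees `LLMProvider`, so swapping Anthropic for OpenAI or Groq never touches a service.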
Unit + integration
Tests live alongside the layers they cover, with conftest.py holding shared fixtures. Eval suites in tests/eval/ run in CI.
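A sketch of what that shared conftest.py could hold, assuming pytest: fixtures that swap real adapters for fakes so unit tests never hit the network. All names here are illustrative:

```python
import pytest

class FakeLLM:
    """Deterministic LLM double injected wherever a provider is expected."""
    def complete(self, prompt: str) -> str:
        return "stubbed reply"

@pytest.fixture
def fake_llm() -> FakeLLM:
    return FakeLLM()

@pytest.fixture
def chat_history() -> list[str]:
    # Shared sample conversation reused across unit and integration tests.
    return ["user: hello", "assistant: hi there"]
```

Any test in the suite can then take `fake_llm` or `chat_history` as an argument and get the same wiring for free.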
99% of our agent projects follow this structure. Clean separation of layers makes it trivial to swap LLMs, swap the API framework, or run the same business logic from a CLI, an API, or a queue worker. Want us to bootstrap one for your team? Request a quote.
We turn AI cheatsheets into production code. Tell us what you're building.