# Installation

Install Cogitator and set up your development environment.

## Prerequisites
- Node.js 20+
- pnpm (recommended) — install with `npm install -g pnpm`
- Docker (optional) — for Redis, Postgres, and sandboxed execution
- Ollama (optional, for local LLMs) — or use the OpenAI/Anthropic API instead
```bash
node --version   # v20.0.0 or higher
pnpm --version   # 8.0.0 or higher
```

## Scaffolding a Project
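Setup scripts can enforce these minimums before doing anything else. A minimal POSIX-shell sketch (the helper names here are illustrative, not part of Cogitator):

```shell
# Print the major component of a "v20.11.1"- or "8.15.4"-style version string.
major_of() {
  printf '%s\n' "$1" | sed 's/^v//' | cut -d. -f1
}

# Fail with a message if the reported major version is below the minimum.
require_major() {
  tool=$1; version=$2; min=$3
  if [ "$(major_of "$version")" -lt "$min" ]; then
    echo "$tool $version is too old; need $min+" >&2
    return 1
  fi
}

# In a real script you would pass "$(node --version)" and "$(pnpm --version)".
require_major node "v20.11.1" 20 && echo "node ok"
require_major pnpm "8.15.4" 8 && echo "pnpm ok"
```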
The fastest way to start is with the CLI scaffolder:
```bash
npx create-cogitator-app my-agent
cd my-agent
npm install
npm run dev
```

`create-cogitator-app` supports six templates:
| Template | Description |
|---|---|
| `basic` | Simple agent with tools |
| `memory` | Agent with persistent memory and RAG |
| `swarm` | Multi-agent swarm coordination |
| `workflow` | DAG-based workflow orchestration |
| `api-server` | REST API server with Express |
| `nextjs` | Next.js app with a chat UI |
You can also pass flags for non-interactive mode:
```bash
npx create-cogitator-app my-agent \
  --template swarm \
  --provider openai \
  --package-manager pnpm
```

## Manual Installation
Add Cogitator to an existing project:
```bash
pnpm add @cogitator-ai/core @cogitator-ai/types zod
```

Optional packages, depending on your needs:
```bash
# Memory & RAG
pnpm add @cogitator-ai/memory

# Multi-agent swarms
pnpm add @cogitator-ai/swarms

# Workflow orchestration
pnpm add @cogitator-ai/workflows

# Server adapters
pnpm add @cogitator-ai/express   # or fastify, hono, koa

# Config file support
pnpm add @cogitator-ai/config

# MCP protocol
pnpm add @cogitator-ai/mcp
```

## Docker Services
For production features (memory, RAG, queues), start the supporting services:

```bash
docker-compose up -d
```

This starts:
- Redis (port 6379) — Short-term memory, pub/sub, job queues
- Postgres + pgvector (port 5432) — Long-term memory, semantic search
- Ollama (port 11434) — Local LLM inference
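If you are wiring these services up yourself rather than using a compose file shipped with a template, a minimal `docker-compose.yml` covering the three services might look like the following (image tags and credentials are illustrative; adjust to your environment):

```yaml
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  postgres:
    image: pgvector/pgvector:pg16   # Postgres with the pgvector extension
    environment:
      POSTGRES_USER: cogitator
      POSTGRES_PASSWORD: cogitator
      POSTGRES_DB: cogitator
    ports:
      - "5432:5432"

  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama   # persist downloaded models across restarts

volumes:
  ollama:
```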
Pull a model:

```bash
ollama pull llama3.2
```
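You can confirm the pull succeeded with `ollama list`, which prints one line per installed model. A small helper (ours, for illustration) makes that scriptable; it is shown here against canned output so the idea is clear:

```shell
# Succeed if a model name appears at the start of a line of
# `ollama list`-style output read from stdin.
has_model() {
  grep -q "^$1[: ]"
}

# Canned sample of `ollama list` output, for illustration:
sample='NAME            ID           SIZE   MODIFIED
llama3.2:latest a80c4f17acd5 2.0 GB 2 days ago'

printf '%s\n' "$sample" | has_model "llama3.2" && echo "model present"
# In practice: ollama list | has_model "llama3.2"
```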