
Installation

Install Cogitator and set up your development environment.

Prerequisites

  • Node.js 20+
  • pnpm (recommended) — npm install -g pnpm
  • Docker (optional) — for Redis, Postgres, and sandboxed execution
  • Ollama (optional, for local LLMs) — or use the OpenAI/Anthropic API
Verify your versions:

node --version    # v20.0.0 or higher
pnpm --version    # 8.0.0 or higher

Scaffolding a Project

The fastest way to start is with the CLI scaffolder:

npx create-cogitator-app my-agent
cd my-agent
npm install
npm run dev

create-cogitator-app supports 6 templates:

Template      Description
basic         Simple agent with tools
memory        Agent with persistent memory and RAG
swarm         Multi-agent swarm coordination
workflow      DAG-based workflow orchestration
api-server    REST API server with Express
nextjs        Next.js app with chat UI

You can also pass flags for non-interactive mode:

npx create-cogitator-app my-agent \
  --template swarm \
  --provider openai \
  --package-manager pnpm
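If you scaffold with a cloud provider, you will need an API key before the dev server can make model calls. The variable names below are an assumption based on the OpenAI and Anthropic SDKs' standard conventions — check the generated project for the exact names it reads:

```shell
# Assumed: the conventional provider environment variables.
export OPENAI_API_KEY="sk-..."          # for --provider openai
export ANTHROPIC_API_KEY="sk-ant-..."   # for --provider anthropic

# Confirm the variable is set before starting the dev server
test -n "$OPENAI_API_KEY" && echo "OPENAI_API_KEY is set"
```

Put these in a .env file rather than your shell profile if you plan to commit the project to version control (and add .env to .gitignore).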

Manual Installation

Add Cogitator to an existing project:

pnpm add @cogitator-ai/core @cogitator-ai/types zod
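With the core packages installed, a first agent looks roughly like the following. This is a hedged sketch, not verified against the real API: the `Agent` class, the `tool` helper, and the `run` method are assumptions inferred from the package names above. The use of zod for tool parameters is grounded in it being a required install alongside `@cogitator-ai/core`.

```typescript
// Sketch only: Agent, tool, and run are assumed exports of
// @cogitator-ai/core — check the package's actual API surface.
import { Agent, tool } from '@cogitator-ai/core';
import { z } from 'zod';

// Tool parameters are described with a zod schema.
const getWeather = tool({
  name: 'get_weather',
  description: 'Look up the current weather for a city',
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => `Sunny in ${city}`, // stub implementation
});

const agent = new Agent({
  name: 'assistant',
  model: 'llama3.2', // assumes Ollama is running locally
  tools: [getWeather],
});

const result = await agent.run('What is the weather in Oslo?');
console.log(result);
```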

Optional packages depending on your needs:

# Memory & RAG
pnpm add @cogitator-ai/memory

# Multi-agent swarms
pnpm add @cogitator-ai/swarms

# Workflow orchestration
pnpm add @cogitator-ai/workflows

# Server adapters
pnpm add @cogitator-ai/express    # or fastify, hono, koa

# Config file support
pnpm add @cogitator-ai/config

# MCP protocol
pnpm add @cogitator-ai/mcp

Docker Services

For production features (memory, RAG, queues), start infrastructure:

docker-compose up -d

This starts:

  • Redis (port 6379) — Short-term memory, pub/sub, job queues
  • Postgres + pgvector (port 5432) — Long-term memory, semantic search
  • Ollama (port 11434) — Local LLM inference
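If your project was not scaffolded with a compose file, a minimal one matching the services and ports listed above might look like this. The image names are assumptions (pgvector/pgvector and ollama/ollama are the common community images), and the credentials are placeholders:

```yaml
services:
  redis:
    image: redis:7
    ports: ["6379:6379"]

  postgres:
    image: pgvector/pgvector:pg16   # Postgres with the pgvector extension
    environment:
      POSTGRES_PASSWORD: cogitator  # example credential — change it
    ports: ["5432:5432"]

  ollama:
    image: ollama/ollama
    ports: ["11434:11434"]
    volumes:
      - ollama:/root/.ollama        # persist downloaded models across restarts

volumes:
  ollama:
```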

Pull a model:

ollama pull llama3.2
