
OpenAI Compatibility

Drop-in OpenAI Assistants API compatibility layer that lets existing OpenAI SDK clients interact with Cogitator agents.

Overview

The @cogitator-ai/openai-compat package exposes Cogitator as an OpenAI-compatible API server. Any application built on the OpenAI SDK can point at your Cogitator instance and work without code changes: assistants, threads, messages, and runs all map to Cogitator concepts under the hood.

Install it alongside the core package:

pnpm add @cogitator-ai/openai-compat @cogitator-ai/core

Quick Start

Spin up a compatibility server in a few lines:

import { Cogitator } from '@cogitator-ai/core';
import { createOpenAIServer } from '@cogitator-ai/openai-compat';

const cog = new Cogitator({
  llm: {
    defaultProvider: 'openai',
    providers: { openai: { apiKey: process.env.OPENAI_API_KEY! } },
  },
});

const server = createOpenAIServer(cog, {
  port: 8080,
  apiKeys: ['sk-my-secret-key'],
  // calculator and webSearch are Cogitator Tool instances defined elsewhere
  tools: [calculator, webSearch],
});

await server.start();

Now use the standard OpenAI SDK against your local server:

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'sk-my-secret-key',
  baseURL: 'http://localhost:8080/v1',
});

const assistant = await openai.beta.assistants.create({
  model: 'gpt-4o',
  name: 'My Assistant',
  instructions: 'You are a helpful assistant with access to tools.',
});

const thread = await openai.beta.threads.create();

await openai.beta.threads.messages.create(thread.id, {
  role: 'user',
  content: 'What is the square root of 144?',
});

const run = await openai.beta.threads.runs.create(thread.id, {
  assistant_id: assistant.id,
});

OpenAIServer

The server is built on Fastify and implements the full Assistants API surface:

const server = new OpenAIServer(cogitator, config);
await server.start();

console.log(server.getBaseUrl());
console.log(server.isRunning());

await server.stop();

OpenAIServerConfig

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `port` | `number` | `8080` | Port to listen on |
| `host` | `string` | `'0.0.0.0'` | Host to bind to |
| `apiKeys` | `string[]` | `[]` | API keys for auth (empty = no auth) |
| `tools` | `Tool[]` | `[]` | Cogitator tools available to all assistants |
| `logging` | `boolean` | `false` | Enable request logging via pino |
| `cors` | `{ origin, methods }` | `{ origin: true }` | CORS configuration |

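Combining these options, a more locked-down setup might look like the following sketch (same createOpenAIServer factory as in the Quick Start; the host and CORS values shown are illustrative, not defaults):

```typescript
const server = createOpenAIServer(cog, {
  port: 8080,
  host: '127.0.0.1',             // bind to loopback only
  apiKeys: ['sk-my-secret-key'], // non-empty list enables auth
  logging: true,                 // request logging via pino
  cors: {
    origin: 'https://app.example.com',
    methods: ['GET', 'POST', 'DELETE'],
  },
});
```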
Supported Endpoints

The server implements these OpenAI API routes:

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | `/v1/models` | List available models |
| POST | `/v1/assistants` | Create assistant |
| GET | `/v1/assistants` | List assistants |
| GET | `/v1/assistants/:id` | Get assistant |
| POST | `/v1/assistants/:id` | Update assistant |
| DELETE | `/v1/assistants/:id` | Delete assistant |
| POST | `/v1/threads` | Create thread |
| GET | `/v1/threads/:id` | Get thread |
| DELETE | `/v1/threads/:id` | Delete thread |
| POST | `/v1/threads/:id/messages` | Add message |
| GET | `/v1/threads/:id/messages` | List messages |
| GET | `/v1/threads/:id/messages/:mid` | Get message |
| POST | `/v1/threads/:id/runs` | Create run |
| GET | `/v1/threads/:id/runs/:rid` | Get run status |
| POST | `/v1/threads/:id/runs/:rid/cancel` | Cancel run |
| POST | `/v1/threads/:id/runs/:rid/submit_tool_outputs` | Submit tool outputs |
| POST | `/v1/files` | Upload file |
| GET | `/v1/files` | List files |
| DELETE | `/v1/files/:id` | Delete file |

OpenAIAdapter

For in-process usage without running a network server, use OpenAIAdapter directly. It exposes the same Assistants API surface as plain programmatic methods:

import { createOpenAIAdapter } from '@cogitator-ai/openai-compat';

const adapter = createOpenAIAdapter(cog, { tools: [calculator] });

const assistant = await adapter.createAssistant({
  model: 'gpt-4o',
  name: 'Math Helper',
  instructions: 'Help with math problems.',
});

const thread = await adapter.createThread();

await adapter.addMessage(thread.id, {
  role: 'user',
  content: 'What is 15! (factorial)?',
});

const run = await adapter.createRun(thread.id, {
  assistant_id: assistant.id,
});

Streaming Runs

Pass stream: true in the run request to get real-time events:

const run = await adapter.createRun(thread.id, {
  assistant_id: assistant.id,
  stream: true,
});

const emitter = adapter.getStreamEmitter(run.id);

emitter?.on('event', (type, data) => {
  switch (type) {
    case 'thread.message.delta':
      process.stdout.write(data.delta.content[0].text.value);
      break;
    case 'thread.run.completed':
      console.log('\nRun finished, tokens:', data.usage);
      break;
  }
});
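Delta events arrive as fragments; if you need the full message text once the run completes, you can accumulate them yourself. A minimal sketch (the makeDeltaAccumulator helper is hypothetical, and the event shape is assumed to match the delta format used in the handler above):

```typescript
// Hypothetical helper: accumulate the text fragments carried by
// 'thread.message.delta' events into one string.
type MessageDelta = { delta: { content: { text: { value: string } }[] } };

function makeDeltaAccumulator() {
  let text = '';
  return {
    push(event: MessageDelta): void {
      for (const part of event.delta.content) {
        text += part.text.value;
      }
    },
    value(): string {
      return text;
    },
  };
}

const acc = makeDeltaAccumulator();
acc.push({ delta: { content: [{ text: { value: 'The square root ' } }] } });
acc.push({ delta: { content: [{ text: { value: 'of 144 is 12.' } }] } });
console.log(acc.value()); // "The square root of 144 is 12."
```

Calling `acc.push` from the 'thread.message.delta' case above (instead of writing to stdout) would give you the assembled message in the 'thread.run.completed' case.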

Thread Storage

By default, threads and messages are stored in memory. For production, plug in Redis or PostgreSQL:

import {
  RedisThreadStorage,
  PostgresThreadStorage,
  ThreadManager,
  createOpenAIAdapter,
} from '@cogitator-ai/openai-compat';

const storage = new RedisThreadStorage({
  url: 'redis://localhost:6379',
  keyPrefix: 'cogitator:',
  ttl: 86400, // expire threads after 24 hours
});
await storage.connect();

// ThreadManager coordinates reads and writes against the chosen backend
const manager = new ThreadManager(storage);

Or use the factory function:

import { createThreadStorage } from '@cogitator-ai/openai-compat';

const storage = createThreadStorage({
  type: 'postgres',
  connectionString: 'postgresql://user:pass@localhost/mydb',
  tableName: 'assistant_threads',
});
await storage.connect?.();

Storage Options

| Backend | Package required | Config |
| --- | --- | --- |
| In-memory | (none) | `{ type: 'memory' }` |
| Redis | `ioredis` | `{ type: 'redis', url, keyPrefix, ttl }` |
| PostgreSQL | `pg` | `{ type: 'postgres', connectionString }` |
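For local development, the in-memory backend needs no extra package and no connection step. A sketch using the same factory function as above (note that stored threads are lost on restart):

```typescript
import { createThreadStorage } from '@cogitator-ai/openai-compat';

// In-memory backend: no extra dependency, no connect() needed,
// data does not survive a process restart
const storage = createThreadStorage({ type: 'memory' });
```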

Full Example: Migration from OpenAI

If you have an existing OpenAI-powered app, migration is straightforward: point baseURL at your Cogitator server (and swap in its API key), and the rest of the code stays the same:

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.COGITATOR_API_KEY,
  baseURL: process.env.COGITATOR_URL + '/v1',
});

const thread = await openai.beta.threads.create();

await openai.beta.threads.messages.create(thread.id, {
  role: 'user',
  content: 'Analyze the sentiment of this review: "Absolutely loved this product!"',
});

const run = await openai.beta.threads.runs.createAndPoll(thread.id, {
  assistant_id: 'asst_abc123',
});

if (run.status === 'completed') {
  const messages = await openai.beta.threads.messages.list(thread.id);
  console.log(messages.data[0].content[0].text.value);
}
