AI Framework Comparison
How Directive compares to popular AI agent frameworks.
Directive's AI adapter doesn't replace your LLM framework – it wraps it with constraint-driven orchestration. This means you keep your existing agent code and gain guardrails, reactive state, time-travel debugging, and declarative patterns on top.
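The wrapping idea can be sketched in a few lines. Here a runner is assumed to be just an async function from input to output (the real `AgentRunner` signature may differ), and a guardrail is layered on top without touching the underlying SDK call:

```ts
// Assumed minimal runner shape -- illustrative, not Directive's actual type.
type AgentRunner = (input: string) => Promise<string>;

// Stands in for any existing SDK call (LangChain, Vercel AI SDK, raw fetch).
const sdkCall: AgentRunner = async (input) => `echo: ${input}`;

// Wrapping layers a guardrail on top; the SDK code is untouched.
function withInputGuardrail(runner: AgentRunner, maxChars: number): AgentRunner {
  return async (input) => {
    if (input.length > maxChars) throw new Error("input guardrail: too long");
    return runner(input);
  };
}

const guarded = withInputGuardrail(sdkCall, 100);
```

The same shape extends to output and tool-call guardrails: each is another wrapper over the same function type.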
At a Glance
| Feature | Directive AI | LangChain/LangGraph | CrewAI | AutoGen | Vercel AI SDK |
|---|---|---|---|---|---|
| Approach | Constraint-driven wrapper | Graph-based chains | Role-based crews | Conversational agents | Streaming-first UI |
| Framework lock-in | None – wraps any runner | LangChain ecosystem | CrewAI agents | AutoGen agents | Vercel ecosystem |
| Reactive state | Directive System backbone | LangGraph state | Shared memory | Chat history | React state |
| Guardrails | Input + output + tool-call | LangSmith eval | – | – | – |
| Execution patterns | 8 built-in (parallel, sequential, supervisor, DAG, race, reflect, debate, goal) | LangGraph nodes/edges | Sequential/parallel | Round-robin chat | – |
| Constraints | Declarative when/require | – | – | – | – |
| Time-travel debug | Built-in snapshots + fork | LangSmith tracing | – | – | – |
| DevTools | Visual debugger (13 views) | LangSmith dashboard | – | AutoGen Studio | – |
| Streaming | Token-level with backpressure | LangChain streaming | – | – | Core strength |
| Memory | 3 strategies + summarizers | LangChain memory | Crew memory | Chat history | – |
| Evals | 10 built-in criteria + LLM judge | LangSmith evals | – | – | – |
| Self-healing | Circuit breaker + auto-reroute | – | – | – | – |
| Goal pattern | Desired-state goal resolution | – | Goal-oriented tasks | – | – |
| Pattern checkpoints | Save/resume all 8 patterns | LangGraph checkpointing | – | – | – |
| TypeScript | First-class, fully typed | Python-first, TS port | Python only | Python-first, TS port | First-class |
| Bundle size | Tree-shakeable, zero-cost debug | Large dependency tree | N/A (Python) | N/A (Python) | Small |
LangChain / LangGraph
LangChain provides a comprehensive toolkit for building LLM applications with chains, agents, and tools. LangGraph adds graph-based orchestration with nodes and edges.
When LangChain is Better
- You need the broadest ecosystem of integrations (100+ LLM providers, vector stores, tools)
- Your team is Python-first
- You want LangSmith's hosted tracing and evaluation platform
When Directive Adds Value
- You want framework-agnostic orchestration that wraps any LLM SDK
- You need declarative constraints that automatically trigger agent runs
- You want reactive state (derivations, scratchpad) that drives UI updates
- You need visual debugging (Timeline, Cost, State) without a hosted service
- You want self-healing with automatic agent rerouting
Using Together
Directive can wrap a LangChain runner. Use LangChain for your LLM calls and tool integrations, Directive for orchestration, guardrails, and state management.
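Concretely, anything with an `invoke` method (LangChain runnables and chains) can be adapted into a plain runner function. A minimal sketch, with a stub standing in for a real `RunnableSequence`:

```ts
// Any object with .invoke can be adapted into a plain async runner function.
interface Invokable<I, O> {
  invoke(input: I): Promise<O>;
}

// Stub chain standing in for a real LangChain RunnableSequence.
const chain: Invokable<string, string> = {
  invoke: async (query) => `answer: ${query}`,
};

// The adapter is one line: the orchestration layer sees only an async function.
const langchainRunner = (input: string) => chain.invoke(input);
```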
CrewAI
CrewAI provides role-based agent teams with tasks, tools, and process flows. Agents have roles, goals, and backstories.
When CrewAI is Better
- You want the simplest mental model for multi-agent systems
- Role-based metaphors (researcher, writer, reviewer) fit your use case
- You're building in Python
When Directive Adds Value
- You need TypeScript-native orchestration
- You want per-agent and orchestrator-level guardrails (input, output, tool-call)
- You need 8 execution patterns beyond sequential and parallel (including goal-directed resolution)
- You want reactive cross-agent derivations and shared scratchpad
- You need breakpoints, checkpoints, and time-travel debugging
Directive's goal pattern goes beyond CrewAI's role-based goals. CrewAI goals are natural-language descriptions that guide agent behavior (`goal="Identify trending topics"`). Directive goals are machine-checkable conditions with dependency resolution, quantitative satisfaction tracking, and progressive relaxation – the runtime knows exactly how close you are to done and can self-correct when progress stalls.
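The contrast can be sketched with a toy goal shape. The field names below are illustrative, not Directive's actual API: the point is that a machine-checkable goal is a dependency list plus a satisfaction score over state, not a prose description.

```ts
// State produced by agent runs (illustrative).
type State = { topics: string[]; draft?: string };

interface Goal {
  requires: string[];                  // facts that must be produced first
  satisfaction: (s: State) => number;  // 0..1 score instead of a prose goal
}

const draftReady: Goal = {
  requires: ["topics"],
  satisfaction: (s) =>
    s.draft ? 1 : Math.min(s.topics.length / 3, 0.5), // partial credit pre-draft
};
```

Because satisfaction is a number, the runtime can detect stalled progress (the score stops rising) and react, which a natural-language goal cannot support.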
AutoGen
Microsoft's AutoGen enables multi-agent conversations where agents chat with each other to solve problems.
When AutoGen is Better
- Conversational multi-agent patterns (round-robin, group chat) are your primary use case
- You want AutoGen Studio's visual builder
- Your team uses Python
When Directive Adds Value
- You need structured execution patterns (DAG, race, reflect, debate) beyond conversation
- You want constraint-driven orchestration with declarative rules
- You need token budgets, circuit breakers, and self-healing
- You want a reactive state backbone that drives UI updates
- You need evals with 10 built-in criteria and LLM-as-judge scoring
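To illustrate the difference from conversational turn-taking, here is a structured "race" pattern in miniature: run several agents concurrently and take the first settled result. Toy code, not Directive's implementation:

```ts
type Runner = (input: string) => Promise<string>;

// Run all agents concurrently; the first settled result wins.
async function racePattern(runners: Runner[], input: string): Promise<string> {
  return Promise.race(runners.map((run) => run(input)));
}

const fast: Runner = async (q) => `fast: ${q}`;
const slow: Runner = (q) =>
  new Promise((resolve) => setTimeout(() => resolve(`slow: ${q}`), 50));
```

A conversational framework would need the agents to negotiate who answers; a structured pattern encodes that decision in the execution topology itself.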
Vercel AI SDK
Vercel AI SDK provides streaming-first UI primitives for React, with excellent DX for chatbots and generative UI.
When Vercel AI SDK is Better
- You're building a chat UI and want the fastest path to streaming responses
- You want React Server Components integration
- Your use case is primarily single-agent chat
When Directive Adds Value
- You need multi-agent orchestration with patterns, constraints, and guardrails
- You want framework-agnostic state (works with React, Vue, Svelte, Solid, Lit)
- You need time-travel debugging and visual DevTools (Timeline, Cost, State)
- You want declarative agent routing based on runtime state
- You need production features: evals, OTEL, self-healing, goal pattern
Using Together
Use Vercel AI SDK for the streaming UI layer and Directive for backend orchestration, guardrails, and state management.
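One way to picture the split: the backend turns an orchestrated token stream into a byte stream the UI hook consumes. The generator below stands in for a Directive streaming runner; none of these names are the real API.

```ts
// Stand-in for a streaming runner: yields tokens for a prompt.
async function* orchestratedTokens(prompt: string): AsyncGenerator<string> {
  for (const token of ["You", " said: ", prompt]) yield token;
}

// Adapt the token stream into a Response-compatible byte stream.
function toByteStream(gen: AsyncGenerator<string>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await gen.next();
      if (done) controller.close();
      else controller.enqueue(encoder.encode(value));
    },
  });
}
```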
Migration Cheat Sheets
Quick code mappings from other frameworks to Directive AI.
From LangChain
| LangChain | Directive |
|---|---|
| `ChatOpenAI({ model })` | `AgentRunner` function wrapping your SDK |
| `RunnableSequence.from([a, b, c])` | `sequential(['a', 'b', 'c'])` pattern |
| `StateGraph` with nodes/edges | `dag({ nodes, edges })` pattern |
| `PromptTemplate` | Agent `instructions` field |
| LangSmith tracing | `debug: true` + DevTools |
| `ConversationBufferMemory` | `createMemory({ strategy: 'sliding-window' })` |
```ts
// LangChain
const chain = RunnableSequence.from([retriever, llm, parser]);
const result = await chain.invoke("query");
```

```ts
// Directive
const orchestrator = createMultiAgentOrchestrator({
  runner,
  agents: {
    retriever: { agent: retriever },
    writer: { agent: writer },
  },
  patterns: { pipeline: sequential(['retriever', 'writer']) },
});
const result = await orchestrator.runPattern('pipeline', 'query');
```
From Vercel AI SDK
| Vercel AI SDK | Directive |
|---|---|
| `streamText()` | `createStreamingRunner()` + `createSSETransport()` |
| `useChat()` | `useFact(system, 'agent::__agent')` + SSE client |
| `tool()` definitions | `toolCall` guardrails for validation |
| `generateText()` | `orchestrator.run(agentId, input)` |
```ts
// Vercel AI SDK
const result = streamText({ model: openai('gpt-4'), prompt: 'Hello' });
```

```ts
// Directive (with SSE transport for streaming)
const streamable = {
  stream: (agentId: string, input: string, opts?: { signal?: AbortSignal }) =>
    streamRunner(agent, input, opts),
};
return transport.toResponse(streamable, 'chat', 'Hello');
```
From CrewAI / AutoGen
| CrewAI / AutoGen | Directive |
|---|---|
| `Agent(role=..., goal=...)` | `AgentLike` with `name` + `instructions` |
| `Task(agent=..., description=...)` | Agent registration + pattern |
| `Crew(agents, tasks, process)` | `createMultiAgentOrchestrator({ agents, patterns })` |
| `crew.kickoff()` | `orchestrator.runPattern('pipeline', input)` |
| Round-robin chat | `debate({ agents, evaluator })` pattern |
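The crew-to-pipeline mapping can be sketched with toy shapes. `AgentLike` here is inferred from the table above and the pipeline runner is a stand-in for `runPattern('pipeline', input)`, not the real orchestrator:

```ts
// Inferred agent shape: name + instructions + a run function (illustrative).
type AgentLike = {
  name: string;
  instructions: string;
  run: (input: string) => Promise<string>;
};

// CrewAI's Agent(role="researcher", goal=...) roughly becomes:
const researcher: AgentLike = {
  name: "researcher",
  instructions: "Find sources.",
  run: async (q) => `sources for ${q}`,
};
const writer: AgentLike = {
  name: "writer",
  instructions: "Write a summary.",
  run: async (notes) => `summary of ${notes}`,
};

// crew.kickoff() roughly becomes running a sequential pipeline pattern:
async function runPipeline(agents: AgentLike[], input: string): Promise<string> {
  let acc = input;
  for (const agent of agents) acc = await agent.run(acc);
  return acc;
}
```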
Directive's Unique Differentiators
Features no other framework provides:
- Constraint-driven orchestration – Declare `when`/`require` rules; the runtime resolves them automatically
- Goal pattern – Declare desired end-state with `produces`/`requires` declarations; the runtime resolves it through dependency-ordered agent runs with satisfaction scoring and progressive relaxation
- Reactive Directive System backbone – Every agent is a namespaced module with reactive facts, derivations, and effects
- Cross-agent derivations – Compute values across all agent states reactively
- Visual DevTools – Timeline, Cost, State, DAG, Breakpoints, Compare
- Self-healing – Circuit breakers with automatic agent rerouting and health scoring
- Pattern checkpoints – Save/resume mid-execution for all 8 pattern types with progress tracking, forking, and diffing
- Framework-agnostic – Wraps any `AgentRunner` function, no LLM SDK lock-in
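The constraint-driven idea can be sketched as a toy resolver: a constraint pairs a `when` predicate over current facts with a `require`d agent to run. The field names mirror the bullet above; the resolver itself is illustrative, not Directive's runtime.

```ts
type Facts = Record<string, unknown>;

interface Constraint {
  when: (facts: Facts) => boolean; // predicate over current state
  require: string;                 // agent id to run when it holds
}

// Return the agents whose constraints currently hold.
function resolve(constraints: Constraint[], facts: Facts): string[] {
  return constraints.filter((c) => c.when(facts)).map((c) => c.require);
}

const rules: Constraint[] = [
  { when: (f) => f["draft"] === undefined, require: "writer" },
  { when: (f) => typeof f["draft"] === "string", require: "reviewer" },
];
```

The declarative payoff is that agent runs are triggered by state transitions rather than imperative call sites: updating a fact re-evaluates the rules.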
Next Steps
- Overview – Full feature map and reading paths
- Running Agents – Get started with Directive AI
- Execution Patterns – See all 8 patterns in action

