
AI & Agents Overview

The AI adapter brings Directive's constraint system to AI agent orchestration. Wrap any LLM framework with safety guardrails, approval workflows, token budgets, and state persistence.


Architecture

Directive doesn't replace your agent framework – it wraps it:

```
Your Agent Framework (OpenAI, Anthropic, LangChain, etc.)
        ↕
Directive AI Adapter (guardrails, constraints, state)
        ↕
Your Application
```
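That wrapping relationship can be sketched as a decorator over an existing runner. The `AgentRunner` interface and the token accounting below are illustrative assumptions for the sketch, not the library's actual types:

```typescript
// Illustrative decorator: orchestration wraps an existing runner rather
// than replacing it. `AgentRunner` and the 4-chars-per-token estimate
// are assumptions made for this sketch only.
interface AgentRunner {
  run(input: string): Promise<string>;
}

function withGuards(runner: AgentRunner, maxTokenBudget: number): AgentRunner {
  let spent = 0;
  return {
    async run(input: string) {
      // Very rough token estimate for the sketch: 1 token ≈ 4 characters.
      spent += Math.ceil(input.length / 4);
      if (spent > maxTokenBudget) throw new Error('token budget exceeded');
      return runner.run(input); // delegate to the wrapped framework
    },
  };
}
```

The wrapped runner keeps its original interface, so the application layer never needs to know whether it is talking to the raw framework or the guarded version.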

Learning Path

Build up from simple to complex:

| Level | Page | What You Learn |
| --- | --- | --- |
| 1 | Running Agents | End-to-end examples and deployment patterns |
| 2 | Resilience & Routing | Retry, fallback, budgets, model selection, structured outputs |
| 3 | Orchestrator | Single-agent runs with guardrails and constraints |
| 4 | Agent Stack | Composable agent pipelines with `.run()` / `.stream()` / `.structured()` |
| 5 | Guardrails | Input/output/tool-call validation, PII detection, moderation |
| 6 | Streaming | Real-time token streaming with backpressure and stream guardrails |
| 7 | Multi-Agent | Parallel, sequential, and supervisor execution patterns |
| 8 | MCP Integration | Model Context Protocol tool servers |
| 9 | SSE Transport | Server-Sent Events streaming for HTTP endpoints |
| 10 | RAG Enricher | Embedding-based retrieval-augmented generation |

Key Concepts

| Concept | Description |
| --- | --- |
| Orchestrator | Wraps an `AgentRunner` with constraints, guardrails, and state tracking |
| Agent Stack | Composable `.run()` / `.stream()` / `.structured()` API |
| Guardrails | Input, output, and tool-call validators that block or transform data |
| Constraints | Declarative rules (e.g., "if confidence < 0.7, escalate to expert") |
| Memory | Sliding window, token-based, or hybrid conversation management |
| Resilience | Intelligent retry, provider fallback chains, and cost budget guards |
| Circuit Breaker | Automatic fault isolation for failing agent calls |
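The "confidence < 0.7" constraint mentioned above can be made concrete using the same `when`/`require` shape that the Quick Example uses. The fact path `facts.agent.confidence` and the `ESCALATE_TO_EXPERT` resolution type are illustrative assumptions:

```typescript
// Sketch of a declarative constraint in the `when`/`require` shape shown
// in the Quick Example. The fact shape and the 'ESCALATE_TO_EXPERT'
// resolution type are assumptions made for illustration.
type Facts = { agent: { confidence: number } };

const escalateLowConfidence = {
  // Predicate over the current facts snapshot
  when: (facts: Facts) => facts.agent.confidence < 0.7,
  // Resolution the orchestrator should apply when the predicate holds
  require: { type: 'ESCALATE_TO_EXPERT' },
};

console.log(escalateLowConfidence.when({ agent: { confidence: 0.5 } })); // true
console.log(escalateLowConfidence.when({ agent: { confidence: 0.9 } })); // false
```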

Quick Example

```typescript
import { createAgentOrchestrator, createPIIGuardrail } from '@directive-run/ai';

const orchestrator = createAgentOrchestrator({
  runner: myAgentRunner,

  // Block any user input that contains personal information
  guardrails: {
    input: [createPIIGuardrail({ action: 'block' })],
  },

  // Pause agents automatically when token spend exceeds the budget
  constraints: {
    budgetLimit: {
      when: (facts) => facts.agent.tokenUsage > 10000,
      require: { type: 'PAUSE_AGENTS' },
    },
  },

  maxTokenBudget: 10000,
});

// Run the agent – guardrails and constraints are applied automatically
const result = await orchestrator.run(myAgent, 'Hello!');
```

Safety & Compliance

Directive provides security guardrails and compliance tooling for AI agent systems. See the Security & Compliance section for full details. Apply multiple layers of protection:

```
User Input
  → Prompt Injection Detection  (block attacks before they reach agents)
  → PII Detection               (redact sensitive data from input)
  → Agent Execution             (safe to process after filtering)
  → Output PII Scan             (catch any data leaks in responses)
  → Audit Trail                 (log every operation for compliance)
```
| Feature | Page | Threat Addressed |
| --- | --- | --- |
| PII Detection | Input/output scanning | Personally identifiable information leaking to/from agents |
| Prompt Injection | Input validation | Jailbreaks, instruction overrides, encoding evasion |
| Audit Trail | Observability | Tamper-evident logging of every system operation |
| GDPR/CCPA | Data governance | Right to erasure, data export, consent tracking, retention |

Recommended combinations by scenario:

| Scenario | Features |
| --- | --- |
| User-facing chatbot | PII detection + prompt injection + audit trail |
| Internal tool | Audit trail + GDPR compliance |
| Healthcare/finance | All four features |
| Development/testing | Audit trail only |
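To illustrate how layered input guardrails compose, here is a minimal, library-free sketch: each validator either blocks the input or passes it along (possibly transformed), in pipeline order. The detection logic is deliberately naive — real PII and injection detection are far richer than a regex:

```typescript
// Illustrative plumbing only, not the library's API: guardrails run in
// order, each either blocking or transforming the input.
type GuardrailResult =
  | { action: 'pass'; text: string }
  | { action: 'block'; reason: string };

type Guardrail = (text: string) => GuardrailResult;

// Naive PII redactor: masks email addresses (real detection is richer).
const redactPII: Guardrail = (text) => ({
  action: 'pass',
  text: text.replace(/\S+@\S+\.\S+/g, '[REDACTED]'),
});

// Naive injection filter: blocks an obvious instruction-override phrase.
const blockInjection: Guardrail = (text) =>
  /ignore (all )?previous instructions/i.test(text)
    ? { action: 'block', reason: 'possible prompt injection' }
    : { action: 'pass', text };

// Run guardrails in order; stop at the first block.
function applyInputGuardrails(text: string, rails: Guardrail[]): GuardrailResult {
  let current = text;
  for (const rail of rails) {
    const result = rail(current);
    if (result.action === 'block') return result;
    current = result.text;
  }
  return { action: 'pass', text: current };
}

const ok = applyInputGuardrails('Contact me at a@b.com', [blockInjection, redactPII]);
// → { action: 'pass', text: 'Contact me at [REDACTED]' }
```

The ordering matters: injection detection runs on the raw input before redaction, matching the pipeline diagram above where attacks are blocked before any further processing.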
