# Benchmarks
Directive's constraint engine is designed for real-time applications. These benchmarks measure every hot path.
## Results
All numbers were measured on an Apple M4 Max (Node.js 22) using Vitest's built-in bench API. Run `pnpm bench` to reproduce.
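For readers who want to sanity-check figures like these outside the suite, here is a minimal ops/sec harness. It is only a sketch: the real benchmarks use Vitest's bench API, which also handles warmup and statistical outliers.

```javascript
// Rough ops/sec measurement using Node's high-resolution timer.
// A stand-in for Vitest's bench API, not the project's actual harness.
function measure(name, fn, iterations = 1_000_000) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return {
    name,
    opsPerSec: Math.round(iterations / (elapsedNs / 1e9)),
    latencyNs: elapsedNs / iterations, // average ns per operation
  };
}

const obj = { value: 42 };
const result = measure("plain object read", () => obj.value);
console.log(result); // absolute numbers vary by machine
```

Note that microbenchmarks like this are sensitive to JIT warmup and dead-code elimination, which is why the published numbers come from Vitest's bench API rather than a raw loop.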
### Fact Mutations
| Operation | ops/sec | Latency |
|---|---|---|
| Single fact read | 18,873,452 | 53ns |
| Single fact set | 6,332,174 | 158ns |
| 100 facts batched | 127,486 | 7.8μs |
| 100 fact reads via proxy | 142,357 | 7.0μs |
### Derivations
| Operation | ops/sec | Latency |
|---|---|---|
| Simple derivation (invalidate + read) | 1,880,103 | 532ns |
| 10 chained derivations | 259,680 | 3.9μs |
| 50 chained derivations | 55,568 | 18μs |
| Invalidate 20 derivations (1 fact change) | 1,490,468 | 671ns |
### Constraint Evaluation + Reconciliation
| Operation | ops/sec | Latency |
|---|---|---|
| No-op reconcile (nothing changed) | 67,379 | 15μs |
| Single constraint + resolver | 56,569 | 18μs |
| 10 constraints, 1 triggers | 28,382 | 35μs |
| Minimal cycle (1 fact → 1 constraint → 1 resolver) | 45,721 | 22μs |
| Medium system (5 facts → 3 constraints → 3 resolvers → 5 derivations) | 18,780 | 53μs |
### Real-World Scenarios
| Scenario | ops/sec | Latency | Description |
|---|---|---|---|
| Traffic light | 27,009 | 37μs | 9 event dispatches → 3 constraint transitions |
| Auth flow | 35,334 | 28μs | login → token → profile (cascading constraints) |
## Head-to-Head Comparison
All libraries were benchmarked side by side on the same machine (M4 Max, Node 22), running identical operations via Vitest bench. Run `pnpm bench -- --grep Comparison` to reproduce.
### Single Read (ops/sec)
| Library | ops/sec | Relative |
|---|---|---|
| Preact Signals | 33,268,404 | 1.00x |
| Zustand | 33,211,323 | 1.00x |
| XState | 32,825,106 | 0.99x |
| Redux Toolkit | 32,732,659 | 0.98x |
| MobX | 23,280,113 | 0.70x |
| Directive | 18,281,792 | 0.55x |
| Jotai | 9,092,914 | 0.27x |
### Single Write (ops/sec)
| Library | ops/sec | Relative |
|---|---|---|
| Preact Signals | 30,237,572 | 1.00x |
| Zustand | 16,601,258 | 0.55x |
| MobX | 9,220,850 | 0.31x |
| Directive | 7,600,237 | 0.25x |
| Redux Toolkit | 2,347,409 | 0.08x |
| Jotai | 2,143,979 | 0.07x |
| XState | 1,247,273 | 0.04x |
### Derived/Computed Value (write + read, ops/sec)
| Library | ops/sec | Relative |
|---|---|---|
| Preact Signals | 22,684,122 | 1.00x |
| Zustand | 13,012,622 | 0.57x |
| MobX | 4,936,002 | 0.22x |
| Redux Toolkit | 2,307,794 | 0.10x |
| Directive | 2,061,300 | 0.09x |
| XState | 1,252,395 | 0.06x |
| Jotai | 830,972 | 0.04x |
### 1,000 Write+Read Cycles (ops/sec)
| Library | ops/sec | Relative |
|---|---|---|
| Preact Signals | 83,112 | 1.00x |
| Zustand | 26,609 | 0.32x |
| Directive | 9,667 | 0.12x |
| MobX | 9,318 | 0.11x |
| Redux Toolkit | 2,460 | 0.03x |
| Jotai | 1,824 | 0.02x |
| XState | 1,222 | 0.01x |
## What Only Directive Can Do
The comparison above measures raw state operations. Directive is slower on pure reads/writes because every access goes through a Proxy that enables auto-tracking. But Directive is the only library that also does:
| Capability | Directive | Others |
|---|---|---|
| Full reconcile (constraints + resolvers) | 18,780/s (53μs) | – |
| Auth flow (3-step cascade) | 35,334/s (28μs) | – |
| Causal debugging (`explain()`) | Yes | No |
| Auto-tracked derivations | Yes | Only MobX, Signals |
| Constraint-driven cache invalidation | Yes | No |
| Time-travel debugging | Yes | Only Redux (via devtools) |
| Zero dependencies | Yes | Only Zustand, Jotai |
The trade-off: Directive's proxy makes raw reads roughly 2x slower than plain object access, but it is what enables auto-tracking, causal debugging, and the constraint engine, capabilities no other library in this comparison provides.
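The overhead and what it buys can be sketched in a few lines. Names like `trackingStack`, `createFacts`, and `derive` here are illustrative stand-ins, not Directive's actual internals:

```javascript
// Sketch of proxy-based auto-tracking. Every read pays one extra function
// call through the `get` trap; in exchange, derivations discover their own
// dependencies without any manual subscription.
const trackingStack = [];

function trackAccess(key) {
  // Fast path: outside a derivation, skip tracking entirely.
  if (trackingStack.length === 0) return;
  trackingStack[trackingStack.length - 1].add(key);
}

function createFacts(initial) {
  return new Proxy(initial, {
    get(target, key) {
      trackAccess(key); // the ~2x read overhead lives here
      return target[key];
    },
  });
}

function derive(fn) {
  const deps = new Set();
  trackingStack.push(deps);
  try {
    return { value: fn(), deps };
  } finally {
    trackingStack.pop();
  }
}

const facts = createFacts({ speed: 3, limit: 5 });
const { value, deps } = derive(() => facts.speed <= facts.limit);
// value === true; deps records that "speed" and "limit" were read
```

Because the dependency set is captured per computation, a later change to `speed` or `limit` can invalidate exactly the derivations that read them, which is what the invalidation rows in the tables above are measuring.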
## Why It's Fast
Three optimizations account for most of the performance:
1. **Module-level `__DEV__` const** – V8's JIT compiler can't optimize proxy traps that contain `process.env.NODE_ENV` checks (they are treated as potentially side-effectful). Hoisting the check into a module-level const lets V8 compile the proxy handler to native code. This alone gave a 3.1x improvement on reads.
2. **`queueMicrotask` instead of `setTimeout(0)`** – The `settle()` function used `setTimeout(0)` to yield before checking settlement. Browsers clamp `setTimeout(0)` to ~1-4ms. Replacing it with `queueMicrotask` resolves within the same microtask queue, eliminating ~1ms of scheduler latency per cycle. This gave a 23-97x improvement on reconciliation.
3. **Fast-path `trackAccess`** – 99% of fact reads happen outside derivation computation. An early return when `trackingStack.length === 0` skips the entire tracking mechanism, saving 3 function calls per read.
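The first optimization can be illustrated with two behaviorally identical proxy handlers. This is a sketch of the pattern, not Directive's source:

```javascript
// Before: the env check sits inside the hot `get` trap. V8 treats the
// process.env lookup as potentially side-effectful, so the trap body
// resists full optimization.
const slowHandler = {
  get(target, key) {
    if (process.env.NODE_ENV !== "production") {
      // dev-only validation would go here
    }
    return target[key];
  },
};

// After: the check is evaluated once at module load and hoisted into a
// const, leaving only a branch on a plain boolean in the hot path.
const __DEV__ = process.env.NODE_ENV !== "production";
const fastHandler = {
  get(target, key) {
    if (__DEV__) {
      // dev-only validation would go here
    }
    return target[key];
  },
};

const slow = new Proxy({ n: 1 }, slowHandler);
const fast = new Proxy({ n: 1 }, fastHandler);
// Both behave identically; only the generated machine code differs.
```

The same hoisting trick applies anywhere an environment check appears on a hot path, not just in proxy traps.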
## Running Benchmarks
```sh
pnpm bench
```
Individual categories:
```sh
pnpm bench -- --grep "Fact Mutations"
pnpm bench -- --grep "Derivations"
pnpm bench -- --grep "Reconciliation"
```
## Next Steps
- Constraints – How the constraint engine works
- Derivations – Auto-tracked computed values
- Plugins – Performance plugin for runtime metrics

