Open Source · Written in Rust

The Agent-Native
Workflow Engine

Stop burning tokens on HTTP calls and JSON parsing. r8r uses deterministic nodes for deterministic work — and only calls the LLM when you actually need reasoning.

Fully agentic: LLM tokens on every step ($$$)
r8r: LLM only where needed ($)

Built for how agents
actually work

Traditional tools were built for humans clicking through visual editors. r8r was built for AI agents that need to create, execute, and orchestrate workflows programmatically.

50ms Cold Start

No JVM warmup, no container spin-up. A single Rust binary that starts instantly and uses ~15MB of RAM.

🤖

Agent Node

Drop AI reasoning into any workflow. Call OpenAI, Anthropic, Ollama, or any OpenAI-compatible endpoint directly.

💰

Token Efficient

Deterministic nodes handle HTTP calls, JSON parsing, and conditional routing. The LLM only runs where you actually need intelligence.

📄

YAML + Git

Workflows are plain YAML files. Version control them, diff them, review them in PRs. No database blobs.
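Because a workflow is just a file, a change shows up as an ordinary diff in review. A hypothetical example (node and field names borrowed from the fraud-detector example on this page; the model swap is purely illustrative):

```diff
 - id: check-fraud
   type: agent
   config:
     provider: openai
-    model: gpt-4o
+    model: gpt-4o-mini
     prompt: "Is this fraudulent? {{ nodes.fetch-order.output }}"
```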

🔒

Durable Execution

Checkpoint, resume, and replay. If a node fails at 3am, r8r retries with backoff — not the entire pipeline.
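A minimal sketch of what per-node durability settings can look like. The `retry` keys match the fraud-detector example on this page; the `fallback` key name is an assumption based on the fallback values the agent node supports:

```yaml
- id: check-fraud
  type: agent
  retry:
    max_attempts: 3        # this node is retried with backoff, not the whole pipeline
    backoff: exponential
  fallback:                # assumed key name: value used if every retry fails
    verdict: legit
    confidence: 0.0
```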

🔌

MCP Built-In

Model Context Protocol server included. AI agents can discover and execute your workflows natively.
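MCP is JSON-RPC 2.0 under the hood: an agent first lists the server's tools, then calls one. How r8r maps workflows onto tool names is an assumption here (the fraud-detector workflow is used for illustration):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
 "params": {"name": "fraud-detector", "arguments": {"input": {"id": "ORD-1234"}}}}
```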

AI reasoning as
a workflow step

The agent node gets the same durability as every other node — retries, checkpoints, fallback values. Multi-provider, structured output, JSON schema validation.

OpenAI Anthropic Ollama Custom Endpoint
View All Node Types →
fraud-detector.yaml
name: fraud-detector
nodes:
  - id: fetch-order
    type: http              # deterministic — no tokens
    config:
      url: "https://api.store.com/orders/{{ input.id }}"
  - id: check-fraud
    type: agent             # AI only where needed
    config:
      provider: openai
      model: gpt-4o
      prompt: "Is this fraudulent? {{ nodes.fetch-order.output }}"
      response_format: json
      json_schema:
        type: object
        required: [verdict, confidence]
        properties:
          verdict: { type: string, enum: [fraud, legit] }
          confidence: { type: number }
    depends_on: [fetch-order]
    retry:
      max_attempts: 3
      backoff: exponential
  - id: route
    type: switch            # deterministic routing
    depends_on: [check-fraud]
    config:
      expression: "nodes.check_fraud.output.verdict"
      cases:
        fraud: [flag-order]
        legit: [process-order]
~15MB
Memory Usage
50ms
Cold Start
400+
Tests
20+
Node Types

r8r vs n8n

Different tools for different users. If your workflows are triggered by AI agents, r8r was built for you.

Feature            | r8r                        | n8n
-------------------|----------------------------|---------------------
Primary User       | AI agents & developers     | Human operators
Interface          | CLI, API, MCP              | Visual drag-and-drop
Language           | Rust                       | TypeScript
Memory (idle)      | ~15 MB                     | ~500 MB+
Startup            | ~50ms                      | Seconds
Storage            | SQLite (embedded)          | PostgreSQL/MySQL
Workflows          | YAML files (git-friendly)  | Database blobs
Agent / LLM Nodes  | Multi-provider             |
MCP Support        | Built-in                   |
Durable Execution  | Checkpoint, resume, replay | Basic retry
Circuit Breakers   | Yes                        |

A node for every job

Mix deterministic nodes with AI reasoning. Each node gets retries, circuit breakers, and checkpoints automatically.

agent
http
transform
switch
if
subworkflow
email
slack
database
s3
filter
sort
split
aggregate
merge
dedupe
crypto
datetime
variables
circuit_breaker
wait
debug
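A hedged sketch of how a few of these compose. Only `http` follows the config shape shown elsewhere on this page; the `filter` and `slack` config keys are assumptions:

```yaml
nodes:
  - id: fetch
    type: http
    config:
      url: "https://api.example.com/items"
  - id: pick
    type: filter            # assumed config shape
    depends_on: [fetch]
    config:
      expression: "item.price > 100"
  - id: notify
    type: slack             # assumed config shape
    depends_on: [pick]
    config:
      channel: "#alerts"
      message: "High-price items: {{ nodes.pick.output }}"
```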

Up and running in seconds

Clone, build, run. No Docker registry, no database migrations, no config files.

~/projects
$ git clone https://github.com/qhkm/r8r.git && cd r8r
$ cargo build --release
$ ./target/release/r8r server --workflows ./examples
Server running on http://localhost:3000
6 workflows loaded · 50ms startup
$ curl -X POST localhost:3000/api/workflows/fraud-detector/execute \
-d '{"input": {"id": "ORD-1234"}}'
{"verdict":"legit","confidence":0.94}
$
"r8r cut our AI agent token costs by 80%. We were paying for GPT-4 to parse JSON and make HTTP calls. Now that's all deterministic, and we only use LLMs for actual reasoning."
🤖
AI-First Engineering Team
Building agentic workflows at scale

Ready to stop burning tokens?

r8r is free, open source, and built for the AI age. Star us on GitHub or start building workflows in under a minute.