Defining Sequences
Sequences are defined as JSON and created via the REST API. A sequence is a tree of blocks: steps, parallel groups, races, routers, loops, try-catch blocks, sub-sequences, and A/B splits. Blocks can be nested recursively.
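As a quick illustration of nesting, this hedged sketch wraps a parallel group inside a try-catch (geo_lookup, credit_check, and log are handler names reused from the examples in this section; the exact fields for each block type are covered below):

```json
{
  "type": "try_catch",
  "id": "safe_enrich",
  "try_block": [
    {
      "type": "parallel",
      "id": "enrich",
      "branches": [
        [ { "type": "step", "id": "geo", "handler": "geo_lookup" } ],
        [ { "type": "step", "id": "credit", "handler": "credit_check" } ]
      ]
    }
  ],
  "catch_block": [
    { "type": "step", "id": "log_failure", "handler": "log",
      "params": { "message": "enrichment failed", "level": "warn" } }
  ]
}
```

The sketch assumes a failed branch fails the whole parallel block, so the error is caught by catch_block.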
About handlers: Every step has a handler field. This can be a built-in handler (like http_request, log, sleep) or a custom handler you register via the REST API or a worker. Examples below use both — built-in handlers are marked with a note where relevant.
Step block
The simplest block — one action. This example calls a URL after a 2-day delay, with retries and rate limiting:
{
"type": "step",
"id": "send_reminder",
"handler": "http_request",
"params": { "url": "https://httpbin.org/post", "method": "POST" },
"delay": {
"duration": "2d",
"business_days_only": true,
"jitter": "2h"
},
"retry": {
"max_attempts": 3,
"initial_backoff": "1s",
"max_backoff": "30s",
"backoff_multiplier": 2.0
},
"timeout": "30s",
"rate_limit_key": "mailbox:outreach@acme.com"
}

Parallel block
Run multiple branches at the same time. All branches must complete before the parallel block finishes:
{
"type": "parallel",
"id": "enrich_data",
"branches": [
[
{ "type": "step", "id": "geo_lookup", "handler": "geo_lookup",
"params": { "ip": "{{context.data.ip}}" } }
],
[
{ "type": "step", "id": "credit_check", "handler": "credit_check",
"params": { "user_id": "{{context.data.user_id}}" } }
]
]
}

Race block
Run multiple branches at the same time, but finish as soon as one branch completes. Useful for timeout patterns or trying multiple providers:
{
"type": "race",
"id": "fastest_provider",
"semantics": "FirstToSucceed",
"branches": [
[
{ "type": "step", "id": "provider_a", "handler": "fetch_data",
"params": { "provider": "a", "query": "{{context.data.query}}" } }
],
[
{ "type": "step", "id": "provider_b", "handler": "fetch_data",
"params": { "provider": "b", "query": "{{context.data.query}}" } }
],
[
{ "type": "step", "id": "timeout", "handler": "noop",
"delay": { "duration": "5s" } }
]
]
}

FirstToSucceed completes when any branch succeeds. FirstToResolve completes when any branch finishes, even if it fails.
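For contrast, a hedged sketch of FirstToResolve: the race below settles on whichever provider finishes first, successful or not (fetch_data is the same handler as above):

```json
{
  "type": "race",
  "id": "first_response",
  "semantics": "FirstToResolve",
  "branches": [
    [ { "type": "step", "id": "provider_a", "handler": "fetch_data",
        "params": { "provider": "a" } } ],
    [ { "type": "step", "id": "provider_b", "handler": "fetch_data",
        "params": { "provider": "b" } } ]
  ]
}
```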
Router block
Branch based on conditions. The engine checks routes in order and runs the first matching one. If nothing matches, the default branch runs:
{
"type": "router",
"id": "check_tier",
"routes": [
{
"condition": "context.data.plan == 'enterprise'",
"blocks": [{ "type": "step", "id": "priority_support", "handler": "log",
"params": { "message": "Enterprise ticket", "level": "info" } }]
},
{
"condition": "context.data.plan == 'pro'",
"blocks": [{ "type": "step", "id": "standard_support", "handler": "log",
"params": { "message": "Pro ticket", "level": "info" } }]
}
],
"default": [{ "type": "step", "id": "basic_support", "handler": "log",
"params": { "message": "Basic ticket", "level": "info" } }]
}

Loop and for_each
Loop repeats while a condition is true. Useful for polling, retry patterns, or waiting for an external state:
{
"type": "loop",
"id": "poll_status",
"condition": "context.data.status != 'ready'",
"max_iterations": 10,
"body": [
{ "type": "step", "id": "check", "handler": "http_request",
"params": { "url": "https://httpbin.org/get?check=1", "method": "GET" } },
{ "type": "step", "id": "wait", "handler": "sleep",
"params": { "duration_ms": 5000 } }
]
}

for_each iterates over a collection in context.data. Each item is bound to a variable (default item) that you can reference in template expressions:
{
"type": "for_each",
"id": "notify_users",
"collection": "context.data.users",
"item_var": "user",
"max_iterations": 100,
"body": [
{ "type": "step", "id": "send", "handler": "http_request",
"params": { "url": "https://httpbin.org/post", "method": "POST",
"body": { "to": "{{user.email}}", "name": "{{user.name}}" } } }
]
}

Try-catch-finally
Catch errors from a step and run recovery logic. The finally block always runs, even if a cancel signal arrives:
{
"type": "try_catch",
"id": "safe_send",
"try_block": [{ "type": "step", "id": "send", "handler": "send_email" }],
"catch_block": [{ "type": "step", "id": "fallback", "handler": "send_sms" }],
"finally_block": [{ "type": "step", "id": "audit", "handler": "audit_log" }]
}

send_email, send_sms, and audit_log are custom handlers you would register for your application. log and noop are built-in.
sub_sequence block
Invoke another sequence as a child instance. The parent transitions to Waiting until the child completes. On completion, the child's context.data is merged into the parent. Useful for composing reusable flows or breaking large campaigns into modular stages.
{
"type": "sub_sequence",
"id": "run_nurture_flow",
"sequence_name": "nurture_flow",
"input": {
"email": "{{context.data.email}}",
"plan": "{{context.data.plan}}"
}
}

The child instance inherits the parent's tenant and namespace. The sequence_name resolves to the latest version of the named sequence at runtime.
A/B split block
Route instances across named variants by weight. The selection is deterministic — a hash of the instance ID picks the variant, so the same instance always takes the same path, even after a crash or restart. The selected variant name is available downstream as {{steps.<id>.output.selected_variant}}.
{
"type": "ab_split",
"id": "subject_line_test",
"variants": [
{
"name": "direct", "weight": 50,
"blocks": [
{ "type": "step", "id": "send_direct", "handler": "send_email",
"params": { "subject": "Quick question about {{context.data.company}}" } }
]
},
{
"name": "curiosity", "weight": 50,
"blocks": [
{ "type": "step", "id": "send_curiosity", "handler": "send_email",
"params": { "subject": "Noticed something about {{context.data.company}}" } }
]
}
]
}
// Read the selected variant downstream
{ "type": "step", "id": "record_experiment", "handler": "log",
"params": { "message": "Variant: {{steps.subject_line_test.output.selected_variant}}", "level": "info" } }

send_window — business hours delivery
Restrict when a step can execute. Steps ready outside the window are deferred until it opens — they are never dropped or failed. Combine with business_days_only on the delay to also skip weekends and holidays.
{
"type": "step", "id": "send_followup",
"handler": "http_request",
"params": { "url": "https://api.example.com/send", "method": "POST",
"body": { "to": "{{context.data.email}}", "template": "followup" } },
"send_window": {
"start_hour": 9,
"end_hour": 17,
"days": [0, 1, 2, 3, 4]
},
"delay": {
"duration": 259200000,
"business_days_only": true,
"jitter": 7200000,
"holidays": ["2026-12-25", "2027-01-01"]
}
}
// days: 0=Mon … 4=Fri. jitter (ms) spreads sends ±2h so they don't all fire at 9:00:00.

fire_at_local — wall-clock scheduling
Schedule a step to fire at a specific local time instead of after a duration. Set fire_at_local to an ISO 8601 NaiveDateTime (e.g. "2026-03-08T02:30:00"). The engine converts this to UTC using the step-level timezone (or the instance timezone as fallback). DST transitions are handled by rolling forward to the next valid local time. When fire_at_local is set, duration is ignored.
{
"type": "step", "id": "morning_digest",
"handler": "send_digest_email",
"params": { "to": "{{context.data.email}}" },
"delay": {
"fire_at_local": "2026-03-08T09:00:00",
"timezone": "America/New_York",
"business_days_only": true,
"holidays": ["2026-12-25"]
}
}
// duration is ignored when fire_at_local is set.
// timezone falls back to the instance timezone if omitted.

deadline — SLA timer
Fire a handler if a step is not resolved within a time limit. The deadline measures from when the step first becomes eligible (after any delay). The step still runs to completion — the deadline fires concurrently as an alert.
{
"type": "step", "id": "retry_charge",
"handler": "charge_card",
"params": { "customer_id": "{{context.data.customer_id}}" },
"delay": { "duration": 432000000 },
"deadline": 604800000,
"on_deadline_breach": {
"handler": "notify_slack",
"params": { "channel": "#billing-alerts", "message": "Dunning SLA breached for {{context.data.customer_id}}" }
}
}

wait_for_input — human input
Pause a step until an external signal arrives. The instance moves to Waiting and consumes no scheduler resources. An optional escalation_handler fires if nobody responds within timeout milliseconds.
{
"type": "step", "id": "request_approval",
"handler": "send_approval_request",
"params": { "approver_email": "{{context.data.approver_email}}", "diff": "{{steps.diff.output}}" },
"wait_for_input": {
"prompt": "Review the deployment diff and approve or reject.",
"timeout": 14400000,
"escalation_handler": "escalate_approval"
}
}
// Resume it via signal — the payload becomes step output
POST /instances/{id}/signals
{ "signal_type": "custom", "payload": { "decision": "approved", "reviewer": "alice@acme.com" } }
// Advanced mode: custom choices with store_as
{
"type": "step", "id": "request_decision",
"handler": "send_approval_request",
"wait_for_input": {
"prompt": "Choose an action for this deployment.",
"choices": [
{ "label": "Approve", "value": "approve" },
{ "label": "Reject", "value": "reject" },
{ "label": "Escalate", "value": "escalate" }
],
"store_as": "decision",
"timeout": 14400000,
"escalation_handler": "escalate_approval"
}
}
// choices: optional array of { label, value } options (defaults to Yes/No if omitted)
// store_as: context.data key where the picked value is stored (defaults to block id)
// Router can match on the stored value: "context.data.decision == 'approve'"

cancellation_scope — protect critical steps
Wrap a block so that cancel signals are ignored while it executes. Use it to prevent a workflow from being interrupted during a non-reversible action (database write, payment, account suspension). Individual steps inside can set cancellable: false as an alternative.
{
"type": "cancellation_scope",
"id": "protected_suspension",
"blocks": [
{ "type": "step", "id": "suspend_account", "handler": "suspend_subscription",
"params": { "customer_id": "{{context.data.customer_id}}" }, "cancellable": false },
{ "type": "step", "id": "notify_suspension", "handler": "send_email",
"params": { "to": "{{context.data.email}}", "template": "account_suspended" } }
]
}

queue_name — task queue routing
Route a step to a specific worker pool by tagging it with a queue name. Workers that poll POST /workers/tasks/poll/queue with a queue_name only receive steps for that queue. Steps without queue_name go to the default queue. Queues are implicit — no setup required.
// Step in the sequence
{ "type": "step", "id": "run_inference",
"handler": "model_predict",
"queue_name": "gpu_workers",
"params": { "model": "gpt-4", "input": "{{context.data.text}}" } }
// GPU worker polls its own queue
POST /workers/tasks/poll/queue
{ "queue_name": "gpu_workers", "handler_names": ["model_predict", "feature_extract"] }
// EU-region worker polls its queue
POST /workers/tasks/poll/queue
{ "queue_name": "eu_workers", "handler_names": ["save_to_db"] }

context_access — per-step visibility control
Restrict which sections of the execution context a handler can see. The engine strips restricted sections before serializing the worker request — they are absent, not redacted. Use this when passing control to third-party plugins or untrusted code.
Each section has its own default. data defaults to true (full access). config defaults to true. audit and runtime default to false.
The data field accepts three forms: a boolean (true / false), a keyword string ("all" / "none"), or a field list that trims context.data to specific top-level keys.
// Full access — defaults if context_access is omitted entirely
{ "context_access": { "data": true, "config": true, "audit": false, "runtime": false } }
// Field-level trim: handler only sees context.data.user_id and context.data.plan
{ "context_access": { "data": { "fields": ["user_id", "plan"] }, "config": false } }
// Third-party plugin: data only, secrets hidden
{
"type": "step",
"id": "third_party_enrichment",
"handler": "grpc://plugin:50051/Enrich.Lookup",
"params": { "company_domain": "{{context.data.domain}}" },
"context_access": {
"data": true,
"config": false,
"audit": false,
"runtime": false
}
}
// ✓ receives: context.data.*
// ✗ stripped: context.config (API keys), context.audit, context.runtime

Execution Context
Every workflow instance carries a single ExecutionContext object with four sections, each with different permission semantics.
data: The main payload. Step handlers can read and write this freely; changes from one step are visible to all subsequent steps.
config: Set at instance creation and never modified by handlers. Use it for API keys, feature flags, or any value that should be immutable during the run.
audit: Engine-written audit trail. Each entry has a timestamp, event name, and a details object. Handlers cannot modify or delete entries.
runtime: Read-only to handlers. Contains: current_step (block ID), attempt (retry count, 0-based), started_at (UTC timestamp), resource_key (concurrency slot if used).
The full context is serialized on every scheduler tick. The default size ceiling is 256 KiB. Writes that would exceed the limit are rejected with a ContextTooLarge error. Override the limit with ORCH8_SCHEDULER__MAX_CONTEXT_BYTES (set to 0 to disable the check entirely).
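For example, the ceiling can be raised in a deployment like this (the variable name comes from the note above; the value is just an illustration):

```shell
# Allow contexts up to 1 MiB instead of the 256 KiB default
export ORCH8_SCHEDULER__MAX_CONTEXT_BYTES=1048576

# Setting it to 0 disables the check entirely
# export ORCH8_SCHEDULER__MAX_CONTEXT_BYTES=0
```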
// Reading context sections in a handler (Node.js SDK)
export async function handler(task) {
const { data, config, runtime } = task.context;
// data: read/write — persist results back
await task.complete({ processed: true, count: data.items.length });
// config: read-only — API keys, immutable settings
const apiKey = config.third_party_api_key;
// runtime: engine metadata
console.log(`attempt ${runtime.attempt}, step ${runtime.current_step}`);
}
// Creating an instance with config (immutable for the run)
POST /sequences/{id}/instances
{
"context": {
"data": { "user_id": "u_123", "items": [1, 2, 3] },
"config": { "third_party_api_key": "sk-...", "dry_run": false }
}
}

Output externalization
Step outputs are stored inline in the instance row by default. For workflows that produce large outputs — LLM responses, batch results, embeddings — this bloats the hot-path rows the scheduler reads on every tick.
Set externalize_output_threshold in [engine] to move outputs above a byte threshold into a separate externalized_state table. The inline row stores only a reference.
# orch8.toml — externalize outputs larger than 64 KiB
[engine]
externalize_output_threshold = 65536 # bytes (0 = disabled, default)
# ORCH8_EXTERNALIZE_OUTPUT_THRESHOLD env var also accepted

When a step output is externalized, the block output object in the API response includes a non-null output_ref field instead of the inline value. Fetch the full output via the ref if needed.
// Block output — not externalized (small output)
{ "block_id": "summarize", "output": { "summary": "..." }, "output_ref": null }
// Block output — externalized (output exceeded threshold)
{ "block_id": "run_inference", "output": null, "output_ref": "ext://abc123" }
// context.data writes are NOT externalized — only block outputs are.
// Keep context.data small; use output_ref for large intermediate results.