Built-in Handlers
The engine ships with eighteen built-in step handlers. Custom handlers are registered in the handler registry.
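For instance, a minimal step definition invoking the log handler might look like the following (a sketch assembled from the step format used in the examples later in this section; exact fields may vary):

```json
{
  "type": "step",
  "id": "hello",
  "handler": "log",
  "params": { "message": "workflow started", "level": "info" }
}
```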
- noop: Does nothing and returns an empty object. Useful as a placeholder or for delay-only steps.
- log: Logs a message at a configured level (debug, info, warn). Reads params.message and params.level.
- sleep: Sleeps for params.duration_ms milliseconds. Returns the duration slept.
- http_request: Makes an HTTP request. Supports GET, POST, PUT. Reads params.url, params.method, and params.body. Retries on 5xx.
- self_modify: Injects new blocks into the running instance. params.blocks is a JSON array of block definitions. params.position controls the insertion point (0 = before the next unexecuted block; omit to append at the end). Use for AI agents that decide their own next steps at runtime.
- llm_call: Calls language model APIs. params.provider selects the provider (openai, anthropic, gemini, deepseek, qwen, perplexity, groq, together, mistral, openrouter). API keys should live in context.config, never in params.
- tool_call: HTTP POST to a tool endpoint. params.url (required) is the endpoint, params.tool_name identifies the tool, params.arguments contains the payload, params.method sets the HTTP method (default POST), params.headers adds extra headers, params.timeout_ms sets the timeout (default 30000).
- human_review: Pauses an agent instance and waits for a human signal before continuing. Equivalent to wait_for_input but named for agent contexts. Uses zero scheduler resources while waiting.
- emit_event: Fires an event trigger to spawn a new workflow instance (same tenant only). Supports deduplication via dedupe_key and dedupe_scope (parent or tenant). params.trigger_slug identifies the target trigger.
- send_signal: Sends a signal to another running instance within the same tenant. params.instance_id targets the instance; params.signal_type and params.payload define the signal.
- query_instance: Reads another instance's context and state within the same tenant. params.instance_id selects the target. Returns the instance's current context data and status.
- fail: Immediately fails the step with a custom error message. Reads params.message.
- set_state: Writes a value to session-scoped state. Reads params.key and params.value.
- get_state: Reads a value from session-scoped state by params.key.
- delete_state: Removes a key from session-scoped state. Reads params.key.
- transform: Transforms context data using expressions. Reads params.expression and params.output_key.
- assert: Evaluates a condition and fails the step if it is false. Reads params.condition and params.message.
- merge_state: Merges an object into context.data. Reads params.data (the object to merge).

Dynamic step injection (self_modify)
A running instance can inject new steps into itself using the self_modify built-in handler. An LLM or orchestration layer returns a list of tool calls as JSON; the handler inserts those as real execution steps. You can also inject blocks externally via the REST API.
// In sequence: LLM decides next steps, self_modify injects them
{
"type": "step", "id": "analyze_task", "handler": "llm_call",
"params": { "prompt": "Return a JSON array of tool calls for: {{context.data.request}}" }
},
{
"type": "step", "id": "inject_tools", "handler": "self_modify",
"params": {
"blocks": "{{steps.analyze_task.output.tool_calls}}",
"position": 0
}
}
// Or inject from outside (e.g., from an AI orchestration layer)
POST /instances/{id}/inject-blocks
{
"blocks": [{ "type": "step", "id": "new_step", "handler": "fetch_data", "params": { "url": "..." } }],
"inject_after": "current"
}

gRPC handlers
Call any gRPC service as a workflow step by prefixing the handler with grpc://. The engine sends step params as JSON and receives JSON back. Retry, timeout, and circuit breaker all apply identically to gRPC handlers.
// Direct gRPC call — no worker polling needed
{ "type": "step", "id": "classify",
"handler": "grpc://classifier:50051/Classifier.Predict",
"params": { "text": "{{context.data.input}}" },
"timeout": 30000,
"retry": { "max_attempts": 2, "initial_backoff": 1000, "max_backoff": 5000, "backoff_multiplier": 2.0 } }Step execution flow
- Check for memoized output: if the step already ran, return the cached result
- Apply the timeout (if configured): fail the step if it exceeds the limit
- Call the handler function with a StepContext (params, context, metadata)
- On success: persist the output to block_outputs with size tracking
- On retryable error: calculate the next backoff (initial_ms × multiplier^attempt, capped at max) and reschedule
- On permanent error: mark the step as failed and propagate up the execution tree
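The flow above can be sketched in pseudocode. This is a hypothetical illustration, not the engine's actual implementation: the function names, the retry-dict shape, and the use of seconds (rather than the engine's millisecond fields) are assumptions, and the timeout step is omitted for brevity.

```python
import time

class RetryableError(Exception):
    """Transient failure; the step may be retried."""

def execute_step(step_id, handler, step_ctx, output_cache, retry=None):
    # 1. Memoized output: if the step already ran, return the cached result.
    if step_id in output_cache:
        return output_cache[step_id]
    retry = retry or {}
    max_attempts = retry.get("max_attempts", 1)
    initial = retry.get("initial_backoff", 1.0)       # seconds, for this sketch
    multiplier = retry.get("backoff_multiplier", 2.0)
    cap = retry.get("max_backoff", 60.0)
    for attempt in range(max_attempts):
        try:
            output = handler(step_ctx)    # 3. call the handler with its context
            output_cache[step_id] = output        # 4. persist the output
            return output
        except RetryableError:
            if attempt + 1 == max_attempts:
                raise                     # retries exhausted: permanent failure
            # 5. backoff grows as initial * multiplier**attempt, capped at max
            time.sleep(min(initial * multiplier ** attempt, cap))
```

A handler that fails transiently is retried with growing delays; once it succeeds, the result is cached so re-executing the same step returns immediately.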
Retry & backoff
Each step can configure independent retry behavior. All fields are optional.
- max_attempts: Total attempts including the initial one (e.g., 5 = 1 initial + 4 retries)
- initial_backoff: Delay before the first retry (e.g., "1s")
- max_backoff: Ceiling on backoff growth (e.g., "60s")
- backoff_multiplier: Growth factor per attempt. Default: 2.0
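Putting the four fields together, a retry block in the duration-string form might look like this (a sketch assembled from the field examples above):

```json
{
  "retry": {
    "max_attempts": 5,
    "initial_backoff": "1s",
    "max_backoff": "60s",
    "backoff_multiplier": 2.0
  }
}
```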
Example progression — initial=1s, multiplier=2.0, max=60s
Attempt 1: immediate
Attempt 2: wait 1s
Attempt 3: wait 2s
Attempt 4: wait 4s
Attempt 5: wait 8s
Attempt 6: wait 16s
Attempt 7: wait 32s
Attempt 8+: wait 60s ← capped at max_backoff
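The progression above follows from the formula in the execution flow: the delay before the nth retry is initial × multiplier^(n-1), capped at max_backoff. A small sketch to reproduce it (the function name is illustrative, not part of the engine):

```python
def backoff_delays(max_attempts, initial=1.0, multiplier=2.0, cap=60.0):
    # Delay (in seconds) before each retry: initial * multiplier**n, capped.
    # The first attempt is immediate, so there are max_attempts - 1 delays.
    return [min(initial * multiplier ** n, cap) for n in range(max_attempts - 1)]

# With initial=1s, multiplier=2.0, cap=60s, eight attempts produce delays of
# 1, 2, 4, 8, 16, 32, and then 60 seconds (64 is capped at max_backoff).
print(backoff_delays(8))
```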