# Budget Guardrails
The Turn VM enforces a per-process execution budget: a gas meter that prevents runaway agents, infinite inference loops, and surprise API bills. When the budget reaches zero, the VM halts the process cleanly and propagates an exit signal to any linked supervisors.
## How the Budget Works
Every Turn process starts with a budget, and each VM instruction consumes one unit. When the budget is exhausted, the process terminates gracefully: it does not crash the VM or corrupt state, and it sends a structured exit signal to all linked processes.
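For instance, gas is charged per instruction whether or not the code ever touches an LLM, so even a tight loop with no inference can exhaust the budget. A minimal sketch (assuming Turn supports a `while` loop; only the constructs shown elsewhere on this page are confirmed):

```
let i = 0;
while i < 100000 {
  // Each iteration burns several gas units: the comparison,
  // the addition, and the assignment are all VM instructions.
  i = i + 1;
}
// If the budget ran out above, execution never reaches this line;
// linked processes receive the exit signal instead.
call("echo", "Finished within budget.");
```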
This is not a token count of your LLM calls. It is a computational gas counter for the Turn VM itself. LLM token limits are enforced by the inference provider separately.
## Bounding Inference with Memory
The idiomatic pattern for budget-aware agents is to checkpoint results with `remember` so that progress is never lost:
```
struct ResearchSummary {
  key_findings: Str,
  confidence_level: Str,
  sources_consulted: Num
};

let topic = "Advances in neuromorphic computing in 2025";
call("echo", "[Research Agent] Starting analysis...");

// Check if we have a prior result to avoid repeating work
let cached = recall("latest_research");
if cached != null {
  call("echo", "Resuming from prior result.");
  call("echo", cached["key_findings"]);
} else {
  let prompt = "Summarize 2025 advances in: " + topic + ". Be concise.";
  let result = infer ResearchSummary { prompt; };
  // Persist the result so restarts don't lose progress
  remember("latest_research", result);
  call("echo", "Confidence: " + result["confidence_level"]);
  call("echo", "Findings: " + result["key_findings"]);
}
```

> **NOTE**
> `remember` persists to `.turn_store/` automatically. If the VM budget is exhausted mid-run, the stored result survives. On the next run, the agent picks up exactly where it left off.
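The same pattern extends to multi-step pipelines: checkpoint after each inference so a budget halt costs at most the current step. A sketch using only the primitives shown above; the `OutlineDraft` struct and the `"step1_outline"` key are illustrative, not part of any built-in API:

```
struct OutlineDraft {
  outline: Str
};

// Step 1: generate an outline, but only if no checkpoint exists.
let cached_outline = recall("step1_outline");
if cached_outline == null {
  let prompt = "Outline the key subtopics of neuromorphic computing.";
  let draft = infer OutlineDraft { prompt; };
  // Checkpoint immediately: if the budget runs out in a later step,
  // this inference never has to be repeated.
  remember("step1_outline", draft);
  call("echo", "Step 1 complete.");
} else {
  call("echo", "Step 1 already done, skipping.");
}
// Later steps follow the same recall-then-infer-then-remember shape.
```

Keying each step's checkpoint separately means reruns skip straight to the first incomplete step.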
## Running It
```sh
export TURN_LLM_PROVIDER=openai
export OPENAI_API_KEY=sk-...
turn run impl/examples/budget_guardrail.tn
```