# Installation

Turn ships as a single static binary. You install it, point it at a Wasm inference provider, set an API key, and run. No runtimes, no virtual environments, no build pipelines.
## 1. Install Turn

The fastest path is the one-line installer:

```sh
curl -fsSL https://turn-lang.dev/install.sh | bash
```

This downloads the `turn` binary appropriate for your OS and architecture, installs it to `~/.turn/bin/`, and adds it to your PATH.
Supported platforms: macOS (ARM + Intel), Linux (x64 + ARM64)
Verify the installation:

```sh
turn --version
# turn v0.5.0-alpha
```

### Building from Source
If you prefer to build from source, you need a Rust toolchain (1.75+):

```sh
git clone https://github.com/ekizito96/Turn.git
cd Turn/impl
cargo build --release

# Add to PATH
export PATH="$PWD/target/release:$PATH"
turn --version
```

## 2. Install an Inference Provider
Turn's `infer` primitive delegates LLM calls to a Wasm driver — a sandboxed `.wasm` file that knows how to format requests for a specific LLM provider. The driver cannot access your filesystem or network directly; Turn executes the HTTP call on its behalf.
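That division of labor can be sketched in Python (illustrative only — the function names and endpoint are made up, not Turn internals): the sandboxed driver produces a pure *description* of the request, and only the host attaches credentials and performs network I/O.

```python
def driver_format_request(prompt, schema):
    """Stands in for the sandboxed Wasm driver: pure data in, pure data out.
    It has no access to the filesystem, the network, or the API key."""
    return {
        "method": "POST",
        "url": "https://api.example.com/v1/infer",  # placeholder endpoint
        "body": {"prompt": prompt, "response_schema": schema},
    }

def host_execute(request, api_key):
    """Stands in for the Turn VM: it injects the credential and would
    perform the actual HTTP call (elided in this sketch)."""
    headers = {"Authorization": f"Bearer {api_key}"}
    return {"request": request, "headers": headers}

plan = host_execute(
    driver_format_request("Say hi", {"type": "object"}),
    api_key="sk-placeholder",
)
```

Note that the API key appears only in the host's half of the exchange; the driver's output never contains it.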
The installer optionally downloads the official OpenAI provider. If you skipped it, install it manually:

```sh
# The provider lives in ~/.turn/providers/
ls ~/.turn/providers/
# turn_provider_openai.wasm

# Tell Turn which provider to use
export TURN_INFER_PROVIDER=~/.turn/providers/turn_provider_openai.wasm
```

Add this to your shell profile (`~/.zshrc` or `~/.bashrc`) to make it permanent.
> **Note:** If `TURN_INFER_PROVIDER` is not set, Turn will attempt to find a provider in `~/.turn/providers/`. If none is found, `infer` calls will fail with a configuration error — but the rest of the language will work fine.
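That lookup order can be sketched as follows (illustrative Python, not Turn's source; the function name is hypothetical):

```python
import glob
import os

def resolve_provider(env):
    """Mimic the lookup order described above: the explicit environment
    variable wins; otherwise fall back to any .wasm file in the default
    directory; otherwise return None, so a later `infer` call would
    surface a configuration error."""
    explicit = env.get("TURN_INFER_PROVIDER")
    if explicit:
        return explicit
    candidates = sorted(glob.glob(os.path.expanduser("~/.turn/providers/*.wasm")))
    return candidates[0] if candidates else None

# An explicit setting always takes precedence over directory scanning:
provider = resolve_provider({"TURN_INFER_PROVIDER": "/tmp/my_provider.wasm"})
```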
## 3. Set Your API Key

Each provider reads credentials from environment variables. For the standard OpenAI provider:

```sh
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o  # optional, defaults to gpt-4o
```

API keys are never embedded in `.wasm` drivers or Turn scripts. The Turn VM substitutes them from the host environment at the moment of the HTTP call, keeping credentials out of sandboxed code entirely.
Other official providers:
| Provider | Required Env Vars |
|---|---|
| Azure OpenAI | AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, AZURE_OPENAI_DEPLOYMENT |
| Azure Anthropic | AZURE_ANTHROPIC_ENDPOINT, AZURE_ANTHROPIC_API_KEY |
See Inference Providers for the full list and configuration details.
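As a concrete example, a hypothetical Azure OpenAI setup might look like this (the provider filename and all values below are placeholders; see the Inference Providers page for the real names):

```sh
# Hypothetical Azure OpenAI configuration: all values are placeholders
export TURN_INFER_PROVIDER=~/.turn/providers/turn_provider_azure_openai.wasm  # assumed filename
export AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com
export AZURE_OPENAI_API_KEY=your-key-here
export AZURE_OPENAI_DEPLOYMENT=my-deployment
```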
## 4. Write and Run Your First Program

Create a file called `hello.tn`:

```
struct Greeting {
    message: Str,
    language: Str
};

let result = infer Greeting {
    "Generate a warm greeting for a developer starting a new language.";
};

call("echo", result.message);
call("echo", "Detected language: " + result.language);

return result;
```

Run it:
```sh
turn run hello.tn
```

You should see something like:

```
Welcome, brave developer! You're about to write something that actually thinks.
Detected language: English
```

What just happened:

- Turn compiled `hello.tn` to bytecode and began executing
- When it hit `infer Greeting { ... }`, it generated a JSON Schema from the `Greeting` struct
- It passed your prompt + the schema to the Wasm provider via a sandboxed call
- The Wasm provider formatted an OpenAI API request; Turn executed it
- The response was validated against the `Greeting` schema and bound to `result`
- You accessed `result.message` and `result.language` as typed fields — no parsing needed
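The schema-generation and validation steps above can be approximated in Python (a sketch of the idea, not Turn's implementation; Turn's actual type-to-schema mapping may differ):

```python
import json

# Fields of the Greeting struct, as the compiler might see them.
GREETING_FIELDS = {"message": "string", "language": "string"}

def schema_from_struct(fields):
    """Derive a JSON Schema object from a struct's field table."""
    return {
        "type": "object",
        "properties": {name: {"type": ty} for name, ty in fields.items()},
        "required": list(fields),
    }

def validate(reply_json, schema):
    """Parse the model's reply and check every required field is present."""
    data = json.loads(reply_json)
    missing = [f for f in schema["required"] if f not in data]
    if missing:
        raise ValueError(f"reply missing fields: {missing}")
    return data

reply = '{"message": "Welcome, brave developer!", "language": "English"}'
result = validate(reply, schema_from_struct(GREETING_FIELDS))
```

A reply that omits a declared field fails validation instead of producing an untyped, partially-filled value — which is the property the `infer` primitive guarantees.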
## 5. Editor Support (VS Code)

Turn ships a VS Code extension with syntax highlighting, diagnostics, hover documentation, and go-to-definition — backed by a built-in Language Server Protocol implementation.

```sh
# Start the LSP manually (VS Code extension does this automatically)
turn lsp
```

The extension is in the Turn repository under `editors/vscode/`. You can install it directly from the `.vsix` file or install from the VS Code Marketplace once published.
## CLI Reference

```
turn run <file.tn>        Run a Turn program
turn run --id <agent-id>  Run with a named persistent agent identity
turn serve [--port 3000]  Start the HTTP API server
turn lsp                  Start the Language Server (stdio)
turn --version            Print version
```

## Next Steps

- Hello Turn — Step-by-step walkthrough of your first program
- The `infer` Primitive — Cognitive Type Safety and how inference works
- Inference Providers — How the Wasm sandbox works and which providers are available