podflareRunCode() returns a small adapter object with a description, an execute function, and a close method. You wrap it in Vercel AI SDK’s tool() helper yourself, which keeps Podflare’s package free of ai and zod as peer dependencies while giving you full control over the tool’s parameter schema at the call site.
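The adapter described above has roughly this shape — a hedged TypeScript sketch inferred from the prose, not a copy of Podflare's published type declarations:

```typescript
// Approximate shape of the object returned by podflareRunCode().
// Field types here are assumptions based on the description above.
interface RunCodeAdapter {
  /** Tool description to pass into the AI SDK's tool() helper. */
  description: string;
  /** Runs a snippet in the sandbox and resolves with its output. */
  execute: (args: { code: string; language?: "python" | "bash" }) => Promise<string>;
  /** Destroys the underlying sandbox VM. */
  close: () => Promise<void>;
}
```

Because the adapter carries its own description and execute function, the only piece you supply at the call site is the Zod parameter schema.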
Install
npm install podflare ai zod
Quickstart with generateText
Create the adapter
Call podflareRunCode() to get the adapter. No sandbox is started yet.

import { podflareRunCode } from "podflare/ai-sdk";
const pf = podflareRunCode();
Wrap it in a tool
Use Vercel AI SDK’s tool() and Zod to declare the parameter schema.
Wire pf.description and pf.execute directly into the tool definition.

import { tool } from "ai";
import { z } from "zod";
const runCode = tool({
  description: pf.description,
  parameters: z.object({
    code: z.string().describe("Python (default) or bash source"),
    language: z.enum(["python", "bash"]).optional(),
  }),
  execute: pf.execute,
});
Pass the tool to generateText
Add runCode to the tools map and call generateText as usual.

import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
const { text } = await generateText({
  model: anthropic("claude-opus-4-7"),
  tools: { runCode },
  prompt: "Load /data/sales.csv and tell me the top 5 products by revenue.",
});
console.log(text);
Close the sandbox
Call pf.close() when you’re done to destroy the underlying VM.
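Since close() should run even when generation throws, try/finally is a safe pattern. A minimal sketch, with a stub standing in for the real adapter:

```typescript
// Stub in place of the object returned by podflareRunCode(), so this
// sketch runs on its own; the real adapter exposes close() the same way.
let closed = false;
const pf = { close: async () => { closed = true; } };

async function main() {
  try {
    // generateText(...) / streamText(...) calls using the runCode tool
    // would go here; a thrown error still reaches the finally block.
  } finally {
    await pf.close(); // VM is destroyed on both success and failure
  }
}

main().catch(console.error);
```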
Full example
import { generateText, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import { podflareRunCode } from "podflare/ai-sdk";
const pf = podflareRunCode();
const runCode = tool({
  description: pf.description,
  parameters: z.object({
    code: z.string().describe("Python (default) or bash source"),
    language: z.enum(["python", "bash"]).optional(),
  }),
  execute: pf.execute,
});

const { text } = await generateText({
  model: anthropic("claude-opus-4-7"),
  tools: { runCode },
  prompt: "Load /data/sales.csv and tell me the top 5 products by revenue.",
});
console.log(text);
await pf.close();
Streaming with streamText
The same runCode tool works with streamText without any changes:
import { streamText } from "ai";
const stream = await streamText({
  model: anthropic("claude-opus-4-7"),
  tools: { runCode },
  prompt: "Analyse /data/sales.csv and summarise the monthly trends.",
});

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
await pf.close();
Options
Pass options to podflareRunCode() to override the host or select a sandbox template:
const pf = podflareRunCode({
  host: "https://api.podflare.dev", // default: PODFLARE_HOST env var
  template: "python-datasci", // default: primary pool
});
If host is omitted, the adapter reads the PODFLARE_HOST environment
variable. For local development it falls back to http://127.0.0.1:7070.
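For example, to point a deployment at a remote host instead of the local fallback:

```shell
# Set before starting your app; the adapter reads this when no host
# option is passed (and falls back to http://127.0.0.1:7070 otherwise).
export PODFLARE_HOST="https://api.podflare.dev"
```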
Model-agnostic
The adapter does not depend on any specific model provider. Pass any Vercel AI SDK-compatible model — OpenAI, Anthropic, Google, Mistral, or a local model — and the tool shape works identically.
Because you declare the Zod schema yourself, you can extend parameters
with project-specific fields (for example, a working_directory string)
and handle them inside a thin wrapper around pf.execute.
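Such a wrapper can be sketched as follows. The working_directory field and the prefixing strategy are illustrative assumptions, not part of Podflare's API:

```typescript
// Thin wrapper: strip the extra working_directory field, prepend a
// chdir to the snippet, then delegate to the underlying execute.
type Lang = "python" | "bash";
type ExecuteFn = (args: { code: string; language?: Lang }) => Promise<string>;

function withWorkingDirectory(execute: ExecuteFn) {
  return async (args: { code: string; language?: Lang; working_directory?: string }) => {
    const { working_directory, code, language } = args;
    const prefixed = !working_directory
      ? code
      : language === "bash"
        ? `cd ${working_directory}\n${code}`
        : `import os\nos.chdir(${JSON.stringify(working_directory)})\n${code}`;
    return execute({ code: prefixed, language });
  };
}
```

Declare working_directory in your Zod schema alongside code and language, then pass withWorkingDirectory(pf.execute) as the tool's execute instead of pf.execute.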