Use `streamText` when you want partial tokens as they arrive, rather than waiting for the complete response.
```ts
import { aiStats } from "@ai-stats/ai-sdk-provider";
import { streamText } from "ai";

const result = streamText({
  model: aiStats("openai/gpt-5-nano"),
  prompt: "Explain SSE streaming in plain language.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```
Streaming with messages
```ts
import { aiStats } from "@ai-stats/ai-sdk-provider";
import { streamText } from "ai";

const result = streamText({
  model: aiStats("openai/gpt-5-nano"),
  messages: [
    { role: "user", content: "Write a five-step rollout plan for a gateway migration." },
  ],
});

let text = "";
for await (const chunk of result.textStream) {
  text += chunk;
}
console.log(text);
```
Production tips
- Prefer streaming for long responses and interactive UIs.
- Handle cancellation and timeouts in your runtime so abandoned streams don't keep connections open.
- Keep model IDs fresh via `/v1/gateway/models` to avoid stale aliases.
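The cancellation tip can be sketched with an `AbortController`. The fake generator below stands in for `result.textStream` so the sketch runs without network access; in real code you would instead pass `controller.signal` as the `abortSignal` option of `streamText`. The chunk contents and the 30-second budget are illustrative assumptions.

```ts
// Stand-in for result.textStream: yields chunks until the signal aborts.
async function* fakeTextStream(signal: AbortSignal): AsyncGenerator<string> {
  const chunks = ["Streaming ", "lets you ", "render ", "tokens ", "early."];
  for (const chunk of chunks) {
    if (signal.aborted) return; // stop producing once the consumer gives up
    yield chunk;
  }
}

// Accumulate the stream; stops early if the signal fires mid-stream.
async function readWithDeadline(signal: AbortSignal): Promise<string> {
  let text = "";
  for await (const chunk of fakeTextStream(signal)) {
    text += chunk;
  }
  return text;
}

async function main() {
  const controller = new AbortController();
  // Hypothetical 30s budget: abort so an abandoned stream doesn't run forever.
  const timer = setTimeout(() => controller.abort(), 30_000);
  try {
    console.log(await readWithDeadline(controller.signal));
  } finally {
    clearTimeout(timer); // always clear the timer, even on error
  }
}
main();
```

With a real `streamText` call, aborting the signal also tears down the underlying HTTP connection, so the provider stops billing for tokens you will never render.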
Last modified on March 16, 2026