Use `generateText` with `aiStats(modelId)` for one-shot completions.

```ts
import { aiStats } from "@ai-stats/ai-sdk-provider";
import { generateText } from "ai";

const result = await generateText({
  model: aiStats("openai/gpt-5-nano"),
  prompt: "Write a concise summary of AI Stats Gateway.",
});

console.log(result.text);
```
## With chat history

```ts
import { aiStats } from "@ai-stats/ai-sdk-provider";
import { generateText } from "ai";

const result = await generateText({
  model: aiStats("openai/gpt-5-nano"),
  messages: [
    { role: "system", content: "You are a terse assistant." },
    { role: "user", content: "Give me three bullet points on retrieval augmentation." },
  ],
});

console.log(result.text);
```
## Per-model settings

```ts
import { aiStats } from "@ai-stats/ai-sdk-provider";
import { generateText } from "ai";

const result = await generateText({
  model: aiStats("openai/gpt-5-nano", {
    temperature: 0.2,
    topP: 0.9,
    maxTokens: 300,
  }),
  prompt: "Draft a release note for an API update.",
});

console.log(result.text);
```
## Prompt vs messages

- Use `prompt` for quick single-turn requests.
- Use `messages` for multi-turn context and role control.
- Keep model IDs dynamic by querying `/v1/gateway/models` (see Model Discovery).
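Keeping model IDs dynamic can be as simple as fetching the model list at startup and passing one of its IDs to `aiStats(...)`. The response shape below (`data` array of objects with an `id` field) is an assumption for illustration; check the Model Discovery docs for the actual schema.

```typescript
// Hypothetical response shape for GET /v1/gateway/models.
// Field names here are assumptions -- verify against Model Discovery.
type GatewayModel = { id: string };
type ModelsResponse = { data: GatewayModel[] };

// Extract the model IDs from a models-list payload.
function listModelIds(payload: ModelsResponse): string[] {
  return payload.data.map((m) => m.id);
}

// Sketch of startup usage (gateway host elided on purpose):
//   const res = await fetch("https://<gateway-host>/v1/gateway/models");
//   const ids = listModelIds(await res.json());
//   const model = aiStats(ids[0]);

console.log(listModelIds({ data: [{ id: "openai/gpt-5-nano" }] }));
```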
Last modified on March 16, 2026