Documentation Index
Fetch the complete documentation index at: https://docs.ai-stats.phaseo.app/llms.txt
Use this file to discover all available pages before exploring further.
Method: client.generateText() (stream with client.streamText()).
Example
```ts
const chat = await client.generateText({
  model: "openai/gpt-4o-mini",
  messages: [
    { role: "system", content: "You are helpful." },
    { role: "user", content: "Write a limerick about lighthouses." },
  ],
  temperature: 0.5,
});
```
Streaming
```ts
for await (const chunk of client.streamText({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Stream a story" }],
})) {
  console.log(chunk);
}
```
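A common pattern is to accumulate the streamed delta fragments into one full reply. A minimal sketch, where `chunks` stands in for what `client.streamText()` would yield and the assumed shape mirrors the `chat.completion.chunk` frames shown under Returns:

```typescript
// Concatenate delta.content fragments from streamed chunks.
type Chunk = { choices: { delta: { content?: string } }[] };

function collectText(chunks: Iterable<Chunk>): string {
  let text = "";
  for (const chunk of chunks) {
    // The final frame carries an empty delta, so default to "".
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}

// Simulated stream matching the SSE frames documented below.
const demo: Chunk[] = [
  { choices: [{ delta: { content: "Hello" } }] },
  { choices: [{ delta: { content: " world" } }] },
  { choices: [{ delta: {} }] },
];
// collectText(demo) → "Hello world"
```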
Key parameters
- model (required): Target model id.
- messages (required): Ordered messages with roles system|user|assistant|tool; content as strings or parts.
- Sampling: temperature (0–2), top_p (0–1), top_k (>= 1), seed (optional int).
- Length/penalties: max_output_tokens (int), presence_penalty and frequency_penalty (-2 to 2), stop (string | string[]).
- Tools: tools (definitions), tool_choice (auto | none | specific tool), max_tool_calls (int), parallel_tool_calls (bool).
- Logprobs: logprobs (bool), top_logprobs (0–20).
- Output: response_format (json | text), metadata (object passthrough), stream (bool), service_tier.
- Gateway extras: usage (bool to request usage), meta (bool to include the meta block).
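Put together, a request touching several of these parameter groups might look like the sketch below. The `get_weather` tool definition and its JSON-schema parameters are illustrative assumptions (OpenAI-style function tools), not something this page specifies:

```typescript
// Illustrative request body combining sampling, length, and tool parameters.
// The get_weather tool is a made-up example.
const request = {
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "What's the weather in Oslo?" }],
  temperature: 0.7,
  top_p: 0.9,
  max_output_tokens: 256,
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Look up current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
  tool_choice: "auto",
  parallel_tool_calls: false,
  usage: true,
};
```

With `tool_choice: "auto"` the model decides whether to call the tool; pass `"none"` to disable calls, or name a specific tool to force it.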
Returns
ChatCompletionsResponse
```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-3.5-turbo-0125",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello there, how may I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
```
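Reading the non-streaming response is then a matter of indexing into `choices`. A minimal sketch, assuming only the subset of the shape shown above:

```typescript
// Subset of ChatCompletionsResponse used below.
type ChatResponse = {
  choices: { message: { role: string; content: string }; finish_reason: string }[];
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
};

// Stands in for the value returned by client.generateText().
const response: ChatResponse = {
  choices: [
    {
      message: { role: "assistant", content: "Hello there, how may I assist you today?" },
      finish_reason: "stop",
    },
  ],
  usage: { prompt_tokens: 9, completion_tokens: 12, total_tokens: 21 },
};

const reply = response.choices[0].message.content;
const totalTokens = response.usage?.total_tokens ?? 0;
// reply → "Hello there, how may I assist you today?", totalTokens → 21
```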
Or SSE frames when stream: true:
```
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-3.5-turbo-0125","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-3.5-turbo-0125","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-3.5-turbo-0125","choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":9,"completion_tokens":12,"total_tokens":21}}

data: [DONE]
```
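If you consume the stream over raw HTTP instead of `client.streamText()`, you must split the SSE frames yourself. A minimal parser sketch for `data:` lines like the ones above (network buffering and multi-line events omitted):

```typescript
// Parse "data: ..." SSE lines into chunk objects, stopping at [DONE].
function parseSseFrames(raw: string): any[] {
  const chunks: any[] = [];
  for (const line of raw.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip comments/blank keep-alives
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") break; // terminal sentinel, not JSON
    chunks.push(JSON.parse(payload));
  }
  return chunks;
}

const sample = [
  'data: {"choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}',
  "data: [DONE]",
].join("\n");
// parseSseFrames(sample) yields one chunk whose delta.content is "Hello"
```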