Documentation Index
Fetch the complete documentation index at: https://docs.ai-stats.phaseo.app/llms.txt
Use this file to discover all available pages before exploring further.
Method: client.generateResponse() (stream with client.streamResponse()).
Example
const resp = await client.generateResponse({
  model: "openai/gpt-4.1",
  input: [{ role: "user", content: [{ type: "input_text", text: "Summarise this text" }] }],
  temperature: 0.7,
  stream: false,
});
Streaming
for await (const line of client.streamResponse({
  model: "openai/gpt-4.1",
  input: [{ role: "user", content: [{ type: "input_text", text: "Stream this" }] }],
  stream: true,
})) {
  console.log(line);
}
Key parameters
model (required): Target model id.
input (required): Ordered array of input items (messages, tool calls, etc.).
temperature (0-2): Higher = more random.
top_p (0-1) / top_k (>=1): Nucleus / top-k sampling controls.
max_output_tokens (int): Hard cap on tokens generated per response.
- Tools:
tools (definitions), tool_choice (auto/none/specific), max_tool_calls (int), parallel_tool_calls (bool).
- Logprobs:
logprobs (bool), top_logprobs (0-20) to return per-token logprobs.
- Output:
response_format (json/text), service_tier, store (bool), stream (bool).
- Metadata:
metadata (object) for passthrough, reasoning (object) for effort hints.
- Gateway extras:
usage (bool to request usage), meta (bool to include meta block).
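The numeric bounds above can be sanity-checked client-side before sending a request. A minimal sketch, assuming the documented ranges (the validateParams helper and its interface are illustrative, not part of the gateway client):

```typescript
// Illustrative client-side range checks mirroring the documented bounds:
// temperature 0-2, top_p 0-1, top_k >= 1, top_logprobs 0-20.
interface SamplingParams {
  temperature?: number;
  top_p?: number;
  top_k?: number;
  top_logprobs?: number;
}

function validateParams(p: SamplingParams): string[] {
  const errors: string[] = [];
  if (p.temperature !== undefined && (p.temperature < 0 || p.temperature > 2))
    errors.push("temperature must be in [0, 2]");
  if (p.top_p !== undefined && (p.top_p < 0 || p.top_p > 1))
    errors.push("top_p must be in [0, 1]");
  if (p.top_k !== undefined && p.top_k < 1)
    errors.push("top_k must be >= 1");
  if (p.top_logprobs !== undefined && (p.top_logprobs < 0 || p.top_logprobs > 20))
    errors.push("top_logprobs must be in [0, 20]");
  return errors;
}
```

For example, validateParams({ temperature: 2.5 }) returns one error, while an in-range request returns an empty array.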
Returns
ResponsesResponse
{
"id": "resp_123",
"object": "response",
"created_at": 1677652288,
"status": "completed",
"model": "gpt-4.1",
"output": [
{
"type": "message",
"id": "msg_123",
"status": "completed",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "Hello there, how may I assist you today?"
}
]
}
],
"usage": {
"input_tokens": 9,
"output_tokens": 12,
"total_tokens": 21
}
}
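The assistant text can be pulled out of a non-streaming response by walking the output items. A minimal sketch against the shape shown above (the helper name and the narrowed interfaces are illustrative):

```typescript
// Concatenate all output_text parts from a ResponsesResponse-shaped object.
interface ContentPart { type: string; text?: string }
interface OutputItem { type: string; content?: ContentPart[] }
interface ResponsesResponseShape { output: OutputItem[] }

function outputText(resp: ResponsesResponseShape): string {
  return resp.output
    .filter((item) => item.type === "message")   // ignore tool calls etc.
    .flatMap((item) => item.content ?? [])
    .filter((part) => part.type === "output_text")
    .map((part) => part.text ?? "")
    .join("");
}
```

Applied to the example above, this yields the single assistant sentence.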
Or newline-delimited SSE frames when stream: true:
data: {"id":"resp_123","object":"response","created_at":1677652288,"status":"in_progress","model":"gpt-4.1","output":[{"type":"message","id":"msg_123","status":"in_progress","role":"assistant","content":[]}]}
data: {"id":"resp_123","object":"response","created_at":1677652288,"status":"in_progress","model":"gpt-4.1","output":[{"type":"message","id":"msg_123","status":"in_progress","role":"assistant","content":[{"type":"output_text","text":"Hello"}]}]}
data: {"id":"resp_123","object":"response","created_at":1677652288,"status":"completed","model":"gpt-4.1","output":[{"type":"message","id":"msg_123","status":"completed","role":"assistant","content":[{"type":"output_text","text":"Hello there, how may I assist you today?"}]}],"usage":{"input_tokens":9,"output_tokens":12,"total_tokens":21}}
data: [DONE]
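Each frame is a data: line carrying a full response snapshot, and the stream ends with the data: [DONE] sentinel. A minimal parser sketch for raw SSE text, assuming the frame shapes shown above (the function name is illustrative):

```typescript
// Parse newline-delimited SSE "data:" frames into response snapshots,
// stopping at the [DONE] sentinel.
function parseFrames(raw: string): any[] {
  const frames: any[] = [];
  for (const line of raw.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;       // skip blank lines
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") break;                  // end-of-stream sentinel
    frames.push(JSON.parse(payload));
  }
  return frames;
}
```

The last frame before [DONE] is the completed snapshot and is the only one carrying the usage block.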