Method: ResponsesApi#create_response

Example

require 'ai_stats_sdk'

config = AIStatsSdk::Configuration.default
config.access_token = 'your_api_key'
api_client = AIStatsSdk::ApiClient.new(config)
responses_api = AIStatsSdk::ResponsesApi.new(api_client)

request = AIStatsSdk::ResponsesRequest.new(
  model: 'openai/gpt-4.1',
  input: [{ role: 'user', content: [{ type: 'input_text', text: 'Summarise this text' }] }],
  temperature: 0.7
)

resp = responses_api.create_response(request)
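For reference, the request above is assumed to serialise to roughly the following JSON body. This is a sketch using a plain hash; the exact wire format (field names, nesting) is defined by the SDK's serializers, not shown here.

```ruby
require 'json'

# Plain-hash sketch of the JSON body presumed to correspond to the
# ResponsesRequest example above. Field names mirror the documented
# parameters; the SDK's actual serialization may differ.
payload = {
  model: 'openai/gpt-4.1',
  input: [
    {
      role: 'user',
      content: [{ type: 'input_text', text: 'Summarise this text' }]
    }
  ],
  temperature: 0.7
}

body = JSON.generate(payload)
```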

Key parameters

  • model (required): Target model id.
  • input (required): Ordered array of input items (messages, tool calls, etc.).
  • temperature (0–2): Higher = more random.
  • top_p (0–1) / top_k (>=1): Nucleus / k-best sampling controls.
  • max_output_tokens (int): Hard cap on tokens generated per response; max_output_tokens_per_message to cap each message item.
  • Tools: tools (definitions), tool_choice (auto/none/specific), max_tool_calls (int), parallel_tool_calls (bool).
  • Logprobs: logprobs (bool), top_logprobs (0–20) to return per-token logprobs.
  • Output: response_format (json/text), service_tier, store (bool), stream (bool).
  • Metadata: metadata (object) for passthrough, reasoning (object) for effort hints.
  • Gateway extras: usage (bool to request usage), meta (bool to include meta block).
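The numeric ranges above can be checked client-side before sending a request. The helper below is a hypothetical illustration of those documented ranges (temperature 0–2, top_p 0–1, top_k >= 1, top_logprobs 0–20); it is not part of the SDK.

```ruby
# Hypothetical range check for the documented sampling parameters.
# Raises ArgumentError on out-of-range values; returns true otherwise.
def validate_sampling!(temperature: nil, top_p: nil, top_k: nil, top_logprobs: nil)
  if temperature && !(0..2).cover?(temperature)
    raise ArgumentError, 'temperature must be in 0..2'
  end
  if top_p && !(0..1).cover?(top_p)
    raise ArgumentError, 'top_p must be in 0..1'
  end
  if top_k && top_k < 1
    raise ArgumentError, 'top_k must be >= 1'
  end
  if top_logprobs && !(0..20).cover?(top_logprobs)
    raise ArgumentError, 'top_logprobs must be in 0..20'
  end
  true
end
```

Usage: `validate_sampling!(temperature: 0.7, top_p: 0.9)` returns true, while `validate_sampling!(temperature: 3.0)` raises ArgumentError.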

Returns

ResponsesResponse