Method: client.GenerateResponse()

Example

// Create a client authenticated with your API key.
var client = new Client(apiKey);

// Build a request containing a single user message.
var request = new ResponsesRequest(
    model: "openai/gpt-4.1",
    input: new List<ResponsesRequestInputItem> {
        new ResponsesRequestInputItem(new ResponsesRequestInputItemUser(
            new List<ResponsesRequestInputItemUserContentItem> {
                new ResponsesRequestInputItemUserContentItem(new ResponsesRequestInputItemUserContentItemInputText("Summarise this text"))
            }
        ))
    },
    temperature: 0.7
);

// Send the request and wait for the model's response.
var response = client.GenerateResponse(request);

Key parameters

  • model (required): Identifier of the target model, e.g. "openai/gpt-4.1".
  • input (required): Ordered array of input items (messages, tool calls, etc.).
  • temperature (0–2): Sampling temperature; higher values produce more random output.
  • top_p (0–1) / top_k (>=1): Nucleus and top-k sampling controls.
  • max_output_tokens (int): Hard cap on tokens generated per response; use max_output_tokens_per_message to cap each message item.
  • Tools: tools (definitions), tool_choice (auto/none/specific), max_tool_calls (int), parallel_tool_calls (bool).
  • Logprobs: logprobs (bool) and top_logprobs (0–20) to return per-token log probabilities.
  • Output: response_format (json/text), service_tier, store (bool), stream (bool).
  • Metadata: metadata (object) for passthrough data, reasoning (object) for effort hints.
  • Gateway extras: usage (bool) to request usage statistics, meta (bool) to include the meta block.
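As a sketch of combining several of these parameters in one call (assuming the same generated SDK types as in the example above, and that the ResponsesRequest constructor exposes these optional parameters by name — verify the exact signature against your SDK build):

```csharp
// Hypothetical parameter names mirroring the list above; the exact
// constructor signature depends on the generated SDK version.
var tunedRequest = new ResponsesRequest(
    model: "openai/gpt-4.1",
    input: inputItems,        // built as in the example above
    temperature: 0.2,         // low randomness, good for summarisation
    topP: 0.9,                // nucleus sampling cutoff
    maxOutputTokens: 512,     // hard cap on generated tokens per response
    stream: false             // return the full response in one call
);
```

Sampling controls interact: a low temperature with a top_p cutoff keeps output focused, while raising either loosens it.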

Returns

ResponsesResponse
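A minimal sketch of consuming the result, assuming ResponsesResponse exposes an ordered collection of output items with text content (the property names here are illustrative — check the ResponsesResponse type shipped with your SDK):

```csharp
// Hypothetical property names; the generated type may differ.
var response = client.GenerateResponse(request);
foreach (var item in response.Output)
{
    Console.WriteLine(item.Text);
}
```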