Basic Usage

import { AIStats } from "@ai-stats/ts-sdk";

const client = new AIStats({ apiKey: process.env.AI_STATS_API_KEY! });

// Chat completions
const response = await client.generateText({
	model: "openai/gpt-4o-mini",
	messages: [
		{ role: "system", content: "You are a helpful assistant." },
		{ role: "user", content: "What is AI Stats?" },
	],
	temperature: 0.7,
	max_tokens: 1000,
});

console.log(response.choices[0].message.content);

Available Methods

Chat Completions

const completion = await client.generateText({
	model: "anthropic/claude-3-haiku",
	messages: [{ role: "user", content: "Explain quantum computing" }],
	temperature: 0.3,
});
The primary method for generating text responses from AI models. It supports all standard chat parameters, including temperature, max_tokens, and multi-turn message history via the messages array.
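
To continue a conversation, resend the earlier turns along with the new user message. A minimal sketch building on the request above, assuming the returned choices[0].message object can be re-sent as-is in the messages array:

// Carry the conversation forward by resending prior turns plus a follow-up
const reply = completion.choices[0].message; // assumed to be a { role, content } message

const followUp = await client.generateText({
	model: "anthropic/claude-3-haiku",
	messages: [
		{ role: "user", content: "Explain quantum computing" },
		reply, // the assistant's previous answer
		{ role: "user", content: "Now summarise that in one sentence." },
	],
	temperature: 0.3,
});

console.log(followUp.choices[0].message.content);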

Models

// List all available models
const models = await client.getModels();

// Get a specific model
const model = await client.getModel({ model: "openai/gpt-4o" });
Retrieve information about available models or get details about a specific model including capabilities, pricing, and metadata.
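
The listing can also be filtered client-side before choosing a model. The sketch below assumes getModels() resolves to an array of Model objects exposing an id field; check the generated Model type for the exact shape.

// Pick models client-side from the listing (id field is an assumed shape)
const anthropicModels = models.filter((m) => m.id.startsWith("anthropic/"));
console.log(anthropicModels.map((m) => m.id));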

Error Handling

try {
	const response = await client.generateText({
		model: "invalid-model",
		messages: [{ role: "user", content: "Hello" }],
	});
} catch (error) {
	console.error("API Error:", error instanceof Error ? error.message : error);
	// Handle rate limits, authentication errors, etc.
}
Handle API errors gracefully, including rate limits, authentication failures, and network issues.
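
Rate-limit errors are usually worth retrying with exponential backoff. The helper below is a generic sketch, not part of the SDK; how a rate limit is detected (here an assumed status field of 429) depends on the error shape the SDK throws, so adjust the check accordingly.

// Generic retry helper with exponential backoff — not part of the SDK
async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
	for (let attempt = 1; ; attempt++) {
		try {
			return await fn();
		} catch (error) {
			// Assumption: rate-limit errors expose a 429 status; adjust to the SDK's error shape
			const isRateLimit = (error as { status?: number }).status === 429;
			if (!isRateLimit || attempt >= maxAttempts) throw error;
			await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 1000));
		}
	}
}

const result = await withRetries(() =>
	client.generateText({
		model: "openai/gpt-4o-mini",
		messages: [{ role: "user", content: "Hello" }],
	}),
);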

Configuration Options

const client = new AIStats({
	apiKey: "your-api-key",
	baseURL: "https://api.ai-stats.phaseo.app/v1", // Optional custom base URL
	timeout: 30000, // Request timeout in milliseconds
});
Customize client behavior with base URLs, timeouts, and other configuration options.

TypeScript Support

import type {
	ChatCompletionsRequest,
	ChatCompletionsResponse,
	Model,
	CreditsResponse,
} from "@ai-stats/ts-sdk";

// All parameters and responses are fully typed
const request: ChatCompletionsRequest = {
	model: "openai/gpt-4o",
	messages: [{ role: "user", content: "Hello" }],
};
Full TypeScript support with generated types for all request/response objects and parameters.
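
The response types can be used the same way, for example to type helper functions. This sketch assumes ChatCompletionsResponse matches what generateText resolves to:

// Typed helper around generateText (assumes it resolves to ChatCompletionsResponse)
async function ask(prompt: string): Promise<ChatCompletionsResponse> {
	return client.generateText({
		model: "openai/gpt-4o",
		messages: [{ role: "user", content: prompt }],
	});
}

const answer: ChatCompletionsResponse = await ask("Hello");
console.log(answer.choices[0].message.content);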

Best Practices

  • Store your API key securely (environment variables, not in code)
  • Handle rate limits gracefully with exponential backoff
  • Use appropriate timeouts for your use case
  • Check credits before making expensive requests (see the sketch after this list)
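
The CreditsResponse type imported above suggests a credits endpoint. A hedged sketch of a pre-flight check, assuming a client.getCredits() method resolving to a CreditsResponse with a numeric credits field — both names are illustrative, so check the SDK's generated types for the real method and shape:

// Hypothetical pre-flight credit check — method and field names are assumptions
const credits = await client.getCredits();

if (credits.credits < 1) {
	throw new Error("Insufficient credits for this request");
}

const report = await client.generateText({
	model: "openai/gpt-4o",
	messages: [{ role: "user", content: "Write a long-form summary" }],
	max_tokens: 4000,
});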