If you rely on AI Stats to look up, compare, or monitor models, a simple acknowledgement helps others discover the platform.
Quick version
If AI Stats was helpful, please just link to it somewhere appropriate (methods, footnote, or acknowledgements):

“This work uses model metadata and aggregated benchmark results from AI Stats.”
Suggested citation (optional)
If you prefer a more formal reference, cite AI Stats by name together with the URL of the page you used and the date you accessed it.

Benchmark data

AI Stats does not run benchmarks itself. We surface and normalise scores reported by benchmark authors and model providers. When you use benchmark results that you accessed via AI Stats:

- Cite the original benchmark and paper as the primary source.
- Optionally acknowledge AI Stats as the place you retrieved or compared the scores.
- Include:
  - Benchmark name and split (for example, MMLU test).
  - Model identifier and provider.
  - URL to the relevant AI Stats page or API.
  - Date accessed.
For example:

“Benchmark scores were retrieved via AI Stats (MMLU test split), accessed 3 March 2025. Model: openai-gpt-4o, provider: OpenAI.”
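If you retrieve scores programmatically, a minimal sketch like the following can help you keep those provenance fields together. The endpoint path, query parameters, and response shape below are assumptions, not the real AI Stats API; check the API reference for the actual calls.

```python
# Minimal sketch: fetch one benchmark score and keep the citation
# metadata the checklist above asks for.
# The endpoint URL, parameters, and response shape are placeholders,
# not the documented AI Stats API.
from datetime import date

import requests

API_URL = "https://example.com/api/v1/benchmarks"  # placeholder endpoint


def fetch_score(benchmark: str, split: str, model: str) -> dict:
    """Fetch a score and attach the provenance fields needed for citation."""
    response = requests.get(
        API_URL,
        params={"benchmark": benchmark, "split": split, "model": model},
        timeout=10,
    )
    response.raise_for_status()
    result = response.json()  # assumed shape: {"score": ..., "provider": ...}
    return {
        "benchmark": benchmark,                 # benchmark name
        "split": split,                         # evaluation split
        "model": model,                         # model identifier
        "provider": result.get("provider"),     # model provider
        "score": result.get("score"),
        "source_url": response.url,             # URL of the API call made
        "accessed": date.today().isoformat(),   # date accessed
    }


record = fetch_score("MMLU", "test", "openai-gpt-4o")
```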
Gateway and tooling
When describing integrations, you can mention AI Stats as the gateway layer rather than as a research benchmark source:

“We used the AI Stats Gateway to unify access to OpenAI, Anthropic, and Google models through a single API.”

Link directly to the API reference so readers can explore further.
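As an illustration only, here is a minimal sketch of what such an integration might look like, assuming the Gateway exposes an OpenAI-compatible endpoint. The base URL, credential variable, and model identifiers are placeholders rather than documented values; consult the API reference for the real ones.

```python
# Sketch of routing several providers' models through one gateway.
# Assumes an OpenAI-compatible endpoint; all names below are placeholders.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/gateway/v1",  # placeholder gateway URL
    api_key=os.environ["AI_STATS_API_KEY"],     # hypothetical credential
)

# The same request shape works regardless of the upstream provider.
for model in ("gpt-4o", "claude-3-5-sonnet", "gemini-1.5-pro"):
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(model, reply.choices[0].message.content)
```

The point of the gateway pattern is that the request stays the same while the model identifier selects the provider.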