AI Stats aggregates public information and benchmark results from model providers and evaluation projects.
If you rely on AI Stats to look up, compare, or monitor models, a simple acknowledgement helps others discover the platform.

Quick version

If AI Stats was helpful, please link to it somewhere appropriate (methods section, footnote, or acknowledgements):
This work uses model metadata and aggregated benchmark results from AI Stats.

Suggested citation (optional)

If you prefer a more formal reference, you can use:
AI Stats. (2025). AI Stats: Aggregated benchmarks and gateway for AI models. https://ai-stats.phaseo.app
BibTeX (optional):
@misc{ai-stats-2025,
    title  = {AI Stats: Aggregated benchmarks and gateway for AI models},
    author = {{AI Stats Team}},
    year   = {2025},
    url    = {https://ai-stats.phaseo.app}
}

Benchmark data

AI Stats does not run benchmarks itself. We surface and normalise scores reported by benchmark authors and model providers. When you use benchmark results that you accessed via AI Stats:
  1. Cite the original benchmark and paper as the primary source.
  2. Optionally acknowledge AI Stats as the place you retrieved or compared the scores.
  3. Include:
  • Benchmark name and split (for example, MMLU test split).
  • Model identifier and provider.
  • URL to the relevant AI Stats page or API.
  • Date accessed.
Example text:
Benchmark scores were retrieved via AI Stats (MMLU test split), accessed 3 March 2025. Model: openai-gpt-4o, provider: OpenAI.
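If you automate this bookkeeping, it can help to store the citation fields alongside each retrieved score. The sketch below is illustrative only: the /api/benchmarks path, query parameters, and "score" response field are assumptions, not the documented AI Stats API; check the API reference for the actual routes and response shapes.

# Minimal sketch of recording provenance for a score retrieved via AI Stats.
# The endpoint path and response field below are assumptions, not the
# documented API; consult the AI Stats API reference for the real interface.
from datetime import date

import requests

BASE_URL = "https://ai-stats.phaseo.app"  # AI Stats site; the API path below is assumed


def fetch_score_with_provenance(model_id: str, benchmark: str, split: str) -> dict:
    """Fetch a benchmark score and keep the citation fields alongside it."""
    url = f"{BASE_URL}/api/benchmarks/{benchmark}"  # hypothetical endpoint
    resp = requests.get(url, params={"model": model_id, "split": split}, timeout=10)
    resp.raise_for_status()
    score = resp.json()["score"]  # assumed response field
    return {
        "benchmark": benchmark,
        "split": split,
        "model": model_id,
        "score": score,
        "source_url": url,
        "accessed": date.today().isoformat(),  # record the access date for citation
    }


record = fetch_score_with_provenance("openai-gpt-4o", "mmlu", "test")
print(record)

Keeping the source URL and access date next to each score makes it straightforward to reproduce the example citation text above.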

Gateway and tooling

When describing integrations, you can mention AI Stats as the gateway layer rather than as a research benchmark source:
“We used the AI Stats Gateway to unify access to OpenAI, Anthropic, and Google models through a single API.”
Link directly to the API reference so readers can explore further.
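As an illustration of what such an integration might look like, the sketch below assumes a gateway endpoint that accepts an OpenAI-style chat request. The URL path, Authorization header, model identifiers, and response shape are all assumptions, so consult the API reference for the actual interface.

# Minimal sketch of routing requests to several providers through one gateway call shape.
# The gateway URL, auth header, model names, and response shape are assumptions;
# consult the AI Stats Gateway API reference for the real interface.
import os

import requests

GATEWAY_URL = "https://ai-stats.phaseo.app/api/gateway/v1/chat/completions"  # assumed path


def chat(model: str, prompt: str) -> str:
    """Send the same request shape to any provider's model via the gateway."""
    resp = requests.post(
        GATEWAY_URL,
        headers={"Authorization": f"Bearer {os.environ['AI_STATS_API_KEY']}"},  # assumed auth
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


# One request shape across providers; only the model identifier changes.
for model in ("openai-gpt-4o", "anthropic-claude-sonnet", "google-gemini-pro"):
    print(model, chat(model, "Say hello in one word."))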