What you can do
Discover Organisations
See what models are offered by organisations, as well as their latest
releases and how they stack up.
Explore Models
View detailed model pages with key information like capabilities,
pricing, benchmark results, and real-world performance metrics.
Compare Benchmarks
See how models perform across common benchmark datasets and tasks.
Analyse Pricing & Performance
Compare model costs and token efficiency to find the right balance for
your use case.
Data structure
AI Stats collects and organises data in several key categories:

| Category | Description |
|---|---|
| Organisations | The companies or research labs that develop and publish models. |
| Models | The individual models that are released by the organisations. |
| Benchmarks | The standardised tests that organisations use to evaluate model performance. |
| API Providers | The platforms that offer programmatic access to models. |
| Pricing | Detailed cost breakdowns per model, measured per token, image, or generation. |
| Subscription Plans | Information on different pricing tiers and their included features or limits. |
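These categories could be modelled as simple records. The sketch below is an illustrative guess at the shape of the data, not the actual AI Stats schema; every class and field name here is an assumption:

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class Organisation:
    # A company or research lab that develops and publishes models.
    name: str
    models: list[Model] = field(default_factory=list)

@dataclass
class Model:
    # An individual model released by an organisation.
    name: str
    organisation: str
    family: str | None = None  # e.g. "GPT-5", "Claude 4"
    benchmark_scores: dict[str, float] = field(default_factory=dict)
    price_per_1m_input_tokens: float | None = None   # USD per 1M tokens
    price_per_1m_output_tokens: float | None = None  # USD per 1M tokens

@dataclass
class Benchmark:
    # A standardised test used to evaluate model performance.
    name: str
    description: str
```

Keeping organisations, models, and benchmarks as separate records mirrors the table above: models reference their organisation by name, and benchmark scores hang off each model.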
How AI Stats collects and updates data
AI Stats is powered by a vast, fully open database - built and maintained through a mix of automation and incredible community effort. While automated systems track updates across providers and models, human contribution is the backbone of the project and is what keeps it running smoothly. When updates occur:

- 🧩 New models are added swiftly and documented with as much detail as possible.
- 🔄 Provider changes - from pricing to endpoint tweaks - are continuously monitored and reviewed.
- 🧠 Community contributions are carefully reviewed and merged through GitHub pull requests, ensuring quality and transparency.
Interpreting data on AI Stats
All data on AI Stats has been normalised for consistency:

- Dates are always formatted as `DD MMM YYYY` (e.g. `01 Jan 2022`).
- Prices are displayed in USD by default, normalised per 1 million tokens (or other appropriate units) where applicable.
- Performance scores are standardised to a common scale where possible.
- Model families group together related models (e.g., GPT-5, Claude 4, Gemini 2.5, etc.).
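As a rough illustration of the date and price conventions above (the helper names are assumptions for this sketch, not part of AI Stats):

```python
from datetime import date

def format_date(d: date) -> str:
    """Format a date in the site's DD MMM YYYY convention."""
    return d.strftime("%d %b %Y")

def price_per_million_tokens(observed_cost_usd: float, tokens: int) -> float:
    """Normalise an observed API cost to USD per 1 million tokens."""
    return observed_cost_usd / tokens * 1_000_000

print(format_date(date(2022, 1, 1)))           # 01 Jan 2022
print(price_per_million_tokens(0.03, 10_000))  # 3.0
```

Normalising to a per-1M-token price is what makes models from different providers directly comparable, regardless of how each provider quotes its raw rates.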
Example use cases
Here are some examples of how you might use AI Stats:

| Goal | Example |
|---|---|
| Compare two models | “Which performs better on GPQA - Claude 4.5 Sonnet or GPT-5?” |
| Find the cheapest model for a task | “What’s the most cost-efficient model for summarisation?” |
| Analyse benchmark trends | “How have reasoning models improved on GPQA over the last year?” |
| Explore API access | “Which providers currently support JSON mode?” |
| Identify gaps for contribution | “Which models are missing key benchmarks or metadata?” |
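For the cost-efficiency use case above, a comparison might look like the following. The model names and prices here are made up purely for illustration and are not real AI Stats data:

```python
# Hypothetical per-1M-token prices in USD as (input, output) pairs.
models = {
    "model-a": (0.25, 1.25),
    "model-b": (3.00, 15.00),
}

def summarisation_cost(input_price: float, output_price: float,
                       input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one summarisation call at per-1M-token prices."""
    return (input_price * input_tokens + output_price * output_tokens) / 1_000_000

# Summarise a 20k-token document into a 500-token summary.
for name, (inp, out) in models.items():
    print(name, round(summarisation_cost(inp, out, 20_000, 500), 4))
```

Because summarisation is input-heavy (far more tokens read than written), the input price dominates the total, which is why normalised input and output prices are shown separately on model pages.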
Next steps
Explore Models
Explore the latest models in the database and dive deeper into how each
one performs.
Learn About Benchmarks
Understand what benchmarks measure and how to interpret their results.