A community-driven repository of LLM data and benchmarks. Compare and explore language models through our interactive dashboard at llm-stats.com.
Our repository contains detailed information on hundreds of LLMs:
- Model parameters, context window sizes, licensing details, capabilities, and more
- Provider pricing
- Performance metrics (throughput, latency)
- Standardized benchmark results
We welcome community contributions to keep our data accurate and up-to-date:
- **Update Model Data**
  - Browse the `models/` and `providers/` directories
  - Check `schemas/` for the expected data formats (see the sketch after this list)
  - Submit a PR following our contribution guidelines
- **Report Issues with llm-stats.com**
  - Have a feature request or found a bug? Open an issue
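To give a feel for the data layout, here is a minimal, purely hypothetical sketch of what a model entry might contain. The actual file format and field names are defined by the schemas in `schemas/`, so check those before submitting.

```yaml
# Hypothetical sketch only -- the authoritative field names, types, and
# file format are defined by the schemas in schemas/.
name: example-model-7b          # hypothetical model identifier
parameters: 7000000000          # total parameter count
context_window: 32768           # context window size, in tokens
license: apache-2.0             # licensing details
benchmarks:
  - name: MMLU                  # standardized benchmark name
    score: 0.65
    source: https://example.com/eval-report   # verifiable source link (required)
```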
Accuracy is our priority. To ensure reliable information:
- All benchmark data requires verifiable source links
- All changes go through community review
- Multiple source citations are encouraged
- Submitted data is validated regularly
We can't guarantee that every entry is 100% accurate, but we do our best to keep the data as accurate as possible.
Built with 💙 by the AI community, for the AI community.
Star this repo if you find it useful!