Release Announcement: PocketPal AI 1.6.2 – Now with Benchmarking! #155
a-ghorbani announced in Announcements
Hey everyone,
We’re excited to announce the release of PocketPal AI 1.6.2, introducing a major new feature: AI-Phone Benchmarking! 🎉
As we transition from traditional smartphones to AI-phones, understanding how well a device runs language models becomes much more important. That's why we've integrated benchmarking directly into PocketPal AI.
How to Get Started:
1. Join the Beta Program and Download the Beta Version (1.6.2):
2. Benchmark Your Device:
3. Submit Your Results:
Your results will appear on the AI-Phone Leaderboard, where you can compare your device's performance with others and find out just how AI-poor or AI-rich your phone is :)
How the Benchmarking Works:
Our ranking system evaluates devices based on token-generation (TG) speed, prompt-processing (PP) speed, model size, and quantization quality.
Weights:
- Token generation (TG) speed: 0.6
- Prompt processing (PP) speed: 0.4

(The weights are ad hoc; prompt processing is weighted less because it's a one-time cost per prompt, while token generation is an ongoing cost.)
Quantization Quality Factors:
(The factor scales linearly down to 0.1 at Q1.)
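To make the linear scale concrete, here's a minimal TypeScript sketch. The only value taken from this post is the 0.1 factor at Q1; the assumption that an 8-bit quantization sits at the top of the scale with a factor of 1.0 is ours, and the factors used in the app may differ.

```typescript
// Hypothetical linear quantization-quality factor.
// From the post: the factor reaches 0.1 at Q1.
// Assumption (not from the post): Q8 maps to a factor of 1.0.
function quantFactor(bits: number): number {
  const minBits = 1;      // Q1
  const maxBits = 8;      // assumed top of the scale
  const minFactor = 0.1;  // factor at Q1 (from the post)
  const maxFactor = 1.0;  // assumed factor at Q8
  const clamped = Math.min(Math.max(bits, minBits), maxBits);
  return minFactor + ((clamped - minBits) / (maxBits - minBits)) * (maxFactor - minFactor);
}

// quantFactor(1) = 0.1, quantFactor(4) ≈ 0.49, quantFactor(8) = 1.0
```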
Performance Score Formula:
base_score = (TG_speed * 0.6) + (PP_speed * 0.4)
performance_score = base_score * model_size * quant_factor
normalized_score = (performance_score / max_performance_score) * 100
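For concreteness, here is the scoring formula above as a small TypeScript sketch (not the app's actual code). `maxPerformanceScore` is assumed to be the highest raw performance score currently on the leaderboard.

```typescript
interface BenchmarkResult {
  tgSpeed: number;      // token-generation speed (TG_speed)
  ppSpeed: number;      // prompt-processing speed (PP_speed)
  modelSize: number;    // model_size term from the formula
  quantFactor: number;  // quantization quality factor (quant_factor)
}

// Raw performance score, exactly as in the formula above.
function performanceScore(r: BenchmarkResult): number {
  const baseScore = r.tgSpeed * 0.6 + r.ppSpeed * 0.4;
  return baseScore * r.modelSize * r.quantFactor;
}

// Normalized to a 0–100 scale against the best raw score on the leaderboard.
function normalizedScore(r: BenchmarkResult, maxPerformanceScore: number): number {
  return (performanceScore(r) / maxPerformanceScore) * 100;
}
```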
Data Aggregation for Consistency:
We normalize device IDs to ensure fair comparisons:
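The exact normalization rules aren't listed here, but as a rough illustration of the idea, the sketch below maps raw OS-reported model identifiers to a canonical device name and averages repeated submissions. The alias table and the averaging rule are purely hypothetical, not PocketPal AI's actual logic.

```typescript
// Purely illustrative: fold raw/regional model identifiers into one
// canonical device name so results for the same phone are compared
// together, then average multiple submissions per device.
const DEVICE_ALIASES: Record<string, string> = {
  // hypothetical examples of raw IDs mapping to one device
  'SM-S918B': 'Samsung Galaxy S23 Ultra',
  'SM-S918U': 'Samsung Galaxy S23 Ultra',
};

function normalizeDeviceId(rawId: string): string {
  const id = rawId.trim();
  return DEVICE_ALIASES[id] ?? id;
}

function aggregateScores(
  submissions: { rawId: string; score: number }[],
): Map<string, number> {
  const buckets = new Map<string, number[]>();
  for (const s of submissions) {
    const key = normalizeDeviceId(s.rawId);
    buckets.set(key, [...(buckets.get(key) ?? []), s.score]);
  }
  const averaged = new Map<string, number>();
  for (const [device, scores] of buckets) {
    averaged.set(device, scores.reduce((a, b) => a + b, 0) / scores.length);
  }
  return averaged;
}
```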
Open-Source and Community Feedback:
The benchmarking app is open-source too!
We’re looking for feedback on:
Your input will help us refine the app and create a community-driven standard for benchmarking AI-phones.
Let’s shape the future of AI-phone benchmarking together! 🚀