AlfredPros: CodeLLaMa 7B Instruct Solidity
Last synced Apr 7, 2026, 2:03 PM
Context window: 4K tokens
Blended price: $0.90 per million tokens
Input price: $0.80 per million tokens
Output price: $1.20 per million tokens
Speed: —
TTFT (time to first token): —
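The blended price follows from the input and output prices under the common 3:1 input:output token weighting: (3 × 0.80 + 1 × 1.20) / 4 = 0.90. A minimal sketch of that calculation; the 3:1 weighting is inferred from the listed figures, not stated on this page:

```python
def blended_price(input_price: float, output_price: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Token-weighted average of per-million-token prices.

    The 3:1 input:output weighting is an assumption: the page does not
    state its formula, but 3:1 reproduces the listed $0.90/M figure.
    """
    total_weight = input_weight + output_weight
    return (input_weight * input_price + output_weight * output_price) / total_weight


print(blended_price(0.80, 1.20))  # (3 * 0.80 + 1 * 1.20) / 4 = 0.90
```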
Benchmark Scores
How AlfredPros: CodeLLaMa 7B Instruct Solidity Compares
[Interactive scatter chart: Blended Price (USD) on the x-axis, AgMoBench Overall on the y-axis, bubble size proportional to context window (16,384–2,000,000 tokens). Default filters: blended price $0.00–$30.0 and AgMoBench Overall ≥ 3.5; results can be narrowed by provider (anthropic, openai, google, meta, mistral, and others).]
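A minimal sketch of the chart's default filter logic, assuming a simple row type; ModelRow and its field names are hypothetical illustrations, not part of any AgMoDB API:

```python
from dataclasses import dataclass


@dataclass
class ModelRow:
    name: str
    blended_price: float      # USD per million tokens (x-axis)
    agmobench_overall: float  # y-axis
    context_window: int       # tokens; drives bubble size


def chart_filter(rows: list[ModelRow],
                 max_price: float = 30.0,
                 min_score: float = 3.5) -> list[ModelRow]:
    """Keep only rows inside the chart's default ranges:
    blended price $0.00-$30.0 and AgMoBench Overall >= 3.5."""
    return [r for r in rows
            if 0.0 <= r.blended_price <= max_price
            and r.agmobench_overall >= min_score]
```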
Intelligence Index

None of the Intelligence Index components have been measured for this model:

Intelligence Index: —
Coding Index: —
Math Index: —
MMLU Pro: — / 100
GPQA Diamond: — / 100
HLE: — / 100
LiveCodeBench: — / 100
SciCode: — / 100
MATH-500: — / 100
AIME: — / 30
AIME 2025: — / 30
IFBench: — / 100
LCR: — / 100
Terminal-Bench Hard: — / 100
τ²-Bench: — / 100
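Because every component above is unmeasured, the aggregate index itself is blank. A minimal sketch of how a composite index can be aggregated from whichever components are available; the equal-weight mean is an illustrative assumption, not AgMoDB's documented method:

```python
from typing import Optional


def composite_index(components: dict[str, Optional[float]]) -> Optional[float]:
    """Equal-weight mean over the components that were actually run.

    None stands for the page's "—" entries; the equal weighting is an
    illustrative assumption, not AgMoDB's documented method.
    """
    measured = [score for score in components.values() if score is not None]
    return sum(measured) / len(measured) if measured else None


# Every Intelligence Index component is unmeasured for this model,
# so the aggregate is None, rendered as "—" above.
print(composite_index({"MMLU Pro": None, "GPQA Diamond": None, "AIME": None}))
```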
External Benchmarks

All scores in this section are predicted values rather than measured results:

AA-Omniscience Accuracy: 53.1 / 100
AA-Omniscience Hallucination Rate: 98.5 / 100
Aider Polyglot: 77.9 / 100
AIME: 0.0 / 30
AIME 2025: 0.5 / 30
AlpacaEval 2.0 LC: 8.2 / 100
AlpacaEval 2.0 Raw: 5.6 / 100
ARC-AGI-1: 45.0 / 100
ARC-AGI-1 Cost per Task: 3.8
ARC-AGI-2: 99.5 / 100
ARC-AGI-2 Cost per Task: 7.6
BFCL (Berkeley Function Calling): 40.3
AA Intelligence Index (Matrix): 48.0
AA Long Context Reasoning (Matrix): 80.2
AIME 2024: 96.8
AIME 2025 (Matrix): 100.0
Arena-Hard Auto: 42.3
BrowseComp: 91.9
BRUMO 2025: 100.0
CMIMC 2025: 88.8
CritPt: 75.2
GPQA Diamond (Matrix): 83.8
GSM8K: 94.8
HLE (Matrix): 29.5
HMMT Feb 2025: 97.7
HMMT Nov 2025: 96.0
HumanEval: 90.8
IFBench (Matrix): 66.6
IFEval: 77.3
IMO 2025: 30.5
LiveCodeBench (Matrix): 74.1
MATH-500 (Matrix): 96.7
MathArena Apex 2025: 49.7
MMLU: 88.2
MMLU-Pro (Matrix): 83.5
MMMU-Pro: 76.1
MRCR v2: 85.3
OSWorld: 85.1
SimpleQA: 62.8
SMT 2025: 97.4
SWE-bench Pro: 65.8
Tau-Bench Telecom (Matrix): 99.5
Terminal-Bench 2.0: 91.2
Terminal-Bench 1.0: 61.7
USAMO 2025: 38.5
Video-MMMU: 84.2
BrowseComp: 93.3
BullshitBench: 74.2 / 100
Aider Polyglot: 2.4
Apex Agents: 5.5
ARC-AGI-2: 3.9
BALROG: 0.0
CAD-Eval: 3.5
Chess Puzzles: 0.6
CyBench: 0.4
DeepResearchBench: 0.6
FictionLiveBench: 0.9
GDPval: 1.1
GeoBench: 0.0
GSO: 15.7
HLE: 0.3
Lech Mazur Writing: 8.4
METR Time Horizons: 73.6
OTIS Mock AIME 2024–2025: 0.3
PostTrainBench: 0.0
SimpleQA Verified (Epoch): 1.1
The Agent Company: 1.9
VPCT: 0.4
FrontierMath: 32.7 / 100
GAIA Level 1: 21.7
GAIA Level 2: 6.9
GAIA Level 3: 0.1
GAIA: 13.9 / 100
GPQA Diamond: 0.4 / 100
HLE: 0.1 / 100
IFBench: 0.8 / 100
LCR: 0.0 / 100
LiveBench Coding: 84.0 / 100
LiveBench Data Analysis: 83.0 / 100
LiveBench Language: 91.1 / 100
LiveBench Math: 94.1 / 100
LiveBench Overall: 86.8 / 100
LiveBench Reasoning: 92.5 / 100
LiveCodeBench: 0.2 / 100
LongBench v2 Easy: 41.2
LongBench v2 Hard: 29.4
LongBench v2: 36.3 / 100
MATH-500: 0.5 / 100
MathVista: 19.3 / 100
MedQA (USMLE): 89.8
MLE-bench: 54.3 / 100
MMLU Pro: 0.5 / 100
MMMU: 76.3 / 100
MMTU Table Understanding: 67.3 / 100
MT-Bench: 6.6 / 10
NoLiMa (NIAH): 86.0 / 100
OCRBench v2: 62.7 / 100
RE-Bench: 100.0
SciCode: 0.6 / 100
SimpleBench: 76.3 / 100
SimpleQA: 68.8
SWE-bench Lite: 75.8 / 100
SWE-bench Verified: 83.0 / 100
τ²-Bench: 0.7 / 100
Tau-Bench Retail: 95.8 / 100
Terminal-Bench Hard: 0.0 / 100
Vectara Factual Consistency: 31.7 / 100
Vectara Hallucination Rate: 68.3 / 100
WeirdML: 70.1 / 100
WildBench: 22.6
Scores synced from external leaderboards, grouped by source:

bigcodebench
  BigCodeBench Complete: 25.7 / 100
  BigCodeBench Instruct: 21.9 / 100

epoch_ai
  BIG-Bench Hard: 3.0
  BoolQ: 0.7
  Epoch Capabilities Index: 95.5
  GSM8K (Epoch): 0.0
  HellaSwag: 0.0
  LAMBADA: 0.7
  OpenBookQA: 0.4
  PIQA: 0.8
  ScienceQA: 0.4
  TriviaQA: 64.0
  WinoGrande: 0.7

legalbench
  LegalBench: 10.0 / 100

open_llm_leaderboard
  Open LLM Average: 6.4 / 100
  Open LLM: BBH: 32.8 / 100
  Open LLM: GPQA: 25.3 / 100
  Open LLM: IFEval: 25.0 / 100
  Open LLM: MATH Level 5: 0.8 / 100
  Open LLM: MMLU-PRO: 13.1 / 100
  Open LLM: MUSR: 33.5 / 100

webarena
  WebArena: 0.0 / 100