Cohere: Command R7B (12-2024)
Last synced Apr 7, 2026, 2:04 PM
Context window: 128K
Blended price: $0.066/M tokens
Input price: $0.037/M tokens
Output price: $0.15/M tokens
Speed: —
TTFT: —
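The blended price appears to be a weighted average of the input and output prices. A minimal sketch, assuming the common 3:1 input-to-output token weighting (an assumption, not stated on this page), which reproduces the listed figure to within rounding:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted-average price per million tokens.

    The 3:1 input:output weighting is an assumption about how the blended
    figure is derived; it is not documented on this page.
    """
    total_weight = input_weight + output_weight
    return (input_weight * input_per_m + output_weight * output_per_m) / total_weight


# Command R7B (12-2024): $0.037/M input, $0.15/M output
print(f"${blended_price(0.037, 0.15):.3f}/M")  # ~$0.065/M, close to the listed $0.066/M
```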
Benchmark Scores
How Cohere: Command R7B (12-2024) Compares
[Interactive bubble chart: X axis = Blended Price (USD), Y axis = AgMoBench Overall, bubble size = Context Window (16,384–2,000,000 tokens). Filters: Blended Price $0.00–$30.0, AgMoBench Overall ≥ 3.5, provider (ai21-labs, alibaba, anthropic, aws, azure, baidu, cohere, deepseek, +15 more). Controls: table view, quadrants, top-model highlighting.]
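To rebuild a comparable view offline, the sketch below applies the same filter thresholds and axis encodings. The dataframe and its column names are placeholders of my own; only the thresholds and encodings come from the page, and the plotted values are illustrative, not AgMoDB data.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder records standing in for an AgMoDB export; the scores here are
# made up for illustration and are NOT real AgMoDB values.
models = pd.DataFrame({
    "name": ["model-a", "model-b", "model-c"],
    "blended_price_usd": [0.07, 1.2, 15.0],
    "agmobench_overall": [3.6, 4.1, 4.4],
    "context_window": [128_000, 200_000, 2_000_000],
})

# Filters shown on the page: blended price $0.00–$30.0, AgMoBench Overall >= 3.5.
view = models[
    models["blended_price_usd"].between(0.0, 30.0)
    & (models["agmobench_overall"] >= 3.5)
]

# Encodings from the page: X = blended price, Y = AgMoBench Overall,
# bubble size proportional to context window.
plt.scatter(
    view["blended_price_usd"],
    view["agmobench_overall"],
    s=view["context_window"] / 10_000,  # shrink token counts to point sizes
    alpha=0.6,
)
plt.xlabel("Blended Price (USD per 1M tokens)")
plt.ylabel("AgMoBench Overall")
plt.title("How Command R7B (12-2024) compares")
plt.show()
```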
Intelligence Index: —
Coding Index: —
Math Index: —
MMLU Pro: — / 100
GPQA Diamond: — / 100
HLE: — / 100
LiveCodeBench: — / 100
SciCode: — / 100
MATH-500: — / 100
AIME: — / 30
AIME 2025: — / 30
IFBench: — / 100
LCR: — / 100
Terminal-Bench Hard: — / 100
τ²-Bench: — / 100
External Benchmarks
AlpacaEval 2.0 LC (alpacaeval): 10.9 / 100
AlpacaEval 2.0 Raw (alpacaeval): 12.9 / 100
AA-Omniscience Accuracy (Predicted): 50.5 / 100
AA-Omniscience Hallucination Rate (Predicted): 97.6 / 100
Aider Polyglot (Predicted): 46.5 / 100
AIME (Predicted): 0.0 / 30
AIME 2025 (Predicted): 0.1 / 30
ARC-AGI-1 (Predicted): 86.1 / 100
ARC-AGI-1 Cost per Task (Predicted): 1.3
ARC-AGI-2 (Predicted): 56.3 / 100
ARC-AGI-2 Cost per Task (Predicted): 2.2
BigCodeBench Complete (Predicted): 26.9 / 100
BigCodeBench Instruct (Predicted): 20.9 / 100
AA Intelligence Index (Matrix) (Predicted): 55.7
AA Long Context Reasoning (Matrix) (Predicted): 77.8
AIME 2024 (Predicted): 73.4
AIME 2025 (Matrix) (Predicted): 98.6
Arena-Hard Auto (Predicted): 39.6
BrowseComp (Predicted): 87.8
BRUMO 2025 (Predicted): 99.8
CMIMC 2025 (Predicted): 95.7
CritPt (Predicted): 24.9
GPQA Diamond (Matrix) (Predicted): 68.4
GSM8K (Predicted): 84.1
HLE (Matrix) (Predicted): 43.5
HMMT Feb 2025 (Predicted): 89.1
HMMT Nov 2025 (Predicted): 94.7
HumanEval (Predicted): 83.8
IFBench (Matrix) (Predicted): 31.2
IFEval (Predicted): 83.1
IMO 2025 (Predicted): 55.1
LiveCodeBench (Matrix) (Predicted): 55.6
MATH-500 (Matrix) (Predicted): 91.4
MathArena Apex 2025 (Predicted): 22.5
MMLU (Predicted): 82.7
MMLU-Pro (Matrix) (Predicted): 73.5
MMMU-Pro (Predicted): 76.9
MRCR v2 (Predicted): 82.7
OSWorld (Predicted): 43.1
SimpleQA (Predicted): 61.0
SMT 2025 (Predicted): 94.9
SWE-bench Pro (Predicted): 58.9
Tau-Bench Telecom (Matrix) (Predicted): 99.1
Terminal-Bench 2.0 (Predicted): 79.7
Terminal-Bench 1.0 (Predicted): 42.1
USAMO 2025 (Predicted): 19.6
Video-MMU (Predicted): 84.3
browsecomp (Predicted): 89.7
BullshitBench (Predicted): 54.0 / 100
Aider Polyglot (Predicted): 0.1
Apex Agents (Predicted): 4.5
Arc Agi 2 (Predicted): 0.6
BALROG (Predicted): 0.0
BIG-Bench Hard (Predicted): 3.0
BoolQ (Predicted): 0.8
CAD-Eval (Predicted): 12.6
Chess Puzzles (Predicted): 0.4
CyBench (Predicted): 0.2
DeepResearchBench (Predicted): 0.5
FictionLiveBench (Predicted): 0.6
Gdpval (Predicted): 0.7
GeoBench (Predicted): 0.0
GSM8K (Epoch) (Predicted): 0.0
GSO (Predicted): 0.6
HellaSwag (Predicted): 0.0
Hle (Predicted): 0.2
Lech Mazur Writing (Predicted): 7.7
METR Time Horizons (Predicted): 27.2
OTIS Mock AIME 2024–2025 (Predicted): 0.1
PIQA (Predicted): 0.8
Posttrainbench (Predicted): 0.0
SimpleQA Verified (Epoch) (Predicted): 0.5
The Agent Company (Predicted): 1.3
TriviaQA (Predicted): 16.8
VPCT (Predicted): 0.4
WinoGrande (Predicted): 0.7
FrontierMath (Predicted): 47.4 / 100
GAIA Level 1 (Predicted): 21.1
GAIA Level 2 (Predicted): 2.8
GAIA Level 3 (Predicted): 0.0
GAIA (Predicted): 12.5 / 100
GPQA Diamond (Predicted): 0.5 / 100
HLE (Predicted): 0.1 / 100
IFBench (Predicted): 0.4 / 100
LCR (Predicted): 0.0 / 100
LegalBench (Predicted): 29.8 / 100
LiveBench Coding (Predicted): 79.7 / 100
LiveBench Data Analysis (Predicted): 73.5 / 100
LiveBench Language (Predicted): 84.1 / 100
LiveBench Math (Predicted): 88.9 / 100
LiveBench Overall (Predicted): 77.9 / 100
LiveBench Reasoning (Predicted): 83.3 / 100
LiveCodeBench (Predicted): 0.1 / 100
LongBench v2 Easy (Predicted): 36.2
LongBench v2 Hard (Predicted): 29.0
LongBench v2 (Predicted): 27.1 / 100
MATH-500 (Predicted): 0.4 / 100
MathVista (Predicted): 27.0 / 100
MedQA (USMLE) (Predicted): 78.9
MLE-bench (Predicted): 72.0 / 100
MMLU Pro (Predicted): 0.5 / 100
MMMU (Predicted): 67.0 / 100
MMTU Table Understanding (Predicted): 57.7 / 100
MT-Bench (Predicted): 7.7 / 10
NoLiMa (NIAH) (Predicted): 85.3 / 100
OCRBench v2 (Predicted): 67.0 / 100
Open LLM Average (Predicted): 12.9 / 100
Open LLM: BBH (Predicted): 36.0 / 100
Open LLM: GPQA (Predicted): 25.7 / 100
Open LLM: IFEval (Predicted): 42.3 / 100
Open LLM: MATH Level 5 (Predicted): 4.0 / 100
Open LLM: MMLU-PRO (Predicted): 21.1 / 100
Open LLM: MUSR (Predicted): 36.3 / 100
RE-Bench (Predicted): 100.0
SciCode (Predicted): 0.0 / 100
SimpleBench (Predicted): 34.1 / 100
simpleqa (Predicted): 49.8
SWE-bench Lite (Predicted): 20.6 / 100
SWE-bench Verified (Predicted): 65.1 / 100
τ²-Bench (Predicted): 0.1 / 100
tau-bench Retail (Predicted): 93.1 / 100
Terminal-Bench Hard (Predicted): 0.0 / 100
Vectara Factual Consistency (Predicted): 85.2 / 100
Vectara Hallucination Rate (Predicted): 14.8 / 100
WebArena (Predicted): 0.0 / 100
WeirdML (Predicted): 36.5 / 100
WildBench (Predicted): 27.5
BFCL (Berkeley Function Calling) (bfcl): 32.1
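A note on scales: most of the scores above are out of 100, but AIME is out of 30, MT-Bench out of 10, and cost-per-task entries have no maximum at all. A small sketch of normalizing these mixed scales to a common 0–100 range, using values taken from the rows above:

```python
# (benchmark, score, max) triples copied from rows above; None marks entries
# such as cost-per-task where a maximum does not apply.
scores = [
    ("AlpacaEval 2.0 LC", 10.9, 100),
    ("AIME (Predicted)", 0.0, 30),
    ("MT-Bench (Predicted)", 7.7, 10),
    ("ARC-AGI-1 Cost per Task (Predicted)", 1.3, None),
]

for name, value, maximum in scores:
    if maximum is None:
        print(f"{name}: {value} (not a bounded score; skip normalization)")
    else:
        print(f"{name}: {100 * value / maximum:.1f} / 100")
```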