Mistral Small 3.1

mistral-small-3-1 · 4 samples

Scoring

Choose the quality benchmark and final score formula for this model.

Cost adjustment

Tune formulas that account for benchmark cost.
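One common way such a cost adjustment can work (a hypothetical sketch, not this dashboard's actual formula) is to discount the quality score by the benchmark cost raised to a tunable weight:

```python
def cost_adjusted(score: float, cost: float, weight: float = 0.5) -> float:
    """Hypothetical cost adjustment: divide the quality score by
    cost**weight. weight=0 ignores cost; weight=1 gives score per dollar."""
    return score / (cost ** weight)

# Using run #4 from the table below: score 12.3 at a cost of $48.0
print(cost_adjusted(12.3, 48.0))        # discounted score with weight 0.5
print(cost_adjusted(12.3, 48.0, 0.0))   # weight 0 leaves the score unchanged
```

With `weight=0` the adjusted score equals the raw score, so the weight acts as a dial between pure quality and quality-per-cost ranking.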

Score over time

[Chart: score held flat at 12.3 across runs #1–#4, 2026-05-11 23:17:33 through 2026-05-12 02:00:11; y-axis spans 11.3–13.3.]

Cost over time

[Chart: cost held flat at $48.0 across runs #1–#4, 2026-05-11 23:17:33 through 2026-05-12 02:00:11; y-axis spans $47.0–$49.0.]
| Run | Fetched             | Score | Quality | Cost  | Intel | Code | Agent | MMMU (%) |
|-----|---------------------|-------|---------|-------|-------|------|-------|----------|
| #4  | 2026-05-12 02:00:11 | 12.3  | 12.3    | $48.0 | 14.5  | 13.9 | 8.4   | -        |
| #3  | 2026-05-12 01:00:09 | 12.3  | 12.3    | $48.0 | 14.5  | 13.9 | 8.4   | -        |
| #2  | 2026-05-12 00:00:09 | 12.3  | 12.3    | $48.0 | 14.5  | 13.9 | 8.4   | -        |
| #1  | 2026-05-11 23:17:33 | 12.3  | 12.3    | $48.0 | 14.5  | 13.9 | 8.4   | -        |
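The per-run Score is numerically consistent with the unweighted mean of the three reported sub-benchmarks (Intel, Code, Agent), with the absent MMMU value excluded. A minimal sketch, assuming that simple-average formula (the dashboard's real combination rule may differ):

```python
def combined_score(intel: float, code: float, agent: float) -> float:
    # Assumed formula: unweighted mean of the available sub-benchmarks,
    # rounded to one decimal place as shown in the Score column.
    return round((intel + code + agent) / 3, 1)

# All four runs report Intel 14.5, Code 13.9, Agent 8.4
print(combined_score(14.5, 13.9, 8.4))  # → 12.3, matching the Score column
```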