Llama 4 Maverick
Meta · released 2026-01-25
| Spec | Value |
|---|---|
| Quality (AA Index) | 80 |
| Input price | $0.50 /1M tokens |
| Output price | $1.50 /1M tokens |
| Context | 256K |
| Throughput | 220 tok/s |
| P50 latency | 0.5s |
Side-by-side pricing, capabilities, real-world cost across common scenarios, and our editorial pick.
Llama 4 Maverick — Meta · released 2026-01-25
Gemini 2.5 Pro — Google · released 2025-09-01
| Metric | Llama 4 Maverick | Gemini 2.5 Pro | Verdict |
|---|---|---|---|
| Input price (/1M tokens) | $0.50 | $1.25 | Llama 4 Maverick −60% |
| Output price (/1M tokens) | $1.50 | $10.00 | Llama 4 Maverick −85% |
| Context window (max input length) | 256K | 1M | Gemini 2.5 Pro +3.9× |
| AA Quality (AA Intelligence Index, 0–100) | 80 | 87 | Gemini 2.5 Pro +7pt |
| Arena Elo (LMArena human-pref Elo, 800–2000) | — | — | Tied |
| Throughput (tokens per second) | 220 | 140 | Llama 4 Maverick +57% |
| P50 latency (first token) | 0.5s | 0.8s | Llama 4 Maverick −38% |
| Vision (multimodal) | ✓ | ✓ | Tied |
| Function calling (tool use) | ✓ | ✓ | Tied |
| Reasoning mode (chain-of-thought) | — | ✓ | Gemini 2.5 Pro |
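The verdict deltas follow directly from the raw numbers: each percentage is Llama 4 Maverick's figure relative to Gemini 2.5 Pro's. A minimal sketch to reproduce them (the `pct` helper is hypothetical, for illustration only):

```shell
# Relative difference of a (Llama 4 Maverick) vs b (Gemini 2.5 Pro), in percent.
pct() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%+.0f%%\n", (a - b) / b * 100 }'; }

pct 0.50 1.25    # input price:  prints -60%
pct 1.50 10.00   # output price: prints -85%
pct 220 140      # throughput:   prints +57%
pct 0.5 0.8      # P50 latency:  prints -38%
awk 'BEGIN { printf "%.1fx\n", 1000 / 256 }'   # context ratio, 1M vs 256K: prints 3.9x
```

The context ratio assumes 1M = 1000K tokens, which matches the table's +3.9× verdict.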
| Scenario | Llama 4 Maverick | Gemini 2.5 Pro |
|---|---|---|
| customer support (1000K req · 600/180 tok) | $570 | $2.5K |
| chat with docs (300K req · 4000/300 tok) | $735 | $2.4K |
| code generation (500K req · 2000/500 tok) | $875 | $3.8K |
| voice assistant (600K req · 800/200 tok) | $420 | $1.8K |
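Each scenario cost is requests × (input tokens × input price + output tokens × output price) ÷ 1M. A quick sketch to verify the table (the `cost` helper is hypothetical, not part of any API):

```shell
# cost REQUESTS IN_TOK OUT_TOK IN_PRICE OUT_PRICE
# Prices are per 1M tokens, as in the pricing table above.
cost() {
  awk -v req="$1" -v in_tok="$2" -v out_tok="$3" -v in_p="$4" -v out_p="$5" \
    'BEGIN { printf "$%.0f\n", req * (in_tok * in_p + out_tok * out_p) / 1e6 }'
}

cost 1000000 600 180 0.50 1.50   # customer support, Llama 4 Maverick: prints $570
cost 1000000 600 180 1.25 10.00  # same workload, Gemini 2.5 Pro: prints $2550 (≈ $2.5K)
```

Gemini 2.5 Pro's 10× output price is what drives the gap: output tokens dominate the bill even in input-heavy scenarios.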
Best open-weight alternative for self-hosted deployments. Quality lags the closed-source frontier, but its cost per token via inference providers is the lowest in the table.
Set Llama 4 Maverick as primary and Gemini 2.5 Pro as fallback. One key, one bill, and automatic failover when Llama 4 Maverick returns an error.
$ curl https://api.aipricly.com/v1/chat/completions \
-H "Authorization: Bearer $AIPC_KEY" \
-H "Content-Type: application/json" \
-d '{
"routing": {
"primary": "meta/llama-4-maverick",
"fallback": ["google/gemini-2-5-pro"]
},
"messages": [{"role": "user", "content": "..."}]
}'