Llama 4 Maverick
Meta · released 2026-01-25
Quality (AA Index): 80
Input price: $0.50 /1M tokens
Output price: $1.50 /1M tokens
Context: 256K
Throughput: 220 tok/s
P50 latency: 0.5s
Side-by-side pricing, capabilities, real-world cost across common scenarios, and our editorial pick.
Llama 4 Maverick: Meta · released 2026-01-25
Claude Haiku 4.5: Anthropic · released 2026-04-08
| Metric | Llama 4 Maverick | Claude Haiku 4.5 | Verdict |
|---|---|---|---|
| Input price (/1M tokens) | $0.50 | $1.00 | Llama 4 Maverick −50% |
| Output price (/1M tokens) | $1.50 | $5.00 | Llama 4 Maverick −70% |
| Context window (max input length) | 256K | 200K | Llama 4 Maverick +1.3× |
| AA Intelligence Index (0–100) | 80 | 79 | Llama 4 Maverick +1 pt |
| Arena Elo (LMArena human-preference Elo, 800–2000) | — | — | Tied |
| Throughput (tokens per second) | 220 | 250 | Claude Haiku 4.5 +14% |
| P50 latency (time to first token) | 0.5s | 0.4s | Claude Haiku 4.5 −20% |
| Vision (multimodal) | — | — | Tied |
| Function calling (tool use) | — | — | Tied |
| Reasoning mode (chain-of-thought) | — | — | — |
| Scenario | Llama 4 Maverick | Claude Haiku 4.5 |
|---|---|---|
| Customer support (1000K req · 600/180 tok in/out) | $570 | $1.5K |
| Chat with docs (300K req · 4000/300 tok in/out) | $735 | $1.6K |
| Code generation (500K req · 2000/500 tok in/out) | $875 | $2.3K |
| Voice assistant (600K req · 800/200 tok in/out) | $420 | $1.1K |
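The scenario totals above follow directly from the per-token prices: cost = requests × (input tokens × input price + output tokens × output price) / 1M. A minimal sketch of that arithmetic, with the table's listed prices hard-coded (the model keys are illustrative, not an API):

```python
# Prices in USD per 1M tokens, from the pricing table above.
PRICES = {
    "llama-4-maverick": {"in": 0.50, "out": 1.50},
    "claude-haiku-4-5": {"in": 1.00, "out": 5.00},
}

def scenario_cost(model: str, requests: int, in_tok: int, out_tok: int) -> float:
    """Total USD cost for a workload: requests * per-request token cost."""
    p = PRICES[model]
    return requests * (in_tok * p["in"] + out_tok * p["out"]) / 1_000_000

# Customer support: 1000K requests at 600 input / 180 output tokens each
print(scenario_cost("llama-4-maverick", 1_000_000, 600, 180))  # → 570.0
print(scenario_cost("claude-haiku-4-5", 1_000_000, 600, 180))  # → 1500.0
```

Swapping in the other scenarios' request counts and token sizes reproduces every figure in the table.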
Best open-weight alternative for self-hosted deployments. Quality lags the closed-source frontier, but cost per token via inference providers is the lowest in the table.
Set Llama 4 Maverick as primary and Claude Haiku 4.5 as fallback: one key, one bill, and automatic failover when Llama 4 Maverick returns an error.
$ curl https://api.aipricly.com/v1/chat/completions \
  -H "Authorization: Bearer $AIPC_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "routing": {
      "primary": "meta/llama-4-maverick",
      "fallback": ["anthropic/claude-haiku-4-5"]
    },
    "messages": [{"role": "user", "content": "..."}]
  }'
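The same failover request can be built from application code. A standard-library sketch, assuming the endpoint, header, and `routing` payload shape shown in the curl example (the `AIPC_KEY` environment variable and placeholder message are carried over from it):

```python
import json
import os
import urllib.request

# Payload mirrors the curl example: primary model plus an ordered fallback list.
payload = {
    "routing": {
        "primary": "meta/llama-4-maverick",
        "fallback": ["anthropic/claude-haiku-4-5"],
    },
    "messages": [{"role": "user", "content": "..."}],
}

# POST request; urllib defaults to POST whenever a body is attached.
req = urllib.request.Request(
    "https://api.aipricly.com/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('AIPC_KEY', '')}",
        "Content-Type": "application/json",
    },
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Because failover happens server-side, the client code stays identical whichever model ends up serving the response.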