Llama 4 Maverick
Meta · released 2026-01-25
Quality (AA Index): 80
Input price: $0.50 /1M tokens
Output price: $1.50 /1M tokens
Context: 256K
Throughput: 220 tok/s
P50 latency: 0.5s
Side-by-side pricing, capabilities, real-world cost across common scenarios, and our editorial pick.
Llama 4 Maverick — Meta · released 2026-01-25
DeepSeek V3.5 — DeepSeek · released 2025-12-01
| Metric | Llama 4 Maverick | DeepSeek V3.5 | Verdict |
|---|---|---|---|
| Input price ($/1M tokens) | $0.50 | $0.14 | DeepSeek V3.5 −72% |
| Output price ($/1M tokens) | $1.50 | $0.28 | DeepSeek V3.5 −81% |
| Context window (max input) | 256K | 128K | Llama 4 Maverick +2.0× |
| AA Intelligence Index (0–100) | 80 | 81 | DeepSeek V3.5 +1 pt |
| Arena Elo (LMArena human preference, 800–2000) | — | — | Tied |
| Throughput (tokens/s) | 220 | 95 | Llama 4 Maverick +132% |
| P50 latency (to first token) | 0.5s | 1.5s | Llama 4 Maverick −67% |
| Vision (multimodal) | ✓ | — | Llama 4 Maverick |
| Function calling (tool use) | ✓ | ✓ | Tied |
| Reasoning mode (chain-of-thought) | — | — | Tied |
| Scenario (requests · input/output tokens per request) | Llama 4 Maverick | DeepSeek V3.5 |
|---|---|---|
| Customer support (1,000K req · 600/180 tok) | $570 | $134 |
| Chat with docs (300K req · 4,000/300 tok) | $735 | $193 |
| Code generation (500K req · 2,000/500 tok) | $875 | $210 |
| Voice assistant (600K req · 800/200 tok) | $420 | $101 |
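The scenario figures follow directly from the listed per-token prices; a minimal sketch of the arithmetic (the `scenario_cost` helper is ours, not part of any API):

```python
def scenario_cost(requests, in_tok, out_tok, in_price, out_price):
    """Total dollar cost for a scenario.

    in_price / out_price are $ per 1M tokens; in_tok / out_tok are
    tokens per request.
    """
    return requests * (in_tok * in_price + out_tok * out_price) / 1_000_000

# Customer support: 1,000,000 requests · 600 input / 180 output tokens
llama    = scenario_cost(1_000_000, 600, 180, 0.50, 1.50)  # → 570.0
deepseek = scenario_cost(1_000_000, 600, 180, 0.14, 0.28)  # ≈ 134.4 (table rounds to $134)
```

The same formula reproduces every row of the table, so you can plug in your own traffic profile before committing to either model.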
DeepSeek V3.5 delivers GPT-4o-class quality at roughly 90% lower cost. The tradeoffs are region availability (Chinese provider) and lower output throughput than Western frontier tiers.
Set DeepSeek V3.5 as the primary and Llama 4 Maverick as the fallback: one key, one bill, and automatic failover whenever DeepSeek V3.5 errors.
```shell
$ curl https://api.aipricly.com/v1/chat/completions \
    -H "Authorization: Bearer $AIPC_KEY" \
    -H "Content-Type: application/json" \
    -d '{
      "routing": {
        "primary": "deepseek/deepseek-v3-5",
        "fallback": ["meta/llama-4-maverick"]
      },
      "messages": [{"role": "user", "content": "..."}]
    }'
```
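The failover itself happens server-side, but its logic amounts to trying each model in order until one succeeds. A client-side sketch under that assumption (the `flaky_call` stand-in and its error are hypothetical, for illustration only):

```python
def route_with_fallback(models, call_model):
    """Try each model in order (primary first, then fallbacks).

    Returns (model_used, response) from the first call that succeeds;
    re-raises if every model fails.
    """
    last_error = None
    for model in models:
        try:
            return model, call_model(model)
        except Exception as err:  # a real router would match specific error codes
            last_error = err
    raise RuntimeError("all models failed") from last_error

# Hypothetical usage: the primary errors, so the fallback serves the request.
def flaky_call(model):
    if model == "deepseek/deepseek-v3-5":
        raise TimeoutError("upstream 503")
    return f"response from {model}"

used, reply = route_with_fallback(
    ["deepseek/deepseek-v3-5", "meta/llama-4-maverick"], flaky_call
)
# used == "meta/llama-4-maverick"
```

Because the fallback list is ordered, adding a third model is just another entry in the list, mirroring the `fallback` array in the request above.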