AIpricly

Meta · released 2026-01-25 · text model

Llama 4 Maverick

AA Index: 80 (editorial estimate, 2026-05)


Input price: $0.50 / 1M tokens
Output price: $1.50 / 1M tokens
Context: 256K tokens
P50 latency: 0.5 s to first token (vendor-reported)
Throughput: 220 tok/s (vendor-reported)

The best open-weight alternative for self-hosted or sovereign deployments. Closed-source frontier models still lead on raw quality, but cost-per-token through inference providers such as Together or Fireworks is the lowest in the table, and there is no vendor lock-in. It is worth the integration work when latency, sovereignty, or fine-tuning flexibility matters more than absolute benchmark performance.

Full pricing details

All prices in USD per 1M tokens · all hidden costs surfaced
Pricing tier | Input | Output | Notes
Standard (pay-as-you-go API rate card) | $0.50 | $1.50 | Default if no headers set
With prompt caching (cached read / cache write) | $0.05 (−90%) | $1.50 | Cache writes priced at standard input ($0.50); cached reads 90% off. Requires the cache_control header.
Batch API (24h SLA, async) | $0.25 (−50%) | $0.75 (−50%) | Send up to 50K requests in one batch; results within 24h
Image input (vision) | $0.50 + image fee | $1.50 | $0.50 per 1024×1024 image at high detail; low detail ~$0.10
Reasoning tokens (hidden chain-of-thought) | $0.50 | $1.50 × ~3-8 | With reasoning.effort: high, output tokens include hidden reasoning; real cost can be 3-8× the headline output rate.
Structured output (JSON mode + schema) | $0.50 | $1.50 | No extra cost; first call may be ~2× slower due to schema compilation

Capabilities

Vision (image understanding) · Function calling · Structured output (JSON schema) · Reasoning mode · Multilingual (32+ langs) · Batch API · Audio I/O · Fine-tuning

Monthly cost in common scenarios

Default usage assumptions
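As a worked example of how a monthly figure falls out of the standard rate card, here is a minimal sketch. The usage profile (requests per day, tokens per request) is an assumption made up for illustration, not this page's default scenario:

```python
# Standard rate card from the table above: $0.50 in / $1.50 out per 1M tokens.
# The usage profile below is an illustrative assumption, not a page default.
REQS_PER_DAY = 10_000
INPUT_TOK_PER_REQ = 1_500    # assumed average prompt size
OUTPUT_TOK_PER_REQ = 400     # assumed average completion size
DAYS = 30

monthly_input_m = REQS_PER_DAY * DAYS * INPUT_TOK_PER_REQ / 1_000_000   # M tokens
monthly_output_m = REQS_PER_DAY * DAYS * OUTPUT_TOK_PER_REQ / 1_000_000
cost = monthly_input_m * 0.50 + monthly_output_m * 1.50

print(f"${cost:,.2f}/month")  # → $405.00/month
```

At this volume, input tokens (450M at $0.50) and output tokens (120M at $1.50) contribute comparably, so both discounts in the table are worth modeling.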

