Check.AI

Model reference · Synced 2025-04-29

Llama 4 Maverick 17B 128E Instruct FP8

Llama 4 Maverick 17B 128E Instruct FP8 is a model in Meta's Llama family. It has a 128K context window and supports reasoning, tool calling, and multimodal vision, with open weights. It is available from 7 providers; the cheapest listing is $0 input / $0 output per 1M tokens.

Quick facts

Context window: 128K tokens
Capabilities: reasoning, tool calling, multimodal vision, open weights
Providers: 7
Cheapest listing: $0 input / $0 output per 1M tokens

Provider pricing

Same model, different providers, different prices. Cheapest first.

Provider                    Input / 1M   Output / 1M   Context   Listed
Llama                       $0           $0            128K      2025-04-05
Vercel AI Gateway           $0           $0            128K      2025-04-05
GitHub Models               $0           $0            128K      2025-01-31
Abacus                      $0.14        $0.59         1M        2025-04-05
Synthetic                   $0.22        $0.88         524K      2025-04-05
Azure Cognitive Services    $0.25        $1            128K      2025-04-05
Azure                       $0.25        $1            128K      2025-04-05

Prices synced daily from models.dev + provider docs.
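Per-request cost follows directly from the per-1M-token rates in the table. A minimal sketch (prices are the Abacus listing from the table above; the token counts are made-up example values):

```python
def request_cost(tokens_in: int, tokens_out: int,
                 price_in_per_1m: float, price_out_per_1m: float) -> float:
    """Cost in USD for one request at per-1M-token rates."""
    return (tokens_in / 1_000_000) * price_in_per_1m \
         + (tokens_out / 1_000_000) * price_out_per_1m

# Abacus rates from the table: $0.14 input / $0.59 output per 1M tokens.
# 20K input tokens and 2K output tokens are illustrative numbers only.
cost = request_cost(tokens_in=20_000, tokens_out=2_000,
                    price_in_per_1m=0.14, price_out_per_1m=0.59)
print(f"${cost:.6f}")
```

The same function works for any row in the table; only the two rates change.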

How to use this model

If you're picking Llama 4 Maverick 17B 128E Instruct FP8 for a project, the three things that matter most:

Price spread: listings range from $0 to $0.25 input / $1 output per 1M tokens, so the provider you choose changes your cost more than the model does.
Context window: most listings cap at 128K tokens, but Abacus lists 1M and Synthetic 524K, so long-context workloads narrow the field.
Capabilities: reasoning, tool calling, and multimodal vision, with open weights, so self-hosting is also an option.
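Combining the first two considerations, provider selection reduces to "cheapest listing whose context window covers the request." A sketch using the listings from the pricing table above (the threshold and token counts are example values):

```python
# Provider listings copied from the pricing table above:
# (name, input $/1M, output $/1M, context window in tokens)
LISTINGS = [
    ("Llama",                    0.00, 0.00,   128_000),
    ("Vercel AI Gateway",        0.00, 0.00,   128_000),
    ("GitHub Models",            0.00, 0.00,   128_000),
    ("Abacus",                   0.14, 0.59, 1_000_000),
    ("Synthetic",                0.22, 0.88,   524_000),
    ("Azure Cognitive Services", 0.25, 1.00,   128_000),
    ("Azure",                    0.25, 1.00,   128_000),
]

def cheapest_provider(min_context: int, tokens_in: int, tokens_out: int):
    """Cheapest listing whose context window covers the request size.

    Returns (provider name, estimated cost in USD), or None if no
    listing's context window is large enough."""
    candidates = [
        (name, (tokens_in * p_in + tokens_out * p_out) / 1_000_000)
        for name, p_in, p_out, ctx in LISTINGS
        if ctx >= min_context
    ]
    return min(candidates, key=lambda c: c[1]) if candidates else None

# A 400K-token prompt rules out every 128K listing:
print(cheapest_provider(min_context=400_000, tokens_in=400_000, tokens_out=4_000))
```

Note that the $0 listings win any request that fits in 128K tokens; the paid listings only come into play for longer contexts.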

FAQ

How much does Llama 4 Maverick 17B 128E Instruct FP8 cost? $0 input / $0 output per 1M tokens at the cheapest listing. See the table above for other providers.

What is the context window? 128K tokens.

Which providers offer it? Llama, Vercel AI Gateway, GitHub Models, Abacus, Synthetic, Azure Cognitive Services, and Azure (cheapest first; see the table above).

Where do these numbers come from? models.dev + provider documentation, synced daily.