Model reference · Synced 2025-04-29
Llama 3.2 90B Vision Instruct
Llama 3.2 90B Vision Instruct is an open-weights model from Meta with a 128K-token context window. Capabilities: reasoning, tool calling, multimodal vision, audio, open weights. Available from 5 providers; the cheapest listing is $0 input / $0 output per 1M tokens.
Quick facts
- Cheapest input: $0 per 1M tokens (GitHub Models)
- Cheapest output: $0 per 1M tokens (GitHub Models)
- Context window: 128K tokens
- Max output: 8K tokens
- Release date: 2024-09-25
- Knowledge cutoff: 2023-12
- Capabilities: reasoning, tool calling, multimodal vision, audio, open weights
- Provider count: 5
Provider pricing
Same model, different providers, different prices. Cheapest first.
| Provider | Input / 1M | Output / 1M | Context | Listed |
|---|---|---|---|---|
| GitHub Models | $0 | $0 | 128K | 2024-09-25 |
| IO.NET | $0.35 | $0.40 | 16K | 2024-09-25 |
| Vercel AI Gateway | $0.72 | $0.72 | 128K | 2024-09-25 |
| Azure Cognitive Services | $2.04 | $2.04 | 128K | 2024-09-25 |
| Azure | $2.04 | $2.04 | 128K | 2024-09-25 |
Prices synced daily from models.dev + provider docs.
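Per-1M-token pricing converts to a per-request cost with simple arithmetic. A minimal sketch using the IO.NET row above; the token counts are made-up example inputs, not part of the listing:

```python
def request_cost(input_tokens, output_tokens, in_price_per_1m, out_price_per_1m):
    """Cost in dollars for one request, given per-1M-token prices."""
    return (input_tokens / 1_000_000) * in_price_per_1m \
         + (output_tokens / 1_000_000) * out_price_per_1m

# IO.NET listing from the table: $0.35 input / $0.40 output per 1M tokens.
cost = request_cost(100_000, 10_000, 0.35, 0.40)
print(f"${cost:.3f}")  # → $0.039
```

At 1,000 such requests a day, that IO.NET listing works out to about $39/day, while the Azure listings above would run roughly 5–6x that for the same traffic.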
How to use this model
If you're picking Llama 3.2 90B Vision Instruct for a project, the three things that matter most:
- Compare it side-by-side with one or two alternatives in the live comparison tool. Pricing differences matter more than benchmarks at scale.
- Pick the cheapest provider that meets your latency / SLA needs. There is a big price spread across providers for the same weights.
- Re-evaluate every 3 months. Frontier prices drop fast; a model that's cheapest today may not be in a quarter.
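The first two points above amount to a filter-then-sort over the provider table. A hedged sketch, hard-coding the rows from this page; the 100K-token context floor and the tiebreak on input price are illustrative choices, not part of the listings:

```python
# Rows from the provider table: (name, input $/1M, output $/1M, context in tokens)
providers = [
    ("GitHub Models",            0.00, 0.00, 128_000),
    ("IO.NET",                   0.35, 0.40,  16_000),
    ("Vercel AI Gateway",        0.72, 0.72, 128_000),
    ("Azure Cognitive Services", 2.04, 2.04, 128_000),
    ("Azure",                    2.04, 2.04, 128_000),
]

def cheapest(rows, min_context):
    """Cheapest provider (by input price, then output price) meeting a context floor."""
    eligible = [r for r in rows if r[3] >= min_context]
    return min(eligible, key=lambda r: (r[1], r[2]))

print(cheapest(providers, 100_000)[0])  # → GitHub Models
```

Note how the context filter changes the answer: requiring 100K tokens excludes IO.NET's 16K listing entirely, so "cheapest" is always relative to your constraints.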
FAQ
How much does Llama 3.2 90B Vision Instruct cost? $0 input / $0 output per 1M tokens at the cheapest listing. See the table above for other providers.
What is the context window? 128K tokens.
Which providers offer it? GitHub Models, IO.NET, Vercel AI Gateway, Azure Cognitive Services, Azure.
Where do these numbers come from? models.dev + provider documentation, synced daily.