Model reference · Synced 2025-04-29
Gemini 3.1 Pro Preview
Gemini 3.1 Pro Preview is an AI model from Google. Context window up to 1M tokens (128K on the cheapest listing). Capabilities: reasoning, tool calling, multimodal vision, audio. Available from 15 providers. Cheapest listing: $0 input / $0 output per 1M tokens (GitHub Copilot).
Quick facts
- Cheapest input: $0 per 1M tokens (GitHub Copilot)
- Cheapest output: $0 per 1M tokens (GitHub Copilot)
- Context window: 128K tokens on the cheapest listing; up to 1M on most providers
- Max output: 64K tokens
- Release date: 2026-02-19
- Knowledge cutoff: 2025-01
- Capabilities: reasoning, tool calling, multimodal vision, audio
- Provider count: 15
Provider pricing
Same model, different providers, different prices. Cheapest first.
| Provider | Input / 1M | Output / 1M | Context | Listed |
|---|---|---|---|---|
| GitHub Copilot | $0 | $0 | 128K | 2026-02-19 |
| Abacus | $2 | $12 | 1M | 2026-02-19 |
| Perplexity Agent | $2 | $12 | 1M | 2026-02-19 |
| OpenRouter | $2 | $12 | 1M | 2026-02-19 |
| ZenMux | $2 | $12 | 1M | 2026-02-19 |
| Vivgrid | $2 | $12 | 1M | 2026-02-19 |
| OpenCode Zen | $2 | $12 | 1M | 2026-02-19 |
| Poe | $2 | $12 | 1M | 2026-02-19 |
| FrogBot | $2 | $12 | 1M | 2026-02-18 |
| AIHubMix | $2 | $12 | 1M | 2026-02-19 |
| Vercel AI Gateway | $2 | $12 | 1M | 2026-02-19 |
| LLM Gateway | $2 | $12 | 1M | 2026-02-19 |
| Vertex | $2 | $12 | 1M | 2026-02-19 |
| — | $2 | $12 | 1M | 2026-02-19 |
| Venice AI | $2.5 | $15 | 1M | 2026-02-19 |
Prices synced daily from models.dev + provider docs.
How to use this model
If you're picking Gemini 3.1 Pro Preview for a project, the three things that matter most:
- Compare it side-by-side with one or two alternatives in the live comparison tool. Pricing differences matter more than benchmarks at scale.
- Pick the cheapest provider that meets your latency / SLA needs. The table above shows a wide price spread across providers for the same weights.
- Re-evaluate every 3 months. Frontier prices drop fast; the model or provider that's cheapest today may not be cheapest in a quarter.
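The provider comparison above reduces to simple arithmetic: cost = tokens / 1M × rate, summed over input and output. A minimal sketch of picking the cheapest provider for a given workload, using a few rates copied from the table (the 50M/5M monthly token volumes are made-up example numbers):

```python
# $ per 1M tokens (input, output), taken from the pricing table above.
providers = {
    "GitHub Copilot": (0.0, 0.0),
    "Abacus": (2.0, 12.0),
    "Venice AI": (2.5, 15.0),
}

def cost(input_tokens: int, output_tokens: int, in_rate: float, out_rate: float) -> float:
    """Dollar cost of a workload at $/1M-token rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Hypothetical workload: 50M input tokens, 5M output tokens per month.
monthly = {
    name: cost(50_000_000, 5_000_000, in_rate, out_rate)
    for name, (in_rate, out_rate) in providers.items()
}
cheapest = min(monthly, key=monthly.get)
print(monthly)   # e.g. Abacus: 50 * $2 + 5 * $12 = $160/month
print(cheapest)
```

At this volume the spread is real money: $0 vs $160 vs $200 a month for the same weights, which is why provider choice tends to matter more than small benchmark deltas.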
FAQ
How much does Gemini 3.1 Pro Preview cost? $0 input / $0 output per 1M tokens at the cheapest listing. See the table above for other providers.
What is the context window? 128K tokens.
Which providers offer it? Abacus, Perplexity Agent, OpenRouter, ZenMux, Vivgrid, GitHub Copilot, OpenCode Zen, Poe, FrogBot, Venice AI, and others — see the full table above.
Where do these numbers come from? models.dev + provider documentation, synced daily. About the data.