Check.AI

Model reference · Synced 2025-04-29

gemini-3-flash-preview

gemini-3-flash-preview is a Gemini model from Google. Capabilities: reasoning, tool calling, and multimodal vision and audio input. Context window up to 1M tokens (128K on the GitHub Copilot listing). Available through 18 providers. Cheapest listing: $0 input / $0 output per 1M tokens.


Provider pricing

Same model, different providers, different prices. Cheapest first.

Provider | Input / 1M | Output / 1M | Context | Listed
GitHub Copilot $0 $0 128K 2025-12-17
QiHang $0.07 $0.43 1M 2025-12-17
Poe $0.4 $2.4 1M 2025-10-07
302.AI $0.5 $3 1M 2025-12-18
Abacus $0.5 $3 1M 2025-12-17
Perplexity Agent $0.5 $3 1M 2025-12-17
OpenRouter $0.5 $3 1M 2025-12-17
Jiekou.AI $0.5 $3 1M 2026-01
ZenMux $0.5 $3 1M 2025-12-17
OpenCode Zen $0.5 $3 1M 2025-12-17
FrogBot $0.5 $3 1M 2025-12-17
AIHubMix $0.5 $3 1M 2025-12-17
Requesty $0.5 $3 1M 2025-12-17
Vercel AI Gateway $0.5 $3 1M 2025-12-17
LLM Gateway $0.5 $3 1M 2025-12-17
Vertex $0.5 $3 1M 2025-12-17
Google $0.5 $3 1M 2025-12-17
Venice AI $0.7 $3.75 256K 2025-12-19

Prices synced daily from models.dev + provider docs.
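Per-1M-token pricing works out to a simple multiply-and-sum: a request's cost is each token count times its per-1M rate. A minimal sketch (the example rates are OpenRouter's $0.5 / $3 from the table above):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_per_1m: float, output_per_1m: float) -> float:
    """Estimate a request's cost in dollars from per-1M-token prices."""
    return (input_tokens * input_per_1m + output_tokens * output_per_1m) / 1_000_000

# 200K input tokens + 4K output tokens at $0.5 / $3 per 1M:
cost = estimate_cost(200_000, 4_000, 0.5, 3.0)
print(f"${cost:.3f}")  # → $0.112
```

Note that input tokens usually dominate: here the 200K-token prompt accounts for $0.10 of the $0.112 total, which is why providers price input and output separately.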

How to use this model

If you're picking gemini-3-flash-preview for a project, the three things that matter most:

1. Price spread. The same model ranges from $0 / $0 (GitHub Copilot) to $0.7 / $3.75 (Venice AI) per 1M tokens, so the provider choice affects your bill more than the model does.
2. Context window. Most listings offer 1M tokens, but GitHub Copilot caps at 128K and Venice AI at 256K; size your prompts for the provider you actually use.
3. Capabilities. Reasoning, tool calling, and vision/audio input are model-level features, but confirm in a provider's docs that its gateway exposes them before relying on one.

FAQ

How much does gemini-3-flash-preview cost? $0 input / $0 output per 1M tokens at the cheapest listing (GitHub Copilot); paid listings start at $0.07 input / $0.43 output (QiHang). See the table above for other providers.

What is the context window? Up to 1M tokens on most providers; GitHub Copilot lists 128K and Venice AI 256K.

Which providers offer it? 302.AI, Abacus, Perplexity Agent, OpenRouter, Jiekou.AI, ZenMux, GitHub Copilot, OpenCode Zen, Poe, FrogBot, and others — see the full table above.

Where do these numbers come from? models.dev and provider documentation, synced daily.