Model reference · Synced 2025-04-29
glm-for-coding
glm-for-coding is an AI model from 302.AI with a 200K-token context window. Capabilities: reasoning and tool calling. Available from 1 provider; cheapest listing: $0.086 input / $0.343 output per 1M tokens.
Quick facts
- Cheapest input: $0.086 per 1M tokens (302.AI)
- Cheapest output: $0.343 per 1M tokens
- Context window: 200K tokens
- Max output: 131K tokens
- Release date: 2025-09-30
- Capabilities: reasoning, tool calling
- Provider count: 1
Provider pricing
Same model, different providers, different prices. Cheapest first.
| Provider | Input / 1M | Output / 1M | Context | Listed |
|---|---|---|---|---|
| 302.AI | $0.086 | $0.343 | 200K | 2025-09-30 |
Prices synced daily from models.dev + provider docs.
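Per-1M-token prices translate directly into per-request cost. A minimal sketch, assuming the 302.AI listing above (the 4,000/1,000 token counts are an illustrative example, not part of the listing):

```python
# Estimated cost of one request to glm-for-coding at the 302.AI listing.
# Prices are per 1M tokens, taken from the table above.
INPUT_PRICE = 0.086   # USD per 1M input tokens
OUTPUT_PRICE = 0.343  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# Example: a 4,000-token prompt with a 1,000-token completion.
cost = request_cost(4_000, 1_000)
print(f"${cost:.6f}")  # 4000*0.086/1e6 + 1000*0.343/1e6 = $0.000687
```

At these prices, even a long-context request stays well under a cent; the arithmetic matters mostly when multiplied across millions of requests.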
How to use this model
If you're picking glm-for-coding for a project, the three things that matter most:
- Compare it side-by-side with one or two alternatives in the live comparison tool. Pricing differences matter more than benchmarks at scale.
- Pick the cheapest provider that meets your latency / SLA needs. The price spread across providers can be large for the same weights.
- Re-evaluate every 3 months. Frontier prices drop fast; the model that's cheapest today may not be the cheapest in a quarter.
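The side-by-side comparison above can be sketched as a monthly-spend calculation. A minimal sketch, where only the glm-for-coding prices come from the table; the alternative models and their prices are hypothetical placeholders, as are the monthly traffic figures:

```python
# Monthly spend comparison across candidate models.
# glm-for-coding prices are from the table above; the other
# entries are hypothetical placeholders for illustration only.
MODELS = {
    "glm-for-coding (302.AI)": {"input": 0.086, "output": 0.343},  # real listing
    "alt-model-a": {"input": 0.50, "output": 1.50},  # hypothetical
    "alt-model-b": {"input": 0.15, "output": 0.60},  # hypothetical
}

def monthly_spend(prices: dict, input_mtok: float, output_mtok: float) -> float:
    """USD per month, given monthly traffic in millions of tokens."""
    return input_mtok * prices["input"] + output_mtok * prices["output"]

# Assume 500M input / 100M output tokens per month (hypothetical workload).
for name, prices in sorted(MODELS.items(),
                           key=lambda kv: monthly_spend(kv[1], 500, 100)):
    print(f"{name}: ${monthly_spend(prices, 500, 100):,.2f}/month")
```

Run against a real workload estimate, a table like this usually shows the pricing gap between candidates swamping any benchmark delta, which is the point of the first bullet above.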
FAQ
How much does glm-for-coding cost? $0.086 input / $0.343 output per 1M tokens at the cheapest listing. See the table above for other providers.
What is the context window? 200K tokens.
Which providers offer it? 302.AI.
Where do these numbers come from? models.dev and provider documentation, synced daily. See the About the data page for details.