Context window, throughput and latency comparison across different AI models.
Model | Context window (tokens) | Throughput | Latency
---|---|---|---
Llama 4 Scout | 32,000,000 | 900 t/s | 0.33s
Llama 4 Maverick | 32,000,000 | 900 t/s | 0.44s
Gemini 3 Pro | 128,000 | 800 t/s | 0.72s
Qwen 3 [Beta] | n/a | n/a | n/a
Gemini 2.0 Pro | 1,000,000 | 800 t/s | 0.50s
Claude 3.7 Sonnet | 200,000 | 750 t/s | 0.55s
GPT-4.5 | 128,000 | 450 t/s | 1.20s
Claude 3.7 Sonnet [P] | 200,000 | 750 t/s | 0.55s
DeepSeek V3 | 128,000 | 350 t/s | 4.00s
OpenAI o1-mini | 200,000 | 250 t/s | 14.00s
OpenAI o1 | 128,000 | 200 t/s | 12.64s
OpenAI o1-mini-2 | 750,000 | n/a | n/a
DeepSeek V3 G324 | 128,000 | 350 t/s | 4.00s
Qwen o1 | 200,000 | 900 t/s | 20.00s
Gemini 2.0 Flash | 128,000 | 500 t/s | 0.32s
Llama 3.1 70b | 128,000 | 500 t/s | 0.72s
Nous Pro | 300,000 | 500 t/s | 0.64s
Claude 3.5 Haiku | 200,000 | 850 t/s | 0.35s
Llama 3.1 405b | 128,000 | 350 t/s | 0.72s
GPT-4o-mini | 128,000 | 450 t/s | 0.50s
GPT-4o | 128,000 | 450 t/s | 0.50s
Claude 3.5 Sonnet | 200,000 | 750 t/s | 1.20s
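
A rough way to read the throughput and latency columns together is to estimate end-to-end response time as latency plus output length divided by throughput. The sketch below illustrates this with a few figures from the table; the 500-token output length and the assumption that latency simply adds to streaming time are illustrative assumptions for the example, not benchmark results.

```python
# Minimal sketch: estimate end-to-end response time as latency + tokens / throughput.
# Throughput and latency figures are taken from the table above; the 500-token
# output length and the additive latency model are assumptions for illustration.

# (model, throughput in tokens/s, latency in s)
MODELS = [
    ("Llama 4 Scout", 900, 0.33),
    ("Gemini 2.0 Flash", 500, 0.32),
    ("Claude 3.5 Sonnet", 750, 1.20),
    ("OpenAI o1", 200, 12.64),
]

OUTPUT_TOKENS = 500  # assumed response length for the comparison


def estimated_time(throughput: float, latency: float, tokens: int) -> float:
    """Rough end-to-end time: wait out the latency, then stream at the quoted rate."""
    return latency + tokens / throughput


for name, tps, lat in sorted(MODELS, key=lambda m: estimated_time(m[1], m[2], OUTPUT_TOKENS)):
    total = estimated_time(tps, lat, OUTPUT_TOKENS)
    print(f"{name:20s} ~{total:.2f} s for {OUTPUT_TOKENS} tokens")
```

Under this estimate, Llama 4 Scout would deliver 500 tokens in roughly 0.33 + 500/900 ≈ 0.89 s, while OpenAI o1 would take about 12.64 + 500/200 ≈ 15.14 s, which is why both the throughput and the latency columns matter when comparing models.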