Gemini
gemini-3.1-pro-preview
The most powerful agentic and coding model, with the best multimodal understanding capabilities.
Type: Model / LLM
Capabilities: image, function_call
Input: starting from $2/1M tokens
Output: starting from $12/1M tokens

gemini-3-flash-preview
Google’s lightweight flagship model, designed for high-throughput, low-latency scenarios while balancing speed, intelligence, and cost.
Type: Model / LLM
Capabilities: image, function_call
Input: $0.5/1M tokens
Output: $3/1M tokens

gemini-3-pro-preview
The most powerful agentic and coding model, with the best multimodal understanding capabilities.
Type: Model / LLM
Capabilities: image, function_call
Input: starting from $2/1M tokens
Output: starting from $12/1M tokens

gemini-2.5-flash-preview-09-2025
Suited to large-scale, low-latency, high-volume tasks that require reasoning, as well as agentic application scenarios.
Type: Model / LLM
Capabilities: image, function_call
Input: $0.3/1M tokens
Output: $2.5/1M tokens

gemini-2.5-flash-lite
The lightweight model in the Gemini 2.5 family: the fastest and lowest cost, with multimodal processing capabilities.
Type: Model / LLM
Capabilities: image, function_call
Input: $0.1/1M tokens
Output: $0.4/1M tokens

gemini-2.5-flash
Suited to large-scale, low-latency, high-volume tasks that require reasoning, as well as agentic application scenarios.
Type: Model / LLM
Capabilities: image, function_call
Input: $0.3/1M tokens
Output: $2.5/1M tokens

gemini-2.5-pro
An advanced thinking model, capable of reasoning about complex problems in coding, mathematics, and STEM fields.
Type: Model / LLM
Capabilities: image, function_call
Input: starting from $1.25/1M tokens
Output: starting from $10/1M tokens

gemini-2.0-flash-lite
The fastest Gemini 2.0 model, with improved cost-efficiency and lower latency.
Type: Model / LLM
Capabilities: image
Input: $0.075/1M tokens
Output: $0.3/1M tokens

gemini-2.0-flash
Gemini’s second-generation workhorse model, with a 1 million token context window.
Type: Model / LLM
Capabilities: image, function_call
Input: $0.11/1M tokens
Output: $0.44/1M tokens
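All of the prices above are quoted per 1 million tokens, billed separately for input and output, so per-request cost is input_tokens × input_price / 1M + output_tokens × output_price / 1M. A minimal sketch in Python, using base-tier prices copied from this list (for the pro models the "starting from" rate is assumed; actual tiered pricing may differ):

```python
# Per-1M-token prices (input USD, output USD) copied from the list above.
# Only the flat-rate models are included; tiered "starting from" prices
# for the pro models would need the provider's tier rules.
PRICES_PER_1M = {
    "gemini-3-flash-preview": (0.5, 3.0),
    "gemini-2.5-flash": (0.3, 2.5),
    "gemini-2.5-flash-lite": (0.1, 0.4),
    "gemini-2.0-flash": (0.11, 0.44),
    "gemini-2.0-flash-lite": (0.075, 0.3),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request: tokens times price, scaled to 1M."""
    in_price, out_price = PRICES_PER_1M[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: 10k input tokens + 2k output tokens on gemini-2.5-flash
# 10_000 * 0.3 / 1e6 = $0.003 input, 2_000 * 2.5 / 1e6 = $0.005 output
cost = estimate_cost("gemini-2.5-flash", 10_000, 2_000)  # → 0.008
```

Note that output is several times more expensive than input across every model here, so verbose generations dominate cost in long-output workloads.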