
MiniMax-M2.7-highspeed

The ultra-fast version of MiniMax-M2.7: same performance, but faster (output speed around 100 tps, compared with 60 tps for the standard version).
Model
LLM
Model capabilities: thinking, function_call
Input:
$0.6/1M tokens
Output:
$4.8/1M tokens
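The claimed speedup translates directly into wall-clock generation time; a quick arithmetic sketch, assuming the listed output rates of 100 tps (highspeed) and 60 tps (standard) are sustained for the whole response:

```python
# Time to stream a response at the listed output speeds (tokens per second).
def generation_seconds(tokens: int, tps: float) -> float:
    """Seconds to emit `tokens` output tokens at a sustained rate of `tps`."""
    return tokens / tps

# A 3,000-token response: 50 s at 60 tps vs. 30 s at 100 tps.
standard = generation_seconds(3000, 60)
highspeed = generation_seconds(3000, 100)
print(f"standard: {standard:.0f}s, highspeed: {highspeed:.0f}s")
```

Real throughput varies with load and prompt size, so treat these as rough lower bounds on latency, not guarantees.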

MiniMax-M2.5-highspeed

The ultra-fast version of MiniMax-M2.5: same performance, but faster and more agile (output speed approximately 100 tps, compared with 60 tps for M2.5).
Model
LLM
Model capabilities: thinking, function_call
Input:
$0.6/1M tokens
Output:
$4.8/1M tokens

MiniMax-M2.5

MiniMax’s text-generation model achieves outstanding performance in complex task scenarios such as programming, tool invocation, search, and office work, with extremely low costs and high inference efficiency.
Model
LLM
Model capabilities: thinking, function_call
Input:
$0.3/1M tokens
Output:
$1.2/1M tokens

M2-Her

MiniMax’s flagship language model, dedicated to role-play.
Model
LLM
Model capabilities: thinking, function_call
Input:
$0.3/1M tokens
Output:
$1.2/1M tokens

MiniMax-M2.1

Significantly improved multilingual programming, designed for complex real-world tasks.
Model
LLM
Model capabilities: thinking, function_call
Input:
Starting from $0.3/1M tokens
Output:
Starting from $1.2/1M tokens

MiniMax-M2

Specifically designed for efficient coding and agent workflows.
Model
LLM
Model capabilities: thinking, function_call
Input:
$0.33/1M tokens
Output:
$1.32/1M tokens

MiniMax-M1

Open-source large-scale hybrid-architecture reasoning model, featuring an 80K-token chain of thought over 1M-token inputs.
Model
LLM
Model capability: function_call
Input:
$0.132/1M tokens
Output:
$1.254/1M tokens

MiniMax-Text-01

Adopts a hybrid architecture integrating Lightning Attention, Softmax Attention, and Mixture-of-Experts (MoE).
Model
LLM
Model capabilities: image, function_call
Input:
$0.154/1M tokens
Output:
$1.232/1M tokens
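The per-1M-token rates above turn into a request cost with simple arithmetic; a minimal sketch, where the price table is transcribed from this page (verify against the live pricing, since rates change) and the token counts in the example are illustrative:

```python
# USD prices per 1M tokens, transcribed from the listing above.
PRICES_PER_1M = {
    # model: (input $/1M tokens, output $/1M tokens)
    "MiniMax-M2.5": (0.3, 1.2),
    "MiniMax-M2": (0.33, 1.32),
    "MiniMax-M1": (0.132, 1.254),
    "MiniMax-Text-01": (0.154, 1.232),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the listed rates."""
    in_price, out_price = PRICES_PER_1M[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: 10K input tokens + 2K output tokens on MiniMax-M2.5:
# 10_000 * 0.3/1M + 2_000 * 1.2/1M = 0.003 + 0.0024 = 0.0054 USD
cost = request_cost("MiniMax-M2.5", 10_000, 2_000)
```

Note that output tokens dominate the bill at these rates (4x the input price on M2.5), so long responses and verbose thinking traces are the main cost driver.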