Mistral AI
LLM · Free tier · Mistral Large, Codestral — European frontier models
Operational
All systems responding normally
Last checked 30/04/2026, 9:16:37 pm
253ms response
Uptime History (2026-04-26 – Today): 100.00% uptime
| Uptime | Avg Latency | P95 Latency | Fastest | Checks |
|---|---|---|---|---|
| 100.00% | 299ms | 380ms | 151ms | 150 |
Response Time (last 60 checks): 151ms min · 299ms avg · 471ms max
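The summary stats above (min, avg, P95) can be recomputed from raw per-check latencies. A minimal sketch, assuming the nearest-rank method for P95 (the monitor's exact percentile method is not stated); the sample values below are illustrative, not the monitor's real data:

```python
import math

def latency_stats(latencies_ms):
    """Summarize per-check latencies (ms): min, avg, nearest-rank P95, max."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))  # nearest-rank P95, 1-indexed
    return {
        "min": ordered[0],
        "avg": sum(ordered) / len(ordered),
        "p95": ordered[rank - 1],
        "max": ordered[-1],
    }

# Hypothetical sample of 15 check latencies:
stats = latency_stats([151, 168, 172, 176, 180, 182, 188, 191, 229,
                       232, 245, 253, 266, 277, 311])
```

With only 15 samples the nearest-rank P95 lands on the largest value; at 150 checks, as in the table above, it sits at the 143rd-ranked latency.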
💰 Pricing
mistral-large-2: $2 input / $6 output per 1M tokens
mistral-small: $0.10 input / $0.30 output per 1M tokens
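Per-1M-token prices translate to request cost as `(tokens / 1_000_000) * rate`, summed over input and output. A quick sketch using the prices listed above (the function and table names are illustrative):

```python
# Prices from the table above, USD per 1M tokens: (input, output).
PRICES = {
    "mistral-large-2": (2.00, 6.00),
    "mistral-small": (0.10, 0.30),
}

def request_cost(model, input_tokens, output_tokens):
    """Estimate USD cost of one request from token counts and per-1M rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# 1,000 prompt tokens + 500 completion tokens on mistral-large-2:
cost = request_cost("mistral-large-2", 1_000, 500)  # $0.002 in + $0.003 out = $0.005
```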
⚡ Rate Limits
free: RPM 1 · TPM 500,000. Experimental access only; very strict.
premier: RPM 60. Paid plan; higher limits on request.
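At 1 request per minute on the free tier, a client has to pace itself rather than retry on 429s. A minimal sketch of the wait-time arithmetic; the function is illustrative and not part of any Mistral SDK:

```python
def seconds_until_allowed(last_request_at, now, rpm):
    """Seconds to wait before the next request under a requests-per-minute cap."""
    min_interval = 60.0 / rpm          # e.g. 60s between calls at RPM 1
    elapsed = now - last_request_at
    return max(0.0, min_interval - elapsed)

# Free tier (RPM 1): 10s after the last call, wait another 50s.
wait = seconds_until_allowed(last_request_at=100.0, now=110.0, rpm=1)
```

On the premier tier (RPM 60) the same arithmetic gives a 1-second minimum interval between calls.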
🤖 Models (3)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Mistral Large 2 | llm | 131k | — | ✅ | ✅ |
| Codestral (code specialist; best context for code tasks) | code | 256k | — | ✅ | ✅ |
| Mistral Small ($0.10/$0.30 per 1M) | llm | 33k | — | ✅ | ✅ |
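All three models take OpenAI-style chat-completions JSON. A sketch of building such a request body; the endpoint URL and the exact model ID strings are assumptions to verify against Mistral's official API docs:

```python
import json

API_URL = "https://api.mistral.ai/v1/chat/completions"  # assumed endpoint path

def build_chat_request(model, prompt, json_mode=False):
    """Build the JSON body for a single-turn chat request (no network call here)."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if json_mode:  # the table above lists JSON-output support for all three models
        body["response_format"] = {"type": "json_object"}
    return body

payload = build_chat_request("mistral-large-2", "Summarize this changelog.", json_mode=True)
body_text = json.dumps(payload)
```

A real call would POST `body_text` to `API_URL` with an `Authorization: Bearer <API key>` header; mind the free-tier RPM 1 limit above before calling in a loop.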
Recent Checks (showing last 15)

| Status | Latency | Time |
|---|---|---|
| Operational | 253ms | 30 Apr, 09:16 pm |
| Operational | 172ms | 30 Apr, 08:37 pm |
| Operational | 182ms | 30 Apr, 07:51 pm |
| Operational | 245ms | 30 Apr, 06:59 pm |
| Operational | 188ms | 30 Apr, 06:04 pm |
| Operational | 151ms | 30 Apr, 05:12 pm |
| Operational | 168ms | 30 Apr, 04:11 pm |
| Operational | 229ms | 30 Apr, 03:01 pm |
| Operational | 277ms | 30 Apr, 01:46 pm |
| Operational | 191ms | 30 Apr, 12:20 pm |
| Operational | 176ms | 30 Apr, 11:23 am |
| Operational | 266ms | 30 Apr, 10:45 am |
| Operational | 180ms | 30 Apr, 09:54 am |
| Operational | 311ms | 30 Apr, 09:29 am |
| Operational | 232ms | 30 Apr, 09:01 am |
Other LLM Providers

- OpenAI: GPT-4o, o3, o1 — the benchmark everyone chases
- Anthropic: Claude — safety-first reasoning and long context
- DeepSeek: DeepSeek V3, R1 — elite reasoning at a fraction of the cost
- xAI Grok: Grok-3 — real-time web access, strong reasoning
- AWS Bedrock: managed foundation models — Claude, Llama, Titan on AWS
- Azure OpenAI: OpenAI models on Microsoft Azure — enterprise SLA