Anthropic
LLM · Free tier available
Claude — safety-first reasoning and long context
Operational
All systems responding normally
Last checked 27/04/2026, 9:14:41 pm · 710 ms response
Uptime History (2026-04-24 to today): 95.33% uptime
| Uptime | Avg Latency | P95 Latency | Fastest | Checks |
|---|---|---|---|---|
| 95.33% | 581 ms | 735 ms | 367 ms | 150 |
Response Time (last 60 checks): 367 ms min · 581 ms avg · 1189 ms max
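The summary stats above (uptime %, average, P95) can be derived from raw check results. A minimal sketch, assuming each check is recorded as a `(latency_ms, ok)` pair; the sample data below is illustrative, not taken from this page:

```python
def summarize(checks):
    """checks: list of (latency_ms, ok) tuples from health probes."""
    latencies = sorted(ms for ms, _ in checks)
    uptime = 100.0 * sum(1 for _, ok in checks if ok) / len(checks)
    avg = sum(latencies) / len(latencies)
    # Nearest-rank P95: the sample below which ~95% of latencies fall.
    p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]
    return {
        "uptime_pct": round(uptime, 2),
        "avg_ms": round(avg),
        "p95_ms": p95,
        "min_ms": latencies[0],
        "max_ms": latencies[-1],
    }

# Hypothetical sample of five checks (latency in ms, success flag)
stats = summarize([(367, True), (581, True), (735, True), (1189, True), (710, False)])
```

Nearest-rank is only one of several common P95 definitions; interpolating percentile estimators give slightly different values on small samples.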
💰 Pricing
| Model | Input ($/1M tokens) | Output ($/1M tokens) |
|---|---|---|
| claude-sonnet-4-5 | $3.00 | $15.00 |
| claude-haiku-3-5 | $0.80 | $4.00 |
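A quick sketch of estimating request cost from these per-million-token prices. The prices come from this page; the token counts in the example are made up:

```python
# USD per 1M tokens: (input, output) — values from the pricing table above
PRICES = {
    "claude-sonnet-4-5": (3.00, 15.00),
    "claude-haiku-3-5": (0.80, 4.00),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Return the estimated USD cost of one request."""
    inp, out = PRICES[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

# 10k input + 2k output tokens on Sonnet 4.5
cost = estimate_cost("claude-sonnet-4-5", 10_000, 2_000)  # → 0.06 USD
```

Actual billed cost also depends on features such as prompt caching, which this sketch ignores.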
⚡ Rate Limits
| Tier | RPM | TPM | Concurrent | Qualification |
|---|---|---|---|---|
| free | 5 | 25,000 | 5 | Default |
| tier1 | 50 | 50,000 | 5 | After $5 spend |
| tier4 | 4,000 | 400,000 | n/a | After $1,000 spend |
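To stay under an RPM cap like those above, a client can throttle itself with a sliding 60-second window. A minimal single-threaded sketch; the tier values come from this page, everything else (class name, API) is an assumption:

```python
import time
from collections import deque

class RpmLimiter:
    """Blocks until a request slot is free within a sliding 60 s window."""

    def __init__(self, rpm):
        self.rpm = rpm
        self.sent = deque()  # timestamps of recent requests

    def acquire(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the 60 s window
        while self.sent and now - self.sent[0] >= 60:
            self.sent.popleft()
        if len(self.sent) >= self.rpm:
            # Window is full: wait until the oldest request ages out
            time.sleep(60 - (now - self.sent[0]))
            return self.acquire()
        self.sent.append(time.monotonic())

limiter = RpmLimiter(rpm=5)  # free tier: 5 requests/minute
```

A production client would also honor the server's rate-limit response headers and retry with backoff rather than relying on client-side accounting alone; TPM limits need token counting on top of this.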
🤖 Models (3)
| Model | Task | Context | Vision | Tools | JSON | Notes |
|---|---|---|---|---|---|---|
| Claude Sonnet 4.5 | llm | 200k | ✅ | ✅ | ✅ | |
| Claude Opus 4.5 | llm | 200k | ✅ | ✅ | ✅ | Most capable Anthropic model; $15/$75 per 1M tokens |
| Claude Haiku 3.5 | llm | 200k | ✅ | ✅ | ✅ | Fastest and cheapest Claude |
Recent Checks
Showing the last 15 checks.

| Status | Latency | Checked at |
|---|---|---|
| Operational | 710 ms | 27 Apr, 09:14 pm |
| Operational | 682 ms | 27 Apr, 08:25 pm |
| Operational | 577 ms | 27 Apr, 07:34 pm |
| Operational | 507 ms | 27 Apr, 06:40 pm |
| Operational | 589 ms | 27 Apr, 05:41 pm |
| Operational | 681 ms | 27 Apr, 04:40 pm |
| Operational | 520 ms | 27 Apr, 03:40 pm |
| Operational | 645 ms | 27 Apr, 02:31 pm |
| Operational | 549 ms | 27 Apr, 01:10 pm |
| Operational | 713 ms | 27 Apr, 11:50 am |
| Operational | 506 ms | 27 Apr, 11:06 am |
| Operational | 433 ms | 27 Apr, 10:03 am |
| Operational | 609 ms | 27 Apr, 09:46 am |
| Operational | 665 ms | 27 Apr, 09:20 am |
| Operational | 611 ms | 27 Apr, 08:56 am |
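Each row above amounts to timing one probe and recording its outcome. A sketch of how such a check row could be produced; the probe callable is a stand-in (a real monitor would hit the provider's API endpoint, which is not shown here):

```python
import time
from datetime import datetime, timezone

def run_check(probe):
    """Time one probe call and return a status record like the rows above."""
    start = time.monotonic()
    try:
        ok = bool(probe())
    except Exception:
        ok = False  # any exception counts as a failed check
    latency_ms = round((time.monotonic() - start) * 1000)
    return {
        "status": "Operational" if ok else "Down",
        "latency_ms": latency_ms,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }

# Stand-in probe that always succeeds
result = run_check(lambda: True)
```

`time.monotonic()` is used instead of `time.time()` so the measured latency is immune to wall-clock adjustments during the probe.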
Other LLM Providers
OpenAI
GPT-4o, o3, o1 — the benchmark everyone chases
DeepSeek
DeepSeek V3, R1 — elite reasoning at a fraction of the cost
Mistral AI
Mistral Large, Codestral — European frontier models
xAI Grok
Grok-3 — real-time web access, strong reasoning
AWS Bedrock
Managed foundation models — Claude, Llama, Titan on AWS
Azure OpenAI
OpenAI models on Microsoft Azure — enterprise SLA