Together AI
Inference · FREE TIER
Open-source model inference — Llama, Mixtral, FLUX
Operational
All systems responding normally
Last checked 30/04/2026, 7:02:47 am
742ms response
Uptime History: 100.00% uptime (2026-04-25 to today)
| Uptime | Avg Latency | P95 Latency | Fastest | Checks |
|---|---|---|---|---|
| 100.00% | 670ms | 836ms | 459ms | 150 |
Response Time (last 60 checks): 459ms min · 670ms avg · 1078ms max
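The P95 figure above can be reproduced from raw check latencies. A minimal sketch using the nearest-rank percentile method (the dashboard's exact method is unspecified), with the 15 latencies listed under Recent Checks as sample data:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value such that
    at least p% of samples are at or below it."""
    xs = sorted(samples)
    k = math.ceil(p / 100 * len(xs))
    return xs[max(k, 1) - 1]

# The 15 check latencies (ms) from the Recent Checks list below.
latencies = [742, 621, 581, 592, 579, 825, 632, 753, 584, 728,
             567, 671, 567, 697, 759]
p95 = percentile(latencies, 95)  # 825 for this 15-sample window
```

With only 15 samples the nearest-rank P95 is simply the maximum, which is why it differs from the 836ms figure computed over the larger 150-check window.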
💰 Pricing
llama-3.3-70b-turbo
Input: $0.88/1M tokens · Output: $0.88/1M tokens
Free $1 credit on signup
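To budget against the free $1 credit, per-request cost follows directly from the per-million-token rates above. A minimal sketch (the model key is the short name used in this table, not necessarily the API model id):

```python
# USD per 1M tokens, taken from the pricing table above.
PRICES = {
    "llama-3.3-70b-turbo": {"input": 0.88, "output": 0.88},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A 2,000-token prompt with a 500-token completion:
cost = estimate_cost("llama-3.3-70b-turbo", 2000, 500)  # $0.0022
```

At these rates the $1 signup credit covers roughly 1.1M tokens of combined input and output.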
⚡ Rate Limits
Standard tier: 60 RPM
Limits vary by model and account tier.
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Llama 3.3 70B Turbo (open-source model inference; fast and reliable) | llm | 128k | — | ✅ | ✅ |
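The table lists tool-calling and JSON-output support. A request sketch against Together's OpenAI-compatible chat-completions endpoint; treat the exact endpoint URL, model id, and `response_format` field name as assumptions to verify against the official API docs:

```python
import json
import urllib.request

# Assumed values — confirm against Together's API reference.
API_URL = "https://api.together.xyz/v1/chat/completions"
MODEL_ID = "meta-llama/Llama-3.3-70B-Instruct-Turbo"

def build_payload(prompt: str, json_mode: bool = False) -> dict:
    """Assemble a chat-completions request body."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }
    if json_mode:  # JSON-output support per the table above
        payload["response_format"] = {"type": "json_object"}
    return payload

def chat(prompt: str, api_key: str) -> str:
    """Send one request and return the first completion's text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the API mirrors the OpenAI wire format, existing OpenAI client libraries can usually be pointed at the Together base URL instead of hand-rolling requests like this.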
Recent Checks (showing last 15)

| Status | Response | Checked at |
|---|---|---|
| Operational | 742ms | 30 Apr, 07:02 am |
| Operational | 621ms | 30 Apr, 06:32 am |
| Operational | 581ms | 30 Apr, 05:56 am |
| Operational | 592ms | 30 Apr, 05:18 am |
| Operational | 579ms | 30 Apr, 04:43 am |
| Operational | 825ms | 30 Apr, 04:00 am |
| Operational | 632ms | 30 Apr, 03:23 am |
| Operational | 753ms | 30 Apr, 02:48 am |
| Operational | 584ms | 30 Apr, 02:07 am |
| Operational | 728ms | 30 Apr, 01:26 am |
| Operational | 567ms | 30 Apr, 12:34 am |
| Operational | 671ms | 29 Apr, 11:41 pm |
| Operational | 567ms | 29 Apr, 10:56 pm |
| Operational | 697ms | 29 Apr, 10:18 pm |
| Operational | 759ms | 29 Apr, 09:45 pm |
Other Inference Providers
Groq
LPU inference — fastest tokens per second on the market
Cerebras
Wafer-scale chip inference — 1,000+ tokens/sec
Fireworks AI
Fast open-model inference — FireFunction, Llama, Mixtral
OpenRouter
Unified API across 200+ models — route by price or speed
Hugging Face
Serverless inference API — 100k+ open models on demand
fal.ai
Ultra-fast image & video model inference for agents