NVIDIA NIM
Inference · FREE TIER
Optimised inference microservices — GPU-native, enterprise-grade
Operational
All systems responding normally
Last checked 30/04/2026, 9:16:37 pm
310ms response
Uptime History: 100.00% uptime
2026-04-26 to Today
Uptime
100.00%
Avg Latency
295ms
P95 Latency
390ms
Fastest
214ms
Checks
150
Response Time
Last 60 checks: 214ms min · 295ms avg · 412ms max
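The page does not state how the latency summary is computed, so the sketch below is an assumption: a nearest-rank P95 over a window of raw check samples, illustrated with the recent checks listed further down this page (a smaller window than the 60-check summary above, so the numbers differ).

```python
# Minimal sketch of deriving min/avg/p95/max from raw response-time samples.
# The percentile estimator (nearest-rank) is an assumption; the status page
# does not say which method it uses.
from math import ceil

def latency_summary(samples_ms: list[int]) -> dict[str, float]:
    """Return min/avg/p95/max over a window of response-time samples."""
    ordered = sorted(samples_ms)
    p95_index = ceil(0.95 * len(ordered)) - 1  # nearest-rank P95
    return {
        "min": ordered[0],
        "avg": sum(ordered) / len(ordered),
        "p95": ordered[p95_index],
        "max": ordered[-1],
    }

# The 15 recent checks shown below on this page:
checks = [310, 314, 349, 363, 223, 303, 371, 292, 297, 247, 294, 283, 326, 245, 300]
print(latency_summary(checks))
```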
💰 Pricing
llama-3.3-70b (FREE)
Input: $0.77/1M · Output: $0.77/1M
Build.nvidia.com free tier. Enterprise pricing via sales.
1000 credits free
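For budgeting beyond the free tier, the per-token arithmetic at the listed rate is straightforward. How the 1000 free credits map to tokens is not stated on this page, so only the paid-rate calculation is sketched.

```python
# Rough cost estimate at the listed rate of $0.77 per 1M tokens,
# which applies to both input and output tokens.
PRICE_PER_M = 0.77  # USD per 1M tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request at the listed paid rate."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_M

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${estimate_cost(2_000, 500):.6f}")  # $0.001925
```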
⚡ Rate Limits
free
RPM: 40
1000 free API credits on signup.
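With the free tier capped at 40 requests per minute, a simple client-side throttle avoids 429 responses. The limit value comes from this page; the pacing strategy (fixed spacing between calls) is an assumption, not NVIDIA's documented behaviour.

```python
# Minimal client-side throttle for the free-tier limit of 40 requests/minute.
import time

RPM_LIMIT = 40
MIN_INTERVAL = 60.0 / RPM_LIMIT  # 1.5 s between requests

_last_call = 0.0

def throttled(fn, *args, **kwargs):
    """Call fn, sleeping just long enough to stay under the RPM limit."""
    global _last_call
    wait = MIN_INTERVAL - (time.monotonic() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.monotonic()
    return fn(*args, **kwargs)
```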
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Llama 3.3 70B (enterprise-grade inference microservices) | llm | 128k | — | ✅ | ✅ |
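Since the table flags tool and JSON support, a minimal chat call is sketched below. The endpoint URL and model identifier (`integrate.api.nvidia.com`, `meta/llama-3.3-70b-instruct`) are assumptions based on build.nvidia.com conventions and are not stated on this page; substitute the values shown in your NVIDIA API dashboard.

```python
# Minimal sketch of a chat completion against the model above, assuming the
# NIM endpoint exposes an OpenAI-compatible API (endpoint and model id are
# assumptions; check your build.nvidia.com account for the exact values).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

response = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "Summarise what an inference microservice is."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```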
Recent Checks
Showing last 15

| Status | Response | Time |
|---|---|---|
| Operational | 310ms | 30 Apr, 09:16 pm |
| Operational | 314ms | 30 Apr, 08:37 pm |
| Operational | 349ms | 30 Apr, 07:51 pm |
| Operational | 363ms | 30 Apr, 06:59 pm |
| Operational | 223ms | 30 Apr, 06:04 pm |
| Operational | 303ms | 30 Apr, 05:12 pm |
| Operational | 371ms | 30 Apr, 04:11 pm |
| Operational | 292ms | 30 Apr, 03:01 pm |
| Operational | 297ms | 30 Apr, 01:46 pm |
| Operational | 247ms | 30 Apr, 12:20 pm |
| Operational | 294ms | 30 Apr, 11:23 am |
| Operational | 283ms | 30 Apr, 10:45 am |
| Operational | 326ms | 30 Apr, 09:54 am |
| Operational | 245ms | 30 Apr, 09:29 am |
| Operational | 300ms | 30 Apr, 09:01 am |
Other Inference Providers
Groq
LPU inference — fastest tokens per second on the market
Cerebras
Wafer-scale chip inference — 1,000+ tokens/sec
Together AI
Open-source model inference — Llama, Mixtral, FLUX
Fireworks AI
Fast open-model inference — FireFunction, Llama, Mixtral
OpenRouter
Unified API across 200+ models — route by price or speed
Hugging Face
Serverless inference API — 100k+ open models on demand