Rate limiting is coming soon. Currently the API does not enforce rate limits, but we recommend designing your integration to handle them gracefully.
## Planned Limits
Rate limits will be enforced per API key on a per-minute basis. Planned limits:
| Tier | Requests per minute | Burst |
|---|---|---|
| Standard | 60 | 100 |
| Premium | 300 | 500 |
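Even before enforcement begins, you can model these limits client-side. The sketch below is a simple token bucket sized for the Standard tier (refilling at 60 requests per minute, i.e. 1 per second, with a burst capacity of 100); the class and its interface are our own illustration, not part of the API.

```javascript
// Client-side token bucket matching the planned Standard tier.
// Refills continuously at `ratePerSec`; allows bursts up to `capacity`.
class TokenBucket {
  constructor(ratePerSec = 1, capacity = 100) {
    this.ratePerSec = ratePerSec;
    this.capacity = capacity;
    this.tokens = capacity; // start full, so an initial burst is allowed
    this.lastRefill = Date.now();
  }

  // Refill based on elapsed time, then try to take one token.
  // Returns true if the request may be sent now.
  tryAcquire() {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.ratePerSec
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Calling `tryAcquire()` before each request lets you queue or drop work locally instead of receiving a 429 once limits are live.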
When rate limiting is active, all responses will include these headers:
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed per window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp when the window resets |
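Once these headers ship, a client can read them to throttle proactively instead of waiting for a 429. A minimal sketch (the 10% `nearLimit` threshold is an arbitrary policy of our own, not an API rule):

```javascript
// Parse the planned rate-limit headers from a fetch Response's headers.
function parseRateLimit(headers) {
  const limit = Number(headers.get("X-RateLimit-Limit"));
  const remaining = Number(headers.get("X-RateLimit-Remaining"));
  const resetAt = Number(headers.get("X-RateLimit-Reset")); // Unix seconds

  return {
    limit,
    remaining,
    resetAt,
    // Illustrative policy: slow down when under 10% of the quota remains.
    nearLimit: limit > 0 && remaining / limit < 0.1,
  };
}
```

When `nearLimit` is true, a client could delay non-urgent requests until `resetAt` rather than exhausting the window.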
## Handling Rate Limits
When you exceed the limit, the API returns `429 Too Many Requests`:
```json
{
  "code": "rate_limited",
  "message": "Rate limit exceeded. Retry after 30 seconds.",
  "trace_id": "d4e5f6a7-b8c9-0123-defg-234567890123"
}
```
## Recommended Approach
Implement exponential backoff with jitter:
```javascript
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status === 429) {
      // Prefer the reset timestamp (Unix seconds); fall back to
      // exponential backoff with jitter.
      const resetAt = response.headers.get("X-RateLimit-Reset");
      const waitMs = resetAt
        ? Math.max(0, parseInt(resetAt, 10) * 1000 - Date.now())
        : Math.pow(2, attempt) * 1000 + Math.random() * 1000;
      await new Promise((resolve) => setTimeout(resolve, waitMs));
      continue;
    }
    return response;
  }
  throw new Error("Max retries exceeded");
}
```
## Best Practices
- Batch requests where possible instead of making many individual calls
- Cache responses for data that doesn’t change frequently (providers, facilities)
- Use pagination with larger page sizes to reduce the number of requests
- Implement backoff logic proactively, even before rate limits are enforced
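As one way to apply the caching advice above, here is a minimal in-memory TTL cache sketch; the cache key, the fetch function, and the 5-minute TTL are illustrative choices, not API requirements.

```javascript
// Minimal in-memory cache with a time-to-live, suitable for slow-changing
// resources such as provider or facility lists.
const cache = new Map();

async function cachedFetch(key, fetchFn, ttlMs = 5 * 60 * 1000) {
  const entry = cache.get(key);
  if (entry && Date.now() < entry.expiresAt) {
    return entry.value; // served from cache: no API call, no quota spent
  }
  const value = await fetchFn();
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```

For example, `cachedFetch("providers", () => fetchWithRetry(url, options).then((r) => r.json()))` would hit the API at most once per five minutes for the same key.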