API Rate Limits
Per-service rate limits, rate limit headers, and usage guidelines for integrating with the API without being throttled.
Per-Service Rate Limits
Each API service has specific rate limits based on the endpoint and HTTP method:
Service | Endpoint | GET Limit | POST Limit | Burst Policy | Window | Basis |
---|---|---|---|---|---|---|
CNAM | https://api-service.verirouteintel.io/api/v1/cnam | 100 rps | 50 rps | Up to 2× short bursts | 60s | Per API key |
LRN | https://api-service.verirouteintel.io/api/v1/lrn | 200 rps | 100 rps | Up to 2× short bursts | 60s | Per API key |
Message Provider Lookup | https://api-service.verirouteintel.io/api/v1/messaging | 75 rps | 40 rps | Up to 2× short bursts | 60s | Per API key |
Trust (Spam/Fraud/Compliance) | https://api-service.verirouteintel.io/api/v1/trust | 50 rps | 25 rps | Up to 2× short bursts | 60s | Per API key |
GraphQL | https://api-service.verirouteintel.io/graphql | — | 60 rps | Up to 2× short bursts; cost-based | 60s | Per API key |
Higher limits and dedicated lanes are available on enterprise plans. Limits are enforced per API key. 429 responses include Retry-After.
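Because bursts are allowed up to 2× the steady rate, a client-side token bucket sized from the table above keeps traffic within a service's limit. The sketch below is illustrative only; the `TokenBucket` class is not part of any SDK, and the CNAM figures plugged in at the end come from the table:

```python
import time
import threading

class TokenBucket:
    """Client-side throttle: refill at `rate_rps`, allow short bursts up to `burst` tokens."""
    def __init__(self, rate_rps: float, burst: float):
        self.rate = rate_rps
        self.capacity = burst
        self.tokens = burst
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self) -> None:
        """Block until a token is available, then consume it."""
        while True:
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
                self.updated = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)

# Example: stay under the CNAM GET limit (100 rps steady, 2x burst) from the table above.
cnam_bucket = TokenBucket(rate_rps=100, burst=200)
# cnam_bucket.acquire()  # call before each GET to /api/v1/cnam
```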
Headers & 429 Behavior
When rate limits are approached or exceeded, the API provides detailed information:
- HTTP 429 Too Many Requests when limits are exceeded.
- Headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, Retry-After (seconds).
- Back off and retry after the indicated interval. Exponential backoff is recommended for clients that may spike; see the retry sketch after the example response below.
```http
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640995200
Retry-After: 5

{
  "error": "Too Many Requests",
  "code": "rate_limited",
  "message": "You have exceeded the allowed request rate. Please retry after the specified interval.",
  "bucket": "per_api_key",
  "window_seconds": 60,
  "retry_after": 5
}
```
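A client can honor `Retry-After` and fall back to exponential backoff with a small helper like the sketch below. The `get_with_backoff` helper and the Bearer authorization header are illustrative assumptions; only the 429 status and the headers shown above are documented behavior:

```python
import random
import time

import requests

def get_with_backoff(url: str, api_key: str, max_retries: int = 5) -> requests.Response:
    """GET `url`, retrying on 429: use Retry-After when present, else exponential backoff with jitter."""
    delay = 1.0
    resp = None
    for _ in range(max_retries):
        resp = requests.get(url, headers={"Authorization": f"Bearer {api_key}"})  # auth scheme assumed
        if resp.status_code != 429:
            return resp
        retry_after = resp.headers.get("Retry-After")
        wait = float(retry_after) if retry_after else delay + random.uniform(0, 0.5)
        time.sleep(wait)
        delay *= 2
    return resp  # still rate limited after max_retries attempts

# Example (hypothetical key; required query parameters omitted):
# resp = get_with_backoff("https://api-service.verirouteintel.io/api/v1/cnam", "YOUR_API_KEY")
```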
GraphQL Cost Limits
The GraphQL endpoint enforces operation complexity and depth limits in addition to request rate limits:
- Maximum Cost: 1000 points per window
- Depth Limit: 12 levels maximum
- Cost Calculation: Based on field fanout and query depth
- Recommendation: Use persisted queries and avoid unbounded lists for production applications
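To stay within the cost and depth limits, keep queries shallow and give every list an explicit page size, as in the sketch below. The field names (`lookupHistory`, `number`, `callerName`) are placeholders rather than documented schema fields, and the Bearer auth scheme is likewise an assumption:

```python
import requests

# A bounded query: an explicit page size and shallow nesting keep the cost score low.
QUERY = """
query RecentLookups($limit: Int!) {
  lookupHistory(first: $limit) {  # bounded list instead of fetching everything
    number
    callerName
  }
}
"""

resp = requests.post(
    "https://api-service.verirouteintel.io/graphql",
    json={"query": QUERY, "variables": {"limit": 25}},
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # auth scheme assumed
    timeout=10,
)
print(resp.status_code, resp.json())
```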
Best Practices
Follow these guidelines to optimize your API usage and avoid rate limiting:
- Implement Caching: Cache responses to reduce redundant API calls (see the sketch after this list)
- Batch Requests: Use bulk endpoints when available for multiple lookups
- Monitor Headers: Check rate limit headers in responses to track usage
- Implement Retry Logic: Use exponential backoff for failed requests
- Optimize Timing: Spread requests evenly rather than bursting
- Upgrade Plans: Consider higher tiers for increased limits
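As a sketch of the caching recommendation above, a small TTL cache placed in front of lookups avoids repeating identical calls. The 300-second TTL, the cache key, and the reuse of the earlier `get_with_backoff` helper are assumptions for illustration, not documented behavior:

```python
import time
from typing import Any, Callable, Dict, Tuple

class TTLCache:
    """Cache lookup results for `ttl` seconds to avoid repeated identical API calls."""
    def __init__(self, ttl: float = 300.0):
        self.ttl = ttl
        self._store: Dict[str, Tuple[float, Any]] = {}

    def get_or_fetch(self, key: str, fetch: Callable[[], Any]) -> Any:
        now = time.monotonic()
        hit = self._store.get(key)
        if hit and now - hit[0] < self.ttl:
            return hit[1]          # fresh cached value: no API call
        value = fetch()            # cache miss or expired entry: call the API once
        self._store[key] = (now, value)
        return value

# Example usage with the retry helper sketched earlier (both hypothetical):
# cache = TTLCache(ttl=300)
# result = cache.get_or_fetch(
#     "+15551230000",
#     lambda: get_with_backoff("https://api-service.verirouteintel.io/api/v1/cnam", "YOUR_API_KEY").json(),
# )
```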