Overview

Rate limits are implemented at multiple levels to protect our infrastructure and ensure fair usage:

  1. Host-level rate limits
  2. API endpoint-specific rate limits
  3. User-level rate limits
  4. Special rate limits for specific features

Host-Level Rate Limits

The system implements a base rate limit of 250 requests per second (RPS) per hostname. This is enforced through a combination of:

  • In-memory rate limiting
  • Distributed rate limiting

Note: Host-level rate limits exist primarily for bot protection. Do not use admin.snagsolutions.io as your base URL; instead, set up your own hostname via our admin panel under the Hostname tab.

API Endpoint Rate Limits

Different endpoints have specific rate limits:

Loyalty System Endpoints

  • /api/loyalty/transaction_entries: 30 RPS
  • /api/loyalty/transaction_entries/count: 10 RPS
  • /api/loyalty/account_streaks: 10 RPS
  • /api/loyalty/accounts/[id]/rank: 1 RPS
  • /api/loyalty/multipliers: 30 RPS
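
These documented limits can be mirrored client-side in a simple lookup table. The fallback default below is an assumption for unknown paths, not a documented value:

```python
# Documented per-endpoint limits (requests per second)
ENDPOINT_RPS = {
    "/api/loyalty/transaction_entries": 30,
    "/api/loyalty/transaction_entries/count": 10,
    "/api/loyalty/account_streaks": 10,
    "/api/loyalty/accounts/[id]/rank": 1,
    "/api/loyalty/multipliers": 30,
}

def rps_limit(path: str) -> int:
    """Look up the documented RPS limit for an endpoint path,
    falling back to a conservative default of 1 RPS (assumption)."""
    return ENDPOINT_RPS.get(path, 1)
```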

Loyalty Rule Completion

The loyalty rule completion endpoint has multiple rate limit layers:

  1. User-based rate limit: 5 requests per minute per user
  2. Rule-based rate limit: 1 request per minute per user per rule
  3. Website-level rate limit: 10 requests per second per rule

For external rules, each of these limits is multiplied by 5.
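
A sketch of how the documented layers combine, including the ×5 multiplier for external rules:

```python
def rule_completion_limits(external: bool = False) -> dict:
    """Return the documented rate-limit layers for the loyalty rule
    completion endpoint; external rules get every limit multiplied by 5."""
    factor = 5 if external else 1
    return {
        "per_user_per_minute": 5 * factor,
        "per_user_per_rule_per_minute": 1 * factor,
        "per_rule_per_second": 10 * factor,
    }
```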

Rate Limit Responses

When you exceed a rate limit, the API responds with HTTP status code 429 (Too Many Requests). The response body will include:

{
  "message": "Too many requests, please try again later."
}
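
A minimal sketch of detecting this response client-side. It assumes only the documented status code and JSON body shape:

```python
import json

def is_rate_limited(status_code: int, body: str) -> tuple[bool, str]:
    """Detect the documented 429 response and extract its message."""
    if status_code != 429:
        return False, ""
    try:
        message = json.loads(body).get("message", "")
    except json.JSONDecodeError:
        message = ""
    return True, message
```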

Best Practices

  1. Implement exponential backoff when you receive 429 responses
  2. Cache responses when possible to reduce API calls
  3. Use appropriate hostnames for your integration
  4. Monitor your API usage to stay within limits
  5. Consider implementing client-side rate limiting for high-volume applications
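
Practice #1 can be sketched as exponential backoff with full jitter. The base delay, cap, and retry count below are illustrative assumptions, not documented values:

```python
import random
import time

def backoff_delays(max_retries: int = 5, base: float = 0.5, cap: float = 30.0):
    """Yield exponentially growing, jittered sleep durations."""
    for attempt in range(max_retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

def call_with_backoff(do_request, max_retries: int = 5):
    """Retry `do_request` (any callable returning an HTTP status code)
    whenever it reports 429, sleeping a jittered delay between attempts."""
    for delay in backoff_delays(max_retries):
        status = do_request()
        if status != 429:
            return status
        time.sleep(delay)
    return do_request()  # final attempt after exhausting retries
```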

Avoiding Rate Limits

Here are several strategies for working effectively within rate limits:

1. Implement Caching

  • Cache frequently accessed data on your end
  • Use appropriate cache TTLs based on data freshness requirements
  • Consider implementing a distributed cache for high-traffic applications
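
A minimal TTL cache sketch for the first two points. Production code might reach for a library such as cachetools instead; this is just the idea:

```python
import time

class TTLCache:
    """Minimal in-memory cache whose entries expire after a fixed TTL."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() >= expiry:
            del self._store[key]  # stale entry: evict and miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
```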

2. Batch Requests

  • Combine multiple requests into a single batch request where possible
  • Use bulk endpoints instead of individual requests
  • Implement request queuing for non-time-sensitive operations
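
A small queue that accumulates items and flushes them in batches, sketching the queuing point above. The `send_batch` callable stands in for a bulk endpoint call; the batch size is an arbitrary assumption:

```python
class BatchQueue:
    """Collect individual items and flush them in fixed-size batches."""

    def __init__(self, send_batch, batch_size: int = 50):
        self.send_batch = send_batch  # callable taking a list of items
        self.batch_size = batch_size
        self.pending = []

    def add(self, item):
        self.pending.append(item)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send whatever is pending, even a partial batch."""
        if self.pending:
            self.send_batch(self.pending)
            self.pending = []
```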

3. Optimize API Usage

  • Only request data that you actually need
  • Use pagination to limit response sizes
  • Implement proper error handling and retry logic
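
A generic pagination loop for the second point. The offset/limit scheme here is an assumption; check the actual parameters of the endpoint you are calling:

```python
def fetch_all(fetch_page, limit: int = 100):
    """Walk a paginated endpoint (hypothetical offset/limit parameters)
    and yield every item, stopping at the first short page."""
    offset = 0
    while True:
        page = fetch_page(offset=offset, limit=limit)
        yield from page
        if len(page) < limit:
            break  # short page means we've reached the end
        offset += limit
```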

4. Implement Client-Side Rate Limiting

  • Add rate limiting logic in your application
  • Use token bucket or leaky bucket algorithms
  • Distribute requests evenly across time windows

5. Monitor and Adjust

  • Keep track of your API usage patterns
  • Adjust your implementation based on rate limit responses
  • Implement proper logging to identify rate limit issues early
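
A sketch of the logging point: counting 429 responses per endpoint makes rate-limit pressure visible in logs before it becomes an outage. The logger name and method names here are illustrative assumptions:

```python
import logging
from collections import Counter

logger = logging.getLogger("rate_limits")

class RateLimitMonitor:
    """Track 429 responses per endpoint and surface the worst offenders."""

    def __init__(self):
        self.hits = Counter()

    def record(self, path: str, status_code: int):
        """Call after every response; logs a warning on each 429."""
        if status_code == 429:
            self.hits[path] += 1
            logger.warning("rate limited on %s (%d so far)", path, self.hits[path])

    def worst_endpoints(self, n: int = 3):
        """Endpoints hitting limits most often, for tuning cache/backoff."""
        return self.hits.most_common(n)
```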