
Overview

Rate limits are implemented at multiple levels to protect our infrastructure and ensure fair usage:
  1. Host-level rate limits
  2. API endpoint-specific rate limits
  3. User-level rate limits
  4. Special rate limits for specific features

Host-Level Rate Limits

The system implements a base rate limit of 250 requests per second (RPS) per hostname. This is enforced through a combination of:
  • In-memory rate limiting
  • Distributed rate limiting
Note: Host-level rate limits exist primarily for bot protection. To avoid them, do not use admin.snagsolutions.io as a base URL; instead, set up your own hostname via our admin panel under the Hostname tab.
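
For example, here is a minimal sketch of pointing requests at a custom hostname (loyalty.example.com is a placeholder for the hostname you configure; authentication is omitted and depends on your integration):

// Point requests at your own hostname rather than admin.snagsolutions.io.
// "loyalty.example.com" is a placeholder; use the hostname you set up
// in the admin panel under the Hostname tab.
const BASE_URL = "https://loyalty.example.com";

async function getAccounts(params: Record<string, string> = {}): Promise<unknown> {
  const url = new URL("/api/loyalty/accounts", BASE_URL);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  // Add your authentication headers here; header names depend on your integration.
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}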

API Endpoint Rate Limits

Each endpoint has its own rate limit, based on the endpoint type and its typical usage patterns.

Loyalty System Endpoints

These endpoints are rate limited per website:
  • /api/loyalty/transaction_entries: 50 RPS
  • /api/loyalty/account_streaks: 50 RPS
  • /api/loyalty/accounts: 50 RPS
  • /api/loyalty/accounts/[id]/rank: 50 RPS
  • /api/loyalty/multipliers: 50 RPS

Loyalty Rule Completion

The loyalty rule completion endpoint (/api/loyalty/rules/[id]/complete) has multiple rate limit layers to prevent abuse:
  1. User-based rate limit: 7 requests per minute per user
  2. User-rule-based rate limit: 2 requests per minute per user per specific rule
  3. Website-level rate limit: 10 requests per second per rule
External Rules: For external loyalty rules, all limits are multiplied by a factor of 5, allowing for higher throughput.
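
As an illustration, a client can track these windows locally before calling the completion endpoint. This is only a sketch of the per-user and per-user-per-rule checks (using the standard, non-external limits); it does not replace server-side enforcement:

// Client-side guard for the completion endpoint's per-user limits:
// at most 7 calls per minute per user and 2 per minute per user + rule.
const WINDOW_MS = 60_000;
const perUser = new Map<string, number[]>();      // userId -> call timestamps
const perUserRule = new Map<string, number[]>();  // `${userId}:${ruleId}` -> call timestamps

function allowCompletion(userId: string, ruleId: string): boolean {
  const now = Date.now();
  const prune = (timestamps: number[]) => timestamps.filter((t) => now - t < WINDOW_MS);

  const userCalls = prune(perUser.get(userId) ?? []);
  const ruleCalls = prune(perUserRule.get(`${userId}:${ruleId}`) ?? []);

  if (userCalls.length >= 7 || ruleCalls.length >= 2) return false;

  userCalls.push(now);
  ruleCalls.push(now);
  perUser.set(userId, userCalls);
  perUserRule.set(`${userId}:${ruleId}`, ruleCalls);
  return true;
}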

Minting Endpoints

Minting endpoints have both user-level and website-level rate limits:

Mint/Generate Signature (/api/minting/contracts/mint)

  • User-level: 5 requests per minute per user
  • Website-level: 100 RPS

Precreate Asset (/api/minting/contracts/precreate_asset)

  • User-level: 5 requests per minute per user
  • Website-level: 8 RPS

Allowlist Management (/api/minting/assets/allowlist)

  • 5 RPS per website (applies to both GET and POST requests)

Rate Limit Responses

When you hit a rate limit, the API responds with a 429 status code and a message indicating that you've exceeded the limit. The response body will include:
{
  "message": "Too many requests, please try again later."
}
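
For example, here is a minimal sketch of detecting this response with fetch:

// Detect a rate-limited response and surface the server's message.
async function callApi(url: string): Promise<unknown> {
  const res = await fetch(url);
  if (res.status === 429) {
    const body = await res.json(); // e.g. { "message": "Too many requests, please try again later." }
    throw new Error(`Rate limited: ${body.message}`);
  }
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}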

Understanding Rate Limit Layers

Many endpoints implement multiple rate limit layers for comprehensive protection:
  1. Website-level limits: Applied to all requests for a specific website
  2. User-level limits: Applied per individual user to prevent single-user abuse
  3. Combined limits: Some endpoints apply both simultaneously
When a request is rate limited, you'll receive a 429 status code. The endpoint checks rate limits in order and rejects the request at the first limit that is exceeded.

Best Practices

  1. Implement exponential backoff when you receive 429 responses (a sketch follows this list)
  2. Cache responses when possible to reduce API calls
  3. Use appropriate hostnames for your integration (avoid using admin.snagsolutions.io)
  4. Monitor your API usage to stay within limits and identify issues early
  5. Consider client-side rate limiting for high-volume applications to prevent hitting server limits
  6. Distribute requests evenly across time windows rather than bursting
  7. Handle rate limit responses gracefully with proper error messages to end users
  8. Use Cloudflare with custom hostnames for enhanced DDoS protection, bot management, and WAF security
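
For example, here is a minimal sketch of exponential backoff with jitter (the retry count and base delay are illustrative choices, not values prescribed by the API):

// Retry on 429 with exponentially growing, jittered delays.
async function fetchWithBackoff(
  url: string,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<Response> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(url);
    if (res.status !== 429) return res;

    // 500 ms, 1 s, 2 s, ... plus random jitter so clients don't retry in lockstep.
    const delayMs = baseDelayMs * 2 ** attempt + Math.random() * baseDelayMs;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Still rate limited after ${maxAttempts} attempts: ${url}`);
}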

Avoiding Rate Limits

Here are several strategies to effectively work with rate limits:

1. Implement Caching

  • Cache frequently accessed data on your end (see the cache sketch after this list)
  • Use appropriate cache TTLs based on data freshness requirements
  • Consider implementing a distributed cache for high-traffic applications
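
For example, a minimal in-memory TTL cache sketch (the 30-second TTL is an arbitrary example; choose TTLs that match how fresh each dataset needs to be):

// Cache GET responses in memory for a fixed TTL to avoid repeat calls.
const cache = new Map<string, { expiresAt: number; data: unknown }>();

async function cachedGet(url: string, ttlMs = 30_000): Promise<unknown> {
  const hit = cache.get(url);
  if (hit && hit.expiresAt > Date.now()) return hit.data;

  const res = await fetch(url);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  const data = await res.json();
  cache.set(url, { expiresAt: Date.now() + ttlMs, data });
  return data;
}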

2. Batch Requests

  • Combine multiple requests into a single batch request where possible
  • Use bulk endpoints instead of individual requests
  • Implement request queuing for non-time-sensitive operations (a queue sketch follows this list)
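
For example, a minimal queue sketch that drains one request at a time at a fixed pace (the 200 ms spacing is illustrative):

// Drain queued requests at a fixed pace instead of bursting.
type Job = () => Promise<void>;
const queue: Job[] = [];
let draining = false;

function enqueue(job: Job): void {
  queue.push(job);
  if (!draining) void drain();
}

async function drain(intervalMs = 200): Promise<void> {
  draining = true;
  while (queue.length > 0) {
    const job = queue.shift()!;
    await job().catch((err) => console.error("queued request failed", err));
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  draining = false;
}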

3. Optimize API Usage

  • Only request data that you actually need
  • Use pagination to limit response sizes
  • Implement proper error handling and retry logic

4. Implement Client-Side Rate Limiting

  • Add rate limiting logic in your application
  • Use token bucket or leaky bucket algorithms (a token bucket sketch follows this list)
  • Distribute requests evenly across time windows
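
For example, a minimal token bucket sketch (the capacity and refill rate are placeholders; set them below the limits of the endpoints you call):

// Token bucket: refill at `ratePerSec` tokens per second, spend one token per request.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity: number, private ratePerSec: number) {
    this.tokens = capacity;
  }

  tryRemoveToken(): boolean {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefill = now;

    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}

// Example: stay comfortably under a 50 RPS per-website limit.
const bucket = new TokenBucket(40, 40);
if (bucket.tryRemoveToken()) {
  // safe to send the request
}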

5. Monitor and Adjust

  • Keep track of your API usage patterns
  • Adjust your implementation based on rate limit responses
  • Implement proper logging to identify rate limit issues early