How to Handle Rate Limiting with cURL

Rate limiting is a crucial mechanism that APIs use to control the number of requests a client can make within a specific time period. When working with cURL, understanding how to properly handle rate limits ensures your scripts run reliably without being blocked or banned by the target server.

Understanding Rate Limiting

Rate limiting typically manifests as HTTP status codes like 429 Too Many Requests or 503 Service Unavailable. Some APIs also include rate limit information in response headers, which cURL can capture and use to implement intelligent retry strategies.

Basic Rate Limiting Detection

First, let's examine how to detect rate limiting responses with cURL:

# Check HTTP status code and headers
curl -I -w "%{http_code}\n" https://api.example.com/data

# Save response headers to file for analysis
curl -D headers.txt https://api.example.com/data

Common rate limit headers to watch for:

  • X-RateLimit-Limit: Maximum requests allowed
  • X-RateLimit-Remaining: Requests remaining in the current window
  • X-RateLimit-Reset: Time when the rate limit resets
  • Retry-After: Seconds to wait before retrying
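The header dump saved by the `curl -D` command above can be parsed with standard tools. A minimal sketch (the `get_header` helper is illustrative, not a curl feature):

```shell
#!/bin/bash
get_header() {
    local name=$1 file=$2
    # -i: header names are case-insensitive; tr strips HTTP's trailing CR
    grep -i "^${name}:" "$file" | head -n1 | awk '{print $2}' | tr -d '\r'
}

# Example, assuming headers.txt was saved with curl -D as above:
#   remaining=$(get_header X-RateLimit-Remaining headers.txt)
#   echo "Requests remaining: $remaining"
```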

Implementing Delays Between Requests

The simplest approach to handle rate limiting is adding delays between requests:

#!/bin/bash
# Simple delay between requests
for i in {1..10}; do
    curl https://api.example.com/data/$i
    sleep 2  # Wait 2 seconds between requests
done

For more sophisticated timing:

#!/bin/bash
# Variable delay based on API response time
# Read urls.txt line by line; $(cat urls.txt) would word-split URLs
while read -r url; do
    start_time=$(date +%s.%N)
    curl "$url"
    end_time=$(date +%s.%N)

    # Calculate processing time and add buffer
    processing_time=$(echo "$end_time - $start_time" | bc)
    sleep_time=$(echo "1.5 - $processing_time" | bc)

    if (( $(echo "$sleep_time > 0" | bc -l) )); then
        sleep "$sleep_time"
    fi
done < urls.txt

Retry Logic with Exponential Backoff

Implementing retry logic helps handle temporary rate limit errors:

#!/bin/bash
retry_request() {
    local url=$1
    local max_retries=5
    local retry_count=0
    local base_delay=1

    while [ $retry_count -lt $max_retries ]; do
        # Make the request and capture HTTP status
        http_code=$(curl -s -o response.json -w "%{http_code}" "$url")

        if [ "$http_code" -eq 200 ]; then
            echo "Success!"
            cat response.json
            return 0
        elif [ "$http_code" -eq 429 ] || [ "$http_code" -eq 503 ]; then
            retry_count=$((retry_count + 1))
            delay=$((base_delay * 2**retry_count))
            echo "Rate limited (HTTP $http_code). Retrying in $delay seconds..."
            sleep $delay
        else
            echo "Request failed with HTTP $http_code"
            return 1
        fi
    done

    echo "Max retries exceeded"
    return 1
}

# Usage
retry_request "https://api.example.com/data"
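A common refinement is to add jitter so many clients don't retry in lockstep after the same outage. A minimal sketch of "full jitter" backoff (the `backoff_with_jitter` helper is an assumption, not part of the script above; it relies on bash's $RANDOM):

```shell
#!/bin/bash
# Sleep a random amount between 0 and base_delay * 2^attempt, which
# spreads retries from many clients instead of synchronizing them.
backoff_with_jitter() {
    local attempt=$1
    local base_delay=${2:-1}
    local max_delay=$(( base_delay * (2 ** attempt) ))
    # RANDOM is 0..32767; map it onto 0..max_delay
    echo $(( RANDOM % (max_delay + 1) ))
}

# Drop-in replacement for the fixed delay in retry_request:
#   sleep "$(backoff_with_jitter "$retry_count")"
```

Note that curl also ships built-in retry support (`--retry`, `--retry-delay`, `--retry-max-time`), and recent versions honor the Retry-After header when retrying, which may be enough for simple scripts.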

Reading Rate Limit Headers

Extract rate limit information from response headers to make informed decisions:

#!/bin/bash
check_rate_limit() {
    local url=$1

    # Get headers, body, and status in a single request -- making a second
    # request just for the status code would double your API usage
    http_code=$(curl -s -D /tmp/headers -o /tmp/response -w "%{http_code}" "$url")

    # Extract rate limit headers
    remaining=$(grep -i "x-ratelimit-remaining" /tmp/headers | cut -d' ' -f2 | tr -d '\r')
    reset_time=$(grep -i "x-ratelimit-reset" /tmp/headers | cut -d' ' -f2 | tr -d '\r')
    retry_after=$(grep -i "retry-after" /tmp/headers | cut -d' ' -f2 | tr -d '\r')

    echo "HTTP Code: $http_code"
    echo "Remaining requests: $remaining"
    echo "Reset time: $reset_time"
    echo "Retry after: $retry_after seconds"

    # Adaptive delay based on remaining requests
    if [ -n "$remaining" ] && [ "$remaining" -lt 10 ]; then
        echo "Low remaining requests. Adding delay..."
        sleep 5
    fi
}
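If X-RateLimit-Reset is a Unix timestamp (some APIs return seconds-until-reset instead, so check your API's documentation), the sleep duration can be derived directly. A minimal sketch:

```shell
#!/bin/bash
seconds_until_reset() {
    local reset_epoch=$1
    local now=${2:-$(date +%s)}   # second arg lets you test without the clock
    local wait_for=$(( reset_epoch - now ))
    if [ "$wait_for" -lt 0 ]; then wait_for=0; fi
    echo "$wait_for"
}

# Usage with the header value extracted in check_rate_limit:
#   sleep "$(seconds_until_reset "$reset_time")"
```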

Advanced Rate Limiting Strategies

Token Bucket Implementation

For APIs that use token bucket rate limiting:

#!/bin/bash
# Simple token bucket simulation
BUCKET_SIZE=10
TOKENS=$BUCKET_SIZE
REFILL_RATE=1  # tokens per second
LAST_REFILL=$(date +%s)

make_request_with_tokens() {
    local url=$1
    local current_time=$(date +%s)
    local time_passed=$((current_time - LAST_REFILL))

    # Refill tokens
    TOKENS=$((TOKENS + time_passed * REFILL_RATE))
    if [ $TOKENS -gt $BUCKET_SIZE ]; then
        TOKENS=$BUCKET_SIZE
    fi
    LAST_REFILL=$current_time

    if [ $TOKENS -gt 0 ]; then
        TOKENS=$((TOKENS - 1))
        curl "$url"
        echo "Request made. Tokens remaining: $TOKENS"
    else
        echo "No tokens available. Waiting..."
        sleep 1
        make_request_with_tokens "$url"
    fi
}

Parallel Requests with Rate Limiting

When making parallel requests, coordinate rate limiting across processes. One approach is a named pipe used as a shared pool of tokens. (If your curl is 7.84 or newer, its built-in --rate option can also cap how many transfers per second, minute, or hour a single curl invocation starts.)

#!/bin/bash
# Create a named pipe for coordination
PIPE="/tmp/rate_limit_pipe"
mkfifo "$PIPE"

# Keep the pipe open read-write on fd 3; repeatedly opening and closing
# the FIFO can drop tokens or block writers
exec 3<>"$PIPE"

# Fill pipe with tokens
for i in $(seq 1 5); do
    echo "token" >&3
done

make_parallel_request() {
    local url=$1

    # Wait for a token
    read -r token <&3

    # Make request
    curl "$url"

    # Return token after delay
    (sleep 2; echo "token" >&3) &
}

# Launch parallel requests
while read -r url; do
    make_parallel_request "$url" &
done < urls.txt

wait        # Wait for all background jobs
exec 3>&-   # Close the pipe
rm "$PIPE"  # Cleanup
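If a named pipe feels heavyweight, a simpler (if coarser) pattern is batching: start at most MAX_PARALLEL requests, then wait for the whole batch to finish. The `fetch` function below is a placeholder for the real curl call:

```shell
#!/bin/bash
MAX_PARALLEL=3

fetch() {
    # stand-in for: curl -s "$1"
    echo "fetched $1"
}

run_in_batches() {
    local count=0 url
    for url in "$@"; do
        fetch "$url" &
        count=$((count + 1))
        if [ $((count % MAX_PARALLEL)) -eq 0 ]; then
            wait  # let the current batch drain before starting the next
        fi
    done
    wait  # wait for the final partial batch
}
```

The trade-off is that a slow request stalls its whole batch, whereas the token approach lets fast requests keep flowing.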

Handling Different Rate Limit Scenarios

Time-Based Windows

For APIs with time-based rate limits:

#!/bin/bash
# Track requests per time window
WINDOW_SIZE=60  # 60 seconds
MAX_REQUESTS=100
REQUEST_TIMES=()

can_make_request() {
    local current_time=$(date +%s)
    local window_start=$((current_time - WINDOW_SIZE))

    # Remove old timestamps
    local filtered_times=()
    for timestamp in "${REQUEST_TIMES[@]}"; do
        if [ "$timestamp" -gt "$window_start" ]; then
            filtered_times+=("$timestamp")
        fi
    done
    REQUEST_TIMES=("${filtered_times[@]}")

    # Check if we can make another request
    if [ ${#REQUEST_TIMES[@]} -lt $MAX_REQUESTS ]; then
        REQUEST_TIMES+=("$current_time")
        return 0
    else
        return 1
    fi
}

# Usage
while read -r url; do
    if can_make_request; then
        curl "$url"
    else
        echo "Rate limit reached. Waiting..."
        sleep 10
    fi
done < urls.txt

API Key Rotation

For scenarios requiring multiple API keys:

#!/bin/bash
API_KEYS=("key1" "key2" "key3")
CURRENT_KEY_INDEX=0
KEY_REQUEST_COUNT=0
MAX_REQUESTS_PER_KEY=1000

rotate_key_if_needed() {
    if [ $KEY_REQUEST_COUNT -ge $MAX_REQUESTS_PER_KEY ]; then
        CURRENT_KEY_INDEX=$(( (CURRENT_KEY_INDEX + 1) % ${#API_KEYS[@]} ))
        KEY_REQUEST_COUNT=0
        echo "Switched to API key index: $CURRENT_KEY_INDEX" >&2
    fi
}

make_authenticated_request() {
    local url=$1

    # Rotate in the current shell; calling this inside $(...) would run it
    # in a subshell and the updated index would be lost
    rotate_key_if_needed
    local api_key="${API_KEYS[$CURRENT_KEY_INDEX]}"

    # -o /dev/null keeps the response body out of the captured status code
    http_code=$(curl -s -o /dev/null \
                     -H "Authorization: Bearer $api_key" \
                     -w "%{http_code}" "$url")

    if [ "$http_code" -eq 429 ]; then
        echo "Rate limit hit for key $CURRENT_KEY_INDEX"
        KEY_REQUEST_COUNT=$MAX_REQUESTS_PER_KEY  # Force key rotation
        make_authenticated_request "$url"
    else
        KEY_REQUEST_COUNT=$((KEY_REQUEST_COUNT + 1))
    fi
}

Monitoring and Logging

Implement comprehensive logging to track rate limiting behavior:

#!/bin/bash
log_request() {
    local url=$1
    local http_code=$2
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')

    echo "[$timestamp] $url - HTTP $http_code" >> rate_limit.log

    if [ "$http_code" -eq 429 ]; then
        echo "[$timestamp] RATE LIMITED: $url" >> rate_limit_errors.log
    fi
}

# Enhanced request function with logging
monitored_request() {
    local url=$1

    start_time=$(date +%s.%N)
    http_code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
    end_time=$(date +%s.%N)

    duration=$(echo "$end_time - $start_time" | bc)
    log_request "$url" "$http_code"

    echo "Request to $url completed in ${duration}s with HTTP $http_code"
}
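The log format above is easy to analyze after the fact. For example, a sketch that tallies successful versus rate-limited requests (it assumes the exact line format written by `log_request`):

```shell
#!/bin/bash
summarize_log() {
    local logfile=$1
    awk '
        /HTTP 429/ { limited++ }
        /HTTP 200/ { ok++ }
        END { printf "ok=%d rate_limited=%d\n", ok, limited }
    ' "$logfile"
}

# Usage:
#   summarize_log rate_limit.log
```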

JavaScript Implementation for Client-Side Rate Limiting

For web applications using cURL-like functionality through fetch or XMLHttpRequest:

class RateLimiter {
    constructor(maxRequests = 10, timeWindow = 60000) {
        this.maxRequests = maxRequests;
        this.timeWindow = timeWindow;
        this.requests = [];
    }

    async makeRequest(url, options = {}) {
        await this.waitForSlot();

        const now = Date.now();
        this.requests.push(now);

        try {
            const response = await fetch(url, options);

            if (response.status === 429) {
                const retryAfter = response.headers.get('retry-after');
                if (retryAfter) {
                    await this.delay(parseInt(retryAfter) * 1000);
                    return this.makeRequest(url, options);
                }
            }

            return response;
        } catch (error) {
            console.error('Request failed:', error);
            throw error;
        }
    }

    async waitForSlot() {
        const now = Date.now();
        const windowStart = now - this.timeWindow;

        // Remove old requests
        this.requests = this.requests.filter(time => time > windowStart);

        if (this.requests.length >= this.maxRequests) {
            const oldestRequest = Math.min(...this.requests);
            const waitTime = oldestRequest + this.timeWindow - now;
            await this.delay(waitTime);
            return this.waitForSlot();
        }
    }

    delay(ms) {
        return new Promise(resolve => setTimeout(resolve, ms));
    }
}

// Usage
const rateLimiter = new RateLimiter(50, 60000); // 50 requests per minute

async function fetchWithRateLimit(url) {
    try {
        const response = await rateLimiter.makeRequest(url);
        return await response.json();
    } catch (error) {
        console.error('Failed to fetch:', error);
    }
}

Python Implementation for Server-Side Rate Limiting

For Python applications that need to make HTTP requests with rate limiting:

import time
import requests
from collections import deque
from typing import Optional, Dict, Any

class RateLimitedClient:
    def __init__(self, max_requests: int = 100, time_window: int = 60):
        self.max_requests = max_requests
        self.time_window = time_window
        self.requests = deque()

    def _wait_for_slot(self):
        now = time.time()
        window_start = now - self.time_window

        # Remove old requests
        while self.requests and self.requests[0] <= window_start:
            self.requests.popleft()

        if len(self.requests) >= self.max_requests:
            sleep_time = self.requests[0] + self.time_window - now
            if sleep_time > 0:
                time.sleep(sleep_time)
                return self._wait_for_slot()

    def request(self, method: str, url: str, **kwargs) -> requests.Response:
        self._wait_for_slot()

        now = time.time()
        self.requests.append(now)

        max_retries = 3
        base_delay = 1

        for attempt in range(max_retries):
            try:
                response = requests.request(method, url, **kwargs)

                if response.status_code == 429:
                    retry_after = response.headers.get('retry-after')
                    if retry_after:
                        time.sleep(int(retry_after))
                    else:
                        delay = base_delay * (2 ** attempt)
                        time.sleep(delay)
                    continue

                return response

            except requests.RequestException:
                if attempt == max_retries - 1:
                    raise  # re-raise the original exception with its traceback
                delay = base_delay * (2 ** attempt)
                time.sleep(delay)

        raise Exception("Max retries exceeded")

    def get(self, url: str, **kwargs) -> requests.Response:
        return self.request('GET', url, **kwargs)

    def post(self, url: str, **kwargs) -> requests.Response:
        return self.request('POST', url, **kwargs)

# Usage
client = RateLimitedClient(max_requests=50, time_window=60)

try:
    response = client.get('https://api.example.com/data')
    data = response.json()
    print(data)
except Exception as e:
    print(f"Request failed: {e}")

Best Practices

  1. Respect robots.txt and Terms of Service: Always check the target website's policies
  2. Use appropriate User-Agent headers: Identify your application properly
  3. Implement graceful degradation: Handle rate limits without crashing
  4. Monitor your usage: Track requests to stay within limits
  5. Cache responses: Avoid repeated requests for the same data

# Example with proper headers and caching
curl -H "User-Agent: MyApp/1.0 (contact@example.com)" \
     -H "Accept: application/json" \
     --compressed \
     -z "cache_file.json" \
     -o "cache_file.json" \
     "https://api.example.com/data"

For more complex scenarios, such as JavaScript-heavy websites, consider a headless browser like Puppeteer, and implement proper timeout handling to keep your scripts reliable.

Conclusion

Handling rate limiting with cURL requires a combination of proper error detection, intelligent retry logic, and respectful request patterns. By implementing the strategies outlined above, you can build robust scripts that work reliably with rate-limited APIs while maintaining good relationships with service providers.

Remember to always test your rate limiting logic thoroughly and monitor your application's behavior in production to ensure optimal performance and compliance with API terms of service.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
