What is the Firecrawl Rate Limit and How Do I Manage It?

When using Firecrawl for web scraping, understanding and managing rate limits is essential to ensure your applications run smoothly without interruptions. Rate limits control how many API requests you can make within a given time period, preventing abuse and ensuring fair usage across all users.

This comprehensive guide covers everything you need to know about Firecrawl rate limits, including how to check your current limits, implement proper rate limiting strategies, and optimize your scraping workflows to stay within quota.

Understanding Firecrawl Rate Limits

Firecrawl implements rate limiting at multiple levels to ensure optimal performance and fair resource allocation. The specific rate limits depend on your subscription plan and the type of operation you're performing.

Types of Rate Limits

Firecrawl enforces several types of rate limits:

  1. Requests Per Minute (RPM) - Maximum number of API calls you can make per minute
  2. Concurrent Requests - Maximum number of simultaneous requests
  3. Monthly Credits - Total number of API credits allocated per billing cycle
  4. Crawl Page Limits - Maximum pages you can crawl in a single crawl job

Typical Rate Limit Tiers

While specific limits vary by plan, here's a general overview:

# Example rate limit structure (check your plan for exact values)
FREE_TIER = {
    'requests_per_minute': 5,
    'concurrent_requests': 1,
    'monthly_credits': 500
}

STARTER_TIER = {
    'requests_per_minute': 30,
    'concurrent_requests': 3,
    'monthly_credits': 10000
}

PROFESSIONAL_TIER = {
    'requests_per_minute': 100,
    'concurrent_requests': 10,
    'monthly_credits': 50000
}
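
To turn a requests-per-minute figure into client-side pacing, divide 60 seconds by the RPM value and leave at least that much time between calls. Here is a minimal sketch using the illustrative 30 RPM starter figure above (the pacing_interval helper is just for this example, not part of the SDK):

import time

def pacing_interval(requests_per_minute):
    """Minimum delay between requests that keeps you under an RPM limit."""
    return 60.0 / requests_per_minute

# Using the illustrative 30 RPM starter figure from above
interval = pacing_interval(30)
print(f"Send at most one request every {interval:.1f} seconds")  # 2.0 seconds

# Spacing requests by that interval keeps you within the per-minute limit
for url in ['https://example.com/a', 'https://example.com/b']:
    # scrape the url here...
    time.sleep(interval)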

Checking Your Current Rate Limits

Using the Firecrawl Dashboard

The easiest way to check your rate limits is through the Firecrawl dashboard:

  1. Log in to your Firecrawl account at firecrawl.dev
  2. Navigate to the "API" or "Usage" section
  3. View your current plan limits and usage statistics

Programmatic Rate Limit Checking

You can also check your current usage programmatically by monitoring API response headers:

from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key='your_api_key')

# Make a request and look for rate limit details in the result
def check_rate_limits():
    try:
        response = app.scrape_url('https://example.com')

        # Note: the SDK returns scraped content; rate limit fields are only
        # present if your SDK version surfaces them. Check the API documentation
        # for the exact header/field names.
        print("Rate limit info:")
        print(f"Requests remaining: {response.get('rate_limit_remaining', 'N/A')}")
        print(f"Rate limit reset: {response.get('rate_limit_reset', 'N/A')}")

        return response
    except Exception as e:
        print(f"Error checking rate limits: {e}")

check_rate_limits()

In JavaScript/Node.js:

import FirecrawlApp from '@mendable/firecrawl-js';

const app = new FirecrawlApp({ apiKey: 'your_api_key' });

async function checkRateLimits() {
  try {
    const response = await app.scrapeUrl('https://example.com');

    // Inspect response metadata
    console.log('Rate limit info:', response.metadata);

    return response;
  } catch (error) {
    if (error.response?.status === 429) {
      console.log('Rate limit exceeded!');
      console.log('Retry after:', error.response.headers['retry-after']);
    }
    throw error;
  }
}

checkRateLimits();
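
The SDK responses above may not expose HTTP headers at all, so if you need the raw rate limit headers you can call the REST endpoint directly. Below is a minimal sketch using the Python requests library against the v1 scrape endpoint; the header names matched here are an assumption, so print whatever the API actually returns and compare with the documentation:

import requests

API_KEY = 'your_api_key'

def inspect_rate_limit_headers(url):
    # Call the v1 scrape endpoint directly so the raw response headers are visible
    response = requests.post(
        'https://api.firecrawl.dev/v1/scrape',
        headers={'Authorization': f'Bearer {API_KEY}'},
        json={'url': url},
        timeout=60,
    )

    # Header names are an assumption - print any that look rate-limit related
    for name, value in response.headers.items():
        if 'ratelimit' in name.lower() or name.lower() == 'retry-after':
            print(f"{name}: {value}")

    if response.status_code == 429:
        print("Rate limited - wait for the Retry-After period before sending more requests")

    return response

inspect_rate_limit_headers('https://example.com')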

Implementing Rate Limiting in Your Application

Python Rate Limiting Implementation

Here's a robust rate limiter implementation for Python:

from firecrawl import FirecrawlApp
import time
from datetime import datetime, timedelta
from collections import deque

class RateLimitedFirecrawl:
    """Firecrawl wrapper with built-in rate limiting"""

    def __init__(self, api_key, requests_per_minute=30, max_concurrent=3):
        self.app = FirecrawlApp(api_key=api_key)
        self.requests_per_minute = requests_per_minute
        self.max_concurrent = max_concurrent

        # Track request timestamps
        self.request_times = deque()
        self.concurrent_requests = 0

    def _wait_if_needed(self):
        """Wait if we're approaching rate limits"""
        now = datetime.now()

        # Remove requests older than 1 minute
        cutoff_time = now - timedelta(minutes=1)
        while self.request_times and self.request_times[0] < cutoff_time:
            self.request_times.popleft()

        # Check if we need to wait
        if len(self.request_times) >= self.requests_per_minute:
            # Calculate wait time until oldest request expires
            wait_until = self.request_times[0] + timedelta(minutes=1)
            wait_seconds = (wait_until - now).total_seconds()

            if wait_seconds > 0:
                print(f"Rate limit approaching. Waiting {wait_seconds:.2f} seconds...")
                time.sleep(wait_seconds + 0.1)  # Add small buffer
                self._wait_if_needed()  # Recursive check

    def scrape_url(self, url, params=None):
        """Scrape URL with rate limiting"""
        # Wait if necessary
        self._wait_if_needed()

        # Wait for a concurrent request slot (only meaningful when this wrapper
        # is shared across threads; the plain counter is not thread-safe)
        while self.concurrent_requests >= self.max_concurrent:
            print("Max concurrent requests reached. Waiting...")
            time.sleep(1)

        try:
            self.concurrent_requests += 1
            self.request_times.append(datetime.now())

            result = self.app.scrape_url(url, params=params)
            return result

        except Exception as e:
            if '429' in str(e) or 'rate limit' in str(e).lower():
                print("Rate limit error received. Implementing exponential backoff...")
                time.sleep(60)  # Wait 1 minute before retry
                return self.scrape_url(url, params)  # Retry
            raise

        finally:
            self.concurrent_requests -= 1

# Usage
rate_limited_app = RateLimitedFirecrawl(
    api_key='your_api_key',
    requests_per_minute=30,
    max_concurrent=3
)

# Scrape multiple URLs without exceeding rate limits
urls = [
    'https://example.com/page1',
    'https://example.com/page2',
    'https://example.com/page3'
]

for url in urls:
    result = rate_limited_app.scrape_url(url)
    print(f"Scraped: {url}")

JavaScript Rate Limiting Implementation

Here's an equivalent implementation for JavaScript/Node.js:

import FirecrawlApp from '@mendable/firecrawl-js';

class RateLimitedFirecrawl {
  constructor(apiKey, requestsPerMinute = 30, maxConcurrent = 3) {
    this.app = new FirecrawlApp({ apiKey });
    this.requestsPerMinute = requestsPerMinute;
    this.maxConcurrent = maxConcurrent;

    this.requestTimes = [];
    this.concurrentRequests = 0;
  }

  async _waitIfNeeded() {
    const now = Date.now();
    const oneMinuteAgo = now - 60000;

    // Remove requests older than 1 minute
    this.requestTimes = this.requestTimes.filter(time => time > oneMinuteAgo);

    // Check if we need to wait
    if (this.requestTimes.length >= this.requestsPerMinute) {
      const oldestRequest = this.requestTimes[0];
      const waitMs = (oldestRequest + 60000) - now + 100; // Add 100ms buffer

      if (waitMs > 0) {
        console.log(`Rate limit approaching. Waiting ${(waitMs/1000).toFixed(2)} seconds...`);
        await new Promise(resolve => setTimeout(resolve, waitMs));
        return this._waitIfNeeded(); // Recursive check
      }
    }
  }

  async _waitForConcurrentSlot() {
    while (this.concurrentRequests >= this.maxConcurrent) {
      console.log('Max concurrent requests reached. Waiting...');
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
  }

  async scrapeUrl(url, params = {}) {
    // Wait if necessary
    await this._waitIfNeeded();
    await this._waitForConcurrentSlot();

    try {
      this.concurrentRequests++;
      this.requestTimes.push(Date.now());

      const result = await this.app.scrapeUrl(url, params);
      return result;

    } catch (error) {
      if (error.response?.status === 429 || error.message.includes('rate limit')) {
        console.log('Rate limit error received. Waiting 60 seconds before retrying...');
        await new Promise(resolve => setTimeout(resolve, 60000)); // Flat 60-second pause before one retry
        return this.scrapeUrl(url, params); // Retry
      }
      throw error;

    } finally {
      this.concurrentRequests--;
    }
  }
}

// Usage
const rateLimitedApp = new RateLimitedFirecrawl(
  'your_api_key',
  30,  // requests per minute
  3    // max concurrent
);

// Scrape multiple URLs without exceeding rate limits
const urls = [
  'https://example.com/page1',
  'https://example.com/page2',
  'https://example.com/page3'
];

for (const url of urls) {
  const result = await rateLimitedApp.scrapeUrl(url);
  console.log(`Scraped: ${url}`);
}

Handling Rate Limit Errors

Detecting Rate Limit Errors

When you exceed rate limits, Firecrawl returns a 429 HTTP status code. Here's how to handle it gracefully:

from firecrawl import FirecrawlApp
import time

app = FirecrawlApp(api_key='your_api_key')

def scrape_with_backoff(url, max_retries=5):
    """Scrape with increasing backoff delays on rate limit errors"""

    retry_delays = [1, 2, 5, 10, 30][:max_retries]  # Increasing delays in seconds

    for attempt, delay in enumerate(retry_delays):
        try:
            result = app.scrape_url(url)
            return result

        except Exception as e:
            error_message = str(e)

            # Check if it's a rate limit error
            if '429' in error_message or 'rate limit' in error_message.lower():
                if attempt < len(retry_delays) - 1:
                    print(f"Rate limit hit. Waiting {delay} seconds before retry {attempt + 1}...")
                    time.sleep(delay)
                else:
                    print("Max retries reached due to rate limiting")
                    raise
            else:
                # Different error, don't retry
                raise

    raise Exception("Failed to scrape after all retries")

# Usage
try:
    data = scrape_with_backoff('https://example.com')
    print("Success:", data)
except Exception as e:
    print(f"Failed: {e}")

Implementing Retry-After Headers

When rate limited, APIs often return a Retry-After header indicating how long to wait before retrying. Honor that value whenever it is present:

import FirecrawlApp from '@mendable/firecrawl-js';

const app = new FirecrawlApp({ apiKey: 'your_api_key' });

async function scrapeWithRetryAfter(url, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await app.scrapeUrl(url);

    } catch (error) {
      if (error.response?.status === 429) {
        // Give up before waiting if this was the final allowed attempt
        if (attempt === maxRetries - 1) {
          throw new Error('Max retries reached due to rate limiting');
        }

        const retryAfter = error.response.headers['retry-after'];
        const waitSeconds = retryAfter ? parseInt(retryAfter, 10) : Math.pow(2, attempt) * 5;

        console.log(`Rate limited. Waiting ${waitSeconds} seconds (attempt ${attempt + 1})...`);
        await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
      } else {
        throw error;
      }
    }
  }
}

// Usage
try {
  const data = await scrapeWithRetryAfter('https://example.com');
  console.log('Success:', data);
} catch (error) {
  console.error('Failed:', error);
}

Optimizing Your Scraping Strategy

Batch Processing with Rate Limiting

When scraping many URLs, batch processing with proper rate limiting is essential:

from firecrawl import FirecrawlApp
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

app = FirecrawlApp(api_key='your_api_key')

def batch_scrape_urls(urls, batch_size=5, delay_between_batches=12):
    """
    Scrape URLs in batches to respect rate limits.

    For 30 RPM limit: batch_size=5, delay=12 means 5 requests every 12 seconds
    This equals 25 RPM, leaving a safety margin.
    """
    results = []

    # Split URLs into batches
    for i in range(0, len(urls), batch_size):
        batch = urls[i:i + batch_size]
        batch_start_time = time.time()

        print(f"Processing batch {i//batch_size + 1} ({len(batch)} URLs)...")

        # Process batch concurrently
        with ThreadPoolExecutor(max_workers=batch_size) as executor:
            futures = {
                executor.submit(app.scrape_url, url): url
                for url in batch
            }

            for future in as_completed(futures):
                url = futures[future]
                try:
                    result = future.result()
                    results.append({'url': url, 'data': result, 'success': True})
                    print(f"✓ Scraped: {url}")
                except Exception as e:
                    results.append({'url': url, 'error': str(e), 'success': False})
                    print(f"✗ Failed: {url} - {e}")

        # Wait before next batch
        batch_duration = time.time() - batch_start_time
        remaining_wait = delay_between_batches - batch_duration

        if remaining_wait > 0 and i + batch_size < len(urls):
            print(f"Waiting {remaining_wait:.2f}s before next batch...")
            time.sleep(remaining_wait)

    return results

# Usage
urls_to_scrape = [
    'https://example.com/page1',
    'https://example.com/page2',
    'https://example.com/page3',
    # ... more URLs
]

results = batch_scrape_urls(urls_to_scrape, batch_size=5, delay_between_batches=12)

# Analyze results
successful = sum(1 for r in results if r['success'])
print(f"\nCompleted: {successful}/{len(results)} successful")

Using Crawl Jobs for Efficient Scraping

Instead of making individual requests for each page, use Firecrawl's crawl feature to let it handle rate limiting internally:

from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key='your_api_key')

# Let Firecrawl manage rate limiting during crawls
crawl_result = app.crawl_url(
    'https://example.com',
    params={
        'limit': 100,  # Maximum pages to crawl
        'scrapeOptions': {
            'formats': ['markdown'],
            'onlyMainContent': True
        }
    },
    poll_interval=5  # Check status every 5 seconds
)

print(f"Crawled {len(crawl_result['data'])} pages")

This approach is more efficient as Firecrawl handles rate limiting, retries, and pagination automatically.
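
If you would rather start the crawl and check on it at your own pace, the SDK also provides a status-check call. The sketch below assumes the wait_until_done/check_crawl_status interface used later in this guide; method names and response fields vary between SDK versions, so treat the id/status/data keys as assumptions:

import time
from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key='your_api_key')

# Start the crawl without blocking (parameter names vary between SDK versions)
job = app.crawl_url(
    'https://example.com',
    params={'limit': 100},
    wait_until_done=False
)
job_id = job.get('id') or job.get('jobId')  # key name differs between versions

# Check status at a gentle pace so polling does not eat into your request budget
while True:
    status = app.check_crawl_status(job_id)
    state = status.get('status')
    print(f"Crawl status: {state}")
    if state in ('completed', 'failed'):
        break
    time.sleep(30)  # one status check every 30 seconds

if state == 'completed':
    print(f"Crawled {len(status.get('data', []))} pages")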

Monitoring and Analytics

Building a Usage Monitor

Track your API usage to avoid unexpected rate limit errors:

import json
from datetime import datetime
from firecrawl import FirecrawlApp

class FirecrawlUsageMonitor:
    """Monitor Firecrawl API usage and rate limits"""

    def __init__(self, api_key, log_file='firecrawl_usage.json'):
        self.app = FirecrawlApp(api_key=api_key)
        self.log_file = log_file
        self.load_usage_data()

    def load_usage_data(self):
        """Load historical usage data"""
        try:
            with open(self.log_file, 'r') as f:
                self.usage_data = json.load(f)
        except FileNotFoundError:
            self.usage_data = {
                'requests': [],
                'total_requests': 0,
                'failed_requests': 0,
                'rate_limit_hits': 0
            }

    def save_usage_data(self):
        """Save usage data to file"""
        with open(self.log_file, 'w') as f:
            json.dump(self.usage_data, f, indent=2)

    def log_request(self, url, success, rate_limited=False):
        """Log an API request"""
        self.usage_data['requests'].append({
            'timestamp': datetime.now().isoformat(),
            'url': url,
            'success': success,
            'rate_limited': rate_limited
        })

        self.usage_data['total_requests'] += 1
        if not success:
            self.usage_data['failed_requests'] += 1
        if rate_limited:
            self.usage_data['rate_limit_hits'] += 1

        self.save_usage_data()

    def scrape_url(self, url, params=None):
        """Scrape with usage monitoring"""
        try:
            result = self.app.scrape_url(url, params=params)
            self.log_request(url, success=True)
            return result
        except Exception as e:
            rate_limited = '429' in str(e) or 'rate limit' in str(e).lower()
            self.log_request(url, success=False, rate_limited=rate_limited)
            raise

    def get_statistics(self):
        """Get usage statistics"""
        return {
            'total_requests': self.usage_data['total_requests'],
            'failed_requests': self.usage_data['failed_requests'],
            'rate_limit_hits': self.usage_data['rate_limit_hits'],
            'success_rate': (
                (self.usage_data['total_requests'] - self.usage_data['failed_requests']) /
                self.usage_data['total_requests'] * 100
                if self.usage_data['total_requests'] > 0 else 0
            )
        }

# Usage
monitor = FirecrawlUsageMonitor(api_key='your_api_key')

# Scrape with monitoring
result = monitor.scrape_url('https://example.com')

# Get statistics
stats = monitor.get_statistics()
print(f"Total requests: {stats['total_requests']}")
print(f"Rate limit hits: {stats['rate_limit_hits']}")
print(f"Success rate: {stats['success_rate']:.2f}%")

Best Practices for Rate Limit Management

1. Start with Conservative Limits

Always start with lower request rates than your plan allows to provide a safety buffer:

# If your plan allows 100 RPM, configure for 80-90 RPM
rate_limited_app = RateLimitedFirecrawl(
    api_key='your_api_key',
    requests_per_minute=80,  # 20% buffer below 100 RPM limit
    max_concurrent=8         # 2 below max if limit is 10
)

2. Implement Circuit Breaker Pattern

Stop making requests after consecutive rate limit errors to avoid wasting credits:

class CircuitBreaker {
  constructor(threshold = 5, resetTimeout = 60000) {
    this.failureCount = 0;
    this.threshold = threshold;
    this.resetTimeout = resetTimeout;
    this.state = 'CLOSED'; // CLOSED, OPEN, HALF_OPEN
    this.nextAttempt = Date.now();
  }

  async execute(fn) {
    if (this.state === 'OPEN') {
      if (Date.now() < this.nextAttempt) {
        throw new Error('Circuit breaker is OPEN. Too many rate limit errors.');
      }
      this.state = 'HALF_OPEN';
    }

    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure(error);
      throw error;
    }
  }

  onSuccess() {
    this.failureCount = 0;
    this.state = 'CLOSED';
  }

  onFailure(error) {
    if (error.response?.status === 429) {
      this.failureCount++;

      if (this.failureCount >= this.threshold) {
        this.state = 'OPEN';
        this.nextAttempt = Date.now() + this.resetTimeout;
        console.log(`Circuit breaker OPEN. Will retry after ${this.resetTimeout/1000}s`);
      }
    }
  }
}

// Usage
const breaker = new CircuitBreaker(5, 60000);
const app = new FirecrawlApp({ apiKey: 'your_api_key' });

async function scrapeWithBreaker(url) {
  return breaker.execute(() => app.scrapeUrl(url));
}

3. Use Webhooks for Large Crawl Jobs

For large-scale crawling, use webhooks instead of polling so you are notified when the job finishes without spending API calls on repeated status checks:

from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key='your_api_key')

# Start crawl with webhook notification
crawl_result = app.crawl_url(
    'https://example.com',
    params={
        'limit': 500,
        'webhook': 'https://your-domain.com/firecrawl-webhook'
    },
    wait_until_done=False  # Don't poll, wait for webhook
)

print(f"Crawl started with ID: {crawl_result['id']}")
print("Will be notified via webhook when complete")

4. Cache Results to Reduce API Calls

Implement caching to avoid re-scraping the same URLs:

import hashlib
import json
import os
from datetime import datetime, timedelta

from firecrawl import FirecrawlApp

class CachedFirecrawl:
    """Firecrawl with built-in caching"""

    def __init__(self, api_key, cache_dir='firecrawl_cache', cache_ttl_hours=24):
        self.app = FirecrawlApp(api_key=api_key)
        self.cache_dir = cache_dir
        self.cache_ttl = timedelta(hours=cache_ttl_hours)

        os.makedirs(cache_dir, exist_ok=True)

    def _get_cache_key(self, url, params):
        """Generate cache key from URL and params"""
        cache_input = f"{url}:{json.dumps(params, sort_keys=True)}"
        return hashlib.md5(cache_input.encode()).hexdigest()

    def _get_cache_path(self, cache_key):
        """Get file path for cache key"""
        return os.path.join(self.cache_dir, f"{cache_key}.json")

    def scrape_url(self, url, params=None):
        """Scrape with caching"""
        params = params or {}
        cache_key = self._get_cache_key(url, params)
        cache_path = self._get_cache_path(cache_key)

        # Check cache
        if os.path.exists(cache_path):
            with open(cache_path, 'r') as f:
                cached_data = json.load(f)

            cached_time = datetime.fromisoformat(cached_data['cached_at'])
            if datetime.now() - cached_time < self.cache_ttl:
                print(f"Cache hit for {url}")
                return cached_data['result']

        # Cache miss - fetch from API
        print(f"Cache miss for {url} - fetching from API")
        result = self.app.scrape_url(url, params=params)

        # Save to cache
        cache_data = {
            'url': url,
            'params': params,
            'result': result,
            'cached_at': datetime.now().isoformat()
        }

        with open(cache_path, 'w') as f:
            json.dump(cache_data, f)

        return result

# Usage - automatically uses cache when available
cached_app = CachedFirecrawl(
    api_key='your_api_key',
    cache_ttl_hours=24
)

result = cached_app.scrape_url('https://example.com')

Conclusion

Managing rate limits effectively is crucial for building reliable and efficient web scraping applications with Firecrawl. By implementing proper rate limiting strategies, error handling, monitoring, and optimization techniques, you can maximize your API usage while staying within your plan limits.

Key takeaways:

  • Always implement client-side rate limiting as a safety measure
  • Use exponential backoff when handling rate limit errors
  • Monitor your usage to identify patterns and optimize
  • Leverage Firecrawl's crawl feature for efficient multi-page scraping
  • Implement caching to reduce unnecessary API calls
  • Use circuit breakers to prevent cascading failures

With these strategies in place, your Firecrawl-powered web scraping applications will be robust, efficient, and respectful of rate limits, ensuring smooth operation even at scale.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
