How to Handle Rate Limiting When Using MechanicalSoup

Rate limiting is a crucial aspect of responsible web scraping with MechanicalSoup. Implementing proper rate limiting protects both your scraper and the target server from being overwhelmed, helps avoid IP blocks, and ensures sustainable long-term scraping operations.

Understanding Rate Limiting

Rate limiting refers to controlling the frequency of requests sent to a web server. Most websites implement rate limiting to prevent abuse and maintain service quality for all users. When scraping with MechanicalSoup, failing to respect these limits can result in:

  • HTTP 429 (Too Many Requests) errors
  • IP address blocking
  • CAPTCHA challenges
  • Temporary or permanent access restrictions
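
The HTTP 429 case is visible directly on the response object MechanicalSoup returns, since browser.open() gives back a requests.Response. A minimal check (the URL is a placeholder) might look like this:

import mechanicalsoup

browser = mechanicalsoup.StatefulBrowser()
response = browser.open('https://example.com/page')  # placeholder URL

# MechanicalSoup does not raise on HTTP error statuses by default, so
# inspect the status code and headers yourself
if response.status_code == 429:
    print("Rate limited. Retry-After:", response.headers.get('Retry-After'))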

Basic Rate Limiting with Time Delays

The simplest approach to rate limiting in MechanicalSoup is adding delays between requests using Python's time.sleep() function:

import mechanicalsoup
import time

# Create browser instance
browser = mechanicalsoup.StatefulBrowser()
browser.set_user_agent('Mozilla/5.0 (compatible; ResponsibleBot/1.0)')

urls = [
    'https://example.com/page1',
    'https://example.com/page2',
    'https://example.com/page3'
]

for url in urls:
    try:
        # Open the page (browser.open() returns a requests.Response)
        browser.open(url)

        # Process the page content
        soup = browser.get_current_page()
        title = soup.find('title').get_text()
        print(f"Page title: {title}")

        # Wait 2 seconds before the next request
        time.sleep(2)

    except Exception as e:
        print(f"Error processing {url}: {e}")
        time.sleep(5)  # Longer delay on errors

Implementing Exponential Backoff

For more sophisticated rate limiting, implement exponential backoff that increases delay times when encountering rate limit errors:

import mechanicalsoup
import time
import random
from requests.exceptions import HTTPError

class RateLimitedScraper:
    def __init__(self, base_delay=1, max_delay=60, backoff_factor=2):
        self.browser = mechanicalsoup.StatefulBrowser()
        self.browser.set_user_agent('Mozilla/5.0 (compatible; ResponsibleBot/1.0)')
        self.base_delay = base_delay
        self.max_delay = max_delay
        self.backoff_factor = backoff_factor
        self.current_delay = base_delay

    def fetch_with_retry(self, url, max_retries=3):
        for attempt in range(max_retries):
            try:
                # Add jitter to avoid thundering herd
                jitter = random.uniform(0.5, 1.5)
                time.sleep(self.current_delay * jitter)

                response = self.browser.open(url)

                # Check for rate limiting
                if response.status_code == 429:
                    retry_after = response.headers.get('Retry-After')
                    wait_time = int(retry_after) if retry_after and retry_after.isdigit() else self.current_delay
                    print(f"Rate limited. Waiting {wait_time} seconds...")
                    time.sleep(wait_time)
                    self._increase_delay()
                    continue

                # Reset delay on successful request
                self.current_delay = self.base_delay
                return response

            except HTTPError as e:
                if e.response.status_code == 429:
                    self._increase_delay()
                    print(f"Rate limit hit. Retrying in {self.current_delay} seconds...")
                    time.sleep(self.current_delay)
                else:
                    raise e
            except Exception as e:
                print(f"Attempt {attempt + 1} failed: {e}")
                if attempt == max_retries - 1:
                    raise e
                time.sleep(self.current_delay)

        raise Exception(f"Failed to fetch {url} after {max_retries} attempts")

    def _increase_delay(self):
        self.current_delay = min(self.current_delay * self.backoff_factor, self.max_delay)

# Usage example
scraper = RateLimitedScraper(base_delay=1, max_delay=30)

urls = ['https://example.com/page1', 'https://example.com/page2']
for url in urls:
    try:
        response = scraper.fetch_with_retry(url)
        soup = scraper.browser.get_current_page()
        # Process the content
        print(f"Successfully scraped: {url}")
    except Exception as e:
        print(f"Failed to scrape {url}: {e}")

Respecting robots.txt and Server Policies

Always check and respect the website's robots.txt file and any explicit rate limiting policies:

import mechanicalsoup
import urllib.robotparser
import time
from urllib.parse import urljoin, urlparse

class ResponsibleScraper:
    def __init__(self, user_agent='*', default_delay=1):
        self.browser = mechanicalsoup.StatefulBrowser()
        self.browser.set_user_agent('Mozilla/5.0 (compatible; ResponsibleBot/1.0)')
        self.user_agent = user_agent
        self.default_delay = default_delay
        self.robots_cache = {}

    def can_fetch(self, url):
        """Check if we're allowed to fetch the URL according to robots.txt"""
        parsed_url = urlparse(url)
        base_url = f"{parsed_url.scheme}://{parsed_url.netloc}"

        if base_url not in self.robots_cache:
            robots_url = urljoin(base_url, '/robots.txt')
            rp = urllib.robotparser.RobotFileParser()
            rp.set_url(robots_url)
            try:
                rp.read()
                self.robots_cache[base_url] = rp
            except Exception:
                # If robots.txt can't be read, assume we can fetch
                self.robots_cache[base_url] = None

        rp = self.robots_cache[base_url]
        if rp:
            return rp.can_fetch(self.user_agent, url)
        return True

    def get_crawl_delay(self, url):
        """Get the crawl delay specified in robots.txt"""
        parsed_url = urlparse(url)
        base_url = f"{parsed_url.scheme}://{parsed_url.netloc}"

        rp = self.robots_cache.get(base_url)
        if rp:
            delay = rp.crawl_delay(self.user_agent)
            return delay if delay else self.default_delay
        return self.default_delay

    def scrape_url(self, url):
        if not self.can_fetch(url):
            print(f"Robots.txt disallows fetching: {url}")
            return None

        delay = self.get_crawl_delay(url)
        time.sleep(delay)

        try:
            response = self.browser.open(url)
            return self.browser.get_current_page()
        except Exception as e:
            print(f"Error scraping {url}: {e}")
            return None

# Usage
scraper = ResponsibleScraper(default_delay=2)
soup = scraper.scrape_url('https://example.com/page')
if soup:
    title = soup.find('title').get_text()
    print(f"Page title: {title}")

Advanced Rate Limiting with Token Bucket

For more sophisticated scenarios, implement a token bucket algorithm:

import mechanicalsoup
import time
import threading

class TokenBucket:
    def __init__(self, capacity, refill_rate):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_rate = refill_rate
        self.last_refill = time.time()
        self.lock = threading.Lock()

    def consume(self, tokens=1):
        with self.lock:
            now = time.time()
            # Add tokens based on time elapsed
            tokens_to_add = (now - self.last_refill) * self.refill_rate
            self.tokens = min(self.capacity, self.tokens + tokens_to_add)
            self.last_refill = now

            if self.tokens >= tokens:
                self.tokens -= tokens
                return True
            return False

    def wait_for_token(self, tokens=1):
        while not self.consume(tokens):
            time.sleep(0.1)

class RateLimitedMechanicalSoup:
    def __init__(self, requests_per_second=1):
        self.browser = mechanicalsoup.StatefulBrowser()
        self.browser.set_user_agent('Mozilla/5.0 (compatible; ResponsibleBot/1.0)')
        self.bucket = TokenBucket(capacity=10, refill_rate=requests_per_second)

    def get(self, url):
        self.bucket.wait_for_token()
        return self.browser.open(url)

# Usage
rate_limited_browser = RateLimitedMechanicalSoup(requests_per_second=0.5)  # 1 request per 2 seconds

urls = ['https://example.com/page1', 'https://example.com/page2', 'https://example.com/page3']
for url in urls:
    try:
        response = rate_limited_browser.get(url)
        soup = rate_limited_browser.browser.get_current_page()
        print(f"Scraped: {url}")
    except Exception as e:
        print(f"Error: {e}")

Handling Rate Limit Headers

Many websites provide rate limit information in HTTP headers. Parse these headers to implement dynamic rate limiting:

import mechanicalsoup
import time

class HeaderAwareScraper:
    def __init__(self):
        self.browser = mechanicalsoup.StatefulBrowser()
        self.browser.set_user_agent('Mozilla/5.0 (compatible; ResponsibleBot/1.0)')

    def scrape_with_header_awareness(self, url):
        response = self.browser.open(url)

        # Check common rate limit headers
        rate_limit_remaining = response.headers.get('X-RateLimit-Remaining')
        rate_limit_reset = response.headers.get('X-RateLimit-Reset')
        retry_after = response.headers.get('Retry-After')

        if rate_limit_remaining and int(rate_limit_remaining) < 5:
            if rate_limit_reset:
                reset_time = int(rate_limit_reset)
                current_time = int(time.time())
                wait_time = max(0, reset_time - current_time)
                print(f"Rate limit nearly exceeded. Waiting {wait_time} seconds...")
                time.sleep(wait_time)

        # Retry-After may be a number of seconds or an HTTP date; only the
        # numeric form is handled here
        if retry_after and retry_after.isdigit():
            wait_time = int(retry_after)
            print(f"Server requested delay: {wait_time} seconds")
            time.sleep(wait_time)

        return self.browser.get_current_page()

# Usage
scraper = HeaderAwareScraper()
soup = scraper.scrape_with_header_awareness('https://api.example.com/data')

Best Practices for Rate Limiting

1. Start Conservative

Begin with longer delays and gradually optimize based on the target website's behavior and response times.
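
As a rough sketch of this idea (the delays and the every-10-requests cadence are assumed values, not recommendations for any particular site), start high and only shorten the delay after a sustained run of clean responses:

import mechanicalsoup
import time

browser = mechanicalsoup.StatefulBrowser()
delay = 5.0       # generous starting delay (assumed value)
MIN_DELAY = 1.0   # floor the delay never drops below (assumed value)

urls = [f'https://example.com/page{i}' for i in range(1, 51)]  # placeholder URLs
for i, url in enumerate(urls, start=1):
    response = browser.open(url)
    time.sleep(delay)
    # Shorten the delay only after batches of successful responses
    if i % 10 == 0 and response.ok:
        delay = max(MIN_DELAY, delay * 0.9)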

2. Monitor Server Response

Watch for signs of rate limiting such as slower response times, error messages, or changes in content structure.
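
One way to sketch this (the 3x-baseline threshold and 60-second cap are arbitrary assumptions) is to track how long each response takes via requests' response.elapsed and widen the delay when the server slows down or starts returning errors:

import mechanicalsoup
import time

browser = mechanicalsoup.StatefulBrowser()
delay = 2.0       # current delay between requests (assumed starting value)
baseline = None   # typical response time observed so far

urls = ['https://example.com/page1', 'https://example.com/page2']  # placeholders
for url in urls:
    response = browser.open(url)
    elapsed = response.elapsed.total_seconds()
    baseline = elapsed if baseline is None else baseline
    # Slow responses or error statuses are early signs of throttling
    if response.status_code >= 400 or elapsed > 3 * baseline:
        delay = min(delay * 2, 60)
        print(f"Server looks strained (status {response.status_code}, "
              f"{elapsed:.2f}s); delay increased to {delay}s")
    time.sleep(delay)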

3. Use Session Management

When dealing with authentication or session-based sites, similar to how browser sessions are handled in Puppeteer, maintain consistent session state while respecting rate limits.
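
MechanicalSoup's StatefulBrowser already wraps a single requests.Session, so the key is to reuse one browser instance (and its cookies) rather than creating a new one per URL. A hedged sketch, with a hypothetical login page and field names:

import mechanicalsoup
import time

# One StatefulBrowser = one underlying requests.Session, so the login
# cookie persists across every request made through this instance
browser = mechanicalsoup.StatefulBrowser()
browser.open('https://example.com/login')   # hypothetical login page
browser.select_form('form')                 # assumes the login form is the first form
browser['username'] = 'your_username'       # hypothetical field names
browser['password'] = 'your_password'
browser.submit_selected()

protected_urls = ['https://example.com/account', 'https://example.com/orders']  # placeholders
for url in protected_urls:
    time.sleep(2)   # rate limiting still applies to authenticated requests
    browser.open(url)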

4. Implement Graceful Degradation

When rate limits are hit, gracefully back off rather than aggressively retrying.
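
A small sketch of this (the 300-second default cool-off is an assumed value): when a domain returns 429, record a cool-off deadline for it and skip further requests to that domain until the deadline passes, instead of retrying in a tight loop:

import mechanicalsoup
import time
from urllib.parse import urlparse

browser = mechanicalsoup.StatefulBrowser()
cooloff_until = {}   # domain -> timestamp before which we leave it alone

def polite_fetch(url, default_cooloff=300):
    """Fetch a URL, but skip its domain entirely while it is cooling off."""
    domain = urlparse(url).netloc
    if time.time() < cooloff_until.get(domain, 0):
        print(f"Skipping {url}: {domain} is cooling off")
        return None
    response = browser.open(url)
    if response.status_code == 429:
        retry_after = response.headers.get('Retry-After')
        wait = int(retry_after) if retry_after and retry_after.isdigit() else default_cooloff
        cooloff_until[domain] = time.time() + wait
        print(f"Backing off {domain} for {wait} seconds")
        return None
    return response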

Conclusion

Proper rate limiting is essential for sustainable web scraping with MechanicalSoup. By implementing delays, exponential backoff, respecting robots.txt, and monitoring server responses, you can build robust scrapers that maintain good relationships with target websites. Remember that the goal is not just to avoid blocks, but to be a responsible web citizen.

For complex scenarios involving dynamic content, consider complementary approaches like handling AJAX requests using Puppeteer for sites where MechanicalSoup's stateful browsing capabilities need enhancement.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
