How do I implement concurrent scraping with JavaScript while avoiding rate limits?

Concurrent scraping in JavaScript allows you to extract data from multiple sources simultaneously while respecting rate limits and avoiding server overload. This approach significantly improves scraping performance while maintaining ethical practices and preventing IP blocks.

Understanding Concurrent Scraping

Concurrent scraping involves running multiple scraping tasks simultaneously rather than sequentially. However, without proper rate limiting, concurrent requests can overwhelm target servers, leading to IP blocks, temporary bans, or degraded performance.

Key Benefits

  • Improved Performance: Reduce overall scraping time by processing multiple pages simultaneously
  • Better Resource Utilization: Make efficient use of network bandwidth and system resources
  • Scalability: Handle large-scale scraping operations more effectively
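Before layering on rate limiting, the core mechanic is a worker pool that caps how many tasks are in flight at once. A minimal sketch (names are illustrative; the sections below add rate limiting on top of this idea):

```javascript
// Minimal worker-pool sketch: run `task` over `items` with at most
// `limit` tasks in flight at once.
async function mapWithConcurrency(items, limit, task) {
  const results = new Array(items.length);
  let next = 0;
  const workers = Array.from({ length: Math.min(limit, items.length) }, async () => {
    while (next < items.length) {
      const i = next++; // claim the next index before awaiting
      results[i] = await task(items[i], i);
    }
  });
  await Promise.all(workers);
  return results;
}

mapWithConcurrency(['a', 'b', 'c'], 2, async (s) => s.toUpperCase())
  .then((out) => console.log(out)); // ['A', 'B', 'C']
```

Because each worker claims an index synchronously before awaiting, no two workers process the same item, and results stay in input order.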

Rate Limiting Strategies

1. Request Queue with Concurrency Control

Implement a queue system that controls the number of concurrent requests:

class RateLimitedScraper {
  constructor(maxConcurrent = 5, delayMs = 1000) {
    this.maxConcurrent = maxConcurrent;
    this.delayMs = delayMs;
    this.activeRequests = 0;
    this.queue = [];
    this.lastRequestTime = 0;
  }

  async scrape(url) {
    return new Promise((resolve, reject) => {
      this.queue.push({ url, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.activeRequests >= this.maxConcurrent || this.queue.length === 0) {
      return;
    }

    const { url, resolve, reject } = this.queue.shift();
    this.activeRequests++;

    try {
      // Ensure minimum delay between requests
      const timeSinceLastRequest = Date.now() - this.lastRequestTime;
      if (timeSinceLastRequest < this.delayMs) {
        await this.delay(this.delayMs - timeSinceLastRequest);
      }

      this.lastRequestTime = Date.now();
      const result = await this.fetchPage(url);
      resolve(result);
    } catch (error) {
      reject(error);
    } finally {
      this.activeRequests--;
      // Process next item in queue
      setTimeout(() => this.processQueue(), 0);
    }
  }

  async fetchPage(url) {
    const response = await fetch(url, {
      headers: {
        'User-Agent': 'Mozilla/5.0 (compatible; bot/1.0)',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
      }
    });

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }

    return await response.text();
  }

  delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

// Usage example
const scraper = new RateLimitedScraper(3, 2000); // 3 concurrent, 2 second delay

const urls = [
  'https://example.com/page1',
  'https://example.com/page2',
  'https://example.com/page3',
  // ... more URLs
];

Promise.all(urls.map(url => scraper.scrape(url)))
  .then(results => {
    console.log(`Scraped ${results.length} pages successfully`);
  })
  .catch(error => {
    console.error('Scraping failed:', error);
  });
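One caveat with Promise.all above: it rejects as soon as any single URL fails, discarding pages that already succeeded. When partial results are acceptable, Promise.allSettled keeps them. A sketch, using a hypothetical scrape stand-in so the snippet is self-contained (swap in scraper.scrape in practice):

```javascript
// Hypothetical stand-in for scraper.scrape(url); replace with the real scraper.
const scrape = async (url) => {
  if (url.includes('bad')) throw new Error(`failed: ${url}`);
  return `<html>${url}</html>`;
};

const pages = ['https://example.com/page1', 'https://example.com/bad', 'https://example.com/page2'];

Promise.allSettled(pages.map((url) => scrape(url))).then((settled) => {
  const succeeded = settled.filter((r) => r.status === 'fulfilled').map((r) => r.value);
  const failed = settled.filter((r) => r.status === 'rejected').map((r) => r.reason.message);
  console.log(`Scraped ${succeeded.length} pages, ${failed.length} failed`);
});
```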

2. Advanced Rate Limiting with Token Bucket

Implement a token bucket algorithm for more sophisticated rate limiting:

class TokenBucket {
  constructor(capacity, refillRate) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillRate = refillRate;
    this.lastRefill = Date.now();
  }

  async consume(tokens = 1) {
    this.refill();

    if (this.tokens >= tokens) {
      this.tokens -= tokens;
      return true;
    }

    // Wait for tokens to become available
    const waitTime = (tokens - this.tokens) * (1000 / this.refillRate);
    await new Promise(resolve => setTimeout(resolve, waitTime));
    return this.consume(tokens);
  }

  refill() {
    const now = Date.now();
    const timePassed = (now - this.lastRefill) / 1000;
    const tokensToAdd = Math.floor(timePassed * this.refillRate);

    this.tokens = Math.min(this.capacity, this.tokens + tokensToAdd);
    this.lastRefill = now;
  }
}
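For intuition, the refill step above adds floor(elapsedSeconds × refillRate) tokens, capped at capacity. A standalone trace of that arithmetic (the numbers are hypothetical):

```javascript
// Same arithmetic as TokenBucket.refill(), isolated as a pure function.
const capacity = 10;
const refillRate = 2; // tokens per second

const tokensAfter = (tokens, elapsedMs) =>
  Math.min(capacity, tokens + Math.floor((elapsedMs / 1000) * refillRate));

console.log(tokensAfter(3, 1500)); // 3 + floor(3) = 6
console.log(tokensAfter(9, 5000)); // 9 + 10, capped at 10
```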

class AdvancedScraper {
  constructor(tokensPerSecond = 2, burstCapacity = 10) {
    this.bucket = new TokenBucket(burstCapacity, tokensPerSecond);
    this.activeRequests = new Set();
  }

  async scrapeWithRateLimit(url) {
    await this.bucket.consume();

    const requestPromise = this.performScraping(url);
    this.activeRequests.add(requestPromise);

    try {
      const result = await requestPromise;
      return result;
    } finally {
      this.activeRequests.delete(requestPromise);
    }
  }

  async performScraping(url) {
    // Add random jitter to avoid thundering herd
    const jitter = Math.random() * 500;
    await new Promise(resolve => setTimeout(resolve, jitter));

    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    return await response.text();
  }
}

Puppeteer-Based Concurrent Scraping

For JavaScript-heavy sites, use Puppeteer with proper concurrency control. When implementing parallel page processing in Puppeteer, it's crucial to manage browser resources effectively:

const puppeteer = require('puppeteer');

class PuppeteerConcurrentScraper {
  constructor(options = {}) {
    this.maxConcurrent = options.maxConcurrent || 3;
    this.delayBetweenRequests = options.delay || 1500;
    this.browser = null;
    this.semaphore = new Semaphore(this.maxConcurrent);
  }

  async initialize() {
    this.browser = await puppeteer.launch({
      headless: true,
      args: ['--no-sandbox', '--disable-setuid-sandbox']
    });
  }

  async scrapePage(url) {
    await this.semaphore.acquire();
    let page;

    try {
      page = await this.browser.newPage();

      // Configure page for stealth
      await page.setUserAgent('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36');
      await page.setViewport({ width: 1366, height: 768 });

      // Navigate with timeout
      await page.goto(url, { 
        waitUntil: 'networkidle2',
        timeout: 30000 
      });

      // Extract data
      const data = await page.evaluate(() => {
        return {
          title: document.title,
          content: document.body.innerText.slice(0, 1000),
          links: Array.from(document.links).map(link => link.href)
        };
      });

      // Add delay between requests
      await this.delay(this.delayBetweenRequests);

      return { url, data, success: true };
    } catch (error) {
      return { url, error: error.message, success: false };
    } finally {
      // Close the page even if navigation or extraction threw
      if (page) {
        await page.close();
      }
      this.semaphore.release();
    }
  }

  async scrapeMultiple(urls) {
    await this.initialize();

    try {
      const results = await Promise.allSettled(
        urls.map(url => this.scrapePage(url))
      );

      return results.map(result => 
        result.status === 'fulfilled' ? result.value : result.reason
      );
    } finally {
      if (this.browser) {
        await this.browser.close();
      }
    }
  }

  delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

// Semaphore implementation for concurrency control
class Semaphore {
  constructor(count) {
    this.count = count;
    this.waiting = [];
  }

  async acquire() {
    if (this.count > 0) {
      this.count--;
      return;
    }

    return new Promise(resolve => {
      this.waiting.push(resolve);
    });
  }

  release() {
    this.count++;
    if (this.waiting.length > 0) {
      this.count--;
      const resolve = this.waiting.shift();
      resolve();
    }
  }
}

Best Practices for Rate Limiting

1. Implement Exponential Backoff

Handle rate limit responses gracefully with exponential backoff:

class ResilientScraper {
  async scrapeWithRetry(url, maxRetries = 3) {
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        const response = await fetch(url);

        if (response.status === 429) { // Too Many Requests
          // Retry-After is assumed to be in seconds here; it can also be an HTTP date
          const retryAfter = response.headers.get('Retry-After');
          const delay = retryAfter ? parseInt(retryAfter, 10) * 1000 : Math.pow(2, attempt) * 1000;

          console.log(`Rate limited. Waiting ${delay}ms before retry ${attempt + 1}`);
          await this.delay(delay);
          continue;
        }

        if (response.ok) {
          return await response.text();
        }

        throw new Error(`HTTP ${response.status}`);
      } catch (error) {
        if (attempt === maxRetries) {
          throw error;
        }

        const backoffDelay = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
        await this.delay(backoffDelay);
      }
    }

    // Reached only if every attempt was rate limited and the retry budget ran out
    throw new Error(`Retries exhausted for ${url}`);
  }

  delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
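For reference, the exponential term above doubles the wait on each attempt. Isolated as a pure helper (jitter omitted so the values are deterministic):

```javascript
// Deterministic core of the backoff above; real calls add random jitter on top.
const backoffMs = (attempt, baseMs = 1000) => Math.pow(2, attempt) * baseMs;

console.log([0, 1, 2, 3].map((a) => backoffMs(a))); // [1000, 2000, 4000, 8000]
```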

2. Monitor and Adapt

Implement monitoring to adjust scraping speed based on server responses:

class AdaptiveScraper {
  constructor() {
    this.successRate = 1.0;
    this.currentDelay = 1000;
    this.minDelay = 500;
    this.maxDelay = 10000;
    this.recentResults = [];
  }

  updateMetrics(success) {
    this.recentResults.push(success);

    // Keep only last 10 results
    if (this.recentResults.length > 10) {
      this.recentResults.shift();
    }

    this.successRate = this.recentResults.filter(r => r).length / this.recentResults.length;

    // Adjust delay based on success rate
    if (this.successRate < 0.8) {
      this.currentDelay = Math.min(this.maxDelay, this.currentDelay * 1.5);
    } else if (this.successRate > 0.95) {
      this.currentDelay = Math.max(this.minDelay, this.currentDelay * 0.9);
    }
  }

  async adaptiveScrape(url) {
    try {
      await this.delay(this.currentDelay);
      // performScraping is left to the caller: implement it with fetch or Puppeteer
      const result = await this.performScraping(url);
      this.updateMetrics(true);
      return result;
    } catch (error) {
      this.updateMetrics(false);
      throw error;
    }
  }

  delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
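Extracted as a pure function, the adjustment rule above becomes easy to reason about and unit-test (a sketch; the thresholds and factors mirror updateMetrics):

```javascript
// Pure version of AdaptiveScraper's delay adjustment rule.
function nextDelay(current, successRate, { min = 500, max = 10000 } = {}) {
  if (successRate < 0.8) return Math.min(max, current * 1.5);  // back off
  if (successRate > 0.95) return Math.max(min, current * 0.9); // speed up
  return current;                                              // hold steady
}

console.log(nextDelay(1000, 0.5)); // 1500
console.log(nextDelay(1000, 1.0)); // 900
```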

Performance Optimization

Connection Pooling

Reuse HTTP connections to improve performance:

const https = require('https');
// Note: Node's built-in fetch (undici) ignores the `agent` option.
// This pattern applies to node-fetch; with built-in fetch, configure an
// undici Agent/dispatcher instead.
const fetch = require('node-fetch');

const agent = new https.Agent({
  keepAlive: true,     // reuse TCP connections across requests
  maxSockets: 10,      // cap concurrent sockets per host
  maxFreeSockets: 5,   // idle sockets kept open for reuse
  timeout: 30000
});

// Pass the shared agent on every request
const response = await fetch(url, { agent });

Memory Management

Monitor and manage memory usage in long-running scraping operations:

class MemoryEfficientScraper {
  constructor() {
    this.processedCount = 0;
    this.memoryThreshold = 500 * 1024 * 1024; // 500MB
  }

  async processUrls(urls) {
    for (let i = 0; i < urls.length; i += 10) {
      const batch = urls.slice(i, i + 10);
      await this.processBatch(batch);

      // Check memory usage (global.gc is only exposed when Node runs with --expose-gc)
      const memUsage = process.memoryUsage();
      if (memUsage.heapUsed > this.memoryThreshold) {
        console.log('Running garbage collection...');
        global.gc && global.gc();
      }
    }
  }

  async processBatch(urls) {
    const results = await Promise.allSettled(
      urls.map(url => this.scrapePage(url))
    );

    // Process results immediately and don't store them
    results.forEach(result => {
      if (result.status === 'fulfilled') {
        this.handleResult(result.value);
      }
    });
  }
}
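The batching loop above can be factored into a small, reusable helper:

```javascript
// Split a list into fixed-size batches, mirroring the slice loop above.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

console.log(chunk([1, 2, 3, 4, 5], 2)); // [[1, 2], [3, 4], [5]]
```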

Handling Complex Scenarios

Managing Timeouts and Errors

Implement robust timeout handling, especially for long-running Puppeteer navigations:

class TimeoutAwareScraper {
  constructor() {
    this.defaultTimeout = 30000;
    this.retryCount = 3;
  }

  async scrapeWithTimeout(url, customTimeout = null) {
    const timeout = customTimeout || this.defaultTimeout;

    return Promise.race([
      this.performScraping(url),
      new Promise((_, reject) => 
        setTimeout(() => reject(new Error('Request timeout')), timeout)
      )
    ]);
  }

  async robustScrape(url) {
    for (let attempt = 1; attempt <= this.retryCount; attempt++) {
      try {
        return await this.scrapeWithTimeout(url);
      } catch (error) {
        if (attempt === this.retryCount) {
          throw error;
        }

        console.log(`Attempt ${attempt} failed for ${url}, retrying...`);
        await this.delay(1000 * attempt); // Progressive delay
      }
    }
  }
  delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
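One subtlety with the Promise.race pattern above: the losing timer is never cleared, so even a fast scrape leaves a pending timeout that can keep a Node process alive. A sketch of a wrapper that cleans up after itself:

```javascript
// Like scrapeWithTimeout's race, but clears the timer once either side settles.
function withTimeout(promise, ms, message = 'Request timeout') {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(message)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

Note that the underlying request keeps running after the race rejects; cancelling it outright requires something like an AbortController.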

Handling Dynamic Content

For pages with dynamic content loading, consider using specialized waiting strategies:

class DynamicContentScraper {
  async waitForContent(page, selector, timeout = 10000) {
    try {
      await page.waitForSelector(selector, { timeout });
      // Additional wait for content to fully load
      await page.waitForFunction(
        sel => document.querySelector(sel) && document.querySelector(sel).textContent.trim().length > 0,
        { timeout: 5000 },
        selector
      );
    } catch (error) {
      console.log(`Content not loaded within timeout for selector: ${selector}`);
    }
  }

  async scrapeWithDynamicWait(url, contentSelectors = []) {
    // Assumes this.browser was initialized earlier (e.g. via puppeteer.launch())
    const page = await this.browser.newPage();

    try {
      await page.goto(url, { waitUntil: 'networkidle0' });

      // Wait for specific content
      for (const selector of contentSelectors) {
        await this.waitForContent(page, selector);
      }

      return await page.evaluate(() => {
        // Extract data after ensuring content is loaded
        return {
          title: document.title,
          content: document.body.innerText
        };
      });
    } finally {
      await page.close();
    }
  }
}

Conclusion

Implementing concurrent scraping with proper rate limiting in JavaScript requires careful balance between performance and respect for target servers. By using queue systems, token buckets, and adaptive algorithms, you can build robust scrapers that efficiently extract data while avoiding rate limits.

Key strategies include:

  • Queue-based concurrency control to limit simultaneous requests
  • Token bucket algorithms for sophisticated rate limiting
  • Exponential backoff for handling temporary failures
  • Adaptive rate adjustment based on server responses
  • Proper resource management for long-running operations

For complex scenarios involving JavaScript rendering, consider integrating with specialized services or implementing sophisticated browser automation techniques. Remember to always respect robots.txt files, implement proper error handling, and monitor your scraping operations to ensure sustainable and ethical data extraction practices.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
