How do I implement parallel scraping in Ruby for better performance?

Parallel scraping is essential for improving web scraping performance when dealing with multiple URLs or large-scale data extraction. Ruby offers several approaches to implement parallelism, from built-in threading to advanced asynchronous libraries. This guide covers the most effective methods for implementing parallel scraping in Ruby.

Understanding Ruby's Parallelism Options

Ruby provides multiple ways to achieve parallelism, each with its own advantages and use cases:

1. Threads (Concurrent I/O)

Best for I/O-bound operations like web scraping, where most of the time is spent waiting for HTTP responses (the timing sketch after this list illustrates the difference).

2. Processes (True Parallelism)

Ideal for CPU-intensive tasks or when you need true parallelism beyond Ruby's Global VM Lock (GVL, commonly called the GIL), which lets only one thread execute Ruby code at a time.

3. Asynchronous Libraries

Modern approach using libraries like async for non-blocking I/O operations.
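
To make the trade-off concrete, the rough sketch below compares sequential and threaded fetches using only the standard library; the URLs are placeholders and absolute timings will vary with network conditions:

require 'benchmark'
require 'net/http'
require 'uri'

urls = ['https://example.com', 'https://httpbin.org/html', 'https://httpbin.org/json']

Benchmark.bm(12) do |bm|
  # One request after another: total time is roughly the sum of all round trips
  bm.report('sequential:') do
    urls.each { |url| Net::HTTP.get_response(URI(url)) }
  end

  # One thread per URL: total time is roughly the slowest single round trip
  bm.report('threaded:') do
    urls.map { |url| Thread.new { Net::HTTP.get_response(URI(url)) } }.each(&:join)
  end
end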

Method 1: Thread-Based Parallel Scraping

Threads are a good fit for web scraping because most of the time is spent waiting for HTTP responses rather than processing data, and MRI releases the GVL during blocking I/O, so waiting threads genuinely overlap.

Basic Thread Implementation

require 'net/http'
require 'uri'
require 'nokogiri'

class ParallelScraper
  def initialize(urls, max_threads = 10)
    @urls = urls
    @max_threads = max_threads
    @results = []
    @mutex = Mutex.new
  end

  def scrape_all
    threads = []
    url_queue = Queue.new
    @urls.each { |url| url_queue.push(url) }

    @max_threads.times do
      threads << Thread.new do
        until url_queue.empty?
          begin
            # Non-blocking pop raises ThreadError if another thread emptied the queue first
            url = url_queue.pop(true)
            result = scrape_single_url(url)

            @mutex.synchronize do
              @results << result
            end
          rescue ThreadError
            # Queue is empty
            break
          rescue => e
            puts "Error scraping #{url}: #{e.message}"
          end
        end
      end
    end

    threads.each(&:join)
    @results
  end

  private

  def scrape_single_url(url)
    uri = URI(url)
    response = Net::HTTP.get_response(uri)

    if response.code == '200'
      doc = Nokogiri::HTML(response.body)
      {
        url: url,
        title: doc.css('title').text.strip,
        status: 'success',
        scraped_at: Time.now
      }
    else
      { url: url, status: 'failed', error: "HTTP #{response.code}" }
    end
  rescue => e
    { url: url, status: 'failed', error: e.message }
  end
end

# Usage example
urls = [
  'https://example.com',
  'https://httpbin.org/html',
  'https://httpbin.org/json',
  # Add more URLs...
]

scraper = ParallelScraper.new(urls, 5)
results = scraper.scrape_all
puts "Scraped #{results.length} pages"

Advanced Thread Pool Implementation

For better resource management, implement a proper thread pool:

require 'net/http'
require 'uri'

class ThreadPoolScraper
  def initialize(pool_size = 10)
    @pool_size = pool_size
    @jobs = Queue.new
    @pool = []
    @mutex = Mutex.new
    @results = []
  end

  def add_job(url)
    @jobs << url
  end

  def start_scraping
    @pool_size.times do
      @pool << Thread.new do
        while job = @jobs.pop
          begin
            result = process_url(job)
            @mutex.synchronize { @results << result }
          rescue => e
            puts "Error processing #{job}: #{e.message}"
          end
        end
      end
    end
  end

  def wait_for_completion
    @pool.each(&:join)
    @results
  end

  def shutdown
    @pool_size.times { @jobs << nil }
    wait_for_completion
  end

  private

  def process_url(url)
    # Your scraping logic here
    uri = URI(url)
    response = Net::HTTP.get_response(uri)

    {
      url: url,
      content_length: response.body.length,
      status: response.code,
      timestamp: Time.now
    }
  end
end
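
A minimal usage sketch for the pool (the URLs are placeholders): queue the jobs, start the workers, then call shutdown, which pushes one nil sentinel per worker so each loop exits:

scraper = ThreadPoolScraper.new(5)

['https://example.com', 'https://httpbin.org/html'].each { |url| scraper.add_job(url) }

scraper.start_scraping
results = scraper.shutdown # enqueues the nil sentinels and waits for all workers
puts "Processed #{results.length} URLs"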

Method 2: Process-Based Parallel Scraping

For CPU-intensive scraping, or to sidestep the GVL entirely, the parallel gem (gem install parallel) can fan the work out across separate processes:

require 'parallel'

class ProcessBasedScraper
  def self.scrape_urls(urls, process_count = 4)
    Parallel.map(urls, in_processes: process_count) do |url|
      scrape_single_url(url)
    end
  end

  def self.scrape_single_url(url)
    require 'net/http'
    require 'nokogiri'

    uri = URI(url)
    response = Net::HTTP.get_response(uri)
    doc = Nokogiri::HTML(response.body)

    {
      url: url,
      title: doc.css('title').text.strip,
      links_count: doc.css('a').length,
      images_count: doc.css('img').length,
      process_id: Process.pid
    }
  rescue => e
    { url: url, error: e.message, process_id: Process.pid }
  end
end

# Usage
urls = ['https://example.com', 'https://github.com']
results = ProcessBasedScraper.scrape_urls(urls, 2)
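
The parallel gem can also run the same work in threads instead of processes (in_threads:), which avoids fork overhead when the job is I/O-bound; a brief sketch:

# Threads share memory, so results don't need to be serialized back from child processes
results = Parallel.map(urls, in_threads: 8) do |url|
  ProcessBasedScraper.scrape_single_url(url)
end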

Method 3: Asynchronous Scraping with Async Gem

The async and async-http gems (gem install async async-http) provide a modern, fiber-based approach to non-blocking I/O:

require 'async'
require 'async/http'
require 'async/semaphore'
require 'nokogiri'

class AsyncScraper
  def initialize(concurrency = 10)
    @concurrency = concurrency
  end

  def scrape_urls(urls)
    results = []

    Async do |task|
      semaphore = Async::Semaphore.new(@concurrency)

      tasks = urls.map do |url|
        task.async do
          semaphore.acquire do
            scrape_url(url)
          end
        end
      end

      results = tasks.map(&:wait)
    end

    results
  end

  private

  def scrape_url(url)
    # Already running inside an Async task, so no extra Async block is needed;
    # returning the hash directly lets the caller's task.wait produce the result
    endpoint = Async::HTTP::Endpoint.parse(url)
    client = Async::HTTP::Client.new(endpoint)

    response = client.get(endpoint.path)
    body = response.read
    doc = Nokogiri::HTML(body)

    {
      url: url,
      title: doc.css('title').text.strip,
      status: response.status,
      content_length: body.length
    }
  rescue => e
    { url: url, error: e.message }
  ensure
    client&.close
  end
end
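
A usage sketch, assuming the gems above are installed and the placeholder URLs are reachable:

scraper = AsyncScraper.new(5)
results = scraper.scrape_urls(['https://example.com', 'https://httpbin.org/html'])
results.each { |r| puts r[:title] || r[:error] }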

Method 4: Using Concurrent-Ruby for Advanced Patterns

The concurrent-ruby gem provides high-level concurrency abstractions:

require 'concurrent'
require 'net/http'
require 'nokogiri'

class ConcurrentScraper
  def initialize(max_threads = 10)
    @executor = Concurrent::ThreadPoolExecutor.new(
      min_threads: 2,
      max_threads: max_threads,
      max_queue: 100
    )
  end

  def scrape_urls_with_futures(urls)
    futures = urls.map do |url|
      Concurrent::Future.execute(executor: @executor) do
        scrape_url(url)
      end
    end

    # Wait for all futures to complete
    results = futures.map { |future| future.value(10) } # 10 second timeout

    @executor.shutdown
    @executor.wait_for_termination(30)

    results.compact
  end

  def scrape_urls_with_promises(urls)
    results = Queue.new # thread-safe collection for callback results

    promises = urls.map do |url|
      Concurrent::Promise.execute(executor: @executor) do
        scrape_url(url)
      end
    end

    # Attach callbacks that run as each promise resolves
    promises.each do |promise|
      promise.on_success { |result| results << result }
      promise.on_error { |error| puts "Scraping failed: #{error}" }
    end

    # Wait for all promises to settle (30 second timeout)
    Concurrent::Promise.zip(*promises).wait(30)
    Array.new(results.size) { results.pop }
  end

  private

  def scrape_url(url)
    uri = URI(url)
    response = Net::HTTP.get_response(uri)
    doc = Nokogiri::HTML(response.body)

    {
      url: url,
      title: doc.css('title').text.strip,
      meta_description: doc.css('meta[name="description"]').first&.[]('content'),
      thread_id: Thread.current.object_id
    }
  rescue => e
    { url: url, error: e.message }
  end
end
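
A usage sketch for the futures variant (placeholder URLs; the concurrent-ruby gem must be installed):

scraper = ConcurrentScraper.new(8)
results = scraper.scrape_urls_with_futures(['https://example.com', 'https://httpbin.org/html'])
results.each { |r| puts "#{r[:url]}: #{r[:title] || r[:error]}" }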

Performance Optimization Strategies

1. Connection Pooling

Reuse HTTP connections to reduce overhead:

require 'net/http/persistent'

class OptimizedScraper
  def initialize
    # net-http-persistent 3.x takes keyword arguments
    @http = Net::HTTP::Persistent.new(name: 'scraper')
    @http.idle_timeout = 10
    @http.keep_alive = 30
  end

  def scrape_with_persistent_connection(url)
    uri = URI(url)
    response = @http.request(uri)
    # Process response...
    response
  end

  def shutdown
    # Close the pooled connections once, after all requests are done;
    # shutting down per request would defeat connection reuse
    @http.shutdown
  end
end
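
A brief usage sketch (placeholder URLs): create the scraper once, reuse it for every request, and shut it down when the whole batch is finished:

scraper = OptimizedScraper.new

responses = ['https://example.com/a', 'https://example.com/b'].map do |url|
  scraper.scrape_with_persistent_connection(url)
end

scraper.shutdown
puts responses.map(&:code).inspect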

2. Rate Limiting and Politeness

Implement rate limiting to be respectful to target servers:

class PoliteScraper
  def initialize(requests_per_second = 5)
    @requests_per_second = requests_per_second
    @request_queue = Queue.new
  end

  def add_request(url)
    @request_queue << url
  end

  def start_scraping
    Thread.new do
      # Block until a URL is available, then pause between requests
      # so the configured rate is never exceeded
      while url = @request_queue.pop
        scrape_url(url) # your scraping logic here
        sleep(1.0 / @requests_per_second)
      end
    end
  end
end
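
A usage sketch (placeholder URLs, and assuming scrape_url has been filled in with real scraping logic): because the worker loop ends when it pops nil, enqueue a nil sentinel after the real URLs and join the returned thread:

scraper = PoliteScraper.new(2) # at most ~2 requests per second

['https://example.com/a', 'https://example.com/b'].each { |url| scraper.add_request(url) }

worker = scraper.start_scraping
scraper.add_request(nil) # nil sentinel ends the worker loop
worker.join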

3. Error Handling and Retries

Implement robust error handling with exponential backoff:

def scrape_with_retry(url, max_retries = 3)
  retries = 0

  begin
    scrape_url(url)
  rescue Net::ReadTimeout, Net::OpenTimeout => e
    retries += 1
    if retries <= max_retries
      sleep(2 ** retries) # Exponential backoff
      retry
    else
      { url: url, error: "Max retries exceeded: #{e.message}" }
    end
  end
end
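
The retry wrapper slots straight into a threaded loop; a sketch, assuming scrape_url is defined (for example, exposed from one of the classes above):

urls = ['https://example.com', 'https://httpbin.org/html']

threads = urls.map do |url|
  Thread.new { scrape_with_retry(url) }
end

results = threads.map(&:value) # Thread#value joins the thread and returns its result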

Monitoring and Metrics

Track performance and success rates:

class ScrapingMetrics
  def initialize
    @start_time = Time.now
    @total_requests = 0
    @successful_requests = 0
    @failed_requests = 0
    @mutex = Mutex.new
  end

  def record_success
    @mutex.synchronize do
      @total_requests += 1
      @successful_requests += 1
    end
  end

  def record_failure
    @mutex.synchronize do
      @total_requests += 1
      @failed_requests += 1
    end
  end

  def report
    duration = Time.now - @start_time
    success_rate = (@successful_requests.to_f / @total_requests * 100).round(2)

    puts "Scraping completed in #{duration.round(2)} seconds"
    puts "Total requests: #{@total_requests}"
    puts "Success rate: #{success_rate}%"
    puts "Requests per second: #{(@total_requests / duration).round(2)}"
  end
end
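
A sketch of wiring the metrics into a threaded run; scrape_url stands in for any of the scraping methods shown earlier:

metrics = ScrapingMetrics.new
urls = ['https://example.com', 'https://httpbin.org/html']

threads = urls.map do |url|
  Thread.new do
    scrape_url(url) # hypothetical scraping call
    metrics.record_success
  rescue => e
    metrics.record_failure
  end
end

threads.each(&:join)
metrics.report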

Best Practices for Parallel Scraping

  1. Choose the Right Concurrency Level: Start with 5-10 concurrent requests and adjust based on target server response and your system capabilities.

  2. Implement Proper Error Handling: Always handle network errors, timeouts, and HTTP error codes gracefully.

  3. Respect robots.txt: Check and follow the target website's robots.txt file and crawling policies.

  4. Use Appropriate User Agents: Set realistic user agent strings and rotate them if necessary (see the sketch after this list).

  5. Monitor Resource Usage: Keep an eye on memory consumption and CPU usage, especially when dealing with large datasets.

  6. Implement Caching: Cache responses when appropriate to avoid redundant requests.
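
As a small illustration of points 4 and 6, the sketch below sets a custom User-Agent with Net::HTTP and keeps a simple in-memory response cache; the header value and caching strategy are only examples, not a production recommendation:

require 'net/http'
require 'uri'

CACHE = {} # simple in-memory cache keyed by URL

def fetch_with_headers(url, user_agent: 'MyScraperBot/1.0 (+https://example.com/bot)')
  return CACHE[url] if CACHE.key?(url) # skip redundant requests

  uri = URI(url)
  request = Net::HTTP::Get.new(uri)
  request['User-Agent'] = user_agent

  response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request(request)
  end

  CACHE[url] = response.body
end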

Similar to how Puppeteer handles multiple pages in parallel, Ruby's parallel scraping capabilities allow you to efficiently process multiple web pages simultaneously while maintaining control over resource usage and error handling.

Conclusion

Implementing parallel scraping in Ruby significantly improves performance when dealing with multiple URLs or large-scale web scraping projects. Choose threads for I/O-bound operations, processes for CPU-intensive tasks, and asynchronous libraries for modern non-blocking approaches. Always implement proper error handling, rate limiting, and monitoring to ensure reliable and respectful scraping operations.

Remember to test your parallel scraping implementation thoroughly and adjust concurrency levels based on your specific use case and target server capabilities.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
