What are the common HTTP status codes and how does Mechanize handle them?

Understanding HTTP status codes is crucial for successful web scraping with Mechanize. These three-digit codes indicate the outcome of an HTTP request and tell you how each scenario should be handled. Mechanize, a Ruby library for automating interaction with websites, handles many status codes automatically while also allowing custom handling when needed.

Understanding HTTP Status Code Categories

HTTP status codes are organized into five categories, each serving a specific purpose:

  • 1xx (Informational): Request received, continuing process
  • 2xx (Success): Request successfully received, understood, and accepted
  • 3xx (Redirection): Further action must be taken to complete the request
  • 4xx (Client Error): Request contains bad syntax or cannot be fulfilled
  • 5xx (Server Error): Server failed to fulfill an apparently valid request
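
As a quick illustration (a minimal sketch with a placeholder URL), the category can be read straight off the first digit of the status code, which Mechanize exposes as a string via page.code:

require 'mechanize'

agent = Mechanize.new
page = agent.get('https://example.com')

# page.code is a string such as "200"; its first digit is the category.
# Note: Mechanize raises exceptions for most 4xx/5xx responses (covered
# below), so a returned page is normally in the 2xx or 3xx range.
category = case page.code[0]
           when '2' then 'success'
           when '3' then 'redirection'
           when '4' then 'client error'
           when '5' then 'server error'
           else 'informational'
           end
puts "#{page.code} => #{category}"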

Common HTTP Status Codes in Web Scraping

2xx Success Codes

200 OK: The most common success status, indicating the request was successful and the response contains the requested data.

201 Created: Typically returned after successful POST requests that create new resources.

204 No Content: Successful request with no response body, often used for DELETE operations.

3xx Redirection Codes

301 Moved Permanently: The resource has been permanently moved to a new URL.

302 Found: Temporary redirect to a different URL.

304 Not Modified: The resource hasn't changed since the last request (used with caching).

4xx Client Error Codes

400 Bad Request: The request was malformed or invalid.

401 Unauthorized: Authentication is required to access the resource.

403 Forbidden: Server understood the request but refuses to authorize it.

404 Not Found: The requested resource doesn't exist on the server.

429 Too Many Requests: Rate limiting is in effect.

5xx Server Error Codes

500 Internal Server Error: Generic server error.

502 Bad Gateway: The server, acting as a gateway or proxy, received an invalid response from an upstream server.

503 Service Unavailable: Server is temporarily unavailable.

How Mechanize Handles HTTP Status Codes

Automatic Success Handling

Mechanize automatically handles successful responses (2xx codes) by returning the page object:

require 'mechanize'

agent = Mechanize.new
page = agent.get('https://example.com')

# Mechanize automatically handles 200 OK responses
puts page.title
puts page.body

Automatic Redirect Handling

One of Mechanize's most valuable features is its automatic handling of redirects. By default, Mechanize follows redirects up to a configurable limit:

agent = Mechanize.new

# Configure the redirect limit (default is 20)
agent.redirection_limit = 5

# Mechanize automatically follows redirects such as 301, 302, 303, and 307
page = agent.get('https://example.com/old-page')
# If redirected, page will contain the final destination content

# Check if redirects occurred
puts "Final URL: #{page.uri}"
puts "Response code: #{page.code}"

To disable automatic redirect following:

agent.redirect_ok = false
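
With redirects disabled, Mechanize is expected to return the raw 3xx response instead of following it (a sketch under that assumption; the URL is a placeholder):

agent = Mechanize.new
agent.redirect_ok = false

page = agent.get('https://example.com/old-page')
puts "Status: #{page.code}"                  # e.g. "301"
puts "Target: #{page.response['location']}"  # Location header, if present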

Error Code Handling

Mechanize raises specific exceptions for different error conditions:

require 'mechanize'

agent = Mechanize.new

begin
  page = agent.get('https://example.com/nonexistent')
rescue Mechanize::UnauthorizedError => e
  # Rescue this before Mechanize::ResponseCodeError: UnauthorizedError is a
  # subclass, so listing it second would make this branch unreachable
  puts "Authentication required: #{e.page.uri}"
rescue Mechanize::ResponseCodeError => e
  case e.response_code
  when '404'
    puts "Page not found: #{e.page.uri}"
  when '403'
    puts "Access forbidden: #{e.page.uri}"
  when '500'
    puts "Server error: #{e.page.uri}"
  else
    puts "HTTP Error #{e.response_code}: #{e.page.uri}"
  end
end

Custom Status Code Handling

You can implement custom handling for specific status codes using Mechanize's connection hooks together with its exceptions:

agent = Mechanize.new

# Custom handling for specific status codes
agent.pre_connect_hooks << lambda do |_agent, request|
  # request is the underlying Net::HTTP request object
  puts "Making request to: #{request.path}"
end

agent.post_connect_hooks << lambda do |_agent, _uri, response, _body|
  case response.code
  when '429'
    puts "Rate limited. Waiting before retry..."
    sleep(60)
    # You could implement retry logic here
  when '503'
    puts "Service unavailable. Server might be down."
  end
end

begin
  page = agent.get('https://api.example.com/data')
rescue Mechanize::ResponseCodeError => e
  # Handle specific error codes
  if e.response_code == '429'
    puts "Rate limit exceeded. Implement backoff strategy."
  elsif e.response_code.start_with?('5')
    puts "Server error. Consider retrying later."
  end
end

Advanced Status Code Management

Checking Response Details

Mechanize provides access to detailed response information:

page = agent.get('https://example.com')

puts "Status: #{page.code}"
puts "Headers: #{page.response}"
puts "Final URL: #{page.uri}"

# Access specific headers
puts "Content-Type: #{page.response['content-type']}"
puts "Server: #{page.response['server']}"

Implementing Retry Logic

For robust web scraping, implement retry logic for temporary failures:

def fetch_with_retry(agent, url, max_retries = 3)
  retries = 0

  begin
    return agent.get(url)
  rescue Mechanize::ResponseCodeError => e
    retries += 1

    if retries <= max_retries && ['500', '502', '503'].include?(e.response_code)
      sleep_time = 2 ** retries  # Exponential backoff
      puts "Retry #{retries}/#{max_retries} after #{sleep_time}s for #{e.response_code}"
      sleep(sleep_time)
      retry
    else
      raise e
    end
  end
end

# Usage
agent = Mechanize.new
page = fetch_with_retry(agent, 'https://unreliable-server.com/data')

Handling Rate Limiting

When dealing with APIs or sites that implement rate limiting:

class RateLimitedScraper
  def initialize
    @agent = Mechanize.new
    @last_request_time = Time.now - 1
    @min_delay = 1.0  # Minimum delay between requests
  end

  def get_page(url)
    # Implement rate limiting
    time_since_last = Time.now - @last_request_time
    if time_since_last < @min_delay
      sleep(@min_delay - time_since_last)
    end

    begin
      @last_request_time = Time.now
      return @agent.get(url)
    rescue Mechanize::ResponseCodeError => e
      if e.response_code == '429'
        # Retry-After may be seconds or an HTTP date; treat anything
        # non-numeric (or missing) as a 60-second fallback
        retry_after = e.page.response['retry-after'].to_i
        delay = retry_after > 0 ? retry_after : 60

        puts "Rate limited. Waiting #{delay} seconds..."
        sleep(delay)
        retry
      else
        raise e
      end
    end
  end
end
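
Usage then mirrors a plain agent.get call:

scraper = RateLimitedScraper.new
page = scraper.get_page('https://api.example.com/data')
puts "Fetched #{page.uri} (#{page.code})"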

Integration with Modern Web Scraping Workflows

When building comprehensive web scraping solutions, understanding how different tools handle HTTP status codes is essential. While Mechanize excels at traditional web scraping scenarios, modern applications often require handling JavaScript-heavy sites that might need browser automation tools for dynamic content.

For complex scraping workflows that involve multiple stages and different types of responses, you might also need to implement proper error handling strategies similar to those used in browser automation.

Handling JavaScript-Heavy Sites

While Mechanize excels at traditional HTML parsing, some modern websites rely heavily on JavaScript for content rendering. In such cases, you might need to consider browser automation tools. However, you can still use Mechanize for initial requests and handle page redirections systematically before deciding whether additional tools are needed.
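
As a rough, hypothetical heuristic (the URL and threshold are made up), you can check whether the static HTML Mechanize receives contains meaningful text before escalating to a browser tool:

agent = Mechanize.new
page = agent.get('https://example.com/app')

# Mechanize::Page delegates #search to Nokogiri
static_text = page.search('body').text.strip
if static_text.length < 200 && page.body.include?('<script')
  puts "Page looks JavaScript-rendered; consider browser automation"
end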

Best Practices for Status Code Handling

1. Always Handle Exceptions

Never assume requests will always succeed. Implement comprehensive error handling:

begin
  page = agent.get(url)
  # Process successful response
rescue Mechanize::ResponseCodeError => e
  # Handle HTTP errors
rescue Mechanize::Error => e
  # Handle other Mechanize-specific errors
rescue StandardError => e
  # Handle unexpected errors
end

2. Log Response Details

Maintain detailed logs for debugging and monitoring:

require 'logger'

logger = Logger.new('scraping.log')

begin
  page = agent.get(url)
  logger.info("Success: #{page.code} - #{url}")
rescue Mechanize::ResponseCodeError => e
  logger.error("HTTP #{e.response_code}: #{url} - #{e.message}")
end

3. Implement Circuit Breaker Pattern

For high-volume scraping, implement circuit breaker patterns to handle persistent failures:

class CircuitBreaker
  def initialize(failure_threshold = 5, timeout = 60)
    @failure_threshold = failure_threshold
    @timeout = timeout
    @failure_count = 0
    @last_failure_time = nil
    @state = :closed  # :closed, :open, :half_open
  end

  def call(&block)
    case @state
    when :open
      if Time.now - @last_failure_time > @timeout
        @state = :half_open
      else
        raise "Circuit breaker is open"
      end
    end

    begin
      result = block.call
      reset if @state == :half_open
      result
    rescue Mechanize::ResponseCodeError => e
      record_failure
      raise e
    end
  end

  private

  def record_failure
    @failure_count += 1
    @last_failure_time = Time.now
    @state = :open if @failure_count >= @failure_threshold
  end

  def reset
    @failure_count = 0
    @state = :closed
  end
end
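
A minimal usage sketch, wrapping each fetch in the breaker:

agent = Mechanize.new
breaker = CircuitBreaker.new(5, 60)

begin
  page = breaker.call { agent.get('https://example.com/data') }
  puts page.code
rescue => e
  puts "Request blocked or failed: #{e.message}"
end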

4. Monitor and Handle Authentication

For sites requiring authentication, implement robust session management:

def handle_authentication(agent, login_url, username, password)
  login_page = agent.get(login_url)
  form = login_page.form_with(id: 'login-form')

  if form
    # Assumes the form's fields are named 'username' and 'password'
    form.username = username
    form.password = password
    result = agent.submit(form)

    if result.code == '200' && !result.body.include?('login error')
      puts "Authentication successful"
      return true
    else
      puts "Authentication failed"
      return false
    end
  else
    raise "Login form not found"
  end
rescue Mechanize::ResponseCodeError => e
  puts "Authentication error: #{e.response_code}"
  return false
end
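
A usage sketch (the URLs, credentials, and the 'login-form' id above are placeholders):

agent = Mechanize.new
if handle_authentication(agent, 'https://example.com/login', 'user', 'secret')
  dashboard = agent.get('https://example.com/dashboard')
  puts dashboard.title
end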

Performance Optimization

Connection Reuse

Mechanize automatically handles connection pooling, but you can optimize for better performance:

agent = Mechanize.new

# Configure connection settings
agent.keep_alive = true
agent.max_history = 10  # Limit history for memory efficiency

# Set reasonable timeouts
agent.open_timeout = 10
agent.read_timeout = 60

Conditional Requests

Use conditional requests to reduce bandwidth and improve performance:

def fetch_if_modified(agent, url, last_modified = nil, etag = nil)
  headers = {}
  headers['If-Modified-Since'] = last_modified if last_modified
  headers['If-None-Match'] = etag if etag

  begin
    page = agent.get(url, [], nil, headers)
    puts "Content updated: #{page.code}"
    return page
  rescue Mechanize::ResponseCodeError => e
    # Depending on the Mechanize version and request history, a 304 may be
    # raised here rather than resolved transparently; handle the raised case
    if e.response_code == '304'
      puts "Content not modified"
      return nil
    else
      raise e
    end
  end
end
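
A usage sketch that captures the validators from a first fetch:

agent = Mechanize.new

first = agent.get('https://example.com')
etag = first.response['etag']
last_modified = first.response['last-modified']

# Later: nil means the cached copy is still fresh
updated = fetch_if_modified(agent, 'https://example.com', last_modified, etag)
puts updated ? "Content changed" : "Cached copy still valid"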

Conclusion

Understanding HTTP status codes and how Mechanize handles them is fundamental to building robust web scraping applications. Mechanize provides excellent built-in handling for common scenarios like redirects and basic error conditions, while also offering the flexibility to implement custom logic for specific requirements.

Key takeaways for effective status code handling in Mechanize:

  • Leverage automatic redirect following for seamless navigation
  • Implement comprehensive error handling for 4xx and 5xx status codes
  • Use retry logic with exponential backoff for temporary failures
  • Monitor and log response codes for debugging and optimization
  • Consider rate limiting and circuit breaker patterns for production systems
  • Combine Mechanize with other tools when handling JavaScript-heavy content

By mastering these concepts, you'll be able to build more reliable and maintainable web scraping solutions that gracefully handle the various scenarios encountered in real-world web scraping projects.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
