How do I implement retry logic for failed HTTP requests in Ruby?
Retry logic is essential for building resilient Ruby applications that can handle network failures, transient server errors, and rate limiting. This guide covers approaches ranging from simple retry patterns to advanced strategies with exponential backoff and circuit breakers.
Basic Retry Implementation
Simple Retry with Kernel#retry
Ruby's built-in retry keyword provides the simplest way to implement retry logic:
require 'net/http'
require 'uri'

def fetch_data(url, max_retries = 3)
  retries = 0
  begin
    uri = URI(url)
    response = Net::HTTP.get_response(uri)
    if response.code.to_i >= 400
      raise "HTTP Error: #{response.code}"
    end
    response.body
  rescue => e
    retries += 1
    if retries <= max_retries
      puts "Attempt #{retries} failed: #{e.message}. Retrying..."
      sleep(1) # Basic delay
      retry
    else
      raise "Failed after #{max_retries} retries: #{e.message}"
    end
  end
end

# Usage
begin
  data = fetch_data('https://api.example.com/data')
  puts data
rescue => e
  puts "Final error: #{e.message}"
end
Enhanced Retry with Exponential Backoff
Exponential backoff prevents overwhelming the server and is more respectful to rate limits:
require 'net/http'
require 'uri'
require 'json'

class HttpRetryClient
  class RetryableError < StandardError; end

  def initialize(base_delay: 1, max_delay: 60, backoff_factor: 2, max_retries: 3)
    @base_delay = base_delay
    @max_delay = max_delay
    @backoff_factor = backoff_factor
    @max_retries = max_retries
  end

  def get(url, headers = {})
    make_request(:get, url, nil, headers)
  end

  def post(url, body = nil, headers = {})
    make_request(:post, url, body, headers)
  end

  private

  def make_request(method, url, body = nil, headers = {})
    retries = 0
    begin
      uri = URI(url)
      http = Net::HTTP.new(uri.host, uri.port)
      http.use_ssl = uri.scheme == 'https'
      request = create_request(method, uri, body, headers)
      response = http.request(request)
      # Raise so retryable HTTP statuses are handled like network errors below
      if should_retry?(response)
        raise RetryableError, "HTTP #{response.code}: #{response.message}"
      end
      response
    rescue RetryableError, Net::OpenTimeout, Net::ReadTimeout,
           Errno::ECONNREFUSED, Errno::ECONNRESET => e
      retries += 1
      if retries <= @max_retries
        delay = calculate_delay(retries)
        puts "Attempt #{retries} failed: #{e.message}. Retrying in #{delay}s..."
        sleep(delay)
        retry
      else
        raise "Failed after #{@max_retries} retries: #{e.message}"
      end
    end
  end

  def create_request(method, uri, body, headers)
    case method
    when :get
      request = Net::HTTP::Get.new(uri)
    when :post
      request = Net::HTTP::Post.new(uri)
      request.body = body.is_a?(Hash) ? body.to_json : body
      request['Content-Type'] = 'application/json' if body
    end
    headers.each { |key, value| request[key] = value }
    request
  end

  def should_retry?(response)
    # Retry on 5xx server errors plus 408 (request timeout) and 429 (rate limited)
    status_code = response.code.to_i
    status_code >= 500 || [408, 429].include?(status_code)
  end

  def calculate_delay(attempt)
    delay = @base_delay * (@backoff_factor ** (attempt - 1))
    [delay, @max_delay].min + rand(0.1..1.0) # Add jitter
  end
end

# Usage
client = HttpRetryClient.new(base_delay: 1, max_retries: 5)

begin
  response = client.get('https://api.example.com/data')
  puts JSON.parse(response.body)
rescue => e
  puts "Request failed: #{e.message}"
end
Using Popular Gems
Retries Gem
The retries gem provides a clean DSL for retry logic:
# Gemfile
gem 'retries'

require 'retries'
require 'net/http'
require 'uri'

def fetch_with_retries(url)
  with_retries(max_tries: 3, base_sleep_seconds: 1, max_sleep_seconds: 10) do
    uri = URI(url)
    response = Net::HTTP.get_response(uri)
    if response.code.to_i >= 400
      raise "HTTP Error: #{response.code}"
    end
    response.body
  end
rescue => e
  puts "All retries exhausted: #{e.message}"
end
Retriable Gem
The retriable gem offers more configuration options:
# Gemfile
gem 'retriable'

require 'retriable'
require 'net/http'
require 'uri'

Retriable.configure do |c|
  c.sleep_disabled = false
  c.base_interval = 1
  c.max_interval = 60
  c.rand_factor = 0.5
  c.multiplier = 2
  c.max_elapsed_time = 300
end

def fetch_with_retriable(url)
  Retriable.retriable(
    tries: 5,
    on: [Net::OpenTimeout, Net::ReadTimeout, Net::HTTPError, Errno::ECONNREFUSED],
    base_interval: 1,
    multiplier: 2,
    rand_factor: 0.5
  ) do
    uri = URI(url)
    response = Net::HTTP.get_response(uri)
    raise Net::HTTPError.new("HTTP #{response.code}", response) if response.code.to_i >= 400
    response.body
  end
end
Advanced Patterns
Circuit Breaker Pattern
Implement a circuit breaker to prevent cascading failures:
class CircuitBreaker
  STATES = [:closed, :open, :half_open].freeze

  class CircuitBreakerOpenError < StandardError; end

  def initialize(failure_threshold: 5, recovery_timeout: 60)
    @failure_threshold = failure_threshold
    @recovery_timeout = recovery_timeout
    @failure_count = 0
    @last_failure_time = nil
    @state = :closed
  end

  def call
    case @state
    when :open
      if Time.now - @last_failure_time > @recovery_timeout
        @state = :half_open
        attempt_call { yield }
      else
        raise CircuitBreakerOpenError, "Circuit breaker is open"
      end
    when :half_open, :closed
      attempt_call { yield }
    end
  end

  private

  def attempt_call
    result = yield
    on_success
    result
  rescue => e
    on_failure
    raise e
  end

  def on_success
    @failure_count = 0
    @state = :closed
  end

  def on_failure
    @failure_count += 1
    @last_failure_time = Time.now
    @state = :open if @failure_count >= @failure_threshold
  end
end

# Usage with HTTP client
class ResilientHttpClient
  def initialize
    @circuit_breaker = CircuitBreaker.new(failure_threshold: 3, recovery_timeout: 30)
    @http_client = HttpRetryClient.new
  end

  def get(url)
    @circuit_breaker.call do
      @http_client.get(url)
    end
  end
end
Rate Limiting with Retry
Handle rate limiting responses gracefully:
require 'net/http'
require 'uri'

class RateLimitedClient
  class RateLimitError < StandardError
    attr_reader :retry_after

    def initialize(message, retry_after = nil)
      super(message)
      @retry_after = retry_after
    end
  end

  class ServerError < StandardError; end

  def initialize
    @rate_limit_delay = 1
  end

  def make_request(url)
    retries = 0
    max_retries = 5
    begin
      response = perform_request(url)
      case response.code.to_i
      when 429 # Too Many Requests
        retry_after = extract_retry_after(response)
        raise RateLimitError.new("Rate limited", retry_after)
      when 500..599
        raise ServerError, "Server error: #{response.code}"
      else
        response
      end
    rescue RateLimitError => e
      retries += 1
      if retries <= max_retries
        delay = e.retry_after || calculate_rate_limit_delay(retries)
        puts "Rate limited. Waiting #{delay}s before retry..."
        sleep(delay)
        retry
      else
        raise "Rate limit retries exhausted"
      end
    rescue ServerError => e
      retries += 1
      if retries <= max_retries
        delay = 2 ** retries
        puts "Server error. Retrying in #{delay}s..."
        sleep(delay)
        retry
      else
        raise "Server error retries exhausted"
      end
    end
  end

  private

  def perform_request(url)
    uri = URI(url)
    Net::HTTP.get_response(uri)
  end

  def extract_retry_after(response)
    # Retry-After is usually an integer number of seconds; servers may also
    # send an HTTP-date, which this simple parser does not handle.
    retry_after = response['Retry-After']
    retry_after ? retry_after.to_i : nil
  end

  def calculate_rate_limit_delay(attempt)
    [@rate_limit_delay * (2 ** attempt), 60].min
  end
end
Testing Retry Logic
RSpec Testing Examples
require 'rspec'
require 'webmock/rspec'

RSpec.describe HttpRetryClient do
  let(:client) { HttpRetryClient.new(base_delay: 0.1, max_retries: 3) }
  let(:url) { 'https://api.example.com/data' }

  before do
    WebMock.disable_net_connect!
  end

  context 'when request succeeds on first attempt' do
    it 'returns the response' do
      stub_request(:get, url).to_return(status: 200, body: '{"success": true}')

      response = client.get(url)
      expect(response.code).to eq('200')
    end
  end

  context 'when request fails then succeeds' do
    it 'retries and returns success' do
      stub_request(:get, url)
        .to_return(status: 500).then
        .to_return(status: 200, body: '{"success": true}')

      response = client.get(url)
      expect(response.code).to eq('200')
    end
  end

  context 'when all retries are exhausted' do
    it 'raises an error' do
      stub_request(:get, url).to_return(status: 500)

      expect { client.get(url) }.to raise_error(/Failed after 3 retries/)
    end
  end
end
Command Line Tools
Ruby Script for Testing Retry Logic
# Create a test script
cat > test_retry.rb << 'EOF'
#!/usr/bin/env ruby
require_relative 'http_retry_client'

# Test with a failing endpoint
client = HttpRetryClient.new(base_delay: 1, max_retries: 3)

begin
  response = client.get('https://httpstat.us/500')
  puts "Response: #{response.code}"
rescue => e
  puts "Error: #{e.message}"
end
EOF

# Make it executable
chmod +x test_retry.rb

# Run the test
ruby test_retry.rb
Benchmarking Retry Performance
# Install benchmark gem if needed
gem install benchmark-ips

# Create benchmark script
cat > benchmark_retry.rb << 'EOF'
require 'benchmark/ips'
require_relative 'http_retry_client'

Benchmark.ips do |x|
  x.config(time: 5, warmup: 2)

  x.report("basic retry") do
    # Basic retry implementation
  end

  x.report("exponential backoff") do
    # Exponential backoff implementation
  end

  x.compare!
end
EOF
Best Practices
1. Choose Appropriate Retry Conditions
Only retry on transient errors:
require 'net/http' # defines Net::OpenTimeout, Net::ReadTimeout, SocketError, Timeout::Error

RETRYABLE_ERRORS = [
  Timeout::Error,
  Net::OpenTimeout,
  Net::ReadTimeout,
  Errno::ECONNREFUSED,
  Errno::ECONNRESET,
  Errno::EHOSTUNREACH,
  SocketError
].freeze

RETRYABLE_HTTP_CODES = [408, 429, 500, 502, 503, 504].freeze

def retryable_error?(error, response = nil)
  return true if RETRYABLE_ERRORS.any? { |klass| error.is_a?(klass) }
  return true if response && RETRYABLE_HTTP_CODES.include?(response.code.to_i)
  false
end
2. Implement Proper Logging
require 'logger'
require 'net/http'
require 'uri'

class LoggingRetryClient
  def initialize
    @logger = Logger.new(STDOUT)
    @logger.level = Logger::INFO
  end

  def get_with_retry(url)
    attempt = 0
    max_attempts = 3
    begin
      attempt += 1
      @logger.info("Attempting request to #{url} (attempt #{attempt})")
      response = make_request(url)
      @logger.info("Request successful on attempt #{attempt}")
      response
    rescue => e
      @logger.warn("Attempt #{attempt} failed: #{e.message}")
      if attempt < max_attempts
        delay = 2 ** attempt
        @logger.info("Retrying in #{delay} seconds...")
        sleep(delay)
        retry
      else
        @logger.error("All #{max_attempts} attempts failed for #{url}")
        raise
      end
    end
  end

  private

  def make_request(url)
    Net::HTTP.get_response(URI(url))
  end
end
3. Monitor and Alert
require 'net/http'
require 'uri'

class MonitoredRetryClient
  MAX_ATTEMPTS = 3

  def initialize
    @retry_metrics = {
      total_requests: 0,
      successful_requests: 0,
      failed_requests: 0,
      retry_attempts: 0
    }
  end

  def get_with_monitoring(url)
    @retry_metrics[:total_requests] += 1
    attempt = 0
    begin
      attempt += 1
      @retry_metrics[:retry_attempts] += 1 if attempt > 1
      response = make_request(url)
      @retry_metrics[:successful_requests] += 1
      response
    rescue => e
      if attempt < MAX_ATTEMPTS
        retry
      else
        @retry_metrics[:failed_requests] += 1
        report_failure(url, e)
        raise
      end
    end
  end

  def metrics
    total = @retry_metrics[:total_requests]
    success_rate = total.zero? ? 0.0 : @retry_metrics[:successful_requests].to_f / total
    @retry_metrics.merge(success_rate: success_rate)
  end

  private

  def make_request(url)
    Net::HTTP.get_response(URI(url))
  end

  def report_failure(url, error)
    # Send to monitoring service
    puts "ALERT: Request to #{url} failed after retries: #{error.message}"
  end
end
Integration with Web Scraping
When implementing retry logic for web scraping projects, combine it with explicit timeout handling and careful error classification so that a single slow or misbehaving site cannot stall the whole scraper.
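As a rough sketch of that combination (the URL, the timeout values, and the injectable requester hook are illustrative assumptions, not part of any particular library), explicit Net::HTTP timeouts make a hung connection fail fast so the retry loop regains control:

```ruby
require 'net/http'
require 'uri'

# Fetch a URL with tight timeouts; a stalled socket raises quickly and the
# retry loop takes over. The request step is injectable (requester:) so the
# retry behaviour can be exercised without a live network.
def fetch_page(url, max_retries: 3, requester: nil)
  requester ||= lambda do |uri|
    Net::HTTP.start(uri.host, uri.port,
                    use_ssl: uri.scheme == 'https',
                    open_timeout: 5,   # seconds to establish the connection
                    read_timeout: 10) do |http| # seconds to wait for data
      http.request(Net::HTTP::Get.new(uri)).body
    end
  end

  attempts = 0
  begin
    requester.call(URI(url))
  rescue Net::OpenTimeout, Net::ReadTimeout, Errno::ECONNREFUSED
    attempts += 1
    raise if attempts > max_retries
    sleep(attempts) # linearly growing delay: 1s, 2s, 3s...
    retry
  end
end
```

Injecting the requester keeps the sketch testable; in production code you would normally leave the default in place and tune only the timeouts.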
Configuration Management
Environment-Based Configuration
class RetryConfig
  def self.load_from_env
    {
      base_delay: ENV.fetch('RETRY_BASE_DELAY', 1).to_f,
      max_delay: ENV.fetch('RETRY_MAX_DELAY', 60).to_f,
      backoff_factor: ENV.fetch('RETRY_BACKOFF_FACTOR', 2).to_f,
      max_retries: ENV.fetch('RETRY_MAX_ATTEMPTS', 3).to_i,
      jitter_enabled: ENV.fetch('RETRY_JITTER_ENABLED', 'true') == 'true'
    }
  end
end

# Usage: pass only the keywords HttpRetryClient accepts
config = RetryConfig.load_from_env
client = HttpRetryClient.new(**config.slice(:base_delay, :max_delay, :backoff_factor, :max_retries))
YAML Configuration
# config/retry.yml
development:
  base_delay: 0.5
  max_delay: 30
  backoff_factor: 1.5
  max_retries: 3
  jitter_enabled: true

production:
  base_delay: 1
  max_delay: 60
  backoff_factor: 2
  max_retries: 5
  jitter_enabled: true

require 'yaml'

class ConfigurableRetryClient
  def initialize(env = 'development')
    config = YAML.load_file('config/retry.yml')[env]
    @base_delay = config['base_delay']
    @max_delay = config['max_delay']
    @backoff_factor = config['backoff_factor']
    @max_retries = config['max_retries']
    @jitter_enabled = config['jitter_enabled']
  end
end
Conclusion
Implementing retry logic for HTTP requests in Ruby requires careful consideration of error types, backoff strategies, and monitoring. Start with simple retry mechanisms and gradually add complexity as needed. Remember to:
- Use exponential backoff with jitter to avoid thundering herd problems
- Only retry on transient errors
- Implement circuit breakers for external service dependencies
- Add comprehensive logging and monitoring
- Test retry logic thoroughly with various failure scenarios
- Consider rate limiting and respect server constraints
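The first point above can be condensed into a single helper. This is the "full jitter" variant, where the entire delay is randomized rather than just a small offset added on top (the parameter defaults are illustrative):

```ruby
# Full-jitter backoff: sleep a random duration between 0 and an
# exponentially growing cap, so many clients retrying at once spread
# their requests out instead of stampeding the server together.
def backoff_delay(attempt, base: 1.0, max_delay: 60.0)
  cap = [base * (2 ** (attempt - 1)), max_delay].min
  rand * cap
end
```

On attempts 1, 2, 3 the cap grows 1s, 2s, 4s until it reaches max_delay; the actual sleep is uniform in [0, cap], trading a slightly longer average wait for much better spreading under contention.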
By following these patterns and best practices, you'll build resilient Ruby applications that gracefully handle network failures and temporary service disruptions while maintaining good performance and user experience.