How can I monitor and measure Reqwest request performance?

Monitoring and measuring request performance is crucial for building robust and efficient applications with Reqwest. This guide covers various techniques to track metrics like response times, request counts, error rates, and resource usage to optimize your HTTP client performance.

Basic Performance Monitoring

1. Measuring Request Duration

The most fundamental metric is measuring how long requests take to complete:

use reqwest;
use std::time::Instant;
use tokio;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();

    let start = Instant::now();
    let response = client
        .get("https://httpbin.org/delay/2")
        .send()
        .await?;
    let duration = start.elapsed();

    println!("Request took: {:?}", duration);
    println!("Status: {}", response.status());

    Ok(())
}

2. Comprehensive Performance Metrics

Create a more detailed performance monitoring structure:

use reqwest;
use std::time::{Duration, Instant};
use serde::Serialize; // this example also uses the `chrono` and `serde_json` crates

#[derive(Debug, Serialize)]
struct RequestMetrics {
    url: String,
    method: String,
    status_code: u16,
    duration_ms: u64,
    content_length: Option<u64>,
    response_headers_count: usize,
    timestamp: String,
}

impl RequestMetrics {
    fn new(
        url: String,
        method: String,
        response: &reqwest::Response,
        duration: Duration,
    ) -> Self {
        Self {
            url,
            method,
            status_code: response.status().as_u16(),
            duration_ms: duration.as_millis() as u64,
            content_length: response.content_length(),
            response_headers_count: response.headers().len(),
            timestamp: chrono::Utc::now().to_rfc3339(),
        }
    }
}

async fn monitored_request(url: &str) -> Result<RequestMetrics, reqwest::Error> {
    let client = reqwest::Client::new();
    let start = Instant::now();

    let response = client.get(url).send().await?;
    let duration = start.elapsed();

    let metrics = RequestMetrics::new(
        url.to_string(),
        "GET".to_string(),
        &response,
        duration,
    );

    println!("Metrics: {}", serde_json::to_string_pretty(&metrics).unwrap());

    Ok(metrics)
}

Advanced Performance Monitoring

3. Request Middleware with Metrics Collection

Implement middleware to automatically collect metrics for all requests:

use reqwest::{Request, Response};
use reqwest_middleware::{ClientBuilder, Middleware, Next};
use reqwest_middleware::Result as MiddlewareResult;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::time::Instant;
use async_trait::async_trait;

#[derive(Clone)]
pub struct MetricsMiddleware {
    metrics: Arc<Mutex<HashMap<String, Vec<u64>>>>,
}

impl MetricsMiddleware {
    pub fn new() -> Self {
        Self {
            metrics: Arc::new(Mutex::new(HashMap::new())),
        }
    }

    pub fn get_metrics(&self) -> HashMap<String, Vec<u64>> {
        self.metrics.lock().unwrap().clone()
    }

    pub fn get_average_duration(&self, endpoint: &str) -> Option<f64> {
        let metrics = self.metrics.lock().unwrap();
        if let Some(durations) = metrics.get(endpoint) {
            if !durations.is_empty() {
                let sum: u64 = durations.iter().sum();
                Some(sum as f64 / durations.len() as f64)
            } else {
                None
            }
        } else {
            None
        }
    }
}

#[async_trait]
impl Middleware for MetricsMiddleware {
    // Note: this signature matches reqwest-middleware 0.2 with
    // task-local-extensions; 0.3+ passes `&mut http::Extensions` instead.
    async fn handle(
        &self,
        req: Request,
        extensions: &mut task_local_extensions::Extensions,
        next: Next<'_>,
    ) -> MiddlewareResult<Response> {
        let start = Instant::now();
        let url = req.url().to_string();

        let response = next.run(req, extensions).await?;
        let duration = start.elapsed().as_millis() as u64;

        // Store metrics
        let mut metrics = self.metrics.lock().unwrap();
        metrics.entry(url.clone()).or_insert_with(Vec::new).push(duration);

        println!("Request to {} took {}ms", url, duration);

        Ok(response)
    }
}

// Usage example
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let metrics_middleware = MetricsMiddleware::new();

    let client = ClientBuilder::new(reqwest::Client::new())
        .with(metrics_middleware.clone())
        .build();

    // Make some requests
    for i in 1..=5 {
        let url = format!("https://httpbin.org/delay/{}", i % 3);
        let _response = client.get(&url).send().await?;
    }

    // Print metrics summary
    for (endpoint, durations) in metrics_middleware.get_metrics() {
        if let Some(avg) = metrics_middleware.get_average_duration(&endpoint) {
            println!("Endpoint: {}, Average: {:.2}ms, Requests: {}", 
                     endpoint, avg, durations.len());
        }
    }

    Ok(())
}

4. Connection Pool Monitoring

Monitor connection pool performance and reuse:

use reqwest::ClientBuilder;
use std::time::Duration;

async fn monitor_connection_pool() -> Result<(), reqwest::Error> {
    let client = ClientBuilder::new()
        .pool_max_idle_per_host(10)
        .pool_idle_timeout(Duration::from_secs(30))
        .timeout(Duration::from_secs(10))
        .build()?;

    // Make multiple requests to the same host to observe connection reuse
    let urls = vec![
        "https://httpbin.org/get",
        "https://httpbin.org/json", 
        "https://httpbin.org/uuid",
        "https://httpbin.org/headers",
    ];

    for url in urls {
        let start = std::time::Instant::now();
        let response = client.get(url).send().await?;
        let duration = start.elapsed();

        println!("URL: {}, Duration: {:?}, Status: {}", 
                 url, duration, response.status());

        // Connection reuse is easiest to observe as lower latency on later
        // requests; the first request pays the TCP/TLS setup cost.
        if let Some(connection) = response.headers().get("connection") {
            println!("Connection header: {:?}", connection);
        }
    }

    Ok(())
}

Error Rate and Reliability Monitoring

5. Comprehensive Error Tracking

Track different types of errors and their frequency:

use reqwest;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use thiserror::Error;

#[derive(Debug, Error)]
enum RequestError {
    #[error("Network error: {0}")]
    Network(#[from] reqwest::Error),
    #[error("Timeout after {duration:?}")]
    Timeout { duration: std::time::Duration },
    #[error("HTTP error: {status}")]
    Http { status: u16 },
    #[error("Rate limited")]
    RateLimit,
}

#[derive(Debug, Clone)]
struct ErrorMetrics {
    network_errors: u64,
    timeout_errors: u64,
    http_errors: HashMap<u16, u64>,
    rate_limit_errors: u64,
    total_requests: u64,
    successful_requests: u64,
}

impl ErrorMetrics {
    fn new() -> Self {
        Self {
            network_errors: 0,
            timeout_errors: 0,
            http_errors: HashMap::new(),
            rate_limit_errors: 0,
            total_requests: 0,
            successful_requests: 0,
        }
    }

    fn record_success(&mut self) {
        self.total_requests += 1;
        self.successful_requests += 1;
    }

    fn record_error(&mut self, error: &RequestError) {
        self.total_requests += 1;
        match error {
            RequestError::Network(_) => self.network_errors += 1,
            RequestError::Timeout { .. } => self.timeout_errors += 1,
            RequestError::Http { status } => {
                *self.http_errors.entry(*status).or_insert(0) += 1;
            },
            RequestError::RateLimit => self.rate_limit_errors += 1,
        }
    }

    fn success_rate(&self) -> f64 {
        if self.total_requests == 0 {
            0.0
        } else {
            (self.successful_requests as f64 / self.total_requests as f64) * 100.0
        }
    }

    fn error_rate(&self) -> f64 {
        100.0 - self.success_rate()
    }
}

async fn monitored_request_with_errors(
    client: &reqwest::Client,
    url: &str,
    metrics: Arc<Mutex<ErrorMetrics>>,
) -> Result<reqwest::Response, RequestError> {
    let response = client.get(url).send().await?;

    let status = response.status();
    if status.is_success() {
        metrics.lock().unwrap().record_success();
        Ok(response)
    } else if status.as_u16() == 429 {
        let error = RequestError::RateLimit;
        metrics.lock().unwrap().record_error(&error);
        Err(error)
    } else {
        let error = RequestError::Http { status: status.as_u16() };
        metrics.lock().unwrap().record_error(&error);
        Err(error)
    }
}
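For alerting, an all-time success rate reacts slowly once you have accumulated many requests; a rate over the last N requests is usually more actionable. A minimal stdlib-only sketch (the `RollingWindow` type is a hypothetical helper, not part of reqwest):

```rust
use std::collections::VecDeque;

/// Tracks the outcome (success/failure) of the last `capacity` requests.
struct RollingWindow {
    outcomes: VecDeque<bool>,
    capacity: usize,
}

impl RollingWindow {
    fn new(capacity: usize) -> Self {
        Self {
            outcomes: VecDeque::with_capacity(capacity),
            capacity,
        }
    }

    fn record(&mut self, success: bool) {
        if self.outcomes.len() == self.capacity {
            self.outcomes.pop_front(); // drop the oldest outcome
        }
        self.outcomes.push_back(success);
    }

    fn error_rate(&self) -> f64 {
        if self.outcomes.is_empty() {
            return 0.0;
        }
        let failures = self.outcomes.iter().filter(|&&ok| !ok).count();
        (failures as f64 / self.outcomes.len() as f64) * 100.0
    }
}

fn main() {
    let mut window = RollingWindow::new(100);
    for i in 0..200 {
        window.record(i % 10 != 0); // simulate: every 10th request fails
    }
    // Only the most recent 100 outcomes count toward the rate.
    println!("error rate over last 100 requests: {:.1}%", window.error_rate());
}
```

Feeding `record()` from the success/error branches of `monitored_request_with_errors` gives you a windowed rate you can threshold on.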

Performance Benchmarking

6. Load Testing and Benchmarking

Create benchmarks to measure performance under different conditions:

use reqwest::Client;
use std::time::{Duration, Instant};

struct BenchmarkResults {
    total_requests: u64,
    successful_requests: u64,
    total_duration: Duration,
    min_duration: Duration,
    max_duration: Duration,
    avg_duration: Duration,
}

async fn benchmark_requests(
    client: &Client,
    url: &str,
    concurrent_requests: usize,
    total_requests: usize,
) -> BenchmarkResults {
    let mut handles = Vec::new();
    let start_time = Instant::now();
    let mut durations = Vec::new();
    let mut successful = 0;

    // Create semaphore to limit concurrent requests
    let semaphore = std::sync::Arc::new(tokio::sync::Semaphore::new(concurrent_requests));

    for _ in 0..total_requests {
        let client = client.clone();
        let url = url.to_string();
        let permit = semaphore.clone();

        let handle = tokio::spawn(async move {
            let _permit = permit.acquire().await.unwrap();
            let start = Instant::now();
            let result = client.get(&url).send().await;
            let duration = start.elapsed();
            (result, duration)
        });

        handles.push(handle);
    }

    // Collect results
    for handle in handles {
        let (result, duration) = handle.await.unwrap();
        durations.push(duration);
        if result.is_ok() {
            successful += 1;
        }
    }

    let total_duration = start_time.elapsed();
    let min_duration = durations.iter().min().copied().unwrap_or(Duration::ZERO);
    let max_duration = durations.iter().max().copied().unwrap_or(Duration::ZERO);
    let avg_duration = if !durations.is_empty() {
        Duration::from_nanos(
            durations.iter().map(|d| d.as_nanos()).sum::<u128>() as u64 / durations.len() as u64
        )
    } else {
        Duration::ZERO
    };

    BenchmarkResults {
        total_requests: total_requests as u64,
        successful_requests: successful,
        total_duration,
        min_duration,
        max_duration,
        avg_duration,
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    println!("Running benchmark...");
    let results = benchmark_requests(
        &client,
        "https://httpbin.org/get",
        10, // 10 concurrent requests
        100, // 100 total requests
    ).await;

    println!("Benchmark Results:");
    println!("Total requests: {}", results.total_requests);
    println!("Successful requests: {}", results.successful_requests);
    println!("Success rate: {:.2}%", 
             (results.successful_requests as f64 / results.total_requests as f64) * 100.0);
    println!("Total duration: {:?}", results.total_duration);
    println!("Average request duration: {:?}", results.avg_duration);
    println!("Min request duration: {:?}", results.min_duration);
    println!("Max request duration: {:?}", results.max_duration);
    println!("Requests per second: {:.2}", 
             results.total_requests as f64 / results.total_duration.as_secs_f64());

    Ok(())
}

Logging and Debugging

7. Detailed Request Logging

Implement comprehensive logging for debugging and monitoring:

use reqwest::ClientBuilder;
use std::time::Duration;
use tracing::{info, warn, error, debug};
use tracing_subscriber;

async fn setup_logging_client() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize tracing
    tracing_subscriber::fmt::init();

    let client = ClientBuilder::new()
        .timeout(Duration::from_secs(10))
        .build()?;

    let urls = vec![
        "https://httpbin.org/get",
        "https://httpbin.org/status/404",
        "https://httpbin.org/delay/3",
    ];

    for url in urls {
        debug!("Starting request to: {}", url);
        let start = std::time::Instant::now();

        match client.get(url).send().await {
            Ok(response) => {
                let duration = start.elapsed();
                let status = response.status();
                let content_length = response.content_length().unwrap_or(0);

                if status.is_success() {
                    info!(
                        url = url,
                        status = %status,
                        duration_ms = duration.as_millis(),
                        content_length = content_length,
                        "Request successful"
                    );
                } else {
                    warn!(
                        url = url,
                        status = %status,
                        duration_ms = duration.as_millis(),
                        "Request returned non-success status"
                    );
                }
            }
            Err(e) => {
                let duration = start.elapsed();
                error!(
                    url = url,
                    duration_ms = duration.as_millis(),
                    error = %e,
                    "Request failed"
                );
            }
        }
    }

    Ok(())
}

Integration with Monitoring Systems

8. Metrics Export for Monitoring

Export metrics in formats compatible with monitoring systems:

use std::collections::HashMap;

struct PrometheusMetrics {
    request_duration_histogram: HashMap<String, Vec<f64>>,
    request_count_total: HashMap<String, u64>,
    request_errors_total: HashMap<String, u64>,
}

impl PrometheusMetrics {
    fn new() -> Self {
        Self {
            request_duration_histogram: HashMap::new(),
            request_count_total: HashMap::new(),
            request_errors_total: HashMap::new(),
        }
    }

    fn record_request(&mut self, endpoint: &str, duration_ms: f64, success: bool) {
        // Record duration
        self.request_duration_histogram
            .entry(endpoint.to_string())
            .or_insert_with(Vec::new)
            .push(duration_ms);

        // Record request count
        *self.request_count_total
            .entry(endpoint.to_string())
            .or_insert(0) += 1;

        // Record errors
        if !success {
            *self.request_errors_total
                .entry(endpoint.to_string())
                .or_insert(0) += 1;
        }
    }

    fn export_prometheus_format(&self) -> String {
        let mut output = String::new();

        // Export request counts
        output.push_str("# HELP http_requests_total Total number of HTTP requests\n");
        output.push_str("# TYPE http_requests_total counter\n");
        for (endpoint, count) in &self.request_count_total {
            output.push_str(&format!("http_requests_total{{endpoint=\"{}\"}} {}\n", endpoint, count));
        }

        // Export error counts
        output.push_str("# HELP http_request_errors_total Total number of HTTP request errors\n");
        output.push_str("# TYPE http_request_errors_total counter\n");
        for (endpoint, count) in &self.request_errors_total {
            output.push_str(&format!("http_request_errors_total{{endpoint=\"{}\"}} {}\n", endpoint, count));
        }

        output
    }
}
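The exporter above covers the counters but not the recorded durations. Prometheus conventionally encodes latencies as cumulative histograms; a stdlib-only sketch of that encoding (the bucket bounds and metric name are illustrative assumptions):

```rust
use std::fmt::Write;

/// Render duration samples (in ms) as Prometheus histogram lines for one endpoint.
/// Bucket bounds are cumulative: each bucket counts all samples <= its bound.
fn export_duration_histogram(endpoint: &str, samples_ms: &[f64]) -> String {
    let buckets = [50.0, 100.0, 250.0, 500.0, 1000.0];
    let mut out = String::new();
    out.push_str("# TYPE http_request_duration_ms histogram\n");
    for bound in buckets {
        let count = samples_ms.iter().filter(|&&s| s <= bound).count();
        writeln!(
            out,
            "http_request_duration_ms_bucket{{endpoint=\"{}\",le=\"{}\"}} {}",
            endpoint, bound, count
        )
        .unwrap();
    }
    // The +Inf bucket always counts every sample.
    writeln!(
        out,
        "http_request_duration_ms_bucket{{endpoint=\"{}\",le=\"+Inf\"}} {}",
        endpoint,
        samples_ms.len()
    )
    .unwrap();
    writeln!(
        out,
        "http_request_duration_ms_sum{{endpoint=\"{}\"}} {}",
        endpoint,
        samples_ms.iter().sum::<f64>()
    )
    .unwrap();
    writeln!(
        out,
        "http_request_duration_ms_count{{endpoint=\"{}\"}} {}",
        endpoint,
        samples_ms.len()
    )
    .unwrap();
    out
}

fn main() {
    let out = export_duration_histogram("/api/users", &[30.0, 120.0, 480.0]);
    println!("{}", out);
}
```

Wiring this to the `request_duration_histogram` map in `PrometheusMetrics` is a matter of calling it per endpoint during export.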

Best Practices for Performance Monitoring

Key Monitoring Metrics

  1. Response Time Metrics: Track average, median, 95th, and 99th percentile response times
  2. Throughput: Monitor requests per second and concurrent request handling
  3. Error Rates: Track HTTP error responses, network failures, and timeout rates
  4. Resource Usage: Monitor memory usage, connection pool utilization, and CPU usage
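The percentile figures in point 1 can be derived from collected duration samples. A stdlib-only sketch using the nearest-rank method (`percentile` is a hypothetical helper, not part of reqwest):

```rust
use std::time::Duration;

/// Return the given percentile (0.0..=100.0) from a slice of durations,
/// using the nearest-rank method. Sorts the slice in place.
fn percentile(durations: &mut [Duration], pct: f64) -> Option<Duration> {
    if durations.is_empty() {
        return None;
    }
    durations.sort();
    // Nearest-rank: ceil(pct/100 * n), clamped to valid indices.
    let rank = ((pct / 100.0) * durations.len() as f64).ceil() as usize;
    let idx = rank.saturating_sub(1).min(durations.len() - 1);
    Some(durations[idx])
}

fn main() {
    // 100 evenly spaced samples from 1ms to 100ms.
    let mut samples: Vec<Duration> = (1..=100).map(Duration::from_millis).collect();
    println!("p50 = {:?}", percentile(&mut samples, 50.0)); // 50ms
    println!("p95 = {:?}", percentile(&mut samples, 95.0)); // 95ms
    println!("p99 = {:?}", percentile(&mut samples, 99.0)); // 99ms
}
```

Tail percentiles (p95/p99) expose slow outliers that an average hides, which is why they are usually the metrics worth alerting on.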

Performance Optimization Tips

  • Use connection pooling and keep-alive connections for better performance
  • Implement proper timeout configurations to avoid hanging requests
  • Monitor and optimize DNS resolution times
  • Consider using HTTP/2 when supported by target servers
  • Implement request batching for multiple related requests

Similar monitoring principles apply outside Reqwest as well: handling timeouts in Puppeteer and monitoring network requests in Puppeteer cover the same ideas of request tracking in a browser-automation context.

Conclusion

Effective performance monitoring in Reqwest requires a combination of timing measurements, error tracking, and comprehensive logging. By implementing the techniques shown in this guide, you can gain valuable insights into your HTTP client's performance, identify bottlenecks, and optimize your application's network operations. Regular monitoring and benchmarking will help you maintain optimal performance as your application scales.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
