Is there a way to measure the response time for a request made with Requests?

Yes, you can measure the response time for HTTP requests made with the Python requests library. There are two primary methods: using the built-in Response.elapsed property or manual timing with the time module.

Method 1: Using Response.elapsed (Recommended)

The requests library provides a built-in Response.elapsed property that returns a datetime.timedelta object. It measures the time from sending the first byte of the request until the response headers have finished parsing:

import requests

# Send a GET request
response = requests.get('https://httpbin.org/delay/1')

# Get elapsed time as timedelta object
elapsed_time = response.elapsed
print(f"Response time: {elapsed_time}")

# Convert to seconds (float)
elapsed_seconds = elapsed_time.total_seconds()
print(f"Response time: {elapsed_seconds:.3f} seconds")

# Convert to milliseconds
elapsed_ms = elapsed_time.total_seconds() * 1000
print(f"Response time: {elapsed_ms:.0f} ms")

Method 2: Manual Timing with time Module

For more control over what's being measured, use manual timing:

import requests
import time

# Record start time (time.perf_counter is a monotonic clock,
# which makes it the idiomatic choice for measuring intervals)
start_time = time.perf_counter()

# Make the request (this also downloads the full response body)
response = requests.get('https://httpbin.org/delay/1')

# Record end time
end_time = time.perf_counter()

# Calculate elapsed time
elapsed_time = end_time - start_time
print(f"Total time: {elapsed_time:.3f} seconds")
print(f"Built-in elapsed: {response.elapsed.total_seconds():.3f} seconds")

Measuring Multiple Requests

Here's how to measure response times for multiple requests and calculate statistics:

import requests
import statistics
from typing import List

def measure_response_times(url: str, count: int = 5) -> List[float]:
    """Measure response times for multiple requests"""
    times = []

    for i in range(count):
        try:
            response = requests.get(url, timeout=10)
            response_time = response.elapsed.total_seconds()
            times.append(response_time)
            print(f"Request {i+1}: {response_time:.3f}s")
        except requests.exceptions.RequestException as e:
            print(f"Request {i+1} failed: {e}")

    return times

# Test multiple requests
url = "https://httpbin.org/get"
response_times = measure_response_times(url, count=5)

if response_times:
    avg_time = statistics.mean(response_times)
    min_time = min(response_times)
    max_time = max(response_times)

    print(f"\nStatistics:")
    print(f"Average: {avg_time:.3f}s")
    print(f"Minimum: {min_time:.3f}s")
    print(f"Maximum: {max_time:.3f}s")

Advanced Timing with Session Objects

When using requests.Session for multiple requests, you can still measure individual response times:

import requests
import time

# Create a session for connection reuse
session = requests.Session()

urls = [
    'https://httpbin.org/get',
    'https://httpbin.org/status/200',
    'https://httpbin.org/json'
]

for url in urls:
    response = session.get(url)
    print(f"{url}: {response.elapsed.total_seconds():.3f}s")

session.close()
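
If you want every request made through a session to be timed automatically, requests supports response hooks: a callback that runs after each response arrives. A minimal sketch (the hook function log_elapsed is our own name):

import requests

def log_elapsed(response, *args, **kwargs):
    # Called by requests once per response; elapsed is already populated
    print(f"{response.url}: {response.elapsed.total_seconds():.3f}s")

session = requests.Session()
# Register the hook once; it fires for every request made via this session
session.hooks['response'].append(log_elapsed)

session.get('https://httpbin.org/get')
session.get('https://httpbin.org/json')
session.close()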

Key Differences Between Methods

| Method           | What It Measures                                      | Use Case                                 |
|------------------|-------------------------------------------------------|------------------------------------------|
| response.elapsed | Time from request start to response headers received  | Server processing time, network latency  |
| Manual timing    | Complete operation including content download         | Total end-to-end performance             |
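
You can observe the difference directly by downloading a body of non-trivial size: response.elapsed stops at the headers, while a manual timer keeps running until the body is fully read. A sketch using httpbin's /bytes endpoint (1 MB of random data):

import requests
import time

start = time.perf_counter()

# stream=True returns as soon as the headers arrive,
# deferring the body download until we read it explicitly
response = requests.get('https://httpbin.org/bytes/1048576', stream=True, timeout=10)
headers_time = response.elapsed.total_seconds()

body = response.content  # accessing .content downloads the full body
total_time = time.perf_counter() - start

print(f"Time to headers (response.elapsed): {headers_time:.3f}s")
print(f"Time including body download:       {total_time:.3f}s")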

Important Notes

  • response.elapsed measures time until the response headers are received, not until the entire response body is downloaded
  • For large file downloads, manual timing will show significantly longer times than response.elapsed, as the streaming example above demonstrates
  • When a Session reuses a pooled connection, neither method includes DNS resolution or connection setup time, since no new connection is made
  • Always handle exceptions when measuring response times in production code
  • Use the timeout parameter to prevent requests from hanging indefinitely (see the sketch after this list)
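
For reference, timeout accepts either a single number or a (connect, read) tuple, which lets you bound connection setup and response reading separately. A minimal sketch:

import requests

try:
    # Allow 3.05s to establish the connection, 10s between bytes of the response
    response = requests.get('https://httpbin.org/delay/1', timeout=(3.05, 10))
    print(f"Response time: {response.elapsed.total_seconds():.3f}s")
except requests.exceptions.ConnectTimeout:
    print("Connection timed out")
except requests.exceptions.ReadTimeout:
    print("Server took too long to respond")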

Performance Monitoring Example

Here's a practical example for monitoring API performance:

import requests
import time
from datetime import datetime

def monitor_api_performance(url: str, threshold: float = 1.0):
    """Monitor API response time and alert if threshold exceeded"""
    try:
        start_time = time.perf_counter()  # monotonic clock for interval timing
        response = requests.get(url, timeout=5)
        total_time = time.perf_counter() - start_time
        server_time = response.elapsed.total_seconds()

        print(f"[{datetime.now()}] {url}")
        print(f"  Server response: {server_time:.3f}s")
        print(f"  Total time: {total_time:.3f}s")
        print(f"  Status: {response.status_code}")

        if server_time > threshold:
            print(f"  ⚠️  WARNING: Response time exceeds {threshold}s threshold")

    except requests.exceptions.Timeout:
        print(f"  ❌ REQUEST TIMEOUT")
    except requests.exceptions.RequestException as e:
        print(f"  ❌ REQUEST FAILED: {e}")

# Monitor an API endpoint
monitor_api_performance("https://httpbin.org/delay/0.5")

Together, these techniques give you comprehensive response time measurement for any HTTP request made with the requests library.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"

📖 Related Blog Guides

Expand your knowledge with these comprehensive tutorials:

  • Web Scraping with Python: master HTTP requests for web scraping
  • Python Web Scraping Libraries: a comprehensive guide to the requests library
