How do I check the status code of a response in Requests?

The Python requests library provides several ways to check HTTP status codes from server responses. Here's a comprehensive guide to different methods.

Basic Status Code Check

The most direct way is using the status_code property:

import requests

response = requests.get('https://httpbin.org/get')
print(f"Status Code: {response.status_code}")  # Output: Status Code: 200

Using response.ok Property

The ok property returns True when the status code is below 400, i.e., the request did not fail with a client or server error:

import requests

response = requests.get('https://httpbin.org/get')

if response.ok:
    print("Request successful!")
    print(f"Data: {response.json()}")
else:
    print(f"Request failed with status: {response.status_code}")

Common HTTP Status Codes

Here are the most frequently encountered status codes:

Success Codes (2xx)

  • 200: OK - Request successful
  • 201: Created - Resource created successfully
  • 204: No Content - Success with no response body

Redirection Codes (3xx)

  • 301: Moved Permanently
  • 302: Found (temporary redirect)
  • 304: Not Modified
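In practice you rarely see 3xx codes directly, because requests follows redirects automatically (the intermediate responses end up in response.history). The Response object also exposes helper properties for redirect detection. As a minimal offline sketch, we can build a Response by hand (no network involved) to show them:

```python
import requests

# Construct a Response object manually to demonstrate the redirect helpers
# without making a network call.
resp = requests.Response()
resp.status_code = 301
resp.headers['Location'] = 'https://example.com/new'

print(resp.is_redirect)            # True: 3xx status with a Location header
print(resp.is_permanent_redirect)  # True: only for 301 and 308
```

With a real request, pass allow_redirects=False to requests.get() if you want to inspect the 3xx response yourself instead of having it followed automatically.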

Client Error Codes (4xx)

  • 400: Bad Request - Invalid request syntax
  • 401: Unauthorized - Authentication required
  • 403: Forbidden - Access denied
  • 404: Not Found - Resource doesn't exist
  • 429: Too Many Requests - Rate limit exceeded

Server Error Codes (5xx)

  • 500: Internal Server Error
  • 502: Bad Gateway
  • 503: Service Unavailable
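The ranges above can be turned into a small classifier. This sketch uses the standard library's http.HTTPStatus (no network needed) to attach a reason phrase and a category to any numeric code; the function name is just illustrative:

```python
from http import HTTPStatus

def describe(code: int) -> str:
    """Return a human-readable category for an HTTP status code."""
    if 200 <= code < 300:
        category = "success"
    elif 300 <= code < 400:
        category = "redirection"
    elif 400 <= code < 500:
        category = "client error"
    elif 500 <= code < 600:
        category = "server error"
    else:
        category = "non-standard"
    try:
        phrase = HTTPStatus(code).phrase  # e.g. "Not Found"
    except ValueError:
        phrase = "Unknown"  # not a registered status code
    return f"{code} {phrase} ({category})"

print(describe(200))  # 200 OK (success)
print(describe(404))  # 404 Not Found (client error)
print(describe(503))  # 503 Service Unavailable (server error)
```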

Advanced Status Code Handling

Using requests.codes for Readable Comparisons

import requests

response = requests.get('https://httpbin.org/status/404')

if response.status_code == requests.codes.ok:
    print("Success!")
elif response.status_code == requests.codes.not_found:
    print("Page not found")
elif response.status_code == requests.codes.unauthorized:
    print("Authentication required")
else:
    print(f"Unexpected status: {response.status_code}")

Getting Status Code Reason Text

import requests

response = requests.get('https://httpbin.org/status/404')
print(f"Status: {response.status_code} - {response.reason}")
# Output: Status: 404 - NOT FOUND

Comprehensive Status Code Checking

import requests

def check_response_status(url):
    try:
        response = requests.get(url, timeout=10)

        # Print detailed status information
        print(f"URL: {url}")
        print(f"Status Code: {response.status_code}")
        print(f"Status Reason: {response.reason}")
        print(f"Success: {response.ok}")

        # Handle different status code ranges
        if 200 <= response.status_code < 300:
            print("✅ Success")
            return response
        elif 300 <= response.status_code < 400:
            print("🔄 Redirection")
            print(f"Redirected to: {response.headers.get('Location', 'Unknown')}")
        elif 400 <= response.status_code < 500:
            print("❌ Client Error")
        elif 500 <= response.status_code < 600:
            print("🔥 Server Error")

    except requests.exceptions.RequestException as e:
        print(f"Request failed: {e}")
        return None

# Usage examples
check_response_status('https://httpbin.org/get')
check_response_status('https://httpbin.org/status/404')
check_response_status('https://httpbin.org/status/500')

Automatic Exception Handling with raise_for_status()

By default, requests doesn't raise exceptions for HTTP error status codes. Use raise_for_status() to enable automatic exception raising:

import requests

def safe_request(url):
    try:
        response = requests.get(url)
        response.raise_for_status()  # Raises HTTPError for bad status codes
        print(f"Success! Status: {response.status_code}")
        return response.json() if 'application/json' in response.headers.get('content-type', '') else response.text

    except requests.exceptions.HTTPError as e:
        print(f"HTTP Error: {e}")
        print(f"Status Code: {e.response.status_code}")

    except requests.exceptions.ConnectionError:
        print("Connection error occurred")

    except requests.exceptions.Timeout:
        print("Request timed out")

    except requests.exceptions.RequestException as e:
        print(f"Request failed: {e}")

# Examples
safe_request('https://httpbin.org/get')        # Success
safe_request('https://httpbin.org/status/404') # HTTP Error

Best Practices

  1. Always check status codes before processing response data
  2. Use response.ok for simple success/failure checks
  3. Use raise_for_status() when you want exceptions for HTTP errors
  4. Handle timeouts and connection errors with try-except blocks
  5. Log status codes for debugging and monitoring
Putting these practices together:

import requests
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def robust_request(url, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = requests.get(url, timeout=10)
            logger.info(f"Request to {url} returned {response.status_code}")

            if response.ok:
                return response
            elif response.status_code == 429:  # Rate limited
                logger.warning("Rate limited, waiting before retry...")
                time.sleep(2 ** attempt)  # Exponential backoff
                continue
            else:
                logger.error(f"HTTP {response.status_code}: {response.reason}")
                break

        except requests.exceptions.RequestException as e:
            logger.error(f"Request attempt {attempt + 1} failed: {e}")
            if attempt == max_retries - 1:
                raise

    return None
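Instead of a hand-rolled retry loop, you can also delegate retries to urllib3's Retry helper through a mounted HTTPAdapter, and requests will transparently retry the listed status codes with exponential backoff. A minimal sketch (the status_forcelist values here are illustrative):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()

# Retry up to 3 times on these status codes, with exponential backoff.
retry = Retry(
    total=3,
    backoff_factor=1,  # sleeps 1s, 2s, 4s between attempts
    status_forcelist=[429, 500, 502, 503],
)
adapter = HTTPAdapter(max_retries=retry)
session.mount('https://', adapter)
session.mount('http://', adapter)

# session.get(...) now retries automatically on the listed codes.
```

This keeps retry policy out of your request-handling code, at the cost of less control over per-attempt logging than the explicit loop above.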

This comprehensive approach ensures robust handling of HTTP status codes in your web scraping and API interaction projects.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
