How do I disable SSL verification in urllib3?

Disabling SSL verification in urllib3 is sometimes necessary during development, testing, or when working with self-signed certificates. While this approach should never be used in production environments, understanding how to properly configure SSL verification settings is an important skill for Python developers.

Understanding SSL Verification in urllib3

urllib3 is a powerful HTTP client library for Python that serves as the foundation for the popular requests library. By default, urllib3 verifies SSL certificates to ensure secure connections and protect against man-in-the-middle attacks. However, there are legitimate scenarios where you might need to disable this verification temporarily:

  • Testing with self-signed certificates in development environments
  • Working with internal corporate networks that use custom certificate authorities
  • Debugging SSL-related issues
  • Interacting with legacy systems that have outdated certificates

Important Security Warning: Disabling SSL verification exposes your application to security risks. Only disable verification in controlled environments and never in production code that handles sensitive data.
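To see what the settings below actually change, here is a minimal side-by-side sketch of a default (verifying) pool manager and an unverified one. The request itself is left as a comment because it needs network access; self-signed.badssl.com is a public test host for self-signed certificates:

```python
import ssl
import urllib3

# Default manager: certificates are verified.
default_http = urllib3.PoolManager()

# Unverified manager: the settings this guide describes.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
insecure_http = urllib3.PoolManager(
    cert_reqs=ssl.CERT_NONE,   # skip certificate chain validation
    assert_hostname=False,     # skip hostname matching
)

# Against a self-signed host such as https://self-signed.badssl.com/,
# default_http.request('GET', ...) raises urllib3.exceptions.SSLError,
# while insecure_http.request('GET', ...) succeeds.
print(insecure_http.connection_pool_kw["cert_reqs"])
```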

Method 1: Disable SSL Verification for a Single Request

The most straightforward way to disable SSL verification in urllib3 is to set the cert_reqs parameter to ssl.CERT_NONE when creating a connection pool manager:

import urllib3
import ssl

# Disable SSL warnings (optional, but recommended when disabling verification)
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# Create a PoolManager with SSL verification disabled
http = urllib3.PoolManager(
    cert_reqs=ssl.CERT_NONE,
    assert_hostname=False
)

# Make a request
response = http.request('GET', 'https://self-signed.badssl.com/')
print(response.status)
print(response.data.decode('utf-8'))

In this example:

  • cert_reqs=ssl.CERT_NONE disables certificate verification
  • assert_hostname=False disables hostname verification
  • urllib3.disable_warnings() suppresses the InsecureRequestWarning that would otherwise be displayed

Method 2: Using HTTPSConnectionPool

For more granular control over individual connections, you can use HTTPSConnectionPool directly:

import urllib3
import ssl

urllib3.disable_warnings()

# Create an HTTPS connection pool with verification disabled
https = urllib3.HTTPSConnectionPool(
    'self-signed.badssl.com',
    port=443,
    cert_reqs=ssl.CERT_NONE,
    assert_hostname=False
)

# Make a request
response = https.request('GET', '/')
print(f"Status: {response.status}")

This approach is useful when you need to make multiple requests to the same host and want to reuse connections efficiently.
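As a sketch of that reuse, the pool's maxsize parameter caps how many persistent connections are kept open to the host; the request loop is commented out to avoid a live network call:

```python
import ssl
import urllib3

urllib3.disable_warnings()

# One pool, many requests: idle connections to the host are kept open
# and handed out again instead of re-running the TCP/TLS handshake.
pool = urllib3.HTTPSConnectionPool(
    'self-signed.badssl.com',
    port=443,
    cert_reqs=ssl.CERT_NONE,
    assert_hostname=False,
    maxsize=4,  # keep up to four persistent connections to this host
)

# Each call below would reuse a pooled connection:
# for path in ('/', '/favicon.ico'):
#     print(pool.request('GET', path).status)
```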

Method 3: Conditional SSL Verification

In real-world applications, you often need to toggle SSL verification based on the environment. Here's a more production-ready approach:

import urllib3
import ssl
import os

class SecureHTTPClient:
    def __init__(self, verify_ssl=True):
        self.verify_ssl = verify_ssl

        if not verify_ssl:
            urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
            self.http = urllib3.PoolManager(
                cert_reqs=ssl.CERT_NONE,
                assert_hostname=False
            )
        else:
            self.http = urllib3.PoolManager()

    def get(self, url):
        return self.http.request('GET', url)

# Usage: Control via environment variable
verify = os.getenv('VERIFY_SSL', 'true').lower() == 'true'
client = SecureHTTPClient(verify_ssl=verify)

response = client.get('https://example.com')
print(response.status)

This pattern allows you to control SSL verification through environment variables, making it easy to enable verification in production while disabling it in development.

Working with Custom Certificate Authorities

Sometimes you don't want to disable SSL verification entirely—you just need to trust a custom certificate authority. This is a more secure approach:

import urllib3

# Path to your custom CA bundle
ca_bundle = '/path/to/custom-ca-bundle.crt'

# Create a PoolManager with custom CA bundle
http = urllib3.PoolManager(
    cert_reqs='CERT_REQUIRED',
    ca_certs=ca_bundle
)

response = http.request('GET', 'https://internal-server.company.com')

This method maintains security while allowing you to work with internal or custom certificates.
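An equivalent approach, assuming urllib3's ssl_context parameter and a hypothetical bundle path, is to load the custom CA into a standard verifying SSLContext:

```python
import ssl
import urllib3

# Hypothetical path to your organisation's CA certificate.
ca_bundle = '/path/to/custom-ca-bundle.crt'

# Start from a normal verifying context...
ctx = ssl.create_default_context()
# ...and teach it about the extra CA (uncomment with a real file):
# ctx.load_verify_locations(cafile=ca_bundle)

# urllib3 accepts a prepared SSLContext directly, so verification
# stays on while the custom CA is trusted.
http = urllib3.PoolManager(ssl_context=ctx)
```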

Handling SSL Verification in Web Scraping

When building web scrapers, you will encounter varied SSL configurations across target websites. Authentication, rate limiting, and other scraping concerns require their own handling, but SSL verification issues are especially common when scraping diverse targets.

Here's a robust scraper configuration:

import urllib3
import logging
from typing import Optional

class WebScraper:
    def __init__(self, verify_ssl: bool = True, timeout: float = 30.0):
        self.timeout = urllib3.Timeout(connect=timeout, read=timeout)

        if not verify_ssl:
            urllib3.disable_warnings()
            self.http = urllib3.PoolManager(
                cert_reqs='CERT_NONE',
                assert_hostname=False,
                timeout=self.timeout,
                retries=urllib3.Retry(
                    total=3,
                    backoff_factor=0.3,
                    status_forcelist=[500, 502, 503, 504]
                )
            )
        else:
            self.http = urllib3.PoolManager(
                timeout=self.timeout,
                retries=urllib3.Retry(total=3, backoff_factor=0.3)
            )

        self.logger = logging.getLogger(__name__)

    def fetch(self, url: str, headers: Optional[dict] = None) -> urllib3.HTTPResponse:
        try:
            response = self.http.request(
                'GET',
                url,
                headers=headers or {}
            )
            self.logger.info(f"Fetched {url} - Status: {response.status}")
            return response
        except urllib3.exceptions.SSLError as e:
            self.logger.error(f"SSL Error for {url}: {e}")
            raise
        except urllib3.exceptions.MaxRetryError as e:
            self.logger.error(f"Max retries exceeded for {url}: {e}")
            raise

# Usage
scraper = WebScraper(verify_ssl=False)
response = scraper.fetch('https://example.com')

Using urllib3 with the Requests Library

Since the popular requests library uses urllib3 under the hood, you can disable SSL verification there as well:

import requests
import urllib3

# Disable warnings
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# Method 1: Using verify=False
response = requests.get('https://self-signed.badssl.com/', verify=False)

# Method 2: Session-based approach
session = requests.Session()
session.verify = False
response = session.get('https://self-signed.badssl.com/')
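If you control the CA, requests also accepts a bundle path instead of a boolean, which keeps verification enabled. A short sketch (the path is a placeholder):

```python
import requests

# session.verify accepts a CA bundle path as well as a boolean, so
# verification stays on against your own CA.
session = requests.Session()
session.verify = '/path/to/custom-ca-bundle.crt'  # placeholder path

# requests also honours the REQUESTS_CA_BUNDLE environment variable,
# letting deployments swap bundles without code changes.
```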

JavaScript Alternative with Node.js

For developers working with JavaScript, the equivalent approach using the native https module would be:

const https = require('https');

// Create an agent with SSL verification disabled
const agent = new https.Agent({
    rejectUnauthorized: false
});

// Make a request
https.get('https://self-signed.badssl.com/', { agent }, (res) => {
    console.log('Status:', res.statusCode);

    let data = '';
    res.on('data', (chunk) => {
        data += chunk;
    });

    res.on('end', () => {
        console.log('Response received');
    });
}).on('error', (e) => {
    console.error('Error:', e);
});

Or with the popular axios library:

const axios = require('axios');
const https = require('https');

const instance = axios.create({
    httpsAgent: new https.Agent({
        rejectUnauthorized: false
    })
});

instance.get('https://self-signed.badssl.com/')
    .then(response => {
        console.log('Status:', response.status);
    })
    .catch(error => {
        console.error('Error:', error.message);
    });

Best Practices and Security Considerations

  1. Never disable SSL verification in production: This cannot be stressed enough. Disabling SSL verification in production exposes your users to man-in-the-middle attacks and data breaches.

  2. Use environment-based configuration: Control SSL verification through environment variables or configuration files, never hard-code it.

  3. Suppress warnings appropriately: When you disable verification for legitimate reasons, suppress the warnings to avoid log pollution, but document why you're doing it.

  4. Consider using custom CA bundles: Instead of disabling verification entirely, add your custom certificates to a CA bundle.

  5. Implement proper error handling: Always handle SSL errors gracefully and log them for debugging.

  6. Document your decisions: If you must disable SSL verification, document why in your code and set up reminders to review this decision periodically.

  7. Use timeouts: When disabling SSL verification, always set explicit connect and read timeouts to prevent hanging connections.
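Practices 3 and 6 can be combined in a small helper: a context manager (a sketch, not part of urllib3) that suppresses InsecureRequestWarning only inside the block that needs it and documents why at the point of suppression:

```python
import contextlib
import ssl
import warnings

import urllib3

@contextlib.contextmanager
def insecure_pool():
    """Yield an unverified PoolManager, scoping the warning suppression.

    The InsecureRequestWarning is silenced only inside the `with` block,
    instead of process-wide, and the reason is documented here.
    """
    with warnings.catch_warnings():
        # Suppressed deliberately: this pool targets dev/test hosts only.
        warnings.simplefilter('ignore', urllib3.exceptions.InsecureRequestWarning)
        pool = urllib3.PoolManager(cert_reqs=ssl.CERT_NONE, assert_hostname=False)
        try:
            yield pool
        finally:
            pool.clear()

# Usage (network call commented out):
# with insecure_pool() as http:
#     http.request('GET', 'https://self-signed.badssl.com/')
```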

Using a Professional API Service

For production web scraping needs, consider using a dedicated API service that handles SSL certificates, proxies, and other complexities for you. The WebScraping.AI API provides reliable access to web content without requiring you to manage SSL certificates or worry about verification issues. This approach ensures security while providing the flexibility you need for web scraping at scale.

Debugging SSL Issues

If you're experiencing SSL errors, here's a diagnostic script to help identify the issue:

import urllib3
import ssl
import socket

def diagnose_ssl(hostname, port=443):
    print(f"Diagnosing SSL connection to {hostname}:{port}\n")

    # Test basic connectivity (close the probe socket when done)
    try:
        with socket.create_connection((hostname, port), timeout=10):
            pass
        print("✓ TCP connection successful")
    except Exception as e:
        print(f"✗ TCP connection failed: {e}")
        return

    # Test SSL with verification
    try:
        http = urllib3.PoolManager()
        response = http.request('GET', f'https://{hostname}')
        print(f"✓ SSL verification successful (Status: {response.status})")
    except urllib3.exceptions.SSLError as e:
        print(f"✗ SSL verification failed: {e}")

        # Try without verification
        try:
            urllib3.disable_warnings()
            http = urllib3.PoolManager(
                cert_reqs=ssl.CERT_NONE,
                assert_hostname=False
            )
            response = http.request('GET', f'https://{hostname}')
            print(f"✓ Connection without verification successful (Status: {response.status})")
            print("  Issue: Certificate verification problem")
        except Exception as e:
            print(f"✗ Connection failed even without verification: {e}")

# Usage
diagnose_ssl('self-signed.badssl.com')
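If the diagnosis points to a certificate problem, the next step is usually to look at the certificate itself. This helper (an illustration using only the standard library) downloads the server's certificate without verifying it, so it can be inspected or added to a custom CA bundle:

```python
import socket
import ssl

def fetch_cert_pem(hostname, port=443, timeout=10):
    """Download a server's certificate as PEM text, without verifying it.

    The PEM can then be inspected (e.g. `openssl x509 -noout -text`)
    or appended to a custom CA bundle.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only want to *look* at the cert
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            der = tls.getpeercert(binary_form=True)
    return ssl.DER_cert_to_PEM_cert(der)

# print(fetch_cert_pem('self-signed.badssl.com'))
```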

Conclusion

Disabling SSL verification in urllib3 is straightforward but should be approached with caution. Use the methods described in this guide only in development and testing environments, and always implement proper security measures in production. When you need to work with custom certificates, prefer using custom CA bundles over completely disabling verification. For production web scraping needs, consider using professional services that handle these complexities securely.

Remember: SSL verification exists to protect you and your users. Treat disabling it as a temporary debugging tool, not a permanent solution.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
