Can I perform HTTP POST requests with urllib3, and if so, how?

Yes, urllib3 fully supports HTTP POST requests and provides powerful features like connection pooling, thread safety, and automatic retries. This makes it an excellent choice for web scraping and API interactions.

Installation

First, install urllib3 if you haven't already:

pip install urllib3

Basic POST Request Setup

Create a PoolManager instance to handle all HTTP requests:

import urllib3

# Create a PoolManager for connection pooling and reuse
http = urllib3.PoolManager()
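
PoolManager also accepts pool-wide options if you want defaults applied to every request. A minimal sketch (the User-Agent value is illustrative):

import urllib3

# Optional pool-wide configuration
http = urllib3.PoolManager(
    num_pools=10,                        # max per-host connection pools to cache
    headers={'User-Agent': 'MyApp/1.0'}  # sent with every request made through this manager
)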

Sending JSON Data

The most common POST request scenario involves sending JSON data:

import urllib3
import json

http = urllib3.PoolManager()

# JSON data to send
data = {
    'name': 'John Doe',
    'email': 'john@example.com',
    'age': 30
}

# Send JSON POST request
response = http.request(
    'POST',
    'https://httpbin.org/post',
    body=json.dumps(data).encode('utf-8'),  # encode explicitly; plain str bodies are sent as latin-1
    headers={'Content-Type': 'application/json'}
)

print(f'Status: {response.status}')
print(f'Response: {response.data.decode("utf-8")}')
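
If you are on urllib3 2.x, the json parameter handles serialization and the Content-Type header for you, and responses gain a .json() helper. A sketch assuming urllib3 >= 2.0:

# urllib3 2.x only: serializes data and sets Content-Type automatically
response = http.request('POST', 'https://httpbin.org/post', json=data)

print(response.json())  # decodes the JSON response body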

Sending Form-Encoded Data

For classic HTML form submissions, send application/x-www-form-urlencoded data. Note that fields defaults to multipart encoding, so pass encode_multipart=False:

import urllib3

http = urllib3.PoolManager()

# Form data
form_data = {
    'username': 'johndoe',
    'password': 'secret123',
    'remember': 'on'
}

# Send form POST request as application/x-www-form-urlencoded
response = http.request(
    'POST',
    'https://httpbin.org/post',
    fields=form_data,
    encode_multipart=False  # fields are encoded as multipart/form-data by default
)

print(f'Status: {response.status}')
print(f'Response: {response.data.decode("utf-8")}')
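
If you prefer to build the body yourself, urllib.parse.urlencode from the standard library produces the same wire format; this sketch is equivalent to passing encode_multipart=False:

from urllib.parse import urlencode

# Manually urlencode the fields and set the matching Content-Type
response = http.request(
    'POST',
    'https://httpbin.org/post',
    body=urlencode(form_data),
    headers={'Content-Type': 'application/x-www-form-urlencoded'}
)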

File Upload Example

Upload files using multipart/form-data:

import urllib3

http = urllib3.PoolManager()

# Upload a file
with open('document.pdf', 'rb') as f:
    file_data = f.read()

response = http.request(
    'POST',
    'https://httpbin.org/post',
    fields={
        # A (filename, data, MIME type) tuple is sent as a file part
        'file': ('document.pdf', file_data, 'application/pdf'),
        'description': 'Important document'
    }
)

print(f'Upload status: {response.status}')
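
Reading the whole file into memory is fine for small uploads, but urllib3 also accepts a file-like object as the raw request body and streams it. A sketch (this sends the bare bytes rather than a multipart form, and large_video.mp4 is a placeholder filename):

# Stream the file as the raw request body instead of loading it into memory
with open('large_video.mp4', 'rb') as f:
    response = http.request(
        'POST',
        'https://httpbin.org/post',
        body=f,
        headers={'Content-Type': 'application/octet-stream'}
    )

print(f'Upload status: {response.status}')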

Advanced POST Request with Authentication

Example with custom headers and authentication:

import urllib3
import json

http = urllib3.PoolManager()

headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer your-api-token',
    'User-Agent': 'MyApp/1.0'
}

data = {
    'title': 'New Article',
    'content': 'Article content here',
    'published': True
}

response = http.request(
    'POST',
    'https://api.example.com/articles',
    body=json.dumps(data).encode('utf-8'),
    headers=headers
)

if response.status == 201:
    result = json.loads(response.data.decode('utf-8'))
    print(f'Created article with ID: {result.get("id")}')
else:
    print(f'Error: {response.status} - {response.data.decode("utf-8")}')
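
One advanced knob worth knowing: urllib3 does not retry POST requests after a dropped connection, because POST is not idempotent and is excluded from Retry's default allowed methods. If your endpoint tolerates replays, you can opt in per request. A sketch extending the example above, assuming urllib3 >= 1.26 for the allowed_methods name:

# POST is excluded from retries by default; opt in explicitly
retry = urllib3.Retry(
    total=3,
    allowed_methods={'POST'},
    status_forcelist=[502, 503, 504]  # also retry on these status codes
)

response = http.request(
    'POST',
    'https://api.example.com/articles',
    body=json.dumps(data).encode('utf-8'),
    headers=headers,
    retries=retry
)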

Error Handling Best Practices

Always implement proper error handling in production:

import urllib3
import json

http = urllib3.PoolManager()

try:
    response = http.request(
        'POST',
        'https://api.example.com/data',
        body=json.dumps({'key': 'value'}).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
        timeout=10.0  # seconds, applied to both the connect and read phases
    )

    if response.status == 200:
        data = json.loads(response.data.decode('utf-8'))
        print('Success:', data)
    else:
        print(f'HTTP Error: {response.status}')

except urllib3.exceptions.TimeoutError:
    print('Request timed out')
except urllib3.exceptions.MaxRetryError as e:
    print(f'Max retries exceeded: {e.reason}')
except urllib3.exceptions.HTTPError as e:
    # HTTPError is the base class of urllib3's exceptions (including
    # TimeoutError and MaxRetryError), so it must come last
    print(f'HTTP Error: {e}')
except json.JSONDecodeError:
    print('Invalid JSON response')
except Exception as e:
    print(f'Unexpected error: {e}')
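
The single timeout=10.0 above applies to both the connect and read phases. urllib3's Timeout object lets you bound each phase separately, which helps distinguish an unreachable host from a slow response. A minimal sketch (the values are illustrative):

# Fail fast on unreachable hosts, but allow slow responses to finish
timeout = urllib3.Timeout(connect=2.0, read=10.0)

response = http.request(
    'POST',
    'https://api.example.com/data',
    body=json.dumps({'key': 'value'}).encode('utf-8'),
    headers={'Content-Type': 'application/json'},
    timeout=timeout
)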

Key Features

  • Connection Pooling: Automatically reuses connections for better performance
  • Thread Safety: A single PoolManager is safe to share across multiple threads
  • Automatic Retries: Connection failures are retried up to three times by default (configurable; see the sketch below)
  • SSL/TLS Support: Secure HTTPS connections with certificate verification
  • Timeout Control: Configurable per request or pool-wide, as shown below
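
The retry and timeout behavior from the list above can be set once on the PoolManager and inherited by every request it makes. A minimal sketch (the values are illustrative):

import urllib3

# Pool-wide defaults: every request through this manager inherits them
http = urllib3.PoolManager(
    retries=urllib3.Retry(total=5, backoff_factor=0.5),
    timeout=urllib3.Timeout(connect=2.0, read=10.0)
)

# Individual requests can still override either setting
response = http.request('POST', 'https://httpbin.org/post', timeout=30.0)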

urllib3 handles the low-level HTTP details while giving you full control over request construction and response handling, making it ideal for both simple API calls and complex web scraping scenarios.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
