How do I pass URL parameters with my GET request using Requests?

URL parameters (also called query parameters) are key-value pairs appended to a URL after the ? symbol. They're commonly used for filtering, searching, pagination, and API configuration. The Python requests library provides several ways to handle URL parameters efficiently.

Basic Parameter Passing with params

The most common and recommended approach is to pass a dictionary to the params argument of requests.get():

import requests

url = 'https://api.github.com/search/repositories'
params = {
    'q': 'python',
    'sort': 'stars',
    'order': 'desc',
    'per_page': 10
}

response = requests.get(url, params=params)
print(f"Request URL: {response.url}")
# Output: https://api.github.com/search/repositories?q=python&sort=stars&order=desc&per_page=10
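A convenient detail: requests drops any parameter whose value is None, which makes optional filters easy to express:

import requests

url = 'https://api.github.com/search/repositories'
params = {
    'q': 'python',
    'sort': 'stars',
    'order': None  # None values are omitted from the query string
}

response = requests.get(url, params=params)
print(response.url)
# Output: https://api.github.com/search/repositories?q=python&sort=stars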

Real-World Examples

Search API with Multiple Parameters

import requests

# Search for products with filters
url = 'https://api.example.com/products'
search_params = {
    'category': 'electronics',
    'min_price': 100,
    'max_price': 500,
    'brand': 'apple',
    'in_stock': True,  # requests serializes booleans as 'True'/'False'; some APIs expect lowercase 'true'
    'page': 1,
    'limit': 20
}

response = requests.get(url, params=search_params)

if response.status_code == 200:
    products = response.json()
    print(f"Found {len(products['items'])} products")
else:
    print(f"Error: {response.status_code}")

Handling Arrays and Lists

import requests

# Multiple values for the same parameter
url = 'https://api.example.com/data'
params = {
    'tags': ['python', 'web-scraping', 'api'],
    'status': ['active', 'pending']
}

response = requests.get(url, params=params)
print(response.url)
# Output: https://api.example.com/data?tags=python&tags=web-scraping&tags=api&status=active&status=pending
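Repeated keys (tags=python&tags=web-scraping) are what requests produces by default, but some APIs expect a single comma-separated value instead; check the API documentation, and if needed join the list yourself:

import requests

url = 'https://api.example.com/data'
params = {'tags': ','.join(['python', 'web-scraping', 'api'])}

response = requests.get(url, params=params)
print(response.url)
# Output: https://api.example.com/data?tags=python%2Cweb-scraping%2Capi
# (commas are percent-encoded as %2C; the server decodes them back)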

Alternative Methods

Manual URL Construction

import requests
from urllib.parse import urlencode

# Method 1: String formatting (fragile: interpolated values are not
# URL-encoded, so spaces or '&' in the query can break the request)
base_url = 'https://api.example.com/search'
query = 'python web scraping'
url = f"{base_url}?q={query}&type=repositories"

# Method 2: Using urlencode for complex parameters
params = {'q': 'python web scraping', 'type': 'repositories', 'sort': 'updated'}
url = f"{base_url}?{urlencode(params)}"

response = requests.get(url)
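
One caveat with Method 2: if a value is a list, plain urlencode() stringifies the whole list. Pass doseq=True to expand it into repeated keys, matching what requests does with params:

from urllib.parse import urlencode

params = {'tags': ['python', 'api'], 'sort': 'updated'}
print(urlencode(params, doseq=True))
# Output: tags=python&tags=api&sort=updated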

Pre-encoded URL

import requests

# When you already have a complete URL with parameters
full_url = 'https://api.example.com/data?name=john&age=30&city=new%20york'
response = requests.get(full_url)
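
A pre-encoded URL and the params argument can also be combined; requests appends the extra parameters to the existing query string:

import requests

full_url = 'https://api.example.com/data?name=john&age=30'
response = requests.get(full_url, params={'city': 'new york'})
print(response.url)
# Output: https://api.example.com/data?name=john&age=30&city=new+york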

Handling Special Characters and Encoding

The params dictionary automatically handles URL encoding:

import requests

url = 'https://api.example.com/search'
params = {
    'query': 'hello world & special chars!',
    'email': 'user@example.com',
    'date': '2024-01-01',
    'tags': 'python, web-scraping, api'
}

response = requests.get(url, params=params)
print(response.url)
# Parameters are automatically URL-encoded
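To inspect the encoded URL without actually sending anything, you can build a PreparedRequest first:

import requests

url = 'https://api.example.com/search'
params = {'query': 'hello world & special chars!', 'email': 'user@example.com'}

# Prepare the request without sending it, then inspect the final URL
prepared = requests.Request('GET', url, params=params).prepare()
print(prepared.url)
# Output: https://api.example.com/search?query=hello+world+%26+special+chars%21&email=user%40example.com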

Complete Error Handling Example

import requests
from requests.exceptions import RequestException, HTTPError, ConnectionError, Timeout

def fetch_data_with_params(base_url, params):
    try:
        response = requests.get(base_url, params=params, timeout=10)
        response.raise_for_status()  # Raises HTTPError for bad status codes

        # Handle JSON response
        if 'application/json' in response.headers.get('content-type', ''):
            return response.json()
        else:
            return response.text

    except HTTPError as e:
        print(f"HTTP Error: {e.response.status_code} - {e}")
    except ConnectionError:
        print("Connection Error: Unable to connect to the server")
    except Timeout:
        print("Timeout Error: Request took too long")
    except RequestException as e:
        print(f"Request Error: {e}")

    return None

# Usage
url = 'https://jsonplaceholder.typicode.com/posts'
params = {'userId': 1, '_limit': 5}
data = fetch_data_with_params(url, params)

if data:
    print(f"Retrieved {len(data)} posts")

Best Practices

  1. Always use the params dictionary instead of manual string concatenation
  2. Let requests handle encoding rather than escaping special characters yourself
  3. Validate parameters before sending requests
  4. Use meaningful parameter names that match the API documentation
  5. Implement proper error handling for network and HTTP errors

Putting these practices together, a reusable helper might look like this:

import requests

def make_api_request(endpoint, **kwargs):
    """Generic function for making API requests with parameters"""
    base_url = 'https://api.example.com'
    url = f"{base_url}/{endpoint}"

    # Drop None values (requests would skip them anyway, but this keeps params explicit)
    params = {k: v for k, v in kwargs.items() if v is not None}

    try:
        response = requests.get(url, params=params, timeout=10)
        response.raise_for_status()
        return response.json()
    except requests.RequestException as e:
        print(f"API request failed: {e}")
        return None

# Usage
data = make_api_request('users', page=1, limit=10, status='active')

URL parameters are essential for API interactions and web scraping. Using the params parameter in requests.get() is the most reliable and readable approach for handling query parameters in Python.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
