How Do I Handle User Agent Requirements When Scraping?

User agents are crucial identifiers that web browsers send to servers to indicate what type of client is making the request. When web scraping, many websites check user agents to determine whether to serve content, block requests, or apply rate limiting. Understanding how to properly handle user agent requirements is essential for successful scraping with Simple HTML DOM and other scraping tools.

Understanding User Agents in Web Scraping

A user agent string contains information about the browser, operating system, and rendering engine making the request. Websites use this information to:

  • Serve appropriate content versions (mobile vs desktop)
  • Block automated scrapers and bots
  • Implement security measures
  • Gather analytics about their visitors
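
For reference, a typical desktop user agent string decomposes into a platform token, a rendering-engine token, and browser tokens. A minimal annotated sketch:

// Anatomy of a Chrome-on-Windows user agent string
$ua = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '  // platform token: OS and CPU architecture
    . 'AppleWebKit/537.36 (KHTML, like Gecko) '     // rendering engine token
    . 'Chrome/120.0.0.0 Safari/537.36';             // browser and compatibility tokens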

By default, Simple HTML DOM's file_get_html() relies on PHP's HTTP stream wrapper, which sends no User-Agent header at all unless the user_agent ini setting is configured. Requests with a missing or generic user agent are easy to flag as automated tools, leading to blocked requests or limited access to content.
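
If you cannot pass a stream context everywhere, PHP also lets you set a process-wide default user agent that the HTTP stream wrapper picks up automatically. A minimal sketch:

<?php
// Set a process-wide default User-Agent for PHP's HTTP stream wrapper.
// Any file_get_html()/file_get_contents() call made without an explicit
// context will now send this header instead of none at all.
ini_set('user_agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36');

$html = file_get_html('https://example.com');
?>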

Setting User Agents in Simple HTML DOM

Basic User Agent Configuration

Here's how to set a custom user agent in Simple HTML DOM:

<?php
require_once 'simple_html_dom.php';

// Create a context with custom user agent
$context = stream_context_create([
    'http' => [
        'method' => 'GET',
        'header' => [
            'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
        ]
    ]
]);

// Load HTML with custom context
$html = file_get_html('https://example.com', false, $context);

if ($html) {
    // Process the DOM
    foreach ($html->find('h1') as $element) {
        echo $element->plaintext . "\n";
    }
    $html->clear();
}
?>

Advanced User Agent Management

For more sophisticated user agent handling, create a dedicated class:

<?php
class UserAgentManager {
    private $userAgents = [
        'chrome_windows' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
        'firefox_windows' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:120.0) Gecko/20100101 Firefox/120.0',
        'safari_mac' => 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15',
        'chrome_android' => 'Mozilla/5.0 (Linux; Android 10; SM-G975F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Mobile Safari/537.36'
    ];

    public function getRandomUserAgent() {
        $keys = array_keys($this->userAgents);
        $randomKey = $keys[array_rand($keys)];
        return $this->userAgents[$randomKey];
    }

    public function createContext($userAgentKey = null) {
        // Fall back to a random agent when no (or an unknown) key is given
        $userAgent = $this->userAgents[$userAgentKey] ?? $this->getRandomUserAgent();

        return stream_context_create([
            'http' => [
                'method' => 'GET',
                'header' => [
                    "User-Agent: $userAgent",
                    'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                    'Accept-Language: en-US,en;q=0.5'
                    // Accept-Encoding and Connection are omitted on purpose:
                    // PHP's stream wrapper does not decompress gzip responses,
                    // and keep-alive can stall one-shot stream requests.
                ]
            ]
        ]);
    }
}

// Usage
$uaManager = new UserAgentManager();
$context = $uaManager->createContext('chrome_windows');
$html = file_get_html('https://example.com', false, $context);
?>

User Agent Rotation Strategy

To avoid detection, implement user agent rotation:

<?php
class RotatingUserAgentScraper {
    private $userAgents;
    private $currentIndex = 0;

    public function __construct() {
        $this->userAgents = [
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
            'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:120.0) Gecko/20100101 Firefox/120.0'
        ];
    }

    private function getNextUserAgent() {
        $userAgent = $this->userAgents[$this->currentIndex];
        $this->currentIndex = ($this->currentIndex + 1) % count($this->userAgents);
        return $userAgent;
    }

    public function scrapeUrl($url) {
        $userAgent = $this->getNextUserAgent();

        $context = stream_context_create([
            'http' => [
                'method' => 'GET',
                'header' => "User-Agent: $userAgent\r\n"
            ]
        ]);

        $html = file_get_html($url, false, $context);

        if ($html) {
            echo "Scraped with User Agent: $userAgent\n";
            return $html;
        }

        return false;
    }
}

// Usage
$scraper = new RotatingUserAgentScraper();
$urls = ['https://example1.com', 'https://example2.com', 'https://example3.com'];

foreach ($urls as $url) {
    $html = $scraper->scrapeUrl($url);
    if ($html) {
        // Process the HTML
        $html->clear();
    }
    sleep(1); // Rate limiting
}
?>

User Agent Best Practices

1. Use Realistic User Agent Strings

Always use legitimate user agent strings from real browsers. Avoid generic or obviously fake user agents:

// Good - Real Chrome user agent
$goodUA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36';

// Bad - Obviously fake or generic
$badUA = 'Bot/1.0';
$badUA2 = 'MyCustomScraper/1.0';

2. Match User Agent with Expected Behavior

When using mobile user agents, ensure your scraping behavior matches mobile browsing patterns. For complex scenarios requiring JavaScript execution, consider handling browser sessions in Puppeteer for more sophisticated user agent management.
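
A mobile user agent typically receives mobile markup, so your selectors must target the mobile DOM. A sketch, assuming a hypothetical .mobile-nav container:

// Pair a mobile user agent with selectors written for the mobile page variant
$mobileUA = 'Mozilla/5.0 (Linux; Android 10; SM-G975F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Mobile Safari/537.36';

$context = stream_context_create([
    'http' => ['header' => "User-Agent: $mobileUA\r\n"]
]);

$html = file_get_html('https://example.com', false, $context);
if ($html) {
    // The mobile variant often uses different markup than desktop;
    // .mobile-nav here is a hypothetical, site-specific selector.
    foreach ($html->find('.mobile-nav a') as $link) {
        echo $link->href . "\n";
    }
    $html->clear();
}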

3. Include Complete Headers

Don't just set the User-Agent header; send the other headers a real browser would (limited to ones PHP's stream wrapper can honor):

$context = stream_context_create([
    'http' => [
        'method' => 'GET',
        'header' => [
            'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
            'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
            'Accept-Language: en-US,en;q=0.9',
            'DNT: 1',
            'Upgrade-Insecure-Requests: 1'
            // Accept-Encoding and Connection are skipped with PHP streams:
            // the wrapper neither decompresses gzip nor reuses connections.
        ]
    ]
]);

Handling User Agent Detection

Fingerprint Consistency

Maintain consistency between your user agent and other request characteristics:

class ConsistentScraper {
    private $profiles = [
        'chrome_desktop' => [
            'user_agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
            'accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
            'accept_language' => 'en-US,en;q=0.9'
        ],
        'firefox_desktop' => [
            'user_agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:120.0) Gecko/20100101 Firefox/120.0',
            'accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
            'accept_language' => 'en-US,en;q=0.5'
        ]
    ];

    public function scrapeWithProfile($url, $profileName) {
        $profile = $this->profiles[$profileName] ?? null;
        if ($profile === null) {
            throw new InvalidArgumentException("Unknown profile: $profileName");
        }

        $context = stream_context_create([
            'http' => [
                'method' => 'GET',
                'header' => [
                    "User-Agent: {$profile['user_agent']}",
                    "Accept: {$profile['accept']}",
                    "Accept-Language: {$profile['accept_language']}",
                    "Accept-Encoding: {$profile['accept_encoding']}"
                ]
            ]
        ]);

        return file_get_html($url, false, $context);
    }
}
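
Usage mirrors the earlier examples:

$scraper = new ConsistentScraper();
$html = $scraper->scrapeWithProfile('https://example.com', 'firefox_desktop');

if ($html) {
    // Process the DOM, then free memory
    $html->clear();
}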

Error Handling for Blocked Requests

Implement proper error handling when user agents are rejected:

function scrapeWithFallback($url, $userAgents) {
    foreach ($userAgents as $userAgent) {
        $context = stream_context_create([
            'http' => [
                'method' => 'GET',
                'header' => "User-Agent: $userAgent\r\n",
                'timeout' => 30
            ]
        ]);

        $html = @file_get_html($url, false, $context);

        if ($html !== false) {
            echo "Success with: $userAgent\n";
            return $html;
        }

        echo "Failed with: $userAgent\n";
        sleep(2); // Wait before trying next user agent
    }

    throw new Exception("All user agents failed for URL: $url");
}
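
Because the function throws when every agent fails, wrap calls in a try/catch:

try {
    $html = scrapeWithFallback('https://example.com', [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:120.0) Gecko/20100101 Firefox/120.0'
    ]);
    // Process the DOM, then call $html->clear();
} catch (Exception $e) {
    error_log($e->getMessage());
}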

Alternative Approaches

Using cURL for Better Control

For more advanced user agent management, consider using cURL with Simple HTML DOM:

function scrapeWithCurl($url, $userAgent) {
    $ch = curl_init();

    curl_setopt_array($ch, [
        CURLOPT_URL => $url,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_USERAGENT => $userAgent,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_MAXREDIRS => 3,
        CURLOPT_TIMEOUT => 30,
        CURLOPT_ENCODING => '', // let cURL negotiate and transparently decompress gzip/deflate/br
        // SSL peer verification is left at its secure default (enabled)
        CURLOPT_HTTPHEADER => [
            'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Language: en-US,en;q=0.5',
            'Cache-Control: no-cache'
        ]
    ]);

    $htmlContent = curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($httpCode === 200 && $htmlContent !== false) {
        return str_get_html($htmlContent);
    }

    return false;
}
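
A quick usage sketch (remember to include simple_html_dom.php so str_get_html() is available):

require_once 'simple_html_dom.php';

$ua = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36';
$html = scrapeWithCurl('https://example.com', $ua);

if ($html) {
    echo $html->find('title', 0)->plaintext . "\n";
    $html->clear();
}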

JavaScript-Heavy Sites

For websites that heavily rely on JavaScript and sophisticated bot detection, consider using browser automation tools like Puppeteer for handling AJAX requests, which provide more realistic user agent handling and JavaScript execution capabilities.

Testing User Agent Effectiveness

Create a testing function to verify your user agent configuration:

function testUserAgent($userAgent) {
    $testUrl = 'https://httpbin.org/user-agent';

    $context = stream_context_create([
        'http' => [
            'method' => 'GET',
            'header' => "User-Agent: $userAgent\r\n"
        ]
    ]);

    $result = @file_get_contents($testUrl, false, $context);
    if ($result === false) {
        echo "Request failed for: $userAgent\n\n";
        return;
    }
    $data = json_decode($result, true);

    echo "Sent User Agent: $userAgent\n";
    echo "Received User Agent: " . $data['user-agent'] . "\n";
    echo "Match: " . ($userAgent === $data['user-agent'] ? 'Yes' : 'No') . "\n\n";
}

// Test different user agents
$userAgents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15'
];

foreach ($userAgents as $ua) {
    testUserAgent($ua);
}

Console Testing Commands

Test your user agent implementation with these console commands:

# Test user agent with cURL
curl -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36" https://httpbin.org/user-agent

# Check the default user agent curl itself sends when none is set
curl https://httpbin.org/user-agent

# Test user agent rotation script
php test_user_agents.php

# Monitor HTTP headers being sent
curl -v -H "User-Agent: Custom-Agent/1.0" https://example.com

Conclusion

Proper user agent handling is essential for successful web scraping with Simple HTML DOM. By implementing realistic user agent strings, rotating them appropriately, and maintaining consistency with other request headers, you can significantly improve your scraping success rate while avoiding detection. Remember to always respect website terms of service and implement appropriate rate limiting in your scraping applications.

For more advanced scenarios involving complex user interactions and JavaScript-heavy sites, consider combining Simple HTML DOM with more sophisticated tools that can handle modern web applications more effectively.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
