What are the best methods for handling large datasets when scraping with PHP?

When scraping large websites or processing millions of records, PHP applications can quickly run into memory limitations and performance bottlenecks. This comprehensive guide covers essential techniques for handling large datasets efficiently while maintaining optimal performance and resource usage.

Understanding PHP Memory Limitations

PHP enforces a per-script memory limit that causes scripts to fail once they allocate too much memory while processing large datasets. The default memory_limit is typically 128MB, which can be exhausted quickly when scraping large websites.

// Check current memory limit
echo "Memory limit: " . ini_get('memory_limit') . "\n";

// Increase memory limit (use cautiously)
ini_set('memory_limit', '512M');

// Monitor memory usage during scraping
echo "Memory usage: " . memory_get_usage(true) / 1024 / 1024 . " MB\n";
echo "Peak memory: " . memory_get_peak_usage(true) / 1024 / 1024 . " MB\n";

Streaming and Iterator-Based Processing

Instead of loading entire datasets into memory, use streaming approaches to process data incrementally:

Generator Functions for Large Data Processing

function scrapeUrlsGenerator($urls) {
    foreach ($urls as $url) {
        $data = @file_get_contents($url);
        if ($data === false) {
            continue; // skip unreachable URLs instead of parsing a failed response
        }

        $dom = new DOMDocument();
        @$dom->loadHTML($data);

        yield extractDataFromDOM($dom);

        // Free memory after each iteration
        unset($data, $dom);
    }
}

// Process millions of URLs without loading them all into memory.
// Generate the URL list lazily too, so the source list never
// materializes as a million-element array.
function urlGenerator($count) {
    for ($i = 1; $i <= $count; $i++) {
        yield "https://example.com/page/{$i}";
    }
}

foreach (scrapeUrlsGenerator(urlGenerator(1000000)) as $scrapedData) {
    // Process each result individually
    saveToDatabase($scrapedData);
}

Chunked Processing with Arrays

function processInChunks($data, $chunkSize = 1000) {
    // array_slice per offset avoids array_chunk(), which would copy
    // the entire dataset into a second array up front
    $total = count($data);

    for ($offset = 0; $offset < $total; $offset += $chunkSize) {
        foreach (array_slice($data, $offset, $chunkSize) as $item) {
            yield scrapeItem($item);
        }

        // Force garbage collection after each chunk
        gc_collect_cycles();
    }
}

$largeDataset = fetchLargeDataset();
foreach (processInChunks($largeDataset, 500) as $processedItem) {
    handleScrapedData($processedItem);
}

Database-Driven Strategies

For extremely large datasets, implement database-driven approaches to manage data efficiently:

Using Database Cursors for Large Result Sets

class DatabaseScraper {
    private $pdo;
    private $writePdo;

    public function __construct($dsn, $username, $password) {
        // Use separate read and write connections: MySQL cannot execute
        // other queries on a connection while an unbuffered result set
        // is still open.
        $this->pdo = new PDO($dsn, $username, $password);
        $this->pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $this->writePdo = new PDO($dsn, $username, $password);
        $this->writePdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    }

    public function processLargeDataset($query) {
        // Use unbuffered queries so rows stream from the server
        // instead of being loaded into memory all at once
        $this->pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

        $stmt = $this->pdo->prepare($query);
        $stmt->execute();

        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            $scrapedData = $this->scrapeUrl($row['url']);
            $this->saveProcessedData($scrapedData);

            // Free memory
            unset($scrapedData);
        }

        $stmt->closeCursor();
    }

    private function scrapeUrl($url) {
        // Your scraping logic here
        return ['url' => $url, 'data' => 'scraped_content'];
    }

    private function saveProcessedData($data) {
        // Writes go through the second connection so the unbuffered
        // read cursor stays open
        $insertStmt = $this->writePdo->prepare(
            "INSERT INTO scraped_results (url, content) VALUES (?, ?)"
        );
        $insertStmt->execute([$data['url'], $data['data']]);
    }
}

Batch Processing with Queue Systems

class ScrapingQueue {
    private $redis;

    public function __construct() {
        $this->redis = new Redis();
        $this->redis->connect('127.0.0.1', 6379);
    }

    public function addUrlsToQueue($urls) {
        // Encode each URL as a JSON object so consumers can read
        // $item['url'] after json_decode()
        foreach (array_chunk($urls, 100) as $chunk) {
            $encoded = array_map(function ($url) {
                return json_encode(['url' => $url]);
            }, $chunk);
            $this->redis->lPush('scraping_queue', ...$encoded);
        }
    }

    public function processQueue($batchSize = 10) {
        while ($this->redis->lLen('scraping_queue') > 0) {
            $batch = [];

            for ($i = 0; $i < $batchSize && $this->redis->lLen('scraping_queue') > 0; $i++) {
                $item = $this->redis->rPop('scraping_queue');
                if ($item) {
                    $batch[] = json_decode($item, true);
                }
            }

            if (!empty($batch)) {
                $this->processBatch($batch);
            }

            // Prevent overwhelming the system
            usleep(100000); // 100ms delay
        }
    }

    private function processBatch($batch) {
        $multiHandle = curl_multi_init();
        $curlHandles = [];

        foreach ($batch as $item) {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $item['url']);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 30);

            curl_multi_add_handle($multiHandle, $ch);
            $curlHandles[] = $ch;
        }

        // Execute all requests concurrently
        do {
            $status = curl_multi_exec($multiHandle, $running);
            if ($running > 0 && curl_multi_select($multiHandle) === -1) {
                usleep(1000); // select failed; back off briefly to avoid busy-waiting
            }
        } while ($running > 0 && $status === CURLM_OK);

        // Process results
        foreach ($curlHandles as $index => $ch) {
            $content = curl_multi_getcontent($ch);
            $this->saveScrapedData($batch[$index]['url'], $content);

            curl_multi_remove_handle($multiHandle, $ch);
            curl_close($ch);
        }

        curl_multi_close($multiHandle);
    }
}

Memory Optimization Techniques

Proper Resource Management

class MemoryEfficientScraper {
    private $maxMemoryUsage;

    public function __construct($maxMemoryMB = 256) {
        $this->maxMemoryUsage = $maxMemoryMB * 1024 * 1024;
    }

    public function scrapeWithMemoryControl($urls) {
        foreach ($urls as $url) {
            // Check memory usage before processing
            if (memory_get_usage(true) > $this->maxMemoryUsage) {
                $this->freeMemory();

                if (memory_get_usage(true) > $this->maxMemoryUsage) {
                    throw new Exception("Memory limit exceeded");
                }
            }

            $this->scrapeAndProcess($url);
        }
    }

    private function freeMemory() {
        // Force garbage collection
        gc_collect_cycles();

        // Clear any large variables
        $this->clearCaches();
    }

    private function scrapeAndProcess($url) {
        $data = $this->fetchUrl($url);
        $processed = $this->processData($data);
        $this->saveData($processed);

        // Explicitly free variables
        unset($data, $processed);
    }
}

Using Temporary Files for Large Data

function processLargeDataWithTempFiles($largeDataset) {
    $tempFile = tmpfile();

    // Write data to a temporary file instead of keeping it in memory
    foreach ($largeDataset as $item) {
        fwrite($tempFile, json_encode($item) . "\n");
    }

    rewind($tempFile);

    // Process data from file
    while (($line = fgets($tempFile)) !== false) {
        $item = json_decode(trim($line), true);
        $result = processItem($item);

        // Save result immediately
        saveToDatabase($result);
    }

    fclose($tempFile); // Automatically deletes temp file
}

Concurrent Processing Strategies

Multi-Process Scraping with pcntl

Note that pcntl is available only in CLI environments, and each forked child inherits copies of the parent's resources, so children should open their own database and network connections.

class MultiProcessScraper {
    private $maxProcesses;
    private $processes = [];

    public function __construct($maxProcesses = 4) {
        $this->maxProcesses = $maxProcesses;
    }

    public function scrapeUrls($urls) {
        $chunks = array_chunk($urls, ceil(count($urls) / $this->maxProcesses));

        foreach ($chunks as $chunk) {
            $pid = pcntl_fork();

            if ($pid === -1) {
                throw new Exception("Could not fork process");
            } elseif ($pid === 0) {
                // Child process
                $this->processChunk($chunk);
                exit(0);
            } else {
                // Parent process
                $this->processes[] = $pid;
            }
        }

        // Wait for all child processes to complete
        foreach ($this->processes as $pid) {
            pcntl_waitpid($pid, $status);
        }
    }

    private function processChunk($urls) {
        foreach ($urls as $url) {
            $data = $this->scrapeUrl($url);
            $this->saveData($data);
        }
    }
}

Advanced Performance Optimization

Connection Pooling and Reuse

class ConnectionPoolScraper {
    private $curlMultiHandle;
    private $availableHandles = [];
    private $activeHandles = [];

    public function __construct($poolSize = 10) {
        $this->curlMultiHandle = curl_multi_init();

        // Pre-create curl handles
        for ($i = 0; $i < $poolSize; $i++) {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 30);
            curl_setopt($ch, CURLOPT_USERAGENT, 'PHP Scraper 1.0');

            $this->availableHandles[] = $ch;
        }
    }

    public function scrapeUrlsBatch($urls) {
        $batches = array_chunk($urls, count($this->availableHandles));

        foreach ($batches as $batch) {
            // Process each batch's results immediately instead of
            // accumulating every result in memory
            foreach ($this->executeBatch($batch) as $result) {
                $this->processResult($result);
            }
        }
    }

    private function executeBatch($urls) {
        $handles = [];
        $results = [];

        // Assign URLs to available handles
        foreach ($urls as $index => $url) {
            if (!empty($this->availableHandles)) {
                $ch = array_pop($this->availableHandles);
                curl_setopt($ch, CURLOPT_URL, $url);
                curl_multi_add_handle($this->curlMultiHandle, $ch);
                $handles[$index] = $ch;
            }
        }

        // Execute requests
        do {
            $status = curl_multi_exec($this->curlMultiHandle, $running);
            if ($running > 0 && curl_multi_select($this->curlMultiHandle) === -1) {
                usleep(1000); // select failed; back off briefly to avoid busy-waiting
            }
        } while ($running > 0 && $status === CURLM_OK);

        // Collect results and return handles to pool
        foreach ($handles as $index => $ch) {
            $content = curl_multi_getcontent($ch);
            $results[$index] = [
                'url' => $urls[$index],
                'content' => $content,
                'info' => curl_getinfo($ch)
            ];

            curl_multi_remove_handle($this->curlMultiHandle, $ch);
            $this->availableHandles[] = $ch; // Return to pool
        }

        return $results;
    }
}

Monitoring and Error Handling

Resource Monitoring During Scraping

class ResourceMonitor {
    private $startTime;
    private $startMemory;
    private $logFile;

    public function __construct($logFile = 'scraping_monitor.log') {
        $this->startTime = microtime(true);
        $this->startMemory = memory_get_usage(true);
        $this->logFile = $logFile;
    }

    public function logProgress($itemsProcessed, $totalItems) {
        $currentTime = microtime(true);
        $currentMemory = memory_get_usage(true);
        $peakMemory = memory_get_peak_usage(true);

        $elapsedTime = max($currentTime - $this->startTime, 0.001);
        $memoryDiff = $currentMemory - $this->startMemory;
        $itemsPerSecond = $itemsProcessed / $elapsedTime;
        $estimatedTimeRemaining = $itemsPerSecond > 0
            ? ($totalItems - $itemsProcessed) / $itemsPerSecond
            : 0;

        $logData = [
            'timestamp' => date('Y-m-d H:i:s'),
            'items_processed' => $itemsProcessed,
            'total_items' => $totalItems,
            'progress_percent' => round(($itemsProcessed / $totalItems) * 100, 2),
            'items_per_second' => round($itemsPerSecond, 2),
            'estimated_time_remaining' => round($estimatedTimeRemaining, 2),
            'current_memory_mb' => round($currentMemory / 1024 / 1024, 2),
            'peak_memory_mb' => round($peakMemory / 1024 / 1024, 2),
            'memory_diff_mb' => round($memoryDiff / 1024 / 1024, 2)
        ];

        file_put_contents(
            $this->logFile, 
            json_encode($logData) . "\n", 
            FILE_APPEND | LOCK_EX
        );

        echo "Progress: {$logData['progress_percent']}% | " .
             "Speed: {$logData['items_per_second']} items/sec | " .
             "Memory: {$logData['current_memory_mb']} MB\n";
    }
}

Best Practices for Large Dataset Scraping

1. Implement Graceful Degradation

Always have fallback mechanisms when memory or processing limits are reached.
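One way to sketch this idea: shrink the batch size when memory pressure rises, instead of aborting the whole run. This is a minimal illustration, not code from the article; the function and variable names are invented for the example.

```php
<?php
// Graceful degradation sketch: process items in batches, and halve the
// batch size whenever memory usage crosses 80% of the configured limit.
function processWithFallback($items, $batchSize, callable $process) {
    $threshold = 0.8 * memoryLimitBytes(); // degrade before the hard limit

    while (!empty($items)) {
        $batch = array_splice($items, 0, $batchSize);
        $process($batch);

        if (memory_get_usage(true) > $threshold && $batchSize > 1) {
            $batchSize = max(1, intdiv($batchSize, 2)); // degrade gracefully
            gc_collect_cycles();
        }
    }
}

// Convert the memory_limit ini setting (e.g. "128M") into bytes
function memoryLimitBytes() {
    $value = ini_get('memory_limit');
    if ($value === '-1') {
        return PHP_INT_MAX; // unlimited
    }
    $number = (int) $value;
    switch (strtoupper(substr($value, -1))) {
        case 'G': return $number * 1024 * 1024 * 1024;
        case 'M': return $number * 1024 * 1024;
        case 'K': return $number * 1024;
    }
    return $number;
}
```

In a real scraper, $process would fetch and persist a batch of URLs; the point is that the loop keeps making progress at a smaller batch size rather than dying with a fatal memory error.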

2. Use Appropriate Data Structures

Choose efficient data structures like SplFixedArray for known-size datasets.
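For integer-indexed data of known size, the difference is easy to measure: SplFixedArray stores plain values without the bucket overhead of a PHP hash-table array. A quick comparison (exact numbers vary by PHP version):

```php
<?php
// Compare the memory footprint of a plain array vs SplFixedArray
// for a known-size, integer-indexed dataset.
$n = 100000;

$before = memory_get_usage();
$plain = [];
for ($i = 0; $i < $n; $i++) {
    $plain[$i] = $i;
}
$plainBytes = memory_get_usage() - $before;

$before = memory_get_usage();
$fixed = new SplFixedArray($n); // size fixed up front
for ($i = 0; $i < $n; $i++) {
    $fixed[$i] = $i;
}
$fixedBytes = memory_get_usage() - $before;

printf("Plain array: %.2f MB, SplFixedArray: %.2f MB\n",
    $plainBytes / 1024 / 1024, $fixedBytes / 1024 / 1024);
```

The trade-off is that SplFixedArray has a fixed size and integer keys only, so it suits buffers of scraped records, not arbitrary keyed data.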

3. Implement Checkpointing

Save progress regularly so you can resume from the last processed item if the script fails.

4. Rate Limiting and Politeness

When dealing with large datasets, implement proper delays to avoid overwhelming target servers.
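A fixed usleep() works, but a token-bucket limiter adapts better when requests take variable time. The sketch below is illustrative (the class name and the rate/burst values are not from the article); call acquire() before each HTTP request.

```php
<?php
// Token-bucket rate limiter: allows short bursts while capping the
// steady request rate, which is politer than hammering a site and
// less wasteful than a fixed sleep after every request.
class TokenBucket {
    private $ratePerSecond;
    private $burst;
    private $tokens;
    private $lastRefill;

    public function __construct($ratePerSecond, $burst) {
        $this->ratePerSecond = $ratePerSecond; // steady-state requests/sec
        $this->burst = $burst;                 // max tokens accumulated
        $this->tokens = $burst;                // start with a full bucket
        $this->lastRefill = microtime(true);
    }

    // Block until a token is available, then consume it.
    public function acquire() {
        while (true) {
            $now = microtime(true);
            // Refill tokens based on elapsed time, capped at burst size
            $this->tokens = min(
                $this->burst,
                $this->tokens + ($now - $this->lastRefill) * $this->ratePerSecond
            );
            $this->lastRefill = $now;

            if ($this->tokens >= 1.0) {
                $this->tokens -= 1.0;
                return;
            }
            usleep(10000); // wait 10ms for tokens to refill
        }
    }
}

// Usage: at most ~5 requests/second, with bursts of up to 10
$bucket = new TokenBucket(5.0, 10.0);
// $bucket->acquire(); // call before each HTTP request
```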

// Example checkpointing system
function scrapeWithCheckpoints($urls, $checkpointFile = 'checkpoint.json') {
    $checkpoint = file_exists($checkpointFile) 
        ? json_decode(file_get_contents($checkpointFile), true) 
        : ['last_processed' => -1];

    $startIndex = $checkpoint['last_processed'] + 1;
    $total = count($urls);

    for ($i = $startIndex; $i < $total; $i++) {
        $result = scrapeUrl($urls[$i]);
        saveData($result);

        // Save checkpoint every 100 items
        if ($i % 100 === 0) {
            $checkpoint['last_processed'] = $i;
            file_put_contents($checkpointFile, json_encode($checkpoint));
        }

        // Respectful delay
        usleep(100000); // 100ms
    }

    // Clean up checkpoint file when complete
    unlink($checkpointFile);
}

Conclusion

Handling large datasets in PHP web scraping requires careful consideration of memory management, processing strategies, and system resources. By implementing streaming approaches, using database-driven strategies, optimizing memory usage, and employing concurrent processing techniques, you can successfully scrape and process massive amounts of data while maintaining system stability and performance.

For complex scraping scenarios that require JavaScript execution, consider integrating with headless browser solutions or using specialized APIs that can handle the heavy lifting while your PHP application focuses on data processing and storage.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl -G "https://api.webscraping.ai/ai/question" \
     --data-urlencode "url=https://example.com" \
     --data-urlencode "question=What is the main topic?" \
     --data-urlencode "api_key=YOUR_API_KEY"

Extract structured data:

curl -G "https://api.webscraping.ai/ai/fields" \
     --data-urlencode "url=https://example.com" \
     --data-urlencode "fields[title]=Page title" \
     --data-urlencode "fields[price]=Product price" \
     --data-urlencode "api_key=YOUR_API_KEY"
