How do I handle browser crashes and recovery with Selenium WebDriver?

Browser crashes are an inevitable challenge when working with Selenium WebDriver, especially during long-running automation tasks or resource-intensive operations. Implementing robust crash detection and recovery mechanisms is essential for maintaining reliable web scraping and testing workflows. This guide covers comprehensive strategies for handling browser crashes and implementing automatic recovery in Selenium WebDriver.

Understanding Browser Crash Scenarios

Browser crashes in Selenium can occur due to various reasons:

  • Memory exhaustion from long-running sessions
  • System resource limitations during intensive operations
  • Browser-specific bugs or compatibility issues
  • Network connectivity problems causing timeouts
  • JavaScript errors in complex web applications
  • WebDriver communication failures between client and browser

Basic Crash Detection and Recovery

Python Implementation

Here's a robust approach to detect and recover from browser crashes in Python:

from selenium import webdriver
from selenium.common.exceptions import WebDriverException, TimeoutException
import time
import logging

class RobustWebDriver:
    def __init__(self, browser_type="chrome", max_retries=3):
        self.browser_type = browser_type
        self.max_retries = max_retries
        self.driver = None
        self.current_url = None
        self.setup_logging()

    def setup_logging(self):
        logging.basicConfig(level=logging.INFO)
        self.logger = logging.getLogger(__name__)

    def create_driver(self):
        """Create a new WebDriver instance with stability-oriented options"""
        if self.browser_type == "chrome":
            options = webdriver.ChromeOptions()
            options.add_argument("--no-sandbox")
            options.add_argument("--disable-dev-shm-usage")
            options.add_argument("--disable-gpu")
            # Fixed DevTools port; use a unique port per instance when running in parallel
            options.add_argument("--remote-debugging-port=9222")
            return webdriver.Chrome(options=options)
        elif self.browser_type == "firefox":
            options = webdriver.FirefoxOptions()
            return webdriver.Firefox(options=options)
        raise ValueError(f"Unsupported browser type: {self.browser_type}")

    def is_browser_alive(self):
        """Check if browser is responsive"""
        try:
            # Simple check to see if browser responds
            self.driver.current_url
            return True
        except (WebDriverException, AttributeError):
            return False

    def recover_browser(self):
        """Attempt to recover from browser crash"""
        self.logger.warning("Browser crash detected, attempting recovery...")

        try:
            if self.driver:
                self.driver.quit()
        except Exception:
            pass  # Ignore errors during cleanup

        # Create new driver instance
        self.driver = self.create_driver()

        # Restore previous state if possible
        if self.current_url:
            try:
                self.driver.get(self.current_url)
                self.logger.info(f"Successfully recovered and navigated to {self.current_url}")
                return True
            except Exception as e:
                self.logger.error(f"Failed to restore URL: {e}")
                return False

        return True

    def execute_with_recovery(self, operation, *args, **kwargs):
        """Execute operation with automatic crash recovery"""
        for attempt in range(self.max_retries + 1):
            try:
                if not self.driver or not self.is_browser_alive():
                    if not self.recover_browser():
                        raise Exception("Failed to recover browser")

                # Store current URL for recovery
                try:
                    self.current_url = self.driver.current_url
                except WebDriverException:
                    pass

                # Execute the operation
                return operation(*args, **kwargs)

            except WebDriverException as e:
                self.logger.warning(f"Attempt {attempt + 1} failed: {e}")

                if attempt < self.max_retries:
                    time.sleep(2 ** attempt)  # Exponential backoff
                    continue
                else:
                    raise Exception(f"Operation failed after {self.max_retries + 1} attempts") from e

# Usage example
def scrape_with_recovery():
    robust_driver = RobustWebDriver("chrome", max_retries=3)

    def navigate_and_extract():
        robust_driver.driver.get("https://example.com")
        # Use driver.title; <title> lives in <head>, so element .text is empty
        return robust_driver.driver.title

    try:
        result = robust_driver.execute_with_recovery(navigate_and_extract)
        print(f"Successfully extracted: {result}")
    finally:
        if robust_driver.driver:
            robust_driver.driver.quit()
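The `2 ** attempt` term above doubles the wait between retries. For `max_retries=3`, the schedule is easy to verify directly:

```python
# Exponential backoff: wait 2 ** attempt seconds before retry attempt + 1.
max_retries = 3
delays = [2 ** attempt for attempt in range(max_retries)]
print(delays)  # [1, 2, 4] -> roughly 7 seconds of total waiting before giving up
```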

JavaScript/Node.js Implementation

const { Builder, By, until } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

class RobustWebDriver {
    constructor(browserType = 'chrome', maxRetries = 3) {
        this.browserType = browserType;
        this.maxRetries = maxRetries;
        this.driver = null;
        this.currentUrl = null;
    }

    async createDriver() {
        const options = new chrome.Options();
        options.addArguments('--no-sandbox');
        options.addArguments('--disable-dev-shm-usage');
        options.addArguments('--disable-gpu');

        return new Builder()
            .forBrowser(this.browserType)
            .setChromeOptions(options)
            .build();
    }

    async isBrowserAlive() {
        try {
            await this.driver.getCurrentUrl();
            return true;
        } catch (error) {
            return false;
        }
    }

    async recoverBrowser() {
        console.warn('Browser crash detected, attempting recovery...');

        try {
            if (this.driver) {
                await this.driver.quit();
            }
        } catch (error) {
            // Ignore cleanup errors
        }

        this.driver = await this.createDriver();

        if (this.currentUrl) {
            try {
                await this.driver.get(this.currentUrl);
                console.log(`Successfully recovered and navigated to ${this.currentUrl}`);
                return true;
            } catch (error) {
                console.error(`Failed to restore URL: ${error}`);
                return false;
            }
        }

        return true;
    }

    async executeWithRecovery(operation) {
        for (let attempt = 0; attempt <= this.maxRetries; attempt++) {
            try {
                if (!this.driver || !(await this.isBrowserAlive())) {
                    if (!(await this.recoverBrowser())) {
                        throw new Error('Failed to recover browser');
                    }
                }

                try {
                    this.currentUrl = await this.driver.getCurrentUrl();
                } catch (error) {
                    // Ignore URL storage errors
                }

                return await operation();

            } catch (error) {
                console.warn(`Attempt ${attempt + 1} failed:`, error.message);

                if (attempt < this.maxRetries) {
                    await new Promise(resolve => setTimeout(resolve, Math.pow(2, attempt) * 1000));
                    continue;
                } else {
                    throw new Error(`Operation failed after ${this.maxRetries + 1} attempts`);
                }
            }
        }
    }
}

// Usage example
async function scrapeWithRecovery() {
    const robustDriver = new RobustWebDriver('chrome', 3);

    const navigateAndExtract = async () => {
        await robustDriver.driver.get('https://example.com');
        // Use getTitle(); <title> is in <head>, so getText() on the element is empty
        return await robustDriver.driver.getTitle();
    };

    try {
        const result = await robustDriver.executeWithRecovery(navigateAndExtract);
        console.log(`Successfully extracted: ${result}`);
    } finally {
        if (robustDriver.driver) {
            await robustDriver.driver.quit();
        }
    }
}

Advanced Recovery Strategies

Health Check Monitoring

Implement periodic health checks to detect issues before they cause crashes:

import threading
import time
from selenium.common.exceptions import WebDriverException

class HealthMonitor:
    def __init__(self, driver, check_interval=30):
        self.driver = driver
        self.check_interval = check_interval
        self.is_healthy = True
        self.monitor_thread = None
        self.stop_monitoring = False

    def start_monitoring(self):
        """Start background health monitoring"""
        self.monitor_thread = threading.Thread(target=self._monitor_loop)
        self.monitor_thread.daemon = True
        self.monitor_thread.start()

    def stop(self):
        """Stop health monitoring"""
        self.stop_monitoring = True
        if self.monitor_thread:
            # The loop sleeps up to check_interval between flag checks, so bound the join
            self.monitor_thread.join(timeout=self.check_interval + 1)

    def _monitor_loop(self):
        """Background monitoring loop"""
        while not self.stop_monitoring:
            try:
                # Perform health check
                self.driver.current_url
                self.is_healthy = True
            except WebDriverException:
                self.is_healthy = False
                print("Health check failed - browser may be unresponsive")

            time.sleep(self.check_interval)

    def wait_for_healthy(self, timeout=60):
        """Wait for browser to become healthy"""
        start_time = time.time()
        while not self.is_healthy and (time.time() - start_time) < timeout:
            time.sleep(1)
        return self.is_healthy
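The health-check logic itself can be exercised without a live browser by substituting a stub whose `current_url` raises; `StubDriver` and the `RuntimeError` below are illustrative stand-ins for a real driver and `WebDriverException`:

```python
class StubDriver:
    """Illustrative stand-in for a WebDriver; `alive` toggles responsiveness."""
    def __init__(self):
        self.alive = True

    @property
    def current_url(self):
        if not self.alive:
            # Stands in for WebDriverException from a crashed browser
            raise RuntimeError("browser gone")
        return "https://example.com"

def check_health(driver):
    """Same probe the monitor loop uses: touch current_url, report success."""
    try:
        driver.current_url
        return True
    except Exception:
        return False

d = StubDriver()
print(check_health(d))  # True
d.alive = False
print(check_health(d))  # False
```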

Session State Management

Preserve and restore important session state during recovery:

class SessionManager:
    def __init__(self, driver):
        self.driver = driver
        self.cookies = []
        self.local_storage = None    # JSON string, set by save_session_state
        self.session_storage = None

    def save_session_state(self):
        """Save current session state"""
        try:
            self.cookies = self.driver.get_cookies()

            # Save local and session storage as JSON strings
            self.local_storage = self.driver.execute_script(
                "return JSON.stringify(localStorage);"
            )
            self.session_storage = self.driver.execute_script(
                "return JSON.stringify(sessionStorage);"
            )
        except Exception as e:
            print(f"Failed to save session state: {e}")

    def restore_session_state(self):
        """Restore saved session state.

        The driver must already be back on the original domain:
        add_cookie and the storage APIs are scoped to the current origin.
        """
        try:
            # Restore cookies
            for cookie in self.cookies:
                self.driver.add_cookie(cookie)

            # Restore local storage; pass the data as a script argument so
            # quotes in the stored values cannot break the injected script
            if self.local_storage:
                self.driver.execute_script(
                    "localStorage.clear();"
                    "Object.assign(localStorage, JSON.parse(arguments[0]));",
                    self.local_storage,
                )

            # Restore session storage
            if self.session_storage:
                self.driver.execute_script(
                    "sessionStorage.clear();"
                    "Object.assign(sessionStorage, JSON.parse(arguments[0]));",
                    self.session_storage,
                )

        except Exception as e:
            print(f"Failed to restore session state: {e}")
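Note that `execute_script` hands localStorage back as a JSON string, so restoring is a parse-then-merge. Modeled in plain Python (the keys are illustrative), the round trip looks like this:

```python
import json

# What save_session_state stores: localStorage serialized to a JSON string
saved = json.dumps({"theme": "dark", "cart_items": "3"})

# What the restore script effectively does on the browser side:
# clear the store, then merge the parsed snapshot back in
local_storage = {}
local_storage.update(json.loads(saved))

print(local_storage)  # {'theme': 'dark', 'cart_items': '3'}
```

Cookies are the one piece of state that cannot be restored blind: `add_cookie` only succeeds once the driver is back on a matching domain, so navigate first, then call `restore_session_state`.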

Browser-Specific Recovery Strategies

Chrome-Specific Recovery

Chrome browsers may require specific handling for memory issues:

def create_chrome_with_recovery():
    options = webdriver.ChromeOptions()

    # Memory optimization
    options.add_argument("--memory-pressure-off")
    # Raise the V8 heap limit; this is a V8 flag, so pass it via --js-flags
    options.add_argument("--js-flags=--max-old-space-size=4096")

    # Stability improvements
    options.add_argument("--disable-background-timer-throttling")
    options.add_argument("--disable-renderer-backgrounding")
    options.add_argument("--disable-backgrounding-occluded-windows")

    # Recovery-friendly settings
    options.add_argument("--disable-ipc-flooding-protection")
    options.add_experimental_option("useAutomationExtension", False)
    options.add_experimental_option("excludeSwitches", ["enable-automation"])

    return webdriver.Chrome(options=options)

Firefox-Specific Recovery

Firefox requires different optimization approaches:

def create_firefox_with_recovery():
    options = webdriver.FirefoxOptions()

    # Memory management
    options.set_preference("dom.ipc.processCount", 1)
    options.set_preference("browser.cache.disk.enable", False)
    options.set_preference("browser.cache.memory.enable", False)

    # Stability settings
    options.set_preference("dom.disable_beforeunload", True)
    options.set_preference("browser.tabs.remote.autostart", False)

    return webdriver.Firefox(options=options)

Implementing Circuit Breaker Pattern

For handling repeated failures, implement a circuit breaker pattern:

import time
from enum import Enum

class CircuitState(Enum):
    CLOSED = "closed"
    OPEN = "open"
    HALF_OPEN = "half_open"

class CircuitBreaker:
    def __init__(self, failure_threshold=5, recovery_timeout=60):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.failure_count = 0
        self.last_failure_time = None
        self.state = CircuitState.CLOSED

    def call(self, operation, *args, **kwargs):
        """Execute operation with circuit breaker protection"""

        if self.state == CircuitState.OPEN:
            if time.time() - self.last_failure_time > self.recovery_timeout:
                self.state = CircuitState.HALF_OPEN
            else:
                raise Exception("Circuit breaker is OPEN")

        try:
            result = operation(*args, **kwargs)

            if self.state == CircuitState.HALF_OPEN:
                self.state = CircuitState.CLOSED
                self.failure_count = 0

            return result

        except Exception:
            self.failure_count += 1
            self.last_failure_time = time.time()

            if self.failure_count >= self.failure_threshold:
                self.state = CircuitState.OPEN

            raise  # re-raise with the original traceback intact
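The state transitions are easiest to see with a deliberately failing operation. The breaker is restated in condensed form below so the sketch runs on its own, with the threshold and timeout shortened for the demo:

```python
import time
from enum import Enum

class CircuitState(Enum):
    CLOSED = "closed"
    OPEN = "open"
    HALF_OPEN = "half_open"

class CircuitBreaker:
    """Condensed restatement of the breaker above, for a self-contained demo."""
    def __init__(self, failure_threshold=3, recovery_timeout=0.1):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.failure_count = 0
        self.last_failure_time = None
        self.state = CircuitState.CLOSED

    def call(self, operation):
        if self.state == CircuitState.OPEN:
            if time.time() - self.last_failure_time > self.recovery_timeout:
                self.state = CircuitState.HALF_OPEN  # probe with one trial call
            else:
                raise RuntimeError("Circuit breaker is OPEN")
        try:
            result = operation()
            if self.state == CircuitState.HALF_OPEN:
                self.state = CircuitState.CLOSED
                self.failure_count = 0
            return result
        except Exception:
            self.failure_count += 1
            self.last_failure_time = time.time()
            if self.failure_count >= self.failure_threshold:
                self.state = CircuitState.OPEN
            raise

breaker = CircuitBreaker(failure_threshold=3, recovery_timeout=0.1)

def flaky():
    raise ConnectionError("browser unreachable")

# Three consecutive failures trip the breaker to OPEN
for _ in range(3):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
print(breaker.state)  # CircuitState.OPEN

# After the recovery timeout, one successful call closes it again
time.sleep(0.15)
print(breaker.call(lambda: "ok"))  # ok
print(breaker.state)  # CircuitState.CLOSED
```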

Best Practices and Prevention

Resource Management

Implement proper resource cleanup to prevent crashes:

import atexit
import signal
import sys

class ResourceManager:
    def __init__(self):
        self.drivers = []
        self.register_cleanup_handlers()

    def register_cleanup_handlers(self):
        """Register cleanup handlers for graceful shutdown"""
        atexit.register(self.cleanup_all)
        signal.signal(signal.SIGTERM, self._signal_handler)
        signal.signal(signal.SIGINT, self._signal_handler)

    def _signal_handler(self, signum, frame):
        """Handle shutdown signals"""
        self.cleanup_all()
        sys.exit(0)

    def add_driver(self, driver):
        """Add driver to managed resources"""
        self.drivers.append(driver)

    def cleanup_all(self):
        """Clean up all managed resources"""
        for driver in self.drivers:
            try:
                driver.quit()
            except Exception:
                pass
        self.drivers.clear()
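An alternative to process-wide handlers is tying cleanup to scope with a context manager. The `managed_driver` helper below is an illustrative sketch, not part of Selenium's API; any zero-argument factory returning a WebDriver-like object works (shown here with a stub so the snippet runs without a browser):

```python
from contextlib import contextmanager

@contextmanager
def managed_driver(factory):
    """Yield a driver built by `factory` and guarantee quit() on exit."""
    driver = factory()
    try:
        yield driver
    finally:
        try:
            driver.quit()
        except Exception:
            pass  # driver may already be dead; nothing more to clean up

class StubDriver:
    """Stand-in for a WebDriver so the sketch is self-contained."""
    def __init__(self):
        self.closed = False
    def quit(self):
        self.closed = True

d_ref = None
with managed_driver(StubDriver) as d:
    d_ref = d           # ... do work with the driver ...
print(d_ref.closed)     # True -> quit() ran even without atexit/signal hooks
```

With a real browser you would pass `create_driver` (or `lambda: webdriver.Chrome(options=...)`) as the factory.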

Performance Monitoring

Monitor performance metrics to predict potential crashes:

# Monitor Chrome process memory usage (%MEM and command; [c]hrome avoids matching grep itself)
ps aux | grep '[c]hrome' | awk '{print $4, $11}' | sort -nr

# Monitor system resources (pgrep -d, joins multiple PIDs into top's comma-separated list)
top -p "$(pgrep -d, chrome)" -d 1

# Check available memory
free -m

# Monitor disk usage
df -h /tmp
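These shell checks can also be folded into the automation process itself. A minimal stdlib sketch (Unix-only; the threshold is illustrative) that flags when the process is nearing a memory budget, so the driver can be recycled before it crashes:

```python
import resource
import sys

MEMORY_LIMIT_MB = 2048  # illustrative budget; tune to your environment

def peak_rss_mb():
    """Peak resident set size of this process in MB (Unix only)."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is kilobytes on Linux but bytes on macOS
    divisor = 1024 if sys.platform.startswith("linux") else 1024 * 1024
    return rss / divisor

def should_restart_browser():
    """True when the process is close enough to the budget to recycle the driver."""
    return peak_rss_mb() > MEMORY_LIMIT_MB

print(f"peak RSS: {peak_rss_mb():.1f} MB, restart advised: {should_restart_browser()}")
```

In practice you would call `should_restart_browser()` between tasks and trigger a proactive `recover_browser()` rather than waiting for a crash.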

When implementing browser automation that requires high reliability, similar robust error handling patterns are used across different tools. For comprehensive automation workflows, understanding error handling strategies in browser automation can provide additional insights into building resilient scraping systems.

Conclusion

Handling browser crashes and implementing recovery mechanisms in Selenium WebDriver requires a multi-layered approach combining proactive monitoring, robust error handling, and automatic recovery strategies. The examples provided demonstrate how to build resilient automation systems that can handle unexpected failures gracefully.

Key strategies include implementing health checks, using retry logic with exponential backoff, preserving session state, and applying circuit breaker patterns for repeated failures. For large-scale operations, consider using distributed architectures and implementing comprehensive monitoring to detect and respond to issues quickly.

By following these practices and adapting them to your specific use case, you can build reliable web scraping and automation systems that maintain high availability even in the face of browser instability.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
