Is there a way to throttle the network in Playwright?

Yes, Playwright provides several methods to throttle network requests and simulate various network conditions. This is essential for testing how your application performs under slow or unreliable network connections.

Using Route Interception with Delays

The most common approach is intercepting network requests and adding delays:

const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();
  const context = await browser.newContext();

  // Throttle all network requests with a 2-second delay
  await context.route('**', async (route, request) => {
    // Add delay before continuing the request
    await new Promise(resolve => setTimeout(resolve, 2000));
    await route.continue();
  });

  const page = await context.newPage();
  await page.goto('https://example.com');

  await browser.close();
})();

Selective Network Throttling

You can throttle specific types of requests or URLs:

// Only throttle API requests
await context.route('**/api/**', async (route, request) => {
  await new Promise(resolve => setTimeout(resolve, 1000));
  await route.continue();
});

// Throttle images and stylesheets
await context.route('**/*.{png,jpg,jpeg,gif,css}', async (route, request) => {
  await new Promise(resolve => setTimeout(resolve, 500));
  await route.continue();
});

Simulating Offline/Network Failures

Test network resilience by simulating failures:

// Simulate intermittent network failures
await context.route('**', async (route, request) => {
  // 20% chance of network failure
  if (Math.random() < 0.2) {
    await route.abort('failed');
  } else {
    await route.continue();
  }
});
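Because `Math.random()` makes failures non-reproducible, flaky-looking test runs can be hard to debug. A deterministic variant is often easier to work with; the sketch below fails every Nth request instead (`makeFailurePolicy` and `failEvery` are illustrative names, not Playwright APIs):

```javascript
// Deterministic failure policy: abort every Nth request so test runs
// are reproducible. Returns a closure that tracks the request count.
function makeFailurePolicy(failEvery) {
  let count = 0;
  return function shouldFail() {
    count += 1;
    return count % failEvery === 0;
  };
}

// Wiring it into the same route-handler shape used above:
// const shouldFail = makeFailurePolicy(5); // every 5th request fails
// await context.route('**', async (route) => {
//   if (shouldFail()) await route.abort('failed');
//   else await route.continue();
// });
```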

// Go completely offline
await context.setOffline(true);
await page.goto('https://example.com'); // This will fail

// Go back online
await context.setOffline(false);

Python Example

import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        context = await browser.new_context()

        # Throttle network requests
        async def handle_route(route, request):
            await asyncio.sleep(1)  # 1-second delay
            await route.continue_()

        await context.route("**", handle_route)

        page = await context.new_page()
        await page.goto("https://example.com")

        await browser.close()

asyncio.run(main())

Advanced Network Conditions

For more sophisticated network simulation:

// Variable delay based on request size
await context.route('**', async (route, request) => {
  const url = request.url();
  let delay = 100; // Base delay

  // Longer delays for larger resources
  if (url.includes('.js') || url.includes('.css')) {
    delay = 500;
  } else if (url.includes('.jpg') || url.includes('.png')) {
    delay = 1000;
  }

  await new Promise(resolve => setTimeout(resolve, delay));
  await route.continue();
});
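The delay logic above can be factored into a pure helper, which lets you unit-test the throttling policy without launching a browser (`delayFor` is an illustrative name):

```javascript
// Pure helper computing the throttle delay (in ms) for a URL,
// mirroring the branching in the route handler above.
function delayFor(url) {
  if (url.includes('.js') || url.includes('.css')) return 500;   // scripts/styles
  if (url.includes('.jpg') || url.includes('.png')) return 1000; // images
  return 100; // base delay for everything else
}

// Usage inside the route handler:
// await context.route('**', async (route, request) => {
//   await new Promise(resolve => setTimeout(resolve, delayFor(request.url())));
//   await route.continue();
// });
```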

Limitations and Alternatives

Important considerations:

  • Route-based throttling adds per-request delays inside the browser automation layer; it does not limit real bandwidth or simulate packet loss
  • For more accurate network testing, consider:
    • Chrome DevTools Protocol for bandwidth throttling
    • Network conditioning tools at the OS level
    • Dedicated network shaping tools like tc (Linux) or dnctl/dummynet (macOS)

Using the Chrome DevTools Protocol (Chromium-based browsers only):

const client = await context.newCDPSession(page);
await client.send('Network.emulateNetworkConditions', {
  offline: false,
  downloadThroughput: 50 * 1024, // 50 KB/s
  uploadThroughput: 20 * 1024,   // 20 KB/s
  latency: 500 // 500ms latency
});

This approach provides more realistic network simulation by actually limiting bandwidth rather than just adding delays.
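If you throttle to the same conditions in many tests, it can help to collect them into named presets. The numbers below are illustrative approximations of Chrome DevTools' network profiles, not official values:

```javascript
// Named network-condition presets for Network.emulateNetworkConditions.
// Throughput is in bytes/second, latency in milliseconds.
const NETWORK_PRESETS = {
  slow3G: { offline: false, downloadThroughput: 50 * 1024, uploadThroughput: 25 * 1024, latency: 400 },
  fast3G: { offline: false, downloadThroughput: 188 * 1024, uploadThroughput: 86 * 1024, latency: 150 },
};

// Applying a preset (Chromium only):
// const client = await context.newCDPSession(page);
// await client.send('Network.emulateNetworkConditions', NETWORK_PRESETS.slow3G);
```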

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"


