How can I handle dynamic content loading while scraping Trustpilot?

Handling dynamic content loading while scraping Trustpilot requires techniques that can execute JavaScript and wait for content to render before scraping. Trustpilot, like many modern websites, relies on JavaScript to load content dynamically, often in response to user actions or as the user scrolls down the page.

Here are the steps to handle dynamic content loading:

Step 1: Analyze Network Traffic

First, analyze the network traffic of the Trustpilot page you want to scrape using browser developer tools. This will help you understand how the data is loaded and whether there are any API endpoints you can directly call to obtain the data, which is usually in JSON format.
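If you do find a JSON endpoint in the Network tab, calling it directly is far cheaper than driving a browser. The sketch below is a minimal illustration of that approach; the endpoint URL, parameters, and response shape shown here are all hypothetical placeholders — inspect the actual XHR/Fetch requests for your target page to find the real ones.

```python
import json

# Hypothetical fetch step -- the real endpoint and parameters will differ:
# import requests
# resp = requests.get("https://www.trustpilot.com/api/...", params={"page": 1})
# data = resp.json()

def extract_ratings(payload):
    """Pull star ratings from a JSON payload, assuming a (hypothetical) shape
    like {'reviews': [{'rating': int, 'text': str}, ...]}."""
    return [review["rating"] for review in payload.get("reviews", [])]

# Demonstrate the parsing step on a sample payload:
sample = json.loads('{"reviews": [{"rating": 5, "text": "Great"}, {"rating": 2, "text": "Meh"}]}')
print(extract_ratings(sample))  # [5, 2]
```

Parsing JSON directly also sidesteps most of the waiting-and-scrolling logic described in the later steps.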

Step 2: Choose the Right Tool

For dynamic content, traditional HTTP request libraries (like requests in Python) might not be enough. You'll need tools that can execute JavaScript and simulate browser behavior.

In Python, Selenium or Playwright are good choices. In JavaScript (Node.js), Puppeteer or Playwright are commonly used for this purpose.

Python Example with Selenium:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Set up the Selenium WebDriver (Selenium 4 manages the driver binary itself)
driver = webdriver.Chrome()

try:
    # Replace with the Trustpilot page you want to scrape
    driver.get('https://www.trustpilot.com/review/example.com')

    # Wait for the dynamic content to load
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CLASS_NAME, 'review-content'))
    )

    # Now you can parse the page using driver.page_source with BeautifulSoup or similar
    # ...
finally:
    driver.quit()


JavaScript Example with Puppeteer:

const puppeteer = require('puppeteer');

async function scrapeTrustpilot() {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    // Replace with the Trustpilot page you want to scrape
    await page.goto('https://www.trustpilot.com/review/example.com', {
        waitUntil: 'networkidle2' // waits for the network to be idle (no requests for 500ms)
    });

    // You can wait for selectors that indicate dynamic content has loaded
    await page.waitForSelector('.review-content');

    // Now you can evaluate page content using page.content() or page.$eval()
    // ...

    await browser.close();
}

scrapeTrustpilot();

Step 3: Scroll to Trigger Loading

Some pages may require scrolling to load more content. Both Selenium and Puppeteer provide ways to execute JavaScript on the page, which you can use to scroll.

# Python (Selenium) example for scrolling
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

// JavaScript (Puppeteer) example for scrolling
await page.evaluate(() => {
    window.scrollTo(0, document.body.scrollHeight);
});
Step 4: Be Respectful and Legal

Always make sure that your scraping activities comply with Trustpilot's terms of service and privacy policies. Also, be respectful and don't overload their servers with too many requests in a short period.

Step 5: Handle Pagination and Rate Limiting

If you need to scrape multiple pages, you'll have to handle pagination. Some pages might use "Load More" buttons or infinite scrolling, which you'll need to automate.
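For URL-based pagination, a small helper that builds the page URLs up front keeps the crawl loop simple. This sketch assumes a `?page=N` query-string scheme, which Trustpilot listing pages commonly use — confirm the actual parameter in your browser's address bar before relying on it.

```python
from urllib.parse import urlencode

def page_urls(base, n_pages):
    """Build a list of paginated URLs, assuming a ?page=N query scheme."""
    urls = [base]  # page 1 is usually the bare URL
    for page in range(2, n_pages + 1):
        urls.append(f"{base}?{urlencode({'page': page})}")
    return urls

# 'example.com' is a placeholder business; substitute your target.
print(page_urls("https://www.trustpilot.com/review/example.com", 3))
```

For "Load More" buttons or infinite scrolling, there are no page URLs to build; instead you automate the click or scroll and re-read the page, as in the earlier examples.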

Additionally, be aware that Trustpilot may have rate limiting in place. If you make too many requests too quickly, your IP address might be temporarily blocked. Implement delays or use proxies if necessary.
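A simple way to implement those delays is to sleep between requests with a little random jitter, so your traffic does not arrive in a machine-regular pattern. A minimal sketch (the default delay values are arbitrary, not Trustpilot-specific):

```python
import random
import time

def polite_sleep(base=2.0, jitter=1.0):
    """Sleep for `base` plus up to `jitter` extra seconds between requests.
    Returns the delay actually used (handy for logging)."""
    delay = base + random.uniform(0, jitter)
    time.sleep(delay)
    return delay

# Usage between page fetches:
# for url in urls:
#     fetch(url)        # your request / page.goto call
#     polite_sleep()    # back off before the next one
```

If you still hit rate limits, increase the base delay or distribute requests across proxies.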


Scraping dynamic content from Trustpilot requires browser automation tools such as Selenium, Playwright, or Puppeteer. You need to wait for content to load, possibly scroll the page to trigger more loading, and always respect the website's terms and rate limits. Remember that web scraping can be a legal gray area, and the website's terms of service should be your guide to what is permissible.
