How do I handle pagination when scraping multiple pages on Zoominfo?

When scraping data from multiple pages on a website like Zoominfo, handling pagination is a crucial aspect. Pagination refers to the method by which a website divides content across a series of pages. To scrape multiple pages, you need to follow the pagination patterns used by the site.
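As a quick illustration (the URLs below are invented placeholders, not Zoominfo's real structure), the most common pagination patterns look like this:

```python
# Common pagination URL patterns (illustrative placeholders only; the
# real structure depends on the site you are scraping):
patterns = [
    "https://example.com/companies?page={n}",             # query parameter
    "https://example.com/companies/page/{n}",             # path segment
    "https://example.com/companies?offset={o}&limit=25",  # offset/limit
]

# Generating the first three URLs for the query-parameter style:
urls = [patterns[0].format(n=n) for n in range(1, 4)]
print(urls)
```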

Please note that web scraping may violate the terms of service of some websites. Zoominfo, for instance, may have strict policies and protections in place to prevent scraping, including legal restrictions. It's important to review Zoominfo's terms of service and privacy policy before attempting to scrape data, and you should consider using their API if one is available and it suits your needs.
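One quick programmatic check (a courtesy signal only, not a substitute for reading the terms of service) is the site's robots.txt file, which Python can parse with the standard library. The rules and URLs below are invented for illustration:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# In real use you would fetch the live file:
#   rp.set_url("https://example.com/robots.txt"); rp.read()
# Here we parse an inline example instead:
rp.parse("""\
User-agent: *
Disallow: /private/
""".splitlines())

blocked = rp.can_fetch("MyBot", "https://example.com/private/page")
allowed = rp.can_fetch("MyBot", "https://example.com/public/page")
print(blocked, allowed)  # False True
```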

If you have verified that scraping is permissible, here's a general approach to handle pagination:

Python Example with Beautiful Soup and Requests

First, install the necessary packages if you haven't already:

pip install requests beautifulsoup4

Here is a Python example using requests and BeautifulSoup to handle pagination:

import requests
from bs4 import BeautifulSoup

# Base URL of the site you want to scrape (replace with actual URL structure)
base_url = "https://example.com/{company}/{page_number}"

# Start session
with requests.Session() as session:
    # Set up headers
    headers = {
        'User-Agent': 'Your User-Agent',
    }

    page_number = 1
    while True:
        # Update URL with the next page number
        url = base_url.format(company='example-company', page_number=page_number)

        # Get the page
        response = session.get(url, headers=headers)
        if response.status_code != 200:
            break  # If the page isn't found, exit the loop

        # Parse the content with BeautifulSoup
        soup = BeautifulSoup(response.content, 'html.parser')

        # Your code to parse the page's data goes here
        # ...

        # Logic to find the 'Next' button/link or to determine if it's the last page
        # This can vary depending on the website's structure
        next_button = soup.find('a', string='Next')  # Example placeholder; adjust to the site's markup
        if not next_button or 'disabled' in next_button.get('class', []):
            break  # If there's no 'Next' button or it's disabled, stop scraping

        page_number += 1  # Increment the page number before continuing

# At this point, all pages have been scraped
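One detail the loop above glosses over: a 'Next' link scraped from the page is often relative (e.g. href="?page=2"), so you should resolve it against the current page's URL before requesting it. A minimal sketch using the standard library's urllib.parse (URLs are placeholders):

```python
from urllib.parse import urljoin

current_url = "https://example.com/companies?page=1"
next_href = "/companies?page=2"  # assumed value scraped from a 'Next' anchor

# urljoin resolves the relative href against the current page's URL
next_url = urljoin(current_url, next_href)
print(next_url)  # https://example.com/companies?page=2
```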

JavaScript Example with Puppeteer

For JavaScript, you could use Puppeteer, a Node.js library that provides a high-level API to control headless Chrome. First, install Puppeteer:

npm install puppeteer

Here's how you might handle pagination with Puppeteer:

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.setUserAgent('Your User-Agent');

    let page_number = 1;
    let hasNextPage = true;

    while (hasNextPage) {
        const url = `https://example.com/companies/example-company/${page_number}`; // Replace with the site's actual URL structure
        await page.goto(url, { waitUntil: 'networkidle2' });

        // Your logic to extract data goes here
        // ...

        // Logic to find the 'Next' button/link or determine if it's the last page
        // This can vary depending on the website's structure
        const nextButton = await page.$('a.next'); // Example placeholder selector
        if (nextButton) {
            page_number += 1; // Advance to the next page
        } else {
            hasNextPage = false; // No 'Next' button: stop paginating
        }
    }

    await browser.close();
})();

Remember, these examples are just templates and won't work without modifications specific to Zoominfo's pagination structure. You'll need to inspect the HTML and JavaScript used by Zoominfo to determine the actual selectors and logic required to navigate between pages.

Also, websites like Zoominfo may employ anti-scraping techniques, such as CAPTCHA, rate limiting, or requiring authentication. Bypassing such protections may be against the website's terms of service, and you should proceed with caution and respect the legality and ethical considerations of web scraping.
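To stay polite in the face of rate limiting, a common pattern is to pause between requests and back off exponentially on HTTP 429 ("Too Many Requests") responses. A minimal sketch; `fetch` here is a stand-in you would replace with your real request function, not a Zoominfo API:

```python
import random
import time

def backoff_delays(base=1.0, retries=4):
    """Delays of base * 2**attempt seconds, with a little jitter."""
    return [base * (2 ** attempt) + random.uniform(0, 0.1)
            for attempt in range(retries)]

def fetch_with_backoff(fetch, url, base=1.0, retries=4):
    for delay in backoff_delays(base, retries):
        status, body = fetch(url)
        if status != 429:          # success or a non-rate-limit error
            return status, body
        time.sleep(delay)          # back off before retrying
    return fetch(url)              # final attempt

# Example with a fake fetcher that rate-limits the first two calls:
calls = {"n": 0}
def fake_fetch(url):
    calls["n"] += 1
    return (429, "") if calls["n"] <= 2 else (200, "ok")

result = fetch_with_backoff(fake_fetch, "https://example.com", base=0.01)
print(result)  # (200, 'ok')
```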
