How do I handle infinite scroll pages in JavaScript web scraping?

Infinite scroll pages present a unique challenge for web scrapers because content loads dynamically as users scroll down. Unlike traditional pagination, these pages require special handling to trigger content loading and detect when all data has been retrieved. This guide covers proven techniques for scraping infinite scroll pages using JavaScript tools like Puppeteer, Playwright, and vanilla JavaScript approaches.

Understanding Infinite Scroll Mechanisms

Before diving into solutions, it's important to understand how infinite scroll works. Most infinite scroll implementations use one of these triggers:

  1. Scroll-based detection: Content loads when the user scrolls near the bottom
  2. Intersection Observer API: Triggers when a sentinel element becomes visible
  3. Manual triggers: "Load More" buttons that appear after scrolling
  4. Time-based loading: Content loads after a specific interval
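Of these, scroll-based detection is the simplest: the page compares the scroll position against the document height and fetches more data when the gap falls below a threshold. A minimal sketch of that predicate (the function name and threshold are illustrative, not from any library):

```javascript
// Returns true when the viewport bottom is within `threshold` pixels of the
// page bottom -- the condition most scroll-based loaders check before fetching.
function nearBottom(scrollY, innerHeight, scrollHeight, threshold = 200) {
  return scrollY + innerHeight >= scrollHeight - threshold;
}
```

Knowing which trigger a page uses tells you what your scraper must simulate: scrolling, making a sentinel visible, clicking a button, or simply waiting.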

Method 1: Auto-scrolling with Puppeteer

Puppeteer is one of the most effective tools for handling infinite scroll pages. Here's a comprehensive approach:

const puppeteer = require('puppeteer');

async function scrapeInfiniteScroll(url, maxItems = 100) {
  const browser = await puppeteer.launch({ headless: false }); // headful for debugging; set true in production
  const page = await browser.newPage();

  await page.goto(url, { waitUntil: 'networkidle2' });

  let items = [];
  let previousItemCount = 0;
  let scrollAttempts = 0;
  const maxScrollAttempts = 10;

  while (items.length < maxItems && scrollAttempts < maxScrollAttempts) {
    // Extract current items
    const currentItems = await page.evaluate(() => {
      const elements = document.querySelectorAll('.item-selector');
      return Array.from(elements).map(el => ({
        title: el.querySelector('.title')?.textContent?.trim(),
        description: el.querySelector('.description')?.textContent?.trim(),
        url: el.querySelector('a')?.href
      }));
    });

    items = currentItems;

    // Check if new content loaded
    if (items.length === previousItemCount) {
      scrollAttempts++;
    } else {
      scrollAttempts = 0;
      previousItemCount = items.length;
    }

    // Scroll to bottom and wait for content
    await page.evaluate(() => {
      window.scrollTo(0, document.body.scrollHeight);
    });

    // Wait for new content to load (page.waitForTimeout was removed in
    // recent Puppeteer versions, so use a plain sleep)
    await new Promise(resolve => setTimeout(resolve, 2000));

    // Optional: wait for network activity to settle (waitForLoadState is
    // Playwright-only; Puppeteer's equivalent is waitForNetworkIdle)
    await page.waitForNetworkIdle({ timeout: 5000 }).catch(() => {});
  }

  await browser.close();
  return items.slice(0, maxItems);
}

// Usage
scrapeInfiniteScroll('https://example.com/infinite-scroll')
  .then(data => console.log(`Scraped ${data.length} items`))
  .catch(console.error);
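The stop condition in the loop above (several consecutive scrolls with no new items) can be factored into a small reusable helper. This is a sketch with illustrative names, not a Puppeteer API:

```javascript
// Tracks whether consecutive extractions are still producing new items,
// mirroring the scrollAttempts logic in the loop above.
function createStabilityTracker(maxStableAttempts = 10) {
  let previousCount = 0;
  let stableAttempts = 0;
  return {
    // Call after each extraction; returns true while scrolling should continue.
    update(currentCount) {
      if (currentCount === previousCount) {
        stableAttempts++;
      } else {
        stableAttempts = 0;
        previousCount = currentCount;
      }
      return stableAttempts < maxStableAttempts;
    }
  };
}
```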

Method 2: Intersection Observer Detection

Some infinite scroll pages rely on the Intersection Observer API. Here's how to handle them:

async function scrapeWithIntersectionObserver(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto(url, { waitUntil: 'networkidle2' });

  // Monitor network requests to detect loading
  const responses = [];
  page.on('response', response => {
    if (response.url().includes('/api/') || response.url().includes('/load-more')) {
      responses.push(response.url());
    }
  });

  let allItems = [];
  let hasMoreContent = true;

  while (hasMoreContent) {
    // Scroll to trigger intersection observer
    await page.evaluate(() => {
      const sentinel = document.querySelector('.loading-sentinel') || 
                      document.querySelector('[data-testid="sentinel"]');
      if (sentinel) {
        sentinel.scrollIntoView();
      } else {
        window.scrollTo(0, document.body.scrollHeight);
      }
    });

    // Wait for the rendered item count to grow (the Node-side `responses`
    // array is not visible inside waitForFunction, which runs in the page)
    const initialItemCount = allItems.length;
    await page.waitForFunction(
      (count) => document.querySelectorAll('.item').length > count,
      { timeout: 10000 },
      initialItemCount
    ).catch(() => {}); // a timeout here simply means no new content arrived

    // Extract new items
    const items = await page.evaluate(() => {
      return Array.from(document.querySelectorAll('.item')).map(item => ({
        id: item.dataset.id,
        content: item.textContent.trim()
      }));
    });

    // Check if we got new content
    if (items.length <= allItems.length) {
      hasMoreContent = false;
    } else {
      allItems = items;
    }

    await new Promise(resolve => setTimeout(resolve, 1000));
  }

  await browser.close();
  return allItems;
}
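Because each pass re-extracts every rendered item, deduplicating by a stable key such as `id` keeps the accumulated list clean even if the site recycles DOM nodes. A sketch of that merge step (the field name `id` is an assumption about your data):

```javascript
// Merges a fresh full extraction into the accumulated list, skipping items
// whose `id` has already been seen.
function mergeById(existing, fresh) {
  const seen = new Set(existing.map(item => item.id));
  return existing.concat(fresh.filter(item => !seen.has(item.id)));
}
```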

Method 3: Handling "Load More" Buttons

When infinite scroll includes manual triggers, you need to click buttons:

async function scrapeWithLoadMoreButton(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto(url);

  let allData = [];

  while (true) {
    // Wait for content to load
    await page.waitForSelector('.content-item', { timeout: 5000 });

    // Extract current page data
    const pageData = await page.evaluate(() => {
      return Array.from(document.querySelectorAll('.content-item')).map(item => ({
        title: item.querySelector('h3')?.textContent,
        description: item.querySelector('p')?.textContent,
        image: item.querySelector('img')?.src
      }));
    });

    allData = pageData;

    // Look for load more button
    const loadMoreButton = await page.$('.load-more-btn, [data-testid="load-more"]');

    if (!loadMoreButton) {
      console.log('No more content to load');
      break;
    }

    // Check if button is visible and enabled
    const isVisible = await page.evaluate(btn => {
      const rect = btn.getBoundingClientRect();
      return rect.width > 0 && rect.height > 0 && !btn.disabled;
    }, loadMoreButton);

    if (!isVisible) {
      break;
    }

    // Scroll button into view and click (ElementHandle has no scrollIntoView
    // method, so run it inside the page context)
    await loadMoreButton.evaluate(btn => btn.scrollIntoView());
    await loadMoreButton.click();

    // Wait for new content
    await new Promise(resolve => setTimeout(resolve, 2000));
  }

  await browser.close();
  return allData;
}
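Rather than sleeping a fixed 2 seconds after each click, you can poll for an arbitrary condition, such as the item count increasing. A generic polling helper, sketched here as plain Node code rather than a Puppeteer API:

```javascript
// Resolves true as soon as `predicate` returns truthy, or false once
// `timeout` milliseconds have elapsed.
async function waitUntil(predicate, { timeout = 5000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (await predicate()) return true;
    await new Promise(resolve => setTimeout(resolve, interval));
  }
  return false;
}
```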

Method 4: Using Playwright for Better Performance

Playwright offers enhanced performance and reliability for infinite scroll scraping:

const { chromium } = require('playwright');

async function scrapeWithPlaywright(url) {
  const browser = await chromium.launch();
  const context = await browser.newContext();
  const page = await context.newPage();

  // Monitor network activity (decrement on failures too, or the counter
  // never drains)
  let pendingRequests = 0;
  page.on('request', () => pendingRequests++);
  page.on('response', () => pendingRequests = Math.max(0, pendingRequests - 1));
  page.on('requestfailed', () => pendingRequests = Math.max(0, pendingRequests - 1));

  await page.goto(url);

  let items = [];
  let stableCount = 0;
  const maxStableAttempts = 3;

  while (stableCount < maxStableAttempts) {
    const previousCount = items.length;

    // Scroll to bottom
    await page.evaluate(() => {
      window.scrollTo(0, document.body.scrollHeight);
    });

    // Wait until the Node-side request counter drains (window.pendingRequests
    // does not exist inside the page, so poll the counter here instead)
    const networkDeadline = Date.now() + 10000;
    while (pendingRequests > 0 && Date.now() < networkDeadline) {
      await new Promise(resolve => setTimeout(resolve, 100));
    }

    // Wait a bit more for rendering
    await page.waitForTimeout(1500);

    // Extract all items
    items = await page.locator('.item-selector').evaluateAll(elements => {
      return elements.map(el => ({
        text: el.textContent?.trim(),
        href: el.querySelector('a')?.href,
        timestamp: Date.now()
      }));
    });

    // Check if content is stable
    if (items.length === previousCount) {
      stableCount++;
    } else {
      stableCount = 0;
    }

    console.log(`Found ${items.length} items, stability: ${stableCount}`);
  }

  await browser.close();
  return items;
}
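The request/response counter used above is a common Node-side pattern; isolating it makes it easy to guard against stray responses driving the count negative. An illustrative sketch:

```javascript
// In-flight request counter fed by Playwright's request/response events.
function createRequestTracker() {
  let pending = 0;
  return {
    onRequest() { pending++; },
    onResponse() { pending = Math.max(0, pending - 1); }, // never go negative
    isIdle() { return pending === 0; }
  };
}
```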

Advanced Techniques and Best Practices

1. Handling Rate Limits and Performance

async function scrapeWithRateLimit(url, delay = 1000) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Set custom user agent and viewport
  await page.setUserAgent('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36');
  await page.setViewport({ width: 1366, height: 768 });

  await page.goto(url);

  let allItems = [];
  let scrollCount = 0;

  while (scrollCount < 50) { // Limit scrolls to prevent infinite loops
    await page.evaluate(() => {
      window.scrollBy(0, window.innerHeight);
    });

    // Progressive delay to avoid overwhelming the server
    const currentDelay = delay + (scrollCount * 100);
    await new Promise(resolve => setTimeout(resolve, currentDelay));

    scrollCount++;

    // Extract items periodically
    if (scrollCount % 5 === 0) {
      const items = await page.$$eval('.item', elements => 
        elements.map(el => el.textContent)
      );
      allItems = items;
      console.log(`Progress: ${items.length} items after ${scrollCount} scrolls`);
    }
  }

  await browser.close();
  return allItems;
}
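The progressive delay above grows without bound on long sessions; capping it keeps total runtime predictable. A small sketch (the numbers are illustrative defaults):

```javascript
// Linear backoff with a ceiling: base + step per scroll, capped at `cap` ms.
function progressiveDelay(scrollCount, base = 1000, step = 100, cap = 5000) {
  return Math.min(base + scrollCount * step, cap);
}
```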

2. Error Handling and Retry Logic

async function robustInfiniteScroll(url, maxRetries = 3) {
  let attempt = 0;

  while (attempt < maxRetries) {
    let browser;

    try {
      browser = await puppeteer.launch({
        headless: true,
        args: ['--no-sandbox', '--disable-setuid-sandbox']
      });

      const page = await browser.newPage();

      // Set up error monitoring
      page.on('error', err => console.log('Page error:', err.message));
      page.on('pageerror', err => console.log('Page script error:', err.message));

      await page.goto(url, {
        waitUntil: 'domcontentloaded',
        timeout: 30000
      });

      // scrapeContent is your extraction routine (e.g. one of the loops above)
      return await scrapeContent(page);

    } catch (error) {
      attempt++;
      console.log(`Attempt ${attempt} failed:`, error.message);

      if (attempt === maxRetries) {
        throw new Error(`Failed after ${maxRetries} attempts: ${error.message}`);
      }

      // Wait before retry
      await new Promise(resolve => setTimeout(resolve, 2000 * attempt));
    } finally {
      // Always release the browser, even when scraping throws
      if (browser) await browser.close();
    }
  }
}
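The same retry-with-backoff pattern generalizes to any async task; extracting it keeps the scraping logic and the retry policy separate. A sketch where `task` stands in for your scraping function:

```javascript
// Runs `task` up to `maxRetries` times, waiting progressively longer
// between attempts, and rethrows once every attempt has failed.
async function withRetries(task, maxRetries = 3, backoffMs = 2000) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await task();
    } catch (error) {
      console.log(`Attempt ${attempt} failed:`, error.message);
      if (attempt === maxRetries) {
        throw new Error(`Failed after ${maxRetries} attempts: ${error.message}`);
      }
      await new Promise(resolve => setTimeout(resolve, backoffMs * attempt));
    }
  }
}
```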

Integration with WebScraping.AI

For production applications, consider using WebScraping.AI's JavaScript execution capabilities to handle infinite scroll pages without managing browser instances:

const WebScrapingAI = require('webscraping-ai');

const client = new WebScrapingAI('your-api-key');

async function scrapeInfiniteScrollWithAPI(url) {
  const response = await client.getHTML(url, {
    js: true,
    js_timeout: 10000,
    js_script: `
      // Custom script to handle infinite scroll
      let scrollCount = 0;
      const maxScrolls = 10;

      async function scrollAndWait() {
        while (scrollCount < maxScrolls) {
          window.scrollTo(0, document.body.scrollHeight);
          await new Promise(resolve => setTimeout(resolve, 2000));
          scrollCount++;
        }
      }

      await scrollAndWait();
      return document.documentElement.outerHTML;
    `
  });

  return response.html;
}

Conclusion

Handling infinite scroll pages requires a combination of proper scrolling techniques, content detection, and robust error handling. When implementing these solutions, always:

  1. Monitor network activity to detect when loading completes
  2. Implement safeguards against infinite loops
  3. Use appropriate delays to avoid overwhelming servers
  4. Handle errors gracefully with retry logic
  5. Consider using waitFor functions for better timing control

By following these patterns and adapting them to your specific use case, you can effectively scrape data from infinite scroll pages while maintaining good performance and reliability.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"

