How do I handle dynamic content that loads after page load in Colly?

Colly is a powerful web scraping framework for Go, but it has a fundamental limitation with dynamic content: unlike browser-based tools, it operates as a plain HTTP client and does not execute JavaScript, so it cannot see anything that scripts load after the initial page load. However, there are several strategies you can employ to handle dynamic content effectively.

Understanding the Challenge

Dynamic content refers to elements that are loaded asynchronously after the initial HTML document is delivered. This includes:

  • AJAX requests that fetch additional data
  • Infinite scroll implementations
  • Content loaded by JavaScript frameworks (React, Vue, Angular)
  • Lazy-loaded images and components
  • Real-time updates via WebSockets

Since Colly doesn't execute JavaScript, it can only see the initial HTML response from the server, missing any content that gets loaded dynamically.

Strategy 1: Direct API Access

The most efficient approach is to identify and access the underlying APIs that provide the dynamic content directly.

Finding API Endpoints

Use browser developer tools to identify the actual API calls:

  1. Open browser developer tools (F12)
  2. Go to the Network tab
  3. Reload the page and interact with the dynamic elements
  4. Look for XHR/Fetch requests in the request list

Implementing API Access in Colly

package main

import (
    "encoding/json"
    "fmt"
    "log"

    "github.com/gocolly/colly/v2"
)

type APIResponse struct {
    Data []struct {
        ID    int    `json:"id"`
        Title string `json:"title"`
        Body  string `json:"body"`
    } `json:"data"`
}

func main() {
    c := colly.NewCollector()

    // Set headers to mimic the original request
    c.OnRequest(func(r *colly.Request) {
        r.Headers.Set("Accept", "application/json")
        r.Headers.Set("X-Requested-With", "XMLHttpRequest")
        r.Headers.Set("Referer", "https://example.com/main-page")
    })

    c.OnResponse(func(r *colly.Response) {
        var apiResp APIResponse
        err := json.Unmarshal(r.Body, &apiResp)
        if err != nil {
            log.Printf("JSON parsing error: %v", err)
            return
        }

        for _, item := range apiResp.Data {
            fmt.Printf("ID: %d, Title: %s\n", item.ID, item.Title)
        }
    })

    // Visit the API endpoint directly
    if err := c.Visit("https://example.com/api/dynamic-content"); err != nil {
        log.Fatal(err)
    }
}
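
Many dynamic endpoints like this are paginated. A small sketch you could place inside main above (the page query parameter is an assumption for illustration; check the Network tab for the real parameter name):

// Walk a paginated endpoint directly. The "page" parameter name
// is an assumption; the real one will show up in the Network tab.
for page := 1; page <= 5; page++ {
    if err := c.Visit(fmt.Sprintf("https://example.com/api/dynamic-content?page=%d", page)); err != nil {
        log.Println(err)
    }
}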

Strategy 2: Headless Browser Integration

When direct API access isn't possible, pair Colly's workflow with a headless Chrome instance driven by the chromedp library. One caveat: Colly exposes no public method for parsing an HTML string directly, so the example below renders the page with chromedp and hands the result to goquery, the same parsing library Colly uses internally for its OnHTML callbacks.

Installing Dependencies

go mod init colly-dynamic
go get github.com/gocolly/colly/v2
go get github.com/chromedp/chromedp
go get github.com/PuerkitoBio/goquery

Browser Integration Example

package main

import (
    "context"
    "fmt"
    "log"
    "strings"
    "time"

    "github.com/PuerkitoBio/goquery"
    "github.com/chromedp/chromedp"
)

func getRenderedHTML(url string) (string, error) {
    // Create context
    ctx, cancel := chromedp.NewContext(context.Background())
    defer cancel()

    // Set timeout
    ctx, cancel = context.WithTimeout(ctx, 30*time.Second)
    defer cancel()

    var htmlContent string

    err := chromedp.Run(ctx,
        chromedp.Navigate(url),
        chromedp.WaitVisible("body"),
        // Wait for specific dynamic content
        chromedp.WaitVisible(".dynamic-content", chromedp.ByQuery),
        // Optional: brief fixed delay as a crude catch-all for late requests
        chromedp.Sleep(2*time.Second),
        chromedp.OuterHTML("html", &htmlContent),
    )

    return htmlContent, err
}

func main() {
    url := "https://example.com/dynamic-page"

    // Get fully rendered HTML from the headless browser
    renderedHTML, err := getRenderedHTML(url)
    if err != nil {
        log.Fatal(err)
    }

    // Colly cannot ingest an HTML string directly, so parse the
    // rendered markup with goquery, the library behind Colly's
    // OnHTML callbacks
    doc, err := goquery.NewDocumentFromReader(strings.NewReader(renderedHTML))
    if err != nil {
        log.Fatal(err)
    }

    doc.Find(".dynamic-content").Each(func(i int, s *goquery.Selection) {
        fmt.Printf("Dynamic content: %s\n", s.Text())
    })
}
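
Launching a fresh browser for every page is expensive. If you need to render many URLs, a sketch that reuses one chromedp browser context across pages (same imports and selector assumptions as above):

// renderMany reuses a single headless browser for multiple pages
// instead of relaunching Chrome per URL.
func renderMany(urls []string) (map[string]string, error) {
    ctx, cancel := chromedp.NewContext(context.Background())
    defer cancel()

    results := make(map[string]string)
    for _, u := range urls {
        var html string
        err := chromedp.Run(ctx,
            chromedp.Navigate(u),
            chromedp.WaitVisible(".dynamic-content", chromedp.ByQuery),
            chromedp.OuterHTML("html", &html),
        )
        if err != nil {
            return nil, err
        }
        results[u] = html
    }
    return results, nil
}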

Strategy 3: Intelligent Delays and Retries

Sometimes adding strategic delays can help capture content that loads shortly after the initial request.

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/PuerkitoBio/goquery"
    "github.com/gocolly/colly/v2"
    "github.com/gocolly/colly/v2/debug"
)

func main() {
    c := colly.NewCollector(
        colly.Debugger(&debug.LogDebugger{}),
    )

    // Set delays between requests
    c.Limit(&colly.LimitRule{
        DomainGlob:  "*",
        Parallelism: 1,
        Delay:       2 * time.Second,
    })

    // Retry failed requests after a delay. Note that Colly routes
    // non-2xx responses through OnError, not OnResponse, by default.
    c.OnError(func(r *colly.Response, err error) {
        log.Printf("Request failed with status %d: %v", r.StatusCode, err)
        time.Sleep(5 * time.Second)
        r.Request.Retry()
    })

    // Set a per-request timeout
    c.SetRequestTimeout(30 * time.Second)

    c.OnHTML("body", func(e *colly.HTMLElement) {
        // Check if expected content is present
        dynamicElements := e.DOM.Find(".dynamic-content")
        if dynamicElements.Length() == 0 {
            log.Println("Dynamic content not found, might need browser rendering")
            return
        }

        dynamicElements.Each(func(i int, s *goquery.Selection) {
            fmt.Printf("Content %d: %s\n", i+1, s.Text())
        })
    })

    c.Visit("https://example.com/dynamic-page")
}

Strategy 4: Hybrid Approach with Puppeteer

For complex scenarios, you might want to use a two-step process: first render the page with a browser automation tool, then scrape with Colly.

Using Puppeteer for Rendering

// render-page.js
const puppeteer = require('puppeteer');

async function renderPage(url) {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    await page.goto(url, { waitUntil: 'networkidle2' });

    // Wait for specific content
    await page.waitForSelector('.dynamic-content', { timeout: 10000 });

    const html = await page.content();
    await browser.close();

    return html;
}

// Print the rendered HTML to stdout so the Go caller can capture it
const url = process.argv[2];
renderPage(url).then(html => {
    console.log(html);
}).catch(console.error);

Integrating with Go

package main

import (
    "fmt"
    "log"
    "os/exec"
    "strings"

    "github.com/PuerkitoBio/goquery"
)

func getRenderedHTML(url string) (string, error) {
    cmd := exec.Command("node", "render-page.js", url)
    output, err := cmd.Output()
    if err != nil {
        return "", err
    }
    return string(output), nil
}

func main() {
    url := "https://example.com/spa-page"
    renderedHTML, err := getRenderedHTML(url)
    if err != nil {
        log.Fatal(err)
    }

    // As in Strategy 2, parse the pre-rendered HTML with goquery,
    // since Colly cannot parse an HTML string directly
    doc, err := goquery.NewDocumentFromReader(strings.NewReader(renderedHTML))
    if err != nil {
        log.Fatal(err)
    }

    doc.Find(".dynamic-content").Each(func(i int, s *goquery.Selection) {
        fmt.Printf("Dynamic content: %s\n", s.Text())
    })
}

Alternative Solutions

WebScraping.AI API

For production environments where you need reliable handling of dynamic content without the complexity of managing browser instances, consider using a specialized web scraping API. WebScraping.AI provides JavaScript rendering capabilities that can handle dynamic content seamlessly.

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "net/url"
)

func scrapeWithAPI(targetURL, apiKey string) (string, error) {
    baseURL := "https://api.webscraping.ai/html"
    params := url.Values{}
    params.Add("url", targetURL)
    params.Add("js", "true") // Enable JavaScript rendering
    params.Add("wait_for", ".dynamic-content") // Wait for specific element

    req, _ := http.NewRequest("GET", baseURL+"?"+params.Encode(), nil)
    req.Header.Add("X-API-Key", apiKey)

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    return string(body), err
}
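
A minimal caller for this helper, continuing the program above (the API key and target URL are placeholders):

func main() {
    html, err := scrapeWithAPI("https://example.com/dynamic-page", "YOUR_API_KEY")
    if err != nil {
        log.Fatal(err)
    }
    // html now holds the fully rendered page; parse it with goquery
    // as in the earlier strategies
    fmt.Println(len(html), "bytes of rendered HTML")
}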

Best Practices and Troubleshooting

Performance Considerations

  1. Cache rendered content when possible to avoid repeated browser launches (see the sketch after this list)
  2. Use connection pooling for API-based approaches
  3. Implement rate limiting to avoid overwhelming target servers
  4. Monitor memory usage when using headless browsers
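
For the first point, a minimal in-memory cache that can sit in front of any rendering helper above (a sketch assuming import "sync"; real use would add TTL or LRU eviction):

// renderCache memoizes rendered HTML per URL so repeat visits skip
// the browser launch. Holding the lock during render serializes
// renders, which is fine for a sketch but not for high throughput.
type renderCache struct {
    mu    sync.Mutex
    pages map[string]string
}

func (rc *renderCache) get(url string, render func(string) (string, error)) (string, error) {
    rc.mu.Lock()
    defer rc.mu.Unlock()
    if html, ok := rc.pages[url]; ok {
        return html, nil
    }
    html, err := render(url)
    if err != nil {
        return "", err
    }
    rc.pages[url] = html
    return html, nil
}

// Usage: cache := &renderCache{pages: map[string]string{}}
//        html, err := cache.get(url, getRenderedHTML)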

Common Issues and Solutions

Issue: Content still not loading despite delays
Solution: Check whether the content requires user interaction (clicks, scrolls) to trigger loading; a chromedp sketch for simulating this follows below.

Issue: Browser automation is too slow
Solution: Prefer direct API access where possible, or reuse a single browser instance across pages instead of launching one per URL.

Issue: Dynamic content loads infinitely
Solution: Set explicit maximum wait times, e.g. context.WithTimeout with chromedp or the timeout option of Puppeteer's waitForSelector.
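
For the first issue, a minimal chromedp sketch that clicks a hypothetical ".load-more" button and scrolls to the bottom to trigger lazy loading (the selector and scroll behavior are assumptions; it reuses the imports from Strategy 2):

// clickAndScroll simulates the interaction some pages require
// before they load content. ".load-more" is a hypothetical selector.
func clickAndScroll(url string) (string, error) {
    ctx, cancel := chromedp.NewContext(context.Background())
    defer cancel()
    ctx, cancel = context.WithTimeout(ctx, 30*time.Second)
    defer cancel()

    var html string
    err := chromedp.Run(ctx,
        chromedp.Navigate(url),
        chromedp.Click(".load-more", chromedp.ByQuery),
        chromedp.Evaluate(`window.scrollTo(0, document.body.scrollHeight)`, nil),
        chromedp.Sleep(2*time.Second), // crude wait; prefer WaitVisible on a concrete selector
        chromedp.OuterHTML("html", &html),
    )
    return html, err
}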

Error Handling

// robustDynamicScraping tries plain Colly first and falls back to
// browser rendering when the target content is missing.
// handleWithBrowser is assumed to wrap a renderer such as the
// chromedp-based getRenderedHTML from Strategy 2.
func robustDynamicScraping(url string) error {
    c := colly.NewCollector()
    contentFound := false

    c.OnHTML(".target-content", func(e *colly.HTMLElement) {
        contentFound = true
        // Process content here
    })

    err := c.Visit(url)
    if err != nil || !contentFound {
        log.Println("Falling back to browser rendering")
        return handleWithBrowser(url)
    }

    return nil
}
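
A possible handleWithBrowser, sketched here by reusing getRenderedHTML from Strategy 2 and goquery for parsing:

// handleWithBrowser renders the page in headless Chrome and processes
// the content plain Colly could not see.
func handleWithBrowser(url string) error {
    renderedHTML, err := getRenderedHTML(url)
    if err != nil {
        return err
    }
    doc, err := goquery.NewDocumentFromReader(strings.NewReader(renderedHTML))
    if err != nil {
        return err
    }
    doc.Find(".target-content").Each(func(i int, s *goquery.Selection) {
        fmt.Printf("Recovered content: %s\n", s.Text())
    })
    return nil
}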

Conclusion

While Colly doesn't natively support JavaScript execution, there are multiple strategies to handle dynamic content effectively. The choice of approach depends on your specific use case:

  • Use direct API access for the best performance and reliability
  • Implement browser integration for complex JavaScript-heavy sites
  • Consider hybrid approaches for flexibility
  • Evaluate managed solutions for production environments

For applications requiring robust handling of modern web technologies, you might also want to explore how to crawl single page applications using Puppeteer as an alternative approach.

Remember to always respect robots.txt files, implement proper rate limiting, and follow ethical scraping practices when dealing with dynamic content.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
