How do I handle HTTP redirects in Go web scraping?

HTTP redirects are a common mechanism websites use to send clients from one URL to another. When web scraping with Go, handling redirects properly is crucial for following content that has moved, dealing with URL canonicalization, and avoiding infinite redirect loops. Go's net/http package provides flexible redirect handling that can be customized for various scraping scenarios.

Understanding HTTP Redirects

HTTP redirects use status codes in the 3xx range (301, 302, 303, 307, 308) to indicate that the requested resource is available at a different location, specified by the Location header. The different redirect types carry different semantics (summarized below, with a code sketch after the list):

  • 301 Moved Permanently: The resource has permanently moved
  • 302 Found: Temporary redirect; in practice most clients switch non-GET methods to GET
  • 303 See Other: The client should issue a GET request to the new URL
  • 307 Temporary Redirect: Preserves the original HTTP method
  • 308 Permanent Redirect: Permanent redirect that preserves the HTTP method
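
If you ever need to reason about redirects by hand, these semantics boil down to whether the next request keeps the original method and body. The sketch below roughly mirrors what Go's client does when it follows a redirect; the redirectMethod helper is purely illustrative and not part of net/http:

package main

import (
    "fmt"
    "net/http"
)

// redirectMethod reports which method to use for the next request after a
// redirect: 301/302/303 switch non-GET/HEAD requests to GET, while 307/308
// preserve the original method (and body). Non-redirect codes are not followable.
func redirectMethod(status int, original string) (string, bool) {
    switch status {
    case http.StatusMovedPermanently, http.StatusFound, http.StatusSeeOther:
        if original == http.MethodGet || original == http.MethodHead {
            return original, true
        }
        return http.MethodGet, true
    case http.StatusTemporaryRedirect, http.StatusPermanentRedirect:
        return original, true
    default:
        return "", false
    }
}

func main() {
    method, follow := redirectMethod(http.StatusFound, http.MethodPost)
    fmt.Println(method, follow) // GET true
}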

Default Redirect Behavior in Go

By default, Go's HTTP client automatically follows redirects up to 10 times:

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Default client follows redirects automatically
    resp, err := http.Get("https://httpbin.org/redirect/3")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }

    fmt.Printf("Final URL: %s\n", resp.Request.URL.String())
    fmt.Printf("Status: %s\n", resp.Status)
    fmt.Printf("Body length: %d bytes\n", len(body))
}

Custom Redirect Policies

You can customize redirect behavior by providing a custom CheckRedirect function:

package main

import (
    "errors"
    "fmt"
    "net/http"
)

func main() {
    // Create client with custom redirect policy
    client := &http.Client{
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            // Limit redirects to 5
            if len(via) >= 5 {
                return errors.New("too many redirects")
            }

            // Log each redirect
            fmt.Printf("Redirecting from %s to %s\n", 
                via[len(via)-1].URL.String(), 
                req.URL.String())

            // Allow the redirect
            return nil
        },
    }

    resp, err := client.Get("https://httpbin.org/redirect/3")
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    defer resp.Body.Close()

    fmt.Printf("Final URL: %s\n", resp.Request.URL.String())
}

Preventing All Redirects

Sometimes you want to handle redirects manually or prevent them entirely:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    // Client that doesn't follow redirects
    client := &http.Client{
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            return http.ErrUseLastResponse
        },
    }

    resp, err := client.Get("https://httpbin.org/redirect/1")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    fmt.Printf("Status: %s\n", resp.Status)
    fmt.Printf("Location header: %s\n", resp.Header.Get("Location"))

    // Check if it's a redirect
    if resp.StatusCode >= 300 && resp.StatusCode < 400 {
        location := resp.Header.Get("Location")
        fmt.Printf("Would redirect to: %s\n", location)
    }
}
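
One detail worth noting when you resolve redirects yourself: the Location header may be a relative URL. resp.Location() resolves it against the request URL (and returns http.ErrNoLocation when the header is missing), so a small helper like the hypothetical nextHop below avoids error-prone string concatenation:

package main

import (
    "fmt"
    "net/http"
)

// nextHop returns the fully resolved target of a single redirect response,
// using resp.Location() to resolve a possibly relative Location header.
func nextHop(resp *http.Response) (string, error) {
    loc, err := resp.Location()
    if err != nil {
        return "", err // http.ErrNoLocation if there is no Location header
    }
    return loc.String(), nil
}

func main() {
    client := &http.Client{
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            return http.ErrUseLastResponse // stop at the first response
        },
    }

    resp, err := client.Get("https://httpbin.org/redirect/1")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    if target, err := nextHop(resp); err == nil {
        fmt.Printf("Would redirect to: %s\n", target)
    }
}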

Advanced Redirect Handling with Context

For more sophisticated scraping scenarios, you can track redirect chains and handle timeouts:

package main

import (
    "context"
    "fmt"
    "net/http"
    "time"
)

type RedirectTracker struct {
    MaxRedirects int
    RedirectChain []string
}

func (rt *RedirectTracker) CheckRedirect(req *http.Request, via []*http.Request) error {
    // Track the redirect chain
    rt.RedirectChain = append(rt.RedirectChain, req.URL.String())

    if len(via) >= rt.MaxRedirects {
        return fmt.Errorf("stopped after %d redirects", rt.MaxRedirects)
    }

    // You can add custom logic here, such as:
    // - Checking if we're being redirected to a different domain
    // - Validating the redirect URL
    // - Implementing custom retry logic

    return nil
}

func scrapeWithRedirectTracking(targetURL string) error {
    tracker := &RedirectTracker{
        MaxRedirects: 10,
        RedirectChain: []string{targetURL},
    }

    client := &http.Client{
        CheckRedirect: tracker.CheckRedirect,
        Timeout:       30 * time.Second,
    }

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    req, err := http.NewRequestWithContext(ctx, "GET", targetURL, nil)
    if err != nil {
        return err
    }

    resp, err := client.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    fmt.Printf("Redirect chain:\n")
    for i, url := range tracker.RedirectChain {
        fmt.Printf("%d: %s\n", i+1, url)
    }

    fmt.Printf("Final status: %s\n", resp.Status)
    return nil
}

func main() {
    err := scrapeWithRedirectTracking("https://httpbin.org/redirect/3")
    if err != nil {
        fmt.Printf("Error: %v\n", err)
    }
}

Handling Cross-Domain Redirects

When scraping, you might want to handle cross-domain redirects differently:

package main

import (
    "fmt"
    "net/http"
    "strings"
)

func allowSameDomainRedirects(req *http.Request, via []*http.Request) error {
    if len(via) >= 10 {
        return fmt.Errorf("too many redirects")
    }

    // Get the original domain
    originalHost := via[0].URL.Host
    newHost := req.URL.Host

    // Allow redirects within the same host or to its subdomains
    if newHost != originalHost && !strings.HasSuffix(newHost, "."+originalHost) {
        return fmt.Errorf("cross-domain redirect blocked: %s -> %s",
            originalHost, newHost)
    }

    return nil
}

func main() {
    client := &http.Client{
        CheckRedirect: allowSameDomainRedirects,
    }

    resp, err := client.Get("https://example.com/some-path")
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    defer resp.Body.Close()

    fmt.Printf("Successfully scraped: %s\n", resp.Request.URL.String())
}
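
A related detail: when CheckRedirect returns an error, client.Get hands back the last response with its body already closed, and the error itself arrives wrapped in a *url.Error. A short sketch using a sentinel error (errBlocked is an illustrative name) shows how to tell a policy block apart from a network failure:

package main

import (
    "errors"
    "fmt"
    "net/http"
)

// errBlocked is the sentinel our redirect policy returns; errors.Is still
// recognizes it through the *url.Error wrapping added by the client.
var errBlocked = errors.New("redirect blocked by policy")

func main() {
    client := &http.Client{
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            return errBlocked
        },
    }

    // httpbin responds with a redirect, which the policy above refuses.
    _, err := client.Get("https://httpbin.org/redirect/1")

    if errors.Is(err, errBlocked) {
        fmt.Println("redirect was blocked by policy, not a network failure")
    } else if err != nil {
        fmt.Printf("other error: %v\n", err)
    }
}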

Redirect Handling with Cookies and Headers

When following redirects, you might need to preserve cookies and headers:

package main

import (
    "fmt"
    "net/http"
    "net/http/cookiejar"
)

func main() {
    // Create a cookie jar to persist cookies across redirects
    jar, err := cookiejar.New(nil)
    if err != nil {
        panic(err)
    }

    client := &http.Client{
        Jar: jar,
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            if len(via) >= 10 {
                return fmt.Errorf("too many redirects")
            }

            // Go's client already forwards most headers on redirects and
            // strips sensitive ones (Authorization, Cookie, WWW-Authenticate)
            // when the redirect leaves the original domain, so only re-add
            // headers that were dropped
            if len(via) > 0 {
                for key, values := range via[0].Header {
                    // Skip headers that shouldn't be copied
                    if key == "Authorization" && req.URL.Host != via[0].URL.Host {
                        continue // don't send auth headers to a different host
                    }
                    if _, exists := req.Header[key]; exists {
                        continue // already forwarded automatically; avoid duplicates
                    }
                    for _, value := range values {
                        req.Header.Add(key, value)
                    }
                }
            }

            return nil
        },
    }

    // Set initial headers
    req, err := http.NewRequest("GET", "https://httpbin.org/redirect/2", nil)
    if err != nil {
        panic(err)
    }

    req.Header.Set("User-Agent", "GoScraper/1.0")
    req.Header.Set("Custom-Header", "MyValue")

    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    fmt.Printf("Final URL: %s\n", resp.Request.URL.String())
}

Error Handling and Retry Logic

Robust redirect handling should include proper error handling and retry mechanisms:

package main

import (
    "fmt"
    "net/http"
    "time"
)

func scrapeWithRetry(url string, maxRetries int) (*http.Response, error) {
    client := &http.Client{
        Timeout: 30 * time.Second,
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            if len(via) >= 10 {
                return fmt.Errorf("redirect limit exceeded")
            }
            return nil
        },
    }

    var lastErr error
    for attempt := 0; attempt <= maxRetries; attempt++ {
        resp, err := client.Get(url)
        if err == nil {
            return resp, nil
        }

        lastErr = err
        if attempt < maxRetries {
            waitTime := time.Duration(attempt+1) * time.Second
            fmt.Printf("Attempt %d failed: %v. Retrying in %v...\n", 
                attempt+1, err, waitTime)
            time.Sleep(waitTime)
        }
    }

    return nil, fmt.Errorf("failed after %d attempts: %v", maxRetries+1, lastErr)
}

func main() {
    resp, err := scrapeWithRetry("https://httpbin.org/redirect/2", 3)
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    defer resp.Body.Close()

    fmt.Printf("Successfully scraped: %s\n", resp.Request.URL.String())
}

Best Practices for Redirect Handling

  1. Set reasonable redirect limits: The default limit of 10 redirects is usually sufficient, but adjust it based on your needs.

  2. Handle cross-domain redirects carefully: Be cautious about following redirects to different domains, especially when dealing with authentication.

  3. Preserve necessary headers and cookies: Use cookie jars and carefully manage header propagation across redirects.

  4. Implement timeout handling: Always set timeouts to prevent hanging on problematic redirects.

  5. Log redirect chains: Track where your requests are being redirected for debugging and monitoring.

  6. Validate redirect URLs: Check that redirect destinations are safe and expected; a sketch follows this list.
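
A minimal sketch of that last point, rejecting redirect targets with unexpected schemes or loopback hosts inside CheckRedirect (the validateRedirect helper and its specific checks are illustrative; adapt them to your own policy):

package main

import (
    "errors"
    "fmt"
    "net"
    "net/http"
)

// validateRedirect blocks redirects to non-HTTP(S) schemes and loopback
// hosts, in addition to the usual redirect-count limit.
func validateRedirect(req *http.Request, via []*http.Request) error {
    if len(via) >= 10 {
        return errors.New("too many redirects")
    }
    if req.URL.Scheme != "http" && req.URL.Scheme != "https" {
        return fmt.Errorf("unexpected redirect scheme: %q", req.URL.Scheme)
    }
    host := req.URL.Hostname()
    if ip := net.ParseIP(host); host == "localhost" || (ip != nil && ip.IsLoopback()) {
        return fmt.Errorf("redirect to loopback host blocked: %s", host)
    }
    return nil
}

func main() {
    client := &http.Client{CheckRedirect: validateRedirect}

    resp, err := client.Get("https://httpbin.org/redirect/1")
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    defer resp.Body.Close()

    fmt.Printf("Final URL: %s\n", resp.Request.URL.String())
}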

Similar to handling page redirections in Puppeteer, proper redirect management is essential for reliable web scraping. Understanding redirect behavior helps ensure your Go scrapers can effectively follow content as it moves across the web while maintaining security and performance.

For complex scenarios involving JavaScript-heavy sites that might use client-side redirects, you might need to complement your Go scraping with tools that can handle dynamic content, much like monitoring network requests in Puppeteer for comprehensive redirect tracking.

Conclusion

Handling HTTP redirects properly in Go web scraping requires understanding the different redirect types, implementing custom redirect policies, and following best practices for security and reliability. By using Go's flexible CheckRedirect function and proper error handling, you can build robust scrapers that handle redirects gracefully while avoiding common pitfalls like infinite loops and security issues.

The key is to balance following legitimate redirects with protecting against malicious or problematic redirect chains. With the examples and patterns shown above, you can implement redirect handling that suits your specific scraping requirements.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
