How do I use Claude API for scraping product data?

Claude API excels at extracting structured product data from HTML content through its advanced natural language understanding capabilities. Unlike traditional web scraping methods that rely on brittle CSS selectors or XPath expressions, Claude can intelligently interpret product information from various page layouts and formats, making it ideal for e-commerce data extraction.

Understanding Claude API for Product Data Extraction

The Claude API uses large language models to understand and extract data from HTML content. When you provide HTML and specify what product information you need, Claude analyzes the page structure and content to extract the requested fields accurately. This approach is particularly effective for:

  • Product listings with varying structures
  • Product detail pages across different e-commerce platforms
  • Dynamic content that's difficult to parse with traditional selectors
  • Unstructured or semi-structured product information

Setting Up Claude API for Web Scraping

First, you'll need to obtain an API key from Anthropic. Once you have your credentials, you can start making requests to extract product data.
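
Setting up the Python SDK is typically a pip install plus an API key; the official client reads the ANTHROPIC_API_KEY environment variable by default. A minimal setup sketch:

# Install the SDK first: pip install anthropic
import os
import anthropic

# The client picks up ANTHROPIC_API_KEY automatically; passing api_key
# explicitly, as the examples below do, also works
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])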

Python Implementation

Here's a complete Python example for scraping product data using Claude API:

import anthropic
import requests

def scrape_product_data(url):
    # Fetch the HTML content
    response = requests.get(url, headers={
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
    })
    html_content = response.text

    # Initialize Claude client
    client = anthropic.Anthropic(api_key="your-api-key-here")

    # Create the prompt for product extraction
    prompt = f"""Extract the following product information from this HTML:
    - Product name
    - Price
    - Description
    - Availability status
    - Product images (URLs)
    - SKU or product ID
    - Reviews count and average rating
    - Product specifications

    Return the data as a JSON object with these exact field names.

    HTML content:
    {html_content[:50000]}
    """  # HTML truncated to 50,000 characters to stay within token limits

    # Make API request
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=4096,
        messages=[
            {"role": "user", "content": prompt}
        ]
    )

    return message.content[0].text

# Example usage
product_url = "https://example.com/product/123"
product_data = scrape_product_data(product_url)
print(product_data)
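
Note that message.content[0].text is plain text: even when the prompt asks for JSON, Claude may wrap its answer in markdown code fences, so parse defensively. A minimal sketch, assuming the function above (removeprefix requires Python 3.9+):

import json

raw = scrape_product_data(product_url)
# Strip optional ```json fences before parsing
raw = raw.strip().removeprefix("```json").removesuffix("```").strip()
product = json.loads(raw)
print(product)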

JavaScript/Node.js Implementation

Here's how to implement the same functionality in JavaScript:

import Anthropic from '@anthropic-ai/sdk';
import axios from 'axios';

async function scrapeProductData(url) {
    // Fetch HTML content
    const response = await axios.get(url, {
        headers: {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
        }
    });
    const htmlContent = response.data;

    // Initialize Claude client
    const client = new Anthropic({
        apiKey: process.env.ANTHROPIC_API_KEY
    });

    // Extract product data
    const message = await client.messages.create({
        model: 'claude-3-5-sonnet-20241022',
        max_tokens: 4096,
        messages: [{
            role: 'user',
            content: `Extract the following product information from this HTML:
            - Product name
            - Price
            - Description
            - Availability status
            - Product images (URLs)
            - SKU or product ID
            - Reviews count and average rating
            - Product specifications

            Return the data as a JSON object with these exact field names.

            HTML content:
            ${htmlContent.substring(0, 50000)}`
        }]
    });

    return message.content[0].text;
}

// Example usage
const productUrl = 'https://example.com/product/123';
scrapeProductData(productUrl)
    .then(data => console.log(data))
    .catch(error => console.error('Error:', error));

Advanced Techniques for Product Data Extraction

Using Tool Use for Structured Output

Claude's tool use feature (its version of function calling) ensures you receive properly structured JSON output:

import anthropic

def scrape_products_with_schema(html_content):
    client = anthropic.Anthropic(api_key="your-api-key-here")

    tools = [{
        "name": "extract_product_data",
        "description": "Extract product information from HTML",
        "input_schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "description": "Product name"},
                "price": {"type": "number", "description": "Product price as a number"},
                "currency": {"type": "string", "description": "Currency code (USD, EUR, etc.)"},
                "description": {"type": "string", "description": "Product description"},
                "in_stock": {"type": "boolean", "description": "Whether product is in stock"},
                "images": {"type": "array", "items": {"type": "string"}, "description": "Product image URLs"},
                "sku": {"type": "string", "description": "Product SKU or ID"},
                "rating": {"type": "number", "description": "Average rating"},
                "reviews_count": {"type": "integer", "description": "Number of reviews"},
                "specifications": {"type": "object", "description": "Product specifications as key-value pairs"}
            },
            "required": ["name", "price", "currency"]
        }
    }]

    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=4096,
        tools=tools,
        messages=[{
            "role": "user",
            "content": f"Extract product data from this HTML:\n\n{html_content[:50000]}"
        }]
    )

    # Parse the tool use response
    for block in message.content:
        if block.type == "tool_use" and block.name == "extract_product_data":
            return block.input

    return None
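
By default, Claude may answer in prose instead of calling the tool. The Messages API's tool_choice parameter forces the call, which guarantees a tool_use block in the response:

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=4096,
    tools=tools,
    # Force Claude to use the extraction tool rather than reply in prose
    tool_choice={"type": "tool", "name": "extract_product_data"},
    messages=[{
        "role": "user",
        "content": f"Extract product data from this HTML:\n\n{html_content[:50000]}"
    }]
)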

Handling Multiple Products

When scraping product listing pages, you can extract multiple products at once:

def scrape_product_listing(html_content):
    client = anthropic.Anthropic(api_key="your-api-key-here")

    prompt = """Extract all products from this product listing page.
    For each product, extract:
    - Product name
    - Price
    - Product URL
    - Thumbnail image URL
    - Brief description or tagline

    Return as a JSON array of product objects."""

    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": f"{prompt}\n\nHTML:\n{html_content[:50000]}"
        }]
    )

    return message.content[0].text
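
Listing pages can easily exceed the 50,000-character cutoff used above. One workaround, sketched below, is to chunk the HTML and issue one request per chunk; note that naive slicing can split a product card across chunks, so splitting on card boundaries is more robust in practice:

def scrape_large_listing(html_content, chunk_size=50000):
    # One Claude request per chunk; collect the raw responses
    responses = []
    for start in range(0, len(html_content), chunk_size):
        chunk = html_content[start:start + chunk_size]
        responses.append(scrape_product_listing(chunk))
    return responses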

Combining Claude with Traditional Scraping Tools

For optimal results, combine Claude API with traditional web scraping tools. Use a headless browser to fetch JavaScript-rendered content, then pass it to Claude for intelligent extraction:

from playwright.sync_api import sync_playwright
import anthropic

def scrape_dynamic_product_page(url):
    # Use Playwright to render JavaScript
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)

        # Wait for product content to load
        page.wait_for_selector('.product-details', timeout=10000)

        # Get rendered HTML
        html_content = page.content()
        browser.close()

    # Use Claude to extract structured data
    client = anthropic.Anthropic(api_key="your-api-key-here")

    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": f"""Extract product information from this HTML as JSON:

            HTML:
            {html_content[:50000]}"""
        }]
    )

    return message.content[0].text

When dealing with complex single-page applications, handling AJAX requests using Puppeteer can ensure you capture all dynamically loaded product data before passing it to Claude for extraction.
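
In Playwright's Python API, a rough equivalent is to wait for network activity to settle before snapshotting the page, so AJAX-loaded products are included in the HTML you send to Claude:

from playwright.sync_api import sync_playwright

def fetch_ajax_rendered_html(url):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # "networkidle" waits until the page has had no network
        # connections for 500 ms, capturing AJAX-loaded content
        page.goto(url, wait_until="networkidle")
        html = page.content()
        browser.close()
    return html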

Best Practices for Product Data Scraping

1. Optimize Token Usage

Claude API charges based on token usage, so optimize your input:

from bs4 import BeautifulSoup

def clean_html_for_claude(html_content):
    soup = BeautifulSoup(html_content, 'html.parser')

    # Remove unnecessary elements
    for element in soup(['script', 'style', 'nav', 'footer', 'header']):
        element.decompose()

    # Return the cleaned HTML, preserving document structure
    return str(soup)

# Use cleaned HTML
cleaned_html = clean_html_for_claude(html_content)
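
If you don't need the markup at all, converting the cleaned document to plain text reduces tokens further, at the cost of discarding attributes such as image URLs. A variant of the function above:

def html_to_text_for_claude(html_content):
    soup = BeautifulSoup(html_content, 'html.parser')
    for element in soup(['script', 'style', 'nav', 'footer', 'header']):
        element.decompose()
    # get_text drops all tags, which also discards href/src attributes
    return soup.get_text(separator='\n', strip=True)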

2. Implement Retry Logic

Handle API errors gracefully with exponential backoff:

import time
import anthropic
from anthropic import APIError

def extract_with_retry(html_content, max_retries=3):
    client = anthropic.Anthropic(api_key="your-api-key-here")

    for attempt in range(max_retries):
        try:
            message = client.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=4096,
                messages=[{
                    "role": "user",
                    "content": f"Extract product data:\n{html_content[:50000]}"
                }]
            )
            return message.content[0].text

        except APIError as e:
            if attempt < max_retries - 1:
                wait_time = 2 ** attempt  # Exponential backoff: 1s, 2s, 4s, ...
                time.sleep(wait_time)
            else:
                raise
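
Recent versions of the official Python SDK can also retry rate limits and transient server errors on their own; if your SDK version supports it, the max_retries argument configures this at the client level:

# SDK-level retries (check that your anthropic version supports max_retries)
client = anthropic.Anthropic(api_key="your-api-key-here", max_retries=5)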

3. Cache Results

Implement caching to avoid redundant API calls:

import hashlib

# Simple in-memory cache keyed by URL hash
_product_cache = {}

def scrape_with_cache(url):
    url_hash = hashlib.md5(url.encode()).hexdigest()

    # Return the cached result if this URL was already scraped
    if url_hash in _product_cache:
        return _product_cache[url_hash]

    # Otherwise scrape, cache, and return
    data = scrape_product_data(url)
    _product_cache[url_hash] = data
    return data
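
The in-memory cache above is lost when the process exits. For scraping runs that span multiple sessions, a JSON file makes a simple persistent cache; a minimal sketch (the cache path is arbitrary):

import hashlib
import json
import os

CACHE_PATH = "product_cache.json"  # arbitrary location

def scrape_with_disk_cache(url):
    # Load any previously cached results
    cache = {}
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            cache = json.load(f)

    url_hash = hashlib.md5(url.encode()).hexdigest()
    if url_hash not in cache:
        cache[url_hash] = scrape_product_data(url)
        with open(CACHE_PATH, "w") as f:
            json.dump(cache, f)

    return cache[url_hash]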

4. Handle Rate Limiting

Respect API rate limits by implementing throttling:

import asyncio
from asyncio import Semaphore

async def scrape_products_batch(urls, max_concurrent=5):
    semaphore = Semaphore(max_concurrent)

    async def scrape_with_limit(url):
        async with semaphore:
            # scrape_product_data is synchronous, so run it in a worker thread
            return await asyncio.to_thread(scrape_product_data, url)

    tasks = [scrape_with_limit(url) for url in urls]
    return await asyncio.gather(*tasks)

# Usage
urls = ['https://example.com/product/1', 'https://example.com/product/2']
results = asyncio.run(scrape_products_batch(urls))
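
A semaphore caps concurrency but not requests per minute. If you hit per-minute rate limits, add simple pacing between calls; a minimal single-threaded sketch with illustrative names:

import time

class RateLimiter:
    """Allow at most `rate` calls per `per` seconds."""

    def __init__(self, rate, per=60.0):
        self.min_interval = per / rate
        self.last_call = 0.0

    def wait(self):
        # Sleep just long enough to respect the minimum interval
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

limiter = RateLimiter(rate=50)  # e.g. 50 requests per minute

def scrape_paced(url):
    limiter.wait()
    return scrape_product_data(url)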

Monitoring and Error Handling

When scraping product data at scale, implement comprehensive error handling and monitor network requests to ensure reliability:

import logging
import requests
import anthropic

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def scrape_with_monitoring(url):
    try:
        logger.info(f"Starting scrape for {url}")

        # Fetch HTML
        response = requests.get(url, timeout=30)
        response.raise_for_status()

        # Extract with Claude, using the retry helper defined earlier
        data = extract_with_retry(response.text)

        logger.info(f"Successfully scraped {url}")
        return data

    except requests.RequestException as e:
        logger.error(f"HTTP error for {url}: {str(e)}")
        return None

    except anthropic.APIError as e:
        logger.error(f"Claude API error for {url}: {str(e)}")
        return None

    except Exception as e:
        logger.error(f"Unexpected error for {url}: {str(e)}")
        return None

Cost Considerations

Claude API pricing is based on input and output tokens. For product scraping:

  • Input tokens: HTML content (larger pages cost more)
  • Output tokens: Extracted product data

A typical product page extraction might use:

  • 10,000-30,000 input tokens (depending on HTML size)
  • 500-2,000 output tokens (depending on data complexity)

To minimize costs:

  1. Clean HTML before sending (remove scripts, styles, navigation)
  2. Use Claude 3 Haiku for simpler extractions
  3. Batch similar requests when possible
  4. Cache results to avoid re-processing the same pages
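
To see what a page will cost before running the extraction, newer versions of the Python SDK expose a token-counting endpoint; a sketch assuming cleaned_html from the cleaning step above:

import anthropic

client = anthropic.Anthropic(api_key="your-api-key-here")

# count_tokens reports input tokens without running (or paying for) inference
count = client.messages.count_tokens(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": cleaned_html}]
)
print(f"Estimated input tokens: {count.input_tokens}")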

Conclusion

Claude API provides a powerful, flexible approach to product data scraping that adapts to different page structures without requiring constant maintenance of CSS selectors. By combining Claude's intelligence with traditional scraping tools and following best practices for error handling, caching, and token optimization, you can build robust product data extraction pipelines that scale efficiently.

The key advantages of using Claude for product scraping include its ability to understand context, handle layout variations, and extract data from complex or poorly structured HTML—making it an excellent choice for e-commerce data extraction projects.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
