How do I use the GPT API for automated data extraction?

The GPT API from OpenAI provides a powerful way to automate data extraction from unstructured text, HTML, and other content sources. Unlike traditional web scraping methods that rely on rigid CSS selectors or XPath expressions, GPT can understand context, handle varying layouts, and extract structured data from natural language.

Understanding GPT API for Data Extraction

The GPT API uses large language models (LLMs) to interpret and extract information from text. This approach is particularly useful when:

  • Website structures change frequently
  • Data is embedded in natural language text
  • You need to extract semantic meaning, not just visible text
  • Traditional selectors are too brittle or complex to maintain

The key advantage is that GPT can understand the context of the data, making it resilient to layout changes and variations in formatting.

Setting Up the GPT API

First, you'll need to install the OpenAI Python library and set up your API key:

pip install openai

For JavaScript/Node.js:

npm install openai

Set your API key as an environment variable:

export OPENAI_API_KEY='your-api-key-here'
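
The official SDKs pick this variable up automatically, so the key never has to appear in your code. As a quick sanity check (a minimal sketch; the prompt and model here are only examples), you can make a one-off call in Python:

from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default;
# pass api_key="..." explicitly if you prefer not to rely on the variable.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Reply with OK"}],
)
print(response.choices[0].message.content)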

Basic Data Extraction with Python

Here's a simple example of extracting structured data from HTML using the GPT API:

from openai import OpenAI
import json

client = OpenAI()

# Sample HTML content (in practice, you'd fetch this from a website)
html_content = """
<div class="product">
    <h1>Wireless Bluetooth Headphones</h1>
    <p class="price">$79.99</p>
    <p class="description">Premium noise-canceling headphones with 30-hour battery life.</p>
    <span class="rating">4.5 stars</span>
</div>
"""

# Create a prompt for data extraction
prompt = f"""
Extract the following information from this HTML and return it as JSON:
- product_name
- price (as a number)
- description
- rating (as a number)

HTML:
{html_content}

Return only valid JSON, no other text.
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a data extraction assistant. Always return valid JSON."},
        {"role": "user", "content": prompt}
    ],
    temperature=0  # Use 0 for consistent, deterministic output
)

# Parse the extracted data
extracted_data = json.loads(response.choices[0].message.content)
print(json.dumps(extracted_data, indent=2))

Output:

{
  "product_name": "Wireless Bluetooth Headphones",
  "price": 79.99,
  "description": "Premium noise-canceling headphones with 30-hour battery life.",
  "rating": 4.5
}

JavaScript/Node.js Implementation

Here's the equivalent implementation in JavaScript:

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

async function extractData(htmlContent) {
  const prompt = `
Extract the following information from this HTML and return it as JSON:
- product_name
- price (as a number)
- description
- rating (as a number)

HTML:
${htmlContent}

Return only valid JSON, no other text.
  `;

  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'You are a data extraction assistant. Always return valid JSON.' },
      { role: 'user', content: prompt }
    ],
    temperature: 0
  });

  const extractedData = JSON.parse(response.choices[0].message.content);
  return extractedData;
}

// Example usage
const html = `
<div class="product">
    <h1>Wireless Bluetooth Headphones</h1>
    <p class="price">$79.99</p>
</div>
`;

extractData(html).then(data => {
  console.log(JSON.stringify(data, null, 2));
});

Using JSON Mode for Structured Output

OpenAI provides a JSON mode that guarantees a syntactically valid JSON response (your prompt must mention JSON, and the output still isn't validated against a specific schema):

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are a data extraction assistant. Extract data as JSON."},
        {"role": "user", "content": f"Extract product information from: {html_content}"}
    ],
    temperature=0
)

data = json.loads(response.choices[0].message.content)

Advanced: Using Function Calling

Function calling (also called tool calling) provides the most reliable way to extract structured data:

tools = [
    {
        "type": "function",
        "function": {
            "name": "extract_product_data",
            "description": "Extract product information from HTML",
            "parameters": {
                "type": "object",
                "properties": {
                    "product_name": {"type": "string", "description": "Name of the product"},
                    "price": {"type": "number", "description": "Price in USD"},
                    "description": {"type": "string", "description": "Product description"},
                    "rating": {"type": "number", "description": "Rating out of 5"},
                    "in_stock": {"type": "boolean", "description": "Whether product is in stock"}
                },
                "required": ["product_name", "price"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": f"Extract product data from: {html_content}"}
    ],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "extract_product_data"}}
)

# Extract the function arguments
tool_call = response.choices[0].message.tool_calls[0]
extracted_data = json.loads(tool_call.function.arguments)
print(extracted_data)

Combining GPT with Traditional Web Scraping

For optimal results, combine GPT with traditional scraping tools. For pages that load data via AJAX, you can use a browser automation tool such as Selenium (or Puppeteer in Node.js) to wait for the dynamic content to render before passing the HTML to GPT:

from selenium import webdriver
from openai import OpenAI
import json
import time

client = OpenAI()

# Fetch dynamic content with Selenium
driver = webdriver.Chrome()
driver.get("https://example.com/products")
time.sleep(5)  # Give dynamic content time to load (WebDriverWait on a specific element is more robust)

# Get the page source
html_content = driver.page_source
driver.quit()

# Extract data with GPT
response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Extract all product listings as JSON array."},
        {"role": "user", "content": f"Extract products from: {html_content}"}
    ],
    temperature=0
)

products = json.loads(response.choices[0].message.content)

Batch Processing for Multiple Pages

When scraping multiple pages, process them in a loop and trim each page's HTML to keep token costs down (a parallel variant is sketched after this function):

def extract_from_multiple_pages(urls):
    """Extract data from multiple URLs efficiently"""
    results = []

    for url in urls:
        # Fetch HTML (using requests, Selenium, etc.)
        html = fetch_html(url)  # Your fetching logic

        # Extract with GPT
        response = client.chat.completions.create(
            model="gpt-4o",
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": "Extract product data as JSON."},
                {"role": "user", "content": f"Extract from: {html[:8000]}"}  # Limit token usage
            ],
            temperature=0
        )

        data = json.loads(response.choices[0].message.content)
        data['source_url'] = url
        results.append(data)

    return results
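
The loop above handles one page at a time. If you have many pages, a small thread pool can overlap the fetches and API calls. This is a sketch only: fetch_html is your own fetching logic from above, extract_with_retry is the retry helper defined later in this article, and the pool should stay small so you don't exceed your OpenAI rate limits.

from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_and_extract(url):
    """Fetch one page and extract it with GPT (helpers assumed from this article)."""
    html = fetch_html(url)            # your fetching logic, as above
    data = extract_with_retry(html)   # retry helper shown in the error-handling section
    data['source_url'] = url
    return data

def extract_many_parallel(urls, max_workers=4):
    """Sketch: process several pages concurrently with a small thread pool."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(fetch_and_extract, url) for url in urls]
        # Collect results as they finish; order may differ from the input list
        return [future.result() for future in as_completed(futures)]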

Best Practices

1. Optimize Token Usage

GPT API charges by token count. Reduce costs by:

  • Preprocessing HTML to remove unnecessary tags and whitespace
  • Extracting only the relevant HTML sections before sending to GPT
  • Using a smaller model (such as gpt-4o-mini) for simpler extraction tasks

For example, you can strip noise from the HTML with BeautifulSoup before sending it to the API:

from bs4 import BeautifulSoup

def clean_html(html):
    """Remove unnecessary elements to reduce token count"""
    soup = BeautifulSoup(html, 'html.parser')

    # Remove script and style tags
    for tag in soup(['script', 'style', 'nav', 'footer', 'header']):
        tag.decompose()

    # Get only the main content
    main_content = soup.find('main') or soup.find('article') or soup.body

    return str(main_content)

2. Set Temperature to 0

For data extraction, use temperature=0 to keep the output as consistent as possible (OpenAI does not guarantee strict determinism, but this minimizes run-to-run variation):

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    temperature=0  # Deterministic output
)

3. Handle Errors and Rate Limits

Implement proper error handling and retry logic:

import time
from openai import RateLimitError, APIError

def extract_with_retry(html_content, max_retries=3):
    """Extract data with exponential backoff retry"""
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o",
                response_format={"type": "json_object"},
                messages=[
                    {"role": "system", "content": "Extract data as JSON."},
                    {"role": "user", "content": html_content}
                ],
                temperature=0
            )
            return json.loads(response.choices[0].message.content)

        except RateLimitError:
            if attempt < max_retries - 1:
                wait_time = 2 ** attempt  # Exponential backoff
                print(f"Rate limit hit. Waiting {wait_time}s...")
                time.sleep(wait_time)
            else:
                raise

        except APIError as e:
            print(f"API error: {e}")
            if attempt < max_retries - 1:
                time.sleep(1)
            else:
                raise

4. Validate Extracted Data

Always validate the extracted data:

from jsonschema import validate, ValidationError

schema = {
    "type": "object",
    "properties": {
        "product_name": {"type": "string"},
        "price": {"type": "number", "minimum": 0},
        "rating": {"type": "number", "minimum": 0, "maximum": 5}
    },
    "required": ["product_name", "price"]
}

def extract_and_validate(html_content):
    """Extract data and validate against schema"""
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": "Extract product data as JSON."},
            {"role": "user", "content": html_content}
        ],
        temperature=0
    )

    data = json.loads(response.choices[0].message.content)

    try:
        validate(instance=data, schema=schema)
        return data
    except ValidationError as e:
        print(f"Validation error: {e}")
        return None

Cost Optimization Strategies

GPT API calls can become expensive at scale. Here are strategies to optimize costs:

1. Use Cheaper Models When Possible

For simple extraction tasks, a smaller model such as GPT-4o mini is often sufficient:

# For complex extraction
model = "gpt-4o"  # More expensive but more accurate

# For simple extraction
model = "gpt-4o-mini"  # Cheaper and faster

2. Cache Results

Cache extracted data to avoid re-processing the same content:

import hashlib
import pickle
from pathlib import Path

def get_cache_key(html_content):
    """Generate cache key from HTML content"""
    return hashlib.md5(html_content.encode()).hexdigest()

def extract_with_cache(html_content, cache_dir='./cache'):
    """Extract data with file-based caching"""
    Path(cache_dir).mkdir(exist_ok=True)
    cache_key = get_cache_key(html_content)
    cache_file = Path(cache_dir) / f"{cache_key}.pkl"

    # Check cache
    if cache_file.exists():
        with open(cache_file, 'rb') as f:
            return pickle.load(f)

    # Extract with GPT
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": "Extract data as JSON."},
            {"role": "user", "content": html_content}
        ],
        temperature=0
    )

    data = json.loads(response.choices[0].message.content)

    # Save to cache
    with open(cache_file, 'wb') as f:
        pickle.dump(data, f)

    return data

Integration with Web Scraping Workflows

When building a complete scraping solution, you can integrate GPT into your existing workflow. For example, when monitoring network requests in Puppeteer, you can capture API responses and use GPT to extract structured data from them.
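
The article's Python examples don't use Puppeteer, so here is a comparable sketch using Playwright for Python instead: it captures JSON API responses while the page loads, then asks GPT to structure them. The target URL and the "/api/" filter are assumptions for illustration.

from playwright.sync_api import sync_playwright
from openai import OpenAI
import json

client = OpenAI()
captured = []

def on_response(response):
    # Keep JSON responses from the site's backend API (the URL filter is an example)
    if "/api/" in response.url and "application/json" in response.headers.get("content-type", ""):
        try:
            captured.append(response.json())
        except Exception:
            pass  # Skip non-JSON or empty bodies

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.on("response", on_response)
    page.goto("https://example.com/products")
    page.wait_for_load_state("networkidle")
    browser.close()

# Let GPT pull structured records out of the captured API payloads
response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Extract product data as JSON."},
        {"role": "user", "content": json.dumps(captured)[:8000]},  # limit token usage
    ],
    temperature=0,
)
products = json.loads(response.choices[0].message.content)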

Conclusion

The GPT API provides a flexible and powerful approach to automated data extraction. By combining it with traditional web scraping tools, implementing proper error handling, and optimizing for cost, you can build robust data extraction pipelines that are resilient to website changes.

Key takeaways:

  • Use temperature=0 for consistent results
  • Use function calling for reliable structured output
  • Combine GPT with traditional scraping for dynamic content
  • Optimize token usage to control costs
  • Always validate extracted data
  • Implement caching and retry logic for production systems

For content behind a login, handle authentication first so you can fetch the protected pages before extracting data with GPT, as sketched below.
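
A rough sketch of that flow, reusing clean_html and extract_with_retry from earlier; the login endpoint, form fields, and URLs below are placeholders rather than a real site:

import requests

session = requests.Session()

# Hypothetical login form -- replace the endpoint and field names with the real ones
session.post(
    "https://example.com/login",
    data={"username": "user@example.com", "password": "your-password"},
)

# Fetch a page that is only visible to the authenticated session
protected_html = session.get("https://example.com/account/orders").text

# Hand the cleaned HTML to the GPT extraction helpers defined earlier
data = extract_with_retry(clean_html(protected_html))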

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
