What is the Deepseek LLM and how does it work for data extraction?

Deepseek is a state-of-the-art large language model (LLM) developed by DeepSeek AI and designed for complex reasoning tasks, including data extraction from unstructured content. Since its initial release in late 2024, Deepseek has drawn attention for strong reasoning performance at significantly lower prices than other leading LLMs.

Understanding Deepseek LLM Architecture

Deepseek LLM is built on a transformer-based architecture with several key innovations:

Model Variants

Deepseek offers multiple model variants optimized for different use cases:

  • Deepseek-V3: A general-purpose Mixture-of-Experts model with 671 billion total parameters (roughly 37 billion active per token), optimized for balanced performance across various tasks
  • Deepseek-R1: A reasoning-focused model that excels at multi-step problem solving and data extraction tasks
  • Deepseek-Coder: Specialized for code generation and technical content analysis

For data extraction tasks, Deepseek-R1 is particularly effective due to its enhanced reasoning capabilities, which help it understand complex data structures and extract information accurately.

How Deepseek Works for Data Extraction

Deepseek processes data extraction requests through a multi-step approach:

  1. Content Analysis: The model analyzes the input text or HTML structure
  2. Pattern Recognition: Identifies relevant data patterns and relationships
  3. Reasoning Chain: Builds a logical reasoning chain to extract specific information
  4. Structured Output: Returns data in the requested format (JSON, CSV, etc.)
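
As a concrete illustration of steps 1 through 4, given a small snippet and a field list, the model is expected to return schema-shaped JSON. This is a hypothetical example; exact output varies by model and prompt:

Input HTML:  <li><b>USB-C Cable</b> <i>now $9.99</i> (was $14.99)</li>
Fields:      {"name": "string", "current_price": "number", "original_price": "number"}

Expected output:
{"name": "USB-C Cable", "current_price": 9.99, "original_price": 14.99}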

Key Advantages for Web Scraping

  • Context Understanding: Handles large context windows (up to 64K tokens in Deepseek-V3)
  • Structured Output: Native support for JSON schema validation
  • Cost Efficiency: Significantly lower pricing compared to GPT-4 or Claude
  • Reasoning Capabilities: Excellent at handling complex extraction logic

Practical Implementation

Python Example with Deepseek API

Here's how to use Deepseek for extracting structured data from HTML:

import os
import requests
import json

def extract_data_with_deepseek(html_content, extraction_schema):
    """
    Extract structured data from HTML using Deepseek LLM
    """
    # Read the key from the environment rather than hard-coding it
    api_key = os.environ.get("DEEPSEEK_API_KEY", "your-deepseek-api-key")

    # Prepare the prompt
    prompt = f"""
    Extract the following information from this HTML content:
    {json.dumps(extraction_schema, indent=2)}

    HTML Content:
    {html_content}

    Return the extracted data as a valid JSON object matching the schema.
    """

    # API request
    response = requests.post(
        "https://api.deepseek.com/v1/chat/completions",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        },
        json={
            "model": "deepseek-reasoner",
            "messages": [
                {
                    "role": "system",
                    "content": "You are a data extraction assistant. Extract information accurately and return valid JSON."
                },
                {
                    "role": "user",
                    "content": prompt
                }
            ],
            "response_format": {"type": "json_object"},
            "temperature": 0.1
        },
        timeout=60,  # fail fast instead of hanging on a stalled request
    )
    response.raise_for_status()

    result = response.json()
    return json.loads(result['choices'][0]['message']['content'])

# Example usage
html = """
<div class="product">
    <h2>Wireless Headphones</h2>
    <span class="price">$79.99</span>
    <p class="description">Premium noise-cancelling headphones</p>
    <span class="rating">4.5 stars</span>
</div>
"""

schema = {
    "name": "string",
    "price": "number",
    "description": "string",
    "rating": "number"
}

extracted_data = extract_data_with_deepseek(html, schema)
print(json.dumps(extracted_data, indent=2))
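
Run against the sample HTML above, the call should return JSON along these lines (the phrasing of string fields can vary slightly between runs):

{
  "name": "Wireless Headphones",
  "price": 79.99,
  "description": "Premium noise-cancelling headphones",
  "rating": 4.5
}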

JavaScript/Node.js Example

const axios = require('axios');

async function extractDataWithDeepseek(htmlContent, extractionSchema) {
    const apiKey = process.env.DEEPSEEK_API_KEY || 'your-deepseek-api-key';

    const prompt = `
    Extract the following information from this HTML content:
    ${JSON.stringify(extractionSchema, null, 2)}

    HTML Content:
    ${htmlContent}

    Return the extracted data as a valid JSON object matching the schema.
    `;

    try {
        const response = await axios.post(
            'https://api.deepseek.com/v1/chat/completions',
            {
                model: 'deepseek-reasoner',
                messages: [
                    {
                        role: 'system',
                        content: 'You are a data extraction assistant. Extract information accurately and return valid JSON.'
                    },
                    {
                        role: 'user',
                        content: prompt
                    }
                ],
                response_format: { type: 'json_object' },
                temperature: 0.1
            },
            {
                headers: {
                    'Authorization': `Bearer ${apiKey}`,
                    'Content-Type': 'application/json'
                },
                timeout: 60000  // milliseconds; abort stalled requests
            }
        );

        return JSON.parse(response.data.choices[0].message.content);
    } catch (error) {
        console.error('Extraction error:', error.message);
        throw error;
    }
}

// Example usage
const html = `
<article>
    <h1>Breaking News: Tech Innovation</h1>
    <time datetime="2025-01-15">January 15, 2025</time>
    <span class="author">John Smith</span>
    <div class="content">Major breakthrough in AI technology...</div>
</article>
`;

const schema = {
    title: 'string',
    date: 'string',
    author: 'string',
    content: 'string'
};

extractDataWithDeepseek(html, schema)
    .then(data => console.log(JSON.stringify(data, null, 2)))
    .catch(error => console.error(error));

Advanced Data Extraction Techniques

Batch Processing Multiple Pages

When scraping multiple pages, you can combine Deepseek with traditional scraping tools for optimal efficiency:

from bs4 import BeautifulSoup
import requests
import asyncio
from typing import List, Dict

async def scrape_and_extract_batch(urls: List[str], schema: Dict):
    """
    Scrape multiple URLs and extract data using Deepseek
    """
    results = []

    for url in urls:
        # Run the blocking HTTP call in a thread so it doesn't stall the event loop
        response = await asyncio.to_thread(requests.get, url, timeout=30)
        response.raise_for_status()
        html = response.text

        # Use Deepseek for intelligent extraction (also a blocking call)
        extracted = await asyncio.to_thread(extract_data_with_deepseek, html, schema)
        extracted['source_url'] = url
        results.append(extracted)

        # Rate limiting between requests
        await asyncio.sleep(1)

    return results

# Example: Extract product data from multiple pages
urls = [
    'https://example.com/product/1',
    'https://example.com/product/2',
    'https://example.com/product/3'
]

product_schema = {
    "name": "string",
    "price": "number",
    "availability": "boolean",
    "specifications": "object"
}

products = asyncio.run(scrape_and_extract_batch(urls, product_schema))
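
The loop above is deliberately sequential. If your Deepseek rate limits allow it, a semaphore-bounded asyncio.gather variant can process several pages in parallel. A minimal sketch building on the earlier extract_data_with_deepseek function, where the concurrency cap of 3 is an arbitrary assumption to tune against your quota:

semaphore = asyncio.Semaphore(3)  # assumed cap; tune to your rate limits

async def extract_one(url: str, schema: Dict) -> Dict:
    async with semaphore:
        response = await asyncio.to_thread(requests.get, url, timeout=30)
        response.raise_for_status()
        extracted = await asyncio.to_thread(extract_data_with_deepseek, response.text, schema)
        extracted['source_url'] = url
        return extracted

async def scrape_concurrently(urls: List[str], schema: Dict) -> List[Dict]:
    # Fan out all pages at once; the semaphore keeps at most 3 in flight
    return await asyncio.gather(*(extract_one(url, schema) for url in urls))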

Handling Dynamic Content

For JavaScript-heavy websites, you can combine browser automation tools with Deepseek:

from playwright.sync_api import sync_playwright
from typing import Dict

def scrape_dynamic_page_with_deepseek(url: str, schema: Dict):
    """
    Scrape JavaScript-rendered content and extract with Deepseek
    """
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        # Navigate and wait for content
        page.goto(url)
        page.wait_for_load_state('networkidle')

        # Get rendered HTML
        html = page.content()
        browser.close()

        # Extract data using Deepseek
        return extract_data_with_deepseek(html, schema)

# Extract data from single-page application
spa_data = scrape_dynamic_page_with_deepseek(
    'https://example.com/spa-app',
    {
        "items": "array of objects",
        "total_count": "number",
        "pagination": "object"
    }
)
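
Note that 'networkidle' can be unreliable on pages that keep background connections open. Waiting for the specific element you need is often sturdier; in this sketch the '.product-list' selector is a hypothetical example:

page.goto(url)
page.wait_for_selector('.product-list', timeout=10000)  # wait for the data you actually need
html = page.content()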

Cost Optimization Strategies

Deepseek offers competitive pricing, but you can further optimize costs:

1. Pre-process HTML Content

Remove unnecessary content before sending to the API:

from bs4 import BeautifulSoup, Comment

def clean_html_for_extraction(html: str, target_selector: str = None):
    """
    Clean and reduce HTML content before LLM processing
    """
    soup = BeautifulSoup(html, 'html.parser')

    # Remove scripts, styles, and other non-content tags
    for element in soup(['script', 'style', 'meta', 'link', 'noscript']):
        element.decompose()

    # Strip HTML comments
    for comment in soup.find_all(string=lambda text: isinstance(text, Comment)):
        comment.extract()

    # Extract only relevant section if selector provided
    if target_selector:
        target = soup.select_one(target_selector)
        return str(target) if target else str(soup)

    return str(soup)

# Use cleaned HTML
cleaned_html = clean_html_for_extraction(html, '.main-content')
extracted = extract_data_with_deepseek(cleaned_html, schema)

2. Use Caching for Similar Pages

import hashlib
from typing import Dict

def get_content_hash(html: str) -> str:
    """Generate hash for HTML content"""
    return hashlib.md5(html.encode()).hexdigest()

cache = {}

def extract_with_cache(html: str, schema: Dict):
    """Cache extraction results for similar content"""
    content_hash = get_content_hash(html)

    if content_hash in cache:
        return cache[content_hash]

    result = extract_data_with_deepseek(html, schema)
    cache[content_hash] = result
    return result
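
The in-memory dict above disappears when the process exits. For long-running or restartable scrapers, a disk-backed cache built on Python's standard shelve module persists results across runs; a minimal sketch, where the cache filename is an arbitrary assumption:

import shelve

def extract_with_disk_cache(html: str, schema: Dict, cache_path: str = "extraction_cache"):
    """Persist extraction results across process restarts."""
    content_hash = get_content_hash(html)
    with shelve.open(cache_path) as disk_cache:
        if content_hash in disk_cache:
            return disk_cache[content_hash]  # cache hit: skip the API call
        result = extract_data_with_deepseek(html, schema)
        disk_cache[content_hash] = result
        return result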

Error Handling and Validation

Robust error handling is crucial when working with LLM-based extraction:

import json
import requests
from typing import Dict
from jsonschema import validate, ValidationError

def safe_extract_with_validation(html: str, schema: Dict, json_schema: Dict = None):
    """
    Extract data with comprehensive error handling and validation
    """
    try:
        # Extract data
        result = extract_data_with_deepseek(html, schema)

        # Validate against JSON schema if provided
        if json_schema:
            try:
                validate(instance=result, schema=json_schema)
            except ValidationError as e:
                print(f"Validation error: {e.message}")
                # A retry with more specific instructions could go here
                return None

        return result

    except requests.exceptions.RequestException as e:
        print(f"API request failed: {e}")
        return None
    except json.JSONDecodeError as e:
        print(f"Invalid JSON response: {e}")
        return None
    except Exception as e:
        print(f"Unexpected error: {e}")
        return None

# JSON Schema for validation
json_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number", "minimum": 0},
        "rating": {"type": "number", "minimum": 0, "maximum": 5}
    },
    "required": ["name", "price"]
}

validated_data = safe_extract_with_validation(html, schema, json_schema)
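
Since safe_extract_with_validation returns None on any failure, a thin retry wrapper is a natural next step; a rough sketch, where the three-attempt limit is an arbitrary choice:

def extract_with_retries(html: str, schema: Dict, json_schema: Dict, max_attempts: int = 3):
    """Retry extraction until a result passes validation or attempts run out."""
    for attempt in range(1, max_attempts + 1):
        result = safe_extract_with_validation(html, schema, json_schema)
        if result is not None:
            return result
        print(f"Attempt {attempt} failed, retrying...")
    raise RuntimeError("Extraction failed validation after all retries")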

Best Practices

1. Optimize Your Prompts

  • Be specific about the expected output format
  • Provide examples in your schema
  • Use low temperature (0.1-0.3) for consistent extraction
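
One way to apply the second point is to embed the expected format directly in the schema strings, giving the model a concrete target for each field. A small illustration (the field descriptions are hypothetical):

schema_with_examples = {
    "price": "number, e.g. 79.99 (strip currency symbols and thousands separators)",
    "rating": "number between 0 and 5, e.g. 4.5",
    "in_stock": "boolean, e.g. true when the page says 'In stock' or 'Available'"
}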

2. Monitor Token Usage

def estimate_token_count(text: str) -> int:
    """Rough estimation: 1 token ≈ 4 characters"""
    return len(text) // 4

html_tokens = estimate_token_count(html)
if html_tokens > 60000:  # leave headroom below Deepseek-V3's 64K context limit
    print("Warning: Content may exceed context window")
    # Consider chunking or cleaning HTML further
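
If the four-characters-per-token heuristic is too coarse, a general-purpose tokenizer gives a closer estimate. A sketch using tiktoken's cl100k_base encoding, which only approximates Deepseek's own tokenizer:

import tiktoken

def estimate_tokens_bpe(text: str) -> int:
    """Approximate token count with a generic BPE encoding."""
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text))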

3. Combine with Traditional Parsing

Use Deepseek for complex, unstructured content and traditional parsers (XPath, CSS selectors) for well-structured data:

from bs4 import BeautifulSoup
from typing import Dict

def hybrid_extraction(html: str, simple_fields: Dict, complex_fields: Dict):
    """
    Use CSS selectors for simple fields, LLM for complex ones
    """
    soup = BeautifulSoup(html, 'html.parser')
    result = {}

    # Extract simple fields with BeautifulSoup
    for field, selector in simple_fields.items():
        element = soup.select_one(selector)
        result[field] = element.text.strip() if element else None

    # Use Deepseek for complex extraction
    complex_data = extract_data_with_deepseek(html, complex_fields)
    result.update(complex_data)

    return result
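
A usage sketch, where the CSS selectors and complex fields are hypothetical examples:

result = hybrid_extraction(
    html,
    simple_fields={"name": "h2", "price": ".price"},  # cheap, deterministic parsing
    complex_fields={"sentiment": "string", "key_features": "array of strings"},  # needs reasoning
)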

Performance Comparison

Deepseek offers compelling advantages for data extraction:

| Feature | Deepseek-R1 | GPT-4 | Claude 3.5 |
|---------|-------------|-------|------------|
| Context Window | 64K tokens | 128K tokens | 200K tokens |
| Reasoning Quality | Excellent | Excellent | Excellent |
| JSON Mode | Yes | Yes | Yes |
| Cost per 1M tokens | ~$0.14-$0.28 | ~$2.50-$10 | ~$3-$15 |
| Speed | Fast | Medium | Fast |

Conclusion

Deepseek LLM provides a powerful and cost-effective solution for data extraction tasks, especially when dealing with unstructured or complex web content. Its reasoning capabilities make it particularly suitable for scenarios where traditional CSS selectors or XPath would struggle, such as extracting data from inconsistently formatted pages or handling dynamic content.

By combining Deepseek with traditional web scraping tools and following best practices for prompt engineering and error handling, you can build robust, scalable data extraction pipelines that handle a wide variety of web content efficiently.

For production use, consider implementing rate limiting, caching strategies, and fallback mechanisms to ensure reliable operation and cost control.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
