How do I use the n8n web scraping template to get started?

n8n web scraping templates provide pre-configured workflows that simplify the process of extracting data from websites. These templates combine visual workflow automation with powerful scraping capabilities, making it easy for developers to collect, process, and store web data without building everything from scratch.

Understanding n8n Web Scraping Templates

n8n offers several built-in templates for web scraping that leverage different approaches:

  • HTTP Request + HTML Extract nodes: For simple static websites
  • Puppeteer nodes: For JavaScript-heavy dynamic websites
  • API-based scraping: Using dedicated scraping services
  • Scheduled scraping: Automated periodic data collection

Templates serve as starting points that you can customize based on your specific requirements, data structure, and target websites.

Setting Up Your First n8n Web Scraping Template

Step 1: Install and Configure n8n

First, ensure you have n8n installed on your system. You can run n8n using Docker or npm:

# Using npx (easiest for beginners)
npx n8n

# Using Docker
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n

# Using npm (for production)
npm install n8n -g
n8n start

Once started, access the n8n interface at http://localhost:5678.

Step 2: Import a Web Scraping Template

n8n provides multiple ways to access templates:

  1. From the n8n interface: Click "Templates" in the left sidebar and search for "web scraping"
  2. From n8n.io website: Browse templates at n8n.io/workflows and import via JSON
  3. From the community: Access shared workflows from the n8n community forum

To import a template:

# Download a template JSON file
curl -o scraping-template.json https://n8n.io/workflows/[template-id].json

# Import through the UI or command line
n8n import:workflow --input=scraping-template.json

Step 3: Basic Template Structure

A typical n8n web scraping template consists of these core nodes:

Start → HTTP Request → HTML Extract → Data Processing → Storage

Here's what each component does:

  • Trigger Node: Schedules when the workflow runs (manual, cron, webhook)
  • HTTP Request Node: Fetches the web page content
  • HTML Extract Node: Parses HTML and extracts specific data
  • Data Processing Nodes: Cleans, transforms, and formats extracted data
  • Storage Node: Saves data to databases, spreadsheets, or files
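
Put together, the chain above might look roughly like the simplified workflow sketch below. This is an illustrative outline only, not a literal n8n export (real exports also include node positions, parameters, and versions), and the node names are placeholders:

// Simplified workflow sketch (illustrative, not an exact n8n export)
{
  "name": "Basic Web Scraping Workflow",
  "nodes": [
    { "name": "Schedule Trigger", "type": "n8n-nodes-base.scheduleTrigger" },
    { "name": "HTTP Request", "type": "n8n-nodes-base.httpRequest" },
    { "name": "HTML Extract", "type": "n8n-nodes-base.html" },
    { "name": "Clean Data", "type": "n8n-nodes-base.code" },
    { "name": "Google Sheets", "type": "n8n-nodes-base.googleSheets" }
  ],
  "connections": {
    "Schedule Trigger": { "main": [[{ "node": "HTTP Request" }]] },
    "HTTP Request": { "main": [[{ "node": "HTML Extract" }]] },
    "HTML Extract": { "main": [[{ "node": "Clean Data" }]] },
    "Clean Data": { "main": [[{ "node": "Google Sheets" }]] }
  }
}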

Building a Custom Web Scraping Workflow

Example 1: Simple Static Website Scraping

Here's a basic template for scraping product information from an e-commerce site:

// HTTP Request Node Configuration
{
  "method": "GET",
  "url": "https://example.com/products",
  "options": {
    "headers": {
      "User-Agent": "Mozilla/5.0 (compatible; n8n-scraper/1.0)"
    }
  }
}

// HTML Extract Node Configuration
{
  "extractionValues": {
    "title": {
      "cssSelector": "h1.product-title",
      "returnValue": "text"
    },
    "price": {
      "cssSelector": ".product-price",
      "returnValue": "text"
    },
    "image": {
      "cssSelector": "img.product-image",
      "returnValue": "attribute",
      "attribute": "src"
    }
  }
}

Example 2: Dynamic Website with Puppeteer

For JavaScript-rendered pages, you'll need a headless browser. The community Puppeteer node lets you render the page before extracting data, much as you would when handling dynamic content in any browser automation setup:

// Puppeteer Node Configuration
{
  "operation": "getPageContent",
  "url": "https://example.com/dynamic-content",
  "waitUntil": "networkidle2",
  "evaluate": {
    "code": `() => {
      const products = [];
      document.querySelectorAll('.product-card').forEach(card => {
        products.push({
          title: card.querySelector('h2').innerText,
          price: card.querySelector('.price').innerText,
          availability: card.querySelector('.stock').innerText
        });
      });
      return products;
    }`
  }
}

When working with complex pages, you may need to handle page navigation and wait for specific elements before extracting data.
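
As a concrete sketch of that pattern in plain Puppeteer code (the kind of script you might run through the community Puppeteer node's custom-script operation or an external headless browser; the selectors here are hypothetical), the usual flow is navigate, wait for a selector, optionally interact, then extract:

// Sketch: wait for dynamic content before extracting (plain Puppeteer API)
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Navigate and wait until network activity settles
  await page.goto('https://example.com/products', { waitUntil: 'networkidle2' });

  // Wait for product cards rendered by client-side JavaScript
  await page.waitForSelector('.product-card', { timeout: 15000 });

  // Optionally trigger lazy-loaded content, e.g. a "Load more" button
  // await page.click('.load-more');

  // Extract data once the elements exist
  const products = await page.$$eval('.product-card', cards =>
    cards.map(card => ({
      title: card.querySelector('h2')?.innerText ?? '',
      price: card.querySelector('.price')?.innerText ?? ''
    }))
  );

  console.log(products);
  await browser.close();
})();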

Using API-Based Scraping Services in n8n

For production-grade scraping, consider integrating dedicated scraping APIs into your n8n templates. Here's how to configure an HTTP Request node to use a scraping service:

// HTTP Request Node for API-based Scraping
{
  "method": "GET",
  "url": "https://api.webscraping.ai/html",
  "qs": {
    "api_key": "={{$credentials.apiKey}}",
    "url": "={{$node['Start'].json['targetUrl']}}",
    "js": "true",
    "proxy": "datacenter"
  },
  "options": {
    "timeout": 30000,
    "response": {
      "response": {
        "fullResponse": true
      }
    }
  }
}

This approach offers several advantages:

  • JavaScript rendering: Automatically handles dynamic content
  • Proxy rotation: Built-in IP rotation to avoid blocking
  • Error handling: Retry logic and fallback mechanisms
  • Scalability: Handle high-volume scraping without infrastructure
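
Because the example above sets fullResponse to true, the node's output includes the status code and headers alongside the HTML body. A small follow-up Code node (a sketch, assuming the statusCode and body field names of a full HTTP response) can verify the status before handing the HTML to an HTML Extract node:

// Code Node - Validate the scraping API response
return $input.all().map(item => {
  const response = item.json;

  // With fullResponse enabled, status code and body are returned separately
  if (response.statusCode !== 200) {
    throw new Error(`Scraping API returned status ${response.statusCode}`);
  }

  return {
    json: {
      html: response.body, // raw HTML for the HTML Extract node
      fetchedAt: new Date().toISOString()
    }
  };
});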

Processing and Storing Scraped Data

Data Transformation Example

After extraction, use n8n's Function node to clean and transform data:

// Function Node - Data Cleaning
const items = $input.all();

return items.map(item => {
  const data = item.json;

  return {
    json: {
      title: data.title.trim(),
      price: parseFloat(data.price.replace(/[^0-9.]/g, '')),
      currency: data.price.match(/[^0-9.\s]+/)?.[0] || 'USD',
      scrapedAt: new Date().toISOString(),
      url: data.url,
      inStock: data.availability.toLowerCase().includes('in stock')
    }
  };
});

Storing Results

Common storage options in n8n templates:

PostgreSQL Database:

// PostgreSQL Node Configuration
{
  "operation": "insert",
  "table": "scraped_products",
  "columns": "title,price,currency,scraped_at,url,in_stock",
  "returnFields": "*"
}
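
Note that the transformation example earlier produces camelCase keys while this table uses snake_case columns, so a small Code node can rename the fields before the insert (a sketch matching the column list above):

// Code Node - Map fields to database column names
return $input.all().map(item => {
  const d = item.json;
  return {
    json: {
      title: d.title,
      price: d.price,
      currency: d.currency,
      scraped_at: d.scrapedAt,
      url: d.url,
      in_stock: d.inStock
    }
  };
});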

Google Sheets:

// Google Sheets Node Configuration
{
  "operation": "append",
  "sheetId": "={{$credentials.sheetId}}",
  "range": "Sheet1!A:F",
  "options": {
    "valueInputMode": "USER_ENTERED"
  }
}

JSON File:

// Write Binary File Node
{
  "operation": "write",
  "fileName": "scraped-data-{{$now.format('YYYY-MM-DD')}}.json",
  "data": "={{JSON.stringify($json, null, 2)}}"
}
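
The Write Binary File node expects a binary property rather than plain JSON, so the items usually need converting first, either with the Convert to File node or a Code node along the lines of this sketch (which assembles the binary object by hand and assumes Buffer is available in the Code node sandbox):

// Code Node - Convert scraped items into a binary JSON file
const items = $input.all().map(item => item.json);
const jsonString = JSON.stringify(items, null, 2);

return [
  {
    json: { itemCount: items.length },
    binary: {
      // "data" is the default binary property name the Write Binary File node reads
      data: {
        data: Buffer.from(jsonString).toString('base64'),
        mimeType: 'application/json',
        fileName: 'scraped-data.json'
      }
    }
  }
];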

Handling Common Challenges

Rate Limiting and Delays

Add delay nodes between requests to avoid overwhelming target servers:

// Function Node - Random Delay
const minDelay = 1000; // 1 second
const maxDelay = 3000; // 3 seconds
const delay = Math.floor(Math.random() * (maxDelay - minDelay + 1)) + minDelay;

return new Promise(resolve => {
  setTimeout(() => {
    resolve($input.all());
  }, delay);
});

Error Handling

Implement robust error handling in your templates:

// Error handling settings (errorWorkflow is set at the workflow level; the rest are per-node options)
{
  "errorWorkflow": "error-notification-workflow",
  "continueOnFail": true,
  "retryOnFail": true,
  "maxTries": 3,
  "waitBetweenTries": 5000
}
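
Those settings cover retries at the node level. Inside a Code node you can additionally guard individual items with try/catch so one malformed record doesn't fail the whole run (a sketch with hypothetical field names):

// Code Node - Per-item error handling
const results = [];

for (const item of $input.all()) {
  try {
    const data = item.json;
    results.push({
      json: {
        title: data.title.trim(),
        price: parseFloat(data.price.replace(/[^0-9.]/g, '')),
        error: null
      }
    });
  } catch (error) {
    // Keep the failed item so it can be inspected or routed to an error branch
    results.push({
      json: { raw: item.json, error: error.message }
    });
  }
}

return results;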

Pagination Support

Handle multi-page scraping with loops:

// Function Node - Pagination Logic
const currentPage = $node['Loop'].json.page || 1;
const maxPages = 10;
const baseUrl = "https://example.com/products";

if (currentPage <= maxPages) {
  return {
    json: {
      url: `${baseUrl}?page=${currentPage}`,
      page: currentPage + 1,
      continue: true
    }
  };
} else {
  return {
    json: {
      continue: false
    }
  };
}

Scheduling Your Scraping Workflow

n8n templates can be scheduled to run automatically:

// Cron Node Configuration - cron expression mode
{
  "mode": "cronExpression",
  "cronExpression": "0 */6 * * *" // Every 6 hours
}

// Alternative: simple interval mode
{
  "triggerTimes": {
    "mode": "everyX",
    "value": 6,
    "unit": "hours"
  }
}

Common scheduling patterns:

  • 0 0 * * * - Daily at midnight
  • 0 */4 * * * - Every 4 hours
  • 0 9 * * 1-5 - Weekdays at 9 AM
  • */30 * * * * - Every 30 minutes

Best Practices for n8n Web Scraping Templates

  1. Respect robots.txt: Always check and follow website scraping policies
  2. Use appropriate delays: Add reasonable delays between requests
  3. Implement error handling: Use try-catch blocks and error workflows
  4. Monitor your workflows: Set up notifications for failures
  5. Store credentials securely: Use n8n's credential system, never hardcode API keys
  6. Test incrementally: Start with small data sets before scaling up
  7. Document your workflows: Add note nodes explaining complex logic
  8. Version control: Export and backup your workflows regularly (see the export commands below)
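
n8n's CLI makes the export step easy to script and schedule:

# Export all workflows into a single JSON file
n8n export:workflow --all --output=workflows-backup.json

# Or export each workflow to its own file in a directory
n8n export:workflow --backup --output=backups/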

Advanced Template Customization

Conditional Scraping

Use IF nodes to create conditional logic:

// IF Node - Check Data Quality
{
  "conditions": {
    "boolean": [],
    "number": [
      {
        "value1": "={{$json['price']}}",
        "operation": "larger",
        "value2": 0
      }
    ],
    "string": [
      {
        "value1": "={{$json['title']}}",
        "operation": "notEmpty"
      }
    ]
  },
  "combineOperation": "all"
}

Webhook Triggers

Create on-demand scraping via webhooks:

// Webhook Node Configuration
{
  "path": "scrape-product",
  "method": "POST",
  "responseMode": "lastNode",
  "options": {
    "rawBody": false
  }
}

Trigger the webhook:

curl -X POST https://your-n8n-instance.com/webhook/scrape-product \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/product/123"}'

Conclusion

n8n web scraping templates provide a powerful foundation for building automated data extraction workflows. By starting with a template and customizing it to your needs, you can quickly deploy production-ready scraping solutions without extensive coding. Remember to follow ethical scraping practices, implement proper error handling, and regularly monitor your workflows for optimal performance.

Whether you're scraping simple static pages or complex JavaScript applications, n8n's visual workflow editor combined with powerful nodes like Puppeteer and HTTP Request makes web scraping accessible to developers of all skill levels. Start with a basic template, experiment with different configurations, and gradually build more sophisticated workflows as your requirements evolve.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
