How to Use XPath Functions like starts-with() and contains() for Text Matching

XPath text matching functions are essential tools for web scraping when dealing with dynamic content, partial text matches, or when you need more flexible element selection criteria. The starts-with() and contains() functions are among the most powerful XPath functions for matching text patterns in HTML elements.

Understanding XPath Text Matching Functions

XPath provides several built-in functions for text manipulation and matching. These functions allow you to create more robust selectors that can handle dynamic content, whitespace variations, and partial text matches that are common in modern web applications.

The contains() Function

The contains() function checks if a string contains a specific substring. It's particularly useful when you need to match elements with partial text content or when dealing with dynamic class names.

Syntax:

contains(string, substring)

Basic Example:

//div[contains(text(), 'Welcome')]

This expression selects div elements whose direct text content contains the substring "Welcome". Be aware that text() only matches text nodes that are immediate children of the div, so text inside nested child elements is not considered.

The starts-with() Function

The starts-with() function checks if a string begins with a specific substring. This is especially useful for matching elements with dynamic suffixes or when you want to target elements based on the beginning of their text content.

Syntax:

starts-with(string, substring)

Basic Example:

//button[starts-with(text(), 'Submit')]

This expression selects all button elements whose text starts with "Submit".

Practical Implementation Examples

Python with Selenium

Here's how to use these XPath functions with Python and Selenium:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")

# Using contains() to find elements with partial text
elements_with_welcome = driver.find_elements(
    By.XPATH, 
    "//div[contains(text(), 'Welcome')]"
)

# Using starts-with() to find buttons that start with specific text
submit_buttons = driver.find_elements(
    By.XPATH, 
    "//button[starts-with(text(), 'Submit')]"
)

# Combining with attribute matching
dynamic_links = driver.find_elements(
    By.XPATH, 
    "//a[contains(@class, 'nav-item') and contains(text(), 'Home')]"
)

# Using with WebDriverWait
wait = WebDriverWait(driver, 10)
element = wait.until(
    EC.presence_of_element_located(
        (By.XPATH, "//span[starts-with(text(), 'Loading')]")
    )
)

driver.quit()

JavaScript with Puppeteer

When working with Puppeteer for web scraping, XPath functions become particularly valuable for handling dynamic content and complex DOM structures:

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('https://example.com');

    // Using contains() with Puppeteer
    // Note: page.$x() was deprecated and removed in recent Puppeteer versions;
    // there, use the xpath/ selector prefix instead, e.g.
    // await page.$$("xpath///div[contains(text(), 'Welcome')]")
    const welcomeElements = await page.$x("//div[contains(text(), 'Welcome')]");

    // Using starts-with() for button selection
    const submitButtons = await page.$x("//button[starts-with(text(), 'Submit')]");

    // Extract text from matched elements
    for (const element of welcomeElements) {
        const text = await page.evaluate(el => el.textContent, element);
        console.log('Found element with text:', text);
    }

    // Click on first matching button
    if (submitButtons.length > 0) {
        await submitButtons[0].click();
    }

    await browser.close();
})();

Advanced Text Matching Techniques

Case-Insensitive Matching

XPath doesn't have built-in case-insensitive functions, but you can achieve this using the translate() function:

//div[contains(translate(text(), 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'welcome')]
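Since the translate() call is verbose and error-prone to type by hand, it can be generated programmatically. The helper below is a sketch of that idea (the ci_contains name is our own, not a library function), covering ASCII letters only:

```python
UPPER = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
LOWER = "abcdefghijklmnopqrstuvwxyz"

def ci_contains(expr: str, needle: str) -> str:
    """Build a case-insensitive contains() predicate (ASCII letters only)."""
    return (
        f"contains(translate({expr}, '{UPPER}', '{LOWER}'), "
        f"'{needle.lower()}')"
    )

# Compose a full selector from the generated predicate
xpath = f"//div[{ci_contains('text()', 'Welcome')}]"
print(xpath)
```

The resulting string can be passed straight to driver.find_elements(By.XPATH, xpath).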

Matching Multiple Conditions

You can combine text matching functions with logical operators:

//div[contains(text(), 'Product') and starts-with(@class, 'item')]

Handling Whitespace

When dealing with elements that might have leading or trailing whitespace, use the normalize-space() function:

//p[contains(normalize-space(text()), 'Important')]
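To see why normalize-space() matters without spinning up a browser, here is a minimal sketch using lxml (a third-party package, assumed installed) against an inline HTML snippet with irregular internal whitespace:

```python
from lxml import html

doc = html.fromstring(
    "<body><p>  Important    notice  </p><p>Other text</p></body>"
)

# The raw text node has runs of spaces between the words, so a plain
# contains() with a single-spaced needle misses it...
raw = doc.xpath("//p[contains(text(), 'Important notice')]")
print(len(raw))  # 0

# ...while normalize-space() trims the ends and collapses internal
# whitespace to single spaces before the comparison is made.
norm = doc.xpath("//p[contains(normalize-space(text()), 'Important notice')]")
print(len(norm))  # 1
```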

Matching Descendant Text

To match text anywhere inside an element, including its nested children, compare against the element's string value with a bare dot:

//div[contains(., 'Search term')]

Avoid contains(.//text(), 'Search term') for this purpose: in XPath 1.0, passing a node-set to contains() converts only the first text node to a string, so matches in later descendant text nodes are silently missed.

Real-World Use Cases

E-commerce Product Matching

# Finding products that mention a feature anywhere in their markup
products = driver.find_elements(
    By.XPATH, 
    "//div[@class='product'][contains(., 'Free Shipping')]"
)

# Selecting price elements that start with currency symbols
prices = driver.find_elements(
    By.XPATH, 
    "//span[starts-with(text(), '$') or starts-with(text(), '€')]"
)

Form Field Selection

// Selecting form fields based on label text
const emailField = await page.$x("//input[starts-with(@placeholder, 'Enter your email')]");
const submitButton = await page.$x("//button[contains(text(), 'Sign Up')]");

Navigation Menu Handling

# Finding navigation items that contain specific keywords
nav_items = driver.find_elements(
    By.XPATH, 
    "//nav//a[contains(text(), 'About') or contains(text(), 'Contact')]"
)

Performance Considerations

Optimizing XPath Queries

When using text matching functions, consider these performance tips:

  1. Be specific with element types: Instead of using //*[contains(text(), 'search')], use //div[contains(text(), 'search')]

  2. Limit search scope: anchor queries under a known container instead of searching the whole document: //main//div[contains(text(), 'content')]

  3. Combine with other attributes: //button[@type='submit' and starts-with(text(), 'Submit')]

Handling Dynamic Content

For applications that load content dynamically, combine XPath functions with explicit wait strategies so the target text has actually rendered before you query for it:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait for element with partial text to appear
wait = WebDriverWait(driver, 10)
element = wait.until(
    EC.presence_of_element_located(
        (By.XPATH, "//div[contains(@class, 'status') and starts-with(text(), 'Success')]")
    )
)

Common Pitfalls and Solutions

Text vs. String Value Matching

Remember the difference between text() and string value:

# Matches only the element's direct text nodes
//div[contains(text(), 'Hello')]

# Matches the element's string value, which includes all descendant text
//div[contains(., 'Hello')]

Handling Special Characters

When the target text itself contains a quote character, wrap the XPath string literal in the opposite quote style, escaping at the host-language level as needed:

# Apostrophe in the target text: use a double-quoted XPath literal
elements = driver.find_elements(
    By.XPATH, 
    "//div[contains(text(), \"It's working\")]"
)

# The same idea written from a single-quoted Python string
elements = driver.find_elements(
    By.XPATH, 
    '//div[contains(text(), "User\'s Profile")]'
)

If the text contains both single and double quotes, no single literal style works, because XPath 1.0 has no escape mechanism inside string literals; the string must be split up with concat().

Debugging XPath Expressions

Use browser developer tools to test XPath expressions:

// In browser console
$x("//div[contains(text(), 'Welcome')]")

Integration with Web Scraping APIs

When using web scraping services, XPath functions can be particularly powerful. Many APIs support XPath selectors for element targeting:

# Example with curl using XPath for element selection
curl -X GET "https://api.webscraping.ai/html" \
  -H "Api-Key: YOUR_API_KEY" \
  -G \
  --data-urlencode "url=https://example.com" \
  --data-urlencode "selector=//div[contains(text(), 'Product')]"

Conclusion

XPath text matching functions like starts-with() and contains() are invaluable tools for creating robust web scraping solutions. They provide the flexibility needed to handle dynamic content, partial matches, and complex text patterns that are common in modern web applications.

By mastering these functions and combining them with proper wait strategies and error handling, you can create more reliable scrapers that can adapt to changes in website structure and content. Whether you're using Selenium with Python, Puppeteer with JavaScript, or other web automation tools, these XPath functions will help you create more precise and maintainable element selectors.

Remember to always test your XPath expressions thoroughly and consider performance implications when scraping large-scale websites. With practice, these text matching functions will become an essential part of your web scraping toolkit.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
