Web scraping is a technique for extracting large amounts of data from websites. Selenium is a powerful tool for controlling a web browser programmatically. It works with all major browsers, runs on all major operating systems, and its scripts can be written in a variety of languages, e.g. Python, Java, C#, and more.
Here's an example of how to use Selenium to scrape data from a complex website.
Prerequisites
You need to have Selenium, BeautifulSoup, and pandas installed. If they aren't already, use the command below to install them:
pip install selenium beautifulsoup4 pandas
You also need the correct WebDriver for the browser you want to automate (Chrome, Firefox, etc.); the WebDriver is the component Selenium uses to launch and control the browser. Recent Selenium releases (4.6 and later) include Selenium Manager, which can download a matching driver for you automatically.
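If you do need to point Selenium at a specific driver binary yourself, a minimal sketch looks like the following (the driver path is a placeholder; adjust it for your machine):

from selenium import webdriver
from selenium.webdriver.firefox.service import Service

# The path below is only an example; use wherever geckodriver lives on your system
service = Service("/usr/local/bin/geckodriver")
driver = webdriver.Firefox(service=service)
driver.quit()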
Python Example
Let's say you're trying to scrape data from a website that requires you to navigate through some pages, fill out forms, or deal with AJAX content. Here's an example of how you might do that using Python, Selenium, and Beautiful Soup:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
import pandas as pd

# Create a new instance of the Firefox driver
driver = webdriver.Firefox()

# Go to the page that we want to scrape
driver.get("https://www.example.com")

# Find the element that we want to interact with
# (find_element_by_name was removed in Selenium 4, so we use find_element with By)
element = driver.find_element(By.NAME, "q")

# Type into the element
element.send_keys("web scraping with python")

# Submit the form
element.submit()

# Wait explicitly for the results to appear; implicitly_wait only sets a
# default timeout for element lookups, so an explicit wait is more precise here
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, "data"))
)

# Parse the HTML of the page with Beautiful Soup
soup = BeautifulSoup(driver.page_source, "html.parser")

# Find the relevant data and save it
data = soup.find_all("div", class_="data")
data_list = [d.text for d in data]

# Convert the data into a DataFrame
df = pd.DataFrame(data_list, columns=["Data"])

# Close the browser
driver.quit()
In this code, we start by creating a new Selenium WebDriver instance and navigating to the page we want to scrape. We then locate the search box, type into it, and submit the form. Once the results have loaded, we parse the rendered HTML with Beautiful Soup, extract the data we need, and load it into a pandas DataFrame. Finally, we close the browser.
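From there you would typically persist the results; a single pandas call writes them to disk (the filename is just an example):

# "scraped_data.csv" is an arbitrary example filename
df.to_csv("scraped_data.csv", index=False)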
Remember, Selenium can be quite slow compared to requests-based scraping since it has to load the entire webpage (including all images, CSS, JavaScript, etc.), so it's best used when requests-based scraping isn't an option.
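For comparison, here is what the faster requests-based approach looks like, reusing the same placeholder URL and "data" class as above. It only works when the content is present in the static HTML rather than rendered by JavaScript:

import requests
from bs4 import BeautifulSoup

# Fetch the raw HTML directly; no browser, no JavaScript execution
response = requests.get("https://www.example.com")
soup = BeautifulSoup(response.text, "html.parser")

# Same extraction step as before; this yields an empty list if the
# content is injected by JavaScript after page load
data_list = [d.text for d in soup.find_all("div", class_="data")]
print(data_list)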
JavaScript Example
In JavaScript, you can use libraries like Puppeteer or WebdriverIO to automate a browser in much the same way as Selenium. Here's an example using Puppeteer:
const puppeteer = require('puppeteer');

async function scrape() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.example.com');

  // Interact with the page
  await page.type('#search', 'web scraping with javascript');

  // Click and wait for the resulting navigation together, so the
  // navigation isn't missed if it begins before waitForNavigation is called
  await Promise.all([
    page.waitForNavigation(),
    page.click('#submit'),
  ]);

  // Extract the data
  const data = await page.evaluate(() => {
    const list = document.querySelectorAll('.data');
    return Array.from(list).map(d => d.textContent);
  });

  console.log(data);
  await browser.close();
}

scrape();
This script does essentially the same thing as the Python script above. It navigates to a page, interacts with it, waits for the new page to load, then extracts and logs the data.
Remember, when using Selenium or similar tools, always respect the terms of service of the website you are scraping.
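A related courtesy is to check the site's robots.txt before scraping. It doesn't carry the weight of a terms-of-service agreement, but Python's standard library makes the check trivial (the URLs and user agent below are placeholders):

from urllib import robotparser

# Point the parser at the site's robots.txt; the URL is a placeholder
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

# "*" matches any user agent; pass your own identifier in real use
if rp.can_fetch("*", "https://www.example.com/search"):
    print("Allowed to fetch this URL")
else:
    print("Disallowed by robots.txt")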