Scraping Glassdoor reviews without duplicates requires a crawler that can keep track of already visited pages or processed reviews. This can be achieved through various methods, such as maintaining a set of unique identifiers for reviews or using a database to persistently store and check for duplicates.
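For the persistent route, a minimal sketch using SQLite might look like the following (the database filename, table name, and helper function are illustrative, not part of any particular library):

```python
import sqlite3

# Persistent deduplication: review IDs survive across runs, so re-running
# the scraper later will not re-collect reviews it has already stored.
conn = sqlite3.connect('seen_reviews.db')
with conn:
    conn.execute('CREATE TABLE IF NOT EXISTS seen (review_id TEXT PRIMARY KEY)')

def is_new_review(review_id):
    """Record the ID and return True only the first time it is seen."""
    try:
        with conn:
            conn.execute('INSERT INTO seen (review_id) VALUES (?)', (review_id,))
        return True
    except sqlite3.IntegrityError:  # primary-key conflict: already seen
        return False
```

The examples below use the simpler in-memory set, which is enough when everything happens in a single run.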
Please note that web scraping may violate Glassdoor's Terms of Service. Always check the legal agreements of the website you are scraping and obtain permission if necessary.
Here's a high-level approach using Python with libraries like `requests`, `BeautifulSoup`, and `pandas` to ensure you don't collect duplicates:
Step 1: Setup
Install the required libraries (if you haven't already):
```bash
pip install requests beautifulsoup4 pandas
```
Step 2: Define a Function to Parse and Extract Reviews
```python
import requests
from bs4 import BeautifulSoup
import pandas as pd

def parse_reviews(soup, seen_reviews):
    """Extract reviews from one parsed page, skipping IDs we've already seen."""
    reviews = []
    # Note: the tag names, classes, and data attributes below are illustrative.
    # Inspect Glassdoor's actual markup and adjust the selectors accordingly.
    for review in soup.find_all('article', class_='review'):
        review_id = review.get('data-id')
        if review_id and review_id not in seen_reviews:
            seen_reviews.add(review_id)
            # Extract review details here, for example:
            title = review.find('h2', class_='title').text.strip()
            text = review.find('div', class_='reviewText').text.strip()
            reviews.append({
                'id': review_id,
                'title': title,
                'text': text,
                # Add more fields as needed
            })
    return reviews
```
Step 3: Iterate Over Pages and Collect Reviews
```python
num_pages = 10  # set this to the number of pages you want to scrape
seen_reviews = set()
all_reviews = []

for page in range(1, num_pages + 1):
    # Placeholder URL pattern; substitute the real reviews URL for your target company
    url = f"https://www.glassdoor.com/Reviews/company-reviews-P{page}.htm"
    response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
    if response.status_code == 200:
        soup = BeautifulSoup(response.content, 'html.parser')
        all_reviews.extend(parse_reviews(soup, seen_reviews))
    else:
        print(f"Failed to retrieve page: {page}")

# Convert the collected reviews to a DataFrame
df_reviews = pd.DataFrame(all_reviews)
```
In the code above, set `num_pages` to however many pages you want to scrape. The `seen_reviews` set stores unique review identifiers and is passed into `parse_reviews`, the function responsible for parsing the HTML and extracting the desired fields; any ID already in the set is skipped, which is what prevents duplicates.
Step 4: Save the Data
```python
df_reviews.to_csv('glassdoor_reviews.csv', index=False)
```
This code will save the collected reviews to a CSV file without including any duplicates.
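As an extra safeguard, you can also deduplicate at the DataFrame level before writing, in case any duplicate IDs slipped through:

```python
# Belt-and-braces: keep only the first occurrence of each review ID
df_reviews = df_reviews.drop_duplicates(subset='id')
```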
Additional Considerations
- Pagination and JavaScript Rendering: Glassdoor might require you to deal with pagination and JavaScript-rendered content. If that's the case, you'll need to use tools like Selenium or Puppeteer to interact with the webpage dynamically.
- Rate Limiting: Websites often have rate limits to prevent excessive access to their servers. Make sure to respect these limits and consider adding delays between your requests.
- Headers and Sessions: Maintain a consistent session and set proper headers to mimic a real user's browser session. This can help avoid detection and potential blocking.
- Error Handling: Implement robust error handling to deal with network issues, unexpected HTML structure changes, or website updates that might affect your scraper's functionality. A sketch combining this with the rate-limiting and session advice follows below.
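As an illustration, the last three points might look like this in the Python version; the header values, timeout, delay, and retry count are arbitrary placeholders:

```python
import time
import requests

# One session for all requests: cookies persist and headers stay consistent.
session = requests.Session()
session.headers.update({
    'User-Agent': 'Mozilla/5.0',           # mimic a regular browser
    'Accept-Language': 'en-US,en;q=0.9',
})

def fetch_page(url, delay=2.0, retries=3):
    """Fetch a URL politely, backing off and retrying on failure."""
    for attempt in range(retries):
        try:
            response = session.get(url, timeout=10)
            if response.status_code == 200:
                return response
            print(f"Got status {response.status_code} for {url}")
        except requests.RequestException as exc:
            print(f"Request failed ({exc}); retrying...")
        time.sleep(delay * (attempt + 1))  # increasing delay between attempts
    return None
```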
JavaScript Example with Puppeteer (Node.js)
Here's a simple example of how you might approach this with Puppeteer in JavaScript:
```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const numPages = 10; // set this to the number of pages you want to scrape
  const seenReviews = new Set();
  const allReviews = [];

  for (let pageNum = 1; pageNum <= numPages; pageNum++) {
    // Placeholder URL pattern; substitute the real reviews URL for your target company
    await page.goto(`https://www.glassdoor.com/Reviews/company-reviews-P${pageNum}.htm`);

    // page.evaluate runs in the browser context, which cannot see Node-side
    // variables like seenReviews, so extract everything here and
    // deduplicate back in Node below.
    const reviews = await page.evaluate(() => {
      const reviewsOnPage = [];
      // Selectors are illustrative; inspect the live markup and adjust.
      document.querySelectorAll('article.review').forEach(review => {
        reviewsOnPage.push({
          id: review.dataset.id,
          title: review.querySelector('h2.title')?.innerText.trim(),
          text: review.querySelector('div.reviewText')?.innerText.trim(),
        });
      });
      return reviewsOnPage;
    });

    for (const review of reviews) {
      if (review.id && !seenReviews.has(review.id)) {
        seenReviews.add(review.id);
        allReviews.push(review);
      }
    }
  }

  await browser.close();
  // Further processing and saving of allReviews
})();
```
This JavaScript snippet uses Puppeteer to navigate Glassdoor and collect reviews; set `numPages` to the number of pages you want to scrape. Similar to the Python example, it uses a `Set` to track already seen reviews and avoid duplicates. Note that the deduplication happens back in Node rather than inside `page.evaluate`, because code passed to `page.evaluate` runs in the browser and cannot access Node-side variables.
Remember to respect the website's `robots.txt` file and understand that scraping can be legally and ethically controversial. Use these examples as a learning tool and make sure you have permission to scrape the data you're after.
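If you want to check `robots.txt` programmatically, Python's standard library ships a parser; here's a minimal sketch (the user-agent string is a placeholder):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url('https://www.glassdoor.com/robots.txt')
rp.read()

# True only if this user agent is allowed to fetch the given path
print(rp.can_fetch('Mozilla/5.0', 'https://www.glassdoor.com/Reviews/'))
```

If `can_fetch` returns `False`, that path is off-limits for your user agent and you should not scrape it.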