When web scraping, you will encounter a variety of HTTP status codes that indicate the success or failure of your HTTP requests. Here are some of the most common HTTP status codes that you might come across while scraping web pages:
1xx Informational
These are interim responses, such as 100 Continue, and you will rarely encounter them while scraping.
2xx Success
- 200 OK: The request has succeeded, and the server has returned the requested data. This is the ideal status code you want to see when scraping.
- 201 Created: Indicates that the request has been fulfilled and has resulted in one or more new resources being created. This is less common in web scraping since you're typically not posting data to create resources.
- 204 No Content: The server successfully processed the request, but is not returning any content. This might occur when you make a request to a web service that doesn't return data in the response body.
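In practice, a scraper mostly just needs to confirm the status code falls in the 2xx range before parsing the body. A minimal sketch using Python's requests library (the same library the fuller examples below use):

```python
import requests

response = requests.get('http://example.com')

# Any code in the 200-299 range means the request succeeded.
if 200 <= response.status_code < 300:
    if response.status_code == 204:
        print('Success, but the server returned no body.')
    else:
        print(f'Success: received {len(response.content)} bytes.')
else:
    print(f'Request failed with status {response.status_code}')
```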
3xx Redirection
- 301 Moved Permanently: The requested resource has been assigned a new permanent URI, and any future references to this resource should use one of the returned URIs. Your scraping tool should handle this and follow the redirection if it's set to do so.
- 302 Found: This tells the client that the resource is temporarily located at another URI. Scraping tools often follow these redirects automatically.
- 303 See Other: The response to the request can be found under another URI using a GET method. When received in response to a POST (or PUT/DELETE), the client should assume the server has received the submitted data and should fetch the new URI with a separate GET request.
- 304 Not Modified: This is used for caching purposes. It tells the client that the response has not been modified, so the client can continue to use the same cached version of the response.
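Most HTTP client libraries handle these redirects for you. With requests, for instance, you can either follow them and inspect the hops afterwards, or disable the behavior entirely to examine the raw redirect; a brief sketch:

```python
import requests

# requests follows 301/302/303 redirects automatically by default.
# Turning that off lets you inspect the redirect response yourself.
response = requests.get('http://example.com', allow_redirects=False)
if response.is_redirect:
    print(f'{response.status_code} redirect to {response.headers["Location"]}')

# With redirects enabled, response.history records each intermediate hop,
# which helps you spot 301s so stored URLs can be updated.
response = requests.get('http://example.com')
for hop in response.history:
    print(f'{hop.status_code}: {hop.url}')
```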
4xx Client Error
- 400 Bad Request: The server cannot or will not process the request due to an apparent client error (e.g., malformed request syntax).
- 401 Unauthorized: Authentication is required, and it has failed or has not yet been provided. You may need to provide login credentials to access the content.
- 403 Forbidden: The request was valid, but the server is refusing to fulfill it. This often occurs when the server has detected scraping behavior and is blocking your client.
- 404 Not Found: The requested resource could not be found but may be available again in the future. This is common when scraping URLs that have been removed or changed.
- 429 Too Many Requests: The user has sent too many requests in a given amount of time ("rate limiting"). This happens when you're scraping too aggressively and need to slow down your request rate.
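One tidy way to branch on these client errors with requests is raise_for_status(), which raises an HTTPError for any 4xx or 5xx response; a sketch, using a placeholder URL:

```python
import requests

# A hypothetical target URL; substitute the page you are scraping.
response = requests.get('http://example.com/some-page')
try:
    response.raise_for_status()  # raises HTTPError for 4xx and 5xx codes
except requests.exceptions.HTTPError:
    if response.status_code == 401:
        print('Authentication required; supply credentials.')
    elif response.status_code == 403:
        print('Access refused; the site may have flagged the scraper.')
    elif response.status_code == 404:
        print('Page not found; drop or re-check this URL.')
    elif response.status_code == 429:
        print('Rate limited; slow down before retrying.')
```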
5xx Server Error
- 500 Internal Server Error: A generic error message, given when an unexpected condition was encountered and no more specific message is suitable.
- 502 Bad Gateway: The server was acting as a gateway or proxy and received an invalid response from the upstream server.
- 503 Service Unavailable: The server is currently unable to handle the request due to a temporary overload or maintenance of the server.
- 504 Gateway Timeout: The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.
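Because 5xx errors are usually transient, retrying them after a pause often succeeds. One way to automate this with requests is to mount an HTTPAdapter configured with urllib3's Retry helper; a sketch:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry up to three times on common transient 5xx codes, with
# exponentially increasing pauses between attempts.
retries = Retry(total=3, backoff_factor=1,
                status_forcelist=[500, 502, 503, 504])
session = requests.Session()
session.mount('http://', HTTPAdapter(max_retries=retries))
session.mount('https://', HTTPAdapter(max_retries=retries))

response = session.get('http://example.com')
print(response.status_code)
```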
When writing a web scraper, it's important to handle these HTTP status codes appropriately. For instance, in the case of a 429 Too Many Requests response, you may need to implement a retry mechanism with exponential backoff or respect the Retry-After header if the server provides it.
Here's an example of handling HTTP status codes in Python using the requests library:
```python
import requests
from time import sleep

response = requests.get('http://example.com')

if response.status_code == 200:
    print('Success!')
elif response.status_code == 404:
    print('Not Found.')
elif response.status_code == 429:
    retry_after = int(response.headers.get('Retry-After', 60))
    print(f'Rate limited. Retrying after {retry_after} seconds.')
    sleep(retry_after)
    # Optionally, you could make the request again here.
else:
    print(f'Error: {response.status_code}')
```
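To make the 429 branch actually retry, you could wrap the request in a loop with exponential backoff, as described above. A sketch along those lines (fetch_with_backoff is a hypothetical helper; note that Retry-After may also carry an HTTP date, which this simplified version does not parse):

```python
import requests
from time import sleep

def fetch_with_backoff(url, max_retries=5):
    """Fetch url, retrying on 429 with the server's Retry-After delay
    if given, otherwise an exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_retries):
        response = requests.get(url)
        if response.status_code != 429:
            return response
        retry_after = response.headers.get('Retry-After')
        # Assume Retry-After holds a number of seconds when present.
        delay = int(retry_after) if retry_after and retry_after.isdigit() else 2 ** attempt
        print(f'Rate limited. Waiting {delay} seconds before retrying.')
        sleep(delay)
    return response  # give up after max_retries attempts

response = fetch_with_backoff('http://example.com')
```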
And here's an example in JavaScript using axios:
```javascript
const axios = require('axios');

axios.get('http://example.com')
  .then(response => {
    console.log('Success!');
  })
  .catch(error => {
    if (error.response) {
      if (error.response.status === 404) {
        console.log('Not Found.');
      } else if (error.response.status === 429) {
        const retryAfter = error.response.headers['retry-after'] || 60;
        console.log(`Rate limited. Retrying after ${retryAfter} seconds.`);
        setTimeout(() => {
          // Optionally, you could make the request again here.
        }, retryAfter * 1000);
      } else {
        console.log(`Error: ${error.response.status}`);
      }
    } else if (error.request) {
      console.log('No response was received.');
    } else {
      console.log('Error setting up the request.');
    }
  });
```
Handling these status codes correctly is crucial for building a robust and respectful web scraping tool. Always remember to follow the rules in the website's robots.txt file and its terms of service, both to avoid legal issues and to be a good netizen.