How do I use the Requests library to download images or videos?

To download images or videos with the Python Requests library, send a GET request to the URL where the media is hosted and write the response body to a file. Below are the steps, followed by code examples for downloading an image and a video.

Step 1: Install Requests

If you haven't already installed the Requests library, you can do so using pip:

pip install requests

Step 2: Use Requests to Fetch the Media

Perform a GET request to the media URL to fetch its content.

Step 3: Save the Media to a Local File

Open a file in binary write mode and write the content of the response to the file.

Here is an example of how to download an image:

import requests

# URL of the image
image_url = 'https://example.com/path/to/image.jpg'

# Get the image content
response = requests.get(image_url)

# Check if the request was successful
if response.status_code == 200:
    # Open a local file in binary write mode
    with open('downloaded_image.jpg', 'wb') as file:
        file.write(response.content)
else:
    print(f'Failed to retrieve image: status code {response.status_code}')

And here is an example for downloading a video:

import requests

# URL of the video
video_url = 'https://example.com/path/to/video.mp4'

# Get the video content
response = requests.get(video_url, stream=True)

# Check if the request was successful
if response.status_code == 200:
    # Open a local file in binary write mode
    with open('downloaded_video.mp4', 'wb') as file:
        for chunk in response.iter_content(chunk_size=1024*1024):  # Download in 1 MB chunks
            if chunk:  # skip empty keep-alive chunks
                file.write(chunk)
else:
    print(f'Failed to retrieve video: status code {response.status_code}')
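
As a variation, shutil.copyfileobj() from the standard library can stream response.raw straight to disk in place of the manual chunk loop. This is a minimal sketch of that approach, reusing the placeholder URL from above; using the response as a context manager ensures the connection is released even if the download fails partway:

import shutil
import requests

video_url = 'https://example.com/path/to/video.mp4'

# Using the response as a context manager releases the connection
# even if writing to disk fails partway through.
with requests.get(video_url, stream=True) as response:
    if response.status_code == 200:
        # Ask urllib3 to decompress gzip/deflate-encoded bodies on read
        response.raw.decode_content = True
        with open('downloaded_video.mp4', 'wb') as file:
            # Copy the raw stream to disk without buffering it all in memory
            shutil.copyfileobj(response.raw, file)
    else:
        print(f'Failed to retrieve video: status code {response.status_code}')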

Notes:

  • When downloading large files such as videos, pass stream=True to requests.get() so the content is downloaded in chunks instead of being loaded into memory all at once. This keeps memory usage bounded regardless of the file size.
  • The iter_content() method yields the response body in chunks of the given size; each chunk is written to the file before the next one is fetched.
  • Handle exceptions and check for HTTP errors (status codes other than 200) to make your code robust; a sketch of this pattern follows this list.
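
To illustrate the last note, here is a minimal sketch that wraps the image download in error handling. raise_for_status() converts HTTP error codes into exceptions, and the timeout value (an assumption here, tune it to your needs) prevents the request from hanging indefinitely:

import requests

image_url = 'https://example.com/path/to/image.jpg'

try:
    # timeout (in seconds) limits how long to wait for the server; 30 is an assumed value
    response = requests.get(image_url, timeout=30)
    response.raise_for_status()  # raises requests.HTTPError for 4xx/5xx status codes
except requests.exceptions.RequestException as error:
    # RequestException covers connection errors, timeouts, and HTTP errors alike
    print(f'Download failed: {error}')
else:
    with open('downloaded_image.jpg', 'wb') as file:
        file.write(response.content)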

Remember to respect the terms of service of websites and obtain permission before scraping or downloading content.
