How can I install Puppeteer-Sharp in a .NET project?

PuppeteerSharp is a .NET port of Google's Puppeteer library, providing a high-level API to control headless Chrome or Chromium browsers via the DevTools Protocol. This guide covers the main installation methods along with common troubleshooting tips.

Prerequisites

Before installing PuppeteerSharp, ensure your project meets these requirements:

  • .NET 6.0 or later (recommended for latest features)
  • .NET Standard 2.0 (minimum requirement)
  • Visual Studio 2019+ or VS Code with C# extension
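
To confirm that a suitable SDK is installed, you can list the SDKs available on your machine (any 6.0 or later entry satisfies the recommendation above):

dotnet --list-sdks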

Installation Methods

1. Using .NET CLI (Recommended)

The fastest way to install PuppeteerSharp is through the .NET CLI:

# Navigate to your project directory
cd YourProjectDirectory

# Add PuppeteerSharp package
dotnet add package PuppeteerSharp

# Restore packages (optional, done automatically)
dotnet restore

To install a specific version:

dotnet add package PuppeteerSharp --version 13.0.2

2. Using Visual Studio Package Manager Console

Open the Package Manager Console in Visual Studio (Tools > NuGet Package Manager > Package Manager Console):

# Install latest version
Install-Package PuppeteerSharp

# Install specific version
Install-Package PuppeteerSharp -Version 13.0.2

3. Using Visual Studio NuGet Package Manager GUI

  1. Right-click your project in Solution Explorer
  2. Select Manage NuGet Packages
  3. Click the Browse tab
  4. Search for PuppeteerSharp
  5. Select the PuppeteerSharp package published by Darío Kondratiuk
  6. Click Install

4. Manual Installation via .csproj

Edit your project file (.csproj) and add the package reference:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="PuppeteerSharp" Version="13.0.2" />
  </ItemGroup>

</Project>

Then restore packages:

dotnet restore

Verification and First Use

Basic Setup Example

After installation, verify PuppeteerSharp works with this simple example:

using System;
using System.Threading.Tasks;
using PuppeteerSharp;

class Program
{
    public static async Task Main(string[] args)
    {
        // Download Chromium if not already installed
        await new BrowserFetcher().DownloadAsync();

        // Launch browser
        using var browser = await Puppeteer.LaunchAsync(new LaunchOptions
        {
            Headless = true,
            DefaultViewport = new ViewPortOptions
            {
                Width = 1920,
                Height = 1080
            }
        });

        // Create new page
        using var page = await browser.NewPageAsync();

        // Navigate to URL
        await page.GoToAsync("https://example.com");

        // Take screenshot
        await page.ScreenshotAsync("screenshot.png");

        Console.WriteLine("Screenshot saved as screenshot.png");
    }
}

Advanced Configuration Example

For production applications, consider this more robust setup:

using System.Threading.Tasks;
using PuppeteerSharp;

public class WebScrapingService
{
    private readonly LaunchOptions _launchOptions;

    public WebScrapingService()
    {
        _launchOptions = new LaunchOptions
        {
            Headless = true,
            Args = new[]
            {
                "--no-sandbox",
                "--disable-setuid-sandbox",
                "--disable-dev-shm-usage",
                "--disable-gpu"
            },
            DefaultViewport = new ViewPortOptions
            {
                Width = 1920,
                Height = 1080
            }
        };
    }

    public async Task<string> GetPageContentAsync(string url)
    {
        using var browser = await Puppeteer.LaunchAsync(_launchOptions);
        using var page = await browser.NewPageAsync();

        await page.GoToAsync(url, new NavigationOptions
        {
            WaitUntil = new[] { WaitUntilNavigation.Networkidle0 }
        });

        return await page.GetContentAsync();
    }
}
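
A quick usage sketch for the service above (it assumes Chromium has already been downloaded, for example via the BrowserFetcher call in the basic example, and the URL is only a placeholder):

// Example usage, e.g. from an async Main method or a controller action
var scraper = new WebScrapingService();
string html = await scraper.GetPageContentAsync("https://example.com");
Console.WriteLine($"Downloaded {html.Length} characters");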

Troubleshooting Common Issues

Issue 1: Chromium Download Fails

Problem: BrowserFetcher fails to download Chromium automatically.

Solution:

// Trigger the Chromium download explicitly so any error surfaces here
var browserFetcher = new BrowserFetcher();
await browserFetcher.DownloadAsync();

// Or use system Chrome/Chromium
var browser = await Puppeteer.LaunchAsync(new LaunchOptions
{
    ExecutablePath = "/path/to/chrome", // Adjust path
    Headless = true
});

Issue 2: Linux/Docker Compatibility

Problem: Browser crashes in Linux containers.

Solution: Add required arguments:

var launchOptions = new LaunchOptions
{
    Headless = true,
    Args = new[]
    {
        "--no-sandbox",
        "--disable-setuid-sandbox",
        "--disable-dev-shm-usage"
    }
};

Issue 3: Memory Issues

Problem: High memory usage or OutOfMemoryException.

Solution: Properly dispose resources:

// Always use 'using' statements
using var browser = await Puppeteer.LaunchAsync(options);
using var page = await browser.NewPageAsync();

// Or explicitly dispose (declare the browser outside the try block
// so it is still in scope in finally)
IBrowser browser = null;
try
{
    browser = await Puppeteer.LaunchAsync(options);
    // ... use browser
}
finally
{
    if (browser != null)
    {
        await browser.CloseAsync();
    }
}

Version Compatibility

| PuppeteerSharp Version | .NET Requirement    | Chrome Version |
|------------------------|---------------------|----------------|
| 13.x                   | .NET 6.0+           | 119+           |
| 12.x                   | .NET 6.0+           | 118+           |
| 11.x                   | .NET 6.0+           | 117+           |
| 8.x                    | .NET Standard 2.0+  | 108+           |

Next Steps

After successful installation:

  1. Configure browser options for your specific use case
  2. Set up error handling and logging
  3. Implement resource disposal patterns
  4. Consider using dependency injection in ASP.NET Core projects (see the sketch after this list)
  5. Review the official documentation at PuppeteerSharp GitHub
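
For the dependency injection point, here is a minimal sketch of wiring the WebScrapingService class from the earlier example into an ASP.NET Core minimal API project (the route, service lifetime, and endpoint shape are illustrative choices, not requirements of PuppeteerSharp; using directives are omitted because the web template enables implicit usings):

var builder = WebApplication.CreateBuilder(args);

// Register the scraping service once; it only holds launch options, so a singleton is fine
builder.Services.AddSingleton<WebScrapingService>();

var app = builder.Build();

// Illustrative endpoint that returns the HTML of the requested page
app.MapGet("/content", async (string url, WebScrapingService scraper) =>
    Results.Content(await scraper.GetPageContentAsync(url), "text/html"));

app.Run();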

The package is now ready for web scraping, automated testing, or PDF generation tasks in your .NET application.
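
As a quick illustration of the PDF use case, here is a minimal sketch that renders a page to a file (the output file name is just an example):

using System.Threading.Tasks;
using PuppeteerSharp;

class PdfExample
{
    public static async Task Main()
    {
        // Download Chromium if needed, then launch headless
        await new BrowserFetcher().DownloadAsync();
        using var browser = await Puppeteer.LaunchAsync(new LaunchOptions { Headless = true });
        using var page = await browser.NewPageAsync();

        await page.GoToAsync("https://example.com");

        // Save the rendered page as a PDF
        await page.PdfAsync("page.pdf");
    }
}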

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
