How to Use Puppeteer for Web Performance Testing
Web performance testing is crucial for ensuring your applications deliver a fast, stable user experience. Puppeteer, the Chrome team's Node.js browser automation library, provides powerful capabilities for performance testing and monitoring. This guide covers the essentials of using Puppeteer for web performance testing.
Understanding Web Performance Metrics
Before diving into implementation, it's essential to understand the key performance metrics that Puppeteer can measure:
- First Contentful Paint (FCP): Time when the first text or image appears
- Largest Contentful Paint (LCP): Time when the largest content element becomes visible
- Time to Interactive (TTI): Time when the page becomes fully interactive
- Cumulative Layout Shift (CLS): Measure of visual stability
- Total Blocking Time (TBT): Sum of the blocking portions (the time beyond 50ms) of long tasks between FCP and TTI
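Google publishes pass/fail thresholds for the Core Web Vitals. As a point of reference, here is a small helper (the function and constant names are illustrative, not part of any library) that classifies a measured value against those thresholds:

```javascript
// Google's published Core Web Vitals thresholds (LCP/FID in ms, CLS unitless)
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  FID: { good: 100, poor: 300 },
  CLS: { good: 0.1, poor: 0.25 }
};

// Classify a measured value as 'good', 'needs-improvement', or 'poor'
function rateVital(name, value) {
  const t = THRESHOLDS[name];
  if (!t) throw new Error(`Unknown vital: ${name}`);
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}

rateVital('LCP', 1800); // → 'good'
rateVital('CLS', 0.3);  // → 'poor'
```

The same thresholds can back the performance-budget checks shown later in this guide.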
Basic Performance Testing Setup
Installing Puppeteer
npm install puppeteer
# or
yarn add puppeteer
Basic Performance Test Script
const puppeteer = require('puppeteer');
async function performanceTest() {
const browser = await puppeteer.launch({
headless: true,
args: ['--no-sandbox', '--disable-setuid-sandbox']
});
const page = await browser.newPage();
// Enable performance monitoring
await page.tracing.start({ path: 'trace.json' });
// Navigate to the page
const response = await page.goto('https://example.com', {
waitUntil: 'networkidle2'
});
// Stop tracing
await page.tracing.stop();
// Get performance metrics
const performanceMetrics = await page.metrics();
console.log('Performance Metrics:', performanceMetrics);
await browser.close();
}
performanceTest();
Advanced Performance Testing Techniques
Measuring Core Web Vitals
const puppeteer = require('puppeteer');
async function measureCoreWebVitals(url) {
const browser = await puppeteer.launch();
const page = await browser.newPage();
// Navigate to the page
await page.goto(url, { waitUntil: 'networkidle2' });
// Measure Core Web Vitals
const metrics = await page.evaluate(() => {
return new Promise((resolve) => {
let LCP = 0;
let FID = 0;
let CLS = 0;
let FCP = 0;
// Largest Contentful Paint
new PerformanceObserver((list) => {
const entries = list.getEntries();
LCP = entries[entries.length - 1].startTime;
}).observe({ type: 'largest-contentful-paint', buffered: true });
// First Input Delay (requires a real user interaction, so it stays 0 in non-interactive automated runs)
new PerformanceObserver((list) => {
const entries = list.getEntries();
FID = entries[0].processingStart - entries[0].startTime;
}).observe({ type: 'first-input', buffered: true });
// Cumulative Layout Shift
new PerformanceObserver((list) => {
const entries = list.getEntries();
CLS = entries.reduce((sum, entry) => sum + entry.value, 0);
}).observe({ type: 'layout-shift', buffered: true });
// First Contentful Paint ('paint' entries include first-paint too, so filter by name)
new PerformanceObserver((list) => {
const fcpEntry = list.getEntries().find(e => e.name === 'first-contentful-paint');
if (fcpEntry) FCP = fcpEntry.startTime;
}).observe({ type: 'paint', buffered: true });
// Wait for measurements to complete
setTimeout(() => {
resolve({ LCP, FID, CLS, FCP });
}, 3000);
});
});
await browser.close();
return metrics;
}
// Usage
measureCoreWebVitals('https://example.com')
.then(metrics => console.log('Core Web Vitals:', metrics));
Network Performance Testing
async function testNetworkPerformance(url) {
const browser = await puppeteer.launch();
const page = await browser.newPage();
// Enable request interception
await page.setRequestInterception(true);
const requests = [];
const responses = [];
page.on('request', (request) => {
requests.push({
url: request.url(),
method: request.method(),
timestamp: Date.now()
});
request.continue();
});
page.on('response', (response) => {
responses.push({
url: response.url(),
status: response.status(),
timestamp: Date.now(),
size: Number(response.headers()['content-length']) || 0
});
});
const startTime = Date.now();
await page.goto(url, { waitUntil: 'networkidle2' });
const loadTime = Date.now() - startTime;
// Analyze network performance
const networkMetrics = {
totalRequests: requests.length,
totalResponses: responses.length,
totalLoadTime: loadTime,
averageResponseTime: responses.length ? responses.reduce((sum, r) => sum + (r.timestamp - startTime), 0) / responses.length : 0, // mean time from navigation start to each response
failedRequests: responses.filter(r => r.status >= 400).length
};
console.log('Network Performance:', networkMetrics);
await browser.close();
return networkMetrics;
}
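The raw `responses` array collected above can also be reduced into a per-status-class summary. A standalone helper (the record shape matches what the function above pushes into `responses`):

```javascript
// Group collected response records by status class (2xx, 3xx, 4xx, 5xx)
// and total up their reported content-length sizes
function summarizeResponses(responses) {
  const summary = {};
  for (const r of responses) {
    const statusClass = `${Math.floor(r.status / 100)}xx`;
    if (!summary[statusClass]) {
      summary[statusClass] = { count: 0, totalBytes: 0 };
    }
    summary[statusClass].count += 1;
    summary[statusClass].totalBytes += Number(r.size) || 0;
  }
  return summary;
}
```

Calling `summarizeResponses(responses)` after navigation gives a quick view of how many requests failed and where the bytes went.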
Performance Testing with Different Network Conditions
async function testWithNetworkThrottling(url) {
const browser = await puppeteer.launch();
const page = await browser.newPage();
// Emulate different network conditions
const networkConditions = [
{ name: 'Fast 3G', downloadThroughput: 1.5 * 1024 * 1024 / 8, uploadThroughput: 750 * 1024 / 8, latency: 40 },
{ name: 'Slow 3G', downloadThroughput: 0.5 * 1024 * 1024 / 8, uploadThroughput: 0.5 * 1024 * 1024 / 8, latency: 400 },
{ name: 'Offline', downloadThroughput: 0, uploadThroughput: 0, latency: 0 }
];
const results = [];
for (const condition of networkConditions) {
// Set network conditions
const client = await page.createCDPSession(); // page.target().createCDPSession() in older Puppeteer versions
await client.send('Network.emulateNetworkConditions', {
offline: condition.name === 'Offline',
downloadThroughput: condition.downloadThroughput,
uploadThroughput: condition.uploadThroughput,
latency: condition.latency
});
const startTime = Date.now();
try {
await page.goto(url, { waitUntil: 'networkidle2', timeout: 30000 });
const loadTime = Date.now() - startTime;
results.push({
condition: condition.name,
loadTime,
success: true
});
} catch (error) {
results.push({
condition: condition.name,
loadTime: null,
success: false,
error: error.message
});
}
}
await browser.close();
return results;
}
Performance Monitoring and Reporting
Creating Performance Reports
const fs = require('fs');
const path = require('path');
async function generatePerformanceReport(urls) {
const report = {
timestamp: new Date().toISOString(),
results: []
};
for (const url of urls) {
const browser = await puppeteer.launch();
const page = await browser.newPage();
// Start performance monitoring
await page.tracing.start({ path: `trace-${Date.now()}.json` });
const startTime = Date.now();
await page.goto(url, { waitUntil: 'networkidle2' });
const loadTime = Date.now() - startTime;
// Get performance metrics
const metrics = await page.metrics();
// Get Core Web Vitals (buffered observers pick up entries recorded before this script runs;
// a single observer with entryTypes cannot use `buffered` and would miss them)
const webVitals = await page.evaluate(() => {
return new Promise((resolve) => {
const vitals = {};
new PerformanceObserver((list) => {
const entries = list.getEntries();
vitals.LCP = entries[entries.length - 1].startTime;
}).observe({ type: 'largest-contentful-paint', buffered: true });
new PerformanceObserver((list) => {
const entry = list.getEntries()[0];
vitals.FID = entry.processingStart - entry.startTime;
}).observe({ type: 'first-input', buffered: true });
new PerformanceObserver((list) => {
list.getEntries().forEach(entry => {
vitals.CLS = (vitals.CLS || 0) + entry.value;
});
}).observe({ type: 'layout-shift', buffered: true });
// Resolve once, after a settle period, so late entries are included
setTimeout(() => resolve(vitals), 5000);
});
});
await page.tracing.stop();
report.results.push({
url,
loadTime,
metrics,
webVitals,
timestamp: new Date().toISOString()
});
await browser.close();
}
// Save report
const reportPath = path.join(__dirname, 'performance-report.json');
fs.writeFileSync(reportPath, JSON.stringify(report, null, 2));
return report;
}
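Once reports are saved to disk, two of them can be diffed to catch regressions between runs. A sketch (the report shape matches what `generatePerformanceReport` writes; the function name and tolerance default are illustrative):

```javascript
// Compare two performance reports and flag URLs whose load time
// regressed by more than `tolerance` milliseconds
function findRegressions(baseline, current, tolerance = 500) {
  const baselineByUrl = new Map(baseline.results.map(r => [r.url, r]));
  const regressions = [];
  for (const result of current.results) {
    const prev = baselineByUrl.get(result.url);
    if (prev && result.loadTime - prev.loadTime > tolerance) {
      regressions.push({
        url: result.url,
        before: prev.loadTime,
        after: result.loadTime,
        delta: result.loadTime - prev.loadTime
      });
    }
  }
  return regressions;
}
```

Feeding this yesterday's report as `baseline` and today's as `current` yields only the URLs worth investigating.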
Mobile Performance Testing
async function testMobilePerformance(url) {
const browser = await puppeteer.launch();
const page = await browser.newPage();
// Emulate a mobile device (KnownDevices replaced puppeteer.devices in newer Puppeteer versions)
const { KnownDevices } = require('puppeteer');
await page.emulate(KnownDevices['iPhone 12']);
// Note: the 'metrics' event only fires when the page calls console.timeStamp(),
// so this array may stay empty on many pages
const metrics = [];
page.on('metrics', (metric) => {
metrics.push(metric);
});
// Navigate and measure
const startTime = Date.now();
await page.goto(url, { waitUntil: 'networkidle2' });
const loadTime = Date.now() - startTime;
// Get mobile-specific metrics
const mobileMetrics = await page.evaluate(() => {
return {
devicePixelRatio: window.devicePixelRatio,
screenWidth: window.screen.width,
screenHeight: window.screen.height,
viewportWidth: window.innerWidth,
viewportHeight: window.innerHeight
};
});
await browser.close();
return {
loadTime,
mobileMetrics,
performanceMetrics: metrics
};
}
Performance Optimization Testing
Testing Performance Improvements
async function comparePerformance(url, scenarios) {
const results = [];
for (const scenario of scenarios) {
const browser = await puppeteer.launch();
const page = await browser.newPage();
// Apply scenario configurations
if (scenario.blockResources) {
await page.setRequestInterception(true);
page.on('request', (req) => {
if (scenario.blockResources.includes(req.resourceType())) {
req.abort();
} else {
req.continue();
}
});
}
if (scenario.cacheEnabled === false) {
await page.setCacheEnabled(false);
}
// Measure performance
const startTime = Date.now();
await page.goto(url, { waitUntil: 'networkidle2' });
const loadTime = Date.now() - startTime;
const metrics = await page.metrics();
results.push({
scenario: scenario.name,
loadTime,
metrics
});
await browser.close();
}
return results;
}
// Usage example
const scenarios = [
{ name: 'Baseline', blockResources: [], cacheEnabled: true },
{ name: 'No Images', blockResources: ['image'], cacheEnabled: true },
{ name: 'No JavaScript', blockResources: ['script'], cacheEnabled: true },
{ name: 'No Cache', blockResources: [], cacheEnabled: false }
];
comparePerformance('https://example.com', scenarios)
.then(results => console.log('Performance Comparison:', results));
Best Practices for Performance Testing
1. Consistent Test Environment
async function createConsistentBrowser() {
return await puppeteer.launch({
headless: true,
args: [
'--no-sandbox',
'--disable-setuid-sandbox',
'--disable-dev-shm-usage',
'--disable-background-timer-throttling',
'--disable-backgrounding-occluded-windows',
'--disable-renderer-backgrounding'
]
});
}
2. Multiple Test Runs
async function runMultipleTests(url, iterations = 5) {
const results = [];
for (let i = 0; i < iterations; i++) {
const browser = await createConsistentBrowser();
const page = await browser.newPage();
const startTime = Date.now();
await page.goto(url, { waitUntil: 'networkidle2' });
const loadTime = Date.now() - startTime;
const metrics = await page.metrics();
results.push({ iteration: i + 1, loadTime, metrics });
await browser.close();
}
// Calculate averages
const avgLoadTime = results.reduce((sum, r) => sum + r.loadTime, 0) / results.length;
const avgJSHeapUsedSize = results.reduce((sum, r) => sum + r.metrics.JSHeapUsedSize, 0) / results.length;
return {
individual: results,
averages: {
loadTime: avgLoadTime,
jsHeapUsedSize: avgJSHeapUsedSize
}
};
}
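Averages are sensitive to outliers, and load times in particular tend to have a long tail. Alongside the mean computed above, median and 95th-percentile summaries are common; a small helper using a nearest-rank percentile (the function names are illustrative):

```javascript
// Nearest-rank percentile: p in [0, 100], index clamped to the array bounds
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, index)];
}

// Summarize a series of load-time measurements from repeated runs
function summarizeRuns(loadTimes) {
  return {
    median: percentile(loadTimes, 50),
    p95: percentile(loadTimes, 95),
    min: Math.min(...loadTimes),
    max: Math.max(...loadTimes)
  };
}

summarizeRuns([100, 200, 300, 400, 500]); // → { median: 300, p95: 500, min: 100, max: 500 }
```

Passing the `loadTime` values from `runMultipleTests` into `summarizeRuns` gives a more robust picture than the mean alone.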
3. Performance Budgets
function checkPerformanceBudget(metrics, budget) {
const results = {
passed: true,
violations: []
};
if (metrics.loadTime > budget.maxLoadTime) {
results.passed = false;
results.violations.push(`Load time ${metrics.loadTime}ms exceeds budget ${budget.maxLoadTime}ms`);
}
if (metrics.LCP > budget.maxLCP) {
results.passed = false;
results.violations.push(`LCP ${metrics.LCP}ms exceeds budget ${budget.maxLCP}ms`);
}
if (metrics.CLS > budget.maxCLS) {
results.passed = false;
results.violations.push(`CLS ${metrics.CLS} exceeds budget ${budget.maxCLS}`);
}
return results;
}
// Usage
const budget = {
maxLoadTime: 3000,
maxLCP: 2500,
maxCLS: 0.1
};
const budgetCheck = checkPerformanceBudget(performanceMetrics, budget); // performanceMetrics comes from one of the measurement functions above
console.log('Budget Check:', budgetCheck);
Integration with CI/CD
For continuous performance monitoring, integrate your Puppeteer performance tests into your CI/CD pipeline:
// performance-test.js
const puppeteer = require('puppeteer');
async function runPerformanceTest() {
const browser = await puppeteer.launch();
const page = await browser.newPage();
// Measure wall-clock load time directly; note that page.metrics().Timestamp
// is a monotonic clock reading, not a load time, so it cannot be used here
const startTime = Date.now();
await page.goto(process.env.TEST_URL || 'http://localhost:3000', { waitUntil: 'networkidle2' });
const loadTime = Date.now() - startTime;
await browser.close();
// Exit with a non-zero code if performance is poor so the CI job fails
if (loadTime > 5000) {
console.error(`Performance test failed: Load time ${loadTime}ms exceeds threshold`);
process.exit(1);
}
console.log(`Performance test passed: Load time ${loadTime}ms`);
}
runPerformanceTest();
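One way to wire this into a pipeline is a CI job that runs the script and fails on its non-zero exit code. A hypothetical GitHub Actions job (the file path, action versions, and TEST_URL value are placeholders for your own setup):

```yaml
# .github/workflows/performance.yml (illustrative)
name: Performance tests
on: [pull_request]
jobs:
  performance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # process.exit(1) in the script fails this step, and with it the job
      - run: node performance-test.js
        env:
          TEST_URL: https://staging.example.com
```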
Python Alternative with Pyppeteer
For Python developers, Pyppeteer offers a similar API for performance testing (note that Pyppeteer is no longer actively maintained; Playwright for Python is the maintained alternative for new projects):
import asyncio
from pyppeteer import launch
import json
import time
async def performance_test(url):
browser = await launch()
page = await browser.newPage()
# Start performance monitoring
await page.tracing.start({'path': 'trace.json'})
start_time = time.time()
await page.goto(url, {'waitUntil': 'networkidle2'})
load_time = (time.time() - start_time) * 1000 # Convert to milliseconds
# Get performance metrics
metrics = await page.metrics()
# Stop tracing
await page.tracing.stop()
await browser.close()
return {
'load_time': load_time,
'metrics': metrics
}
# Usage
async def main():
result = await performance_test('https://example.com')
print(f"Load time: {result['load_time']:.2f}ms")
print(f"Metrics: {json.dumps(result['metrics'], indent=2)}")
asyncio.run(main())
Conclusion
Puppeteer provides comprehensive capabilities for web performance testing, from basic metrics collection to advanced Core Web Vitals measurement. By implementing proper performance testing strategies, you can ensure your web applications deliver optimal user experiences across different devices and network conditions.
For more advanced automation scenarios, consider exploring how to handle timeouts and apply network throttling in Playwright as a cross-platform complement to the techniques shown here.
Remember to establish performance budgets, run tests consistently, and integrate performance testing into your development workflow to maintain optimal web performance over time.