How to Run Playwright Tests in Parallel Across Multiple Machines
Running Playwright tests in parallel across multiple machines is essential for scaling your test suite and reducing overall execution time. This approach, known as test distribution or sharding, allows you to leverage multiple computing resources to run tests simultaneously across different environments.
Understanding Test Distribution vs Parallelization
Before diving into multi-machine setups, it's important to understand the difference between local parallelization and distributed testing:
- Local Parallelization: Running multiple tests simultaneously on the same machine
- Test Distribution: Splitting tests across multiple machines or environments
- Sharding: Dividing your test suite into smaller, manageable chunks
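To build intuition for what a shard "is", here is a simplified model of how a `--shard=k/n` flag might partition a test suite: round-robin over the file list. This is an illustration only; Playwright's real sharding algorithm is internal and may split differently.

```javascript
// Simplified model of --shard=k/n: round-robin over the file list.
// Illustration only; Playwright's actual sharding may assign files differently.
function filesForShard(testFiles, shardIndex, totalShards) {
  return testFiles.filter((_, i) => i % totalShards === shardIndex - 1);
}

const files = ['a.spec.js', 'b.spec.js', 'c.spec.js', 'd.spec.js', 'e.spec.js'];
console.log(filesForShard(files, 1, 3)); // files for the machine running --shard=1/3
console.log(filesForShard(files, 2, 3)); // files for the machine running --shard=2/3
```

The point is that every machine sees the same full file list and deterministically picks its own slice, so no coordinator process is needed.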
Prerequisites for Multi-Machine Test Distribution
1. Test Suite Organization
Ensure your tests are properly organized and independent:
// tests/auth/login.spec.js
import { test, expect } from '@playwright/test';

test.describe('Authentication Tests', () => {
  test('should login successfully', async ({ page }) => {
    await page.goto('/login');
    await page.fill('#username', 'testuser');
    await page.fill('#password', 'password');
    await page.click('#login-button');
    await expect(page).toHaveURL('/dashboard');
  });
});
2. Environment Configuration
Create a shared configuration file:
// playwright.config.js
const { devices } = require('@playwright/test');

module.exports = {
  testDir: './tests',
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 1 : undefined,
  reporter: [
    ['html'],
    ['json', { outputFile: 'test-results.json' }],
    ['junit', { outputFile: 'results.xml' }]
  ],
  use: {
    baseURL: process.env.BASE_URL || 'http://localhost:3000',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
  },
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'firefox',
      use: { ...devices['Desktop Firefox'] },
    },
    {
      name: 'webkit',
      use: { ...devices['Desktop Safari'] },
    },
  ],
};
Method 1: Using Playwright's Built-in Sharding
Playwright provides native support for test sharding across multiple machines:
Basic Sharding Setup
# Machine 1 (runs shard 1 of 3)
npx playwright test --shard=1/3
# Machine 2 (runs shard 2 of 3)
npx playwright test --shard=2/3
# Machine 3 (runs shard 3 of 3)
npx playwright test --shard=3/3
Advanced Sharding with Custom Logic
// playwright.config.js
const { devices } = require('@playwright/test');

module.exports = {
  testDir: './tests',
  fullyParallel: true,
  // Environment variables are strings, so coerce to a number
  workers: Number(process.env.WORKERS) || 4,
  // Custom test matching for sharding
  testMatch: process.env.TEST_PATTERN || '**/*.spec.js',
  use: {
    baseURL: process.env.BASE_URL || 'http://localhost:3000',
  },
  projects: [
    {
      name: 'shard-1',
      testMatch: '**/auth/**/*.spec.js',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'shard-2',
      testMatch: '**/api/**/*.spec.js',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'shard-3',
      testMatch: '**/ui/**/*.spec.js',
      use: { ...devices['Desktop Chrome'] },
    },
  ],
};
Method 2: CI/CD Pipeline Distribution
GitHub Actions Matrix Strategy
# .github/workflows/playwright.yml
name: Playwright Tests
on:
  push:
    branches: [ main, master ]
  pull_request:
    branches: [ main, master ]
jobs:
  test:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright Browsers
        run: npx playwright install --with-deps
      - name: Run Playwright tests
        run: npx playwright test --shard=${{ matrix.shard }}/4
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report-${{ matrix.shard }}
          path: playwright-report/
          retention-days: 30
Jenkins Pipeline Distribution
pipeline {
  agent none
  stages {
    stage('Parallel Tests') {
      parallel {
        stage('Shard 1') {
          agent { label 'test-machine-1' }
          steps {
            sh 'npm ci'
            sh 'npx playwright install --with-deps'
            sh 'npx playwright test --shard=1/4'
          }
          post {
            always {
              publishHTML([
                allowMissing: false,
                alwaysLinkToLastBuild: true,
                keepAll: true,
                reportDir: 'playwright-report',
                reportFiles: 'index.html',
                reportName: 'Playwright Report Shard 1'
              ])
            }
          }
        }
        stage('Shard 2') {
          agent { label 'test-machine-2' }
          steps {
            sh 'npm ci'
            sh 'npx playwright install --with-deps'
            sh 'npx playwright test --shard=2/4'
          }
        }
        // Additional shards...
      }
    }
  }
}
Method 3: Docker-based Distribution
Docker Compose Setup
# docker-compose.test.yml
version: '3.8'
services:
  playwright-shard-1:
    image: mcr.microsoft.com/playwright:focal
    working_dir: /app
    volumes:
      - .:/app
    environment:
      - SHARD=1/4
    command: npx playwright test --shard=1/4
  playwright-shard-2:
    image: mcr.microsoft.com/playwright:focal
    working_dir: /app
    volumes:
      - .:/app
    environment:
      - SHARD=2/4
    command: npx playwright test --shard=2/4
  playwright-shard-3:
    image: mcr.microsoft.com/playwright:focal
    working_dir: /app
    volumes:
      - .:/app
    environment:
      - SHARD=3/4
    command: npx playwright test --shard=3/4
  playwright-shard-4:
    image: mcr.microsoft.com/playwright:focal
    working_dir: /app
    volumes:
      - .:/app
    environment:
      - SHARD=4/4
    command: npx playwright test --shard=4/4
Run distributed tests:
# Start all shards in parallel
docker-compose -f docker-compose.test.yml up --abort-on-container-exit
# Run specific shard
docker-compose -f docker-compose.test.yml up playwright-shard-1
Method 4: Cloud-based Test Distribution
Using Playwright Test Cloud Services
// playwright.config.js for cloud testing
const { devices } = require('@playwright/test');

module.exports = {
  testDir: './tests',
  fullyParallel: true,
  // Cloud service configuration
  use: {
    baseURL: process.env.BASE_URL,
    // Cloud-specific settings
    video: process.env.CI ? 'retain-on-failure' : 'off',
    screenshot: 'only-on-failure',
  },
  projects: [
    {
      name: 'cloud-chrome',
      use: {
        ...devices['Desktop Chrome'],
        // Cloud service specific capabilities
        channel: 'chrome',
      },
    },
  ],
};
Test Result Aggregation
Combining Results from Multiple Machines
If each shard writes a JSON report, you can merge the files with a small script. (Playwright 1.37 and later can also do this natively: run each shard with --reporter=blob, collect the blob-report folders into one directory, and run npx playwright merge-reports on it. The manual approach below is useful for older versions or custom aggregation.)
// scripts/merge-reports.js
const fs = require('fs');

function mergePlaywrightReports(reportPaths) {
  const mergedResults = {
    config: {},
    suites: [],
    errors: [],
    stats: {
      duration: 0,
      expected: 0,
      unexpected: 0,
      flaky: 0,
      skipped: 0
    }
  };

  reportPaths.forEach(reportPath => {
    if (fs.existsSync(reportPath)) {
      const report = JSON.parse(fs.readFileSync(reportPath, 'utf8'));
      // Merge test results (guard against missing sections)
      mergedResults.suites.push(...(report.suites || []));
      mergedResults.errors.push(...(report.errors || []));
      // Aggregate numeric statistics, skipping non-numeric fields
      // such as startTime
      Object.keys(report.stats || {}).forEach(key => {
        if (typeof report.stats[key] === 'number') {
          mergedResults.stats[key] = (mergedResults.stats[key] || 0) + report.stats[key];
        }
      });
    }
  });

  return mergedResults;
}

// Usage
const reportPaths = [
  'shard-1/results.json',
  'shard-2/results.json',
  'shard-3/results.json',
  'shard-4/results.json'
];

const mergedReport = mergePlaywrightReports(reportPaths);
fs.writeFileSync('merged-results.json', JSON.stringify(mergedReport, null, 2));
Best Practices for Multi-Machine Testing
1. Test Independence
Ensure tests don't depend on each other:
// Good: Independent test
test('should create user', async ({ page }) => {
  const uniqueEmail = `user-${Date.now()}@example.com`;
  await page.goto('/signup');
  await page.fill('#email', uniqueEmail);
  // ... rest of test
});

// Bad: Dependent test
test('should login with created user', async ({ page }) => {
  // This assumes the previous test created a user
  await page.goto('/login');
  await page.fill('#email', 'user@example.com');
  // ... rest of test
});
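When tests run on several machines at once, a timestamp alone can collide. One simple way to keep generated data unique is to bake the shard and worker IDs into it. A minimal helper sketch, assuming you export a SHARD environment variable yourself (workerIndex comes from Playwright's testInfo fixture):

```javascript
// Build an email that is unique per shard, per worker, and per run,
// so tests never collide when executed on different machines.
function uniqueEmail(shard, workerIndex, now = Date.now()) {
  return `user-s${shard}-w${workerIndex}-${now}@example.com`;
}

// In a test it might be called like this (sketch):
//   test('creates user', async ({ page }, testInfo) => {
//     const email = uniqueEmail(process.env.SHARD || '1', testInfo.workerIndex);
//     ...
//   });
console.log(uniqueEmail('2', 0, 1700000000000));
```

Any per-shard discriminator works; the important property is that no two machines can ever generate the same record.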
2. Environment Isolation
// playwright.config.js
module.exports = {
  use: {
    baseURL: process.env.BASE_URL || 'http://localhost:3000',
    // Use different storage states for different shards
    storageState: process.env.STORAGE_STATE_PATH,
    // Unique test data per shard
    extraHTTPHeaders: {
      'X-Test-Shard': process.env.SHARD || '1'
    }
  }
};
3. Resource Management
// Limit concurrent browser instances per machine
module.exports = {
  workers: Number(process.env.WORKERS) || Math.floor(require('os').cpus().length / 2),
  use: {
    // Reduce memory usage
    video: process.env.CI ? 'retain-on-failure' : 'off',
    screenshot: 'only-on-failure',
    // Timeout settings for distributed environments
    actionTimeout: 30000,
    navigationTimeout: 30000,
  }
};
Monitoring and Debugging
Test Execution Monitoring
#!/bin/bash
# Monitor test execution across shards
echo "Starting distributed test monitoring..."

# Start background monitoring
watch -n 5 'ps aux | grep playwright' &
MONITOR_PID=$!

# Run tests with timing
time npx playwright test --shard=${SHARD}/4 --reporter=dot

# Stop monitoring
kill $MONITOR_PID

# Collect results
echo "Test execution completed for shard ${SHARD}"
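To see at a glance whether any shard failed, a small Node helper can total the stats blocks from the per-shard JSON reports. A sketch with file loading omitted; the inline objects stand in for parsed report stats, and the field names mirror the stats object of Playwright's JSON reporter:

```javascript
// Sum numeric stats from several shard reports and flag overall failure.
function summarizeShardStats(statsList) {
  const totals = { expected: 0, unexpected: 0, flaky: 0, skipped: 0, duration: 0 };
  for (const stats of statsList) {
    for (const key of Object.keys(totals)) {
      totals[key] += stats[key] || 0;
    }
  }
  // "unexpected" counts tests that failed after retries
  return { ...totals, passed: totals.unexpected === 0 };
}

// Inline sample data standing in for parsed results-shard-N.json files
const summary = summarizeShardStats([
  { expected: 40, unexpected: 0, flaky: 1, skipped: 2, duration: 61000 },
  { expected: 38, unexpected: 1, flaky: 0, skipped: 0, duration: 58000 },
]);
console.log(summary); // 78 expected, 1 unexpected, so passed is false
```

In a real pipeline you would read each results-shard-N.json with fs.readFileSync, pick out its stats field, and exit non-zero when passed is false.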
Error Handling and Retry Logic
// playwright.config.js
module.exports = {
  // Retry failed tests in distributed environments
  retries: process.env.CI ? 3 : 1,
  // Timeout settings for network issues
  timeout: 60000,
  use: {
    // Retry navigation failures
    navigationTimeout: 30000,
    actionTimeout: 15000,
  },
  // Per-shard result files for later aggregation
  reporter: [
    ['html'],
    ['json', { outputFile: `results-shard-${process.env.SHARD}.json` }]
  ]
};
Advanced Scenarios
Dynamic Test Distribution
For scenarios where you need to distribute tests dynamically based on test complexity or execution time, you can write custom distribution logic, similar in spirit to running multiple pages in parallel with Puppeteer but adapted for cross-machine distribution.
Load Balancing Tests
// scripts/distribute-tests.js
const fs = require('fs');
const glob = require('glob');

function distributeTests(testPattern, shardCount) {
  const testFiles = glob.sync(testPattern);
  const shards = Array.from({ length: shardCount }, () => []);

  // Distribute tests evenly across shards (round-robin)
  testFiles.forEach((file, index) => {
    const shardIndex = index % shardCount;
    shards[shardIndex].push(file);
  });

  return shards;
}

// Generate shard-specific configurations
const shards = distributeTests('tests/**/*.spec.js', 4);
shards.forEach((shard, index) => {
  const config = {
    testMatch: shard,
    outputDir: `test-results-shard-${index + 1}`,
  };
  fs.writeFileSync(
    `playwright.shard-${index + 1}.config.js`,
    `module.exports = ${JSON.stringify(config, null, 2)};`
  );
});
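Round-robin splitting ignores how long each file actually takes. If you record per-file durations (for example from a previous run's JSON report), a greedy "longest job first" split usually balances shards better. A sketch, where the duration map is data you must collect yourself:

```javascript
// Greedy longest-processing-time distribution: sort files by measured
// duration (longest first), then always assign to the lightest shard.
function distributeByDuration(durations, shardCount) {
  const bins = Array.from({ length: shardCount }, () => ({ files: [], total: 0 }));
  const sorted = Object.entries(durations).sort((a, b) => b[1] - a[1]);
  for (const [file, ms] of sorted) {
    const lightest = bins.reduce((min, b) => (b.total < min.total ? b : min));
    lightest.files.push(file);
    lightest.total += ms;
  }
  return bins;
}

// Hypothetical durations in seconds from an earlier run
const balanced = distributeByDuration(
  { 'slow.spec.js': 90, 'mid.spec.js': 50, 'a.spec.js': 30, 'b.spec.js': 20 },
  2
);
// One shard gets only the 90s file; the other gets 50 + 30 + 20 = 100s
```

The greedy heuristic is not optimal in general, but it is simple, deterministic, and usually close enough to keep shard runtimes within a few percent of each other.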
Running Tests with Custom Automation
Python Test Distribution Script
#!/usr/bin/env python3
import subprocess
import sys
import concurrent.futures
from typing import Any, Dict, List

def run_shard(shard_config: Dict[str, Any]) -> Dict[str, Any]:
    """Run a specific test shard."""
    shard_id = shard_config['shard_id']
    total_shards = shard_config['total_shards']
    try:
        # Run Playwright tests for this shard
        result = subprocess.run([
            'npx', 'playwright', 'test',
            f'--shard={shard_id}/{total_shards}',
            '--reporter=json'
        ], capture_output=True, text=True, timeout=3600)
        return {
            'shard_id': shard_id,
            'success': result.returncode == 0,
            'output': result.stdout,
            'errors': result.stderr
        }
    except subprocess.TimeoutExpired:
        return {
            'shard_id': shard_id,
            'success': False,
            'output': '',
            'errors': 'Test execution timed out'
        }

def distribute_tests(total_shards: int) -> List[Dict[str, Any]]:
    """Distribute tests across multiple processes."""
    shard_configs = [
        {'shard_id': i + 1, 'total_shards': total_shards}
        for i in range(total_shards)
    ]
    results = []
    with concurrent.futures.ProcessPoolExecutor(max_workers=total_shards) as executor:
        future_to_shard = {
            executor.submit(run_shard, config): config
            for config in shard_configs
        }
        for future in concurrent.futures.as_completed(future_to_shard):
            result = future.result()
            results.append(result)
            print(f"Shard {result['shard_id']} completed: {'SUCCESS' if result['success'] else 'FAILED'}")
    return results

if __name__ == "__main__":
    total_shards = int(sys.argv[1]) if len(sys.argv) > 1 else 4
    results = distribute_tests(total_shards)

    # Check overall success
    all_passed = all(result['success'] for result in results)
    print(f"\nOverall result: {'ALL TESTS PASSED' if all_passed else 'SOME TESTS FAILED'}")
    sys.exit(0 if all_passed else 1)
Performance Optimization
Resource Allocation Strategies
// playwright.config.js with resource-aware configuration
const os = require('os');

module.exports = {
  // Adjust workers based on available CPU cores
  workers: process.env.CI ? 2 : Math.min(4, os.cpus().length),
  // Memory management
  use: {
    // Reduce browser resource usage to prevent memory issues
    launchOptions: {
      args: [
        '--disable-dev-shm-usage',
        '--disable-gpu',
        '--no-sandbox'
      ]
    }
  },
  // Timeout configurations for distributed environments
  timeout: 120000,
  expect: {
    timeout: 30000
  },
  // Optimize for CI environments
  reporter: process.env.CI ? [
    ['github'],
    ['json', { outputFile: `results-${process.env.SHARD || 1}.json` }]
  ] : [['html']]
};
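If you want to unit-test the worker heuristic used in the config above, it helps to extract it into a pure function. A minimal sketch (the function name and thresholds are illustrative, not a Playwright API):

```javascript
// Pick a worker count: conservative on CI, otherwise capped at 4
// and never more than the available cores.
function pickWorkers(cpuCount, isCI) {
  return isCI ? 2 : Math.min(4, cpuCount);
}

console.log(pickWorkers(8, false)); // 4
console.log(pickWorkers(2, false)); // 2
console.log(pickWorkers(16, true)); // 2
```

Keeping the heuristic separate from the config also lets each machine class (laptop, CI runner, dedicated test box) tune its cap without editing the shared config.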
Integration with Container Orchestration
Kubernetes Job Distribution
# playwright-test-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: playwright-tests
spec:
  parallelism: 4
  completions: 4
  # Indexed mode is required so each pod receives JOB_COMPLETION_INDEX
  completionMode: Indexed
  template:
    spec:
      containers:
        - name: playwright-shard
          image: mcr.microsoft.com/playwright:focal
          command: ["/bin/bash"]
          args:
            - -c
            - |
              SHARD_ID=$((JOB_COMPLETION_INDEX + 1))
              echo "Running shard $SHARD_ID of 4"
              npx playwright test --shard=$SHARD_ID/4
          env:
            - name: NODE_ENV
              value: "test"
            - name: CI
              value: "true"
          volumeMounts:
            - name: test-code
              mountPath: /app
          workingDir: /app
      volumes:
        - name: test-code
          configMap:
            name: playwright-tests
      restartPolicy: Never
Conclusion
Running Playwright tests in parallel across multiple machines significantly reduces test execution time and improves CI/CD pipeline efficiency. Choose the method that best fits your infrastructure: native sharding for simple setups, CI/CD matrix strategies for cloud environments, or Docker-based distribution for containerized workflows.
The key to successful test distribution is ensuring test independence, proper resource management, and effective result aggregation. With these strategies, you can scale your Playwright test suite to handle large applications while maintaining fast feedback loops for your development team.
For additional parallel execution techniques, you might also want to explore how to use Puppeteer with Docker for containerized testing environments that complement Playwright's distributed testing capabilities.