Curl is a powerful command-line tool for transferring data to/from servers using various protocols (HTTP, HTTPS, FTP, etc.). Sending multiple requests is common when testing APIs, performing load tests, or automating batch operations.
Sequential Requests
The simplest approach is to send requests one after another using a loop:
#!/bin/bash
# Basic sequential requests
for i in {1..5}; do
  curl -s "https://api.example.com/users/$i"
  echo "Request $i completed"
done
With Request Tracking
#!/bin/bash
url="https://api.example.com/data"

# Send 10 requests with status tracking
for i in {1..10}; do
  echo "Sending request $i..."
  response=$(curl -s -w "%{http_code}" -o "response_$i.json" "$url")
  echo "Request $i: HTTP $response"
  sleep 1  # Add delay between requests
done
Concurrent Requests (Background Processes)
For faster execution, run requests in parallel using background processes:
#!/bin/bash
# Send multiple requests concurrently
urls=(
  "https://api.example.com/endpoint1"
  "https://api.example.com/endpoint2"
  "https://api.example.com/endpoint3"
  "https://api.example.com/endpoint4"
)

# Start all requests in background
for i in "${!urls[@]}"; do
  curl -s "${urls[$i]}" -o "output_$i.json" &
done

# Wait for all background jobs to complete
wait
echo "All requests completed"
Using Curl's Built-in Parallel Feature
Modern curl versions (7.66.0+) support the --parallel option:
# Send multiple URLs in parallel; the [1-3] range is expanded by curl's URL
# globbing, and #1 in the output name is replaced with the matched value
curl --parallel --parallel-max 3 \
  "https://api.example.com/users/[1-3]" \
  -o "user_#1.json"
Configuration File Approach
Create a curl config file for complex scenarios:
# urls.txt
url = "https://api.example.com/users/1"
output = "user1.json"
url = "https://api.example.com/users/2"
output = "user2.json"
url = "https://api.example.com/users/3"
output = "user3.json"
curl --parallel --config urls.txt
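For larger batches, the config file can be generated by a script instead of written by hand. A minimal sketch, assuming the same hypothetical users endpoint:

#!/bin/bash
# Generate a curl config file for users 1..50, then fetch them in parallel
config="urls.txt"
: > "$config"  # create or truncate the config file

for i in $(seq 1 50); do
  {
    echo "url = \"https://api.example.com/users/$i\""
    echo "output = \"user$i.json\""
  } >> "$config"
done

curl --parallel --parallel-max 5 --config "$config"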
Advanced Examples
POST Requests with Different Data
#!/bin/bash
# Array of JSON payloads
payloads=(
  '{"name":"John","age":30}'
  '{"name":"Jane","age":25}'
  '{"name":"Bob","age":35}'
)

# Send POST requests concurrently
for i in "${!payloads[@]}"; do
  curl -X POST \
    -H "Content-Type: application/json" \
    -d "${payloads[$i]}" \
    "https://api.example.com/users" \
    -o "response_$i.json" &
done
wait
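For payloads that are too large to keep inline, curl can read the request body from a file with -d @filename. A sketch, assuming a hypothetical payloads/ directory of JSON files:

#!/bin/bash
# Send one POST per JSON file in the (hypothetical) payloads/ directory
for file in payloads/*.json; do
  curl -s -X POST \
    -H "Content-Type: application/json" \
    -d @"$file" \
    "https://api.example.com/users" \
    -o "response_$(basename "$file")" &
done
wait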
With Authentication and Headers
#!/bin/bash
api_key="your-api-key"
base_url="https://api.example.com"

# Multiple authenticated requests
endpoints=("users" "orders" "products" "analytics")

for endpoint in "${endpoints[@]}"; do
  curl -H "Authorization: Bearer $api_key" \
    -H "Accept: application/json" \
    "$base_url/$endpoint" \
    -o "$endpoint.json" &
done
wait
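Rather than hard-coding the key in the script, you can read it from an environment variable; this sketch assumes a variable named API_KEY:

#!/bin/bash
# Abort with a message if API_KEY is not set in the environment
api_key="${API_KEY:?Set the API_KEY environment variable first}"

curl -H "Authorization: Bearer $api_key" \
  -H "Accept: application/json" \
  "https://api.example.com/users" \
  -o users.json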
Performance Monitoring
Track response times and success rates:
#!/bin/bash
url="https://api.example.com/test"
success_count=0
total_requests=10

for i in $(seq 1 $total_requests); do
  start_time=$(date +%s.%N)   # %N (nanoseconds) requires GNU date
  http_code=$(curl -s -w "%{http_code}" -o /dev/null "$url")
  end_time=$(date +%s.%N)
  duration=$(echo "$end_time - $start_time" | bc)

  if [ "$http_code" -eq 200 ]; then
    ((success_count++))
    echo "Request $i: SUCCESS (${duration}s)"
  else
    echo "Request $i: FAILED (HTTP $http_code)"
  fi
done

echo "Success rate: $success_count/$total_requests"
Best Practices
- Rate Limiting: Add delays between requests to avoid overwhelming servers
sleep 0.1 # 100ms delay
- Error Handling: Check HTTP status codes and handle failures (several of these practices are combined in the sketch after this list)
if [ "$http_code" -ne 200 ]; then
echo "Request failed with HTTP $http_code"
fi
- Timeouts: Set reasonable timeouts to prevent hanging
curl --connect-timeout 10 --max-time 30 "$url"
- Connection Reuse: Pass multiple URLs to a single curl invocation so connections to the same server are reused (separate curl processes cannot share a connection)
curl "$url/users" "$url/orders" -o users.json -o orders.json
- Limit Concurrency: Don't exceed server limits
curl --parallel --parallel-max 5 # Limit to 5 concurrent requests
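Several of these practices can be combined in one loop. A minimal sketch, assuming a hypothetical endpoint, that applies timeouts, retries, status checking, and a short delay between requests:

#!/bin/bash
# Combine timeouts, retries, status checks, and a per-request delay
url="https://api.example.com/data"

for i in $(seq 1 10); do
  http_code=$(curl -s -o /dev/null \
    --connect-timeout 10 --max-time 30 \
    --retry 3 --retry-delay 2 \
    -w "%{http_code}" \
    "$url")

  if [ "$http_code" -ne 200 ]; then
    echo "Request $i failed with HTTP $http_code"
  fi

  sleep 0.1  # 100ms delay between requests
done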
Common Use Cases
- API Testing: Verify endpoints handle multiple requests
- Load Testing: Simulate user traffic patterns
- Data Migration: Batch process multiple records
- Health Checks: Monitor multiple services simultaneously (see the sketch after this list)
- Batch Downloads: Retrieve multiple files efficiently
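As one concrete illustration of the health-check case, the sketch below (with hypothetical service URLs) polls several endpoints concurrently and reports which are reachable:

#!/bin/bash
# Check several services concurrently and report their HTTP status
services=(
  "https://api.example.com/health"
  "https://auth.example.com/health"
  "https://cdn.example.com/health"
)

for service in "${services[@]}"; do
  (
    code=$(curl -s -o /dev/null --max-time 5 -w "%{http_code}" "$service")
    if [ "$code" = "200" ]; then
      echo "UP   $service"
    else
      echo "DOWN $service (HTTP $code)"
    fi
  ) &
done
wait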
Always respect server rate limits and terms of service when sending multiple requests. Monitor server responses and adjust your request frequency accordingly.