How do I implement request queueing and throttling with Alamofire?

Implementing request queueing and throttling with Alamofire is essential for managing API rate limits, preventing server overload, and ensuring smooth user experiences in iOS applications. This guide covers various strategies to control request flow and implement sophisticated throttling mechanisms.

Understanding Request Queueing and Throttling

Request queueing manages the order and timing of network requests, while throttling limits the rate at which requests are sent. These techniques are crucial when dealing with APIs that have strict rate limits or when you need to prevent overwhelming backend services.

Basic Request Throttling with Session Configuration

The simplest approach is to wrap Alamofire's Session in a small class that enforces a minimum interval between requests and caps concurrency through the underlying URLSessionConfiguration:

import Alamofire
import Foundation

class ThrottledSession {
    private let session: Session
    private let requestQueue = DispatchQueue(label: "request.throttle.queue", qos: .utility)
    private let requestInterval: TimeInterval
    private var lastRequestTime: Date = Date.distantPast

    init(maxConcurrentRequests: Int = 3, requestInterval: TimeInterval = 1.0) {
        self.requestInterval = requestInterval

        let configuration = URLSessionConfiguration.default
        // The URLSession layer enforces the concurrency cap,
        // so no separate semaphore is needed here
        configuration.httpMaximumConnectionsPerHost = maxConcurrentRequests

        self.session = Session(configuration: configuration)
    }

    func request(
        _ url: String,
        method: HTTPMethod = .get,
        parameters: Parameters? = nil,
        encoding: ParameterEncoding = URLEncoding.default,
        headers: HTTPHeaders? = nil
    ) -> DataRequest {
        // Block until the minimum interval since the previous request has
        // elapsed. Call this from a background thread: it sleeps the caller.
        throttleRequest()

        return session.request(url, method: method, parameters: parameters, encoding: encoding, headers: headers)
            .validate()
    }

    private func throttleRequest() {
        requestQueue.sync {
            let now = Date()
            let timeSinceLastRequest = now.timeIntervalSince(lastRequestTime)

            if timeSinceLastRequest < requestInterval {
                let sleepTime = requestInterval - timeSinceLastRequest
                Thread.sleep(forTimeInterval: sleepTime)
            }

            lastRequestTime = Date()
        }
    }
}
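
The heart of the throttle is the spacing computation in throttleRequest(). Pulled out as a pure function, the timing logic becomes trivial to unit-test. This is a Foundation-only sketch; sleepNeeded is an illustrative name, not part of Alamofire:

```swift
import Foundation

/// How long a caller must sleep before issuing the next request, given
/// the time of the previous request and the minimum allowed interval.
func sleepNeeded(now: Date, lastRequest: Date, interval: TimeInterval) -> TimeInterval {
    let elapsed = now.timeIntervalSince(lastRequest)
    return max(0, interval - elapsed)
}

let start = Date()
// First request ever: lastRequestTime is distantPast, so no wait is needed.
let first = sleepNeeded(now: start, lastRequest: .distantPast, interval: 1.0)
// A request fired 0.3 s after the previous one must wait out the remaining 0.7 s.
let second = sleepNeeded(now: start.addingTimeInterval(0.3), lastRequest: start, interval: 1.0)
print(first, second)
```

Clamping at zero matters: without the max(0, ...), a long-idle session would compute a negative sleep time.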

Advanced Queue-Based Request Management

For more sophisticated control, implement a custom request queue manager that handles priority, retry logic, and advanced throttling:

import Alamofire
import Foundation

protocol QueuedRequest {
    var priority: RequestPriority { get }
    var maxRetries: Int { get }
    var retryDelay: TimeInterval { get }
    func execute() -> DataRequest
}

enum RequestPriority: Int, Comparable {
    case low = 0
    case normal = 1
    case high = 2
    case critical = 3

    static func < (lhs: RequestPriority, rhs: RequestPriority) -> Bool {
        return lhs.rawValue < rhs.rawValue
    }
}
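
Because RequestPriority conforms to Comparable, the descending sort used later by the queue manager can be checked in isolation. A self-contained copy of the enum for illustration:

```swift
enum RequestPriority: Int, Comparable {
    case low = 0, normal = 1, high = 2, critical = 3

    static func < (lhs: RequestPriority, rhs: RequestPriority) -> Bool {
        lhs.rawValue < rhs.rawValue
    }
}

// Sorting descending puts critical work at the front of the queue,
// which is exactly what RequestQueueManager.enqueueRequest relies on.
let pending: [RequestPriority] = [.normal, .critical, .low, .high]
let ordered = pending.sorted(by: >)
print(ordered)
```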

class AlamofireRequest: QueuedRequest {
    let url: String
    let method: HTTPMethod
    let parameters: Parameters?
    let headers: HTTPHeaders?
    let priority: RequestPriority
    let maxRetries: Int
    let retryDelay: TimeInterval
    private let session: Session

    init(
        session: Session,
        url: String,
        method: HTTPMethod = .get,
        parameters: Parameters? = nil,
        headers: HTTPHeaders? = nil,
        priority: RequestPriority = .normal,
        maxRetries: Int = 3,
        retryDelay: TimeInterval = 1.0
    ) {
        self.session = session
        self.url = url
        self.method = method
        self.parameters = parameters
        self.headers = headers
        self.priority = priority
        self.maxRetries = maxRetries
        self.retryDelay = retryDelay
    }

    func execute() -> DataRequest {
        return session.request(url, method: method, parameters: parameters, headers: headers)
            .validate()
    }
}

class RequestQueueManager {
    private let session: Session
    private var requestQueue: [QueuedRequest] = []
    private let queueLock = NSLock()
    private let processingQueue = DispatchQueue(label: "request.processing.queue", qos: .utility)
    private let semaphore: DispatchSemaphore
    private let throttleInterval: TimeInterval
    private var isProcessing = false
    private var lastRequestTime = Date.distantPast

    init(maxConcurrentRequests: Int = 5, throttleInterval: TimeInterval = 0.5) {
        let configuration = URLSessionConfiguration.default
        configuration.timeoutIntervalForRequest = 30
        configuration.timeoutIntervalForResource = 60

        self.session = Session(configuration: configuration)
        self.semaphore = DispatchSemaphore(value: maxConcurrentRequests)
        self.throttleInterval = throttleInterval
    }

    func enqueueRequest(_ request: QueuedRequest) {
        queueLock.lock()
        defer { queueLock.unlock() }

        requestQueue.append(request)
        requestQueue.sort { $0.priority > $1.priority }

        startProcessingIfNeeded()
    }

    func enqueueRequest(
        url: String,
        method: HTTPMethod = .get,
        parameters: Parameters? = nil,
        headers: HTTPHeaders? = nil,
        priority: RequestPriority = .normal
    ) {
        let request = AlamofireRequest(
            session: session,
            url: url,
            method: method,
            parameters: parameters,
            headers: headers,
            priority: priority
        )
        enqueueRequest(request)
    }

    private func startProcessingIfNeeded() {
        guard !isProcessing && !requestQueue.isEmpty else { return }

        isProcessing = true
        processingQueue.async { [weak self] in
            self?.processQueue()
        }
    }

    private func processQueue() {
        while true {
            queueLock.lock()
            guard let request = requestQueue.first else {
                isProcessing = false
                queueLock.unlock()
                break
            }
            requestQueue.removeFirst()
            queueLock.unlock()

            semaphore.wait()
            throttleIfNeeded()

            executeRequest(request) { [weak self] in
                self?.semaphore.signal()
            }
        }
    }

    private func throttleIfNeeded() {
        let now = Date()
        let timeSinceLastRequest = now.timeIntervalSince(lastRequestTime)

        if timeSinceLastRequest < throttleInterval {
            let sleepTime = throttleInterval - timeSinceLastRequest
            Thread.sleep(forTimeInterval: sleepTime)
        }

        lastRequestTime = Date()
    }

    private func executeRequest(_ queuedRequest: QueuedRequest, completion: @escaping () -> Void) {
        executeWithRetry(queuedRequest, attemptsLeft: queuedRequest.maxRetries, completion: completion)
    }

    private func executeWithRetry(_ queuedRequest: QueuedRequest, attemptsLeft: Int, completion: @escaping () -> Void) {
        let dataRequest = queuedRequest.execute()

        dataRequest.response { [weak self] response in
            switch response.result {
            case .success:
                print("Request succeeded: \(response.request?.url?.absoluteString ?? "Unknown URL")")
                completion()

            case .failure(let error):
                if attemptsLeft > 0 && self?.shouldRetry(error: error) == true {
                    print("Request failed, retrying in \(queuedRequest.retryDelay)s. Attempts left: \(attemptsLeft)")

                    DispatchQueue.global().asyncAfter(deadline: .now() + queuedRequest.retryDelay) {
                        self?.executeWithRetry(queuedRequest, attemptsLeft: attemptsLeft - 1, completion: completion)
                    }
                } else {
                    print("Request failed permanently: \(error)")
                    completion()
                }
            }
        }
    }

    private func shouldRetry(error: Error) -> Bool {
        if let afError = error as? AFError {
            switch afError {
            case .responseValidationFailed:
                return false
            case .sessionTaskFailed(let sessionError):
                let nsError = sessionError as NSError
                return nsError.code == NSURLErrorTimedOut || 
                       nsError.code == NSURLErrorNetworkConnectionLost
            default:
                return true
            }
        }
        return true
    }
}
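
The retry path above waits a fixed retryDelay between attempts. The best-practice alternative is exponential backoff; here is a minimal helper (backoffDelay is an illustrative name) that could replace the fixed delay in executeWithRetry:

```swift
import Foundation

/// Delay before retry number `attempt` (0-based): base * 2^attempt,
/// clamped to `cap` so late retries do not wait unboundedly long.
func backoffDelay(attempt: Int, base: TimeInterval = 1.0, cap: TimeInterval = 30.0) -> TimeInterval {
    min(cap, base * pow(2.0, Double(attempt)))
}

// With a 1 s base and a 30 s cap the schedule doubles until it hits the cap.
let schedule = (0..<7).map { backoffDelay(attempt: $0) }
print(schedule)
```

In production you would usually multiply each delay by random jitter (e.g. Double.random(in: 0.5...1.0)) so that many clients failing at once do not retry in lockstep.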

Token Bucket Rate Limiting

Implement a token bucket algorithm for more sophisticated rate limiting that allows for burst requests while maintaining long-term rate limits:

class TokenBucket {
    private let capacity: Int
    private let refillRate: Double // tokens per second
    private var tokens: Double
    private var lastRefillTime: Date
    private let lock = NSLock()

    init(capacity: Int, refillRate: Double) {
        self.capacity = capacity
        self.refillRate = refillRate
        self.tokens = Double(capacity)
        self.lastRefillTime = Date()
    }

    func tryConsume(tokens: Int = 1) -> Bool {
        lock.lock()
        defer { lock.unlock() }

        refillTokens()

        if self.tokens >= Double(tokens) {
            self.tokens -= Double(tokens)
            return true
        }

        return false
    }

    private func refillTokens() {
        let now = Date()
        let timePassed = now.timeIntervalSince(lastRefillTime)
        let tokensToAdd = timePassed * refillRate

        tokens = min(Double(capacity), tokens + tokensToAdd)
        lastRefillTime = now
    }
}
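
The refill step is the part worth checking carefully: tokens accrue continuously at refillRate and are clamped at capacity. The same arithmetic as refillTokens(), expressed as a pure function for verification (illustrative, not part of the class):

```swift
import Foundation

/// Token count after `elapsed` seconds, mirroring TokenBucket.refillTokens().
func refill(tokens: Double, capacity: Int, rate: Double, elapsed: TimeInterval) -> Double {
    min(Double(capacity), tokens + elapsed * rate)
}

// A bucket of capacity 5, refilling at 2 tokens/s, currently empty:
print(refill(tokens: 0, capacity: 5, rate: 2.0, elapsed: 1.0))  // 2 tokens after one second
print(refill(tokens: 0, capacity: 5, rate: 2.0, elapsed: 10.0)) // clamped at the capacity of 5
```

The clamp is what produces the "burst" behavior: an idle client accumulates up to burstCapacity tokens and can then fire that many requests back to back before settling to the steady refill rate.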

class RateLimitedAlamofireManager {
    private let session: Session
    private let tokenBucket: TokenBucket
    private let requestQueue = DispatchQueue(label: "rate.limited.requests", qos: .utility)

    init(requestsPerSecond: Double = 2.0, burstCapacity: Int = 5) {
        self.tokenBucket = TokenBucket(capacity: burstCapacity, refillRate: requestsPerSecond)
        self.session = Session()
    }

    func request(
        _ url: String,
        method: HTTPMethod = .get,
        parameters: Parameters? = nil,
        completion: @escaping (AFDataResponse<Data>) -> Void
    ) {
        requestQueue.async { [weak self] in
            self?.waitForToken()

            self?.session.request(url, method: method, parameters: parameters)
                .validate()
                .responseData { response in
                    DispatchQueue.main.async {
                        completion(response)
                    }
                }
        }
    }

    private func waitForToken() {
        while !tokenBucket.tryConsume() {
            Thread.sleep(forTimeInterval: 0.1)
        }
    }
}

Integration with Retry and Circuit Breaker Patterns

Combine throttling with retry logic and circuit breaker patterns for robust network handling:

enum CircuitState {
    case closed
    case open
    case halfOpen
}

class CircuitBreaker {
    private let failureThreshold: Int
    private let recoveryTimeout: TimeInterval
    private var failureCount = 0
    private var lastFailureTime: Date?
    private var state: CircuitState = .closed
    private let lock = NSLock()

    init(failureThreshold: Int = 5, recoveryTimeout: TimeInterval = 30.0) {
        self.failureThreshold = failureThreshold
        self.recoveryTimeout = recoveryTimeout
    }

    func canExecute() -> Bool {
        lock.lock()
        defer { lock.unlock() }

        switch state {
        case .closed:
            return true
        case .open:
            if let lastFailure = lastFailureTime,
               Date().timeIntervalSince(lastFailure) > recoveryTimeout {
                state = .halfOpen
                return true
            }
            return false
        case .halfOpen:
            return true
        }
    }

    func recordSuccess() {
        lock.lock()
        defer { lock.unlock() }

        failureCount = 0
        state = .closed
    }

    func recordFailure() {
        lock.lock()
        defer { lock.unlock() }

        failureCount += 1
        lastFailureTime = Date()

        if failureCount >= failureThreshold {
            state = .open
        }
    }
}
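
The transitions in canExecute() can be expressed as a pure decision function, which makes the closed → open → half-open cycle easy to verify. This is a self-contained sketch; gate and BreakerState are illustrative names:

```swift
import Foundation

enum BreakerState { case closed, open, halfOpen }

/// Mirrors CircuitBreaker.canExecute(): whether a request may proceed,
/// and the state the breaker should be in afterwards.
func gate(_ state: BreakerState,
          sinceLastFailure: TimeInterval,
          recoveryTimeout: TimeInterval) -> (allowed: Bool, next: BreakerState) {
    switch state {
    case .closed:
        return (true, .closed)
    case .open:
        // After the recovery timeout, let one probe request through.
        return sinceLastFailure > recoveryTimeout
            ? (true, .halfOpen)
            : (false, .open)
    case .halfOpen:
        return (true, .halfOpen)
    }
}

// While open and inside the timeout window, requests are rejected outright:
print(gate(.open, sinceLastFailure: 5, recoveryTimeout: 30).allowed)
// Once the timeout passes, the breaker half-opens to allow a probe:
print(gate(.open, sinceLastFailure: 31, recoveryTimeout: 30).next)
```

A success in the half-open state then closes the breaker (recordSuccess), while a failure reopens it (recordFailure), completing the cycle.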

class ResilientRequestManager {
    private let queueManager: RequestQueueManager
    private let circuitBreaker: CircuitBreaker
    private let session = Session()

    init() {
        self.queueManager = RequestQueueManager(maxConcurrentRequests: 3, throttleInterval: 0.5)
        self.circuitBreaker = CircuitBreaker(failureThreshold: 5, recoveryTimeout: 30.0)
    }

    func makeRequest(
        url: String,
        method: HTTPMethod = .get,
        parameters: Parameters? = nil,
        priority: RequestPriority = .normal,
        completion: @escaping (Result<Data, Error>) -> Void
    ) {
        guard circuitBreaker.canExecute() else {
            completion(.failure(NSError(domain: "CircuitBreakerOpen", code: 503, userInfo: nil)))
            return
        }

        let request = AlamofireRequest(
            session: session, // reuse one shared Session rather than creating one per request
            url: url,
            method: method,
            parameters: parameters,
            priority: priority
        )

        // Custom execution logic that integrates with the circuit breaker
        executeWithCircuitBreaker(request, completion: completion)
    }

    private func executeWithCircuitBreaker(_ request: AlamofireRequest, completion: @escaping (Result<Data, Error>) -> Void) {
        request.execute().responseData { [weak self] response in
            switch response.result {
            case .success(let data):
                self?.circuitBreaker.recordSuccess()
                completion(.success(data))
            case .failure(let error):
                self?.circuitBreaker.recordFailure()
                completion(.failure(error))
            }
        }
    }
}

Usage Examples

Here's how to use the queue manager and rate-limited components in your application:

class NetworkService {
    private let requestManager = RequestQueueManager(maxConcurrentRequests: 3, throttleInterval: 1.0)
    private let rateLimitedManager = RateLimitedAlamofireManager(requestsPerSecond: 2.0, burstCapacity: 5)

    func fetchUserData(userId: String) {
        // High priority request
        requestManager.enqueueRequest(
            url: "https://api.example.com/users/\(userId)",
            method: .get,
            priority: .high
        )
    }

    func fetchFeedData() {
        // Normal priority request with rate limiting
        rateLimitedManager.request("https://api.example.com/feed") { response in
            switch response.result {
            case .success(let data):
                print("Feed data received: \(data.count) bytes")
            case .failure(let error):
                print("Feed request failed: \(error)")
            }
        }
    }

    func bulkDataSync(urls: [String]) {
        // Queue multiple requests with different priorities
        for (index, url) in urls.enumerated() {
            let priority: RequestPriority = index < 3 ? .high : .normal
            requestManager.enqueueRequest(url: url, priority: priority)
        }
    }
}

Best Practices for Request Management

  1. Choose appropriate concurrency limits based on your server's capacity and API rate limits
  2. Implement exponential backoff for retry mechanisms to avoid overwhelming servers
  3. Use priority queues to ensure critical requests are processed first
  4. Monitor queue depth and implement alerts for unusual patterns
  5. Test thoroughly with various network conditions and load scenarios

Monitoring and Debugging

Add comprehensive logging and metrics to monitor your request queue performance:

class RequestMetrics {
    static let shared = RequestMetrics()

    private let lock = NSLock()
    private var totalRequests = 0
    private var successfulRequests = 0
    private var failedRequests = 0
    private var averageResponseTime: TimeInterval = 0

    func recordRequest(success: Bool, responseTime: TimeInterval) {
        // The shared instance can be hit from any thread, so serialize updates
        lock.lock()
        defer { lock.unlock() }

        totalRequests += 1

        if success {
            successfulRequests += 1
        } else {
            failedRequests += 1
        }

        // Update the running (cumulative) average response time
        averageResponseTime = (averageResponseTime * Double(totalRequests - 1) + responseTime) / Double(totalRequests)
    }

    func printStats() {
        lock.lock()
        defer { lock.unlock() }

        guard totalRequests > 0 else {
            print("Request Stats - no requests recorded yet")
            return
        }

        let successRate = Double(successfulRequests) / Double(totalRequests) * 100
        print("Request Stats - Total: \(totalRequests), Success Rate: \(String(format: "%.2f", successRate))%, Avg Response: \(String(format: "%.2f", averageResponseTime))s")
    }
}
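
The average in recordRequest() is the standard incremental formula avg_n = (avg_{n-1} * (n - 1) + x_n) / n, which avoids storing every sample. A quick check of that arithmetic in isolation (updatedAverage is an illustrative name):

```swift
// Incremental running average, as used in recordRequest().
func updatedAverage(previous: Double, count: Int, newValue: Double) -> Double {
    (previous * Double(count - 1) + newValue) / Double(count)
}

var avg = 0.0
for (i, sample) in [1.0, 2.0, 3.0].enumerated() {
    avg = updatedAverage(previous: avg, count: i + 1, newValue: sample)
}
print(avg) // 2.0, the mean of 1, 2, 3
```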

Implementing proper request queueing and throttling with Alamofire requires careful consideration of your specific use case, API limitations, and application requirements. The patterns shown above provide a solid foundation for building robust, scalable network layers that can handle high-volume requests while respecting rate limits and maintaining excellent user experiences.

These techniques are particularly valuable when building applications that need to handle large amounts of data, integrate with multiple APIs, or operate in environments with strict network constraints.
