How do I integrate SwiftSoup with Combine framework for reactive programming?

Integrating SwiftSoup with Apple's Combine framework enables you to create powerful, reactive web scraping solutions in Swift. This combination allows you to handle asynchronous HTML parsing operations with functional reactive programming paradigms, making your code more maintainable and responsive.

Understanding SwiftSoup and Combine Integration

SwiftSoup is a pure Swift HTML parser inspired by jsoup, while Combine is Apple's framework for handling asynchronous events through publishers and subscribers. When combined, they create a robust solution for parsing HTML content reactively.
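
At its simplest, a SwiftSoup parse can be wrapped directly in a Combine pipeline. Here is a minimal sketch using a hard-coded HTML string (the markup and selector are purely illustrative):

import Combine
import SwiftSoup

let html = "<html><body><p>Hello</p></body></html>"

// Wrap a synchronous, throwing parse in a publisher so errors flow
// through the pipeline instead of requiring do/catch at the call site.
let textPublisher = Just(html)
    .tryMap { try SwiftSoup.parse($0) }
    .tryMap { try $0.select("p").first()?.text() ?? "" }
    .eraseToAnyPublisher()

let cancellable = textPublisher.sink(
    receiveCompletion: { completion in print(completion) },
    receiveValue: { text in print(text) } // prints "Hello"
)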

The key benefits of this integration include:

  • Asynchronous Processing: handle multiple parsing operations without blocking the main thread
  • Error Handling: built-in error propagation through the Combine pipeline
  • Cancellation Support: easily cancel ongoing operations
  • Composability: chain multiple parsing operations together

Basic SwiftSoup with Combine Setup

First, add SwiftSoup to your project (for example, via Swift Package Manager); Combine and Foundation ship with the system. Then import all three:

import SwiftSoup
import Combine
import Foundation

class WebScrapingService {
    private var cancellables = Set<AnyCancellable>()

    func parseHTML(from url: URL) -> AnyPublisher<Document, Error> {
        URLSession.shared.dataTaskPublisher(for: url)
            .map(\.data)
            .tryMap { data -> Document in
                guard let html = String(data: data, encoding: .utf8) else {
                    throw ScrapingError.invalidEncoding
                }
                return try SwiftSoup.parse(html)
            }
            // No scheduler is pinned here; callers decide where to receive
            // results (e.g. on the main queue before updating UI).
            .eraseToAnyPublisher()
    }
}

enum ScrapingError: Error {
    case invalidEncoding
    case parsingFailed
    case elementNotFound
}
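
For illustration, here is how parseHTML(from:) might be consumed; the URL is just a placeholder:

let service = WebScrapingService()
var cancellables = Set<AnyCancellable>()

if let url = URL(string: "https://example.com") {
    service.parseHTML(from: url)
        .sink(
            receiveCompletion: { completion in
                if case .failure(let error) = completion {
                    print("Request or parsing failed: \(error)")
                }
            },
            receiveValue: { document in
                // title() throws, so probe it with try?
                print("Parsed title:", (try? document.title()) ?? "n/a")
            }
        )
        .store(in: &cancellables)
}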

Creating Reactive HTML Parsing Publishers

Here's how to create specialized publishers for different parsing tasks:

extension WebScrapingService {
    // Extract all links from a webpage
    func extractLinks(from url: URL) -> AnyPublisher<[String], Error> {
        parseHTML(from: url)
            .tryMap { document -> [String] in
                try document.select("a[href]").array().compactMap { element in
                    try element.attr("href")
                }
            }
            .eraseToAnyPublisher()
    }

    // Extract specific elements by CSS selector
    func extractElements(from url: URL, selector: String) -> AnyPublisher<[Element], Error> {
        parseHTML(from: url)
            .tryMap { document -> [Element] in
                try document.select(selector).array()
            }
            .eraseToAnyPublisher()
    }

    // Extract text content from specific elements
    func extractText(from url: URL, selector: String) -> AnyPublisher<[String], Error> {
        parseHTML(from: url)
            .tryMap { document -> [String] in
                try document.select(selector).array().map { element in
                    try element.text()
                }
            }
            .eraseToAnyPublisher()
    }
}
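
Because each helper returns a publisher, the pieces compose. For example, links and headings can be fetched in one pipeline with Publishers.Zip (a sketch reusing service, url, and cancellables from the snippet above; the selector is a placeholder):

Publishers.Zip(
    service.extractLinks(from: url),
    service.extractText(from: url, selector: "h2")
)
.sink(
    receiveCompletion: { _ in },
    receiveValue: { links, headings in
        print("\(links.count) links, \(headings.count) headings")
    }
)
.store(in: &cancellables)

Note that this issues two requests for the same page; to fetch only once, share a single parseHTML publisher (e.g. with .share()) or derive both values in one tryMap.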

Advanced Reactive Patterns with SwiftSoup

Chaining Multiple Parsing Operations

extension WebScrapingService {
    func scrapeProductDetails(from urls: [URL]) -> AnyPublisher<[ProductInfo], Error> {
        Publishers.Sequence(sequence: urls)
            .flatMap { url in
                self.parseHTML(from: url)
                    .tryMap { document -> ProductInfo in
                        let title = try document.select("h1.product-title").first()?.text() ?? ""
                        let price = try document.select(".price").first()?.text() ?? ""
                        let description = try document.select(".product-description").first()?.text() ?? ""

                        return ProductInfo(
                            title: title,
                            price: price,
                            description: description,
                            url: url
                        )
                    }
            }
            .collect()
            .eraseToAnyPublisher()
    }
}

struct ProductInfo {
    let title: String
    let price: String
    let description: String
    let url: URL
}

Handling Pagination with Combine

extension WebScrapingService {
    func scrapePaginatedContent(baseURL: URL, maxPages: Int = 10) -> AnyPublisher<[Element], Error> {
        (1...maxPages).publisher
            .setFailureType(to: Error.self) // align failure types for flatMap
            .flatMap { pageNumber -> AnyPublisher<[Element], Error> in
                let pageURL = baseURL.appendingPathComponent("page/\(pageNumber)")
                return self.extractElements(from: pageURL, selector: ".content-item")
            }
            .reduce([]) { accumulated, newElements in
                accumulated + newElements
            }
            .eraseToAnyPublisher()
    }
}
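
One caveat: flatMap as written subscribes to every page publisher up front. If the server must be paged strictly in order, the demand can be capped at one (a sketch under that assumption):

extension WebScrapingService {
    func scrapePaginatedSequentially(baseURL: URL, maxPages: Int = 10) -> AnyPublisher<[Element], Error> {
        (1...maxPages).publisher
            .setFailureType(to: Error.self)
            // .max(1) means page N+1 is requested only after page N finishes
            .flatMap(maxPublishers: .max(1)) { pageNumber -> AnyPublisher<[Element], Error> in
                let pageURL = baseURL.appendingPathComponent("page/\(pageNumber)")
                return self.extractElements(from: pageURL, selector: ".content-item")
            }
            .reduce([]) { accumulated, newElements in
                accumulated + newElements
            }
            .eraseToAnyPublisher()
    }
}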

Error Handling in Reactive SwiftSoup

Proper error handling is crucial when combining SwiftSoup with Combine:

extension WebScrapingService {
    func robustHTMLParsing(from url: URL) -> AnyPublisher<Document, Never> {
        parseHTML(from: url)
            .retry(3) // Retry up to 3 times on failure
            .catch { error -> AnyPublisher<Document, Never> in
                print("Parsing failed: \(error)")
                // Return empty document on failure
                return Just(try! SwiftSoup.parse("<html></html>"))
                    .eraseToAnyPublisher()
            }
            .eraseToAnyPublisher()
    }

    // Custom error handling with specific recovery strategies
    func parseWithFallback(from url: URL) -> AnyPublisher<ParseResult, Never> {
        parseHTML(from: url)
            .map { document in ParseResult.success(document) }
            .catch { error -> AnyPublisher<ParseResult, Never> in
                switch error {
                case ScrapingError.invalidEncoding:
                    return Just(ParseResult.encodingError)
                        .eraseToAnyPublisher()
                case ScrapingError.parsingFailed:
                    return Just(ParseResult.parseError)
                        .eraseToAnyPublisher()
                default:
                    return Just(ParseResult.networkError)
                        .eraseToAnyPublisher()
                }
            }
            .eraseToAnyPublisher()
    }
}

enum ParseResult {
    case success(Document)
    case encodingError
    case parseError
    case networkError
}
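
Because parseWithFallback never fails, a subscriber only needs a value handler and can switch over the recovered ParseResult (again reusing service, url, and cancellables from earlier snippets):

service.parseWithFallback(from: url)
    .sink { result in
        switch result {
        case .success(let document):
            print("Parsed:", (try? document.title()) ?? "untitled")
        case .encodingError:
            print("Response was not valid UTF-8")
        case .parseError:
            print("HTML could not be parsed")
        case .networkError:
            print("Network request failed")
        }
    }
    .store(in: &cancellables)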

Practical Usage Examples

SwiftUI Integration

import SwiftUI

struct ContentView: View {
    @StateObject private var viewModel = WebScrapingViewModel()

    var body: some View {
        NavigationView {
            List(viewModel.articles, id: \.title) { article in
                VStack(alignment: .leading) {
                    Text(article.title)
                        .font(.headline)
                    Text(article.summary)
                        .font(.caption)
                        .foregroundColor(.secondary)
                }
            }
            .navigationTitle("Articles")
            .onAppear {
                viewModel.loadArticles()
            }
        }
    }
}

class WebScrapingViewModel: ObservableObject {
    @Published var articles: [Article] = []
    private var cancellables = Set<AnyCancellable>()
    private let scrapingService = WebScrapingService()

    func loadArticles() {
        guard let url = URL(string: "https://example.com/news") else { return }

        scrapingService.extractElements(from: url, selector: "article")
            .tryMap { elements -> [Article] in
                try elements.map { element in
                    let title = try element.select("h2").first()?.text() ?? ""
                    let summary = try element.select("p").first()?.text() ?? ""
                    return Article(title: title, summary: summary)
                }
            }
            .replaceError(with: [])
            .receive(on: DispatchQueue.main)
            .sink { [weak self] articles in
                self?.articles = articles
            }
            .store(in: &cancellables)
    }
}

struct Article {
    let title: String
    let summary: String
}

Background Processing with Combine

class BackgroundScrapingService {
    private let backgroundQueue = DispatchQueue(label: "scraping.background", qos: .utility)
    private let scrapingService = WebScrapingService()
    private var cancellables = Set<AnyCancellable>()

    func performBackgroundScraping(urls: [URL]) {
        Publishers.Sequence(sequence: urls)
            .subscribe(on: backgroundQueue)
            .flatMap { url in
                // Skip URLs that fail rather than aborting the whole batch
                self.scrapingService.parseHTML(from: url)
                    .catch { _ in Empty<Document, Never>() }
            }
            }
            .tryMap { document -> ScrapedData in
                // Process document on background thread
                let title = try document.select("title").first()?.text() ?? ""
                let metaDescription = try document.select("meta[name=description]").attr("content")
                return ScrapedData(title: title, description: metaDescription)
            }
            .receive(on: DispatchQueue.main)
            .sink(
                receiveCompletion: { completion in
                    if case .failure(let error) = completion {
                        print("Scraping completed with error: \(error)")
                    }
                },
                receiveValue: { data in
                    // Update UI with scraped data
                    print("Scraped: \(data.title)")
                }
            )
            .store(in: &cancellables)
    }
}

struct ScrapedData {
    let title: String
    let description: String
}

Performance Optimization Strategies

Concurrent Processing

extension WebScrapingService {
    // Per-URL failures are swallowed by the inner catch, so the combined
    // publisher cannot fail and its failure type is Never.
    func concurrentScraping(urls: [URL], maxConcurrent: Int = 3) -> AnyPublisher<[Document], Never> {
        Publishers.Sequence(sequence: urls)
            .flatMap(maxPublishers: .max(maxConcurrent)) { url in
                self.parseHTML(from: url)
                    .catch { _ in Empty<Document, Never>() }
            }
            .collect()
            .eraseToAnyPublisher()
    }
}

Memory Management

class ManagedScrapingService {
    private let scrapingService = WebScrapingService()
    private var cancellables = Set<AnyCancellable>()
    private weak var delegate: ScrapingDelegate?

    init(delegate: ScrapingDelegate? = nil) {
        self.delegate = delegate
    }

    func startScraping(url: URL) {
        scrapingService.parseHTML(from: url)
            .receive(on: DispatchQueue.main)
            .sink(
                receiveCompletion: { [weak self] completion in
                    // Clean up resources
                    self?.cancellables.removeAll()
                },
                receiveValue: { [weak self] document in
                    self?.delegate?.didReceiveDocument(document)
                }
            )
            .store(in: &cancellables)
    }

    func cancelAll() {
        cancellables.removeAll()
    }
}

protocol ScrapingDelegate: AnyObject {
    func didReceiveDocument(_ document: Document)
}
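
For context, a minimal delegate conformer might look like this (the type name is hypothetical):

final class DocumentLogger: ScrapingDelegate {
    func didReceiveDocument(_ document: Document) {
        print("Received document:", (try? document.title()) ?? "untitled")
    }
}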

Best Practices and Considerations

  1. Thread Safety: Always receive results on the main queue when updating UI
  2. Memory Management: Use weak references in closures to prevent retain cycles
  3. Error Recovery: Implement robust error handling with retry mechanisms
  4. Cancellation: Store cancellables properly and clean them up when needed
  5. Performance: Use appropriate schedulers and limit concurrent operations

While SwiftSoup with Combine is powerful for iOS applications, you might also consider server-side solutions for more intensive scraping operations, similar to how you might handle browser sessions in Puppeteer for JavaScript-based web scraping.

Testing Reactive SwiftSoup Code

import XCTest
import Combine
import SwiftSoup

class WebScrapingServiceTests: XCTestCase {
    var service: WebScrapingService!
    var cancellables: Set<AnyCancellable>!

    override func setUp() {
        super.setUp()
        service = WebScrapingService()
        cancellables = Set<AnyCancellable>()
    }

    // Integration test: exercises the live network, so it may be slow or flaky.
    func testHTMLParsing() {
        let expectation = XCTestExpectation(description: "HTML parsing")
        let testURL = URL(string: "https://example.com")!

        service.parseHTML(from: testURL)
            .sink(
                receiveCompletion: { completion in
                    if case .failure(let error) = completion {
                        XCTFail("Parsing failed: \(error)")
                    }
                },
                receiveValue: { document in
                    XCTAssertNotNil(document)
                    expectation.fulfill()
                }
            )
            .store(in: &cancellables)

        wait(for: [expectation], timeout: 10.0)
    }
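
    // A deterministic alternative: parse a fixture string directly so the
    // test never touches the network (markup and selector are illustrative).
    func testSelectorExtraction() throws {
        let html = "<html><body><a href='https://example.com'>Example</a></body></html>"
        let document = try SwiftSoup.parse(html)
        let hrefs = try document.select("a[href]").array().map { try $0.attr("href") }
        XCTAssertEqual(hrefs, ["https://example.com"])
    }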
}

Advanced Use Cases and Real-World Applications

RSS Feed Parser with SwiftSoup and Combine

class RSSFeedParser {
    private let scrapingService = WebScrapingService()

    func parseFeed(from url: URL) -> AnyPublisher<[FeedItem], Error> {
        // Caveat: RSS is XML, and SwiftSoup's HTML parser treats <link> as
        // a void element, so link text may be lost; see the XML-mode
        // variant after this example.
        scrapingService.parseHTML(from: url)
            .tryMap { document -> [FeedItem] in
                try document.select("item").array().compactMap { item in
                    guard let title = try? item.select("title").first()?.text(),
                          let link = try? item.select("link").first()?.text(),
                          let description = try? item.select("description").first()?.text() else {
                        return nil
                    }

                    return FeedItem(title: title, link: link, description: description)
                }
            }
            .eraseToAnyPublisher()
    }
}

struct FeedItem {
    let title: String
    let link: String
    let description: String
}
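
As flagged in the comment above, feeding XML through the HTML parser is lossy. SwiftSoup also ships an XML parser (Parser.xmlParser()); a sketch of an XML-mode publisher, reusing ScrapingError from earlier:

extension WebScrapingService {
    func parseXML(from url: URL) -> AnyPublisher<Document, Error> {
        URLSession.shared.dataTaskPublisher(for: url)
            .map(\.data)
            .tryMap { data -> Document in
                guard let xml = String(data: data, encoding: .utf8) else {
                    throw ScrapingError.invalidEncoding
                }
                // The XML parser preserves tags like <link> that the HTML
                // parser would treat as void elements
                return try SwiftSoup.parse(xml, "", Parser.xmlParser())
            }
            .eraseToAnyPublisher()
    }
}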

E-commerce Price Monitoring

class PriceMonitor {
    private let scrapingService = WebScrapingService()

    func startMonitoring(products: [ProductURL], interval: TimeInterval = 3600) -> AnyPublisher<[PriceUpdate], Never> {
        Timer.publish(every: interval, on: .main, in: .common)
            .autoconnect()
            .flatMap { _ in
                self.checkPrices(for: products)
            }
            .eraseToAnyPublisher()
    }

    private func checkPrices(for products: [ProductURL]) -> AnyPublisher<[PriceUpdate], Never> {
        Publishers.Sequence(sequence: products)
            .flatMap { product in
                self.scrapingService.extractText(from: product.url, selector: product.priceSelector)
                    .map { prices in
                        PriceUpdate(
                            productId: product.id,
                            currentPrice: prices.first ?? "N/A",
                            timestamp: Date()
                        )
                    }
                    .catch { _ in Just(PriceUpdate(productId: product.id, currentPrice: "Error", timestamp: Date())) }
            }
            .collect()
            .eraseToAnyPublisher()
    }
}

struct ProductURL {
    let id: String
    let url: URL
    let priceSelector: String
}

struct PriceUpdate {
    let productId: String
    let currentPrice: String
    let timestamp: Date
}
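
A sketch of consuming the monitor (the product data and interval are hypothetical):

let monitor = PriceMonitor()
var cancellables = Set<AnyCancellable>()

let products = [
    ProductURL(id: "sku-1",
               url: URL(string: "https://example.com/product/1")!,
               priceSelector: ".price")
]

monitor.startMonitoring(products: products, interval: 1800)
    .sink { updates in
        for update in updates {
            print("\(update.productId): \(update.currentPrice) at \(update.timestamp)")
        }
    }
    .store(in: &cancellables)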

For more complex web scraping scenarios that require JavaScript execution or sophisticated session management, you might want to explore server-side solutions that can handle timeouts in Puppeteer or other browser automation tools.

Conclusion

Integrating SwiftSoup with Combine creates a powerful foundation for reactive web scraping in iOS applications. This approach provides excellent separation of concerns, robust error handling, and seamless integration with SwiftUI, making it ideal for building responsive, data-driven mobile applications that need to parse web content efficiently. The reactive nature of Combine ensures that your scraping operations remain performant and maintainable while providing a clean API for handling asynchronous HTML parsing tasks.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
