How Do I Handle Errors When Connecting to MCP Servers?

Error handling is crucial when working with MCP (Model Context Protocol) servers, as network issues, authentication failures, timeouts, and server errors can disrupt your web scraping workflows. Implementing robust error handling ensures your applications gracefully recover from failures and provide meaningful feedback to users.

Understanding Common MCP Connection Errors

Before diving into solutions, let's identify the most common errors you'll encounter when connecting to MCP servers (a small helper for deciding which of these are worth retrying follows the list):

  1. Connection Timeouts: Server takes too long to respond
  2. Authentication Errors: Invalid credentials or expired tokens
  3. Network Failures: DNS resolution issues, connection refused
  4. Server Errors: 500-level HTTP status codes
  5. Rate Limiting: Too many requests in a short period
  6. Protocol Errors: Incompatible MCP versions or malformed requests
  7. Resource Unavailability: Server temporarily down or overloaded
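
As a first step, it helps to sort these errors into ones that are worth retrying (timeouts, network failures, 5xx responses, rate limits) and ones that need intervention instead (authentication failures, protocol mismatches). Here is a minimal classification sketch in Python; the exception types and status-code checks are illustrative, so adapt them to whatever your MCP client actually raises:

import asyncio

# Transient errors that a retry loop can reasonably handle.
# Adjust these classes to match the exceptions your MCP client raises.
RETRYABLE_EXCEPTIONS = (
    asyncio.TimeoutError,     # connection or operation timeout
    ConnectionRefusedError,   # server down or port closed
    ConnectionResetError,     # network interruption mid-request
)

def is_retryable(error: Exception) -> bool:
    """Return True if the error is likely transient and worth retrying."""
    if isinstance(error, RETRYABLE_EXCEPTIONS):
        return True
    # Fall back to inspecting the message for HTTP-style status hints
    message = str(error)
    if any(code in message for code in ("429", "502", "503", "504")):
        return True
    # Authentication (401/403) and protocol errors need a fix, not a retry
    return False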

Basic Error Handling Pattern

Python Implementation

Here's a comprehensive error handling approach in Python. The Client connection API shown below is illustrative; adapt the connect calls to the transport your version of the MCP SDK exposes:

import asyncio
from mcp import Client
from mcp.client.session import ClientSession
import logging

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class MCPConnectionError(Exception):
    """Custom exception for MCP connection issues"""
    pass

async def connect_to_mcp_with_retry(
    server_url: str,
    max_retries: int = 3,
    retry_delay: int = 5,
    timeout: int = 30
):
    """
    Connect to MCP server with automatic retry logic

    Args:
        server_url: The MCP server URL
        max_retries: Maximum number of connection attempts
        retry_delay: Delay between retries in seconds
        timeout: Connection timeout in seconds

    Returns:
        ClientSession object

    Raises:
        MCPConnectionError: If connection fails after all retries
    """
    last_error = None

    for attempt in range(max_retries):
        try:
            logger.info(f"Connection attempt {attempt + 1}/{max_retries}")

            # Create client with timeout
            client = Client(timeout=timeout)

            # Establish connection
            session = await asyncio.wait_for(
                client.connect(server_url),
                timeout=timeout
            )

            logger.info("Successfully connected to MCP server")
            return session

        except asyncio.TimeoutError:
            last_error = f"Connection timeout after {timeout} seconds"
            logger.warning(f"Attempt {attempt + 1} failed: {last_error}")

        except ConnectionRefusedError:
            last_error = "Connection refused - server may be down"
            logger.warning(f"Attempt {attempt + 1} failed: {last_error}")

        except Exception as e:
            last_error = str(e)
            logger.error(f"Attempt {attempt + 1} failed: {last_error}")

        # Wait before retry (except on last attempt)
        if attempt < max_retries - 1:
            logger.info(f"Retrying in {retry_delay} seconds...")
            await asyncio.sleep(retry_delay)

    # All retries exhausted
    raise MCPConnectionError(
        f"Failed to connect after {max_retries} attempts. Last error: {last_error}"
    )

# Usage example
async def main():
    session = None

    try:
        session = await connect_to_mcp_with_retry(
            "http://localhost:3000",
            max_retries=3,
            retry_delay=5,
            timeout=30
        )

        # Use the session for scraping
        # ...

    except MCPConnectionError as e:
        logger.error(f"Connection failed: {e}")
        # Implement fallback logic or alert
    finally:
        if session:
            await session.close()

if __name__ == "__main__":
    asyncio.run(main())

JavaScript/TypeScript Implementation

For Node.js applications, here's a robust error handling implementation (again, treat the Client connection calls as illustrative and adapt them to your SDK version):

const { Client } = require('@modelcontextprotocol/sdk');

class MCPConnectionError extends Error {
  constructor(message, code = null) {
    super(message);
    this.name = 'MCPConnectionError';
    this.code = code;
  }
}

async function connectToMCPWithRetry(
  serverUrl,
  options = {}
) {
  const {
    maxRetries = 3,
    retryDelay = 5000,
    timeout = 30000,
    authToken = null
  } = options;

  let lastError = null;

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      console.log(`Connection attempt ${attempt + 1}/${maxRetries}`);

      // Create client with configuration
      const client = new Client({
        timeout: timeout,
        headers: authToken ? {
          'Authorization': `Bearer ${authToken}`
        } : {}
      });

      // Wrap connection in timeout promise
      const session = await Promise.race([
        client.connect(serverUrl),
        new Promise((_, reject) =>
          setTimeout(() => reject(new Error('Connection timeout')), timeout)
        )
      ]);

      console.log('Successfully connected to MCP server');
      return session;

    } catch (error) {
      lastError = error;

      // Handle specific error types
      if (error.code === 'ECONNREFUSED') {
        console.warn(`Attempt ${attempt + 1} failed: Connection refused`);
      } else if (error.code === 'ETIMEDOUT' || error.message === 'Connection timeout') {
        console.warn(`Attempt ${attempt + 1} failed: Timeout`);
      } else if (error.response?.status === 401) {
        // Authentication error - don't retry
        throw new MCPConnectionError('Authentication failed', 401);
      } else if (error.response?.status === 429) {
        // Rate limited - use exponential backoff
        const waitTime = retryDelay * Math.pow(2, attempt);
        console.warn(`Rate limited. Waiting ${waitTime}ms before retry`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      } else {
        console.error(`Attempt ${attempt + 1} failed:`, error.message);
      }

      // Wait before retry (except on last attempt)
      if (attempt < maxRetries - 1) {
        console.log(`Retrying in ${retryDelay}ms...`);
        await new Promise(resolve => setTimeout(resolve, retryDelay));
      }
    }
  }

  // All retries exhausted
  throw new MCPConnectionError(
    `Failed to connect after ${maxRetries} attempts. Last error: ${lastError.message}`,
    lastError.code
  );
}

// Usage example
async function main() {
  let session = null;

  try {
    session = await connectToMCPWithRetry('http://localhost:3000', {
      maxRetries: 3,
      retryDelay: 5000,
      timeout: 30000,
      authToken: process.env.MCP_AUTH_TOKEN
    });

    // Use the session for web scraping operations
    // ...

  } catch (error) {
    if (error instanceof MCPConnectionError) {
      console.error('MCP Connection Error:', error.message);
      // Implement fallback or alerting logic
    } else {
      console.error('Unexpected error:', error);
    }
  } finally {
    if (session) {
      await session.close();
    }
  }
}

main().catch(console.error);

Handling Specific Error Scenarios

Authentication Errors

Authentication failures require special handling since retrying won't help if credentials are invalid:

async def connect_with_auth(server_url: str, api_key: str):
    try:
        client = Client()
        session = await client.connect(
            server_url,
            headers={"Authorization": f"Bearer {api_key}"}
        )
        return session
    except Exception as e:
        if "401" in str(e) or "Unauthorized" in str(e):
            logger.error("Authentication failed - check your API key")
            raise ValueError("Invalid API credentials")
        elif "403" in str(e) or "Forbidden" in str(e):
            logger.error("Access forbidden - insufficient permissions")
            raise ValueError("Insufficient permissions")
        else:
            raise

Timeout Management

Different operations may require different timeout values. Similar to how you handle timeouts in Puppeteer, MCP connections benefit from configurable timeout strategies:

const timeoutConfig = {
  connection: 30000,      // 30 seconds for initial connection
  scraping: 120000,       // 2 minutes for scraping operations
  authentication: 10000   // 10 seconds for auth
};

async function performScrapingWithTimeout(session, url) {
  try {
    const result = await Promise.race([
      session.scrape(url),
      new Promise((_, reject) =>
        setTimeout(
          () => reject(new Error('Scraping timeout')),
          timeoutConfig.scraping
        )
      )
    ]);
    return result;
  } catch (error) {
    if (error.message === 'Scraping timeout') {
      console.error(`Scraping ${url} timed out after ${timeoutConfig.scraping}ms`);
      // Return partial results or retry with different strategy
    }
    throw error;
  }
}

Network Failure Recovery

Implement circuit breaker patterns for persistent network issues:

from datetime import datetime, timedelta

class CircuitBreaker:
    def __init__(self, failure_threshold=5, timeout=60):
        self.failure_threshold = failure_threshold
        self.timeout = timeout
        self.failures = 0
        self.last_failure_time = None
        self.state = "CLOSED"  # CLOSED, OPEN, HALF_OPEN

    def record_failure(self):
        self.failures += 1
        self.last_failure_time = datetime.now()

        if self.failures >= self.failure_threshold:
            self.state = "OPEN"
            logger.warning(f"Circuit breaker OPEN after {self.failures} failures")

    def record_success(self):
        self.failures = 0
        self.state = "CLOSED"
        logger.info("Circuit breaker CLOSED")

    def can_attempt(self):
        if self.state == "CLOSED":
            return True

        if self.state == "OPEN":
            # Check if timeout has passed
            if datetime.now() - self.last_failure_time > timedelta(seconds=self.timeout):
                self.state = "HALF_OPEN"
                logger.info("Circuit breaker HALF_OPEN - attempting recovery")
                return True
            return False

        # HALF_OPEN state
        return True

# Usage
breaker = CircuitBreaker(failure_threshold=5, timeout=60)

async def connect_with_circuit_breaker(server_url):
    if not breaker.can_attempt():
        raise MCPConnectionError("Circuit breaker is OPEN - server appears down")

    try:
        session = await connect_to_mcp_with_retry(server_url)
        breaker.record_success()
        return session
    except Exception as e:
        breaker.record_failure()
        raise

Error Logging and Monitoring

Comprehensive logging helps diagnose connection issues. Just as you would handle errors in Puppeteer with detailed logging, apply the same principles to MCP connections:

import json
from datetime import datetime

class MCPErrorLogger:
    def __init__(self, log_file="mcp_errors.log"):
        self.log_file = log_file

    def log_error(self, error_type, server_url, error_details, context=None):
        log_entry = {
            "timestamp": datetime.now().isoformat(),
            "error_type": error_type,
            "server_url": server_url,
            "error_details": str(error_details),
            "context": context or {}
        }

        with open(self.log_file, 'a') as f:
            f.write(json.dumps(log_entry) + "\n")

        # Also log to console
        logger.error(f"MCP Error: {error_type} - {error_details}")

# Usage
error_logger = MCPErrorLogger()

async def monitored_connection(server_url):
    try:
        return await connect_to_mcp_with_retry(server_url)
    except asyncio.TimeoutError as e:
        error_logger.log_error(
            "TIMEOUT",
            server_url,
            e,
            {"max_timeout": 30}
        )
        raise
    except ConnectionRefusedError as e:
        error_logger.log_error(
            "CONNECTION_REFUSED",
            server_url,
            e,
            {"server_status": "down"}
        )
        raise

Graceful Degradation

When MCP server connections fail, implement fallback strategies:

async function scrapeWithFallback(url, options = {}) {
  const strategies = [
    // Strategy 1: Primary MCP server
    async () => {
      const session = await connectToMCPWithRetry('http://primary-mcp:3000');
      return await session.scrape(url);
    },

    // Strategy 2: Backup MCP server
    async () => {
      console.log('Trying backup server...');
      const session = await connectToMCPWithRetry('http://backup-mcp:3000');
      return await session.scrape(url);
    },

    // Strategy 3: Direct scraping API fallback
    async () => {
      console.log('Falling back to direct API...');
      const response = await fetch(`https://api.webscraping.ai/html?url=${encodeURIComponent(url)}`, {
        headers: { 'Authorization': `Bearer ${process.env.WSA_API_KEY}` }
      });
      return await response.text();
    }
  ];

  let lastError = null;

  for (const [index, strategy] of strategies.entries()) {
    try {
      console.log(`Trying strategy ${index + 1}/${strategies.length}`);
      return await strategy();
    } catch (error) {
      lastError = error;
      console.warn(`Strategy ${index + 1} failed:`, error.message);
    }
  }

  throw new Error(`All strategies failed. Last error: ${lastError.message}`);
}

Health Checks and Proactive Monitoring

Implement health checks to detect issues before they affect production:

async def check_mcp_health(server_url: str) -> dict:
    """
    Perform health check on MCP server

    Returns:
        dict with status, latency, and error info
    """
    start_time = datetime.now()

    try:
        client = Client(timeout=5)
        session = await client.connect(server_url)

        # Simple ping operation
        await session.ping()

        latency = (datetime.now() - start_time).total_seconds()

        await session.close()

        return {
            "status": "healthy",
            "latency_ms": latency * 1000,
            "timestamp": datetime.now().isoformat()
        }

    except Exception as e:
        return {
            "status": "unhealthy",
            "error": str(e),
            "timestamp": datetime.now().isoformat()
        }

# Periodic health check
async def monitor_mcp_servers():
    servers = [
        "http://mcp-server-1:3000",
        "http://mcp-server-2:3000"
    ]

    while True:
        for server in servers:
            health = await check_mcp_health(server)
            logger.info(f"Health check for {server}: {health}")

            if health["status"] == "unhealthy":
                # Send alert (send_alert is a placeholder; a sketch follows this code block)
                send_alert(f"MCP server {server} is unhealthy: {health['error']}")

        # Check every 60 seconds
        await asyncio.sleep(60)
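
The send_alert call above is not defined anywhere in the examples; treat it as a placeholder for whatever alerting channel you already use. As one possible shape, here is a minimal webhook-based sketch (the webhook URL is a made-up example):

import json
import logging
import urllib.request

logger = logging.getLogger(__name__)

def send_alert(message: str, webhook_url: str = "https://hooks.example.com/alerts"):
    """Placeholder alert sender: POST the message to an incoming webhook.

    The webhook URL above is fictional; swap in Slack, PagerDuty, email,
    or any other channel your team already monitors.
    """
    payload = json.dumps({"text": message}).encode("utf-8")
    request = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(request, timeout=10)
    except Exception as e:
        # Alerting failures should never crash the monitoring loop
        logger.error(f"Failed to send alert: {e}")

In production you would likely use an async HTTP client or an alerting SDK so the monitoring loop is not blocked while the request is in flight.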

Best Practices for Error Handling

  1. Use Exponential Backoff: Increase retry delays exponentially to avoid overwhelming recovering servers (a backoff-with-jitter sketch follows this list)
  2. Set Appropriate Timeouts: Different operations need different timeout values
  3. Implement Circuit Breakers: Prevent cascading failures in distributed systems
  4. Log Everything: Detailed logs are essential for debugging connection issues
  5. Monitor Health Proactively: Don't wait for failures to detect problems
  6. Use Fallback Strategies: Have alternative approaches when primary method fails
  7. Handle Authentication Separately: Don't retry operations that fail due to invalid credentials
  8. Provide Clear Error Messages: Help users understand what went wrong and how to fix it
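
The JavaScript example earlier already applies exponential backoff when it hits a 429 response; the same idea generalizes to any retry loop. Here is a minimal, client-agnostic sketch of exponential backoff with jitter in Python (the operation argument is whatever coroutine performs your connect or scrape call):

import asyncio
import logging
import random

logger = logging.getLogger(__name__)

async def retry_with_backoff(operation, max_retries=5, base_delay=1.0, max_delay=60.0):
    """Retry an async zero-argument callable with exponential backoff and jitter.

    Delays grow as base_delay * 2**attempt, capped at max_delay, with random
    jitter added so many clients do not retry in lockstep.
    """
    for attempt in range(max_retries):
        try:
            return await operation()
        except Exception as e:
            if attempt == max_retries - 1:
                raise  # out of attempts, surface the last error
            delay = min(base_delay * (2 ** attempt), max_delay)
            delay += random.uniform(0, delay / 2)  # jitter
            logger.warning(f"Attempt {attempt + 1} failed ({e}); retrying in {delay:.1f}s")
            await asyncio.sleep(delay)

# Usage (hypothetical): wrap any connect or scrape coroutine
# session = await retry_with_backoff(lambda: client.connect(server_url))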

Testing Error Scenarios

Create tests that simulate various error conditions:

import asyncio
import pytest
from unittest.mock import patch, AsyncMock

# Assumes connect_to_mcp_with_retry and MCPConnectionError from the earlier
# examples are importable in this test module

@pytest.mark.asyncio
async def test_connection_timeout():
    with patch('mcp.Client.connect', side_effect=asyncio.TimeoutError):
        with pytest.raises(MCPConnectionError):
            await connect_to_mcp_with_retry(
                "http://test-server:3000",
                max_retries=2,
                retry_delay=1
            )

@pytest.mark.asyncio
async def test_connection_retry_success():
    mock_connect = AsyncMock()
    # Fail twice, then succeed
    mock_connect.side_effect = [
        ConnectionRefusedError(),
        ConnectionRefusedError(),
        AsyncMock()  # Success
    ]

    with patch('mcp.Client.connect', mock_connect):
        session = await connect_to_mcp_with_retry(
            "http://test-server:3000",
            max_retries=3,
            retry_delay=1
        )
        assert session is not None

Conclusion

Robust error handling for MCP server connections requires a multi-layered approach combining retry logic, timeout management, circuit breakers, comprehensive logging, and fallback strategies. By implementing these patterns, you'll build resilient web scraping applications that gracefully handle network issues, server failures, and other connection problems. Remember to test error scenarios thoroughly and monitor your connections proactively to catch issues before they impact your users.

For more advanced scraping scenarios, consider exploring how browser automation tools handle browser sessions and apply similar connection management principles to your MCP implementations.

Try WebScraping.AI for Your Web Scraping Needs

Looking for a powerful web scraping solution? WebScraping.AI provides an LLM-powered API that combines Chromium JavaScript rendering with rotating proxies for reliable data extraction.

Key Features:

  • AI-powered extraction: Ask questions about web pages or extract structured data fields
  • JavaScript rendering: Full Chromium browser support for dynamic content
  • Rotating proxies: Datacenter and residential proxies from multiple countries
  • Easy integration: Simple REST API with SDKs for Python, Ruby, PHP, and more
  • Reliable & scalable: Built for developers who need consistent results

Getting Started:

Get page content with AI analysis:

curl "https://api.webscraping.ai/ai/question?url=https://example.com&question=What is the main topic?&api_key=YOUR_API_KEY"

Extract structured data:

curl "https://api.webscraping.ai/ai/fields?url=https://example.com&fields[title]=Page title&fields[price]=Product price&api_key=YOUR_API_KEY"
