RPC Client

Overview

The OpenZeppelin Monitor includes a robust RPC client implementation with automatic endpoint rotation and fallback capabilities. This ensures reliable blockchain monitoring even when individual RPC endpoints experience issues. Key features include:

  • Multiple RPC endpoint support with weighted load balancing

  • Automatic fallback on endpoint failures

  • Rate limit handling (429 responses)

  • Connection health checks

  • Thread-safe endpoint rotation

Configuration

RPC URLs

RPC endpoints are configured in the network configuration files with weights for load balancing:

{
  "rpc_urls": [
    {
      "type_": "rpc",
      "url": "https://primary-endpoint.example.com",
      "weight": 100
    },
    {
      "type_": "rpc",
      "url": "https://backup-endpoint.example.com",
      "weight": 50
    }
  ]
}

For high-availability setups, configure at least 3 (private) RPC endpoints with appropriate weights to ensure continuous operation even if multiple endpoints fail.
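
For example, a high-availability configuration might look like the following (the URLs are placeholders, not real endpoints):

{
  "rpc_urls": [
    {
      "type_": "rpc",
      "url": "https://private-primary.example.com",
      "weight": 100
    },
    {
      "type_": "rpc",
      "url": "https://private-secondary.example.com",
      "weight": 75
    },
    {
      "type_": "rpc",
      "url": "https://private-tertiary.example.com",
      "weight": 50
    }
  ]
}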

Configuration Fields

Field     Type      Description

type_     String    Type of endpoint (currently only "rpc" is supported)
url       String    The RPC endpoint URL
weight    Number    Load balancing weight (0-100)

Endpoint Management

The endpoint manager handles the following (see the sketch after this list):

  • Initial endpoint selection based on weights

  • Automatic rotation on failures

  • Connection health checks

  • Thread-safe endpoint updates
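
For illustration, here is a minimal sketch of weighted selection and thread-safe rotation. The types and method names (WeightedEndpoint, EndpointSet) are hypothetical and not the monitor's actual API:

use std::sync::RwLock;

// Hypothetical types for illustration; the monitor's real endpoint manager differs.
struct WeightedEndpoint {
  url: String,
  weight: u32,
}

struct EndpointSet {
  endpoints: Vec<WeightedEndpoint>,
  // RwLock keeps rotation thread-safe across concurrent requests.
  active: RwLock<usize>,
}

impl EndpointSet {
  // Start with the highest-weight endpoint as the active URL.
  fn select_initial(&self) {
    if let Some((idx, _)) = self.endpoints.iter().enumerate().max_by_key(|(_, e)| e.weight) {
      *self.active.write().unwrap() = idx;
    }
  }

  // Rotate to the next configured endpoint after a failure.
  fn rotate(&self) {
    let mut active = self.active.write().unwrap();
    *active = (*active + 1) % self.endpoints.len();
  }

  // Read the currently active URL.
  fn current_url(&self) -> String {
    self.endpoints[*self.active.read().unwrap()].url.clone()
  }
}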

Rotation Strategy

The RPC client includes an automatic rotation strategy for handling specific types of failures (see the sketch after this list):

  • For 429 (Too Many Requests) responses:

    • Immediately rotates to a fallback URL

    • Retries the request with the new endpoint

    • Continues this process until successful or all endpoints are exhausted
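
A simplified sketch of that loop, assuming a plain reqwest client and a flat list of URLs in rotation order (this is not the monitor's actual implementation):

use std::time::Duration;

// Simplified sketch of rotation on 429 responses; illustration only.
async fn send_with_rotation(
  client: &reqwest::Client,
  urls: &[String],
  body: String,
) -> Result<reqwest::Response, anyhow::Error> {
  for url in urls {
    let response = client
      .post(url.as_str())
      .header("Content-Type", "application/json")
      .body(body.clone())
      .timeout(Duration::from_secs(30))
      .send()
      .await?;

    // On 429, rotate to the next endpoint; any other response is returned as-is.
    if response.status() == reqwest::StatusCode::TOO_MANY_REQUESTS {
      continue;
    }
    return Ok(response);
  }
  Err(anyhow::anyhow!("all configured RPC endpoints are rate limited"))
}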

Configuration Options

The error codes that trigger RPC endpoint rotation can be customized in the src/services/blockchain/transports/mod.rs file.

pub const ROTATE_ON_ERROR_CODES: [u16; 1] = [429];
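
For example, to also rotate on common gateway errors, the array could be extended like this (illustrative only, not a recommended default):

pub const ROTATE_ON_ERROR_CODES: [u16; 3] = [429, 502, 503];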

Retry Strategy

Each transport client implements reqwest-retry middleware with exponential backoff to handle transient failures in network requests. This is implemented separately from the endpoint rotation mechanism.

  • For transient HTTP errors and network failures:

    • Retries up to 3 times (configurable via ExponentialBackoff builder)

    • Applies exponential backoff between retry attempts

    • Returns the final error if all retry attempts fail

    • Maintains the same URL throughout the retry process

    • Independent from the endpoint rotation mechanism

Each blockchain network type has its own specialized transport client that wraps the base HttpTransportClient. This architecture provides common HTTP functionality while allowing customization of network-specific behaviors like connection testing and retry policies. The transport clients are implemented as:

  1. Base HTTP Transport: HttpTransportClient provides core HTTP functionality

  2. Network-Specific Transports:

    • EVMTransportClient for EVM networks

    • StellarTransportClient for Stellar networks

Configuration Options

The retry policy can be customized using the ExponentialBackoff builder in the respective transport client. The default retry policy is:

let retry_policy = ExponentialBackoff::builder()
  .base(2)
  .retry_bounds(Duration::from_millis(250), Duration::from_secs(10))
  .jitter(Jitter::Full)
  .build_with_max_retries(3);

The retry policy can be customized with the following options:

pub struct ExponentialBackoff {
  pub max_n_retries: Option<u32>,     // Maximum number of allowed retry attempts.
  pub min_retry_interval: Duration,   // Minimum waiting time between two retry attempts (it can end up being lower when using full jitter).
  pub max_retry_interval: Duration,   // Maximum waiting time between two retry attempts.
  pub jitter: Jitter,                 // How we apply jitter to the calculated backoff intervals.
  pub base: u32,                      // Base of the exponential.
}
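
For instance, a more patient policy could be built with the same builder. The values here are illustrative only; depending on crate versions, these types may also be re-exported through reqwest_retry:

use std::time::Duration;
use retry_policies::{policies::ExponentialBackoff, Jitter};

// Illustrative values only: more attempts, wider bounds, bounded jitter.
let retry_policy = ExponentialBackoff::builder()
  .base(3)
  .retry_bounds(Duration::from_millis(500), Duration::from_secs(30))
  .jitter(Jitter::Bounded)
  .build_with_max_retries(5);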

The retry mechanism is implemented at the transport level using a dual-client approach:

  1. A base reqwest HTTP client is created with optimized configurations:

    • Connection pool settings for efficient resource usage

    • Configurable timeouts for request and connection handling

    • Shared across all transport operations

  2. A cloned instance of this client is enhanced with middleware:

    • Wrapped with reqwest_middleware for retry capabilities

    • Configured with exponential backoff and jitter

    • Handles automatic retry logic for failed requests

This architecture ensures:

  1. Direct requests (like health checks) use the base client for minimal overhead

  2. RPC calls benefit from the middleware’s retry capabilities

  3. Both clients maintain efficiency by sharing the same connection pool

Each transport client may define its own retry policy:

// src/services/transports/http.rs
pub struct HttpTransportClient {
  pub client: Arc<RwLock<Client>>,
  endpoint_manager: EndpointManager,
  test_connection_payload: Option<String>,
}

// Example of client creation with retry mechanism
let http_client = reqwest::ClientBuilder::new()
  .pool_idle_timeout(Duration::from_secs(90))
  .pool_max_idle_per_host(32)
  .timeout(Duration::from_secs(30))
  .connect_timeout(Duration::from_secs(20))
  .build()?;

// Create middleware client with retry policy
let client = ClientBuilder::new(cloned_http_client)
  .with(RetryTransientMiddleware::new_with_policy_and_strategy(
    retry_policy,
    RetryTransient,
  ))
  .build();

// src/services/transports/evm/http.rs
pub struct EVMTransportClient {
  http_client: HttpTransportClient,
}

// override with a custom retry policy and strategy
pub async fn new(network: &Network) -> Result<Self, anyhow::Error> {
  let test_connection_payload = Some(r#"{"id":1,"jsonrpc":"2.0","method":"net_version","params":[]}"#.to_string());
  let http_client = HttpTransportClient::new(network, test_connection_payload).await?;
  http_client.set_retry_policy(
    ExponentialBackoff::builder().build_with_total_retry_duration(Duration::from_secs(10)),
    Some(DefaultRetryableStrategy),
  )?;
  Ok(Self { http_client })
}
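
The Stellar wrapper follows the same pattern. Below is a hedged sketch, assuming the same HttpTransportClient constructor and using getNetwork as the connection test (see the RPC call list further down); it is not verbatim source:

// Illustrative sketch of the Stellar wrapper; the actual implementation may differ
pub struct StellarTransportClient {
  http_client: HttpTransportClient,
}

impl StellarTransportClient {
  pub async fn new(network: &Network) -> Result<Self, anyhow::Error> {
    // getNetwork doubles as a lightweight connection test for Stellar endpoints
    let test_connection_payload =
      Some(r#"{"id":1,"jsonrpc":"2.0","method":"getNetwork"}"#.to_string());
    let http_client = HttpTransportClient::new(network, test_connection_payload).await?;
    Ok(Self { http_client })
  }
}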

Implementation Details

These retry and rotation strategies ensure optimal handling of different types of failures while maintaining service availability.

sequenceDiagram
    participant M as Monitor
    participant EM as Endpoint Manager
    participant P as Primary RPC
    participant F as Fallback RPC

    rect rgb(240, 240, 240)
        Note over M,F: Case 1: Rate Limit (429)
        M->>EM: Send Request
        EM->>P: Try Primary
        P-->>EM: 429 Response
        EM->>EM: Rotate URL
        EM->>F: Try Fallback
        F-->>EM: Success
        EM-->>M: Return Response
    end

    rect rgb(240, 240, 240)
        Note over M,F: Case 2: Other Errors
        M->>EM: Send Request
        EM->>P: Try Primary
        P-->>EM: Error Response
        Note over EM: Wait with backoff
        EM->>P: Retry #1
        P-->>EM: Error Response
        Note over EM: Wait with backoff
        EM->>P: Retry #2
        P-->>EM: Success
        EM-->>M: Return Response
    end

List of RPC Calls

Below is a list of the RPC calls the monitor makes for each network type during each iteration of the cron schedule. As the number of blocks being processed increases, the number of RPC calls grows, potentially leading to rate limiting issues or increased costs if not properly managed; a rough worked example follows each list.

graph TD
    subgraph Common Operations
        A[Main] --> D[Process New Blocks]
    end

    subgraph EVM Network Calls
        B[Network Init] -->|net_version| D
        D -->|eth_blockNumber| E[For every block in range]
        E -->|eth_getBlockByNumber| G1[Process Block]
        G1 -->|eth_getLogs| H[Get Block Logs]
        H -->|Only when needed| J[Get Transaction Receipt]
        J -->|eth_getTransactionReceipt| I[Complete]
    end

    subgraph Stellar Network Calls
        C[Network Init] -->|getNetwork| D
        D -->|getLatestLedger| F[In batches of 200 blocks]
        F -->|getLedgers| G2[Process Block]
        G2 -->|For each monitored contract without ABI| M[Fetch Contract Spec]
        M -->|getLedgerEntries| N[Get WASM Hash]
        N -->|getLedgerEntries| O[Get WASM Code]
        O --> G2
        G2 -->|In batches of 200| P[Fetch Block Data]
        P -->|getTransactions| L1[Get Transactions]
        P -->|getEvents| L2[Get Events]
        L1 --> Q[Complete]
        L2 --> Q
    end

EVM

  • RPC Client initialization (per active network): net_version

  • Fetching the latest block number (per cron iteration): eth_blockNumber

  • Fetching block data (per block): eth_getBlockByNumber

  • Fetching block logs (per block): eth_getLogs

  • Fetching transaction receipt (only when needed):

    • When monitor condition requires receipt-specific fields (e.g., gas_used)

    • When monitoring transaction status and no logs are present to validate status
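
As a rough worked example, assuming no receipt lookups are needed: a cron iteration that picks up 10 new blocks issues 1 eth_blockNumber call, 10 eth_getBlockByNumber calls, and 10 eth_getLogs calls, i.e. about 21 requests, plus one eth_getTransactionReceipt call for each transaction that needs receipt data.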

Stellar

  • RPC Client initialization (per active network): getNetwork

  • Fetching the latest ledger (per cron iteration): getLatestLedger

  • Fetching ledger data (batched up to 200 in a single request): getLedgers

  • During block filtering, for each monitored contract without an ABI in config:

    • Fetching contract instance data: getLedgerEntries

    • Fetching contract WASM code: getLedgerEntries

  • Fetching transactions (batched up to 200 in a single request): getTransactions

  • Fetching events (batched up to 200 in a single request): getEvents
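
As a rough worked example, assuming every monitored contract has an ABI in config: a cron iteration that picks up 400 new ledgers issues 1 getLatestLedger call, 2 getLedgers calls, 2 getTransactions calls, and 2 getEvents calls, i.e. about 7 requests; each contract without an ABI adds 2 getLedgerEntries calls.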

Best Practices

  • Configure multiple private endpoints with appropriate weights

  • Use geographically distributed endpoints when possible

  • Monitor endpoint health and adjust weights as needed

  • Set appropriate retry policies based on network characteristics

Troubleshooting

Common Issues

  • 429 Too Many Requests: Increase the number of fallback URLs, adjust weights, or reduce monitoring frequency

  • Connection Timeouts: Check endpoint health and network connectivity

  • Invalid Responses: Verify endpoint compatibility with your network type

Logging

Enable debug logging for detailed transport information:

RUST_LOG=debug
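
For example, when running the monitor from source with Cargo (assuming a standard setup):

RUST_LOG=debug cargo run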

This will show:

  • Endpoint rotations

  • Connection attempts

  • Request/response details