RPC Client
Overview
The OpenZeppelin Monitor includes a robust RPC client implementation with automatic endpoint rotation and fallback capabilities. This ensures reliable blockchain monitoring even when individual RPC endpoints experience issues.
- Multiple RPC endpoint support with weighted load balancing
- Automatic fallback on endpoint failures
- Rate limit handling (429 responses)
- Connection health checks
- Thread-safe endpoint rotation
Configuration
RPC URLs
RPC endpoints are configured in the network configuration files with weights for load balancing:
{
  "rpc_urls": [
    {
      "type_": "rpc",
      "url": "https://primary-endpoint.example.com",
      "weight": 100
    },
    {
      "type_": "rpc",
      "url": "https://backup-endpoint.example.com",
      "weight": 50
    }
  ]
}
For high-availability setups, configure at least 3 (private) RPC endpoints with appropriate weights to ensure continuous operation even if multiple endpoints fail.
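As an illustration of how this JSON might map onto Rust types (the struct and field names below are assumptions for the sketch, not necessarily the monitor's own), a serde-based model could look like:
use serde::Deserialize;

// Hypothetical mirror of the "rpc_urls" entries above; field names follow the JSON keys.
#[derive(Debug, Deserialize)]
pub struct RpcUrl {
    pub type_: String, // Endpoint kind, e.g. "rpc" (matches the "type_" key in the JSON).
    pub url: String,   // Endpoint URL.
    pub weight: u32,   // Relative weight used for load balancing and fallback ordering.
}

#[derive(Debug, Deserialize)]
pub struct NetworkRpcConfig {
    pub rpc_urls: Vec<RpcUrl>,
}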
Endpoint Management
The endpoint manager handles:
- Initial endpoint selection based on weights
- Automatic rotation on failures
- Connection health checks
- Thread-safe endpoint updates
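One way to picture weight-based selection, using the illustrative RpcUrl type from the configuration sketch above: sort the configured URLs by descending weight, treat the heaviest endpoint as the active one, and keep the rest as an ordered fallback list. This is a mental model only, not the monitor's exact algorithm.
// Illustrative only: order endpoints so the highest-weight URL is primary and
// the remainder become fallbacks, in decreasing order of weight.
fn order_by_weight(mut urls: Vec<RpcUrl>) -> (Option<RpcUrl>, Vec<RpcUrl>) {
    urls.sort_by(|a, b| b.weight.cmp(&a.weight));
    let mut iter = urls.into_iter();
    let primary = iter.next();
    (primary, iter.collect())
}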
Rotation Strategy
The RPC client includes an automatic rotation strategy for handling specific types of failures:
- For 429 (Too Many Requests) responses:
  - Immediately rotates to a fallback URL
  - Retries the request with the new endpoint
  - Continues this process until successful or all endpoints are exhausted
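Conceptually, the 429 handling can be pictured as the loop below: try the active endpoint, and on a 429 response rotate to the next fallback and retry until either a request succeeds or no endpoints remain. This is a simplified sketch, not the monitor's code.
use reqwest::StatusCode;

// Simplified sketch of rotate-on-429 behavior; `urls` is ordered primary-first.
async fn send_with_rotation(
    client: &reqwest::Client,
    urls: &[String],
    body: &str,
) -> Result<reqwest::Response, anyhow::Error> {
    for url in urls {
        let response = client
            .post(url.as_str())
            .header("Content-Type", "application/json")
            .body(body.to_string())
            .send()
            .await?;
        if response.status() == StatusCode::TOO_MANY_REQUESTS {
            // Rate limited: rotate to the next fallback URL and retry there.
            continue;
        }
        return Ok(response);
    }
    Err(anyhow::anyhow!("all RPC endpoints exhausted"))
}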
Retry Strategy
Each transport client implements reqwest-retry middleware with exponential backoff to handle transient failures in network requests. This is implemented separately from the endpoint rotation mechanism.
- For transient HTTP errors and network failures:
  - Retries up to 3 times (configurable via the ExponentialBackoff builder)
  - Applies exponential backoff between retry attempts
  - Returns the final error if all retry attempts fail
  - Maintains the same URL throughout the retry process
  - Independent from the endpoint rotation mechanism
Each blockchain network type has its own specialized transport client that wraps the base HttpTransportClient. This architecture provides common HTTP functionality while allowing customization of network-specific behaviors like connection testing and retry policies.
The transport clients are implemented as:
- Base HTTP Transport: HttpTransportClient provides core HTTP functionality
- Network-Specific Transports:
  - EVMTransportClient for EVM networks
  - StellarTransportClient for Stellar networks
Configuration Options
The retry policy can be customized using the ExponentialBackoff builder in the respective transport client. The default retry policy is:
let retry_policy = ExponentialBackoff::builder()
    .base(2)
    .retry_bounds(Duration::from_millis(250), Duration::from_secs(10))
    .jitter(Jitter::Full)
    .build_with_max_retries(3);
The retry policy can be customized with the following options:
pub struct ExponentialBackoff {
    pub max_n_retries: Option<u32>,   // Maximum number of allowed retry attempts.
    pub min_retry_interval: Duration, // Minimum waiting time between two retry attempts (it can end up being lower when using full jitter).
    pub max_retry_interval: Duration, // Maximum waiting time between two retry attempts.
    pub jitter: Jitter,               // How jitter is applied to the calculated backoff intervals.
    pub base: u32,                    // Base of the exponential.
}
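Roughly speaking, each successive retry waits about base times longer than the previous one, starting from min_retry_interval and capped at max_retry_interval; with the defaults above that works out to nominal delays on the order of 250 ms, 500 ms, and 1 s, which Jitter::Full then randomizes within each bound to spread retries out. Purely as an illustration (not a recommended production setting), a custom policy might widen those bounds and allow more attempts:
// Illustrative customization using the same builder as the default policy above.
let custom_policy = ExponentialBackoff::builder()
    .base(2)
    .retry_bounds(Duration::from_millis(500), Duration::from_secs(30))
    .jitter(Jitter::Bounded)
    .build_with_max_retries(5);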
The retry mechanism is implemented at the transport level using a dual-client approach:
- A base reqwest HTTP client is created with optimized configurations:
  - Connection pool settings for efficient resource usage
  - Configurable timeouts for request and connection handling
  - Shared across all transport operations
- A cloned instance of this client is enhanced with middleware:
  - Wrapped with reqwest_middleware for retry capabilities
  - Configured with exponential backoff and jitter
  - Handles automatic retry logic for failed requests
This architecture ensures:
- Direct requests (like health checks) use the base client for minimal overhead
- RPC calls benefit from the middleware's retry capabilities
- Both clients maintain efficiency by sharing the same connection pool
Each transport client may define its own retry policy:
// src/services/transports/http.rs
pub struct HttpTransportClient {
    pub client: Arc<RwLock<Client>>,
    endpoint_manager: EndpointManager,
    test_connection_payload: Option<String>,
}

// Example of client creation with retry mechanism
let http_client = reqwest::ClientBuilder::new()
    .pool_idle_timeout(Duration::from_secs(90))
    .pool_max_idle_per_host(32)
    .timeout(Duration::from_secs(30))
    .connect_timeout(Duration::from_secs(20))
    .build()?;

// Create middleware client with retry policy
let client = ClientBuilder::new(cloned_http_client)
    .with(RetryTransientMiddleware::new_with_policy_and_strategy(
        retry_policy,
        RetryTransient,
    ))
    .build();

// src/services/transports/evm/http.rs
pub struct EVMTransportClient {
    http_client: HttpTransportClient,
}

// Override with a custom retry policy and strategy
pub async fn new(network: &Network) -> Result<Self, anyhow::Error> {
    let test_connection_payload =
        Some(r#"{"id":1,"jsonrpc":"2.0","method":"net_version","params":[]}"#.to_string());
    let http_client = HttpTransportClient::new(network, test_connection_payload).await?;
    http_client.set_retry_policy(
        ExponentialBackoff::builder().build_with_total_retry_duration(Duration::from_secs(10)),
        Some(DefaultRetryableStrategy),
    )?;
    Ok(Self { http_client })
}
Implementation Details
These retry and rotation strategies ensure optimal handling of different types of failures while maintaining service availability.
List of RPC Calls
Below is a list of the RPC calls the monitor makes for each network type during each iteration of the cron schedule. As the number of blocks being processed increases, the number of RPC calls grows, potentially leading to rate limiting issues or increased costs if not properly managed.
EVM
- RPC Client initialization (per active network): net_version
- Fetching the latest block number (per cron iteration): eth_blockNumber
- Fetching block data (per block): eth_getBlockByNumber
- Fetching block logs (per block): eth_getLogs
- Fetching transaction receipt (only when needed):
  - When a monitor condition requires receipt-specific fields (e.g., gas_used)
  - When monitoring transaction status and no logs are present to validate status
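As a back-of-the-envelope estimate (an illustration, not something the monitor computes or reports), the per-iteration EVM call volume can be approximated as below, where blocks is the number of new blocks processed and receipts_needed is the number of matched transactions whose conditions require a receipt:
// Rough estimate of EVM RPC calls per cron iteration, excluding the one-time
// net_version call made at client initialization.
fn estimate_evm_calls_per_iteration(blocks: u64, receipts_needed: u64) -> u64 {
    let latest_block_call = 1;        // eth_blockNumber
    let per_block_calls = 2 * blocks; // eth_getBlockByNumber + eth_getLogs
    latest_block_call + per_block_calls + receipts_needed
}
For example, 10 new blocks with 3 receipt lookups comes to roughly 1 + 20 + 3 = 24 calls in that iteration.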
Stellar
- RPC Client initialization (per active network): getNetwork
- Fetching the latest ledger (per cron iteration): getLatestLedger
- Fetching ledger data (batched up to 200 in a single request): getLedgers
- During block filtering, for each monitored contract without an ABI in config:
  - Fetching contract instance data: getLedgerEntries
  - Fetching contract WASM code: getLedgerEntries
- Fetching transactions (batched up to 200 in a single request): getTransactions
- Fetching events (batched up to 200 in a single request): getEvents
Best Practices
- Configure multiple private endpoints with appropriate weights
- Use geographically distributed endpoints when possible
- Monitor endpoint health and adjust weights as needed
- Set appropriate retry policies based on network characteristics