Merged
4 changes: 3 additions & 1 deletion .gitignore
@@ -30,4 +30,6 @@ target/*

payer*.json

combined.pem
combined.pem

.env
2 changes: 2 additions & 0 deletions Cargo.lock

Some generated files are not rendered by default.

2 changes: 2 additions & 0 deletions Cargo.toml
@@ -27,6 +27,7 @@ hex = "0.4.3"
log = "0.4.17"
maplit = "1.0.2"
rust_decimal = { version = "1.32", features = ["db-tokio-postgres"] }
serde = "1"
serde_json = "1.0.86"
solana-sdk = "2.0.18"
solana-transaction-status = "2.0.18"
@@ -41,6 +42,7 @@ adrena-abi = { git = "https://github.com/AdrenaFoundation/adrena-abi.git", rev =
anchor-client = { git = "https://github.com/coral-xyz/anchor.git", rev = "04536725c2ea16329e84bcfe3200afd47eeeb464", features = [
"async",
] }
anchor-lang = { git = "https://github.com/coral-xyz/anchor.git", rev = "04536725c2ea16329e84bcfe3200afd47eeeb464" }
anchor-spl = { git = "https://github.com/coral-xyz/anchor.git", rev = "04536725c2ea16329e84bcfe3200afd47eeeb464" }
spl-associated-token-account = { version = "6.0.0", features = [
"no-entrypoint",
167 changes: 159 additions & 8 deletions README.md
@@ -1,17 +1,168 @@
# MrOracle

OpenSource rust client (Keeper) handling oracle and alp price updates onchain.
OpenSource Rust keeper handling oracle and ALP price updates on-chain for the Adrena protocol.

MrOracle reads the latest ChaosLabs oracle prices from a PostgreSQL database, formats them into `ChaosLabsBatchPrices`, and pushes them on-chain via the `updatePoolAum` instruction every 5 seconds (longer during RPC fallback). A multi-layer RPC fallback chain (primary → backup → public) ensures price updates continue even if primary endpoints go down.

## Table of Contents

- [Architecture](#architecture)
- [Build](#build)
- [Configuration](#configuration)
- [RPC Fallback Architecture](#rpc-fallback-architecture)
- [Running](#running)
- [Troubleshooting](#troubleshooting)

---

## Architecture

```
ChaosLabs → adrena-data cron → PostgreSQL
                                  │  every 5s
                           +------------+
                           |  MrOracle  |
                           +-----+------+
                                 │  updatePoolAum TX
                Primary RPC → Backup RPC → Public RPC
                            Solana Blockchain
```

Each cycle:
1. Fetch latest prices from PostgreSQL (`assets_price` table)
2. Format into ChaosLabs batch format
3. Build `updatePoolAum` instruction
4. Sign + send + confirm on primary RPC (falls through to backup → public on failure)

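The per-cycle flow above can be sketched with stand-in types — `AssetPrice`, `format_batch`, and the 6-decimal price scaling are illustrative assumptions here, not the crate's real API:

```rust
// Sketch of one keeper cycle over simplified, hypothetical types.
#[derive(Debug, Clone)]
struct AssetPrice {
    symbol: String,
    price_usd: f64,
}

#[derive(Debug)]
struct ChaosLabsBatchPrices {
    entries: Vec<(String, u64)>,
}

// Step 2: format DB rows into the batch shape. The 6-decimal fixed-point
// scaling is an assumption for illustration.
fn format_batch(rows: &[AssetPrice]) -> ChaosLabsBatchPrices {
    ChaosLabsBatchPrices {
        entries: rows
            .iter()
            .map(|r| (r.symbol.clone(), (r.price_usd * 1_000_000.0) as u64))
            .collect(),
    }
}

fn main() {
    // Step 1: pretend these rows came from the `assets_price` table.
    let rows = vec![
        AssetPrice { symbol: "SOL".into(), price_usd: 150.25 },
        AssetPrice { symbol: "BTC".into(), price_usd: 65_000.0 },
    ];
    let batch = format_batch(&rows);
    // Steps 3–4: the real keeper would wrap `batch` in an `updatePoolAum`
    // instruction and send it through the RPC fallback chain here.
    println!("{} entries, first = {:?}", batch.entries.len(), batch.entries[0]);
}
```

The real loop repeats this every 5 seconds; only the formatting step is shown concretely here.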
---

## Build

`$> cargo build`
`$> cargo build --release`
```bash
cargo build # debug
cargo build --release # production
```

The binary is at `target/release/mroracle`.

---

## Configuration

### CLI Arguments

| Flag | Required | Default | Description |
|------|----------|---------|-------------|
| `--rpc` | No | `http://127.0.0.1:10000` | Primary Solana JSON-RPC endpoint (with auth key in URL path) |
| `--rpc-backup` | No | — | Backup RPC endpoint (used when primary fails) |
| `--rpc-public` | No | `https://api.mainnet-beta.solana.com` | Last-resort public RPC |
| `--payer-keypair` | Yes | — | Path to funded payer keypair JSON |
| `--db-string` | Yes | — | PostgreSQL connection string for price DB |
| `--combined-cert` | Yes | — | Path to combined certificate file (for DB TLS) |
| `--commitment` | No | `processed` | Solana commitment level |

### Log Levels

Control via `RUST_LOG` environment variable:

```bash
RUST_LOG=info ./target/release/mroracle ...
RUST_LOG=debug ./target/release/mroracle ...
```

---

## RPC Fallback Architecture

Every JSON-RPC call in MrOracle goes through a 3-layer fallback chain:

```
primary (--rpc) → backup (--rpc-backup) → public (--rpc-public)
```

Each layer has a 5-second timeout. On failure, the next layer is tried. Per endpoint: get blockhash → sign → send → confirm (4 polls @ 500ms, up to 2s). On-chain errors short-circuit the chain (no retry on backup/public).

**What's covered by fallback:**
- Pool account fetch at startup
- Priority fee polling (every 5s)
- Oracle price update transaction signing + sending + confirmation

**RPC fallback is stateless** — each RPC call starts fresh from primary. If primary recovers, the very next call hits it first. No circuit breaker, no stickiness.
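A minimal sketch of this stateless chain, with illustrative endpoint stubs standing in for real RPC calls (`with_fallback` and the layer names are hypothetical, not the actual implementation):

```rust
// Try each layer in order and return the first success; every call starts
// fresh from the primary, matching the stateless design described above.
fn with_fallback<T>(layers: &[(&str, fn() -> Result<T, &'static str>)]) -> Option<T> {
    for (name, call) in layers {
        match call() {
            Ok(v) => return Some(v),
            // Mirrors the documented per-layer error logs.
            Err(e) => eprintln!("ERROR [Price Update] {name} RPC failed: {e}"),
        }
    }
    eprintln!("ERROR [Price Update] All RPCs failed. Please handle ASAP. Critical Priority.");
    None
}

// Stubs simulating a primary timeout and a healthy backup.
fn primary() -> Result<u64, &'static str> { Err("timeout after 5s") }
fn backup() -> Result<u64, &'static str> { Ok(42) }
fn public() -> Result<u64, &'static str> { Ok(0) }

fn main() {
    // Primary fails, so the chain stops at the backup layer.
    let result = with_fallback(&[
        ("PRIMARY", primary as fn() -> Result<u64, &'static str>),
        ("BACKUP", backup),
        ("PUBLIC", public),
    ]);
    assert_eq!(result, Some(42));
}
```

Because the chain is rebuilt on every call, a recovered primary is used again immediately — exactly the no-stickiness behavior the real keeper implements.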

**Log prefixes:** `[Pool Fetch]`, `[Priority Fees]`, `[Price Update]`. Failures show as:

```
ERROR [Price Update] PRIMARY RPC failed: <reason>
ERROR [Price Update] BACKUP RPC failed: <reason>
WARN [Price Update] TX confirmed via PUBLIC RPC fallback: <signature>
```

On total failure:
```
ERROR [Price Update] All RPCs failed. Please handle ASAP. Critical Priority.
```

---

## Running

### Local

```bash
./target/release/mroracle \
--rpc "https://your-primary-rpc.com/<KEY>" \
--rpc-backup "https://your-backup-rpc.com/<KEY>" \
--rpc-public "https://api.mainnet-beta.solana.com" \
--payer-keypair payer.json \
--commitment processed \
--db-string "postgresql://user:pass@host/db" \
--combined-cert /path/to/combined.pem
```

### On Render

```bash
./target/release/mroracle \
--payer-keypair /etc/secrets/mr_oracle.json \
--rpc "https://primary-rpc.example.com/<KEY>" \
--rpc-backup "https://backup-rpc.example.com/<KEY>" \
--rpc-public "https://api.mainnet-beta.solana.com" \
--commitment processed \
--db-string "postgresql://adrena:<PASS>@<HOST>.singapore-postgres.render.com/transaction_db_celf" \
--combined-cert /etc/secrets/combined.pem
```

Without `--rpc-backup`: the fallback chain is primary → public. With it: primary → backup → public.

---

## Troubleshooting

### All RPCs failing / "Critical Priority" log

```
ERROR [Price Update] All RPCs failed. Please handle ASAP. Critical Priority.
```

All 3 RPC layers failed. Check the individual layer errors above the summary line. Common causes:
- All three URLs misconfigured
- Network outage on your side
- Solana mainnet congestion causing public RPC to throttle (100 req/10s limit)

### Pool fetch crashes at startup

If `[Pool Fetch] All RPCs failed` fires at startup, the service can't boot. Primary, backup, and public RPCs all failed to return the pool account. Verify URLs and auth keys.

## Run
### Priority Fees stale

`$> cargo run -- --payer-keypair payer.json --endpoint https://adrena.rpcpool.com/xxx --commitment finalized --db-string "postgresql://adrena:YYY.singapore-postgres.render.com/transaction_db_celf" --combined-cert /etc/secrets/combined.pem`
If priority fees can't be fetched (e.g., public RPC rate-limited), MrOracle keeps the last known value (0 at startup if never fetched) and continues. Transactions still land but may be slow to confirm during congestion.

Or on Render
### DB connection errors

`./target/release/mroracle --payer-keypair /etc/secrets/mroracle.json --endpoint https://adrena.rpcpool.com/xxx --x-token xxx --commitment finalized --db-string "postgresql://adrena:YYY.singapore-postgres.render.com/transaction_db_celf" --combined-cert /etc/secrets/combined.pem`
Ideally run that on a Render instance.
MrOracle depends on adrena-data writing prices to the PostgreSQL `assets_price` table. If the DB is unreachable, MrOracle retries with exponential backoff (50ms → 100ms → 200ms, 3 attempts) before logging an error and moving to the next cycle.
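The retry policy above can be sketched as follows. `with_backoff` and the stand-in query closure are hypothetical names for illustration; the real keeper does this asynchronously with tokio, while this sketch uses a blocking sleep:

```rust
use std::{thread::sleep, time::Duration};

// One plausible reading of the policy: 3 attempts, doubling the delay
// after each failure (50ms → 100ms → 200ms), no sleep after the last try.
fn with_backoff<T, E>(mut query: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut delay = Duration::from_millis(50);
    let mut last_err = None;
    for attempt in 0..3 {
        match query() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                if attempt < 2 {
                    sleep(delay);
                    delay *= 2; // 50ms → 100ms → 200ms
                }
            }
        }
    }
    // Safe: query ran at least once, so a failure was recorded.
    Err(last_err.unwrap())
}

fn main() {
    // Simulate a DB that recovers on the third attempt.
    let mut calls = 0;
    let result = with_backoff(|| {
        calls += 1;
        if calls < 3 { Err("db unreachable") } else { Ok("row") }
    });
    assert_eq!(result, Ok("row"));
    assert_eq!(calls, 3);
}
```

On total failure the real keeper logs the error and simply waits for the next 5-second cycle rather than crashing.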
113 changes: 66 additions & 47 deletions src/client.rs
@@ -1,20 +1,25 @@
use {
    adrena_abi::{self, Pool},
    anchor_client::{solana_sdk::signer::keypair::read_keypair_file, Client, Cluster},
    clap::Parser,
    openssl::ssl::{SslConnector, SslMethod},
    postgres_openssl::MakeTlsConnector,
    priority_fees::fetch_mean_priority_fee,
    solana_sdk::{instruction::AccountMeta, pubkey::Pubkey},
    solana_sdk::{
        instruction::AccountMeta,
        pubkey::Pubkey,
        signer::keypair::read_keypair_file,
    },
    std::{env, sync::Arc, thread::sleep, time::Duration},
    tokio::{sync::Mutex, task::JoinHandle, time::interval, time::Instant},
    tokio::{sync::Mutex, time::interval, time::Instant},
};

pub mod db;
pub mod handlers;
pub mod priority_fees;
pub mod rpc_fallback;
pub mod utils;

use rpc_fallback::RpcFallback;
use utils::format_chaos_labs_oracle_entry_to_params::format_chaos_labs_oracle_entry_to_params;

const DEFAULT_ENDPOINT: &str = "http://127.0.0.1:10000";
@@ -30,15 +35,25 @@ enum ArgsCommitment {
    Finalized,
}

impl From<ArgsCommitment> for solana_sdk::commitment_config::CommitmentConfig {
    fn from(c: ArgsCommitment) -> Self {
        use solana_sdk::commitment_config::{CommitmentConfig, CommitmentLevel};
        CommitmentConfig {
            commitment: match c {
                ArgsCommitment::Processed => CommitmentLevel::Processed,
                ArgsCommitment::Confirmed => CommitmentLevel::Confirmed,
                ArgsCommitment::Finalized => CommitmentLevel::Finalized,
            },
        }
    }
}

#[derive(Debug, Clone, Parser)]
#[clap(author, version, about)]
struct Args {
    #[clap(short, long, default_value_t = String::from(DEFAULT_ENDPOINT))]
    /// Service endpoint
    endpoint: String,

    #[clap(long)]
    x_token: Option<String>,
    #[clap(long, default_value_t = String::from(DEFAULT_ENDPOINT))]
    /// Primary Solana JSON-RPC endpoint
    rpc: String,

    /// Commitment level: processed, confirmed or finalized
    #[clap(long)]
@@ -55,6 +70,14 @@ struct Args {
    /// Combined certificate
    #[clap(long)]
    combined_cert: String,

    /// Backup RPC endpoint URL (optional, used as fallback)
    #[clap(long)]
    rpc_backup: Option<String>,

    /// Public RPC endpoint URL (last-resort fallback)
    #[clap(long, default_value_t = String::from(rpc_fallback::DEFAULT_PUBLIC_RPC))]
    rpc_public: String,
}

#[tokio::main]
@@ -67,25 +90,19 @@ async fn main() -> Result<(), anyhow::Error> {

    let args = Args::parse();

    let args = args.clone();
    let mut periodical_priority_fees_fetching_task: Option<JoinHandle<Result<(), anyhow::Error>>> =
        None;

    // In case it errored out, abort the fee task (will be recreated)
    if let Some(t) = periodical_priority_fees_fetching_task.take() {
        t.abort();
    }
    let commitment: solana_sdk::commitment_config::CommitmentConfig =
        args.commitment.unwrap_or_default().into();

    let payer = read_keypair_file(args.payer_keypair.clone()).unwrap();
    let payer = Arc::new(payer);
    let client = Client::new(
        Cluster::Custom(args.endpoint.clone(), args.endpoint.clone()),
        Arc::clone(&payer),
    );

    let program = client
        .program(adrena_abi::ID)
        .map_err(|e| anyhow::anyhow!("Failed to get program: {:?}", e))?;
    let rpc_fallback = Arc::new(RpcFallback::new(
        &args.rpc,
        args.rpc_backup.as_deref(),
        &args.rpc_public,
        commitment,
        Arc::clone(&payer),
    ));

    // ////////////////////////////////////////////////////////////////
    // DB CONNECTION POOL
@@ -119,33 +136,31 @@ async fn main() -> Result<(), anyhow::Error> {
    let median_priority_fee = Arc::new(Mutex::new(0u64));
    // Spawn a task to poll priority fees every 5 seconds
    log::info!("3 - Spawn a task to poll priority fees every 5 seconds...");
    #[allow(unused_assignments)]
    {
        periodical_priority_fees_fetching_task = Some({
            let median_priority_fee = Arc::clone(&median_priority_fee);
            tokio::spawn(async move {
                let mut fee_refresh_interval = interval(PRIORITY_FEE_REFRESH_INTERVAL);
                loop {
                    fee_refresh_interval.tick().await;
                    if let Ok(fee) =
                        fetch_mean_priority_fee(&client, MEAN_PRIORITY_FEE_PERCENTILE).await
                    {
                        let mut fee_lock = median_priority_fee.lock().await;
                        *fee_lock = fee;
                        log::debug!(
                            " <> Updated median priority fee 30th percentile to : {} µLamports / cu",
                            fee
                        );
                    }
    let median_priority_fee = Arc::clone(&median_priority_fee);
    let rpc_fallback_clone = Arc::clone(&rpc_fallback);
    tokio::spawn(async move {
        let mut fee_refresh_interval = interval(PRIORITY_FEE_REFRESH_INTERVAL);
        loop {
            fee_refresh_interval.tick().await;
            if let Ok(fee) =
                fetch_mean_priority_fee(&rpc_fallback_clone, MEAN_PRIORITY_FEE_PERCENTILE).await
            {
                let mut fee_lock = median_priority_fee.lock().await;
                *fee_lock = fee;
                log::debug!(
                    " <> Updated mean priority fee to: {} µLamports / cu",
                    fee
                );
            }
            })
        }
    });
    }

    let pool = program
        .account::<Pool>(adrena_abi::MAIN_POOL_ID)
    let pool = rpc_fallback
        .get_account::<Pool>(&adrena_abi::MAIN_POOL_ID, "Pool Fetch")
        .await
        .map_err(|e| anyhow::anyhow!("Failed to get pool from DB: {:?}", e))?;
        .map_err(|e| anyhow::anyhow!("Failed to get pool: {:?}", e))?;

    let mut custodies_accounts: Vec<AccountMeta> = vec![];
    for key in pool.custodies.iter() {
Expand Down Expand Up @@ -173,10 +188,14 @@ async fn main() -> Result<(), anyhow::Error> {

    match last_trading_prices {
        Ok(last_trading_prices) => {
            // Copy the fee out of the lock BEFORE awaiting the TX send,
            // otherwise the MutexGuard is held across the await and blocks
            // the priority-fee refresher task for the duration of the send.
            let fee = *median_priority_fee.lock().await;
            // ignore errors on call since we want to keep executing IX
            let _ = handlers::update_pool_aum(
                &program,
                *median_priority_fee.lock().await,
                &rpc_fallback,
                fee,
                last_trading_prices,
                remaining_accounts.clone(),
            )