feat: serverless Lambda relayer with DynamoDB and live E2E test#334
Conversation
- dynamo-monitor-state-store: DynamoDB impl of IMonitorStateStore
- dynamo-exchange-history-store: DynamoDB impl of IExchangeHistoryStore
- relayer: single-pass relay replacing infinite polling loop
- lambda: Lambda handler replacing index.ts entry point
- e2e/live-roundtrip: live network roundtrip test (120 WNCG)
- infra/bootstrap-dynamodb.sh: DynamoDB table creation script
- .github/workflows/live-test.yml: post-deploy E2E workflow

No existing files modified except package.json (added deps + scripts).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Lambda timeout is 300s (5 min). Running at 1-minute intervals would cause overlapping concurrent executions, so the schedule is set to rate(5 minutes) to ensure each invocation completes before the next one starts.

bootstrap-dynamodb.sh now also configures:
- EventBridge rule (rate(5 minutes))
- Lambda permission for EventBridge invocation
- Lambda timeout enforcement (300s)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
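The scheduling pieces described in the commit could be wired with AWS CLI calls along these lines. This is a sketch only; the rule name, function name, region, and account ID are placeholders, not necessarily what the script actually uses:

```shell
# Illustrative AWS CLI calls for the EventBridge + Lambda scheduling setup.
# RULE, FN, REGION, and ACCOUNT_ID are placeholders, not the script's variables.
RULE=bridge-relayer-schedule
FN=bridge-relayer
REGION=us-east-1
ACCOUNT_ID=123456789012

# EventBridge rule firing every 5 minutes
aws events put-rule --name "$RULE" \
  --schedule-expression "rate(5 minutes)" --region "$REGION"

# Point the rule at the Lambda function
aws events put-targets --rule "$RULE" --region "$REGION" \
  --targets "Id"="1","Arn"="arn:aws:lambda:$REGION:$ACCOUNT_ID:function:$FN"

# Allow EventBridge to invoke the function
aws lambda add-permission --function-name "$FN" --statement-id "$RULE" \
  --action lambda:InvokeFunction --principal events.amazonaws.com \
  --source-arn "arn:aws:events:$REGION:$ACCOUNT_ID:rule/$RULE" --region "$REGION"

# Enforce the 300s timeout so an invocation never outlives its schedule slot
aws lambda update-function-configuration --function-name "$FN" \
  --timeout 300 --region "$REGION"
```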
Cursor Bugbot has reviewed your changes and found 3 potential issues.
Bugbot Autofix prepared fixes for all 3 issues found in the latest run.
- ✅ Fixed: BSC observer hardcodes "ethereum" state key causing corruption
- I made `EthereumBurnEventObserver` network-aware and passed `"bsc"` from the Lambda BSC path so monitor state and exchange history are stored under the correct chain key.
- ✅ Fixed: DynamoDB queries lack pagination risking incomplete results
- I added `LastEvaluatedKey` pagination loops to both `transferredAmountInLast24Hours` (Query) and `getPendingTransactions` (Scan) so all pages are processed.
- ✅ Fixed: Promise.all couples independent chains causing cascade failures
- I replaced `Promise.all` with `Promise.allSettled` in the Lambda handler and log per-chain failures so one relay error no longer aborts independent chain processing.
Or push these changes by commenting:
@cursor push ebf743bedd
Preview (ebf743bedd)
diff --git a/bridge/src/dynamo-exchange-history-store.ts b/bridge/src/dynamo-exchange-history-store.ts
--- a/bridge/src/dynamo-exchange-history-store.ts
+++ b/bridge/src/dynamo-exchange-history-store.ts
@@ -10,6 +10,8 @@
UpdateItemCommand,
ScanCommand,
QueryCommand,
+ QueryCommandOutput,
+ ScanCommandOutput,
ConditionalCheckFailedException,
} from "@aws-sdk/client-dynamodb";
@@ -76,38 +78,45 @@
network: string,
sender: string
): Promise<number> {
- const cutoff = new Date(
- Date.now() - 24 * 60 * 60 * 1000
- ).toISOString();
+ const cutoff = new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString();
+ let amountSum = 0;
+ let lastEvaluatedKey: Record<string, any> | undefined = undefined;
- const result = await this._client.send(
- new QueryCommand({
- TableName: this._tableName,
- IndexName: "network-timestamp-index",
- KeyConditionExpression:
- "network = :network AND #ts > :cutoff",
- FilterExpression: "sender = :sender",
- ExpressionAttributeNames: {
- "#ts": "timestamp",
+ do {
+ const result: QueryCommandOutput = await this._client.send(
+ new QueryCommand({
+ TableName: this._tableName,
+ IndexName: "network-timestamp-index",
+ KeyConditionExpression:
+ "network = :network AND #ts > :cutoff",
+ FilterExpression: "sender = :sender",
+ ExpressionAttributeNames: {
+ "#ts": "timestamp",
+ },
+ ExpressionAttributeValues: {
+ ":network": { S: network },
+ ":cutoff": { S: cutoff },
+ ":sender": { S: sender },
+ },
+ ExclusiveStartKey: lastEvaluatedKey,
+ })
+ );
+
+ const items = result.Items ?? [];
+ amountSum += items.reduce(
+ (
+ sum: number,
+ item: Record<string, { N?: string; S?: string }>
+ ) => {
+ return sum + parseFloat(item.amount?.N ?? "0");
},
- ExpressionAttributeValues: {
- ":network": { S: network },
- ":cutoff": { S: cutoff },
- ":sender": { S: sender },
- },
- })
- );
+ 0
+ );
- const items = result.Items ?? [];
- return items.reduce(
- (
- sum: number,
- item: Record<string, { N?: string; S?: string }>
- ) => {
- return sum + parseFloat(item.amount?.N ?? "0");
- },
- 0
- );
+ lastEvaluatedKey = result.LastEvaluatedKey;
+ } while (lastEvaluatedKey !== undefined);
+
+ return amountSum;
}
async updateStatus(
@@ -132,31 +141,43 @@
}
async getPendingTransactions(): Promise<ExchangeHistory[]> {
- const result = await this._client.send(
- new ScanCommand({
- TableName: this._tableName,
- FilterExpression: "#s = :pending",
- ExpressionAttributeNames: {
- "#s": "status",
- },
- ExpressionAttributeValues: {
- ":pending": { S: TransactionStatus.PENDING },
- },
- })
- );
+ const pendingTransactions: ExchangeHistory[] = [];
+ let lastEvaluatedKey: Record<string, any> | undefined = undefined;
- const items = result.Items ?? [];
- return items.map(
- (item: Record<string, { N?: string; S?: string }>) => ({
- network: item.network?.S ?? "",
- tx_id: item.tx_id?.S ?? "",
- sender: item.sender?.S ?? "",
- recipient: item.recipient?.S ?? "",
- timestamp: item.timestamp?.S ?? "",
- amount: parseFloat(item.amount?.N ?? "0"),
- status: (item.status?.S ??
- TransactionStatus.PENDING) as TransactionStatus,
- })
- );
+ do {
+ const result: ScanCommandOutput = await this._client.send(
+ new ScanCommand({
+ TableName: this._tableName,
+ FilterExpression: "#s = :pending",
+ ExpressionAttributeNames: {
+ "#s": "status",
+ },
+ ExpressionAttributeValues: {
+ ":pending": { S: TransactionStatus.PENDING },
+ },
+ ExclusiveStartKey: lastEvaluatedKey,
+ })
+ );
+
+ const items = result.Items ?? [];
+ pendingTransactions.push(
+ ...items.map(
+ (item: Record<string, { N?: string; S?: string }>) => ({
+ network: item.network?.S ?? "",
+ tx_id: item.tx_id?.S ?? "",
+ sender: item.sender?.S ?? "",
+ recipient: item.recipient?.S ?? "",
+ timestamp: item.timestamp?.S ?? "",
+ amount: parseFloat(item.amount?.N ?? "0"),
+ status: (item.status?.S ??
+ TransactionStatus.PENDING) as TransactionStatus,
+ })
+ )
+ );
+
+ lastEvaluatedKey = result.LastEvaluatedKey;
+ } while (lastEvaluatedKey !== undefined);
+
+ return pendingTransactions;
}
}
diff --git a/bridge/src/lambda.ts b/bridge/src/lambda.ts
--- a/bridge/src/lambda.ts
+++ b/bridge/src/lambda.ts
@@ -506,7 +506,8 @@
ETHERSCAN_ROOT_URL,
integration,
multiPlanetary,
- FAILURE_SUBSCRIBERS
+ FAILURE_SUBSCRIBERS,
+ "bsc"
);
const ncgTransferredEventObserver = new NCGTransferredEventObserver(
@@ -573,11 +574,20 @@
await pendingTransactionRetryHandler.messagePendingTransactions();
- await Promise.all([
+ const relayResults = await Promise.allSettled([
relayEthereum(ethDeps),
relayBSC(bscDeps),
relayNineChronicles(ncDeps),
]);
+ const relayNames = ["ethereum", "bsc", "nineChronicles"];
+ relayResults.forEach((result, index) => {
+ if (result.status === "rejected") {
+ console.error(
+ `[lambda] ${relayNames[index]} relay failed`,
+ result.reason
+ );
+ }
+ });
console.log("[lambda] Done");
}
diff --git a/bridge/src/observers/burn-event-observer.ts b/bridge/src/observers/burn-event-observer.ts
--- a/bridge/src/observers/burn-event-observer.ts
+++ b/bridge/src/observers/burn-event-observer.ts
@@ -37,6 +37,7 @@
private readonly _integration: Integration;
private readonly _multiPlanetary: MultiPlanetary;
private readonly _failureSubscribers: string;
+ private readonly _networkKey: string;
constructor(
ncgTransfer: INCGTransfer,
slackMessageSender: ISlackMessageSender,
@@ -50,7 +51,8 @@
etherscanUrl: string,
integration: Integration,
multiPlanetary: MultiPlanetary,
- failureSubscribers: string
+ failureSubscribers: string,
+ networkKey: string = "ethereum"
) {
this._ncgTransfer = ncgTransfer;
this._slackMessageSender = slackMessageSender;
@@ -65,6 +67,7 @@
this._integration = integration;
this._multiPlanetary = multiPlanetary;
this._failureSubscribers = failureSubscribers;
+ this._networkKey = networkKey;
}
async notify(data: {
@@ -73,7 +76,7 @@
}): Promise<void> {
const { blockHash, events } = data;
if (events.length === 0) {
- await this._monitorStateStore.store("ethereum", {
+ await this._monitorStateStore.store(this._networkKey, {
blockHash,
txId: null,
});
@@ -120,7 +123,7 @@
}
await this._exchangeHistoryStore.put({
- network: "ethereum",
+ network: this._networkKey,
tx_id: transactionHash,
sender,
recipient: user9cAddress,
@@ -165,7 +168,7 @@
memo
);
- await this._monitorStateStore.store("ethereum", {
+ await this._monitorStateStore.store(this._networkKey, {
blockHash,
txId: transactionHash,
});
diff --git a/bridge/test/observers/burn-event-observer.spec.ts b/bridge/test/observers/burn-event-observer.spec.ts
--- a/bridge/test/observers/burn-event-observer.spec.ts
+++ b/bridge/test/observers/burn-event-observer.spec.ts
@@ -153,6 +153,22 @@
multiPlanetary,
failureSubscribers
);
+ const bscObserver = new EthereumBurnEventObserver(
+ mockNcgTransfer,
+ mockSlackMessageSender,
+ mockOpenSearchClient,
+ mockSpreadSheetClient,
+ mockMonitorStateStore,
+ mockExchangeHistoryStore,
+ "https://explorer.libplanet.io/9c-internal",
+ "https://internal.9cscan.com",
+ false,
+ "https://sepolia.etherscan.io",
+ mockIntegration,
+ multiPlanetary,
+ failureSubscribers,
+ "bsc"
+ );
describe(EthereumBurnEventObserver.prototype.notify.name, () => {
it("should record the block hash even if there is no events", () => {
@@ -170,6 +186,45 @@
);
});
+ it("should store monitor and history under bsc network key", async () => {
+ const event = {
+ blockHash: "BLOCK-HASH",
+ address: "0x4029bC50b4747A037d38CF2197bCD335e22Ca301",
+ logIndex: 0,
+ blockNumber: 0,
+ event: "Burn",
+ raw: {
+ data: "",
+ topics: [],
+ },
+ signature: "",
+ transactionIndex: 0,
+ transactionHash: "TX-BSC",
+ txId: "TX-BSC",
+ returnValues: {
+ _sender: "0x2734048eC2892d111b4fbAB224400847544FC872",
+ _to: "0x6d29f9923C86294363e59BAaA46FcBc37Ee5aE2e",
+ amount: 1000000000000000000,
+ },
+ } as EventData & TransactionLocation;
+
+ await bscObserver.notify({
+ blockHash: "BLOCK-HASH",
+ events: [event],
+ });
+
+ expect(mockMonitorStateStore.store).toHaveBeenCalledWith("bsc", {
+ blockHash: "BLOCK-HASH",
+ txId: "TX-BSC",
+ });
+ expect(mockExchangeHistoryStore.put).toHaveBeenCalledWith(
+ expect.objectContaining({
+ network: "bsc",
+ tx_id: "TX-BSC",
+ })
+ );
+ });
+
function makeEvent(
ncgRecipient: string,
amount: number,
monitorStateStore,
burnEventObserver: bscBurnEventObserver,
networkKey: "bsc",
};
BSC observer hardcodes "ethereum" state key causing corruption
High Severity
The bscDeps uses networkKey: "bsc", so the relayer loads checkpoint state from the "bsc" key, but EthereumBurnEventObserver.notify() hardcodes "ethereum" in both _monitorStateStore.store("ethereum", ...) and _exchangeHistoryStore.put({ network: "ethereum", ... }). This means BSC state is never saved under the "bsc" key, so on every Lambda invocation BSC starts from confirmedTip and skips all intermediate blocks. Worse, both the ETH and BSC observers run concurrently via Promise.all and race to write the "ethereum" state key: a BSC block hash written there will cause the ETH relayer to fail on the next invocation when it calls provider.getBlock(bscBlockHash) against the Ethereum RPC.
Additional Locations (1)
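The fix pattern can be illustrated with a minimal in-memory sketch. All types here are invented stand-ins for the bridge's real store and observer classes, not its actual code:

```typescript
// Invented stand-ins for IMonitorStateStore and the burn-event observer; the
// point is the constructor-injected network key replacing a hardcoded "ethereum".
class MemoryStateStore {
  private state = new Map<string, string>();
  async store(network: string, blockHash: string): Promise<void> {
    this.state.set(network, blockHash);
  }
  async load(network: string): Promise<string | undefined> {
    return this.state.get(network);
  }
}

class BurnObserver {
  // Defaulting to "ethereum" keeps existing call sites source-compatible.
  constructor(
    private readonly stateStore: MemoryStateStore,
    private readonly networkKey: string = "ethereum"
  ) {}

  async notify(blockHash: string): Promise<void> {
    // Before the fix this was store("ethereum", ...), so an ETH and a BSC
    // observer sharing one store would race on the same key.
    await this.stateStore.store(this.networkKey, blockHash);
  }
}
```

With two observers sharing one store, each chain's checkpoint now lands under its own key instead of clobbering the other's.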
},
0
);
}
DynamoDB queries lack pagination risking incomplete results
Medium Severity
transferredAmountInLast24Hours and getPendingTransactions each issue a single DynamoDB QueryCommand/ScanCommand without checking LastEvaluatedKey for pagination. DynamoDB returns at most 1 MB per request. If the result set exceeds that, the returned items are incomplete. For transferredAmountInLast24Hours, this could undercount transferred amounts and allow users to bypass the 24-hour transfer limit. For getPendingTransactions, pending transactions beyond the first page are silently missed.
Additional Locations (1)
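The pagination loop the fix adds can be sketched against a fake client. `FakeClient` is an invented stand-in for the DynamoDB client that serves pre-canned pages; it shows why a single Query call undercounts once results exceed DynamoDB's 1 MB page:

```typescript
// Minimal sketch of the LastEvaluatedKey loop over paginated results.
type Key = Record<string, unknown>;

interface Page {
  Items?: { amount?: { N?: string } }[];
  LastEvaluatedKey?: Key;
}

class FakeClient {
  private calls = 0;
  constructor(private readonly pages: Page[]) {}
  async send(_input: { ExclusiveStartKey?: Key }): Promise<Page> {
    return this.pages[this.calls++];
  }
}

async function transferredAmount(client: FakeClient): Promise<number> {
  let sum = 0;
  let lastEvaluatedKey: Key | undefined;
  do {
    // The real store sends a QueryCommand with ExclusiveStartKey here.
    const result = await client.send({ ExclusiveStartKey: lastEvaluatedKey });
    sum += (result.Items ?? []).reduce(
      (s, item) => s + parseFloat(item.amount?.N ?? "0"),
      0
    );
    lastEvaluatedKey = result.LastEvaluatedKey; // undefined on the final page
  } while (lastEvaluatedKey !== undefined);
  return sum;
}
```

A one-shot `send` would only see the first page; the loop keeps requesting until `LastEvaluatedKey` comes back undefined.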
relayEthereum(ethDeps),
relayBSC(bscDeps),
relayNineChronicles(ncDeps),
]);
Promise.all couples independent chains causing cascade failures
Medium Severity
Using Promise.all means if any single chain relay throws (e.g., BSC RPC is temporarily down), the handler rejects immediately and reports the entire Lambda invocation as failed — even though the other two relays may have completed successfully. The original index.ts ran each monitor independently, so one chain's failure didn't affect the others. Using Promise.allSettled and then checking individual results would preserve independent processing and avoid cascade failures, false failure alerts, and unnecessary re-initialization of already-completed work.
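The decoupling can be sketched in isolation. The three promises below are plain resolved/rejected stand-ins for the real relay functions, with a simulated BSC outage:

```typescript
// Sketch of the allSettled pattern: one failing relay no longer rejects the
// whole invocation, and only the failed chain is reported.
async function relayAllSketch(): Promise<string[]> {
  const relayNames = ["ethereum", "bsc", "nineChronicles"];
  const results = await Promise.allSettled([
    Promise.resolve("eth done"),
    Promise.reject(new Error("BSC RPC temporarily down")), // simulated outage
    Promise.resolve("9c done"),
  ]);
  const failed: string[] = [];
  results.forEach((result, index) => {
    if (result.status === "rejected") {
      // The Lambda handler logs result.reason here instead of rethrowing.
      failed.push(relayNames[index]);
    }
  });
  return failed;
}
```

With `Promise.all` the same outage would reject the whole batch; `allSettled` always resolves with per-promise status objects, so the other two chains' work survives.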
- infra/deploy-lambda.sh: manual local deploy script that builds TS, packages the Lambda zip, updates function code via the AWS CLI, and auto-populates DynamoDB checkpoints from live chain tips (ETH/BSC/9c) using attribute_not_exists to be safely idempotent
- .github/workflows/live-test.yml: change trigger from workflow_run to workflow_dispatch to prevent automated post-deploy CI execution; deploy and E2E testing are now both local-only to reduce the supply-chain attack surface on KMS-backed bridge infrastructure

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- burn-event-observer: add networkKey param (default "ethereum") and use it for monitorStateStore and exchangeHistoryStore calls instead of the hardcoded "ethereum" string; fixes BSC state being stored/queried under the wrong key
- lambda: pass "bsc" networkKey to bscBurnEventObserver
- lambda: use Promise.allSettled instead of Promise.all so a single chain failure does not abort the other two chains; log per-chain errors
- dynamo-exchange-history-store: paginate transferredAmountInLast24Hours (QueryCommand) and getPendingTransactions (ScanCommand) with a LastEvaluatedKey loop to handle tables larger than one DynamoDB page

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Creates IAM role with DynamoDB + KMS permissions, then creates the bridge-relayer Lambda function with all required environment variables. Run once before the first deploy-lambda.sh. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>



Summary
Replaces the always-on EC2 deployment ($133/month) with a serverless AWS Lambda + DynamoDB architecture.

Operating cost: ~$133/month → ~$8/month (KMS $6 + DynamoDB/Lambda ~$2)

Change philosophy

Existing business logic (observers, policies, signers, graphql client) is not modified at all.

Out of respect for code that has been battle-tested over years, changes are kept to the minimum scope.
Before: infinite polling loop (`TriggerableMonitor`), local SQLite file stores, `index.ts` → `lambda.ts`.

New files (no existing files modified except `package.json`):

- `bridge/src/dynamo-monitor-state-store.ts`: DynamoDB implementation of `IMonitorStateStore`, same interface as `sqlite3-monitor-state-store.ts`. Keyed by `network` (ethereum | nineChronicles | bsc); `store()`/`load()` map 1:1 to the interface.
- `bridge/src/dynamo-exchange-history-store.ts`: DynamoDB implementation of `IExchangeHistoryStore`, same interface as `sqlite3-exchange-history-store.ts`. 24-hour rolling aggregation via `network-timestamp-index`. `put()` idempotency: `ConditionExpression: attribute_not_exists(tx_id)` plus ignoring `ConditionalCheckFailedException`.
- `bridge/src/relayer.ts`: replaces the infinite loop with a single-pass run. One Lambda invocation = batch processing from the last checkpoint to the current tip. Exposes `relayEthereum(deps)` / `relayBSC(deps)` / `relayNineChronicles(deps)`.
- `bridge/src/lambda.ts`: Lambda handler replacing `index.ts`. Initialization code is kept identical to `index.ts`, swapping only: `Sqlite3MonitorStateStore` → `DynamoMonitorStateStore`, `Sqlite3ExchangeHistoryStore` → `DynamoExchangeHistoryStore`, `monitor.run()` → `Promise.all([relayEthereum, relayBSC, relayNineChronicles])`.
- `bridge/e2e/live-roundtrip.ts` + `bridge/e2e/utils.ts`: automatically verify a 120 WNCG roundtrip on the live network after every deploy.
- `.github/workflows/live-test.yml`: runs automatically after the "Deploy Lambda" workflow succeeds.
- `infra/bootstrap-dynamodb.sh`: one-command infra setup (DynamoDB tables + EventBridge rule + 300s Lambda timeout).
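The `put()` idempotency contract mentioned above (conditional write on `tx_id`, swallowing the duplicate error) can be sketched with an in-memory stand-in. `DuplicateItemError` here is a local substitute for the SDK's `ConditionalCheckFailedException`, not the real class:

```typescript
// In-memory sketch of idempotent put(): a conditional write that rejects
// duplicates, with the duplicate error swallowed so replays are no-ops. The
// real store uses ConditionExpression: attribute_not_exists(tx_id) on PutItem.
class DuplicateItemError extends Error {}

class HistorySketch {
  private rows = new Map<string, number>();

  private conditionalPut(txId: string, amount: number): void {
    // Mirrors attribute_not_exists(tx_id): fail if the row already exists.
    if (this.rows.has(txId)) throw new DuplicateItemError(txId);
    this.rows.set(txId, amount);
  }

  async put(txId: string, amount: number): Promise<void> {
    try {
      this.conditionalPut(txId, amount);
    } catch (e) {
      // Swallow only the duplicate case; anything else is a real failure.
      if (!(e instanceof DuplicateItemError)) throw e;
    }
  }

  get size(): number {
    return this.rows.size;
  }
}
```

This is what lets a Lambda invocation safely re-observe events it already recorded: the second write of the same `tx_id` is silently ignored.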
Schedule: every 5 minutes

Lambda timeout (300s) = EventBridge interval (5 minutes). At a 1-minute interval, the next run would start before the previous one finished, causing concurrent executions.

Given the nature of a bridge, a delay of a few minutes is acceptable (on-chain finality also takes several minutes).
Required new GitHub Secrets
Deployment order
1. `AWS_REGION=us-east-1 LAMBDA_FUNCTION_NAME=bridge-relayer bash infra/bootstrap-dynamodb.sh`
2. Deploy the Lambda function (the `handler` export from `bridge/src/lambda.ts`)

🤖 Generated with Claude Code
Note
Medium Risk
Changes the bridge runtime model (continuous monitors → scheduled, one-shot relays) and swaps persistence from local SQLite to DynamoDB, which can affect event coverage/duplication and operational reliability if checkpoints or indexes are misconfigured.
Overview
Migrates the bridge relayer toward a serverless execution model by introducing a new `handler` in `bridge/src/lambda.ts` that runs one-shot relays for Ethereum, BSC, and Nine Chronicles and reuses existing observers/policies.

Replaces local SQLite-backed state with DynamoDB implementations (`DynamoMonitorStateStore`, `DynamoExchangeHistoryStore`) and adds a new `bridge/src/relayer.ts` that replays from the last stored checkpoint to the confirmed tip (including chunked ETH log fetching).

Adds live post-deploy validation: a GitHub Actions workflow runs a real roundtrip E2E test (`bridge/e2e/live-roundtrip.ts`) that burns/mints 120 WNCG/NCG, invokes the Lambda, waits for receipts, and notifies Slack on failure; plus an `infra/bootstrap-dynamodb.sh` script to create the required DynamoDB tables and an EventBridge schedule/timeout.

Written by Cursor Bugbot for commit 95b2303. This will update automatically on new commits. Configure here.