
Store the transactions inputs and outputs to the DB #1110

Merged

Klakurka merged 1 commit into PayButton:master from Fabcien:inputs_outputs_db on Feb 23, 2026

Conversation


@Fabcien Fabcien commented Feb 23, 2026

This will populate the `inputAddresses` field of the Transaction when calling `getAddressDetails` from the client (endpoint `address/transactions/<address>`).

Note: I fixed a couple lint issues in the test that are unrelated but prevented me from committing the changes.

Test Plan:
yarn run docker test

Summary by CodeRabbit

  • New Features
    • Transactions now display detailed input and output information with associated addresses and amounts.


coderabbitai bot commented Feb 23, 2026

📝 Walkthrough

The pull request introduces transaction input and output tracking by creating two new database tables (TransactionInput and TransactionOutput), updating the Prisma schema with corresponding models and relationships, and modifying services to collect addresses from transactions, upsert them, and establish database relationships between transactions and their associated addresses.
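The collect-deduplicate-upsert flow described above can be sketched in isolation. In this sketch, `upsertAddress` is a stub standing in for the real AddressService call, and the address strings are invented for illustration; only the overall shape matches the PR.

```typescript
// Sketch of the flow: gather the unique input/output addresses of a
// transaction, upsert each one, and build an address → id map.
type AddressedAmount = { address: string, amount: number }

const inputAddresses: AddressedAmount[] = [
  { address: 'ecash:qpAAA', amount: 3 },
  { address: 'ecash:qpBBB', amount: 1 }
]
const outputAddresses: AddressedAmount[] = [
  { address: 'ecash:qpAAA', amount: 2 }, // repeats an input address
  { address: 'ecash:qpCCC', amount: 2 }
]

// Stub standing in for the DB upsert; returns a stable id per address.
let nextId = 0
const seen = new Map<string, string>()
async function upsertAddress (address: string): Promise<{ id: string }> {
  if (!seen.has(address)) seen.set(address, `addr-${nextId++}`)
  return { id: seen.get(address) as string }
}

async function buildAddressIdMap (): Promise<Map<string, string>> {
  // Deduplicate across inputs and outputs before hitting the DB.
  const unique = [...new Set([
    ...inputAddresses.map(a => a.address),
    ...outputAddresses.map(a => a.address)
  ])]
  const map = new Map<string, string>()
  await Promise.all(unique.map(async (addr) => {
    const { id } = await upsertAddress(addr)
    map.set(addr, id)
  }))
  return map
}
```

The resulting map is then used to attach `addressId`s to the inputs/outputs in the transaction payload.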

Changes

Cohort / File(s) | Summary

  • Database Schema & Migration — prisma-local/migrations/.../migration.sql, prisma-local/schema.prisma
    Added TransactionInput and TransactionOutput tables with foreign keys to Transaction and Address; extended Address and Transaction models with bidirectional relations and indexes on transactionId and addressId.
  • Chronik Service — services/chronikService.ts
    Added address upsert logic to collect all unique input/output addresses from transactions, build an address-to-ID map, and populate inputs/outputs with corresponding addressIds in the transaction payload.
  • Transaction Service — services/transactionService.ts
    Updated query includes to eagerly load inputs and outputs with address relations and index ordering; modified getSimplifiedTrasaction to derive resolved addresses from transaction inputs/outputs when not explicitly provided.
  • Service Tests — tests/unittests/transactionService.test.ts
    Added test coverage for getSimplifiedTrasaction using inputs/outputs from transaction objects; updated query include structures in test mocks to match service changes.

Sequence Diagram

sequenceDiagram
    participant Chronik as Chronik API
    participant ChronikSvc as ChronikService
    participant AddrSvc as AddressService
    participant DB as Database
    participant TxnSvc as TransactionService

    Chronik->>ChronikSvc: Provide transaction with inputs/outputs
    ChronikSvc->>ChronikSvc: Collect & deduplicate input/output addresses
    loop For each unique address
        ChronikSvc->>AddrSvc: upsertAddress(address)
        AddrSvc->>DB: Upsert Address record
        DB-->>AddrSvc: Return Address with ID
        AddrSvc-->>ChronikSvc: Return addressId
    end
    ChronikSvc->>ChronikSvc: Build address string → ID map
    ChronikSvc->>ChronikSvc: Populate inputs/outputs with addressIds
    ChronikSvc->>DB: Create Transaction with inputs/outputs
    DB-->>ChronikSvc: Return Transaction
    ChronikSvc-->>Chronik: Return enriched transaction
    
    Chronik->>TxnSvc: Request simplified transaction
    TxnSvc->>DB: Query transaction with inputs/outputs/address relations
    DB-->>TxnSvc: Return hydrated transaction
    TxnSvc->>TxnSvc: Resolve addresses from inputs/outputs
    TxnSvc-->>Chronik: Return SimplifiedTransaction

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Suggested labels

enhancement (UI/UX/feature)

Suggested reviewers

  • chedieck

Poem

🐰 Inputs and outputs hopping through the chain,
Addresses tracked in the database domain,
No orphaned links, no data in vain,
Transaction threads woven with care and refrain!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Description check — ⚠️ Warning. The description is incomplete: it lacks the 'Related to #' issue reference, a detailed description of changes, and proper formatting according to the template. Resolution: add the related issue reference, expand the description section with technical details about the changes, and format according to the provided template.
✅ Passed checks (2 passed)
  • Title check — ✅ Passed. The title accurately describes the main change: adding database storage for transaction inputs and outputs, which is the primary focus of this PR.
  • Docstring Coverage — ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.


@Klakurka Klakurka self-requested a review February 23, 2026 15:40

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
prisma-local/schema.prisma (1)

93-117: Consider adding a composite unique constraint on (transactionId, index) and reviewing onDelete semantics for addressId.

Two independent observations:

  1. Missing uniqueness guard: Neither TransactionInput nor TransactionOutput has a @@unique([transactionId, index]) constraint. Application logic prevents duplicates today (inputs/outputs only exist in the create branch of the upsert), but a DB-level constraint would be a safer guard.

  2. Cascade semantics on addressId: onDelete: Cascade means that deleting any participant Address (not the owning address of the Transaction) will silently delete the corresponding TransactionInput/TransactionOutput rows. The parent Transaction record survives but with a partial inputs/outputs view. onDelete: Restrict would be the safer default here, preventing deletion of an address that's still referenced as a participant.

♻️ Suggested constraint additions
 model TransactionInput {
   id            String      @id @default(dbgenerated("(uuid())"))
   transactionId String
   addressId     String
   index         Int
   transaction   Transaction @relation(fields: [transactionId], references: [id], onDelete: Cascade)
-  address       Address     @relation(fields: [addressId], references: [id], onDelete: Cascade)
+  address       Address     @relation(fields: [addressId], references: [id], onDelete: Restrict)
   amount        Decimal     @db.Decimal(24, 8)

   @@index([transactionId])
   @@index([addressId])
+  @@unique([transactionId, index])
 }

 model TransactionOutput {
   id            String      @id @default(dbgenerated("(uuid())"))
   transactionId String
   addressId     String
   index         Int
   transaction   Transaction @relation(fields: [transactionId], references: [id], onDelete: Cascade)
-  address       Address     @relation(fields: [addressId], references: [id], onDelete: Cascade)
+  address       Address     @relation(fields: [addressId], references: [id], onDelete: Restrict)
   amount        Decimal     @db.Decimal(24, 8)

   @@index([transactionId])
   @@index([addressId])
+  @@unique([transactionId, index])
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prisma-local/schema.prisma` around lines 93 - 117, Add a DB-level uniqueness
constraint and tighten delete semantics: in models TransactionInput and
TransactionOutput add a composite unique constraint @@unique([transactionId,
index]) to prevent duplicate index entries per transaction, and change the
address relation onDelete from Cascade to Restrict (the relation that references
address via addressId) so deleting an Address that is still referenced by
inputs/outputs is prevented rather than silently cascading deletes; leave the
transaction relation onDelete as-is if you want transaction deletions to
cascade.
services/transactionService.ts (1)

57-60: Unnecessary unsafe cast — TransactionWithAddressAndPrices already includes inputs/outputs.

Since includeAddressAndPrices now includes inputs and outputs, the Prisma-generated TransactionWithAddressAndPrices type exposes them directly as non-optional arrays. The explicit as { inputs?: ... } cast is redundant and opts out of type-checking unnecessarily.

♻️ Replace casts with direct field access
-  const dbInputsArr = (tx as { inputs?: Array<{ address: { address: string }, amount: Prisma.Decimal }> }).inputs
-  const dbOutputsArr = (tx as { outputs?: Array<{ address: { address: string }, amount: Prisma.Decimal }> }).outputs
-  const resolvedInputAddresses = inputAddresses ?? (Array.isArray(dbInputsArr) ? dbInputsArr.map(i => ({ address: i.address.address, amount: i.amount })) : [])
-  const resolvedOutputAddresses = outputAddresses ?? (Array.isArray(dbOutputsArr) ? dbOutputsArr.map(o => ({ address: o.address.address, amount: o.amount })) : [])
+  const resolvedInputAddresses = inputAddresses ?? tx.inputs.map(i => ({ address: i.address.address, amount: i.amount }))
+  const resolvedOutputAddresses = outputAddresses ?? tx.outputs.map(o => ({ address: o.address.address, amount: o.amount }))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/transactionService.ts` around lines 57 - 60, The explicit unsafe
casts for dbInputsArr/dbOutputsArr are unnecessary and bypass type-checking;
instead use the Prisma-generated TransactionWithAddressAndPrices fields directly
(access tx.inputs and tx.outputs) when building resolvedInputAddresses and
resolvedOutputAddresses, removing the `as { inputs?: ... }`/`as { outputs?: ...
}` casts and keeping the existing Array.isArray checks and mapping logic to
extract { address: ..., amount: ... } from each entry.
services/chronikService.ts (1)

323-332: index values can be sparse after filtering — consider re-indexing.

The index is assigned as the position i in the amount-sorted array before the .filter() removes entries with undefined addressId. If any addresses fail lookup (e.g., indices 1, 3), the stored rows have non-contiguous indices (0, 2, 4). The orderBy: { index: 'asc' } still returns results in the correct relative order, but the field semantics are surprising.

Additionally, index here represents amount-rank (descending sort position), not the original Chronik UTXO index. This naming could be confusing.
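The sparse-index failure mode and the fix can be seen in a few lines of plain TypeScript. Here `lookup` stands in for the addressIdMap built earlier, and the entry names are invented for the example:

```typescript
// Demonstrates why assigning indices before filtering leaves gaps, and how
// filtering first then re-indexing keeps indices contiguous.
const lookup = new Map<string, string>([['a', 'id-a'], ['c', 'id-c']])
const entries = [{ address: 'a' }, { address: 'b' }, { address: 'c' }]

// Index before filtering: 'b' has no id and is dropped, leaving indices 0 and 2.
const sparse = entries
  .map(({ address }, i) => ({ addressId: lookup.get(address), index: i }))
  .filter((e): e is { addressId: string, index: number } => e.addressId !== undefined)

// Filter first, then index: indices come out contiguous (0 and 1).
const contiguous = entries
  .map(({ address }) => ({ addressId: lookup.get(address) }))
  .filter((e): e is { addressId: string } => e.addressId !== undefined)
  .map((e, i) => ({ ...e, index: i }))
```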

♻️ Re-index after filtering
-      inputs: {
-        create: inputAddresses
-          .map(({ address: addr, amount: amt }, i) => ({ addressId: getAddressId(addr), index: i, amount: amt }))
-          .filter((item): item is { addressId: string, index: number, amount: Prisma.Decimal } => item.addressId !== undefined)
-      },
-      outputs: {
-        create: outputAddresses
-          .map(({ address: addr, amount: amt }, i) => ({ addressId: getAddressId(addr), index: i, amount: amt }))
-          .filter((item): item is { addressId: string, index: number, amount: Prisma.Decimal } => item.addressId !== undefined)
-      }
+      inputs: {
+        create: inputAddresses
+          .map(({ address: addr, amount: amt }) => ({ addressId: getAddressId(addr), amount: amt }))
+          .filter((item): item is { addressId: string, amount: Prisma.Decimal } => item.addressId !== undefined)
+          .map((item, i) => ({ ...item, index: i }))
+      },
+      outputs: {
+        create: outputAddresses
+          .map(({ address: addr, amount: amt }) => ({ addressId: getAddressId(addr), amount: amt }))
+          .filter((item): item is { addressId: string, amount: Prisma.Decimal } => item.addressId !== undefined)
+          .map((item, i) => ({ ...item, index: i }))
+      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/chronikService.ts` around lines 323 - 332, The current mapping for
inputs and outputs assigns index = i before filtering undefined addressIds,
producing sparse non-contiguous indices and mislabeling the field as the UTXO
index; update the logic in the inputs and outputs creation (where
inputAddresses/outputAddresses are mapped and filtered using getAddressId) to
first filter out entries with undefined addressId, then re-map them assigning a
new contiguous index (or rename the field to amountRank if you mean the rank in
amount-sorted order) so stored rows have contiguous indices that accurately
reflect amount-rank or the true Chronik UTXO index.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2c9f05a and be2c605.

📒 Files selected for processing (5)
  • prisma-local/migrations/20260223120000_add_transaction_inputs_outputs/migration.sql
  • prisma-local/schema.prisma
  • services/chronikService.ts
  • services/transactionService.ts
  • tests/unittests/transactionService.test.ts

Comment on lines +291 to +306
const uniqueAddressStrings = [...new Set([
  ...inputAddresses.map(({ address: addr }) => addr),
  ...outputAddresses.map(({ address: addr }) => addr)
])]
const addressIdMap = new Map<string, string>()
await Promise.all(
  uniqueAddressStrings.map(async (addrStr) => {
    try {
      const parsed = parseAddress(addrStr)
      const addr = await upsertAddress(parsed)
      addressIdMap.set(parsed, addr.id)
    } catch {
      // Skip invalid addresses: don't upsert, don't add to map
    }
  })
)

@coderabbitai coderabbitai bot Feb 23, 2026


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

rg -n "fetchAllAddressesForNetworkId" --type=ts -A 30

Repository: PayButton/paybutton-server

Length of output: 13076


🏁 Script executed:

# Find the addressService file(s) to examine the implementation
fd -e ts -path "*address*" | head -20

Repository: PayButton/paybutton-server

Length of output: 238


🏁 Script executed:

# Also search for where fetchAllAddressesForNetworkId is defined
rg -n "function fetchAllAddressesForNetworkId\|fetchAllAddressesForNetworkId\s*[:=]" --type=ts -B 2 -A 20

Repository: PayButton/paybutton-server

Length of output: 52


fetchAllAddressesForNetworkId returns ALL addresses — including participant records created by upsertAddress.

The implementation at services/addressService.ts:181–187 filters only by networkId with no user association check:

export async function fetchAllAddressesForNetworkId (networkId: number): Promise<Address[]> {
  return await prisma.address.findMany({
    where: {
      networkId
    }
  })
}

When upsertAddress creates Address records for participant addresses (lines 291–306), they persist in the database without user association. On the next server restart:

  1. subscribeInitialAddresses() (line 978) will subscribe to ALL addresses including these participant records.
  2. syncMissedTransactions() (line 960) will sync transactions for all participant addresses.
  3. Each synced transaction creates more participant addresses from inputs/outputs, and the next restart repeats the process — unbounded growth.

This is a fundamental behavioral change: the Address table now tracks all transaction participants, not just user-registered addresses, and the system automatically enrolls them for syncing at restart.
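One shape the suggested fix could take is sketched below. The `userConnections` field is an assumption for illustration, not necessarily the real schema; the point is only that the subscribe/sync paths would filter out participant-only records:

```typescript
// Hypothetical sketch: only subscribe/sync addresses that have some user
// association, skipping participant rows created by upsertAddress.
type Address = { id: string, address: string, userConnections: string[] }

const all: Address[] = [
  { id: '1', address: 'ecash:registered', userConnections: ['user-1'] },
  { id: '2', address: 'ecash:participant', userConnections: [] } // created by upsertAddress
]

// Would replace fetchAllAddressesForNetworkId in the subscribe/sync paths.
function fetchUserRegisteredAddresses (addresses: Address[]): Address[] {
  return addresses.filter(a => a.userConnections.length > 0)
}
```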

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/chronikService.ts` around lines 291 - 306, The code currently calls
upsertAddress for every parsed input/output address (in the uniqueAddressStrings
map) which creates DB Address rows with no user association and causes unbounded
enrollment; instead, stop creating new "participant" addresses: for each parsed
address (from parseAddress) first query for an existing address record (use
fetchAddressByNetworkIdAndAddress or equivalent lookup) and only set
addressIdMap if an existing Address is found; do NOT call upsertAddress for
addresses that would create a record with no user association. Update the
uniqueAddressStrings mapping logic to skip creation and only populate
addressIdMap from existing addresses to prevent persisting participant-only
addresses.

@Fabcien (Collaborator, Author) commented:

The issue here is not the change but the lack of filtering on the subscription. This should be fixed separately

@coderabbitai bot (Contributor) replied:

@Fabcien, understood! You're right that the subscription and sync logic in fetchAllAddressesForNetworkId (and related functions like subscribeInitialAddresses() and syncMissedTransactions()) should filter to only user-registered addresses rather than all participant addresses that get upserted.

Keeping the current change in getTransactionFromChronikTransaction to store inputs/outputs makes sense as a separate concern from how we manage subscriptions on server restart.

Would you like me to create a follow-up issue to track the work needed to add filtering logic to prevent subscribing/syncing participant-only addresses?


✏️ Learnings added
Learnt from: Fabcien
Repo: PayButton/paybutton-server PR: 1110
File: services/chronikService.ts:291-306
Timestamp: 2026-02-23T15:44:26.406Z
Learning: In `services/chronikService.ts`, the `upsertAddress` calls for transaction input/output addresses are intentional to store participant data. The subscription filtering (e.g., in `fetchAllAddressesForNetworkId`) should be addressed separately to distinguish user-registered addresses from participant addresses.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

@Klakurka Klakurka added the enhancement (behind the scenes) Stuff that users won't see label Feb 23, 2026
@Klakurka Klakurka added this to the Phase 3 milestone Feb 23, 2026
@Klakurka Klakurka (Member) left a comment:

Ok nice. I was expecting a single new table for this but 2 is fine.

@Klakurka Klakurka merged commit 070f7ba into PayButton:master Feb 23, 2026
3 checks passed
@Fabcien Fabcien deleted the inputs_outputs_db branch February 23, 2026 18:59
Fabcien added a commit to Fabcien/paybutton-server that referenced this pull request Feb 27, 2026
It's unclear what the use case is and it was added in PayButton#1110 to avoid having to do it later.
It appears that this table is costly as it contains a lot of outputs.

Let's remove it for now to favor disk size and performance. We can add it back later if we need it.
Klakurka pushed a commit that referenced this pull request Feb 27, 2026
* Store transaction inputs/outputs address as string

This relaxes some constraints on the database and improves performance.

* Reduce concurrency

For the chronik calls, and the db batch sizes. This prevents overly long db transactions from impacting performance and allows work on the chronik transactions to start sooner.

* Update the logic in createManyTransactions

This avoids initiating a DB transaction for each tx and favors bulk operations instead. Since it also removes the batch processing, it eliminates the db connection-limit issue entirely when an address has too many txs.

* Optimize getTransactionFromChronikTransaction

This is a 2-step optimization:
 - remove unneeded async in satoshisToUnit() callsites, which reduces locks
 - optimize the inputs and outputs processing by avoiding looping several times, copying arrays, and doing redundant formatting/checks.

This also fixes a logging error and a loop exit condition that could potentially cause an infinite loop. Inspired by 6298e84 and c4cb339.

This is a ~500x improvement on my machine.
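The multi-pass versus single-pass distinction the commit describes can be illustrated with a toy example. This is illustrative only: the data and the sats-to-unit divisor are invented, and the ~500x figure comes from the real code, not from this sketch.

```typescript
// Multi-pass (filter + map, each allocating a new array) versus a single
// loop producing one result array over the same data.
const outputs = [
  { address: 'ecash:qpX', sats: 1000n },
  { address: '', sats: 50n },          // unparseable script, skipped
  { address: 'ecash:qpY', sats: 250n }
]

// Multi-pass: two intermediate array allocations.
const multiPass = outputs
  .filter(o => o.address !== '')
  .map(o => ({ address: o.address, amount: Number(o.sats) / 100 }))

// Single-pass: one loop, one result array, no copies.
const singlePass: Array<{ address: string, amount: number }> = []
for (const o of outputs) {
  if (o.address === '') continue
  singlePass.push({ address: o.address, amount: Number(o.sats) / 100 })
}
```

Both produce the same result; the single-pass form simply avoids the intermediate allocations, which matters at the volumes the commit describes.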

* fix: one network at a time

* fix: current prices after initial sync

* Fix the chronik service tests

* Remove TransactionOutput from the database

It's unclear what the use case is and it was added in #1110 to avoid having to do it later.
It appears that this table is costly as it contains a lot of outputs.

Let's remove it for now to favor disk size and performance. We can add it back later if we need it.

---------

Co-authored-by: Estevão <estevao@chedieck.com>

Labels

enhancement (behind the scenes) Stuff that users won't see
