Closed
Labels: good first issue, good first review, help wanted
Description
You'll need to configure an Iceberg catalog for your data. Iceberg stores table data in file formats such as Parquet or Avro. First, set up local or cloud object storage (e.g., S3, GCS) to hold the Iceberg tables.
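Before `load_catalog("my_catalog")` can resolve anything, pyiceberg needs connection details for that catalog, typically supplied in a `~/.pyiceberg.yaml` file or via environment variables. A minimal sketch, assuming a local SQLite-backed SQL catalog and a local warehouse path purely for development (both hypothetical; swap in your S3/GCS bucket and a production catalog type for real use):

```yaml
catalog:
  my_catalog:
    type: sql
    uri: sqlite:///iceberg_catalog.db
    warehouse: file:///tmp/iceberg_warehouse
```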
```python
from pyiceberg.catalog import load_catalog
from pyiceberg.schema import Schema
from pyiceberg.types import LongType, NestedField, StringType

# Load a configured catalog (SQL, REST, Hive, AWS Glue, etc.)
catalog = load_catalog("my_catalog")

# Define the schema for the Iceberg table, based on the structure of the
# BigQuery dataset. pyiceberg expects a Schema of typed NestedFields
# rather than a plain dict of type names.
schema = Schema(
    NestedField(field_id=1, name="transaction_hash", field_type=StringType(), required=False),
    NestedField(field_id=2, name="signer_account_id", field_type=StringType(), required=False),
    NestedField(field_id=3, name="block_timestamp", field_type=LongType(), required=False),
    NestedField(field_id=4, name="actions", field_type=StringType(), required=False),
)

# Create an Iceberg table for NEAR transactions. The table is unpartitioned
# by default; pass a PartitionSpec to partition, e.g., by date or block.
transactions_table = catalog.create_table(
    identifier="near.transactions",
    schema=schema,
)
```
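Rows exported from BigQuery will need to be mapped onto this schema before being written to the table. A minimal sketch of that mapping, assuming a hypothetical row shape (`block_timestamp` arriving as a string epoch value and `actions` as a nested list, which is serialized to a JSON string to match the `string` column above):

```python
import json

def to_iceberg_row(bq_row):
    """Map one hypothetical BigQuery NEAR transaction row onto the Iceberg schema."""
    return {
        "transaction_hash": bq_row["transaction_hash"],
        "signer_account_id": bq_row["signer_account_id"],
        # block_timestamp is a long in the Iceberg schema; coerce from the exported value
        "block_timestamp": int(bq_row["block_timestamp"]),
        # actions is a string column; serialize the nested structure as JSON
        "actions": json.dumps(bq_row["actions"]),
    }

# Example usage with a made-up row:
sample = {
    "transaction_hash": "abc123",
    "signer_account_id": "alice.near",
    "block_timestamp": "1700000000000000000",
    "actions": [{"kind": "Transfer", "deposit": "1"}],
}
row = to_iceberg_row(sample)
```

Batches of such rows can then be written to `transactions_table` (e.g., via a pyarrow Table) rather than appended one at a time.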