Workarounds for Iceberg max partitions per writer limit #469
colatkinson asked this question in Q&A (unanswered, 0 replies)
I've just started playing around with both dbt and this adapter, so apologies in advance if there's something obvious I'm missing.

I'm attempting to use a `table` materialization to create a partitioned Iceberg table from unpartitioned Hive data. This works well for small subsets of the data, but on larger datasets it quickly hits the max partitions per writer limit of 100. While I could bump this limit, that seems like just pushing the issue off rather than actually handling it correctly. I came across this issue for the Athena adapter, where ultimately it seems the solution was to modify the macro SQL to catch this problem and insert in batches. But as far as I understand, that would require modifying the adapter itself.
Is there a recommended approach/pattern for handling this within dbt?
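In case it helps clarify what I'm after, here's the rough shape of a batched approach I've been considering: an incremental model where each run appends only a bounded window of partition values, so a single insert stays under the writer limit. This is only a hypothetical sketch, not something from the adapter docs; the source name, the `event_date` column, and the `--vars` batch window are placeholders for my actual data, it assumes the adapter's default incremental behavior is an append, and the Iceberg partitioning declaration itself (adapter-specific table properties) is omitted.

```sql
-- Hypothetical sketch of a batched load: each `dbt run` appends one bounded
-- window of partition values, so no single insert exceeds the writer's
-- max-partitions limit. Names and the var-driven window are placeholders.
{{ config(materialized='incremental') }}

select *
from {{ source('hive', 'raw_events') }}  -- placeholder: unpartitioned Hive source
-- batch window supplied per invocation, e.g.
-- dbt run --vars '{"start_date": "2023-01-01", "end_date": "2023-02-01"}'
where event_date >= date '{{ var("start_date") }}'
  and event_date <  date '{{ var("end_date") }}'
```

The obvious downside is that the batching has to be orchestrated outside dbt (repeated runs with different vars), which feels like working around dbt rather than with it, hence the question about whether there's an established pattern.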