We have a Trino cluster running on 2 nodes, using the Iceberg connector with tables created through dbt. Most tables are partitioned by a run date or by buckets. In all of my tables the files are really small (hundreds of files under 1 MB). Even if I run something like:

ALTER TABLE iceberg.dev.my_table EXECUTE optimize(file_size_threshold => '256MB');

I still end up with KB-sized files, and the file count actually grows.

What am I missing that will actually compact the files into larger ones?
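For context, the current file layout can be inspected with the Iceberg connector's metadata tables. A minimal diagnostic sketch, assuming a Trino version whose Iceberg connector exposes the $partitions and $files metadata tables (my_table is the example table from above):

-- Files and bytes per partition; OPTIMIZE merges files within a
-- partition, so many tiny partitions cannot produce large files
SELECT partition, file_count, total_size
FROM iceberg.dev."my_table$partitions"
ORDER BY file_count DESC;

-- Overall data file count and average file size
SELECT count(*) AS files, avg(file_size_in_bytes) AS avg_bytes
FROM iceberg.dev."my_table$files";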