Hi again! Wondering if there's any way to make checkpoint creation faster. Our current Delta table has ~300,000 underlying Parquet files (1.6 TB) and a `_delta_log` with ~15,000 transaction files. About 7,000 files (40 GB) are added every day. The oxbow lambda takes about 13-14 minutes to create each checkpoint, and we worry it will soon hit the 15-minute Lambda maximum. Any ideas for how we can reduce the time needed for checkpointing? Thank you!
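For reference, my understanding is that the expensive step is essentially delta-rs's standard checkpoint routine. Here is a minimal standalone sketch of the equivalent call (this is an assumption about what the lambda boils down to; the table URI is a placeholder, and it assumes the `deltalake` Rust crate with its S3 storage support enabled):

```rust
// Sketch only: assumes deltalake (delta-rs) and tokio as dependencies, with the
// crate's S3 feature enabled so that s3:// URIs resolve. The URI below is a placeholder.
use deltalake::checkpoints::create_checkpoint;
use deltalake::open_table;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Opening the table replays the transaction log since the last checkpoint.
    let table = open_table("s3://example-bucket/example-table").await?;

    // Writes the checkpoint Parquet file plus _last_checkpoint for the current
    // table version; this is the step that currently takes ~13-14 minutes for us.
    create_checkpoint(&table).await?;
    Ok(())
}
```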
P.S. Sorry for opening all these issues! Thanks for your quick responses!
@connieksun there is some general guidance I could offer here, but it's not going to be very applicable given the high scale and throughput of this table. Through Buoyant Data I offer support to commercial customers; that would allow us to sign an NDA so I could look more deeply at the table structure and help improve checkpoint performance.