chore: Reserve memory for native shuffle writer per partition #1022
Conversation
…r per partition (apache#988)" (apache#1020)" This reverts commit 8d097d5.
Force-pushed: ff3b262 to 851427f
Force-pushed: 72c7a0c to 2d5478a
I am testing this PR out now with benchmarks.
I am testing with TPC-H sf=100. I usually test with one executor and 8 cores, but with this PR I can only run with a single core. I tried with 2 cores with this config:
The job fails with:
I will try it with sf=100.
/// The difference in memory usage after appending rows
MemDiff(Result<isize>),
Will this always be an increase in memory? Should this use usize?
It could be a decrease too, if a flush happens.
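As a small illustration of that point (a hypothetical helper, not the actual Comet code): the reported delta has to be signed because a flush can shrink the builders between measurements.

// Minimal sketch: the memory delta reported after appending rows can be
// negative when a flush releases builder memory, hence isize rather than usize.
fn mem_diff(before: usize, after: usize) -> isize {
    // A plain usize subtraction would underflow whenever `after < before`.
    after as isize - before as isize
}

fn main() {
    let grow = mem_diff(1024, 4096);  // rows appended, builders grew
    let shrink = mem_diff(4096, 512); // flush happened, memory was released
    println!("grow = {grow}, shrink = {shrink}"); // grow = 3072, shrink = -3584
}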
// Cannot allocate enough memory for the array builders in the partition,
// spill partitions and retry.
self.spill().await?;
self.reservation.free();
I forgot to free the memory reservation in the previous commit.
@andygrove Could you try running the benchmarks again? Thanks.
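For reference, a rough standalone sketch of the fixed flow using DataFusion's memory pool types (an illustration with assumed sizes and a stand-in spill function, not the exact Comet shuffle writer): the key step is freeing the writer's reservation after spilling so the retry starts from a clean accounting state.

use std::sync::Arc;
use datafusion::common::Result;
use datafusion::execution::memory_pool::{GreedyMemoryPool, MemoryConsumer, MemoryPool};

// Stand-in for spilling buffered partition data to disk (hypothetical).
fn spill_partitions() -> Result<()> {
    Ok(())
}

fn main() -> Result<()> {
    // A small pool so the second grow fails, mimicking a full memory pool.
    let pool: Arc<dyn MemoryPool> = Arc::new(GreedyMemoryPool::new(1024));
    let mut reservation = MemoryConsumer::new("shuffle_writer").register(&pool);

    reservation.try_grow(800)?;
    if reservation.try_grow(800).is_err() {
        // Cannot allocate enough memory for the array builders: spill first...
        spill_partitions()?;
        // ...then free this writer's reservation so the retry starts from a
        // clean accounting state (the step the earlier commit missed).
        reservation.free();
        reservation.try_grow(800)?;
    }
    Ok(())
}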
I no longer see the memory error, but there seems to be a significant performance regression. TPC-H q2 used to take 12 seconds and is now taking many minutes. I do not see spilling happening in the Spark UI. I am going to add some debug logging to try to understand what is happening.
In native metrics, I do see excessive spilling:
spill_count=8, spilled_bytes=19441254400, data_size=877436
Okay. I will look into it further.
I guess that is because we previously used some memory silently and never reported it to the reservation, such as the memory used by the array builders; now we account for it. So under the same memory settings, you are more likely to hit the memory pool limit. Have you tried increasing the Comet memory, e.g. spark.comet.memoryOverhead?
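To illustrate why the same workload now hits the pool sooner (hypothetical partition counts, sizes, and consumer names, not Comet's actual accounting): once the previously untracked builder memory is registered against the pool, later partitions fail to grow and must spill unless the pool is enlarged.

use std::sync::Arc;
use datafusion::execution::memory_pool::{GreedyMemoryPool, MemoryConsumer, MemoryPool};

fn main() {
    // Illustrative numbers: 200 shuffle partitions, ~1 MiB of array builders
    // each, against a 100 MiB pool.
    let pool: Arc<dyn MemoryPool> = Arc::new(GreedyMemoryPool::new(100 * 1024 * 1024));
    let mut reservations: Vec<_> = (0..200)
        .map(|i| MemoryConsumer::new(format!("partition-{i}")).register(&pool))
        .collect();

    let mut would_spill = 0;
    for r in reservations.iter_mut() {
        // Before this change these bytes were used but never reported to the
        // pool; once they are reported, later partitions fail to grow and must
        // spill unless the pool size (e.g. via spark.comet.memoryOverhead) grows.
        if r.try_grow(1024 * 1024).is_err() {
            would_spill += 1;
        }
    }
    println!("{would_spill} of 200 partitions would need to spill");
}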
I tried setting the overhead:
--conf spark.executor.instances=1 \
--conf spark.executor.memory=16G \
--conf spark.executor.cores=8 \
--conf spark.cores.max=8 \
--conf spark.memory.offHeap.enabled=true \
--conf spark.memory.offHeap.size=20g \
--conf spark.comet.memoryOverhead=16g \
This did not help with performance:
Query 2 took 352.05119466781616 seconds
With the latest commit, I ran TPC-H sf=100 locally and no longer see the regression. Can you verify it?
Looks good!
Query 2 took 12.09787917137146 seconds
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@             Coverage Diff              @@
##               main    #1022      +/-   ##
============================================
+ Coverage     34.30%   34.43%   +0.13%
- Complexity      887      898      +11
============================================
  Files           112      112
  Lines         43429    43538     +109
  Branches       9623     9660      +37
============================================
+ Hits          14897    14994      +97
- Misses        25473    25479       +6
- Partials       3059     3065       +6

☔ View full report in Codecov by Sentry.
Thanks @viirya
Thanks @andygrove
Which issue does this PR close?
Closes #1019.
Rationale for this change
This restores the patch merged in #988, which caused issue #1019. This PR includes a fix for that issue.
What changes are included in this PR?
How are these changes tested?
Manually ran the TPC-H benchmark locally.