Wait for data to be drained before writing new central file header #92
When adding a very large number of files to an archive, the zip module will likely run out of heap memory while writing the central file headers.
Here is a small test for reproduction, where `testfile` can be an empty file: https://gist.github.com/falk0pr0ss/d92f77c342f2505b40faed42946f73b4
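In essence, a reproduction of this kind appends a very large number of entries to a single archive. The sketch below is hypothetical (it assumes the `archiver` package and an arbitrary entry count); the actual test in the gist may differ:

```js
// Hypothetical reproduction sketch (assumes the archiver package; the gist's
// actual test may differ). 'testfile' can be an empty file; only the number
// of entries matters.
const fs = require('fs');
const archiver = require('archiver');

const output = fs.createWriteStream('out.zip');
const archive = archiver('zip');
archive.pipe(output);

for (let i = 0; i < 100000; i++) {
  archive.file('testfile', { name: `entry-${i}.txt` });
}

archive.finalize();
```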
Here is the behaviour before the proposed fix. Memory usage climbs sharply because the `writeCentralFileHeader` method is called repeatedly without waiting for the stream to drain, eventually causing an out-of-heap-memory exception:
Here is the behaviour with the proposed fix. The risk of an out-of-heap-memory exception is greatly reduced by waiting for the `drain` event whenever the `write` method returns `false`:
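This follows the standard Node.js backpressure convention: when `Writable#write()` returns `false`, the writer should stop and wait for the `drain` event before writing more data. A minimal sketch of that pattern (not this PR's actual implementation; the helper name and chunk loop are illustrative):

```js
const { once } = require('events');

// Illustrative helper (not from this PR): write a sequence of chunks while
// honouring backpressure. When write() returns false the stream's internal
// buffer is full, so wait for 'drain' instead of queueing more data in memory.
async function writeAllRespectingBackpressure(stream, chunks) {
  for (const chunk of chunks) {
    if (!stream.write(chunk)) {
      await once(stream, 'drain');
    }
  }
}
```

Applied to the zip module, the idea is the same: after writing a central file header, check the return value of `write` and defer the next header until `drain` fires.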