
Truncated core file when COMP_COMPRESSION is set to "true" #165

Open
amikugup opened this issue Oct 16, 2024 · 5 comments

Comments

@amikugup

We are observing a strange issue with the IBM core dump handler: we get a truncated core file when the COMP_COMPRESSION flag is set to "true". gdb complains about the truncated file, and the core file size is close to 900 MB while gdb expects a core file of about 3 GB.

We didn't see any such issue when we turned compression off; we got a full core file and gdb is happy.
Is this a known issue with the compression flag?
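(For anyone reproducing this, one quick way to confirm the truncation at the archive level is to compare each entry's declared uncompressed size with what can actually be read back. A minimal sketch using the zip crate follows; the archive path is hypothetical.)

```rust
use std::fs::File;
use std::io;
use zip::ZipArchive;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical path to the archive the handler uploaded.
    let mut archive = ZipArchive::new(File::open("core.zip")?)?;
    for i in 0..archive.len() {
        let mut entry = archive.by_index(i)?;
        let declared = entry.size(); // uncompressed size recorded in the archive
        // Drain the entry; a truncated deflate stream either errors out
        // or yields fewer bytes than declared.
        let actual = io::copy(&mut entry, &mut io::sink())?;
        println!("{}: declared {} bytes, read {} bytes", entry.name(), declared, actual);
    }
    Ok(())
}
```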

@pereyra-m

Hi.

I'm running into problems with big dumps too; they can't be read with gdb.
I'll try without compression.

@No9
Collaborator

No9 commented Nov 6, 2024

Let me know how you get on.
We use the zip crate and just use 'COMP_COMPRESSION' as a flag, so I'd say it's likely a bug in that crate.

zip::CompressionMethod::Deflated

It looks like a lot has been added to zip, as it's now on version 2.2.0, so a PR with a bump would be appreciated.
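For context, writing a core file into an archive with Deflated compression via the zip crate looks roughly like this. This is only a minimal sketch, not the handler's actual code: the paths are made up, and the API shown is the 0.6.x-style FileOptions, which 2.x renames to SimpleFileOptions.

```rust
use std::fs::File;
use std::io;
use zip::write::FileOptions;
use zip::{CompressionMethod, ZipWriter};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical input core and output archive paths.
    let mut core = File::open("/tmp/core.12345")?;
    let out = File::create("/tmp/core.12345.zip")?;

    let mut zip = ZipWriter::new(out);
    let options = FileOptions::default()
        .compression_method(CompressionMethod::Deflated)
        // Opt into ZIP64, which the zip crate requires for entries over 4 GiB.
        .large_file(true);

    zip.start_file("core.12345", options)?;
    io::copy(&mut core, &mut zip)?;
    zip.finish()?;
    Ok(())
}
```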

Thanks

@pereyra-m

Hi again.

We were using version 8.6.0, and even when the flag was set to "false", the dumps were uploaded compressed and the big ones were corrupt.
The release notes show that this was fixed in recent versions, so we upgraded to 8.10.0 and now it's working.

@amikugup
Author

amikugup commented Nov 7, 2024

We are already using v8.10.0, but that doesn't solve the problem; we still need to disable compression to make this work. @pereyra-m it would be helpful if you could share more details about the core file sizes you have tried and the configuration values you are using.

@pereyra-m

We don't have any special configuration, and we noticed the corruption when the dumps were larger than roughly 1 GB.
The error was something like

Failed to read a valid object file image from memory.

Maybe in your case it's something else.
