Metadata backup failed on large volume #5276
We have a volume with 500M+ inodes, and the metadata backup always fails with the following error:

Comments
Which metadata engine is used: TiKV, Redis, or SQL?
TiKV, I suppose.
@polyrabbit Can you try #5080?
Unfortunately #5080 still fails with:

But this time it runs longer (13min+) than before; I suppose there is another transaction that stays open too long?
Update: a second test works now. The progress shows it needs 10+ hours to finish; I'll wait and see whether it succeeds tomorrow. The difference between the two tests is that the first used #5080 as I rebased it this morning, while for the second I cherry-picked #5080, so I suppose there are some conflicts between those commits. Also, I noticed that the backup spends a lot of time sorting large directories (Line 2897 in c90a175). Is it necessary?
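To make the concern concrete: sorting a directory's entries by name before writing them to the dump costs O(n log n) string comparisons per directory, which adds up when a single directory holds millions of entries. The Go sketch below only mimics the shape of that step; the `entry` type, the synthetic names, and the 10-million-entry directory are invented for illustration and are not the JuiceFS code at the referenced line.

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
	"time"
)

// entry is a stand-in for one dumped directory entry; it is NOT the
// JuiceFS type at the line referenced above, just an illustration.
type entry struct {
	name  string
	inode uint64
}

func main() {
	// Simulate a single very large directory (10 million entries here;
	// the volumes in this thread hold far more files overall).
	const n = 10_000_000
	entries := make([]entry, n)
	for i := range entries {
		entries[i] = entry{name: fmt.Sprintf("file-%016x", rand.Uint64()), inode: uint64(i)}
	}

	// Sorting by name before writing the dump costs O(n log n) string
	// comparisons per directory; this is the step whose necessity the
	// comment above questions.
	start := time.Now()
	sort.Slice(entries, func(i, j int) bool { return entries[i].name < entries[j].name })
	fmt.Printf("sorted %d entries in %v\n", n, time.Since(start))
}
```

Skipping the sort, or deferring it to load time, would presumably turn this step into a linear pass over each directory, which seems to be what the question is getting at.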
It took 7+ hours to back up 318,039,153 files.
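For scale: 318,039,153 files in a bit over 7 hours works out to roughly 318,039,153 / (7 × 3600) ≈ 12,600 files per second averaged over the whole dump (less if the run took noticeably longer than 7 hours).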
We are working on a faster dump to a binary format; we'll let you know when it's ready.
Why not consider merging #5080? Does it have any critical drawbacks? I suppose stream scan would also benefit other cases.
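For readers following along, the appeal of a stream scan is that it avoids materializing the whole key range, or keeping one transaction open, for the entire dump. Below is a minimal Go sketch of the general batched-scan pattern; the `store` interface, the `scanLimited` method, and the key handling are hypothetical stand-ins for illustration, not the JuiceFS/TiKV client API and not the actual code in #5080.

```go
package main

import "fmt"

// pair is one key/value record from the metadata store.
type pair struct{ key, value []byte }

// store abstracts a sorted KV backend such as TiKV. scanLimited is a
// hypothetical method used only for this sketch: it returns at most
// limit pairs with start <= key < end, using one short transaction.
type store interface {
	scanLimited(start, end []byte, limit int) ([]pair, error)
}

// streamScan walks [start, end) in small batches so that no single
// transaction stays open for the whole key range; each batch resumes
// just past the last key of the previous one.
func streamScan(s store, start, end []byte, limit int, fn func(pair) error) error {
	cursor := start
	for {
		batch, err := s.scanLimited(cursor, end, limit)
		if err != nil {
			return err
		}
		for _, p := range batch {
			if err := fn(p); err != nil {
				return err
			}
		}
		if len(batch) < limit {
			return nil // fewer results than the limit: range exhausted
		}
		// Resume strictly after the last key seen (append a 0x00 byte).
		last := batch[len(batch)-1].key
		cursor = append(append([]byte(nil), last...), 0x00)
	}
}

func main() {
	fmt.Println("illustrative sketch only; the real change is in #5080")
}
```

With this shape, memory use is bounded by the batch size and each backend transaction only lives as long as one batch, which is why a streaming scan could plausibly help beyond just the backup path.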