
DOCS-1159: RELEASE.2024-03-15T01-07-19Z #1211

Merged: 2 commits merged into main on May 9, 2024
Conversation

@ravindk89 added the "Review Assigned" (Reviewers Must LGTM To Close) label on May 8, 2024
@ravindk89 self-assigned this on May 8, 2024
@feorlen (Collaborator) left a comment:

LGTM mod typo

@djwfyi (Collaborator) left a comment:

Couple of suggestions for you.
Otherwise, looks good to me.

You can change this value after startup to any value between ``0`` and the upper bound for the erasure set size.
MinIO only applies the changed parity to newly written objects.
Existing objects retain the parity value in place at the time of their creation.
MinIO by default automatically "upgrades" parity for an object if the destination erasure set maintains write quorum *but* has one or more drives offline.
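
To make the quoted behavior concrete, here is a minimal sketch (plain Go, not MinIO code; the object names and structure are invented for illustration) of the per-object semantics: each object keeps the parity value in effect at write time, and a later configuration change only affects subsequent writes.

```go
package main

import "fmt"

type object struct {
	name   string
	parity int // parity (EC:N) recorded when the object was written
}

func main() {
	currentParity := 4 // configured parity at startup (EC:4)
	var objects []object

	// An object written before the setting changes keeps EC:4.
	objects = append(objects, object{"before.txt", currentParity})

	currentParity = 5 // operator raises parity after startup

	// Only objects written after the change pick up the new value.
	objects = append(objects, object{"after.txt", currentParity})

	for _, o := range objects {
		fmt.Printf("%s -> EC:%d\n", o.name, o.parity)
	}
	// Output:
	// before.txt -> EC:4
	// after.txt -> EC:5
}
```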

Suggested change (keeps the existing sentence and adds one after it):

  MinIO by default automatically "upgrades" parity for an object if the destination erasure set maintains write quorum *but* has one or more drives offline.
+ The object written on the destination maintains the same number of data shards, but a reduced number of parity shards compared to the original.

I think we need an example to clarify how MinIO "upgrades" parity.
I might have gotten it backwards in my suggestion.

@ravindk89 (Collaborator, Author) replied:

I think so as well - @harshavardhana or perhaps @klauspost can confirm but:

in an erasure set of 16 drives and EC:4, if 1 drive goes offline, we write with EC:5.

so the new object is 11:5 while other objects are 12:4. Since a drive is offline, we get 11 data blocks and 4 parity blocks, but the object is maintaining the 'quorum' of all other objects.

If you were to set this for capacity mode, we would write 12:4 and just leave off the last parity block, such that this object has reduced quorum (just like all the other objects on that set).

AFAIK healing does not correct parity upgrades, though it would 'fix' the object written in capacity mode.
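
A minimal sketch of the arithmetic in this example (illustrative Go, not MinIO's implementation; the cap at half the set size is an assumption here, matching MinIO's documented parity maximum):

```go
package main

import "fmt"

// upgradedLayout sketches the data:parity split for a write into an
// erasure set of setSize drives with the configured parity and `offline`
// drives down, following the behavior described above: parity is raised
// by one per offline drive, capped so parity never exceeds half the set.
func upgradedLayout(setSize, parity, offline int) (data, newParity int) {
	newParity = parity + offline
	if limit := setSize / 2; newParity > limit {
		newParity = limit
	}
	return setSize - newParity, newParity
}

func main() {
	data, parity := upgradedLayout(16, 4, 1)
	fmt.Printf("healthy objects: 12:4, degraded write: %d:%d\n", data, parity)
	// Output: healthy objects: 12:4, degraded write: 11:5
}
```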

@ravindk89 (Collaborator, Author) added:

https://github.com/minio/minio/blob/f5e3eedf3458c9f5b580edac643292213c1322e7/cmd/erasure-object.go#L1293-L1320

Also implies we upgrade parity for each 'down' drive up to quorum.

I assume the issue is that the smaller number of data shards means those shards are larger, so some drives end up hot-spotted if the erasure set operates in its degraded state for a long time.
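
A back-of-the-envelope check of that hot-spotting concern (illustrative numbers only, not taken from the linked code):

```go
package main

import "fmt"

func main() {
	const objectSize = 120 << 20 // a 120 MiB object, chosen to divide cleanly

	// Fewer data shards (11 after a parity upgrade, vs. the usual 12)
	// mean each surviving drive stores a larger shard per object.
	for _, dataShards := range []int{12, 11} {
		perShard := float64(objectSize) / float64(dataShards) / (1 << 20)
		fmt.Printf("%d data shards -> %.2f MiB per shard\n", dataShards, perShard)
	}
	// Output:
	// 12 data shards -> 10.00 MiB per shard
	// 11 data shards -> 10.91 MiB per shard
}
```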

@ravindk89 (Collaborator, Author) continued:

@djwfyi I might fire this off now and return to parity upgrade behavior in a dedicated PR, since it's likely nuanced.

Two review threads on source/reference/minio-server/settings/core.rst (outdated, resolved).
Co-authored-by: Daryl White <[email protected]>
Co-authored-by: Andrea Longo <[email protected]>
@ravindk89 merged commit 64923e3 into main on May 9, 2024
@ravindk89 deleted the DOCS-1159 branch on May 9, 2024 at 18:38