
snapshot: Remove max_connections and max_pending_resets fields #4913

Merged

Conversation

zulinx86
Contributor

@zulinx86 zulinx86 commented Nov 13, 2024

Changes

Remove the `max_connections` and `max_pending_resets` fields from the snapshot.

Note that this PR only removes the fields; I might open another PR later for further refactoring.

Reason

`TcpIPv4Handler` for the MMDS network stack preallocates several buffers
whose sizes are saved into a snapshot as `max_connections` and
`max_pending_resets` in `MmdsNetworkStackState`. Today, however, those fields
always hold the same hardcoded constants (`DEFAULT_MAX_CONNECTIONS` and
`DEFAULT_MAX_PENDING_RESETS`), so there is no need to save them into a
snapshot. Even if we change the hardcoded sizes across Firecracker versions,
that should not be a problem: the snapshot feature does not support migration
of network connections, and those buffers are initialized as empty on
snapshot restoration. When migrating from a Firecracker version with larger
buffers to one with smaller buffers, guest workloads that previously worked
might start to fail due to the reduced buffer space. However, that issue is
not specific to the snapshot feature; it could equally occur on a freshly
booted microVM (not restored from a snapshot). Thus, it is fine to remove
those fields from the snapshot.

Since this is a breaking change to the snapshot format, this PR bumps the
snapshot's major version.
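
As a rough sketch of what this means for the snapshot state, the struct below keeps only the fields that still need persisting. Only the removed field names come from this PR; the remaining fields, types, and derives are illustrative assumptions, not the exact Firecracker definitions.

```rust
/// Sketch of `MmdsNetworkStackState` after this change. The field names
/// `max_connections` and `max_pending_resets` are taken from the PR;
/// everything else here is an assumed stand-in.
#[derive(Debug, Clone)]
pub struct MmdsNetworkStackState {
    pub mac_addr: [u8; 6],
    pub ipv4_addr: u32,
    pub tcp_port: u16,
    // Removed: `max_connections: usize` and `max_pending_resets: usize`.
    // They always held the hardcoded `DEFAULT_MAX_CONNECTIONS` and
    // `DEFAULT_MAX_PENDING_RESETS`, and the buffers they sized are rebuilt
    // empty on restore, so restoration can read the constants directly.
}
```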

License Acceptance

By submitting this pull request, I confirm that my contribution is made under
the terms of the Apache 2.0 license. For more information on following Developer
Certificate of Origin and signing off your commits, please check
CONTRIBUTING.md.

PR Checklist

  • [x] I have read and understand CONTRIBUTING.md.
  • [x] I have run `tools/devtool checkstyle` to verify that the PR passes the
    automated style checks.
  • [x] I have described what is done in these changes, why they are needed, and
    how they are solving the problem in a clear and encompassing way.
  • [ ] I have updated any relevant documentation (both in code and in the docs)
    in the PR.
  • [x] I have mentioned all user-facing changes in CHANGELOG.md.
  • [ ] If a specific issue led to this PR, this PR closes the issue.
  • [ ] When making API changes, I have followed the
    Runbook for Firecracker API changes.
  • [x] I have tested all new and changed functionalities in unit tests and/or
    integration tests.
  • [ ] I have linked an issue to every new TODO.

  • [x] This functionality cannot be added in rust-vmm.

@zulinx86 zulinx86 force-pushed the exclude_unnecessary_fields branch from f449568 to d8f7e64 on November 13, 2024 at 11:09

codecov bot commented Nov 13, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 84.07%. Comparing base (32c6ed4) to head (762ed4f).
Report is 2 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4913      +/-   ##
==========================================
- Coverage   84.10%   84.07%   -0.03%     
==========================================
  Files         251      251              
  Lines       28080    28067      -13     
==========================================
- Hits        23616    23597      -19     
- Misses       4464     4470       +6     
| Flag           | Coverage Δ                  |
| -------------- | --------------------------- |
| 5.10-c5n.metal | 84.64% <100.00%> (-0.04%) ⬇️ |
| 5.10-m5n.metal | 84.63% <100.00%> (-0.03%) ⬇️ |
| 5.10-m6a.metal | 83.93% <100.00%> (-0.04%) ⬇️ |
| 5.10-m6g.metal | 80.75% <100.00%> (-0.04%) ⬇️ |
| 5.10-m6i.metal | 84.62% <100.00%> (-0.04%) ⬇️ |
| 5.10-m7g.metal | 80.75% <100.00%> (-0.04%) ⬇️ |
| 6.1-c5n.metal  | 84.64% <100.00%> (-0.04%) ⬇️ |
| 6.1-m5n.metal  | 84.62% <100.00%> (-0.04%) ⬇️ |
| 6.1-m6a.metal  | 83.93% <100.00%> (-0.04%) ⬇️ |
| 6.1-m6g.metal  | 80.75% <100.00%> (-0.03%) ⬇️ |
| 6.1-m6i.metal  | 84.62% <100.00%> (-0.03%) ⬇️ |
| 6.1-m7g.metal  | 80.75% <100.00%> (-0.04%) ⬇️ |

Flags with carried forward coverage won't be shown.


@zulinx86 zulinx86 force-pushed the exclude_unnecessary_fields branch 2 times, most recently from 524d1bf to 18db7e5 on November 13, 2024 at 20:32
@zulinx86 zulinx86 marked this pull request as ready for review November 13, 2024 20:32
@zulinx86 zulinx86 changed the title from [DRAFT] snapshot: Remove max_connections and max_pending_resets fields to snapshot: Remove max_connections and max_pending_resets fields on November 13, 2024
@zulinx86 zulinx86 force-pushed the exclude_unnecessary_fields branch from 18db7e5 to 9f9c5e0 on November 13, 2024 at 20:46
There is no need to use `MmdsNetworkStack::new()` instead of
`MmdsNetworkStack::new_with_defaults()` in tests that pass the same
default values.

Signed-off-by: Takahiro Itazuri <[email protected]>
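
A toy analogue of that cleanup, since the real `MmdsNetworkStack` constructor signatures may differ from what is assumed here; the point is only that a `new_with_defaults()` convenience constructor spares each test from repeating the default arguments:

```rust
// Hypothetical stand-ins for the real types and constants.
const DEFAULT_MAX_CONNECTIONS: usize = 30; // placeholder value
const DEFAULT_MAX_PENDING_RESETS: usize = 100; // placeholder value

struct NetworkStack {
    max_connections: usize,
    max_pending_resets: usize,
}

impl NetworkStack {
    fn new(max_connections: usize, max_pending_resets: usize) -> Self {
        Self { max_connections, max_pending_resets }
    }

    // Convenience constructor: the defaults live in one place.
    fn new_with_defaults() -> Self {
        Self::new(DEFAULT_MAX_CONNECTIONS, DEFAULT_MAX_PENDING_RESETS)
    }
}

fn main() {
    // Before: tests spelled the defaults out by hand.
    let a = NetworkStack::new(DEFAULT_MAX_CONNECTIONS, DEFAULT_MAX_PENDING_RESETS);
    // After: equivalent, and shorter.
    let b = NetworkStack::new_with_defaults();
    assert_eq!(a.max_connections, b.max_connections);
    assert_eq!(a.max_pending_resets, b.max_pending_resets);
}
```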
@zulinx86 zulinx86 force-pushed the exclude_unnecessary_fields branch from 9f9c5e0 to 762ed4f on November 13, 2024 at 20:58
@roypat
Contributor

roypat commented Nov 14, 2024

> might start to fail due to the reduced buffer space.

Not even this can happen I think, because it's just the capacity of the data structures. If they reach that capacity, then they'll simply grow :D
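
A minimal, self-contained illustration of this point, using a plain `Vec` rather than the actual MMDS data structures: the preallocated capacity is only a starting size, and pushing past it simply triggers a reallocation.

```rust
fn main() {
    // Preallocate room for a few entries, as the MMDS handler does with its
    // hardcoded defaults (the number 4 here is arbitrary).
    let mut pending: Vec<u32> = Vec::with_capacity(4);
    assert!(pending.capacity() >= 4);

    // Pushing past the initial capacity does not fail; the Vec grows.
    for i in 0..100 {
        pending.push(i);
    }
    assert_eq!(pending.len(), 100);
    assert!(pending.capacity() >= 100);
}
```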

@zulinx86
Contributor Author

> > might start to fail due to the reduced buffer space.
>
> Not even this can happen I think, because it's just the capacity of the data structures. If they reach that capacity, then they'll simply grow :D

Yeah, I'm on the same page. I tried to make excuses as best I could :P

@zulinx86 zulinx86 added the Status: Awaiting review label Nov 14, 2024
@zulinx86 zulinx86 merged commit bc4da15 into firecracker-microvm:main Nov 14, 2024
7 of 8 checks passed
@zulinx86 zulinx86 deleted the exclude_unnecessary_fields branch November 14, 2024 09:04
@@ -12,6 +12,10 @@ and this project adheres to

### Changed

- [#4913](https://github.com/firecracker-microvm/firecracker/pull/4913): Removed
Contributor


Not opposing the change, but what are customers supposed to do in response to this?

Contributor Author


Good point, they need to regenerate a snapshot. We might want to apply the same fix to the previous change.

@zulinx86 zulinx86 mentioned this pull request Nov 14, 2024
Labels
Status: Awaiting review
4 participants