
chore: update to vm-memory 0.13.1 #4175

Merged
merged 9 commits into from
Oct 20, 2023

Conversation

roypat
Copy link
Contributor

@roypat roypat commented Oct 13, 2023

As always, the linux-loader update has to piggy-back off of the vm-memory update. This allows us to remove the {Read,Write}Volatile traits, instead using the ones from upstream.

Also eliminates some more usages of now-deprecated vm-memory functions that I missed when I first wrote the volatile-io patch for firecracker.

Lastly, this PR also includes a lot of kani performance tweaking that became necessary with the new vm-memory update for some reason.

License Acceptance

By submitting this pull request, I confirm that my contribution is made under
the terms of the Apache 2.0 license. For more information on following
Developer Certificate of Origin and signing off your commits, please check
CONTRIBUTING.md.

PR Checklist

  • If a specific issue led to this PR, this PR closes the issue.
  • The description of changes is clear and encompassing.
  • Any required documentation changes (code and docs) are included in this PR.
  • API changes follow the Runbook for Firecracker API changes.
  • User-facing changes are mentioned in CHANGELOG.md.
  • All added/changed functionality is tested.
  • New TODOs link to an issue.
  • Commits meet contribution quality standards.

  • This functionality cannot be added in rust-vmm.

@codecov

codecov bot commented Oct 13, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison: base (3806ded) at 83.00% vs. head (864039b) at 82.97%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4175      +/-   ##
==========================================
- Coverage   83.00%   82.97%   -0.04%     
==========================================
  Files         222      221       -1     
  Lines       28642    28505     -137     
==========================================
- Hits        23775    23652     -123     
+ Misses       4867     4853      -14     
Flag Coverage Δ
4.14-c7g.metal 78.51% <100.00%> (-0.06%) ⬇️
4.14-m5d.metal 80.31% <100.00%> (-0.05%) ⬇️
4.14-m6a.metal 79.45% <100.00%> (-0.06%) ⬇️
4.14-m6g.metal 78.51% <100.00%> (-0.07%) ⬇️
4.14-m6i.metal 80.30% <100.00%> (-0.05%) ⬇️
5.10-c7g.metal 81.43% <100.00%> (-0.05%) ⬇️
5.10-m5d.metal 83.00% <100.00%> (-0.04%) ⬇️
5.10-m6a.metal 82.23% <100.00%> (-0.05%) ⬇️
5.10-m6g.metal 81.43% <100.00%> (-0.05%) ⬇️
5.10-m6i.metal 82.98% <100.00%> (-0.04%) ⬇️
6.1-c7g.metal 81.43% <100.00%> (-0.05%) ⬇️
6.1-m5d.metal 83.00% <100.00%> (-0.04%) ⬇️
6.1-m6a.metal 82.23% <100.00%> (-0.05%) ⬇️
6.1-m6g.metal 81.43% <100.00%> (-0.05%) ⬇️
6.1-m6i.metal 82.98% <100.00%> (-0.04%) ⬇️

Flags with carried forward coverage won't be shown.

Files Coverage Δ
src/vmm/src/arch/x86_64/mptable.rs 99.42% <100.00%> (ø)
src/vmm/src/builder.rs 93.04% <ø> (ø)
src/vmm/src/devices/virtio/block/io/sync_io.rs 92.59% <ø> (ø)
src/vmm/src/devices/virtio/iovec.rs 98.64% <ø> (ø)
src/vmm/src/devices/virtio/queue.rs 92.61% <100.00%> (+0.06%) ⬆️
src/vmm/src/devices/virtio/vsock/csm/connection.rs 90.34% <ø> (ø)
src/vmm/src/devices/virtio/vsock/csm/txbuf.rs 98.90% <ø> (ø)
src/vmm/src/devices/virtio/vsock/packet.rs 97.05% <ø> (ø)
src/vmm/src/io_uring/mod.rs 97.71% <ø> (ø)
src/vmm/src/io_uring/queue/completion.rs 96.00% <ø> (ø)
... and 4 more


@roypat roypat force-pushed the update-vm-memory-0-13-1 branch 22 times, most recently from 9a521d3 to 3a35f20 Compare October 19, 2023 16:08
@roypat roypat marked this pull request as ready for review October 19, 2023 16:22
@roypat roypat added the Status: Awaiting review Indicates that a pull request is ready to be reviewed label Oct 19, 2023
src/vmm/src/devices/virtio/queue.rs
src/utils/Cargo.toml
Contributor

@pb8o pb8o left a comment


The code looks good to me. Just commented on a couple minor typos.

@roypat roypat force-pushed the update-vm-memory-0-13-1 branch from 6f37e97 to f58cd48 Compare October 20, 2023 08:41
@roypat roypat requested a review from bchalios October 20, 2023 08:41
Contributor

@bchalios bchalios left a comment


Just a minor comment about a comment and we're good to go :)

src/vmm/src/devices/virtio/queue.rs
src/vmm/src/devices/virtio/queue.rs
@roypat roypat force-pushed the update-vm-memory-0-13-1 branch 2 times, most recently from 6eeab86 to d01913c Compare October 20, 2023 10:09
As always, the linux-loader update has to piggy-back off of the
vm-memory update. This allows us to remove the {Read,Write}Volatile
traits, instead using the ones from upstream.

Also eliminates some more usages of now-deprecated vm-memory functions
that I missed when I first wrote the volatile-io patch for firecracker.

Signed-off-by: Patrick Roy <[email protected]>
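The idea behind the {Read,Write}Volatile traits can be sketched with a small, self-contained example. The trait and method names below are simplified stand-ins, not the real vm-memory API (which operates on VolatileSlice with volatile copies): the point is that any I/O source can be made to fill a guest-memory buffer directly, without an intermediate bounce buffer.

```rust
use std::io::Read;

/// Simplified stand-in for a ReadVolatile-style trait. The real
/// vm-memory trait reads into a VolatileSlice using volatile copies.
trait ReadVolatileLike {
    /// Fill `buf` from the source, returning the number of bytes copied.
    fn read_volatile_like(&mut self, buf: &mut [u8]) -> std::io::Result<usize>;
}

/// Blanket impl for any std reader, mirroring how ordinary I/O types
/// can be plugged into volatile I/O paths without an extra copy.
impl<R: Read> ReadVolatileLike for R {
    fn read_volatile_like(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        self.read(buf)
    }
}

fn main() -> std::io::Result<()> {
    let mut src: &[u8] = b"guest kernel image bytes";
    let mut page = [0u8; 8];
    let n = src.read_volatile_like(&mut page)?;
    assert_eq!(n, 8);
    assert_eq!(&page, b"guest ke");
    Ok(())
}
```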
Make all Queue functions generic over their GuestMemory parameter. This
allows kani to mock out rust-vmm's "multi-region support" guest memory
implementation with a simpler one that only supports a single region,
allowing significant harness speedups.

Signed-off-by: Patrick Roy <[email protected]>
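The shape of this change can be sketched as follows. All names here are simplified stand-ins for Firecracker's actual Queue and rust-vmm's GuestMemory trait: instead of hard-coding the concrete multi-region memory type, queue methods take any implementor of the trait, so a harness can substitute a trivial mock.

```rust
/// Simplified stand-in for rust-vmm's GuestMemory trait.
trait GuestMemoryLike {
    fn read_u16(&self, addr: u64) -> Option<u16>;
}

/// Toy queue: before the change, methods were tied to a concrete
/// multi-region memory type; now they are generic over the trait.
struct Queue {
    avail_ring: u64,
}

impl Queue {
    /// Generic over M, so a verifier can plug in a single-region mock.
    fn avail_idx<M: GuestMemoryLike>(&self, mem: &M) -> Option<u16> {
        // In virtio, avail->idx lives at offset 2 in the available ring.
        mem.read_u16(self.avail_ring + 2)
    }
}

/// Trivial single-region mock, cheap for a verifier to reason about.
struct SingleRegion {
    base: u64,
    bytes: Vec<u8>,
}

impl GuestMemoryLike for SingleRegion {
    fn read_u16(&self, addr: u64) -> Option<u16> {
        let off = addr.checked_sub(self.base)? as usize;
        let b = self.bytes.get(off..off + 2)?;
        Some(u16::from_le_bytes([b[0], b[1]]))
    }
}

fn main() {
    let mem = SingleRegion { base: 0x1000, bytes: vec![0, 0, 7, 0] };
    let q = Queue { avail_ring: 0x1000 };
    assert_eq!(q.avail_idx(&mem), Some(7));
}
```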
GuestMemoryMmap supports arbitrary guest memory regions, and thus needs
logic that deals with memory accesses spanning multiple regions. Our
kani harnesses only use a single region, and the logic for supporting
multiple regions causes kani to severely degrade in performance.

Signed-off-by: Patrick Roy <[email protected]>
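A rough sketch of why the single-region mock is cheaper for the verifier (everything below is illustrative, not the real GuestMemoryMmap code): a multi-region read must first locate the containing region, and the resulting branching on symbolic addresses is what degrades verification performance, whereas a single-region read is one subtraction and one bounds check.

```rust
/// Illustrative multi-region memory: each access searches for the
/// containing region before it can touch any bytes.
struct Region {
    start: u64,
    bytes: Vec<u8>,
}

fn read_byte_multi(regions: &[Region], addr: u64) -> Option<u8> {
    // Linear region lookup: branching like this on an arbitrary
    // symbolic address is expensive for a model checker.
    for r in regions {
        let end = r.start + r.bytes.len() as u64;
        if addr >= r.start && addr < end {
            return Some(r.bytes[(addr - r.start) as usize]);
        }
    }
    None
}

/// Single-region version: one subtraction and one bounds check.
fn read_byte_single(start: u64, bytes: &[u8], addr: u64) -> Option<u8> {
    bytes.get(addr.checked_sub(start)? as usize).copied()
}

fn main() {
    let regions = vec![
        Region { start: 0x0, bytes: vec![1, 2] },
        Region { start: 0x1000, bytes: vec![3, 4] },
    ];
    assert_eq!(read_byte_multi(&regions, 0x1001), Some(4));
    assert_eq!(read_byte_single(0x1000, &[3, 4], 0x1001), Some(4));
    assert_eq!(read_byte_single(0x1000, &[3, 4], 0x2000), None);
}
```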
Calls to set_avail_event tend to cause memory to grow unboundedly during
verification.  The function writes to the `avail_event` of the virtio
queue, which is not read from by the device. It is only intended to be
used by the guest. Therefore, it does not affect any device functionality
(e.g. its only call site, try_enable_notification, will behave
independently of what value was written here). Thus we can stub it out
with a no-op. Note that we have a separate harness for set_avail_event,
to ensure the function itself is sound.

Signed-off-by: Patrick Roy <[email protected]>
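The soundness argument above can be illustrated with a runnable toy model. All names and the notification condition below are simplified stand-ins for Firecracker's actual Queue, and the stub is wired in by hand here rather than via Kani's stubbing mechanism: because no device-side path ever reads avail_event back, replacing the write with a no-op leaves the device's observable behaviour unchanged.

```rust
use std::cell::Cell;

/// Toy model of a virtio queue. The device writes avail_event for the
/// guest's benefit but never reads it back itself.
struct Queue {
    avail_event: Cell<u16>,
    next_avail: u16,
    guest_idx: u16,
}

impl Queue {
    /// Real version: publishes next_avail so the guest knows when to notify.
    fn set_avail_event(&self, val: u16) {
        self.avail_event.set(val);
    }

    /// No-op stub used under verification: sound because the device
    /// never reads avail_event back.
    fn set_avail_event_stub(&self, _val: u16) {}

    /// Device-side logic: its result depends only on next_avail vs
    /// guest_idx, never on the stored avail_event value.
    fn try_enable_notification(&self, stubbed: bool) -> bool {
        if stubbed {
            self.set_avail_event_stub(self.next_avail);
        } else {
            self.set_avail_event(self.next_avail);
        }
        self.next_avail == self.guest_idx
    }
}

fn main() {
    let q = Queue { avail_event: Cell::new(0), next_avail: 3, guest_idx: 3 };
    // Same observable device behaviour with and without the stub.
    assert_eq!(q.try_enable_notification(false), q.try_enable_notification(true));
}
```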
m6a.metal has significantly more memory, and is also slightly faster.

Signed-off-by: Patrick Roy <[email protected]>
To get the runtime of the harnesses back below 15 minutes. This
reduction does not lose any generality, as the only real distinction
between cases here is "0, 1 or 2 buffers". Anything above that is just
nice-to-have.

Signed-off-by: Patrick Roy <[email protected]>
The --enable-stubbing option is deprecated in favor of -Z stubbing.

Signed-off-by: Patrick Roy <[email protected]>
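The flag change amounts to a command-line substitution (the harness name below is a placeholder, not one of Firecracker's actual harnesses):

```shell
# Old, deprecated spelling:
cargo kani --enable-stubbing --harness my_harness

# New spelling, using Kani's unstable-feature flag:
cargo kani -Z stubbing --harness my_harness
```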
Using dirty bitmap tracking will cause verification performance to
drastically decrease, as it involves non-trivial arithmetic on arbitrary
values.

Signed-off-by: Patrick Roy <[email protected]>
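A rough sketch of the kind of arithmetic dirty-bitmap tracking adds to every guest-memory write (the page size and bitmap layout below are illustrative assumptions, not vm-memory's exact implementation): each written address is mapped to a page, then to a word and bit in the bitmap.

```rust
// Illustrative page size; real implementations query it at runtime.
const PAGE_SIZE: u64 = 4096;

/// Mark the page containing `addr` (relative to the region start) dirty.
/// Division and bit arithmetic like this on arbitrary symbolic values
/// is expensive for a model checker to reason about.
fn mark_dirty(bitmap: &mut [u64], addr: u64) {
    let page = addr / PAGE_SIZE;
    let word = (page / 64) as usize;
    let bit = page % 64;
    bitmap[word] |= 1 << bit;
}

fn is_dirty(bitmap: &[u64], addr: u64) -> bool {
    let page = addr / PAGE_SIZE;
    bitmap[(page / 64) as usize] & (1 << (page % 64)) != 0
}

fn main() {
    let mut bitmap = vec![0u64; 4];
    mark_dirty(&mut bitmap, 5 * PAGE_SIZE + 123); // a write inside page 5
    assert!(is_dirty(&bitmap, 5 * PAGE_SIZE));
    assert!(!is_dirty(&bitmap, 6 * PAGE_SIZE));
}
```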
Kani, running using nightly rustc, was warning about this comment. Fix
it to resolve the warning during verification, to get slightly cleaner
logs.

Signed-off-by: Patrick Roy <[email protected]>
@roypat roypat force-pushed the update-vm-memory-0-13-1 branch from 0aad9f6 to 864039b Compare October 20, 2023 10:26
@roypat roypat merged commit 54caed2 into firecracker-microvm:main Oct 20, 2023
4 of 5 checks passed
@roypat roypat deleted the update-vm-memory-0-13-1 branch April 15, 2024 14:27