Improve QoL for release process #4166

Closed

Conversation

@roypat (Contributor) commented on Oct 11, 2023

Changes

Speed up the running of the tests and automate the dependabot removal step.

Reason

Less friction in the release process.

License Acceptance

By submitting this pull request, I confirm that my contribution is made under
the terms of the Apache 2.0 license. For more information on following
Developer Certificate of Origin and signing off your commits, please check
CONTRIBUTING.md.

PR Checklist

  • If a specific issue led to this PR, this PR closes the issue.
  • The description of changes is clear and encompassing.
  • Any required documentation changes (code and docs) are included in this PR.
  • API changes follow the Runbook for Firecracker API changes.
  • User-facing changes are mentioned in CHANGELOG.md.
  • All added/changed functionality is tested.
  • New TODOs link to an issue.
  • Commits meet contribution quality standards.

  • This functionality cannot be added in rust-vmm.

So I don't have to wait over an hour to see if it worked or not.

Signed-off-by: Patrick Roy <[email protected]>
File is only for humans, sorry.

Signed-off-by: Patrick Roy <[email protected]>
@roypat added the "Status: Awaiting review" label on Oct 11, 2023
codecov bot commented Oct 11, 2023

Codecov Report

All modified lines are covered by tests ✅

Comparison is base (d096bcd) 82.99% compared to head (80c0fce) 82.99%.

Additional details and impacted files
@@                Coverage Diff                @@
##           firecracker-v1.5    #4166   +/-   ##
=================================================
  Coverage             82.99%   82.99%           
=================================================
  Files                   223      223           
  Lines                 28448    28448           
=================================================
  Hits                  23609    23609           
  Misses                 4839     4839           
Flag Coverage Δ
4.14-c7g.metal 78.53% <ø> (ø)
4.14-m5d.metal 80.33% <ø> (ø)
4.14-m6a.metal 79.46% <ø> (ø)
4.14-m6g.metal 78.53% <ø> (ø)
4.14-m6i.metal 80.31% <ø> (ø)
5.10-c7g.metal 81.44% <ø> (ø)
5.10-m5d.metal 82.99% <ø> (-0.02%) ⬇️
5.10-m6a.metal 82.23% <ø> (ø)
5.10-m6g.metal 81.44% <ø> (?)
5.10-m6i.metal 82.99% <ø> (ø)
6.1-c7g.metal 81.44% <ø> (ø)
6.1-m5d.metal 83.00% <ø> (ø)
6.1-m6a.metal 82.23% <ø> (ø)
6.1-m6g.metal 81.44% <ø> (?)
6.1-m6i.metal 82.99% <ø> (ø)

Flags with carried forward coverage won't be shown.


@pb8o (Contributor) left a comment

See comment on running the build tests in parallel.

@@ -486,7 +486,7 @@ cmd_build() {
 }

 function cmd_make_release {
-    cmd_test -- --reruns 2 || die "Tests failed!"
+    cmd_test -- --reruns 2 -n 8 || die "Tests failed!"
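
For context: the added "-n 8" is the pytest-xdist option that spreads the collected tests across eight worker processes, while "--reruns" comes from pytest-rerunfailures. A rough sketch of the equivalent direct pytest invocation (assuming both plugins are installed and the command is run from the directory containing integration_tests/):

    # Illustrative only: pytest-xdist ("-n 8") fans the collected tests out
    # over 8 worker processes; pytest-rerunfailures retries failures twice.
    pytest -n 8 --reruns 2 integration_tests/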
Contributor

I really want to do this, but I think we need to be careful. Some of the tests cannot be run in parallel, like the build ones. The performance ones may also fail, since they assume they have the whole host to work with.

We could separate it into two runs:

    OPTS="--reruns=7"
    cmd_test -- $OPTS --json-report-file=test-report.json -n8 --dist=worksteal integration_tests/{functional,security,style} || die "Tests failed!"
    cmd_test -- $OPTS --json-report-file=test-report-perf.json integration_tests/{performance,build} || die "Tests failed!"

We end up with 2 files, so we would need to adapt the release script to account for the other file.
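
As a rough sketch of that adaptation (hypothetical, assuming both files follow the pytest-json-report schema with a top-level "tests" array), the two reports could be merged back into a single file for the release script:

    # Hypothetical helper, not part of this PR: merge the functional and
    # performance/build reports into one file. Only the "tests" arrays are
    # combined; the "summary" counts are not recomputed here.
    jq -s '.[0].tests += .[1].tests | .[0]' \
        test-report.json test-report-perf.json > test-report-combined.json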

Contributor Author

Ahhh, I did not realize this also ran the build and performance tests. Suffer more I shall, then.

Contributor

Also, if we do this, we may be able to run the release sanity check without faking the test run: https://github.com/firecracker-microvm/firecracker/blob/main/.buildkite/pipeline_pr.py#L52

-    cmd_test -- --reruns 2 || die "Tests failed!"
+    cmd_test -- --reruns 2 -n 8 || die "Tests failed!"
Contributor

When this parallelization is enabled, do the tests work well?
I'm curious why we haven't parallelized this so far.

@roypat closed this on Oct 11, 2023
Labels: Status: Awaiting review
3 participants