Improve QoL for release process #4166
Conversation
So I don't have to wait over an hour to see if it worked or not. Signed-off-by: Patrick Roy <[email protected]>
File is only for humans, sorry. Signed-off-by: Patrick Roy <[email protected]>
Codecov Report
All modified lines are covered by tests ✅

@@           Coverage Diff            @@
##   firecracker-v1.5   #4166   +/- ##
=======================================
  Coverage     82.99%   82.99%
=======================================
  Files           223      223
  Lines         28448    28448
=======================================
  Hits          23609    23609
  Misses         4839     4839
See comment on running the build tests in parallel.
@@ -486,7 +486,7 @@ cmd_build() {
 }

 function cmd_make_release {
-    cmd_test -- --reruns 2 || die "Tests failed!"
+    cmd_test -- --reruns 2 -n 8 || die "Tests failed!"
I really want to do this, but I think we need to be careful. Some of the tests cannot be run in parallel, like the build ones. The performance ones may also fail, since they assume they have the whole host to work with.
We could separate it into two runs:
OPTS="--reruns=7"
cmd_test -- $OPTS --json-report-file=test-report.json -n8 --dist=worksteal integration_tests/{functional,security,style} || die "Tests failed!"
cmd_test -- $OPTS --json-report-file=test-report-perf.json integration_tests/{performance,build} || die "Tests failed!"
We end up with 2 files, so we would need to adapt the release script to account for the other file.
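A minimal sketch of how the release script could be adapted, assuming the report format of pytest-json-report (the `tests` key holds the per-test result list); the function name and file names are hypothetical, following the snippet above:

```python
import json


def merge_reports(paths, out_path):
    """Merge several pytest-json-report files into one, so downstream
    tooling (e.g. the release script) can keep reading a single report."""
    merged = {"tests": []}
    for path in paths:
        with open(path) as f:
            # "tests" is the list of per-test results in pytest-json-report
            merged["tests"].extend(json.load(f).get("tests", []))
    with open(out_path, "w") as f:
        json.dump(merged, f)
```

This keeps the split-run approach transparent to whatever consumes the report, at the cost of losing run-level metadata from the second file.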
Ahhh, I did not realize this also ran the build and performance test. Suffer more I shall then.
Also if we do this we may be able to run the release sanity check without faking the testrun: https://github.com/firecracker-microvm/firecracker/blob/main/.buildkite/pipeline_pr.py#L52
-    cmd_test -- --reruns 2 || die "Tests failed!"
+    cmd_test -- --reruns 2 -n 8 || die "Tests failed!"
Does the test suite work reliably when this parallelization is enabled?
I'm curious why we haven't parallelized this so far.
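One alternative to splitting the suite into two invocations, sketched here as an assumption rather than something the PR does: pytest-xdist's `xdist_group` marker pins all tests in a named group to the same worker when the suite is run with `--dist=loadgroup`, so serial-only tests (like the build ones) stay on one worker while the rest fan out across `-n 8`. The test name below is hypothetical:

```python
import pytest


# Hypothetical example: tests marked with the same xdist_group name are
# scheduled onto a single pytest-xdist worker under --dist=loadgroup,
# effectively serializing them relative to each other.
@pytest.mark.xdist_group(name="serial_build")
def test_build_release():
    # placeholder for a build test that must not run concurrently
    assert True
```

The trade-off versus two separate runs is that everything still shares one report file, but performance tests that assume an idle host would still need isolation beyond worker pinning.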
Changes
Speeds up running the tests and automates removal of dependabot branches.
Reason
less friction
License Acceptance
By submitting this pull request, I confirm that my contribution is made under
the terms of the Apache 2.0 license. For more information on following
Developer Certificate of Origin and signing off your commits, please check
CONTRIBUTING.md.

PR Checklist
- User-facing changes are mentioned in CHANGELOG.md.
- New TODOs link to an issue.
- This functionality cannot be added in rust-vmm.