Running Acceptance Tests at PR
"Acceptance tests" are tests that are being run on master branch. They are located under testsuite directory. Based on the result of these tests, new versions of uyuni are released.
These tests usually take a lot of time and resources. Between two subsequent runs there can be multiple code merges from different developers, which makes regressions more likely and makes it harder to figure out which line of code broke the tests. This means that when tests fail, the team needs to dedicate time and energy to figuring out the cause and whether code changes are needed. This usually happens under time constraints, which can increase the stress level of the team.
We aim to lower the chances that code merges introduce bugs or regressions by allowing developers to run the acceptance tests on their Pull Request prior to merging.
However, running at the Pull Request is not the same as running at the master branch. At the master branch, we need to reproduce what a user would do and use the infrastructure a user would use. At the Pull Request, we mainly need reproducibility and stability: two subsequent runs of the testsuite on the same code should produce the same result. We also need the runs to be usable, which means they should take much less time to produce results.
Thus, at the Pull Request we are taking some "shortcuts"/"fakes":
- We do not package the code as RPMs, but build the code and copy it into the target directory
- We do not use VMs, but containers
- We do not test SCC integration, nor do we have SCC channels configured
- We do not test PXE
- We do not test reboots
Still, running "Acceptance tests" at PR is useful to filter out code that would break the product before it lands in master.
Acceptance tests are enabled by default for all users.
You can run the same tests locally with your code. Just go to testsuite/podman_runner and run `./run`. This will run the same set of commands that are run inside the GitHub Action. Then, you can use `sudo -i podman exec -ti CONTAINER bash`, `sudo -i podman ps`, `sudo -i podman logs CONTAINER`, etc. to debug it. You can also connect to the server using a web browser: just run `sudo -i podman ps` and note which port you have to connect to.
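For instance, a typical local session could look like the following (the container name `server` is illustrative; check `sudo -i podman ps` for the real names and ports):

```bash
# Run the whole suite locally (same steps as the GitHub Action)
cd testsuite/podman_runner
./run

# In another terminal, inspect the containers while the suite runs
sudo -i podman ps                      # list running containers and their published ports
sudo -i podman logs server             # follow the logs of one container
sudo -i podman exec -ti server bash    # open a shell inside that container
```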
Note that `run` will call `00_setup_env.sh`. This script is only used when running locally to do a cleanup, i.e. to kill the containers. It has a `sleep 10` at the end to wait for everything to settle. This is usually enough, but if you get errors saying that the containers already exist, try increasing that value.
⚠️ You need podman >= 4.1 installed. You also need to install the cni-plugin-dnsname package. For openSUSE, you can find it at https://download.opensuse.org/repositories/devel:/microos/. Otherwise, containers won't be able to connect to each other. Note that we run podman as root, which is why we always prepend `sudo -i` to any podman command. This way the containers run rootfull; otherwise, they would not be able to connect to each other nor run the hardware-related tests.
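For example, a quick way to verify the prerequisites on openSUSE (the repository path below is an assumption; browse the link above for the directory matching your distribution release):

```bash
# podman must be >= 4.1
podman --version

# Install the DNS plugin so rootfull containers can resolve each other
# (adjust the repository path to your openSUSE release)
sudo zypper addrepo https://download.opensuse.org/repositories/devel:/microos/openSUSE_Tumbleweed/ devel-microos
sudo zypper refresh
sudo zypper install cni-plugin-dnsname
```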
You can enable cucumber reports by adding a secret named CUCUMBER_PUBLISH_TOKEN. You can get this token by signing up at Cucumber Reports and adding a new collection.
Then, you can add this secret in your fork's repository settings: https://github.com/YOUR_GH_USERNAME/uyuni/settings/secrets/actions.
⚠️ Note that you need a fork of uyuni for this.
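If you prefer the command line over the web UI, the GitHub CLI can store the secret on your fork as well (assuming `gh` is installed and authenticated):

```bash
# Prompts for the token value and saves it as an Actions secret on your fork
gh secret set CUCUMBER_PUBLISH_TOKEN --repo YOUR_GH_USERNAME/uyuni
```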
Tests that are skipped have the label `@skip_if_container`. Some of them are skipped because they do not make sense or are too difficult to test in containers: Cobbler, rebooting VMs, testing Xen or KVM, etc. Others are skipped because they fail with the current infrastructure, and we need a stable testsuite to start with: for example, bootstrapping using the web interface.
The plan is to work on those that make sense but currently fail, so that eventually they can be enabled and the coverage increased.
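To see which scenarios are currently affected, you can search for the tag in the feature files (assuming the features live under `testsuite/features`):

```bash
# List the feature files that carry the @skip_if_container tag
grep -rl '@skip_if_container' testsuite/features/
```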
If you want to change the containers used for testing, for example to update the Java version, a Salt package, or some other dependency, you can do that following this procedure:
- Create or update your fork, so your fork's master branch has the latest changes
- Create a PR with your changes targeted to your fork's master branch
- Check the test results. If the tests pass, it means your changes are backward compatible, so merging the PR won't break your colleagues' PRs
- Merge your changes into the master branch of your fork
- Build the container with the action "Create and publish docker images used for the CI": https://github.com/YOUR_GH_USERNAME/uyuni/actions/workflows/build_containers.yml
- Create a PR that changes the GitHub workflow to use YOUR container, for example:
```diff
--- a/.github/workflows/acceptance_tests_common.yml
+++ b/.github/workflows/acceptance_tests_common.yml
@@ -9,7 +9,7 @@ on:
         required: true
         type: string
 env:
-  UYUNI_PROJECT: uyuni-project
+  UYUNI_PROJECT: YOUR_GH_USERNAME
   UYUNI_VERSION: master
   CUCUMBER_PUBLISH_TOKEN: ${{ secrets.CUCUMBER_PUBLISH_TOKEN }}
 jobs:
```
- Check the results of the tests of that PR. If the tests pass, it means your container passes the tests
If both tests pass, without and with your container, then you can go ahead and create a PR to the uyuni-project master branch. Once it is accepted, you can build the container at https://github.com/uyuni-project/uyuni/actions/workflows/build_containers.yml
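As an alternative to the web UI, the build workflow can also be triggered with the GitHub CLI (assuming you have the required permissions on the repository; if the workflow defines inputs, pass them with `-f name=value`):

```bash
# Trigger the container build workflow on your fork
gh workflow run build_containers.yml --repo YOUR_GH_USERNAME/uyuni
```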
If you are unsure whether the failing tests are related to your code, you can check the "reference jobs". These are jobs that run on a schedule with code from master. If they fail for the same reason as your build, it means the tests or the infrastructure are broken. If they do not fail but yours do, it means the failure is related to your code.
Reference tests:
- https://github.com/uyuni-project/uyuni/actions/workflows/acceptance_tests_secondary_parallel.yml?query=event%3Aschedule
- https://github.com/uyuni-project/uyuni/actions/workflows/acceptance_tests_secondary.yml?query=event%3Aschedule
This sometimes happens when new versions of the gems that are in the testsuite/Gemfile are released upstream. For example, mini_mime was updated on the 8th of August to 1.1.5. With the update came a new requirement of Ruby version >= 2.6. We do not have Ruby 2.6 or newer available in the container, so you will get an error about not being able to install mini_mime.
The solution is to pin the version of mini_mime in the testsuite/Gemfile to a version that does not require Ruby 2.6.
For example:
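```ruby
# testsuite/Gemfile — pin mini_mime below 1.1.5, the release that introduced
# the Ruby >= 2.6 requirement (illustrative constraint; adjust as needed)
gem 'mini_mime', '< 1.1.5'
```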
When running it locally, it can happen that you have an old Gemfile.lock in your workspace. The presence of this file will force the versions of the gems. This file is not checked into git. If you see any error about gems and the Gemfile, remove testsuite/Gemfile.lock from your system.
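That is, from the root of your uyuni checkout:

```bash
# Remove the stale lock file so gem versions are resolved again from the Gemfile
rm -f testsuite/Gemfile.lock
```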
Check that your firewall (iptables) is not dropping connections. You can turn it off and try again to see if that is the issue.
Also check that IPv6 is not disabled on your machine. Turn it on again, reboot, and try again.
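A couple of quick checks (a sketch; exact rule sets and sysctl keys can vary by distribution):

```bash
# Inspect the iptables rules to see whether anything drops the container traffic
sudo iptables -L -n -v

# 0 means IPv6 is enabled, 1 means it is disabled
sysctl net.ipv6.conf.all.disable_ipv6
```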
By default, all first-time contributors require approval to run workflows.
More at https://docs.github.com/en/actions/managing-workflow-runs/approving-workflow-runs-from-public-forks