
[WIP] Bug 1872238: oVirt: Introduce qemu-guest-agent container #2042

Closed
wants to merge 1 commit into from

Conversation


@eslutsky eslutsky commented Aug 31, 2020

OCP VMs need to run the oVirt QEMU agent container.

Bug: 1872238
Signed-off-by: Evgeny Slutsky [email protected]

@openshift-ci-robot openshift-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Aug 31, 2020
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: eslutsky
To complete the pull request process, please assign sinnykumari
You can assign the PR to them by writing /assign @sinnykumari in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@eslutsky eslutsky force-pushed the qemu-guest-agent branch 2 times, most recently from 17a8ffc to 9047487 on August 31, 2020 09:22
@Gal-Zaidman
Contributor

/test ?

@openshift-ci-robot
Contributor

@Gal-Zaidman: The following commands are available to trigger jobs:

  • /test cluster-bootimages
  • /test e2e-aws
  • /test e2e-aws-disruptive
  • /test e2e-aws-proxy
  • /test e2e-aws-workers-rhel7
  • /test e2e-azure
  • /test e2e-gcp-op
  • /test e2e-gcp-upgrade
  • /test e2e-metal-ipi
  • /test e2e-openstack
  • /test e2e-ovirt
  • /test e2e-ovn-step-registry
  • /test e2e-vsphere
  • /test e2e-vsphere-upi
  • /test images
  • /test okd-e2e-aws
  • /test okd-e2e-gcp-op
  • /test okd-e2e-gcp-upgrade
  • /test okd-images
  • /test unit
  • /test verify

Use /test all to run the following jobs:

  • pull-ci-openshift-machine-config-operator-master-e2e-aws
  • pull-ci-openshift-machine-config-operator-master-e2e-aws-workers-rhel7
  • pull-ci-openshift-machine-config-operator-master-e2e-gcp-op
  • pull-ci-openshift-machine-config-operator-master-e2e-gcp-upgrade
  • pull-ci-openshift-machine-config-operator-master-e2e-metal-ipi
  • pull-ci-openshift-machine-config-operator-master-e2e-ovn-step-registry
  • pull-ci-openshift-machine-config-operator-master-images
  • pull-ci-openshift-machine-config-operator-master-okd-e2e-aws
  • pull-ci-openshift-machine-config-operator-master-okd-images
  • pull-ci-openshift-machine-config-operator-master-unit
  • pull-ci-openshift-machine-config-operator-master-verify

In response to this:

/test ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Gal-Zaidman
Contributor

/test e2e-ovirt

1 similar comment
@eslutsky
Author

/test e2e-ovirt

@cgwalters
Member

cgwalters commented Aug 31, 2020

Hmm, did you see coreos/afterburn#458? We are actively working on implementing the protocol in Afterburn.

cc @lucab

@lucab

lucab commented Aug 31, 2020

We are actively working on implementing the protocol in Afterburn.

Slight correction to the above: we are exploring whether we can re-use the oVirt protocol/virtio-port in our CI to signal that a node booted into initramfs (for the sake of not reinventing the wheel if that protocol is already good).
I don't think it overlaps much with this PR, which instead wants to run the qemu-agent (not the oVirt one) in the real rootfs as a long-running podman service (not sure why not as a daemonset).

@eslutsky eslutsky force-pushed the qemu-guest-agent branch 2 times, most recently from b2cc0e7 to 3a70234 on August 31, 2020 15:37
@kikisdeliveryservice kikisdeliveryservice changed the title WIP oVirt: Introduce qemu-guest-agent container [WIP] Bug: 1764804: oVirt: Introduce qemu-guest-agent container Aug 31, 2020
@kikisdeliveryservice kikisdeliveryservice changed the title [WIP] Bug: 1764804: oVirt: Introduce qemu-guest-agent container [WIP] Bug 1764804: oVirt: Introduce qemu-guest-agent container Aug 31, 2020
@kikisdeliveryservice
Contributor

/bugzilla refresh

@openshift-ci-robot
Contributor

@kikisdeliveryservice: Bugzilla bug 1764804 is in a bug group that is not in the allowed groups for this repo.
Allowed groups for this repo are:

  • qe_staff
  • redhat

In response to this:

/bugzilla refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@eslutsky
Author

/test e2e-ovirt

@eslutsky
Author

/retest e2e-ovirt

@openshift-ci-robot
Contributor

@eslutsky: The /retest command does not accept any targets.
The following commands are available to trigger jobs:

  • /test cluster-bootimages
  • /test e2e-aws
  • /test e2e-aws-disruptive
  • /test e2e-aws-proxy
  • /test e2e-aws-workers-rhel7
  • /test e2e-azure
  • /test e2e-gcp-op
  • /test e2e-gcp-upgrade
  • /test e2e-metal-ipi
  • /test e2e-openstack
  • /test e2e-ovirt
  • /test e2e-ovn-step-registry
  • /test e2e-vsphere
  • /test e2e-vsphere-upi
  • /test images
  • /test okd-e2e-aws
  • /test okd-e2e-gcp-op
  • /test okd-e2e-gcp-upgrade
  • /test okd-images
  • /test unit
  • /test verify

Use /test all to run the following jobs:

  • pull-ci-openshift-machine-config-operator-master-e2e-aws
  • pull-ci-openshift-machine-config-operator-master-e2e-aws-workers-rhel7
  • pull-ci-openshift-machine-config-operator-master-e2e-gcp-op
  • pull-ci-openshift-machine-config-operator-master-e2e-gcp-upgrade
  • pull-ci-openshift-machine-config-operator-master-e2e-metal-ipi
  • pull-ci-openshift-machine-config-operator-master-e2e-ovn-step-registry
  • pull-ci-openshift-machine-config-operator-master-images
  • pull-ci-openshift-machine-config-operator-master-okd-e2e-aws
  • pull-ci-openshift-machine-config-operator-master-okd-images
  • pull-ci-openshift-machine-config-operator-master-unit
  • pull-ci-openshift-machine-config-operator-master-verify

In response to this:

/retest e2e-ovirt

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kikisdeliveryservice
Contributor

/test e2e-ovirt

@eslutsky
Author

/retest e2e-ovirt

@openshift-ci-robot
Contributor

@eslutsky: The /retest command does not accept any targets.
The following commands are available to trigger jobs:

  • /test cluster-bootimages
  • /test e2e-aws
  • /test e2e-aws-disruptive
  • /test e2e-aws-proxy
  • /test e2e-aws-workers-rhel7
  • /test e2e-azure
  • /test e2e-gcp-op
  • /test e2e-gcp-upgrade
  • /test e2e-metal-ipi
  • /test e2e-openstack
  • /test e2e-ovirt
  • /test e2e-ovn-step-registry
  • /test e2e-vsphere
  • /test e2e-vsphere-upi
  • /test images
  • /test okd-e2e-aws
  • /test okd-e2e-gcp-op
  • /test okd-e2e-gcp-upgrade
  • /test okd-images
  • /test unit
  • /test verify

Use /test all to run the following jobs:

  • pull-ci-openshift-machine-config-operator-master-e2e-aws
  • pull-ci-openshift-machine-config-operator-master-e2e-aws-workers-rhel7
  • pull-ci-openshift-machine-config-operator-master-e2e-gcp-op
  • pull-ci-openshift-machine-config-operator-master-e2e-gcp-upgrade
  • pull-ci-openshift-machine-config-operator-master-e2e-metal-ipi
  • pull-ci-openshift-machine-config-operator-master-e2e-ovn-step-registry
  • pull-ci-openshift-machine-config-operator-master-images
  • pull-ci-openshift-machine-config-operator-master-okd-e2e-aws
  • pull-ci-openshift-machine-config-operator-master-okd-images
  • pull-ci-openshift-machine-config-operator-master-unit
  • pull-ci-openshift-machine-config-operator-master-verify

In response to this:

/retest e2e-ovirt

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@cgwalters
Member

Is there a reason this is running via podman instead of as a daemonset?

@cgwalters
Member

Also are you really aiming this at 4.6?

--volume /usr/share/zoneinfo:/usr/share/zoneinfo \
--net=host \
--pull missing \
docker://registry.svc.ci.openshift.org/ovirt/qemu-guest-agent:4.2
Member

We cannot pull arbitrary container images with OpenShift 4 by default. Anything we run by default must be part of the release image.

Member

And once it's part of the release image, this would need to have its value substituted by templating, same as other containers the MCO references.

Author

@eslutsky eslutsky Sep 1, 2020

Agreed. We build this container in the ovirt namespace just for testing purposes; we want to see it running and document the gaps.
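
For context on the review comments above: once the image is part of the release payload, the hardcoded pull spec would be replaced by a templated reference that the MCO renders at deploy time. The snippet below is only a minimal sketch, not the actual MCO template; the image key name and the extra flags beyond those shown in the diff are hypothetical.

# Hypothetical sketch: the image comes from a templated key rendered out of the
# release payload instead of a hardcoded registry pull spec.
podman run --rm --name qemu-guest-agent \
    --volume /usr/share/zoneinfo:/usr/share/zoneinfo \
    --net=host \
    --pull missing \
    "{{ .Images.qemuGuestAgent }}"   # hypothetical template key name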

@cgwalters
Member

/hold
The BZ you linked is about creating the container. I think you need a separate BZ about shipping this with OpenShift by default. And this needs more prep work, like being added to the release image.
And I'd like to see discussion about podman vs daemonset.

@openshift-ci-robot
Contributor

@eslutsky: This pull request references Bugzilla bug 1872238, which is invalid:

  • expected the bug to target the "4.6.0" release, but it targets "4.7.0" instead

Comment /bugzilla refresh to re-evaluate validity if changes to the Bugzilla bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

[WIP] Bug 1872238: oVirt: Introduce qemu-guest-agent container

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@eslutsky
Author

eslutsky commented Sep 2, 2020

/bugzilla refresh

@openshift-ci-robot
Contributor

@eslutsky: This pull request references Bugzilla bug 1872238, which is invalid:

  • expected the bug to target the "4.6.0" release, but it targets "4.7.0" instead

Comment /bugzilla refresh to re-evaluate validity if changes to the Bugzilla bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

/bugzilla refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@eslutsky
Author

eslutsky commented Sep 9, 2020

/test e2e-ovirt

@eslutsky
Author

eslutsky commented Sep 9, 2020

/test e2e-ovirt

OCP VMs need to run the oVirt QEMU agent container.

Bug: 1764804
Signed-off-by: Evgeny Slutsky <[email protected]>
@eslutsky
Author

eslutsky commented Sep 9, 2020

/test e2e-ovirt

@rgolangh
Contributor

/hold
The BZ you linked is about creating the container. I think you need a separate BZ about shipping this with OpenShift by default. And this needs more prep work, like being added to the release image.
And I'd like to see discussion about podman vs daemonset.

* release image
  Early in the process we had a discussion about that starting [here](https://bugzilla.redhat.com/show_bug.cgi?id=1764804#c25). This container image is platform-specific and is considered 'extra'. It falls in the same category as CSI drivers I guess.

* podman vs. daemonset
  The main advantage of systemd+podman is that it's decoupled from kubelet and will allow log gathering in case of kubelet or other k8s failures.
  The main disadvantage is that we can't monitor its resources and it's a hidden process from the OpenShift point of view.

It's possible that having an IP as early as possible for debug purposes is not that important, because we have other means to get it (console motd?). Let me know what you think.

@cgwalters @runcom any feedback?

@openshift-ci-robot
Contributor

@eslutsky: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
ci/prow/verify e452a3a link /test verify
ci/prow/okd-e2e-aws e452a3a link /test okd-e2e-aws
ci/prow/e2e-aws e452a3a link /test e2e-aws
ci/prow/e2e-ovn-step-registry e452a3a link /test e2e-ovn-step-registry
ci/prow/e2e-ovirt e452a3a link /test e2e-ovirt
ci/prow/e2e-upgrade e452a3a link /test e2e-upgrade

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-merge-robot
Contributor

@eslutsky: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
ci/prow/e2e-agnostic-upgrade e452a3a link /test e2e-agnostic-upgrade
ci/prow/e2e-aws-serial e452a3a link /test e2e-aws-serial

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@eslutsky
Author

eslutsky commented Nov 17, 2020

@cgwalters, can we get feedback on this?

/hold
The BZ you linked is about creating the container. I think you need a separate BZ about shipping this with OpenShift by default. And this needs more prep work, like being added to the release image.
And I'd like to see discussion about podman vs daemonset.

  • release image
    Early in the process we had a discussion about that starting here. This container image is platform-specific and is considered 'extra'. It falls in the same category as CSI drivers I guess.
  • podman vs. daemonset
    The main advantage of systemd+podman is that it's decoupled from kubelet and will allow log gathering in case of kubelet or other k8s failures.
    The main disadvantage is that we can't monitor its resources and it's a hidden process from the OpenShift point of view.

It's possible that having an IP as early as possible for debug purposes is not that important, because we have other means to get it (console motd?). Let me know what you think.

@rgolangh, I've checked your CSI drivers reference, and it appears to also be part of the release image:

oc adm release info -a=/tmp/pull-secret.yaml --pullspecs  registry.svc.ci.openshift.org/ocp/release:4.7.0-0.nightly-2020-10-27-051128 | grep ovirt-csi                                                
  ovirt-csi-driver                               quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d6d6d5a7b2bfb797cf521d9d188447584a53e9543bb3e2cb6ba8decb675c45c
  ovirt-csi-driver-operator                      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae8f9bbc986b38951b466415984661603dc55ecdb627040c2c9852e6312b6079

So I don't think it's a valid example.
@cgwalters, we are targeting this task for the upcoming release, so we would like to get your feedback on this.
Thanks.

@cgwalters
Member

podman vs. daemonset
The main advantage of systemd+podman is that it's decoupled from kubelet and will allow log gathering in case of kubelet or other k8s failures.

I think the MCO should take care of this generically across platforms. How would qemu-guest-agent help with that anyways?

Stated more strongly, exactly what functionality do you want from the agent?

We have some work on implementing boot checkins in afterburn here: coreos/afterburn#458

It sounds like the guest agent has some protocol for the guest to report its IP address - what are you using that for?

@eslutsky
Author

podman vs. daemonset
The main advantage of systemd+podman is that it's decoupled from kubelet and will allow log gathering in case of kubelet or other k8s failures.

I think the MCO should take care of this generically across platforms. How would qemu-guest-agent help with that anyways?

Stated more strongly, exactly what functionality do you want from the agent?

We have some work on implementing boot checkins in afterburn here: coreos/afterburn#458

It sounds like the guest agent has some protocol for the guest to report its IP address - what are you using that for?

On each oVirt host we have a vdsm component which uses libvirt to communicate with the agent over the virtio serial port.
vdsm relays this information to the oVirt engine.
Our IPI installer uses the oVirt API to query the engine for the VM IP address.
This info will be used by the cluster-api-provider-ovirt (to retrieve VM IP addresses), and also for collecting installation logs when the installation fails (no API available) [0]

[0] https://bugzilla.redhat.com/show_bug.cgi?id=1810438
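
To make the mechanism above concrete: the qemu guest agent answers JSON commands over a virtio serial port, libvirt can relay them from the host (vdsm uses that same channel), and the engine then exposes the collected data through its REST API. A rough sketch of both ends follows; the VM name, engine hostname, credentials, and VM ID are placeholders, and the REST path assumes the usual oVirt API layout.

# On the oVirt host: ask the guest agent (via libvirt) for the interfaces/IPs it
# sees inside the guest; this is the kind of data vdsm forwards to the engine.
virsh qemu-agent-command ocp-worker-0 \
    '{"execute": "guest-network-get-interfaces"}' --pretty

# From the installer / cluster-api-provider-ovirt side: read the reported data
# back from the engine's REST API (endpoint and credentials are placeholders,
# assuming the engine CA is trusted by the client).
curl -s -u "admin@internal:${ENGINE_PASSWORD}" \
    "https://engine.example.com/ovirt-engine/api/vms/${VM_ID}/reporteddevices"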

@cgwalters
Member

There's of course actually three options for running this:

  • via extensions/package layering: We ship open-vm-tools baked into RHCOS today because vSphere refused to support containerized agents, and some of the things like live migration (fs freezing) are critical
  • podman
  • daemonset

They all have tradeoffs. (I'd actually like to switch the open-vm-tools case to be an extension that we dynamically layer only on vSphere platforms so we don't look dumb by shipping it on aws/gcp/bare metal/etc; anyways, somewhat tangential)

If the oVirt team feels strongly that it'd significantly help things now to just add the guest agent into the set of extensions and change the MCO to do exactly the above (dynamically layer just on oVirt platforms) - well...okay.

But the thing is every time we do this it hinders our long term goals to get people to move to containers - and also to minimize the "agent footprint" on the host.

@cgwalters
Member

our IPI installer using oVirt API to query the engine for the VM IP address.
this info will be used by the cluster-api-provider-ovirt (to retrieve VMs ip address) ,

On other IPI virt platforms (AWS/GCP/Azure/OpenStack) we get by without this. Why does oVirt need it but those don't? The hypervisor is in a position to know what DHCP address it served to an instance.

, and also for collecting installation logs when the installation fails (no API available) [0]

Again I strongly think this one is a generic problem; see related discussion in coreos/ignition#585

@eslutsky
Author

eslutsky commented Nov 18, 2020

our IPI installer using oVirt API to query the engine for the VM IP address.
this info will be used by the cluster-api-provider-ovirt (to retrieve VMs ip address) ,

On other IPI virt platforms (AWS/GCP/Azure/OpenStack) we get by without this. Why does oVirt need it but those don't? The hypervisor is in a position to know what DHCP address it served to an instance.

Well, the hypervisor doesn't act as a DHCP server;
usually, the VMs get their IP from the external network's DHCP server, which is transparent to the hypervisor.
That's why the agent is required here.
OpenStack, for example, controls its DHCP allocations using Neutron;
in oVirt, we delegate this task to an external DHCP service, which makes oVirt unique in some way.

, and also for collecting installation logs when the installation fails (no API available) [0]

Again I strongly think this one is a generic problem; see related discussion in coreos/ignition#585

@Gal-Zaidman
Contributor

On other IPI virt platforms (AWS/GCP/Azure/OpenStack) we get by without this. Why does oVirt need it but those don't? The hypervisor is in a position to know what DHCP address it served to an instance.

oVirt needs the guest agent running inside each VM that it manages (ocp or not ocp).
The guest agent enables the oVirt Engine to manage the VM and show details of the VM in the oVirt Engine UI, and, from the perspective of OCP, it will help us get information on the VM via the oVirt API (like the IP address of the VM).

Currently, we need this container to be available for our RHCOS VMs; those are the only VMs that don't have the guest agent.
So we need to understand:

  1. How to get the container image into the release?
  2. What is the best way to run the container on the VM? We want it available and running at the earliest stage possible.

As you can see, this PR is old and we have been dragging this for a while now, and we need it for 4.7.
We can set up a meeting to talk about how to get this in; I think it will be faster.

@runcom
Member

runcom commented Nov 18, 2020

Shipping the daemonset would still be baking this into the MCO repo (which doesn't help us with code that the MCO team isn't expert in). Hina is setting up a meeting to discuss this, as I also think it'll help.

@cgwalters
Member

The list above missed the 4th option, which is:

  • Extend https://github.com/coreos/afterburn/ with the core functionality you need. As I noted, we already had some work on implementing boot-time check-in there; implementing the bit to send networking information might not be really hard, although that's obviously a notable increase in scope from basic check-ins.

So for clarity, all the options:

  1. via extensions/package layering
  2. Extend afterburn
  3. via podman
  4. via daemonset

Now for option 3 - personally I am a bit skeptical that this needs to run before kubelet - we already have too many of these "special before kubelet" containers and they make things a lot more complicated.

My preferred order looks like:

  • Investigate difficulty of afterburn path, if easy: preferred
  • daemonset
  • extension
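
For comparison with the podman path, the daemonset option (option 4 above) would look roughly like the snippet below. This is only a sketch: the namespace, tolerations, device path, and the reuse of the CI test image referenced in this PR's diff are assumptions, not something the PR itself defines.

# Rough sketch of the daemonset alternative; names and paths are illustrative only.
cat <<'EOF' | oc apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: qemu-guest-agent
  namespace: openshift-machine-config-operator   # placeholder namespace
spec:
  selector:
    matchLabels:
      app: qemu-guest-agent
  template:
    metadata:
      labels:
        app: qemu-guest-agent
    spec:
      hostNetwork: true
      tolerations:
      - operator: Exists                 # also schedule onto control-plane nodes
      containers:
      - name: qemu-guest-agent
        image: registry.svc.ci.openshift.org/ovirt/qemu-guest-agent:4.2   # test image from this PR
        securityContext:
          privileged: true               # needs the host's virtio serial device
        volumeMounts:
        - name: virtio-ports
          mountPath: /dev/virtio-ports
      volumes:
      - name: virtio-ports
        hostPath:
          path: /dev/virtio-ports        # assumed device path for the agent channel
EOF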

@cgwalters
Member

Can you clarify, is the desired functionality in the qemu-guest-agent package or do we really need https://github.com/oVirt/ovirt-guest-agent ? At least the former seems to be zero additional dependencies on the host which is nice.

@nyoxi

nyoxi commented Nov 23, 2020

Can you clarify, is the desired functionality in the qemu-guest-agent package or do we really need https://github.com/oVirt/ovirt-guest-agent ? At least the former seems to be zero additional dependencies on the host which is nice.

You don't want oVirt Guest Agent as that one is deprecated. What you really want is QEMU Guest Agent.

@cgwalters
Member

OK this turned into https://bugzilla.redhat.com/show_bug.cgi?id=1900759
/close

@openshift-ci-robot
Contributor

@cgwalters: Closed this PR.

In response to this:

OK this turned into https://bugzilla.redhat.com/show_bug.cgi?id=1900759
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot
Contributor

@eslutsky: An error was encountered getting external tracker bugs for bug 1872238 on the Bugzilla server at https://bugzilla.redhat.com:

could not parse external identifier "/pull/2042#issuecomment-683895336" as pull: invalid pull identifier: could not parse 2042#issuecomment-683895336 as number: strconv.Atoi: parsing "2042#issuecomment-683895336": invalid syntax
Please contact an administrator to resolve this issue, then request a bug refresh with /bugzilla refresh.

In response to this:

[WIP] Bug 1872238: oVirt: Introduce qemu-guest-agent container

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@cgwalters
Member

Though after that meeting I did think of yet another option - stick the code inside https://github.com/openshift/cluster-api-provider-ovirt
It's already oVirt/RHV specific etc. - The thing that's architecturally ugly about this though is cluster API is supposed to be about managing the infrastructure (possibly for multiple clusters, though AFAIK OpenShift doesn't use that) - it's not supposed to be about managing stuff inside the machines. But...eh, I can't imagine it'd be too hard to carry a special case there to say something like "if (running_in_openshift) deploy_daemonset()".

Anyways though, fine continuing the angle of including in the OS by default, but leaving this here in case you guys are like "ah that sounds great!".

@mandre
Member

mandre commented Nov 30, 2020

Though after that meeting I did think of yet another option - stick the code inside https://github.com/openshift/cluster-api-provider-ovirt
It's already oVirt/RHV specific etc.

OpenStack would also benefit from having the qemu-guest-agent available somehow. If we were to go down the "daemonset in the platform-specific cluster API providers" path, that would mean duplicating the code from the ovirt provider to the openstack one. I can see it being rather hard to maintain in the long run.

Labels

  • bugzilla/invalid-bug: Indicates that a referenced Bugzilla bug is invalid for the branch this PR is targeting.
  • bugzilla/severity-medium: Referenced Bugzilla bug's severity is medium for the branch this PR is targeting.
  • do-not-merge/hold: Indicates that a PR should not merge because someone has issued a /hold command.
  • do-not-merge/work-in-progress: Indicates that a PR should not merge because it is a work in progress.