Manage the lifecycle of an (internal-only) bridge #2267

Open
mzagozen opened this issue Oct 28, 2024 · 3 comments

Comments

@mzagozen
Contributor

I recently started using the bridge node kind to connect multiple nodes to the same access domain. In other words, I use the bridge to connect multiple containerlab nodes defined in the same topology, without connecting the bridge to any external or other host interfaces. The documentation states that you're supposed to bring the bridge up yourself. This makes perfect sense when the bridge is connected to interfaces outside of containerlab's control, since the user needs to set those up anyway. But for the use case where the bridge only connects containerlab nodes (like in the br01 example), I feel it would make for a better user experience if containerlab took ownership of the bridge's entire lifecycle.
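For reference, a minimal topology of this shape (modeled loosely on the br01 example from the docs; node names and images here are arbitrary placeholders) would look something like:

```yaml
name: bridge-fanout

topology:
  nodes:
    br01:
      kind: bridge
    n1:
      kind: linux
      image: alpine:3
    n2:
      kind: linux
      image: alpine:3
  links:
    - endpoints: ["n1:eth1", "br01:eth1"]
    - endpoints: ["n2:eth1", "br01:eth2"]
```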

For instance, if a bridge node exists in the topology, the deploy and destroy commands could behave as follows (a rough sketch of the deploy side follows the list):

  • [deploy] containerlab creates the bridge if it does not exist
  • [deploy] containerlab connects node endpoints as listed to the bridge
  • [destroy] containerlab removes the node endpoints it created on the bridge
  • [destroy] containerlab removes the bridge if, after the nodes have been removed, no interfaces remain connected to it

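To make the deploy side concrete, here is a minimal sketch. This is not containerlab code, just an illustration using the github.com/vishvananda/netlink package that containerlab already depends on; ensureBridge is a made-up helper name:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/vishvananda/netlink"
)

// ensureBridge creates the named bridge only if it does not exist yet and
// returns true when it did the creation, so that destroy can later tell
// whether containerlab owns the bridge's lifecycle.
func ensureBridge(name string) (bool, error) {
	_, err := netlink.LinkByName(name)
	if err == nil {
		// A link with this name already exists; the user owns it.
		return false, nil
	}
	var notFound netlink.LinkNotFoundError
	if !errors.As(err, &notFound) {
		return false, fmt.Errorf("looking up %s: %w", name, err)
	}

	br := &netlink.Bridge{LinkAttrs: netlink.LinkAttrs{Name: name}}
	if err := netlink.LinkAdd(br); err != nil {
		return false, fmt.Errorf("creating bridge %s: %w", name, err)
	}
	if err := netlink.LinkSetUp(br); err != nil {
		return true, fmt.Errorf("bringing bridge %s up: %w", name, err)
	}
	return true, nil
}

func main() {
	created, err := ensureBridge("br01")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("created by containerlab:", created)
}
```

The returned flag is what would enable a safeguard on destroy: only a bridge that containerlab itself created should ever be deleted.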
WDYT? I am not super familiar with containerlab internals, but I am guessing these operations would need to be serialized before and after the concurrent processing of nodes?

@hellt
Member

hellt commented Oct 28, 2024

Hi
This sounds logical to me. The reasons we never did that were:

  • we did not have this use case as a priority
  • it meant more work

:)

But I'd be down to review a PR for it, especially considering the safeguards needed around not trying to remove the bridge if it existed before the deploy command.

@steiler
Collaborator

steiler commented Oct 28, 2024

Checking and setting up the bridge should be simple by implementing the PreDeploy() func on the bridge node.
The destroy action is a little more tricky. It is not a phased process or anything like that; we simply call delete on the nodes and that's it.
You could spawn a goroutine that keeps running and retries the delete, but that seems ugly to me. The delete call on the bridge could perhaps block, but then how do we know when all the other nodes are removed?
Maybe create a second delete phase and also pull node.DeleteNetnsSymlink() into that phase: remove DeleteNetnsSymlink() from the node interface and implicitly call it from a deleteStageTwo() or so, in which the bridge interfaces are then also checked.
Or try to check, via the list of links, whether any links would remain once all the links the bridge knows of via the topology file are removed.
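A rough sketch of that last idea (again with github.com/vishvananda/netlink; deleteBridgeIfEmpty is a made-up name, and it assumes it runs in some second delete phase after all nodes are gone):

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

// deleteBridgeIfEmpty removes the bridge only when no interface is enslaved
// to it anymore, which protects user-attached interfaces on destroy.
func deleteBridgeIfEmpty(name string) error {
	br, err := netlink.LinkByName(name)
	if err != nil {
		return fmt.Errorf("looking up bridge %s: %w", name, err)
	}

	links, err := netlink.LinkList()
	if err != nil {
		return fmt.Errorf("listing links: %w", err)
	}
	for _, l := range links {
		// MasterIndex points at the bridge an interface is enslaved to.
		if l.Attrs().MasterIndex == br.Attrs().Index {
			// Something is still attached (e.g. a user-added uplink);
			// leave the bridge in place.
			return nil
		}
	}
	return netlink.LinkDel(br)
}

func main() {
	if err := deleteBridgeIfEmpty("br01"); err != nil {
		fmt.Println("error:", err)
	}
}
```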

@hellt
Member

hellt commented Oct 28, 2024

What if we had a cleanup stage that is called after the nodes' destroy?
Something that could potentially be exposed to the user by introducing two more top-level knobs:

Setup
Teardown

each with a list of commands to run via setup.commands and teardown.commands.
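To make the shape of that concrete, a topology using these knobs might look like this (purely a mock-up; neither setup nor teardown exists in containerlab today):

```yaml
name: brlab

# Hypothetical top-level knobs from this proposal.
setup:
  commands:
    - ip link add br01 type bridge
    - ip link set br01 up

teardown:
  commands:
    - ip link del br01

topology:
  nodes:
    br01:
      kind: bridge
```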
