Proposal: Extract configuration generation to init-container #477
Comments
Thank you Lennart. That's a great idea. If it's (1) then I think there's no way this wouldn't work. For (2) I am not sure, it may get a bit messy for the multi-ironic scenario. I'm still not sure, though, how that could help solve #468. Or do you mean that we could do that by bypassing this initContainer and implementing some manual configuration on our own? (If that's the case, I have no idea. If I understand @dtantsur's comment correctly, we would still need the pod IP somehow to generate the configuration if JSON_RPC is in use.)
I mean option 2. Most of the configuration is currently done in "common" scripts anyway. My hope and understanding is that there isn't anything preventing us from doing ALL the configuration in one common container, as long as we have shared volumes for getting the config from that init-container.
Yes, I mean opting out of the init-container. Then it is possible to specify all configuration directly using configMaps and secrets, and that solves #468 since every option is directly "exposed". I.e. I can set whatever IP I want in every field without any extra variables or changes in the scripts/image. In theory it is already possible to override and skip the configuration by changing the
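For illustration, opting out could look roughly like this (a sketch only; the ConfigMap name, mount path, and config keys are hypothetical, not the actual ironic-image layout): the user supplies the rendered config directly, so every field, including the IP, is set explicitly with no generation step.

```yaml
# Hypothetical sketch: supplying ironic.conf from a user-managed ConfigMap
# instead of generating it in an init-container. Names and paths are
# illustrative, not taken from the real manifests.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ironic-config
data:
  ironic.conf: |
    [DEFAULT]
    # Any IP can be set directly here, no extra variables needed.
    my_ip = 192.0.2.10
---
apiVersion: v1
kind: Pod
metadata:
  name: ironic
spec:
  containers:
    - name: ironic
      image: quay.io/metal3-io/ironic   # config layout inside is assumed
      volumeMounts:
        - name: config
          mountPath: /etc/ironic
  volumes:
    - name: config
      configMap:
        name: ironic-config
```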
Thank you. As I said, I think this is a great idea to decouple the config generation from the main containers.
I'm wondering, though, if this could be the standard? I mean, could we use configMaps in the general case as well? Then the issue will just be how to configure those maps.
Yes it could become the standard, but we need to move carefully in order to not break everything relying on the current behavior. That is why I don't propose to just delete all scripts 😂
Yeah, that makes sense, though I guess one could argue that using the initContainer is a pretty big move too 😂 Well, I guess this could be tried out so that we at least allow using configMaps. Then, if configMaps turn out to be a good option, we can move on to making it the standard.
I just realized now that having one giant initContainer will also mean that we're stuck with the current one-pod setup (IMO at some point we need to think about splitting it into smaller pods, especially since that is required to scale up ironic). It can, however, be easily solved by just having one initContainer for every pod.
I don't see how this is related. Of course there will be one init-container per pod, and this does not affect the current one-replica deployment in any way (it neither improves nor worsens it); it is just a different way of initializing.
I think it's related in the sense that if we decide to bundle everything into one initContainer, it will work for our current setup where everything is inside one pod. If we split the deployment later (e.g. one pod for ironic, another for mariadb, etc.), we will either have to split that init container up, or have some config duplication. It's different from the current setup, in which each container is responsible for its own config generation, so no matter how we might split them, the config handling will stay the same. Anyway, I'm just getting too far ahead. Having config duplication is not too bad though, and we are not about to change our deployment soon.
@lentzi90 I really like the idea, and honestly I've already done some tests in the past with init containers precisely to move the configuration away from the ironic-image, so thumbs up from my side as a long-term goal.
The containers may do their own config as it is now, but they are still extremely coupled. They rely on a shared volume and common scripts to do that configuration. The config duplication that you talk about is just what would end up in each container, not something the user would have to store or provide. The init-container would, for example, generate dnsmasq config even when it is not needed, but I don't see how that is an issue. As it is today, each container has a bunch of stuff that it does not really need. For example, they all have python and inspector installed, together with scripts to run and configure it, even though we may not even use it. That is far worse than having an extra config file in my opinion.
Yes, this is the exact issue that I wanted to raise here. Thank you for putting it into the right words!
Sure. I don't mean that generating excessive config is a bad thing. The issue is that if we have, e.g., two pods with two initContainers, then it could happen that these two init containers generate different configs, and the two pods won't work well together because of that. This is exactly the issue of coupled config that you mentioned above. It would not be very visible, though, if we still use one pod for all of the containers.
Yes, no argument about that, but I don't think it's too related to what we're discussing here. IMO it's very easy to prune down the container images to only contain what is necessary, regardless of the initContainer. This issue is just due to a design choice to use the same image for almost everything.
I don't see how anything changes in this regard with my proposal. The output of the script does not change because it runs in an init-container. This proposal is not about how to split up the deployment and separate the containers. It is a valid concern of course to think about whether it would be harder to do with this proposal implemented, and my point is that it would be easier. If the config generation is separated from the individual containers, it will be possible to run them separately either by
I disagree. This is more relevant than the discussion about splitting them into separate pods. Please explain how you "prune down the container images to only contain what is necessary" when they all use python to render their own config? As far as I'm concerned, the config generation must be separated first, and that is exactly what this proposal is about.
Sorry, I didn't mean to criticize or disagree with your proposal, as I said I think it's a great idea, and having it would for sure be a great improvement to our current design, and anything we do afterwards would be easier thanks to it. What I'm trying to express here is just something we should consider if we decide to go with this proposal, or when we implement it.
Fully agree. Just expressing a concern that since this separation into an initContainer would be a big task, we may as well keep in mind that the config generation process might be different if we keep the current 1-pod manifest vs. if we split the pod as well. Again, I'm not saying that you were wrong in any of your logic, I'm just trying to help identify the bigger picture. IMO if we decide to go with this proposal, we should also redesign the structure to break out of the 1-pod manifest, but that's just my two cents.
By "what is necessary", I mean it could include python or bash or whatever it is that is needed for config generation right now. I fully agree that having config generation separated would allow us to prune the images better, but I'm just sharing an observation: even without the config generation separated, we still can prune the images down; we just do not do it because we want to use the same image for everything. (That may be the topic we need to discuss first if we want to discuss image pruning?)
No worries, it is good to scrutinize it before we start! 🙂
I agree that we should split it up, and I think this proposal makes it slightly easier to do so, but more than anything I think it is a separate topic and will need much larger changes also for users, e.g. they may need to switch to the ironic-standalone-operator instead of using kustomize. This proposal, on the other hand, could be done without impacting them.
Python and bash are big and bad both from a security perspective and from an image-size perspective. If we just remove some specific packages but keep the tool chain, we do not gain much. But yes, you are right that we need to have separate images for separate containers. I think this proposal is a good place to start on that work because it will enable us to get much greater benefits out of it, especially considering that it can be done without breaking or changing the current deployment method.
Well, the way I see it is that if we commit to a change this big, we may as well do it "right". In any case, if we decide to split the big ironic pod and the init config container is in place, it would likely change too, so it's a good thing to keep in mind. (And imo breaking up the big ironic pod will need to happen sooner or later, as only the ironic container, but not …) Anyway, since you mentioned the ironic-standalone-operator, I agree that there are too many uncertainties that may affect our decisions in the future, so maybe it's a good idea to just implement any improvements that we can agree on and know for sure won't break anything. I think this proposal may satisfy that. Just as a side note, I actually have split the ironic pod up in my multi-ironic setup, and that didn't require any change in user input or the standalone-operator, so imo it's totally doable right now too.
We wouldn't gain as much as if we didn't have them at all, but IMO there would be a great difference between pure python vs. python + all the oslo packages + binaries compiled from ironic source code. Of course, this is not the topic of discussion here, but I think it's worth mentioning since it may suggest that there are other reasons why we have not used separate images and pruned them down yet.
Yeah, I agree. As I said, I'm just not sure if the decision to use the same image for everything was just an act of convenience, or if there was any reason behind it.
/triage accepted
I very much like this proposal. As you say, it can pave the way for splitting the one giant ironic-image into separate images for independent components. The next step, I believe, should be to come up with a rough execution plan. I'd hate to see this change introduced as one giant PR. It's possible that my effort to reduce the number of large options in ironic-image (e.g. #487 or inspector removal #483) is step zero since it radically reduces the number of branches in our bash scripts. We need to make sure that no options have conditional defaults (e.g. JSON RPC is on by default in runironic-conductor, but not in runironic). Step one can be to do the split inside ironic-image without adding an init container just yet. This way we can gradually isolate the configuration step, verifying the functionality at each stage.
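The conditional-defaults concern can be shown with a minimal shell sketch (the variable name, entry-point names, and values here are hypothetical, not the actual ironic-image scripts): when the same option defaults differently depending on which entry point runs, a single shared init step cannot know which default to render.

```shell
#!/bin/sh
# Hypothetical illustration of a conditional default. In the assumed pattern,
# the default for USE_JSON_RPC depends on which entry-point script is running,
# so config generation cannot be moved to one shared init step as-is.
case "${ENTRYPOINT:-runironic}" in
  runironic-conductor) USE_JSON_RPC="${USE_JSON_RPC:-true}" ;;
  *)                   USE_JSON_RPC="${USE_JSON_RPC:-false}" ;;
esac
echo "USE_JSON_RPC=${USE_JSON_RPC}"
```

Removing such branches, so that every option has exactly one explicit, overridable default, is what makes the later extraction mechanical.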
Ah, and when ironic-standalone-operator is complete, we can move some configuration logic to it. Right now we're limited by trying to make ironic-image usable as it is. |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
/remove-lifecycle stale
I think I have come up with a way to solve #468 in a more generic way and improve the user experience of the ironic-image.
The idea is to move all the scripts for generating configuration into their own container. This would run as an init-container, just like ironic-ipa-downloader. It would generate all configuration files, set up TLS and auth, and of course do the wait_for_interface_or_ip.

Doing it this way has multiple benefits. For example, if the pod gets stuck in wait_for_interface_or_ip, it is immediately clear that the issue is during initialization, compared to the current "pod not ready" symptoms.

I realize this may not be easy to achieve, but I do think it would be a good way to enable advanced use cases and provide greater flexibility in how we deploy ironic. What do you think?
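A rough sketch of what the proposal could look like in a pod spec (container names, command, and paths are hypothetical assumptions, not the actual manifests): the init-container writes all generated config into a shared emptyDir volume, and the main containers only read it.

```yaml
# Hypothetical sketch of the proposed init-container approach.
apiVersion: v1
kind: Pod
metadata:
  name: ironic
spec:
  initContainers:
    - name: generate-config
      image: quay.io/metal3-io/ironic     # would run only the config scripts
      command: ["/bin/configure-ironic.sh"]  # assumed entry point
      volumeMounts:
        - name: shared-config
          mountPath: /shared
  containers:
    - name: ironic
      image: quay.io/metal3-io/ironic
      volumeMounts:
        - name: shared-config
          mountPath: /etc/ironic
          readOnly: true
  volumes:
    - name: shared-config
      emptyDir: {}
```

If the init-container fails or blocks (e.g. waiting for an interface), kubectl shows the pod in an Init state, which points directly at configuration rather than at the service containers.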