Add workflow to publish nginx and service images to GHCR #676
Conversation
Force-pushed from 9374e03 to ebb2ba5.
Love this 😄 What do you think about building the images multi-arch? Similar to pyxform-http:

```yaml
# Setup QEMU emulator to build multi-arch images
# https://github.com/docker/setup-qemu-action
- name: Set up QEMU
  uses: docker/setup-qemu-action@v3

# Configure Buildx for Docker build
# https://github.com/docker/setup-buildx-action
- name: Set up Buildx
  uses: docker/setup-buildx-action@v3

# Build and push Docker image with Buildx
# https://github.com/docker/build-push-action
- name: Build and push Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
    platforms: 'linux/amd64,linux/arm64'
```

The main things are the two additional actions above the build + the specified `platforms`.
Oh yes, I meant to add that to the todo! We can do that here. 👍

Although this does run on every push, so building multi-arch will add to the build time and might not make sense. Do you think it might be good to build on release instead, or perhaps in addition? The docker/metadata action will handle correct image tagging for releases.

That I do have in my todo list to address! I'm just leaving it like that until all the other issues are addressed. I can add multiarch at that same time since you've told me exactly how. 🎉
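For reference, release-only tagging with docker/metadata-action could look roughly like this; the trigger, image name, and tag patterns below are illustrative assumptions, not part of this PR:

```yaml
# Sketch only: publish on releases, with tags derived by docker/metadata-action.
on:
  release:
    types: [published]

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # needed to push to GHCR
    steps:
      - uses: actions/checkout@v4
      - name: Extract image metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository_owner }}/central-nginx  # image name is an assumption
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
      # ...followed by the QEMU / Buildx / build-push steps shown above
```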
docker-compose.yml
```yaml
volumes:
  - ./files/local/customssl/:/etc/customssl/live/local/
  - ./files/nginx/redirector.conf:/etc/nginx/conf.d/redirector.conf
  - ./files/nginx/common-headers.conf:/usr/share/odk/nginx/common-headers.conf
```
@yanokwa we do want users to be able to customize `redirector.conf` and `common-headers.conf`, right?
No customization on either.
Got it. I've moved this back to a copy in the Dockerfile, then. That means those files will be copied at build time so will be static in the image.
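A minimal sketch of that build-time copy, assuming the templates land in /usr/share/odk/nginx/ where the startup script reads them; the actual Dockerfile lines may differ:

```dockerfile
# Sketch: bake the nginx config templates into the image at build time
# instead of bind-mounting them; the destination path is an assumption
# based on the cp in the startup script below.
COPY files/nginx/redirector.conf files/nginx/common-headers.conf /usr/share/odk/nginx/
```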
```diff
@@ -17,7 +27,6 @@ fi

# start from fresh templates in case ssl type has changed
echo "writing fresh nginx templates..."
cp /usr/share/odk/nginx/redirector.conf /etc/nginx/conf.d/redirector.conf
```
You'll likely need to add this back. It was there because if you go from upstream to letsencrypt, the file at `/etc/nginx/conf.d/redirector.conf` gets overwritten and you need a fresh template.
Ahh, yes. I've added a comment.
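The restored lines with such a comment might look roughly like this (exact wording in the PR may differ):

```bash
# Copy a fresh template on every start: switching SSL type (e.g. from
# upstream to letsencrypt) overwrites /etc/nginx/conf.d/redirector.conf,
# so it must be regenerated from /usr/share/odk/nginx each time.
echo "writing fresh nginx templates..."
cp /usr/share/odk/nginx/redirector.conf /etc/nginx/conf.d/redirector.conf
```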
```diff
@@ -1,5 +1,15 @@
#!/bin/bash


echo "writing client config..."
if [[ $OIDC_ENABLED != 'true' ]] && [[ $OIDC_ENABLED != 'false' ]]; then
```
This is more robust, but we don't do any other such checking elsewhere. I'd bias toward: if it's `true`, then it's enabled; if it's anything else, it's not enabled.
This came in as part of other work. I'd rather leave it like this here and file an issue to reconsider if you really want.
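For reference, the simpler convention suggested above would be roughly this (a sketch, not what this PR does):

```bash
# Treat only the literal string 'true' as enabled; anything else disables OIDC.
if [[ "$OIDC_ENABLED" == 'true' ]]; then
  oidc_enabled=true
else
  oidc_enabled=false
fi
```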
Before merging, verify that toggling between self-sign, upstream, and letsencrypt works. All of this is near instant: modify .env, …
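A hedged sketch of that toggle check, assuming the standard SSL_TYPE variable in .env and an nginx service in the compose file:

```bash
# Switch the SSL type and recreate only the nginx container; repeat for each value.
sed -i 's/^SSL_TYPE=.*/SSL_TYPE=letsencrypt/' .env   # also try: selfsign, upstream
docker compose up -d nginx                           # picks up the new setting
docker compose logs nginx | tail                     # confirm fresh templates were written
```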
Thanks for the review @spwoodcock and @yanokwa! Let’s merge now and then file issues as we see improvements to make. @matthew-white I’ll definitely want your review on this when we get to #677 but I think for now this is a good first step.
Closes #165
Progress towards #656
Based on #249, #546
TODO:
What has been done to verify that this works as intended?
The built images can be seen at https://github.com/lognaturel/central/pkgs/container/central-nginx and https://github.com/lognaturel/central/pkgs/container/central-service
I have verified that the service image + nginx image with custom certs work with the standard Docker Compose file.
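To reproduce that check, a hypothetical compose override pulling the images linked above might look like this; the tags and the override usage are assumptions, since the published images aren't wired into the standard compose file yet (see #677):

```yaml
# docker-compose.override.yml -- hypothetical, for local testing only;
# the :latest tags are an assumption, not necessarily what the workflow publishes.
services:
  service:
    image: ghcr.io/lognaturel/central-service:latest
  nginx:
    image: ghcr.io/lognaturel/central-nginx:latest
```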
Why is this the best possible solution? Were any other approaches considered?
I mostly followed the groundwork laid by @mattelacchiato in #249 and then @tobiasmcnulty at #546 with some further simplifications.
I used the image naming suggested by @tobiasmcnulty: `central-service` and `central-nginx`. I considered calling them frontend and backend but I think matching the Dockerfile names is the least confusing option.

How does this change affect users? Describe intentional changes to behavior and behavior that could have accidentally been affected by code changes. In other words, what are the regression risks?
It doesn't affect users yet because the images aren't used. I'm proposing splitting that off at #677.
Does this change require updates to documentation? If so, please file an issue here and include the link below.
Not yet but #677 will.
Before submitting this PR, please make sure you have:
- targeted the `next` branch OR only changed documentation/infrastructure (`master` is stable and used in production)