`next` version (currently 1.68.0) no longer supports the `config.executions` schema #125
Comments
The issue actually begins in 1.67.1.
Thanks @MaSpeng, @finkinfridom, @medwingADMIN for bringing that up! I am not thrilled to fully switch from the config file to env vars. The main reason is that this would have quite an impact on existing chart users, and we would have to bump the chart's major version. Maybe @ivov from n8n can shed some light on the strategy. As a first step, I suggest removing …
https://truecharts.org/charts/stable/n8n/ is supposedly working with 1.68.0.
Given that most of the configurations we need can be set as environment variables, and given that setting … didn't fix the issue (the default values were still there), the only working solution we found was to set … in your values.yaml. With the above changes, the configmap for the …
I know I'm a noob, but the friend who set up my k8s for me used the other chart I linked above; I'm on 1.68 now with no problems in queue mode with workers, webhook processors, and replicas.
@MaSpeng Thanks for bringing this up. Our config schema has grown too large and does not support dependency injection, so we are slowly moving towards smaller independent configs that support DI. The env vars continue to work as before, but we did not realize that the internal structure of the config schema was being relied on externally. Sorry for the inconvenience; we'll rename those keys back. But please bear in mind that we will likely deprecate and later drop …
Thank you all for the input and contributions. I find applications that have multiple levels of configuration easier to operate and reason about, because one typically ends up with a config file for the general setup and env vars for environment-specific changes. This makes it easy to see what is a general config option and what is env-specific. For the upcoming iteration of the chart, I propose keeping the structured format and converting it to environment variables:

```yaml
db:
  type: postgresdb
  postgresdb:
    database: n8n
    host: localhost
    port: 12345
...
```

will become the env variables:

```
DB_TYPE=postgresdb
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_HOST=localhost
DB_POSTGRESDB_PORT=12345
...
```

There are also the …
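For illustration, a minimal sketch of how such a flattening could be implemented as a Helm named template; the helper name `n8n.configToEnv` is hypothetical, not existing chart code:

```yaml
{{- /* Hypothetical helper: recursively flatten a nested config map into ENV_VAR-style keys. */ -}}
{{- define "n8n.configToEnv" -}}
{{- $prefix := index . 0 -}}
{{- $config := index . 1 -}}
{{- range $key, $value := $config -}}
{{- /* Join parent keys with "_" and upper-case them: db.postgresdb.host -> DB_POSTGRESDB_HOST */ -}}
{{- $name := ternary (upper $key) (printf "%s_%s" $prefix (upper $key)) (eq $prefix "") -}}
{{- if kindIs "map" $value -}}
{{- include "n8n.configToEnv" (list $name $value) -}}
{{- else }}
{{ $name }}: {{ $value | quote }}
{{- end -}}
{{- end -}}
{{- end -}}
```

Rendering `include "n8n.configToEnv" (list "" .Values.n8n.config)` into a ConfigMap's `data:` section would then yield entries like `DB_POSTGRESDB_HOST: "localhost"`.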
@Vad1mo I appreciate the changes you want to make. The incorporation of the … I think this would look like this in the end:

```yaml
db:
  type: postgresdb
  postgresdb:
    database: n8n
    host: localhost
    port: 5432
    password_file: /var/secrets/postgres_secret
```

I would also suggest providing a schema to ensure that a specific setting can only be provided through an environment variable or a file, not both. This might be a general improvement to ensure the configuration is sensible, so that nobody tries to configure … By the way, this could also be a good time to remove deprecated configurations like …
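To make the file-based variant concrete: n8n supports a documented `_FILE` suffix so credentials can be read from a mounted file. A hedged sketch of the Kubernetes side, where the Secret name `db-credentials` and its key `postgres_secret` are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: n8n-example
spec:
  containers:
    - name: n8n
      image: n8nio/n8n
      env:
        # n8n reads the DB password from this file instead of a plain env var
        - name: DB_POSTGRESDB_PASSWORD_FILE
          value: /var/secrets/postgres_secret
      volumeMounts:
        - name: db-credentials
          mountPath: /var/secrets
          readOnly: true
  volumes:
    - name: db-credentials
      secret:
        secretName: db-credentials # each Secret key becomes a file under /var/secrets
```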
I see, the …
Schema is a good keyword; I had that in mind too, especially since I recently stumbled across helm-cel. Do you know if there is a list of supported env vars for n8n? The closest I got was this … However, I would rather not make the chart so strict that it prevents people from setting env vars that are not yet supported by this chart.
I agree with the out-of-scope part.
Personally, I would use the documentation for this topic: https://docs.n8n.io/hosting/configuration/environment-variables/
I would also not limit the …
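One lightweight alternative to a full values schema would be a guard in a chart template that fails the render when both variants are set; a sketch assuming the values paths proposed above (not final):

```yaml
{{- /* Hypothetical guard: reject values that set both a plain password and a password file. */ -}}
{{- $cfg := .Values.n8n.config | default dict -}}
{{- $plain := dig "db" "postgresdb" "password" "" $cfg -}}
{{- $file := dig "db" "postgresdb" "password_file" "" $cfg -}}
{{- if and $plain $file -}}
{{- fail "set either db.postgresdb.password or db.postgresdb.password_file, not both" -}}
{{- end -}}
```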
This is the n8n helm chart values file for the next version. I designed it so that you can now configure n8n, worker, and webhook individually. @finkinfridom

```yaml
# README
# High-level values structure: overview and explanation of the values.yaml file.
# 1. chart-wide values, like the image repository, image tag, etc.
# 2. ingress (tested with nginx, but it likely works with others too)
# 3. n8n app configuration + kubernetes specific settings
# 4. worker related settings + kubernetes specific settings
# 5. webhook related settings + kubernetes specific settings
# 6. Redis related settings + kubernetes specific settings
##
##
## Common Kubernetes Config Settings for this entire n8n deployment
##
image:
repository: n8nio/n8n
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
# define a custom ingressClassName, like "traefik" or "nginx"
className: ""
# n8n application related n8n configuration + Kubernetes specific settings
n8n:
# The config: {} dictionary is converted to environment variables in the ConfigMap.
# Example: the YAML entry db.postgresdb.host: localhost is transformed to DB_POSTGRESDB_HOST=localhost
# See https://docs.n8n.io/hosting/configuration/environment-variables/ for all values.
config:
n8n:
# If not specified, n8n creates a random encryption key for encrypting saved credentials and saves it in the ~/.n8n folder.
# If you run n8n stateless, you should provide an encryption key here.
encryption_key:
# db:
# type: postgresdb
# postgresdb:
# host: 192.168.0.52
# Dictionary for secrets; unlike config:, the values here end up in the Secret (base64-encoded).
# The YAML entry db.postgresdb.password: my_secret is transformed to DB_POSTGRESDB_PASSWORD=bXlfc2VjcmV0
# See https://docs.n8n.io/hosting/configuration/environment-variables/
secret:
# database:
# postgresdb:
# password: 'big secret'
##
## N8n Kubernetes specific settings
##
persistence:
## If true, use a Persistent Volume Claim; if false, use emptyDir
##
enabled: false
# Volume type; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
type: emptyDir
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
## PVC annotations
#
# If you need this annotation, include it in your values.yaml file and the pvc.yaml template will add it.
# This is no longer maintained in Helm v3.
# https://github.com/8gears/n8n-helm-chart/issues/8
#
# annotations:
# helm.sh/resource-policy: keep
## Persistent Volume Access Mode
##
accessModes:
- ReadWriteOnce
## Persistent Volume size
##
size: 1Gi
## Use an existing PVC
##
# existingClaim:
replicaCount: 1
# here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
# If these options are not set, default values are 25%
# deploymentStrategy:
# type: RollingUpdate
# maxSurge: "50%"
# maxUnavailable: "50%"
deploymentStrategy:
type: "Recreate"
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podLabels: {}
podSecurityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
# your own docker image
# see https://github.com/8gears/n8n-helm-chart/pull/30
lifecycle:
{}
# here's the sample configuration to add mysql-client to the container
# lifecycle:
# postStart:
# exec:
# command: ["/bin/sh", "-c", "apk add mysql-client"]
# here you can override a command for main container
# it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
# run additional preparation steps (e.g., installing additional software)
command: []
# sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
# command:
# - tini
# - --
# - /bin/sh
# - -c
# - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n
# here you can override the livenessProbe for the main container
# it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like …)
livenessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can override the readinessProbe for the main container
# it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like …)
readinessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can add init containers to the various deployments
initContainers: []
service:
type: ClusterIP
port: 80
annotations: {}
workerResources:
{}
webhookResources:
{}
resources:
{}
# We usually recommend not specifying default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
## Worker related settings
worker:
enabled: false
count: 2
concurrency: 2
config: {}
secret: {}
##
## Worker Kubernetes specific settings
##
persistence:
## If true, use a Persistent Volume Claim; if false, use emptyDir
##
enabled: false
# Volume type; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
type: emptyDir
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
## PVC annotations
#
# If you need this annotation, include it in your values.yaml file and the pvc.yaml template will add it.
# This is no longer maintained in Helm v3.
# https://github.com/8gears/n8n-helm-chart/issues/8
#
# annotations:
# helm.sh/resource-policy: keep
## Persistent Volume Access Mode
##
accessModes:
- ReadWriteOnce
## Persistent Volume size
##
size: 1Gi
## Use an existing PVC
##
# existingClaim:
replicaCount: 1
# here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
# If these options are not set, default values are 25%
# deploymentStrategy:
# type: RollingUpdate
# maxSurge: "50%"
# maxUnavailable: "50%"
deploymentStrategy:
type: "Recreate"
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podLabels: {}
podSecurityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
# your own docker image
# see https://github.com/8gears/n8n-helm-chart/pull/30
lifecycle:
{}
# here's the sample configuration to add mysql-client to the container
# lifecycle:
# postStart:
# exec:
# command: ["/bin/sh", "-c", "apk add mysql-client"]
# here you can override a command for main container
# it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
# run additional preparation steps (e.g., installing additional software)
command: []
# sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
# command:
# - tini
# - --
# - /bin/sh
# - -c
# - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n
# here you can override the livenessProbe for the main container
# it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like …)
livenessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can override the readinessProbe for the main container
# it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like …)
readinessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can add init containers to the various deployments
initContainers: []
service:
type: ClusterIP
port: 80
annotations: {}
workerResources:
{}
webhookResources:
{}
resources:
{}
# We usually recommend not specifying default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
## Webhook related settings
# With webhooks.enabled=true you disable webhook processing in the main process and enable it on separate webhook instances.
# See https://github.com/8gears/n8n-helm-chart/issues/39#issuecomment-1579991754 for the full explanation.
webhooks:
enabled: false
count: 1
config: {}
secret: {}
##
## Webhook Kubernetes specific settings
##
persistence:
## If true, use a Persistent Volume Claim; if false, use emptyDir
##
enabled: false
# Volume type; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
type: emptyDir
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
## PVC annotations
#
# If you need this annotation, include it in your values.yaml file and the pvc.yaml template will add it.
# This is no longer maintained in Helm v3.
# https://github.com/8gears/n8n-helm-chart/issues/8
#
# annotations:
# helm.sh/resource-policy: keep
## Persistent Volume Access Mode
##
accessModes:
- ReadWriteOnce
## Persistent Volume size
##
size: 1Gi
## Use an existing PVC
##
# existingClaim:
replicaCount: 1
# here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
# If these options are not set, default values are 25%
# deploymentStrategy:
# type: RollingUpdate
# maxSurge: "50%"
# maxUnavailable: "50%"
deploymentStrategy:
type: "Recreate"
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podLabels: {}
podSecurityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
# your own docker image
# see https://github.com/8gears/n8n-helm-chart/pull/30
lifecycle:
{}
# here's the sample configuration to add mysql-client to the container
# lifecycle:
# postStart:
# exec:
# command: ["/bin/sh", "-c", "apk add mysql-client"]
# here you can override a command for main container
# it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
# run additional preparation steps (e.g., installing additional software)
command: []
# sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
# command:
# - tini
# - --
# - /bin/sh
# - -c
# - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n
# here you can override the livenessProbe for the main container
# it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like …)
livenessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can override the readinessProbe for the main container
# it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like …)
readinessProbe:
httpGet:
path: /healthz
port: http
# initialDelaySeconds: 30
# periodSeconds: 10
# timeoutSeconds: 5
# failureThreshold: 6
# successThreshold: 1
# here you can add init containers to the various deployments
initContainers: []
service:
type: ClusterIP
port: 80
annotations: {}
workerResources:
{}
webhookResources:
{}
resources:
{}
# We usually recommend not specifying default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
## Bitnami Valkey configuration
## https://artifacthub.io/packages/helm/bitnami/valkey
redis:
enabled: false
architecture: standalone
master:
persistence:
enabled: true
existingClaim: ""
size: 2Gi
```
I can't think of anything else it would need. Execution mode?
@Vad1mo looks good to me 👍. The execution mode is a good topic; in the past, this was conditionally set based on …
@Vad1mo looks good to me too. Thanks a lot.
Now you do it by setting enabled: true on the respective section, like:

```yaml
webhooks:
  enabled: false
  config: {}
  secret: {}
...
worker:
  enabled: false
  count: 2
  concurrency: 2
  config: {}
  secret: {}
```
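So a minimal override that actually enables queue processing with dedicated workers and webhook processors might look like this (values are illustrative, assuming the structure above):

```yaml
worker:
  enabled: true
  count: 2
  concurrency: 10
webhooks:
  enabled: true
  count: 1
```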
What about enabling queue mode?
Is count/concurrency equivalent to replicas?
@mayphilc That's what was talked about in this topic: the "queue" mode is actually EXECUTIONS_MODE, so when enabling the webhook/worker options, the chart will set the execution mode to "queue"; at least that's what I expect for now :).
I now see that I need to take Redis into account when enabling queue mode, so the two will likely influence each other. I have other questions regarding webhook/worker.
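A hedged sketch of how the chart could wire this up: when worker or webhook processing is enabled, emit n8n's documented queue-mode env vars and point them at the bundled Redis/Valkey. The service name pattern is an assumption:

```yaml
{{- if or .Values.worker.enabled .Values.webhooks.enabled }}
# Queue mode requires Redis; these env var names come from the n8n docs.
EXECUTIONS_MODE: "queue"
QUEUE_BULL_REDIS_HOST: {{ printf "%s-redis-master" .Release.Name | quote }} # assumed service name
QUEUE_BULL_REDIS_PORT: "6379"
{{- end }}
```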
@Vad1mo From the environment variable configuration in the n8n documentation, it looks like you can at least set each environment variable on each service. Whether it has any effect is decided at runtime and does not seem to be documented. As a consequence, we normally share the same environment variables across all services (main, webhook, worker), which is especially important if you allow additional built-in or external modules for code nodes. I think you should at least be able to overwrite or nullify a particular setting for a specific service, so that the consumer of this helm chart has the freedom to configure their specific needs; but this could also be a case of YAGNI ^^.
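A sketch of how such per-service overrides could be implemented in the chart templates, with the shared n8n.config as the base and worker.config layered on top (helper logic assumed, not existing chart code):

```yaml
{{- /* Hypothetical merge: worker-specific keys win over the shared n8n config. */ -}}
{{- $base := deepCopy (.Values.n8n.config | default dict) -}}
{{- $workerConfig := mergeOverwrite $base (.Values.worker.config | default dict) -}}
{{- /* $workerConfig can then be flattened to env vars for the worker deployment. */ -}}
```

Nullifying is partly covered by Helm itself: setting a key to `null` in an override file removes it during value coalescing.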
Thanks for your view. I was thinking the same; glad I was able to confirm my view.
When deploying the next version of n8n (currently 1.68.0), the pod keeps restarting because the config.executions schema no longer matches. Is this a breaking change in the upcoming version? What is the suggested way to migrate to the next version?
Currently, the helm chart provides a couple of configuration keys:
https://github.com/8gears/n8n-helm-chart/blob/main/charts/n8n/values.yaml#L8
Given what the current n8n documentation says about environment variable usage (https://docs.n8n.io/hosting/scaling/execution-data/#enable-data-pruning), wouldn't it be easier to move everything to env variables instead of the config map?
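For the `config.executions` keys specifically, documented env-var equivalents exist; for example, execution data pruning can be configured like this (values are examples, not chart defaults):

```yaml
# From https://docs.n8n.io/hosting/configuration/environment-variables/
EXECUTIONS_DATA_PRUNE: "true"             # enable pruning of old execution data
EXECUTIONS_DATA_MAX_AGE: "168"            # hours to keep finished executions
EXECUTIONS_DATA_PRUNE_MAX_COUNT: "50000"  # hard cap on stored executions
```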