
next version (currently 1.68.0) no longer supports the config.executions schema #125

Open
finkinfridom opened this issue Nov 18, 2024 · 22 comments
Labels
enhancement New feature or request

Comments

@finkinfridom

When deploying the next version of n8n (currently 1.68.0), the pod keeps restarting because the config.executions schema no longer matches.
Is this a breaking change in the upcoming version? What is the suggested way to migrate to the next version?

Currently, the helm chart provides a couple of configuration keys:
https://github.com/8gears/n8n-helm-chart/blob/main/charts/n8n/values.yaml#L8

Given that the current n8n documentation recommends using environment variables (https://docs.n8n.io/hosting/scaling/execution-data/#enable-data-pruning), wouldn't it be easier to move everything to env variables instead of the ConfigMap?

@medwingADMIN

The issue actually begins in 1.67.1

@MaSpeng

MaSpeng commented Nov 18, 2024

There are multiple pull requests which remove sections from the configuration schema since version 1.50.0, it seems:

These changes all appear to be breaking, and none of them are documented either :(

@Vad1mo
Member

Vad1mo commented Nov 18, 2024

@MaSpeng, @finkinfridom, @medwingADMIN thanks for bringing that up!

I am not thrilled about fully switching from the config file to env vars. The main reason is that this will have quite an impact on existing chart users, and we would have to bump the chart's major version.
We would also need to change how we approach configs and docs in that chart.

Maybe @ivov from n8n can shed some light on the strategy.

As a first step, I suggest removing executions from values.yaml.

@mayphilc

mayphilc commented Nov 18, 2024

https://truecharts.org/charts/stable/n8n/

supposedly working with 1.68.0

@finkinfridom
Author

Given that most of our needed configuration can be set as environment variables, and given that setting

config:
  executions: {}

didn't fix the issue (the default values were still there), the only working solution we found was to set

config:
  executions: null

in your values.yaml.

With the above change, the ConfigMap for the config no longer exists on the cluster and the app upgrades correctly through helm.
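For context, this matches Helm's value-merging rules: setting a key to null removes it (including the chart's defaults), while an empty map keeps the defaults. A minimal Python sketch of those semantics (merge_values is an illustrative name, not part of Helm or the chart):

```python
def merge_values(defaults, overrides):
    """Mimic Helm's coalesce behavior: null deletes a key, maps deep-merge."""
    result = dict(defaults)
    for key, value in overrides.items():
        if value is None:
            result.pop(key, None)  # executions: null -> key and its defaults removed
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_values(result[key], value)  # executions: {} -> defaults kept
        else:
            result[key] = value
    return result

chart_defaults = {"executions": {"pruneData": "true"}}
print(merge_values(chart_defaults, {"executions": {}}))    # defaults survive
print(merge_values(chart_defaults, {"executions": None}))  # key is gone
```

This is why `executions: {}` left the defaults in the rendered ConfigMap while `executions: null` removed the section entirely.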

@mayphilc

I know I'm a noob, but the friend who set up my k8s for me used the other chart I linked above. I'm on 1.68 now with no problems in queue mode with workers, webhook processors, and replicas.

@ivov

ivov commented Nov 19, 2024

> @MaSpeng, @finkinfridom, @medwingADMIN thanks for bringing that up!
>
> I am not thrilled about fully switching from the config file to env vars. The main reason is that this will have quite an impact on existing chart users, and we would have to bump the chart's major version. We would also need to change how we approach configs and docs in that chart.
>
> Maybe @ivov from n8n can shed some light on the strategy.
>
> As a first step, I suggest removing executions from values.yaml.

@MaSpeng Thanks for bringing this up.

Our config schema has grown too large and does not support dependency injection, so we are slowly moving towards smaller independent configs that support DI.

The env vars continue to work as before, but we did not realize that the internal structure of the config schema was being relied on externally. Sorry for the inconvenience, we'll rename those keys back. But please bear in mind that we will likely deprecate and later drop N8N_CONFIG_FILES in n8n v2, in favor of .env files and *_FILE env vars.

@Vad1mo
Member

Vad1mo commented Nov 20, 2024

Thank you all for the input and contributions.
The next step is to prepare this chart for the breaking change.
My main intent is to provide a clear and actionable path for users to upgrade the chart.

I find applications with multiple levels of configuration easier to operate and reason about, because one typically ends up with a config file for the general setup and env vars for environment-specific changes. This makes it easy to see what is a general config option and what is env-specific.

For the upcoming iteration of the chart, I propose maintaining the structured format and converting it to environment variables.

db:
  type: postgresdb
  postgresdb:
    database: n8n
    host: localhost
    port: 12345
...      

will become env variables

DB_TYPE=postgresdb
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_HOST=localhost
DB_POSTGRESDB_PORT=12345
...
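The conversion above can be sketched as a simple recursive flattening: join nested keys with "_" and upper-case the result. A hedged illustration of that rule (to_env_vars is a hypothetical helper, not chart code):

```python
def to_env_vars(config, prefix=""):
    """Flatten a nested config dict into n8n-style env var names."""
    env = {}
    for key, value in config.items():
        name = f"{prefix}_{key}".upper() if prefix else key.upper()
        if isinstance(value, dict):
            env.update(to_env_vars(value, name))  # recurse into nested sections
        else:
            env[name] = str(value)
    return env

config = {"db": {"type": "postgresdb",
                 "postgresdb": {"database": "n8n", "host": "localhost", "port": 12345}}}
for name, value in to_env_vars(config).items():
    print(f"{name}={value}")
# DB_TYPE=postgresdb
# DB_POSTGRESDB_DATABASE=n8n
# DB_POSTGRESDB_HOST=localhost
# DB_POSTGRESDB_PORT=12345
```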

There are also the *_FILE env vars; I am not sure if we should incorporate those too.

@MaSpeng

MaSpeng commented Nov 20, 2024

@Vad1mo I appreciate the changes you want to make.

Incorporating the *_FILE environment variables could be useful, for example when using a secret manager/vault that provides secrets through files; DB_POSTGRESDB_PASSWORD_FILE would be an example for this use case.

I think this would look like this in the end:

db:
  type: postgresdb
  postgresdb:
    database: n8n
    host: localhost
    port: 5432
    password_file: /var/secrets/postgres_secret
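The *_FILE convention can be illustrated with a small resolver: a plain env var wins, otherwise the value is read from the file that the _FILE variant points to. This is a sketch of the general pattern only (resolve_env is a hypothetical helper; n8n's actual lookup may differ):

```python
import os

def resolve_env(name):
    """Return the value of NAME, falling back to the file named by NAME_FILE."""
    if name in os.environ:
        return os.environ[name]
    path = os.environ.get(f"{name}_FILE")
    if path:
        # e.g. a secret mounted by a vault sidecar or the Secrets Store CSI driver
        with open(path) as fh:
            return fh.read().strip()
    return None
```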

I would also suggest providing a schema to ensure that a specific configuration value is only provided through either an environment variable or a file.

This might be a general improvement to ensure the configuration is sensible, so that no one tries to configure sqlite as the database when the type is already set to postgresdb.

By the way, this could also be a good time to remove deprecated configurations like mysqldb.

@Vad1mo
Member

Vad1mo commented Nov 20, 2024

> Incorporating the *_FILE environment variables could be useful, for example when using a secret manager/vault that provides secrets through files; DB_POSTGRESDB_PASSWORD_FILE would be an example for this use case.
>
> I think this would look like this in the end:
>
> db:
>   type: postgresdb
>   postgresdb:
>     database: n8n
>     host: localhost
>     port: 5432
>     password_file: /var/secrets/postgres_secret

I see, the password_file: /var/secrets/postgres_secret would be supported out of the box; I wonder how /var/secrets/postgres_secret gets mounted.
I guess, given the various solutions, this should be out of the scope of this chart.

> I would also suggest providing a schema to ensure that a specific configuration value is only provided through either an environment variable or a file.
>
> This might be a general improvement to ensure the configuration is sensible, so that no one tries to configure sqlite as the database when the type is already set to postgresdb.

Schema is a good keyword; I had that in mind too, especially since I recently stumbled across helm-cel. Do you know if there is a list of supported env vars for n8n? The closest I got was this.

However, I would rather not make the chart so strict that it prevents people from setting env vars that are not yet supported by this chart.

> By the way, this could also be a good time to remove deprecated configurations like mysqldb.

Agree

@MaSpeng

MaSpeng commented Nov 22, 2024

@Vad1mo

> I guess, given the various solutions, this should be out of the scope of this chart.

I agree with the out-of-scope part.

> Schema is a good keyword; I had that in mind too, especially since I recently stumbled across helm-cel. Do you know if there is a list of supported env vars for n8n? The closest I got was this.

Personally, I would use the documentation for this topic: https://docs.n8n.io/hosting/configuration/environment-variables/

> However, I would rather not make the chart so strict that it prevents people from setting env vars that are not yet supported by this chart.

I would also not limit the extraEnvs part; I was mostly thinking about configuration parameters like the database and so on.

@Vad1mo
Member

Vad1mo commented Dec 15, 2024

These are the n8n helm chart values for the next version.

I designed it so that you can now configure n8n, worker, and webhook individually.
Please look at the values file and let me know if it makes sense from the user's perspective.

@finkinfridom
@MaSpeng
@mayphilc
@ivov
@medwingADMIN

# README
# High level values structure, overview and explanation of the values.yaml file.
# 1. chart wide values, like the image repository, image tag, etc.
# 2. ingress (tested with nginx, but it likely works with others too)
# 3. n8n app configuration + kubernetes specific settings
# 4. worker related settings + kubernetes specific settings
# 5. webhook related settings + kubernetes specific settings
# 6. Redis related settings + kubernetes specific settings

##
##
## Common Kubernetes Config Settings for this entire n8n deployment
##
image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
imagePullSecrets: []

ingress:
  enabled: false
  annotations: {}
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

  # define a custom ingressClassName, like "traefik" or "nginx"
  className: ""



# n8n application related n8n configuration + Kubernetes specific settings
n8n:
  # The config: {} dictionary is converted to environment variables in the ConfigMap.
  # Example: the YAML entry db.postgresdb.host: localhost is transformed into DB_POSTGRESDB_HOST=localhost
  # See https://docs.n8n.io/hosting/configuration/environment-variables/ for all values.
  config:
    n8n:
      # If not specified, n8n creates a random encryption key for encrypting saved credentials and saves it in the ~/.n8n folder.
      # If you run a stateless n8n, you should provide an encryption key here.
      encryption_key:
    #  db:
    #    type: postgresdb
    #    postgresdb:
    #      host: 192.168.0.52

  # Dictionary for secrets; unlike config:, the values here end up in the Secret.
  # Example: the YAML entry db.postgresdb.password: my_secret is transformed into DB_POSTGRESDB_PASSWORD=bXlfc2VjcmV0 (base64-encoded)
  # See https://docs.n8n.io/hosting/configuration/environment-variables/
  secret:
  #    db:
  #      postgresdb:
  #        password: 'big secret'

  ##
  ## N8n Kubernetes specific settings
  ##
  persistence:
    ## If true, use a Persistent Volume Claim, If false, use emptyDir
    ##
    enabled: false
    # volume type; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing claim
    type: emptyDir
    ## Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    ## PVC annotations
    #
    # If you need this annotation, include it in your `values.yml` file and the pvc.yml template will add it.
    # This is not maintained in Helm v3 anymore.
    # https://github.com/8gears/n8n-helm-chart/issues/8
    #
    # annotations:
    #   helm.sh/resource-policy: keep
    ## Persistent Volume Access Mode
    ##
    accessModes:
      - ReadWriteOnce
    ## Persistent Volume size
    ##
    size: 1Gi
    ## Use an existing PVC
    ##
    # existingClaim:

  replicaCount: 1

  # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
  # If these options are not set, default values are 25%
  # deploymentStrategy:
  #  type: RollingUpdate
  #  maxSurge: "50%"
  #  maxUnavailable: "50%"

  deploymentStrategy:
    type: "Recreate"

  nameOverride: ""
  fullnameOverride: ""

  serviceAccount:
    # Specifies whether a service account should be created
    create: true
    # Annotations to add to the service account
    annotations: {}
    # The name of the service account to use.
    # If not set and create is true, a name is generated using the fullname template
    name: ""

  podAnnotations: {}

  podLabels: {}

  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000

  securityContext: {}
    # capabilities:
    #   drop:
    #   - ALL
    # readOnlyRootFilesystem: true
    #  runAsNonRoot: true
  #  runAsUser: 1000

  # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
  # your own docker image
  # see https://github.com/8gears/n8n-helm-chart/pull/30
  lifecycle:
    {}

  #  here's the sample configuration to add mysql-client to the container
  # lifecycle:
  #  postStart:
  #    exec:
  #      command: ["/bin/sh", "-c", "apk add mysql-client"]

  # here you can override a command for main container
  # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
  # run additional preparation steps (e.g., installing additional software)
  command: []

  # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
  # command:
  #  - tini
  #  - --
  #  - /bin/sh
  #  - -c
  #  - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n

  # here you can override the livenessProbe for the main container
  # it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like

  livenessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can override the readinessProbe for the main container
  # it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like

  readinessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can add init containers to the various deployments
  initContainers: []

  service:
    type: ClusterIP
    port: 80
    annotations: {}
  workerResources:
    {}
  webhookResources:
    {}
  resources:
    {}
    # We usually recommend not specifying default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
    #   cpu: 100m
  #   memory: 128Mi

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  nodeSelector: {}
  tolerations: []
  affinity: {}


## Worker related settings
worker:
  enabled: false
  count: 2
  concurrency: 2
  config: {}
  secret: {}

  ##
  ## Worker Kubernetes specific settings
  ##
  persistence:
    ## If true, use a Persistent Volume Claim, If false, use emptyDir
    ##
    enabled: false
    # volume type; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing claim
    type: emptyDir
    ## Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    ## PVC annotations
    #
    # If you need this annotation, include it in your `values.yml` file and the pvc.yml template will add it.
    # This is not maintained in Helm v3 anymore.
    # https://github.com/8gears/n8n-helm-chart/issues/8
    #
    # annotations:
    #   helm.sh/resource-policy: keep
    ## Persistent Volume Access Mode
    ##
    accessModes:
      - ReadWriteOnce
    ## Persistent Volume size
    ##
    size: 1Gi
    ## Use an existing PVC
    ##
    # existingClaim:

  replicaCount: 1

  # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
  # If these options are not set, default values are 25%
  # deploymentStrategy:
  #  type: RollingUpdate
  #  maxSurge: "50%"
  #  maxUnavailable: "50%"

  deploymentStrategy:
    type: "Recreate"

  nameOverride: ""
  fullnameOverride: ""

  serviceAccount:
    # Specifies whether a service account should be created
    create: true
    # Annotations to add to the service account
    annotations: {}
    # The name of the service account to use.
    # If not set and create is true, a name is generated using the fullname template
    name: ""

  podAnnotations: {}

  podLabels: {}

  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000

  securityContext: {}
    # capabilities:
    #   drop:
    #   - ALL
    # readOnlyRootFilesystem: true
  #  runAsNonRoot: true
  #  runAsUser: 1000

  # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
  # your own docker image
  # see https://github.com/8gears/n8n-helm-chart/pull/30
  lifecycle:
    {}

  #  here's the sample configuration to add mysql-client to the container
  # lifecycle:
  #  postStart:
  #    exec:
  #      command: ["/bin/sh", "-c", "apk add mysql-client"]

  # here you can override a command for main container
  # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
  # run additional preparation steps (e.g., installing additional software)
  command: []

  # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
  # command:
  #  - tini
  #  - --
  #  - /bin/sh
  #  - -c
  #  - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n

  # here you can override the livenessProbe for the main container
  # it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like

  livenessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can override the readinessProbe for the main container
  # it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like

  readinessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can add init containers to the various deployments
  initContainers: []

  service:
    type: ClusterIP
    port: 80
    annotations: {}



  workerResources:
    {}

  webhookResources:
    {}

  resources:
    {}
    # We usually recommend not specifying default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
  #   cpu: 100m
  #   memory: 128Mi

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80

  nodeSelector: {}

  tolerations: []

  affinity: {}

## Webhook related settings
# With webhooks.enabled=true, webhook processing is disabled in the main process and handled by dedicated webhook instances instead.
# See https://github.com/8gears/n8n-helm-chart/issues/39#issuecomment-1579991754 for the full explanation.
webhooks:
  enabled: false
  count: 1
  config: {}
  secret: {}
  ##
  ## Webhook Kubernetes specific settings
  ##
  persistence:
    ## If true, use a Persistent Volume Claim, If false, use emptyDir
    ##
    enabled: false
    # volume type; possible options are [existing, emptyDir, dynamic]: dynamic for Dynamic Volume Provisioning, existing for using an existing claim
    type: emptyDir
    ## Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    ## PVC annotations
    #
    # If you need this annotation, include it in your `values.yml` file and the pvc.yml template will add it.
    # This is not maintained in Helm v3 anymore.
    # https://github.com/8gears/n8n-helm-chart/issues/8
    #
    # annotations:
    #   helm.sh/resource-policy: keep
    ## Persistent Volume Access Mode
    ##
    accessModes:
      - ReadWriteOnce
    ## Persistent Volume size
    ##
    size: 1Gi
    ## Use an existing PVC
    ##
    # existingClaim:

  replicaCount: 1

  # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable
  # If these options are not set, default values are 25%
  # deploymentStrategy:
  #  type: RollingUpdate
  #  maxSurge: "50%"
  #  maxUnavailable: "50%"

  deploymentStrategy:
    type: "Recreate"

  nameOverride: ""
  fullnameOverride: ""

  serviceAccount:
    # Specifies whether a service account should be created
    create: true
    # Annotations to add to the service account
    annotations: {}
    # The name of the service account to use.
    # If not set and create is true, a name is generated using the fullname template
    name: ""

  podAnnotations: {}

  podLabels: {}

  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000

  securityContext: {}
    # capabilities:
    #   drop:
    #   - ALL
    # readOnlyRootFilesystem: true
  #  runAsNonRoot: true
  #  runAsUser: 1000

  # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building
  # your own docker image
  # see https://github.com/8gears/n8n-helm-chart/pull/30
  lifecycle:
    {}

  #  here's the sample configuration to add mysql-client to the container
  # lifecycle:
  #  postStart:
  #    exec:
  #      command: ["/bin/sh", "-c", "apk add mysql-client"]

  # here you can override a command for main container
  # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or
  # run additional preparation steps (e.g., installing additional software)
  command: []

  # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful):
  # command:
  #  - tini
  #  - --
  #  - /bin/sh
  #  - -c
  #  - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n

  # here you can override the livenessProbe for the main container
  # it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like

  livenessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can override the readinessProbe for the main container
  # it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like

  readinessProbe:
    httpGet:
      path: /healthz
      port: http
    # initialDelaySeconds: 30
    # periodSeconds: 10
    # timeoutSeconds: 5
    # failureThreshold: 6
    # successThreshold: 1

  # here you can add init containers to the various deployments
  initContainers: []

  service:
    type: ClusterIP
    port: 80
    annotations: {}



  workerResources:
    {}

  webhookResources:
    {}

  resources:
    {}
    # We usually recommend not specifying default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
  #   cpu: 100m
  #   memory: 128Mi

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80

  nodeSelector: {}

  tolerations: []

  affinity: {}



## Bitnami Valkey configuration
## https://artifacthub.io/packages/helm/bitnami/valkey
redis:
  enabled: false
  architecture: standalone

  master:
    persistence:
      enabled: true
      existingClaim: ""
      size: 2Gi

@mayphilc

mayphilc commented Dec 15, 2024

@Vad1mo

I can't think of anything else it would need.

Execution mode?

@MaSpeng

MaSpeng commented Dec 16, 2024

@Vad1mo looks good to me 👍. The execution mode is a good topic; in the past, this was conditionally set based on .Values.scaling.enabled. How do you plan to do this in the future?

@finkinfridom
Author

@Vad1mo looks good to me too. Thanks a lot!

@Vad1mo
Member

Vad1mo commented Dec 17, 2024

> @Vad1mo looks good to me 👍. The execution mode is a good topic; in the past, this was conditionally set based on .Values.scaling.enabled. How do you plan to do this in the future?

Now you do it by setting enabled: true, like:

webhooks:
  enabled: false
  config: {}
  secret: {}
  ...
worker:
  enabled: false
  count: 2
  concurrency: 2
  config: {}
  secret: {}

@mayphilc

> @Vad1mo looks good to me 👍. The execution mode is a good topic; in the past, this was conditionally set based on .Values.scaling.enabled. How do you plan to do this in the future?
>
> Now you do it by setting enabled: true, like:
>
> webhooks:
>   enabled: false
>   config: {}
>   secret: {}
>   ...
> worker:
>   enabled: false
>   count: 2
>   concurrency: 2
>   config: {}
>   secret: {}

What about enabling queue mode?

@mayphilc

mayphilc commented Dec 18, 2024

Is count/concurrency equivalent to replicas?

@MaSpeng

MaSpeng commented Dec 18, 2024

> @Vad1mo looks good to me 👍. The execution mode is a good topic; in the past, this was conditionally set based on .Values.scaling.enabled. How do you plan to do this in the future?
>
> Now you do it by setting enabled: true, like:
>
> webhooks:
>   enabled: false
>   config: {}
>   secret: {}
>   ...
> worker:
>   enabled: false
>   count: 2
>   concurrency: 2
>   config: {}
>   secret: {}
>
> What about enabling queue mode?

@mayphilc That's what was discussed in this topic: the "queue" mode is actually the EXECUTIONS_MODE, so when the webhook/worker options are enabled, I expect the chart to set the execution mode to "queue". At least, that's my expectation for now :).

@Vad1mo
Member

Vad1mo commented Dec 18, 2024

> webhook/worker

We haven't used webhook/worker so far, and I didn't look too deeply into the underlying code. I am looking at it purely from a consumer perspective.

I now see that I need to take Redis into account when enabling queue mode, so they will likely influence each other.

I have other questions regarding webhook/worker.
In the future, webhook/worker will have their own dedicated config.

  1. Do webhook/worker always have the same config as main?
  2. Should webhook/worker have a config that differs from main's?
  3. Should webhook/worker be able to extend the config of main?

@MaSpeng

MaSpeng commented Dec 18, 2024

@Vad1mo From the environment variable configuration in the n8n documentation, it looks like you can at least set every environment variable on every service. Whether it has any effect is decided at runtime and does not seem to be documented.

As a consequence, we normally share the same environment variables across all services (main, webhook, worker), which is especially important if you allow additional built-in or external modules for code nodes.
You could enable them for main, and while you are developing and testing a workflow everything would be fine; but if you forgot to do the same on the worker service, the workflow would fail when executed regularly through a webhook or a schedule, for example.

I think in the end you should at least be able to overwrite or nullify a particular setting for a specific service, so that the consumer of this helm chart has the freedom to configure their specific needs, but this could also be a case of YAGNI ^^.
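The overwrite-or-nullify idea could be sketched as one shared env map layered with per-service overrides, where a null value drops the setting for that service only (service_config is an illustrative name; the chart does not implement this today):

```python
def service_config(shared, overrides):
    """Merge per-service overrides over shared env vars; None removes a var."""
    merged = {**shared, **overrides}
    return {name: value for name, value in merged.items() if value is not None}

shared = {"NODE_FUNCTION_ALLOW_EXTERNAL": "lodash", "GENERIC_TIMEZONE": "UTC"}
worker = service_config(shared, {})                           # inherits everything
webhook = service_config(shared, {"GENERIC_TIMEZONE": None})  # setting nullified here
```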

@Vad1mo
Member

Vad1mo commented Dec 18, 2024

> I think in the end you should at least be able to overwrite or nullify a particular setting for a specific service, so that the consumer of this helm chart has the freedom to configure their specific needs, but this could also be a case of YAGNI ^^.

Thanks for your view. I was thinking the same; glad to have my view confirmed.

6 participants