In what area(s)?
Other classifications:
What version of Knative?
v1.16
Expected Behavior
Knative v1.16 changed behavior when it comes to environment variables defined in a KService. The change was not documented.
The origin of the change is the bump of the Kubernetes client version from v0.29 to v0.30 and the regeneration of the custom resource definition YAMLs. This caused the addition of these three lines for the env field: https://github.com/knative/serving/blob/v0.43.0/config/core/300-resources/service.yaml#L282-L284
With those definitions, Kubernetes blocks Services, Configurations, and Revisions from having duplicate environment variables.
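(The lines in question are the list-type markers, x-kubernetes-list-map-keys: name and x-kubernetes-list-type: map, which declare env as a map keyed by name; it is this map semantics that makes the API server reject entries with duplicate names.)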
One impact, for example, is that an existing KService can have duplicate environment variables. One can still update it as long as one does not touch env (for example, by just updating the image). Knative then still successfully updates the Configuration, but fails to create the new Revision.
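For illustration, such a KService might look like the following (a hypothetical manifest, here called dup-env-ksvc.yaml; the name and sample image are placeholders):
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: dup-env-demo
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "first"
        - name: TARGET   # duplicate name: accepted before the CRD change, rejected afterwards
          value: "second"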
Actual Behavior
Having duplicate environment variables makes no sense. Period. So, I generally like that validation. But, I think the transition into that validation should be smoother.
One should document this change in the release notes, I think. What might also help users is a way to determine whether they have that problem in their cluster. A query along the following lines lists the affected KServices:
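A sketch of such a query, assuming kubectl (with the Knative ksvc shortname) and jq are available:
# list KServices whose containers declare the same env name more than once
kubectl get ksvc --all-namespaces -o json | jq -r '
  .items[]
  | . as $ksvc
  | .spec.template.spec.containers[]?
  | (.env // [])
  | group_by(.name)[]
  | select(length > 1)
  | "\($ksvc.metadata.namespace)/\($ksvc.metadata.name): duplicate env \(.[0].name)"
'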
We happen to already have a mutating webhook in place, to which we added the following logic to mutate away that problem in KService operations. Not sure whether Knative would want similar logic:
// RemoveDuplicateEnvironmentVariables removes duplicate entries from container.Env
// and uses the last value in case of such duplicates
func (ksvcm *KServiceMutator) RemoveDuplicateEnvironmentVariables(container *corev1.Container) {
	allEnvVars := map[string]int{}

	j := 0 // output index
	for i := range container.Env {
		name := container.Env[i].Name
		if _, found := allEnvVars[name]; found {
			ksvcm.Logger.Warn(fmt.Sprintf("Removing duplicate environment variable %q. Value is different: %v", name, !reflect.DeepEqual(container.Env[i], container.Env[allEnvVars[name]])))

			// duplicate env var: overwrite the first env var with the same name with this one
			container.Env[allEnvVars[name]] = container.Env[i]
		} else {
			// new env var
			allEnvVars[name] = j
			container.Env[j] = container.Env[i]
			j++
		}
	}

	container.Env = container.Env[:j]
}
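Note that the function keeps the list position of the first occurrence but the value of the last duplicate, and compacts container.Env in place without allocating a new slice.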
Steps to Reproduce the Problem
1. If you are already on v1.16, you could remove the three lines from the Configuration, Revision, and Service custom resource definitions.
2. Create a KService that has multiple environment variable entries with the same name.
3. Install v1.16 (or add back the lines you removed in (1)).
4. Try to update the KService (or create the same one again); see the sketch after this list.
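A sketch of steps 2-4 with kubectl, assuming the hypothetical dup-env-ksvc.yaml manifest from above:
# step 2: accepted while the three CRD lines are absent
kubectl apply -f dup-env-ksvc.yaml
# step 3: install Knative Serving v1.16 (or restore the removed CRD lines)
# step 4: the very same apply (or any update that includes the env list) should now be
#         rejected with a duplicate-entry validation error
kubectl apply -f dup-env-ksvc.yaml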
Hi @SaschaSchwarze0, fwiw, K8s only emits a warning for duplicate env vars.
It does not block pod creation (it does show, though, that the pod is not valid, e.g. when you try to edit it).
In addition, with K8s you cannot update the env vars.
Knative does not allow creating the pod in 1.16+.
I agree that since we are changing the semantics we should have a mechanism to make the transition smoother.
I think we should start by adding it to the release notes.