[BUG] v0.24.3+ cannot reference extra files for metadata server auth #1083
FWIW, here is the Terraform I'm using for the storage class etc.:

```hcl
resource "kubernetes_secret" "jfs-secret-storage-one" {
  metadata {
    name      = "jfs-secret-storage-one"
    namespace = "default"
    labels = {
      "juicefs.com/validate-secret" : "true"
    }
  }
  type = "Opaque"
  data = {
    "configs"    : "{${kubernetes_secret.jfs-secret-storage-one-tls.metadata[0].name}: /tls}"
    "name"       : "storage-one"
    "metaurl"    : "rediss://default:*****@10.250.5.130:6380/1?tls-cert-file=/tls/client.crt&tls-key-file=/tls/client.key&tls-ca-cert-file=/tls/ca.crt"
    "storage"    : "sftp"
    "bucket"     : "10.250.5.130:juicefs/"
    "access-key" : "juicefs"
    "secret-key" : "*******"
  }
}

resource "kubernetes_secret" "jfs-secret-storage-one-tls" {
  metadata {
    name      = "jfs-secret-storage-one-tls"
    namespace = "kube-system"
  }
  type = "Opaque"
  binary_data = {
    "ca.crt"     : filebase64("${path.module}/../storage/slashkeydb/tls/ca.crt")
    "client.key" : filebase64("${path.module}/../storage/slashkeydb/tls/client.key")
    "client.crt" : filebase64("${path.module}/../storage/slashkeydb/tls/client.crt")
  }
}

resource "kubernetes_storage_class" "jfs-storage-one" {
  metadata {
    name = "jfs-storage-one"
  }
  storage_provisioner = "csi.juicefs.com"
  reclaim_policy      = "Retain"
  parameters = {
    "csi.storage.k8s.io/provisioner-secret-name"       : kubernetes_secret.jfs-secret-storage-one.metadata[0].name
    "csi.storage.k8s.io/provisioner-secret-namespace"  : kubernetes_secret.jfs-secret-storage-one.metadata[0].namespace
    "csi.storage.k8s.io/node-publish-secret-name"      : kubernetes_secret.jfs-secret-storage-one.metadata[0].name
    "csi.storage.k8s.io/node-publish-secret-namespace" : kubernetes_secret.jfs-secret-storage-one.metadata[0].namespace
    "pathPattern" : "$${.pvc.namespace}.$${.pvc.name}"
  }
}
```
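For readers not using Terraform, the same volume-credentials Secret can be sketched in plain YAML. The `configs` key maps an existing Secret name to the path where its contents should appear inside the mount pod, which is how the metaurl's `/tls/...` file references get resolved. This is a sketch derived from the Terraform above; the `<password>` and `<secret-key>` placeholders stand in for the redacted credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: jfs-secret-storage-one
  namespace: default
  labels:
    juicefs.com/validate-secret: "true"
type: Opaque
stringData:
  # Mount the keys of the jfs-secret-storage-one-tls Secret at /tls
  # inside the mount pod (JuiceFS "mount pod extra files" mechanism).
  configs: "{jfs-secret-storage-one-tls: /tls}"
  name: storage-one
  metaurl: "rediss://default:<password>@10.250.5.130:6380/1?tls-cert-file=/tls/client.crt&tls-key-file=/tls/client.key&tls-ca-cert-file=/tls/ca.crt"
  storage: sftp
  bucket: "10.250.5.130:juicefs/"
  access-key: juicefs
  secret-key: <secret-key>
```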
I could not reproduce this on v0.24.5. Can you confirm whether the mount pod contains the secret volume?
Hi @zxh326, sorry about the delay. It seems like everything is attached as it's supposed to be. Here are some Lens screenshots post-upgrade. Curiously, it seems like the error originates in the `csi-node` container.
csi-node itself will not mount the secret. We will take a look at how to solve the problem, thx. For now, you can edit the DaemonSet to mount the secret in the csi-node as a fix.
Okay, great! I can confirm that editing the DaemonSet to mount the secret has worked for now, but I'm hoping to see a fix in an upcoming release. Thank you!
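In case it helps others hitting this, the DaemonSet workaround can be sketched as a strategic merge patch. The DaemonSet name (`juicefs-csi-node`), container name (`juicefs-plugin`), and volume name below are assumptions based on a default install; verify them against your own cluster before applying:

```yaml
# csi-node-tls-patch.yaml -- hypothetical names; adjust to your deployment.
spec:
  template:
    spec:
      containers:
        - name: juicefs-plugin   # assumed plugin container name in the csi-node pod
          volumeMounts:
            - name: jfs-tls
              mountPath: /tls    # must match the paths in the metaurl
      volumes:
        - name: jfs-tls
          secret:
            secretName: jfs-secret-storage-one-tls
```

Applied with something like `kubectl -n kube-system patch daemonset juicefs-csi-node --patch-file csi-node-tls-patch.yaml`. Note this is a manual override, so a Helm upgrade or chart-managed change to the DaemonSet may revert it.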
What happened:
After upgrading to v0.24.5 and restarting my pods, they all failed to start with the following error:
I had followed https://juicefs.com/docs/csi/guide/pv/#mount-pod-extra-files to include TLS certs in the mount containers for auth with a Redis metadata server, and this worked fine on v0.24.2, which I was on previously. Downgrading to v0.24.3 did not fix the issue, but downgrading fully back to v0.24.2 did.
What you expected to happen:
Pods should continue to be able to mount volumes after upgrading.
How to reproduce it (as minimally and precisely as possible):
Set up a storage class pointing at a Redis metadata server with TLS. Use the aforementioned guide to mount the TLS certs in the mount container. Notice that this works on v0.24.2 but not on later versions.
Anything else we need to know?
Environment:
- JuiceFS CSI Driver version: v0.24.2 -> v0.24.5
- Kubernetes version (use `kubectl version`): v1.31.0-rc.1