User Management
Scaleway provides token-based authentication for the username kubernetes-admin, which is hardcoded into the system:masters group (likely using --token-auth-file on the apiserver).
It's possible to reset the admin token using the Scaleway UI/API.
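A hedged sketch of doing this from the command line, assuming the scw CLI (v2) and its k8s sub-commands; double-check the exact command names against scw k8s cluster --help:
# assumed sub-commands of the scw CLI; verify before use
scw k8s cluster reset-admin-token <cluster-id> region=fr-par
# re-download the admin kubeconfig afterwards
scw k8s kubeconfig get <cluster-id> region=fr-par > admin.config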
- nobody should use the master token.
- authentication is done using certificates (see how to create below)
- no Role or ClusterRole binding to groups; groups are solely used as labels
Note: once a certificate is issued to a User, it cannot be revoked, so its holder will be able to authenticate to the Cluster for as long as the certificate has not expired. Set expiration accordingly. It's easy to recreate a certificate, so don't go beyond a year (31536000s).
Note: you'll need the cluster's CA certificate to prepare the kubeconfig. This only needs to be done once:
# download the cluster's CA cert
kubectl get configmap kube-root-ca.crt -o json |jq -r '.data."ca.crt"' > ca.crt
# fill-out
CLUSTERURL=https://some-id.api.k8s.fr-par.scw.cloud:6443
USERNAME=some-username
GROUPNAME=group-a
NAMESPACE=some-namespace   # used for the kubeconfig context and bindings below
CERTIFICATE_NAME=user-$USERNAME
openssl genrsa -out ${USERNAME}.key 2048
CSR_FILE=$USERNAME.csr
KEY_FILE=$USERNAME.key
CRT_FILE=$USERNAME.crt
openssl req -new -key $KEY_FILE -out $CSR_FILE -subj "/CN=$USERNAME/O=$GROUPNAME"
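# sanity check: confirm the CSR carries the expected username (CN) and group (O)
openssl req -in $CSR_FILE -noout -subject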
cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: $CERTIFICATE_NAME
spec:
  groups:
  - system:authenticated
  request: $(cat $CSR_FILE | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  # one year expiration
  expirationSeconds: 31536000
  usages:
  - client auth
EOF
kubectl certificate approve $CERTIFICATE_NAME
kubectl get csr $CERTIFICATE_NAME -o jsonpath='{.status.certificate}' | base64 -d > $CRT_FILE
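# optional: verify the issued certificate's subject and expiration date
openssl x509 -in $CRT_FILE -noout -subject -enddate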
CONFIGNAME=kiwix-$USERNAME.config
kubectl config --kubeconfig $CONFIGNAME set-credentials $USERNAME --client-key=$KEY_FILE --client-certificate=$CRT_FILE --embed-certs=true
kubectl config --kubeconfig $CONFIGNAME set-cluster kiwix --embed-certs --certificate-authority=ca.crt --server=$CLUSTERURL
kubectl config --kubeconfig $CONFIGNAME set-context $USERNAME@kiwix --cluster=kiwix --user=$USERNAME --namespace=$NAMESPACE
kubectl config --kubeconfig $CONFIGNAME use-context $USERNAME@kiwix
Now securely transmit it to the user.
The User will now be able to authenticate with the Cluster but has no authorization yet.
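This state can be confirmed with the freshly created kubeconfig: listing the user's own permissions works, but any resource request is denied until a binding exists.
kubectl --kubeconfig $CONFIGNAME auth can-i --list -n $NAMESPACE
kubectl --kubeconfig $CONFIGNAME get pods -n $NAMESPACE   # expected: Forbidden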
Manually bind the User to Roles on namespaces and to ClusterRoles, either on namespaces or Cluster-wide.
- Use per-user bindings with user- prefixed names to ease removal (very important)
- Use user-listed shared bindings for known, stable groups (coreteam)
- Provide the least required number of permissions and roles
Sample binding (use whichever Role/ClusterRole is appropriate for the User):
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-$USERNAME-view-binding
  namespace: $NAMESPACE
subjects:
- kind: User
  name: $USERNAME
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
EOF
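The effect of a binding can be checked from the admin context using impersonation, without touching the user's kubeconfig:
kubectl auth can-i get pods --as=$USERNAME -n $NAMESPACE      # yes
kubectl auth can-i delete pods --as=$USERNAME -n $NAMESPACE   # no (view is read-only)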
Once exported as a kubeconfig file, it can be used directly with kubectl:
export KUBECONFIG=./username.config
# or pass it explicitly on each invocation
kubectl --kubeconfig ./username.config get pods
Editing a User's access is done by adding or removing bindings. Should a Role or ClusterRole satisfying the requirements not be present, it should be created first and then a binding added.
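As an illustration, a minimal sketch of such a custom namespaced Role; the pod-log-reader name and rules are placeholders, not an existing Role in the cluster:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-log-reader
  namespace: $NAMESPACE
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
EOF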
Special groups of known, limited users use combined bindings which contain multiple User subjects. Should a user need to be added to or removed from one of those groups, the group's bindings are updated to include/remove the User's subject.
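A sketch of such a shared binding; the coreteam-view-binding name and the listed usernames are placeholders:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: coreteam-view-binding
  namespace: $NAMESPACE
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
EOF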
Kubernetes has no notion of User objects, so it's not possible to delete one. What should be done instead is to remove all authorizations associated with the User (its username string).
There is currently no way to revoke a certificate in k8s, so the User will still be able to authenticate until the certificate expires.
for binding in $(kubectl get clusterrolebindings -o name |grep user-$USERNAME-); do kubectl delete $binding; done
for binding in $(kubectl get rolebindings -o name |grep user-$USERNAME-); do kubectl delete $binding; done
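Bindings that don't follow the user- naming convention are not caught by the loops above; they can be found by inspecting subjects directly (requires jq):
kubectl get rolebindings -A -o json | jq -r --arg u "$USERNAME" \
  '.items[] | select(any(.subjects[]?; .kind == "User" and .name == $u)) | "\(.metadata.namespace)/\(.metadata.name)"'
kubectl get clusterrolebindings -o json | jq -r --arg u "$USERNAME" \
  '.items[] | select(any(.subjects[]?; .kind == "User" and .name == $u)) | .metadata.name'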
For services using Continuous Deployment, the recommended process is to trigger a new deployment rollout once the new image has been pushed to the registry. This assumes that the service references a tagged image (usually :latest in those cases) with imagePullPolicy: Always.
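The relevant part of such a Deployment spec looks like the following (image name is a placeholder):
spec:
  template:
    spec:
      containers:
      - name: <container-name>
        image: ghcr.io/<org>/<image>:latest
        imagePullPolicy: Always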
The following step can be added after the docker-publish-action one:
- name: Restart live service
  uses: actions-hub/kubectl@master
  env:
    KUBE_CONFIG: ${{ secrets.KUBE_CONFIG }}
  with:
    args: rollout restart deployments <deployment-name> -n <namespace>
This obviously requires a KUBE_CONFIG secret.
For isolation/security, we add those secrets per-repository (i.e. not organization-wide) and bind them to a namespace.
Credentials creation: run on macOS (with jq + pbcopy + kubectl):
YEAR=$(date +"%Y")
NAMESPACE=cms
# create user
./cluster-mgmt/create-user.sh $NAMESPACE-gh-bot $NAMESPACE gh-bots
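# NOTE: the github-actions-hook ClusterRole is assumed to already exist in the cluster.
# Purely illustrative sketch of the permissions it needs (patch on deployments for
# `rollout restart`, plus read access on pods for the test below); the real definition
# is kept in this repository.
cat <<EOF > github-actions-hook.clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: github-actions-hook
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF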
# create a clusterrolebinding recipe: save it to repository in the namespace's folder
cat <<EOF > $NAMESPACE/$NAMESPACE-github-actions-hook.clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: $NAMESPACE-github-actions-hook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: github-actions-hook
subjects:
- kind: User
  name: $NAMESPACE-gh-bot
  apiGroup: rbac.authorization.k8s.io
EOF
# apply the clusterrolebinding
kubectl apply -f $NAMESPACE/$NAMESPACE-github-actions-hook.clusterrolebinding.yaml
# test that it's working OK
kubectl --kubeconfig ./cluster-mgmt/users/$NAMESPACE-gh-bot-${YEAR}_kiwix-prod.config get pods -o wide
# copy the credential and add it to the repo's secret as KUBE_CONFIG
cat ./cluster-mgmt/users/$NAMESPACE-gh-bot-${YEAR}_kiwix-prod.config |base64 |pbcopy
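Alternatively, the secret can be pushed without the clipboard using the GitHub CLI (gh must be authenticated with access to the repository; the repository slug is a placeholder):
cat ./cluster-mgmt/users/$NAMESPACE-gh-bot-${YEAR}_kiwix-prod.config | base64 | gh secret set KUBE_CONFIG --repo kiwix/<repository>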