The newly introduced Resource Interpreter Webhook framework allows users to implement their own CRD plugins, which are consulted at each phase of the propagation process. With this feature, CRDs and CRs are propagated just like Kubernetes native resources, which means all scheduling primitives also support custom resources. An example as well as some helpful utilities are provided to help users better understand how this framework works.
Refer to the Proposal for more details.
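As a rough illustration, a custom interpreter is registered with the control plane through a `ResourceInterpreterWebhookConfiguration`. The sketch below follows the example shipped in the project repository; the webhook name, custom resource group/kind, URL, and CA bundle are placeholders, and operation names may differ between releases:

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterWebhookConfiguration
metadata:
  name: examples
webhooks:
  - name: workloads.example.com
    rules:
      # Interpreter operations this webhook handles for the custom resource.
      - operations: ["InterpretReplica", "ReviseReplica", "Retain", "AggregateStatus"]
        apiGroups: ["workload.example.io"]
        apiVersions: ["v1alpha1"]
        kinds: ["Workload"]
    clientConfig:
      # Placeholder endpoint and CA bundle for the webhook server.
      url: https://interpreter-webhook.example.com:443/interpreter-workload
      caBundle: <base64-encoded-ca-cert>
    interpreterContextVersions: ["v1alpha1"]
    timeoutSeconds: 3
```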
- Introduced the `dynamicWeight` primitive to `PropagationPolicy` and `ClusterPropagationPolicy`. With this feature, replicas can be divided according to a dynamic weight list, where the weight of each cluster is calculated from its available replicas during scheduling. This can significantly balance utilization across clusters. #841 (See the first policy sketch below.)
- Introduced `Job` schedule (divide) support. A `Job` that desires many replicas can now be divided across multiple clusters, just like a `Deployment`. This makes it possible to run huge Jobs across small clusters. #898 (See the second policy sketch below.)
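A minimal sketch of the dynamic-weight division described above. The policy name, cluster names, and target Deployment are assumptions for illustration; verify the placement field names against your release:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation      # hypothetical policy name
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx              # hypothetical workload
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        # Weights are computed from each cluster's available replicas at scheduling time.
        dynamicWeight: AvailableReplicas
```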
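A second sketch, dividing a `Job` across clusters, here with static weights (an assumed 1:2 split between two assumed clusters); the Job's desired replicas are then split among the selected clusters accordingly:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: huge-job-propagation   # hypothetical policy name
spec:
  resourceSelectors:
    - apiVersion: batch/v1
      kind: Job
      name: huge-job           # hypothetical Job
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        # Static 1:2 split between the two clusters.
        staticWeightList:
          - targetCluster:
              clusterNames: [member1]
            weight: 1
          - targetCluster:
              clusterNames: [member2]
            weight: 2
```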
After workloads (e.g. Deployments) are propagated to member clusters, users may also want to get the overall workload status across those clusters, especially the status of each pod. In this release, a `get` subcommand was introduced to `kubectl-karmada`. With this command, users are now able to get all kinds of resources deployed in member clusters from the Karmada control plane.

For example (get `deployment` and `pods` across clusters):
```
$ kubectl karmada get deployment
NAME    CLUSTER   READY   UP-TO-DATE   AVAILABLE   AGE   ADOPTION
nginx   member2   1/1     1            1           19m   Y
nginx   member1   1/1     1            1           19m   Y

$ kubectl karmada get pods
NAME                     CLUSTER   READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-vzdvt   member1   1/1     Running   0          31m
nginx-6799fc88d8-l55kk   member2   1/1     Running   0          31m
```
- karmada-scheduler-estimator: The number of pods is now an important reference when calculating available replicas for a cluster. #777
- The labels (`resourcebinding.karmada.io/namespace`, `resourcebinding.karmada.io/name`, `clusterresourcebinding.karmada.io/name`) which were previously added to the Work object have now been moved to annotations. #752 (An illustrative excerpt follows this list.)
- Bugfix: Fixed the impact of cluster unjoining on resource status aggregation. #817
- Instrumentation: Introduced events (`SyncFailed` and `SyncSucceed`) to the Work object. #800
- Instrumentation: Introduced condition (`Scheduled`) to the `ResourceBinding` and `ClusterResourceBinding`. #823
- Instrumentation: Introduced events (`CreateExecutionNamespaceFailed` and `RemoveExecutionNamespaceFailed`) to the Cluster object. #749
- Instrumentation: Introduced several metrics (`workqueue_adds_total`, `workqueue_depth`, `workqueue_longest_running_processor_seconds`, `workqueue_queue_duration_seconds_bucket`) for `karmada-agent` and `karmada-controller-manager`. #831
- Instrumentation: Introduced condition (`FullyApplied`) to the `ResourceBinding` and `ClusterResourceBinding`. #825
- karmada-scheduler: Introduced feature gates. #805
- karmada-controller-manager: Resources are now deleted from member clusters using "Background" as the default delete option. #970
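For the labels-to-annotations change above, the binding reference on a Work object now lives under `metadata.annotations`. A sketch of how this might look; the object name, namespace, and annotation values are illustrative assumptions:

```yaml
apiVersion: work.karmada.io/v1alpha1
kind: Work
metadata:
  name: nginx-687f7fb96f          # illustrative name
  namespace: karmada-es-member1   # illustrative execution namespace
  annotations:
    # Previously recorded as labels; now carried as annotations.
    resourcebinding.karmada.io/namespace: default
    resourcebinding.karmada.io/name: nginx
```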