
Global scale-down-delay setting ignored #15619

Open
harrymilne opened this issue Nov 15, 2024 · 1 comment
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@harrymilne

What version of Knative?

0.14.x
0.16.x

Expected Behavior

A delay before Knative Pods are scaled in

Actual Behavior

Behaviour does not change before or after setting scale-down-delay in the config-autoscaler ConfigMap via the Knative Operator; Pods are still scaled in without delay

Steps to Reproduce the Problem

  • Create a scale-to-zero Knative Service (a minimal example is sketched below this list)
  • Set the global scale-down-delay to 1h (or any other value)
  • Run enough requests to trigger Revision scale-out
  • Observe that Pods only stay around as long as the traffic exists
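For reference, a minimal scale-to-zero Service along these lines (the name and image are placeholders, not the exact manifest used):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: autoscale-test                            # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"    # scale to zero when idle (the default)
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest   # any HTTP sample image works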
@harrymilne harrymilne added the kind/bug label Nov 15, 2024
@skonto
Contributor

skonto commented Nov 18, 2024

Hi @harrymilne,

I tried to reproduce this, both when installing with the Serving YAMLs and with the operator, but could not reproduce it.
With the operator, you can try the following Serving CR:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  ingress:
    kourier:
      enabled: true
  config:
    logging:
      loglevel.autoscaler: "debug"
    network:
      ingress-class: "kourier.ingress.networking.knative.dev"
    autoscaler:
      scale-down-delay: "15m"

If you set loglevel.autoscaler: "debug" in the config-logging ConfigMap (the config.logging section of the CR above), you should then see in the autoscaler's log:

kubectl logs autoscaler-6c7bf97997-tdlxd -n knative-serving | grep Delaying
...
{"severity":"DEBUG","timestamp":"2024-11-18T11:39:31.899586353Z","logger":"autoscaler","caller":"scaling/autoscaler.go:261","message":"Delaying scale to 0, staying at 1","commit":"6a27004","knative.dev/key":"default/autoscale-go-00001"}

Also, when you update the ConfigMap via the Serving CR, you should see the following in the autoscaler pod logs:

{"severity":"DEBUG","timestamp":"2024-11-18T13:08:38.839831791Z","logger":"autoscaler.config-store","caller":"configmap/store.go:155","message":"autoscaler config \"config-autoscaler\" config was added or updated: &autoscalerconfig.Config{EnableScaleToZero:true, ContainerConcurrencyTargetFraction:0.7, ContainerConcurrencyTargetDefault:100, TargetUtilization:0.7, RPSTargetDefault:200, TargetBurstCapacity:211, ActivatorCapacity:100, AllowZeroInitialScale:false, InitialScale:1, MinScale:0, MaxScale:0, MaxScaleLimit:0, MaxScaleUpRate:1000, MaxScaleDownRate:2, StableWindow:60000000000, PanicWindowPercentage:10, PanicThresholdPercentage:200, ScaleToZeroGracePeriod:30000000000, ScaleToZeroPodRetentionPeriod:0, ScaleDownDelay:900000000000, PodAutoscalerClass:\"kpa.autoscaling.knative.dev\"}","commit":"6a27004"}

With a scale-down-delay of 15m the pod did scale down after ~15 min:

kubectl get po 
NAME                                             READY   STATUS    RESTARTS   AGE
autoscale-go-00001-deployment-6f564fbb6c-r4nb9   2/2     Running   0          16m
kubectl get po 
NAME                                             READY   STATUS        RESTARTS   AGE
autoscale-go-00001-deployment-6f564fbb6c-r4nb9   2/2     Terminating   0          16m

I tested with version 1.16.
