I used the following command to set up autoscaling:
kubectl autoscale deployment catch-node --cpu-percent=50 --min=1 --max=10
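For reference, this is roughly the HorizontalPodAutoscaler object that the command above creates — a sketch written against the autoscaling/v2 API, with the field values mirroring the `--cpu-percent`, `--min`, and `--max` flags (older clusters expose the same settings through autoscaling/v1):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: catch-node
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: catch-node
  minReplicas: 1          # --min=1
  maxReplicas: 10         # --max=10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # --cpu-percent=50
```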
While load testing, the state of the autoscaler progressed as follows.
27th minute

NAME         REFERENCE                     TARGET   CURRENT   MINPODS   MAXPODS   AGE
catch-node   Deployment/catch-node/scale   50%      20%       1         10        27m

NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
catch-node   1         1         1            1           27m

29th minute

NAME         REFERENCE                     TARGET   CURRENT   MINPODS   MAXPODS   AGE
catch-node   Deployment/catch-node/scale   50%      35%       1         10        29m

NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
catch-node   1         1         1            1           29m

31st minute

NAME         REFERENCE                     TARGET   CURRENT   MINPODS   MAXPODS   AGE
catch-node   Deployment/catch-node/scale   50%      55%       1         10        31m

NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
catch-node   1         1         1            1           31m

34th minute

NAME         REFERENCE                     TARGET   CURRENT   MINPODS   MAXPODS   AGE
catch-node   Deployment/catch-node/scale   50%      190%      1         10        34m

NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
catch-node   4         4         4            4           34m
During the transition from 1 pod to 4 pods, I see error messages about failed requests while autoscaling is in progress. How long does it take to create the new pods once CPU usage exceeds the percentage configured for autoscaling? Also, is there any way to reduce this time? Once all the new pods are up, the problem goes away. Thanks in advance.
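One knob worth noting for the "reduce this time" part of the question: the autoscaling/v2 API (an assumption — it requires a reasonably recent Kubernetes version) exposes a `behavior.scaleUp` section that controls how quickly the HPA is allowed to add pods. A sketch, not a recommendation of specific values:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: catch-node
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: catch-node
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0   # react immediately instead of waiting out a window
      policies:
        - type: Pods
          value: 4                    # allow adding up to 4 pods...
          periodSeconds: 15           # ...every 15 seconds
```

Note that this only changes how fast the HPA decides to scale; the time for each new pod to pull its image, start, and pass readiness checks is separate and is what usually dominates.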