kubernetes – What is the expected behavior of running multiple `kubectl rollout restart` commands simultaneously?

I couldn’t find official documentation describing how Kubernetes behaves in the scenario you describe.

However, I wrote a script that spawns five `rollout restart` commands in parallel, and used the deployment.yaml below for testing, with a RollingUpdate strategy and maxSurge = maxUnavailable = 1.

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp1
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 10
  selector:
    matchLabels:
      app: webapp1
  template:
    metadata:
      labels:
        app: webapp1
    spec:
      containers:
      - name: webapp1
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
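
These settings put hard bounds on pod availability during any number of overlapping restarts. A minimal sketch of the arithmetic, using the values from the manifest above:

```shell
# With replicas=10, maxSurge=1 and maxUnavailable=1, the Deployment
# controller may drop to 9 Ready pods and surge to 11 pods total,
# no matter how many restarts are requested.
replicas=10
max_surge=1
max_unavailable=1

min_available=$((replicas - max_unavailable))   # fewest Ready pods allowed
max_total=$((replicas + max_surge))             # most pods that may exist at once

echo "At least ${min_available} Ready, at most ${max_total} pods total"
```

This is why the experiment below never shows fewer than 9 Running pods.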

script.sh

for var in {1..5}; do
        kubectl rollout restart deployment webapp1 > /dev/null 2>&1 &
done

Then I executed the script and watched the behavior:

    . script.sh; watch -n .5 kubectl get po

The watch command revealed that Kubernetes maintained the desired state declared in deployment.yaml: at no time were fewer than 9 pods in the Running state. Screenshots were taken a few seconds apart.


So, from this experiment, I deduce that no matter how many parallel rollout restarts occur, the Kubernetes controller manager is smart enough to maintain the desired state.
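
This also makes sense mechanically: as far as I understand, `kubectl rollout restart` simply patches the pod template with a `kubectl.kubernetes.io/restartedAt` annotation set to the current time, so concurrent restarts all write the same annotation key and the last write wins, leaving a single desired state for the controller to reconcile. A sketch of that last-write-wins behavior (the timestamps below are made up for illustration):

```shell
# Simulate five concurrent restarts, each overwriting the same
# annotation key on the pod template (as `kubectl rollout restart`
# does with kubectl.kubernetes.io/restartedAt in the real cluster).
declare -A annotations
for i in 1 2 3 4 5; do
    annotations["kubectl.kubernetes.io/restartedAt"]="2024-01-01T00:00:0${i}Z"
done

# Only one value survives: the last patch applied.
echo "${annotations[kubectl.kubernetes.io/restartedAt]}"
```

Whichever restart lands last, the Deployment controller sees one template change and performs one rolling update within the maxSurge/maxUnavailable limits.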

Hence, the expected behavior is that the deployment converges to the state described in your manifest, honoring maxSurge and maxUnavailable throughout.
