Scheduling a restart of a Kubernetes Deployment using CronJobs
Most of my home network runs on my Raspberry Pi Kubernetes cluster, and for the most part it's rock solid. However, applications being applications, sometimes they become less responsive than they should be (for example, when my Synology updates itself and reboots, any mounted NFS volumes can cause the running pods to degrade in performance). This isn't an issue with service liveness, which can be mitigated with a liveness probe that restarts the pod if a service isn't running.
If my PiHole or Plex deployments become slow to respond I can generally restart the Deployment and everything springs back into life. Typically this is just a kubectl rollout restart deployment pihole command to bounce the pods. This is fairly safe as it creates a new ReplicaSet and ensures the new Pods are running before terminating the old ones.
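For example, the manual version looks like this - the kubectl rollout status call is my addition here, to watch the new ReplicaSet roll out:

```shell
# Trigger a rolling restart of the pihole Deployment
$ kubectl rollout restart deployment pihole
deployment.apps/pihole restarted
# Wait for the new pods to become ready before the old ones are removed
$ kubectl rollout status deployment pihole
deployment "pihole" successfully rolled out
```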
Rather than waiting for performance to degrade and then manually restarting the deployment, I wanted to bake in an automated restart once a week while the family sleep. The added benefit is that I can also keep my Plex server up to date by specifying imagePullPolicy: Always, so each restart pulls a fresh copy of the image.
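As a sketch, that means something like this in the Plex Deployment's container spec (the image tag here is illustrative, not my actual config):

```yaml
# Fragment of a Deployment's pod template (illustrative)
spec:
  containers:
    - name: plex
      image: plexinc/pms-docker:latest # assumed image; use whatever tag you run
      imagePullPolicy: Always # re-pull the image every time the pod starts
```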
Fortunately, we can make use of the CronJob functionality to schedule the command. To do this I need to create a few objects:
ServiceAccount - an account to which I can delegate the rights to restart the deployment, as the default service account does not have them
```yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: restart-pihole
  namespace: pihole
```
Role - a role with minimal permissions to perform the actions on the deployment (rollout restart works by patching the Deployment's pod template, hence the get and patch verbs)
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: restart-pihole
  namespace: pihole
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments"]
    resourceNames: ["pihole"]
    verbs: ["get", "patch"]
```
RoleBinding - to bind the role to the service account
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: restart-pihole
  namespace: pihole
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: restart-pihole
subjects:
  - kind: ServiceAccount
    name: restart-pihole
    namespace: pihole
```
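As a side note (not strictly needed, but a check I find handy): once these objects are applied, kubectl auth can-i can impersonate the new service account to confirm the grant is wired up correctly:

```shell
# Should print "yes" once the Role and RoleBinding exist
$ kubectl auth can-i patch deployment/pihole \
    --as=system:serviceaccount:pihole:restart-pihole -n pihole
yes
```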
Lastly, I need a CronJob - the Cron-like task definition to run my kubectl command. I'm using the raspbernetes/kubectl image to provide a kubectl compatible with my Raspberry Pi's architecture - if you're doing this on a different architecture I'd recommend the bitnami/kubectl image.
```yaml
apiVersion: batch/v1beta1 # use batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: restart-pihole
  namespace: pihole
spec:
  concurrencyPolicy: Forbid # Do not run concurrently!
  schedule: '0 0 * * 0' # Run once a week at midnight on Sunday morning
  jobTemplate:
    spec:
      backoffLimit: 2 # Retry twice on failure
      activeDeadlineSeconds: 600 # Give up after 10 minutes
      template:
        spec:
          serviceAccountName: restart-pihole # Run under the service account created above
          restartPolicy: Never
          containers:
            - name: kubectl
              image: raspbernetes/kubectl # Specify the kubectl image
              command: # The kubectl command to execute
                - 'kubectl'
                - 'rollout'
                - 'restart'
                - 'deployment/pihole'
```
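Assuming the four manifests above are saved together (the filename restart-pihole.yaml is my choice, not anything special), applying them is a single command:

```shell
$ kubectl apply -f restart-pihole.yaml
serviceaccount/restart-pihole created
role.rbac.authorization.k8s.io/restart-pihole created
rolebinding.rbac.authorization.k8s.io/restart-pihole created
cronjob.batch/restart-pihole created
```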
Once these configurations have been applied, I can view my new setup:
```shell
$ kubectl get cronjobs.batch
NAME             SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
restart-pihole   0 0 * * 0   False     0        <none>          10s
```
One final tip: if I don't want to wait until next week to see if the CronJob works, I can create a one-off Job based on the CronJob's configuration using the --from=cronjob flag:
```shell
$ kubectl create job --from=cronjob/restart-pihole restart-pihole-now
job.batch/restart-pihole-now created
$ kubectl get jobs
NAME                 COMPLETIONS   DURATION   AGE
restart-pihole-now   1/1           111s       2m22s
$ kubectl get pods
NAME                       READY   STATUS      RESTARTS   AGE
pihole-c8774858-tqj65      1/1     Running     0          65s
restart-pihole-now-l2r69   0/1     Completed   0          90s
$ kubectl logs restart-pihole-now-l2r69
deployment.apps/pihole restarted
```
From the commands above I can see that the job was created and has completed (COMPLETIONS: 1/1), and that a new pihole pod was created 65s ago. I can also see the restart-pihole-now job pod has a STATUS of Completed, and if I check the pod logs I can see the response to the kubectl command.
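And once I'm happy the test worked, the one-off job (and its Completed pod) can be tidied away - deleting the Job removes its pod too:

```shell
$ kubectl delete job restart-pihole-now
job.batch/restart-pihole-now deleted
```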
Hopefully that’s a useful little starter for CronJobs in Kubernetes - as with cron jobs in Linux or scheduled tasks in Windows, it’s a useful tool in the administrator’s toolbelt!