- CKAD is one of the more demanding certifications in the cloud-native and DevOps world. In this article, I have compiled CKAD exam questions for practice, which will help you test your knowledge for the certification exam. These questions cover the overall syllabus of the CKAD exam. The questions given here may be slightly harder and lengthier than what you should expect in the actual exam.
- As we don't yet offer sandboxes for this mock test, time yourself to check your speed.
- The questions do not follow the order of the syllabus, and a single question may cover multiple topics. In the actual exam, questions can come in any order, and some individual questions will test knowledge of multiple areas.
- If you are not able to perform the task mentioned in a question, go through the answer and then retry it.
- During the exam, you are allowed to access the Kubernetes documentation, so navigate through the documentation whenever you are blocked.
CKAD Exam Questions (Mock) for Practice
Question 1
CKAD Exam Questions Level: Easy
The system verification team needs a namespace (resource-quota) in which the CPU and memory allocated should be limited to 100m and 1G respectively. Also list all namespaces and Kubernetes nodes of the cluster and save the output to /opt/ldh/question1
Solution:
[root@c1 ~]# kubectl create ns resource-quota
namespace/resource-quota created
#Create resource quota
[root@c1 ~]# kubectl create quota resource-qt -n resource-quota --hard=cpu=100m,memory=1G
resourcequota/resource-qt created
[root@c1 ~]# kubectl get ns,nodes >> /opt/ldh/question1
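- To double-check the quota before moving on, describe it; the Used column should show zero for a fresh namespace
[root@c1 ~]# kubectl describe quota resource-qt -n resource-quota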
Question 2
CKAD Exam Questions Level: High
Create a new cluster-internal Service named ytk23-svc in Namespace thursday. This Service should expose a single Pod named thursday-week-api of image nginx:1.17.3-alpine; create that Pod as well. The Pod should be identified by the label project: wednesday-api. The Service should use TCP port redirection of 3434:80.
Finally, use for example curl from a temporary nginx:alpine Pod to get the response from the Service. Write the response into /opt/ldh/Question10_ . Also check if the logs of Pod thursday-week-api show the request and write those into /opt/ldh/Question10_log .
Solution:
- Create the pod using kubernetes imperative command
[root@c1 ~]# kubectl -n thursday run thursday-week-api --image=nginx:1.17.3-alpine --labels project=wednesday-api
pod/thursday-week-api created
- Expose the created pod via a service, making sure the port and target port match the requirement
[root@c1 ~]# kubectl -n thursday expose pod thursday-week-api --name ytk23-svc --port 3434 --target-port 80
service/ytk23-svc exposed
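For reference, the imperative command above creates a Service roughly equivalent to the following manifest (a sketch; the selector comes from the label we set on the pod):
apiVersion: v1
kind: Service
metadata:
  name: ytk23-svc
  namespace: thursday
spec:
  type: ClusterIP
  selector:
    project: wednesday-api
  ports:
  - port: 3434
    targetPort: 80
    protocol: TCP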
- Verify the response using a temporary nginx pod
[root@c1 ~]# kubectl run tmp --restart=Never --rm --image=nginx:alpine -i -- curl http://ytk23-svc.thursday:3434
If you don't see a command prompt, try pressing enter.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0    243      0  0:00:02  0:00:02 --:--:--   243
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
- Copy the above output to /opt/ldh/Question10_ (or rerun the curl command and redirect its output there)
- Execute the below command to see the logs
[root@c1 ~]# kubectl logs -n thursday thursday-week-api
10.44.0.24 - - [29/Dec/2022:12:47:59 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.86.0" "-"
10.44.0.24 - - [29/Dec/2022:12:48:20 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.86.0" "-"
10.36.0.15 - - [29/Dec/2022:12:50:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.86.0" "-"
- Populate the log file (/opt/ldh/Question10_log) with the logs
[root@c1 ~]# kubectl logs -n thursday thursday-week-api >> /opt/ldh/Question10_log
Question 3
CKAD Exam Questions Level: Low
Create a ServiceAccount named auditor in namespace task. A team member from the auditing team needs the service account token. Save the service account token (stored base64 encoded in the secret, so decode it) in /opt/ldh/question3
Solution:
[root@c1 ~]# kubectl create ns task
namespace/task created
[root@c1 ~]# kubectl create sa auditor -n task
serviceaccount/auditor created
- On clusters where a token Secret is auto-created for the ServiceAccount (pre v1.24), the secret is named auditor-token-<hash>; extract and decode the token into /opt/ldh/question3 with the below command
kubectl get secret -n task $(kubectl get sa auditor -n task -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode > /opt/ldh/question3
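Note that from Kubernetes v1.24 onward, a token Secret is no longer auto-created for a ServiceAccount; on such clusters, request a token explicitly instead:
kubectl create token auditor -n task > /opt/ldh/question3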
Question 4
CKAD Exam Questions Level: Low
Create a deployment check-upgrade with image httpd:2.4.3 in Namespace upgrade. Change the image of the deployment after the pods come to Running. Add the label "test=upgrade" to the pods. List the rollout history and save it to /opt/ldh/question5 . Roll back the deployment to its initial version
Solution:
- The below yaml file can be used to create the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: check-upgrade
  name: check-upgrade
  namespace: upgrade
spec:
  replicas: 1
  selector:
    matchLabels:
      app: check-upgrade
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: check-upgrade
        test: upgrade
    spec:
      containers:
      - image: httpd:2.4.3
        name: httpd
        resources: {}
status: {}
[root@c1 ~]# kubectl create ns upgrade
namespace/upgrade created
[root@c1 ~]# kubectl apply -f question.yaml
deployment.apps/check-upgrade created
- Update the image to latest
[root@c1 ~]# kubectl set image deployment/check-upgrade httpd=httpd:latest -n upgrade
deployment.apps/check-upgrade image updated
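- List the rollout history and save it to /opt/ldh/question5 (revision numbers on your cluster may differ)
[root@c1 ~]# kubectl rollout history deployment/check-upgrade -n upgrade >> /opt/ldh/question5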
- Roll back the deployment to the previous revision (the initial version)
[root@c1 ~]# kubectl rollout undo deploy check-upgrade -n upgrade
deployment.apps/check-upgrade rolled back
Question 5
CKAD Exam Questions Importance: High (time consuming)
- Create a helm chart with name helm-check-test.
- List the existing helm repos and add it to file /opt/question/repo
- Add a new helm repo bitnami
- Install chart bitnami/node of version 19.1.6 with replica count as 5
- Once the installation is complete, upgrade the chart to the latest version
- Check the history of the helm deployment and save the output to /opt/question/helmhistory
- Rollback the chart to its initial version
Solution:
- Create a helm chart with name helm-check-test.
[root@c1 ~]# helm create helm-check-test
Creating helm-check-test
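helm create scaffolds a starter chart directory. With Helm 3 it looks roughly like this (exact contents can vary by Helm version):
helm-check-test/
  Chart.yaml
  charts/
  templates/
  values.yaml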
- List the existing helm repos and add it to file /opt/question/repo
[root@c1 ~]# helm repo list >> /opt/question/repo
- Add a new helm repo bitnami
[root@c1 ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
- Install chart bitnami/node of version 19.1.6 with replica count as 5
[root@c1 ~]# helm install mynode bitnami/node --set replicaCount=5 --version 19.1.6
WARNING: This chart is deprecated
NAME: mynode
LAST DEPLOYED: Thu Dec 29 19:15:46 2022
NAMESPACE: abhi
STATUS: deployed
REVISION: 1
TEST SUITE: None
- Once the installation is complete, upgrade the chart to the latest version
[root@c1 ~]# helm upgrade mynode bitnami/node --set replicaCount=5
WARNING: This chart is deprecated
Release "mynode" has been upgraded. Happy Helming!
NAME: mynode
LAST DEPLOYED: Thu Dec 29 19:21:48 2022
NAMESPACE: abhi
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
This Helm chart is deprecated
CHART NAME: node
CHART VERSION: 19.1.7
APP VERSION: 16.18.0
** Please be patient while the chart is being deployed **
1. Get the URL of your Node app by running: <trimmed>
- Check the history of the helm deployment and save the output to /opt/question/helmhistory
[root@c1 ~]# helm history mynode
REVISION  UPDATED                   STATUS      CHART        APP VERSION  DESCRIPTION
1         Thu Dec 29 19:19:31 2022  superseded  node-19.1.6  16.18.0      Install complete
2         Thu Dec 29 19:21:48 2022  deployed    node-19.1.7  16.18.0      Upgrade complete
[root@c1 ~]# helm history mynode >> /opt/question/helmhistory
- Rollback the chart to its initial version
[root@c1 ~]# helm rollback mynode 1
Rollback was a success! Happy Helming!
[root@c1 ~]# helm history mynode
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Thu Dec 29 19:19:31 2022 superseded node-19.1.6 16.18.0 Install complete
2 Thu Dec 29 19:21:48 2022 superseded node-19.1.7 16.18.0 Upgrade complete
3 Thu Dec 29 19:32:27 2022 deployed node-19.1.6 16.18.0 Rollback to 1
Question 6
CKAD Exam Questions Importance: Very High
The CNBC broadcasting team needs to test the stability of a deployment across two different software image versions.
- Create a namespace cnbc, and make sure pod count in the namespace is not more than 10
- Create a deployment (cnbc-1) with image nginx:1.14
- Create a cluster IP service, which will drive traffic to the deployment
- Update the deployment to (cnbc-2) with image nginx:latest, using the canary deployment update strategy
- Add the label upgrade=canary to both deployments
- Redirect 60% of the incoming traffic to old deployment and 40% of the incoming traffic to new deployment
Solution
- Create namespace cnbc
[root@c1 ~]# kubectl create ns cnbc
namespace/cnbc created
- Since the ask is to redirect 60% of traffic to cnbc-1 and 40% to cnbc-2, and the total number of pods shouldn't be more than 10, we set the replica count to 6 for the first deployment and 4 for the second
- Create deployment with image nginx:1.14
[root@c1 ~]# kubectl create deploy cnbc-1 --image=nginx:1.14 --replicas=6 -n cnbc
deployment.apps/cnbc-1 created
- Create the second deployment with latest image
[root@c1 ~]# kubectl create deploy cnbc-2 --image=nginx:latest --replicas=4 -n cnbc
deployment.apps/cnbc-2 created
- Label both deployments (and their pods) with the same label upgrade=canary, e.g. via kubectl edit. The below snippet shows the changes done for one of them
generation: 1
labels:
  app: cnbc-1
  upgrade: canary #Added
<trimmed>
progressDeadlineSeconds: 600
replicas: 6
revisionHistoryLimit: 10
selector:
  matchLabels:
    app: cnbc-1
    upgrade: canary #Added
strategy:
  rollingUpdate:
<trimmed>
    labels:
      app: cnbc-1
      upgrade: canary #Added
- Expose the deployments. In the below command we mention only one deployment name, but since we specify the label selector, every pod in the namespace carrying that label will be matched. So pods from both deployments will be exposed by the command below
[root@c1 ~]# kubectl expose -n cnbc deploy cnbc-1 --selector upgrade=canary --port=80
service/cnbc-1 exposed
- Verify the change by describing the service; we can see that we have 10 endpoints corresponding to the 10 pods from both deployments
[root@c1 ~]# kubectl describe svc -n cnbc cnbc-1
Name:              cnbc-1
Namespace:         cnbc
Labels:            app=cnbc-1
                   upgrade=canary
Annotations:       <none>
Selector:          upgrade=canary
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.107.143.32
IPs:               10.107.143.32
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.36.0.11:80,10.36.0.13:80,10.36.0.17:80 + 7 more...
Session Affinity:  None
Events:            <none>
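As a rough sanity check of the 60/40 split, confirm that six cnbc-1 pods and four cnbc-2 pods sit behind the shared label; the Service balances across all matching endpoints, so the replica ratio drives the traffic ratio:
[root@c1 ~]# kubectl get pods -n cnbc -l upgrade=canary
[root@c1 ~]# kubectl get endpoints -n cnbc cnbc-1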
Question 7
CKAD Exam Questions Level: Easy
Create a secret that defines the variable password=rubicks. Create a Deployment with the name ldhsecretapp, which starts the nginx image and uses this secret as an environment variable.
Solution:
- Create secret using imperative command
[root@c1 ~]# kubectl create secret generic ldh-secret --from-literal=password=rubicks
secret/ldh-secret created
- Create a deployment using imperative command
[root@c1 ~]# kubectl create deploy ldhsecretapp --image=nginx
deployment.apps/ldhsecretapp created
- Create an environment variable from the secret
[root@c1 ~]# kubectl set env --from=secret/ldh-secret deployment/ldhsecretapp
deployment.apps/ldhsecretapp env updated
- Verify the solution
[root@c1 ~]# kubectl exec ldhsecretapp-5496b84d9f-qf6l6 -- env|grep -i pass
PASSWORD=rubicks
The same question can be solved using a deployment yaml file as well. But since saving time is essential, it is always good to proceed with imperative commands, as yaml may take time to get the syntax and formatting right.
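If you do go the declarative route, the relevant part of the pod template would look roughly like this sketch (kubectl set env wires up the secret key in a similar way):
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: ldh-secret
          key: password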
Question 8
CKAD Exam Questions Importance: High
Create a Dockerfile and save it in directory /opt/ldh/Question12 . The Dockerfile should run an alpine image with the command "echo hello linuxdatahub" as the default command. Build the image, and export it in OCI format to a file named "linuxdocker" with the tag 9.8. Use sudo wherever required.
Solution:
- Below is the Dockerfile content
[root@c1 Question12]# vi Dockerfile
FROM alpine
CMD ["echo","hello linuxdatahub"]
- Build the image from the docker file
[root@c1 Question12]# docker build -t linuxdocker:9.8 .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM alpine
latest: Pulling from library/alpine
c158987b0551: Pull complete
Digest: sha256:8914eb54f968791faf6a8638949e480fef81e697984fba772b3976835194c6d4
Status: Downloaded newer image for alpine:latest
 ---> 49176f190c7e
Step 2/2 : CMD ["echo","hello linuxdatahub"]
 ---> Running in 878a7f74655d
Removing intermediate container 878a7f74655d
 ---> 6b570b6471a5
Successfully built 6b570b6471a5
Successfully tagged linuxdocker:9.8
- Verify image
[root@c1 Question12]# docker images|grep linux
linuxdocker   9.8   6b570b6471a5   59 seconds ago   7.05MB
- Docker images are by default OCI compliant, so docker save can be used to export the image
[root@c1 Question12]# docker save -o linuxdocker.tar linuxdocker:9.8
[root@c1 Question12]# ll
total 7180
-rw-r--r-- 1 root root      47 Dec 29 13:10 Dockerfile
-rw------- 1 root root 7347712 Dec 29 13:15 linuxdocker.tar
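Optionally, run the image once to confirm the default command prints the expected string:
[root@c1 Question12]# docker run --rm linuxdocker:9.8
hello linuxdatahub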
Question 9
CKAD Exam Questions Importance: High
Create a multi-container Pod with the name data-sidecar-pod that runs in the question13 namespace
- The main container should run a busybox image and write the output of the date command to /var/log/date.log every 10 seconds
- The sidecar container should provide nginx web access to this file using a shared volume mounted at /usr/share/nginx/html
- Ensure the image is always pulled and not taken from the local system repository
Solution:
- The below yaml file can be used to create the resources required for this question
[root@c1 Question12]# cat question13.yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-sidecar-pod
  namespace: question13
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    hostPath:
      path: /mydata
  containers:
  - name: main-container
    image: busybox
    imagePullPolicy: Always #Required
    volumeMounts:
    - name: shared-data
      mountPath: /var/log
    args:
    - sh
    - -c
    - while sleep 10; do date >> /var/log/date.log; done
  - name: nginx-container
    image: nginx
    imagePullPolicy: Always
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html #Required
[root@c1 Question12]# kubectl create ns question13
namespace/question13 created
[root@c1 Question12]# kubectl apply -f question13.yaml
pod/data-sidecar-pod created
[root@c1 Question12]# kubectl get pods -n question13
NAME               READY   STATUS    RESTARTS   AGE
data-sidecar-pod   2/2     Running   0          33s
[root@c1 Question12]# kubectl exec -it -n question13 data-sidecar-pod -c nginx-container -- sh
# cat /usr/share/nginx/html/date.log
Thu Dec 29 08:01:39 UTC 2022
Thu Dec 29 08:01:49 UTC 2022
Question 10
CKAD Exam Questions Level: Easy
Create a pod which runs an nginx webserver
- The webserver should expose port 80 and should run in namespace thursday
- The pod should be marked as ready only after checking the /healthz path
Solution:
- Create namespace thursday
[root@c1 Question12]# kubectl create ns thursday
namespace/thursday created
- The below yaml can be used to implement the readiness probe
[root@c1 Question12]# vi question14.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: web-server
  namespace: thursday
spec:
  containers:
  - name: webserver
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /healthz
        port: 80
      #Readiness probe configured to check the endpoint
      initialDelaySeconds: 5
      periodSeconds: 5
[root@c1 Question12]# kubectl get pods -n thursday
NAME         READY   STATUS    RESTARTS   AGE
web-server   1/1     Running   0          22s
- Note: the stock nginx image does not serve /healthz by default, so on a real cluster this probe would return 404 and keep the pod unready; treat the manifest above as the probe pattern being tested, and in practice make sure the application actually serves the probed path
Question 11
CKAD Exam Questions Importance: High
Create a YAML file with the name our-net-pol that creates two pods and a NetworkPolicy
- The first pod should run a nginx image with default settings
- The second pod should run a busybox image with sleep 3600 command
- Control the traffic between these two pods such that access to the nginx server is only allowed from the busybox pod, while the busybox pod is free to access or be accessed from anywhere
Solution:
- The below yaml file contains the two pod definitions and the NetworkPolicy. Carefully analyse the labels and the selectors used in the network policy
apiVersion: v1
kind: Pod
metadata:
  name: nginxnetp
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
---
apiVersion: v1
kind: Pod
metadata:
  name: busyboxnetpol
  labels:
    access: allowed
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "3600"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: allowed
- Expose the pod with port 80
[root@c1 Question12]# kubectl expose pod nginxnetp --port=80
service/nginxnetp exposed
- Verify the connectivity
[root@c1 ~]# kubectl exec -it busyboxnetpol -- wget --spider --timeout=1 nginxnetp
Connecting to nginxnetp (10.102.240.118:80)
remote file exists
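To prove the policy actually blocks other sources, repeat the check from a temporary pod without the access=allowed label; it should time out (this assumes the cluster's CNI plugin enforces NetworkPolicy):
[root@c1 ~]# kubectl run tmp --restart=Never --rm -i --image=busybox -- wget --spider --timeout=1 nginxnetp
wget: download timed out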
- It should be noted that we are not implementing a default-deny policy or any policy for the busybox pod, as the question clearly states that busybox should be free to access and to be accessed
Question 12
CKAD Exam Questions Level: High
- Create a Persistent Volume with the name sem-pv. It should be of size 2Gi, and multiple clients should be able to access it simultaneously. Use hostPath as the storage type
- Create a PersistentVolumeClaim that requests 1Gi from any Persistent Volume that allows multiple clients simultaneous read/write access. Assign sem-pvc as its name
- Create a pod with the nginx image. Mount the persistent volume at path /webdata. The name of the pod should be sem-pod
Solution:
- The below yaml file will create the PV, the PVC, and a pod which consumes the created PVC
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sem-pv
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/mnt"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sem-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: sem-pod
spec:
  volumes:
  - name: ldh-pv-storage
    persistentVolumeClaim:
      claimName: sem-pvc
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/webdata"
      name: ldh-pv-storage
- The below command verifies the created resources
[root@c1 ~]# kubectl get pods,pv,pvc
NAME          READY   STATUS    RESTARTS   AGE
pod/sem-pod   1/1     Running   0          32s
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   REASON   AGE
persistentvolume/sem-pv   2Gi        RWX            Retain           Bound    abhi/sem-pvc                           32s
NAME                            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/sem-pvc   Bound    sem-pv   2Gi        RWX                           32s
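As a quick smoke test (not required by the question), write through the mount and read it back:
[root@c1 ~]# kubectl exec sem-pod -- sh -c 'echo ok > /webdata/test && cat /webdata/test'
ok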
Question 13
CKAD Exam Questions Level: High
- Create namespace (resource-quota) with the following characteristics
- A maximum of 5 pods can be allowed
- Max CPU should be 1000 millicores and memory should be 2 GB
- Create a deployment with the name minimalnginx; the replica count should be 3. Each pod should have an initial memory request of 64 MiB, and the maximum allowed RAM should be 256 MiB
Solution:
- Create Namespace
[root@c1 ~]# kubectl create ns resource-quota
namespace/resource-quota created
- Create Resource quota, which sets CPU, memory and number of pods
[root@c1 ~]# kubectl create quota resource-qt -n resource-quota --hard=cpu=1,memory=2G,pods=5
resourcequota/resource-qt created
- Verify the created resource quota
[root@c1 ~]# kubectl describe ns resource-quota
Name:         resource-quota
Labels:       kubernetes.io/metadata.name=resource-quota
Annotations:  <none>
Status:       Active

Resource Quotas
  Name:     resource-qt
  Resource  Used  Hard
  --------  ---   ---
  cpu       0     1
  memory    0     2G
  pods      0     5
- The below yaml file will create the deployment with resource requests and limits
[root@c1 ~]# cat question17.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: minimalnginx
  name: minimalnginx
  namespace: resource-quota
spec:
  replicas: 3 # change
  selector:
    matchLabels:
      app: minimalnginx
  template:
    metadata:
      labels:
        app: minimalnginx
    spec:
      containers:
      - image: nginx
        name: minimalnginx
        resources:
          limits:
            memory: 256Mi
            cpu: 200m
          requests:
            memory: 64Mi
            cpu: 100m
- Verify the resource quota once the pods are up to see the utilization
[root@c1 ~]# kubectl describe ns resource-quota
Name:         resource-quota
Labels:       kubernetes.io/metadata.name=resource-quota
Annotations:  <none>
Status:       Active

Resource Quotas
  Name:     resource-qt
  Resource  Used   Hard
  --------  ---    ---
  cpu       300m   1
  memory    192Mi  2G
  pods      3      5
Question 14
CKAD Exam Questions Level: Medium
Create a pod with the name sleeper. The image should be the latest version of busybox, and it should execute sleep 2600 as its default command. Ensure the primary user is a member of the supplementary group 2000 when this pod is created
Solution:
- The below yaml file can be used to create the resource. Note that we are using a security context with fsGroup, which gets added to the container's supplementary groups
[root@c1 ~]# cat question18.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: sleeper
  name: sleeper
spec:
  containers:
  - args:
    - sleep
    - "2600"
    image: busybox
    name: sleeper
    resources: {}
  dnsPolicy: ClusterFirst
  securityContext:
    fsGroup: 2000
  restartPolicy: Always
status: {}
- Verify the supplementary group for the user
[root@c1 ~]# kubectl exec -it sleeper -- sh
/ # id
uid=0(root) gid=0(root) groups=10(wheel),2000
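fsGroup works here because it is appended to each container's supplementary groups; the field that targets this requirement most directly is supplementalGroups, which could be used in the pod securityContext instead:
securityContext:
  supplementalGroups:
  - 2000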
Question 15
CKAD Exam Questions Level: Low
Convert the api version of the below resource file to networking.k8s.io/v1
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          serviceName: test
          servicePort: 80
Solution:
- The below convert command can be used (on current kubectl releases, convert ships as the separate kubectl-convert plugin, which must be installed alongside kubectl)
kubectl convert -f <old-file> --output-version networking.k8s.io/v1
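For reference, the converted manifest should come out roughly like this, with the backend fields moved under service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80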
Hi – With regard to the solution you provided for Q6 (aka. canary deployment) – in the deployment YAML file, shouldn’t the “upgrade: canary #Added” be included under spec.selector.matchLabels rather than being placed within metadata.labels (which is the labels section for deployment)? This is because spec.selector.matchLabels and template.metadata.labels (used for pods) should correspond to each other. Thanks.
Yes, Mickael, the labels mentioned under the selector and the pod labels should match; otherwise an error will be shown while applying the kubernetes yaml file. Thanks for pointing it out. It's corrected now
1. Do we get the Helm documentation in the exam, or do we need to remember helm commands?
2. Do we need Docker commands in the exam?
3. Is familiarity with "Yq | JSON Path for Value Properties" a must? Or can we inspect the Kubernetes objects and get values?
I have only two more weeks until the exam and have already finished the Kubernetes content. Memorizing more would be an overload for me, so I just need to know whether the above things are a must.
Answer to
qn 1: Yes, we get access to the helm documentation
qn 2: Docker commands may not be required in the exam, except for docker image building
qn 3: We can inspect the kubernetes objects and get values. Also, we will have access to the official Kubernetes docs (no third-party websites allowed), from where you can copy-paste
In short: memorizing is not required; if you know what content is available and where to find it, you will be good.
Also, best of luck for your exam