
    K8s CKA sample questions 2025

     

    The Kubernetes CKA exam was created by the Linux Foundation and the Cloud Native Computing Foundation (CNCF) as part of their ongoing effort to help develop the Kubernetes ecosystem. The exam is an online, proctored, performance-based test that requires solving multiple tasks from a command line against running Kubernetes clusters. In this post, we will walk through essential Kubernetes CKA exam sample questions, covering how to manage Pods, Nodes, NetworkPolicies, and ClusterRoles, and how to perform cluster upgrades. Each task includes the imperative commands or YAML configurations needed to solve it.

     

    Task 1: Create a New ClusterRole for a Deployment Pipeline

    Create a new ClusterRole named deployment-clusterrole, which only allows creating the following resource types: Deployment, StatefulSet, DaemonSet. Create a new ServiceAccount named cicd-token in the existing namespace app-team1. Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.

    Solution

    1. Create the ClusterRole: 

      kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
      

       

    2. Create the ServiceAccount:

      kubectl create serviceaccount cicd-token -n app-team1
    3. Bind the ClusterRole to the ServiceAccount:

      kubectl create rolebinding deployment-rolebinding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1
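Although not required by the task, the binding can be sanity-checked with kubectl auth can-i, impersonating the new ServiceAccount:

```shell
# Expect "yes": the ClusterRole grants create on deployments in app-team1
kubectl auth can-i create deployments \
  --as=system:serviceaccount:app-team1:cicd-token -n app-team1

# Expect "no": only the create verb was granted
kubectl auth can-i delete deployments \
  --as=system:serviceaccount:app-team1:cicd-token -n app-team1
```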
      

       

    Task 2: Set a Node as Unavailable

    Set the node named ek8s-node-0 as unavailable and reschedule all the Pods running on it.

    Solution

    1. Cordon the Node:

      kubectl cordon ek8s-node-0

       

    2. Drain the Node:

      kubectl drain ek8s-node-0 --ignore-daemonsets --delete-emptydir-data

      (On Kubernetes v1.20 and later, --delete-local-data has been renamed to --delete-emptydir-data.)
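A quick check confirms the node is cordoned and that only DaemonSet-managed Pods remain on it (a sanity check, assuming access to the same cluster):

```shell
# STATUS should read Ready,SchedulingDisabled
kubectl get node ek8s-node-0

# Any Pods still listed here should belong to DaemonSets
kubectl get pods -A -o wide --field-selector spec.nodeName=ek8s-node-0
```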
      

       

      Task 3: Upgrade Kubernetes Control Plane and Nodes

      Given an existing Kubernetes cluster running version 1.22.1, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.22.2. Be sure to drain the master node before upgrading it and uncordon it after the upgrade.

      Solution

      1. Drain the Node:

        kubectl drain <master-node-name> --ignore-daemonsets --delete-emptydir-data
        

         

      2. Upgrade kubeadm:

        sudo apt-get update && sudo apt-get install -y kubeadm=1.22.2-00
        
      3. Upgrade the Master Node:

        sudo kubeadm upgrade apply v1.22.2
        
      4. Upgrade kubelet and kubectl, then restart the kubelet:

        sudo apt-get update && sudo apt-get install -y kubelet=1.22.2-00 kubectl=1.22.2-00
        sudo systemctl daemon-reload && sudo systemctl restart kubelet
        

         

      5. Uncordon the Node:

        kubectl uncordon <master-node-name>
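Two optional sanity checks around the upgrade: kubeadm upgrade plan previews the target versions before step 3, and after uncordoning, the node should report the new version:

```shell
# Run before "kubeadm upgrade apply" to preview component versions
sudo kubeadm upgrade plan

# After the upgrade and uncordon, VERSION should show v1.22.2
kubectl get nodes
```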

         


      Task 4: Create an etcd Snapshot

      Create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /var/lib/backup/etcd-snapshot.db.

      Solution

      1. Create the Snapshot:
        ETCDCTL_API=3 etcdctl snapshot save /var/lib/backup/etcd-snapshot.db \
          --endpoints=https://127.0.0.1:2379 \
          --cacert=/etc/etcd/etcd-ca.crt \
          --cert=/etc/etcd/etcd-server.crt \
          --key=/etc/etcd/etcd-server.key
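Verifying the snapshot afterwards is good practice; etcdctl snapshot status prints its hash, revision count, and size:

```shell
ETCDCTL_API=3 etcdctl snapshot status /var/lib/backup/etcd-snapshot.db --write-out=table
```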
        

         


      Task 5: Create a NetworkPolicy

      Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace fubar. Ensure that the new NetworkPolicy allows Pods in namespace internal to connect to port 9000 of Pods in namespace fubar. Further ensure that the new NetworkPolicy does not allow access to Pods which don’t listen on port 9000 or from Pods not in namespace internal.

      Solution

      YAML Configuration:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-port-from-namespace
        namespace: fubar
      spec:
        podSelector: {}
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                name: internal
          ports:
          - protocol: TCP
            port: 9000
      

       

      Apply the policy:

      kubectl apply -f allow-port-from-namespace.yaml
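The namespaceSelector above matches a name: internal label, which namespaces do not carry by default. If the exam environment has not already labeled the namespace (an assumption worth checking), add it and confirm the policy:

```shell
# Give the internal namespace the label the policy selects on
kubectl label namespace internal name=internal

# Confirm the policy's selectors and ingress rules
kubectl describe networkpolicy allow-port-from-namespace -n fubar
```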

       


      Task 6: Expose a Deployment Using NodePort

      Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx. Create a new service named front-end-svc exposing the container port http. Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled.

      Solution

      1. Add a named port to the existing nginx container (for example via kubectl edit deployment front-end), so the container spec includes:

        ports:
        - name: http
          containerPort: 80
          protocol: TCP

      2. Create the NodePort Service exposing the named port:

        kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort
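To verify, the Service should show type NodePort with an allocated port (assigned by the cluster from the 30000-32767 range by default):

```shell
# TYPE should be NodePort; PORT(S) shows 80:<nodePort>/TCP
kubectl get svc front-end-svc

# The endpoints should list the front-end Pod IPs on port 80
kubectl get endpoints front-end-svc
```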
        

      Task 7: Schedule a Pod with a Node Selector

      Schedule a Pod as follows: Name: nginx-kusc00401, Image: nginx, Node selector: disk=ssd.

      Solution

      YAML Configuration:

      apiVersion: v1
      kind: Pod
      metadata:
        name: nginx-kusc00401
      spec:
        containers:
        - name: nginx
          image: nginx
        nodeSelector:
          disk: ssd
      

      Apply the Pod configuration:

      kubectl apply -f pod.yaml
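If no node carries the disk=ssd label the Pod stays Pending, so it is worth checking both the labels and the scheduling result:

```shell
# Nodes matching the selector; if empty, label one first:
#   kubectl label node <node-name> disk=ssd
kubectl get nodes -l disk=ssd

# The NODE column should show one of the labeled nodes
kubectl get pod nginx-kusc00401 -o wide
```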

       


      Task 8: Count Ready Nodes

      Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.

      Solution

      1. Count Ready Nodes:

        kubectl describe nodes $(kubectl get nodes --no-headers | grep -w Ready | awk '{print $1}') | grep -i 'Taints:' | grep -vc NoSchedule > /opt/KUSC00402/kusc00402.txt

        (kubectl get nodes does not support a field selector on status.conditions, and its default output does not show taints; describing the Ready nodes and excluding those whose Taints line contains NoSchedule gives the required count.)

         


      Task 9: Schedule a Pod with Multiple Containers

      Schedule a Pod as follows: Name: kucc8, App Containers: 2, Container Name/Images: nginx and consul.

      Solution

      YAML Configuration:

      apiVersion: v1
      kind: Pod
      metadata:
        name: kucc8
      spec:
        containers:
        - name: nginx
          image: nginx
        - name: consul
          image: consul
      

      Apply the Pod configuration:

       

      kubectl apply -f multi-pod.yaml
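Once both containers start, the READY column should read 2/2:

```shell
kubectl get pod kucc8

# Inspect per-container state if either container fails to start
kubectl describe pod kucc8
```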

       


      Conclusion

      This post walked through solutions to common Kubernetes CKA exam questions, from creating ClusterRoles and performing control-plane upgrades to scheduling Pods with specific configurations. Since the exam is performance-based, practicing these tasks hands-on in a real cluster is the best preparation.

     
