Kubernetes (K8s): Top 100 Interview Questions & Answers (2023 Update)

  • This article covers a range of Kubernetes (K8s) interview questions, from basic to advanced, including scenario-based questions.
  • Here are K8s interview questions and answers for freshers as well as experienced candidates looking to land their dream job.

1) What’s the difference between k8s Vs. k3s?

K8s is just an abbreviation of Kubernetes ("K", followed by the 8 letters "ubernete", followed by "s"). However, when people talk about either Kubernetes or K8s, they normally mean the original upstream project, designed by Google as a highly available and hugely scalable platform. Key features of Kubernetes include:

  • Load Balancing and Service Discovery: Automatically assigns DNS names, IP addresses, and load-balances to pods.
  • Automatic Bin Packing: Ensures the availability and optimization of resources by placing containers based on their resource requirements.
  • Self-Recovery: Restarts failed containers, replaces containers after node failures, and removes containers that fail health checks.
  • Rollout and Rollback Automation: Rolls out changes progressively and rolls back to a previous version when issues occur, preventing system failure.
  • Batch Execution and Scaling: Enables you to scale applications manually or automatically and manages batches and Continuous Integration (CI) workloads.

K3s is designed to be a single binary of less than 40MB that completely implements the Kubernetes API. To achieve this, its developers removed a lot of extra drivers that didn't need to be part of the core and are easily replaced with add-ons.

K3s is a fully CNCF (Cloud Native Computing Foundation) certified Kubernetes offering. This means that you can write your YAML to operate against a regular "full-fat" Kubernetes cluster, and it will also apply against a k3s cluster.

Due to its low resource requirements, it's possible to run a cluster on machines with as little as 512MB of RAM. This also means that pods can be allowed to run on the master, as well as on the nodes.
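For illustration, a single-node k3s cluster can be bootstrapped with the official install script (exact behavior may vary by k3s version):

curl -sfL https://get.k3s.io | sh -    # installs k3s and starts the server
sudo k3s kubectl get nodes             # verify the node is Ready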

2) Kubernetes Vs. OpenShift: What are the most critical differences?

Kubernetes and OpenShift are two of the best-known container orchestration platforms.

Kubernetes Vs. OpenShift: What is the difference?

1. Product vs. Project: Kubernetes is an open-source project, while OpenShift is a commercial product.
2. Security: The setup and configuration of Kubernetes authentication requires a lot of effort, whereas OpenShift ships with stronger default security policies.
3. Web UI: With Kubernetes, you have to install the dashboard separately and use kube-proxy to forward a port of your local machine to the cluster's admin server, and you have to manually create a bearer token for authorization and authentication, since the dashboard has no login page. OpenShift's web console has a login page; you can easily access the console, create or change most resources through a form, and visualize servers, projects, and cluster roles.
4. Deployment approach: Kubernetes deployment is done with Deployment objects, which can handle multiple concurrent updates. OpenShift deployment is done with DeploymentConfig objects, which add advantages such as versioning and triggers that drive automated deployments.
5. CI/CD: Plain Kubernetes does not offer an official CI/CD integration solution; you need third-party tools like CircleCI to build a CI/CD pipeline. This process is easier in OpenShift, which offers a certified Jenkins container that you can use as the CI server.
6. Integrated image registry: Kubernetes lets you set up your own Docker registry, but you don't get an integrated image registry. OpenShift provides an integrated image registry that you can use with Red Hat or Docker Hub, including a console where you can search for information about images and image streams in a cluster.
7. Updates: On both platforms you can upgrade existing clusters instead of rebuilding them from scratch. Kubernetes usually uses the kubeadm upgrade command to update to a newer version. OpenShift does not alert you when you have to update to a new Kubernetes version; you have to use the Red Hat Enterprise Linux package management system to update OpenShift to the newest version.

3) How is Kubernetes (K8s) different from Docker Swarm?

  • Installation & cluster configuration: Kubernetes setup is very complicated, but once installed the cluster is robust. Docker Swarm installation is very simple, but the cluster is not as robust.
  • GUI: Kubernetes provides the Kubernetes Dashboard. Docker Swarm has no GUI.
  • Scalability: Kubernetes is highly scalable and scales fast. Docker Swarm is also highly scalable and scales 5x faster than Kubernetes.
  • Auto-scaling: Kubernetes can do auto-scaling. Docker Swarm cannot do auto-scaling.
  • Load balancing: Kubernetes needs manual intervention to load-balance traffic between different containers and pods. Docker Swarm does automatic load balancing of traffic between containers in the cluster.
  • Rolling updates & rollbacks: Kubernetes can deploy rolling updates and does automatic rollbacks. Docker Swarm can deploy rolling updates, but not automatic rollbacks.
  • Data volumes: Kubernetes can share storage volumes only with other containers in the same pod. Docker Swarm can share storage volumes with any other container.
  • Logging & monitoring: Kubernetes has in-built tools for logging and monitoring. With Docker Swarm, third-party tools like the ELK stack have to be used.

4) What is Kubernetes?


Kubernetes is an open-source container management tool that handles container deployment, scaling and descaling of containers, and load balancing. Being Google's brainchild, it has excellent community support and works brilliantly with all the major cloud providers. So, we can say that Kubernetes is not a containerization platform, but a multi-container management solution.

It's a known fact that Docker provides the lifecycle management of containers and that a Docker image builds the runtime containers. But since these individual containers have to communicate, Kubernetes is used. So, Docker builds the containers, and these containers communicate with each other via Kubernetes. Containers running on multiple hosts can thus be linked and orchestrated using Kubernetes.

6) What is the difference between deploying applications on hosts and containers?

[Diagram: deploying applications on hosts vs. in containers]

Refer to the above diagram. The left-side architecture represents deploying applications on hosts. This kind of architecture has an operating system, the operating system has a kernel, and the various libraries needed by the applications are installed on the operating system. In this framework you can have n applications, and all of them share the libraries present in that operating system. Deploying applications in containers, however, gives a slightly different architecture.

In a containerized architecture, the kernel is the only thing shared between all the applications. If a particular application needs Java, only that application gets access to Java; if another application needs Python, only that application has access to Python.

The individual blocks you can see on the right side of the diagram are containerized, isolated from the other applications. Each application has the necessary libraries and binaries isolated from the rest of the system, and they cannot be encroached on by any other application.

7) What is Container Orchestration?

Consider a scenario where you have 5-6 microservices for an application. These microservices are put in individual containers, but they won't be able to communicate without container orchestration. So, just as orchestration in music means all the instruments playing together in harmony, container orchestration means all the services in individual containers working together to fulfill the needs of a single application.

8) What is the need for Container Orchestration?

Consider you have 5-6 microservices for a single application performing various tasks, and all these microservices are put inside containers. Now, to make sure that these containers communicate with each other we need container orchestration.

[Diagram: challenges without container orchestration]

As you can see in the above diagram, many challenges arise without the use of container orchestration. Container orchestration came into the picture to overcome these challenges.

9) What are the features of Kubernetes?

The features of Kubernetes are as follows:

  • Automated scheduling and bin packing
  • Self-healing capabilities
  • Automated rollouts and rollbacks
  • Horizontal scaling and load balancing
  • Service discovery
  • Secret and configuration management

10) How does Kubernetes simplify containerized Deployment?

A typical application has a cluster of containers running across multiple hosts, and all these containers need to talk to each other. To do this, you need something that can load-balance, scale, and monitor the containers. Since Kubernetes is cloud-agnostic and can run on any public or private provider, it is a natural choice to simplify containerized deployment.

11) What do you know about clusters in Kubernetes?

The fundamental idea behind Kubernetes is desired state management: we feed the cluster services a specific configuration, and it is up to the cluster services to go out and run that configuration in the infrastructure.

[Diagram: Kubernetes cluster example]

So, as you can see in the above diagram, the deployment file will have all the configurations required to be fed into the cluster services. Now, the deployment file will be fed to the API and then it will be up to the cluster services to figure out how to schedule these pods in the environment and make sure that the right number of pods are running.

So, the API which sits in front of services, the worker nodes & the Kubelet process that the nodes run, all together make up the Kubernetes Cluster.

12) What is Google Container Engine?

Google Container Engine (GKE, now called Google Kubernetes Engine) is a managed platform for Docker containers and clusters. This Kubernetes-based engine supports only those clusters which run within Google's public cloud services.

13)  What is Heapster? What is Minikube?

Heapster is a cluster-wide aggregator of data provided by the Kubelet running on each node. This container management tool is supported natively on a Kubernetes cluster and runs as a pod, just like any other pod in the cluster. It discovers all nodes in the cluster and queries usage information from the Kubernetes nodes via the on-machine Kubernetes agent. (Note that Heapster is now deprecated in favor of the Metrics Server and tools like Prometheus.)

Minikube is a tool that makes it easy to run Kubernetes locally. It runs a single-node Kubernetes cluster inside a virtual machine.

14) What is Kubectl?

Kubectl is the command-line tool using which you can pass commands to the cluster. It provides the CLI to run commands against the Kubernetes cluster, with various ways to create and manage Kubernetes components.

15) What is Kubelet?

This is an agent service which runs on each node and enables the slave (worker) to communicate with the master. So, the Kubelet works from the description of containers provided to it in a PodSpec and makes sure that the containers described in that PodSpec are healthy and running.

16) What do you understand by a node in Kubernetes?
A node is a worker machine in a Kubernetes cluster; it may be a virtual or a physical machine, depending on the cluster. Each node runs the services necessary to run pods (the kubelet, kube-proxy, and a container runtime) and is managed by the master components.

17) What are the different components of Kubernetes Architecture?

The Kubernetes Architecture has mainly 2 components – the master node and the worker node. As you can see in the below diagram, the master and the worker nodes have many inbuilt components within them. The master node has the kube-controller-manager, kube-apiserver, kube-scheduler, etcd. Whereas the worker node has kubelet and kube-proxy running on each node.

Kubernetes Architecture Components
  • Master Node: The master node is the first and most vital component which is responsible for the management of Kubernetes cluster. It is the entry point for all kinds of administrative tasks. There may be more than one master node in the cluster to check for fault tolerance.
  • API Server: The API server acts as an entry point for all the REST commands used for controlling the cluster.
  • Scheduler: The scheduler schedules the tasks to the slave node. It stores the resource usage information for every slave node. It is responsible for distributing the workload.
  • Etcd: etcd is a consistent, distributed key-value store that holds the configuration details and state of the cluster. The other master components communicate with etcd (via the API server) to read and write cluster data.
  • Worker/Slave nodes: Worker nodes are another essential component that contains all the required services to manage the networking between the containers, communicate with the master node, which allows you to assign resources to the scheduled containers.
  • Kubelet: It gets the configuration of a Pod from the API server and ensures that the described containers are up and running.
  • Docker Container: Docker container runs on each of the worker nodes, which runs the configured pods.
  • Pods: A pod is a combination of single or multiple containers that logically run together on nodes.

18) What do you understand by Kube-proxy?

Kube-proxy runs on each and every node and does simple TCP/UDP packet forwarding across the backend network services. Basically, it is a network proxy that reflects the services configured in the Kubernetes API on each node. The Docker-link-compatible environment variables provide the cluster IPs and ports which are opened by the proxy.

19) Can you brief on the working of the master node in Kubernetes?

The Kubernetes master controls the nodes, and the containers are present inside the nodes. These individual containers are contained inside pods, and inside each pod you can have a varying number of containers based upon the configuration and requirements. If pods have to be deployed, they can be deployed either using the user interface or the command-line interface. Then these pods are scheduled on the nodes, and based on the resource requirements, the pods are allocated to those nodes. The kube-apiserver makes sure that communication is established between the Kubernetes nodes and the master components.

[Diagram: Kubernetes master]

20) What is the role of kube-apiserver and kube-scheduler?

The kube-apiserver follows a scale-out architecture and is the front end of the master node's control plane. It exposes all the APIs of the Kubernetes master node components and is responsible for establishing communication between Kubernetes nodes and the Kubernetes master components.

The kube-scheduler is responsible for the distribution and management of workload on the worker nodes. So, it selects the most suitable node to run the unscheduled pod based on resource requirement and keeps a track of resource utilization. It makes sure that the workload is not scheduled on nodes that are already full.

21) Can you brief about the Kubernetes controller manager?

Multiple controller processes run on the master node, but they are compiled together to run as a single process: the Kubernetes Controller Manager. So, the Controller Manager is a daemon that embeds the controllers and does namespace creation and garbage collection. It owns the responsibility of, and communicates with, the API server to manage the endpoints.

So, the different types of controller manager running on the master node are :

These include the node controller, the replication controller, the endpoints controller, and the service account & token controllers.

22) What are the different types of services in Kubernetes? 

The following are the different types of services used:

  • ClusterIP: the default type; exposes the service on a cluster-internal IP.
  • NodePort: exposes the service on a static port on each node's IP.
  • LoadBalancer: exposes the service externally using a cloud provider's load balancer.
  • ExternalName: maps the service to an external DNS name.

23) What is a load balancer in Kubernetes?

Load Balancing is one of the most common and standard ways of exposing the services. There are two types of load balancing in K8s and they are:

Internal load balancer – This type of balancer automatically balances loads and allocates the pods with the required incoming load.

External Load Balancer – This type of balancer directs the traffic from the external loads to backend pods.

23) What is Ingress network, and how does it work?

An Ingress network is a collection of rules that act as an entry point to the Kubernetes cluster. It allows inbound connections, which can be configured to expose services externally through reachable URLs, to load-balance traffic, or to offer name-based virtual hosting. So, Ingress is an API object that manages external access to the services in a cluster, usually over HTTP, and it is the most powerful way of exposing a service.

Now, let me explain to you the working of Ingress network with an example.

There are 2 nodes, each having pod and root network namespaces with a Linux bridge. In addition to this, a new virtual ethernet device called flannel0 (a network plugin) is added to the root network.

Now, suppose we want the packet to flow from pod1 to pod 4. Refer to the below diagram.

[Diagram: how a packet flows from pod1 to pod4]
  • The packet leaves pod1's network at eth0 and enters the root network at veth0.
  • Then it is passed on to cbr0, which makes an ARP request to find the destination, and it turns out that nobody on this node has the destination IP address.
  • So, the bridge sends the packet to flannel0, as the node's route table is configured with flannel0.
  • Now, the flannel daemon talks to the Kubernetes API server to learn all the pod IPs and their respective nodes, creating mappings from pod IPs to node IPs.
  • The network plugin wraps this packet in a UDP packet with extra headers, changing the source and destination IPs to the respective nodes, and sends this packet out via eth0.
  • Since the route table already knows how to route traffic between nodes, the packet is sent to the destination node2.
  • The packet arrives at eth0 of node2, goes to flannel0 for decapsulation, and is emitted back into the root network namespace.
  • Again, the packet is forwarded to the Linux bridge, which makes an ARP request to find out the IP that belongs to veth1.
  • The packet finally crosses the root network and reaches the destination Pod4.

24)  What do you understand by Cloud Controller Manager (CCM)?

The Cloud Controller Manager is responsible for persistent storage, network routing, abstracting the cloud-specific code from the core Kubernetes code, and managing communication with the underlying cloud services. It might be split out into several different containers depending on which cloud platform you are running on; this lets the cloud vendors' code and the Kubernetes code be developed without any interdependency. The cloud vendor develops their code and connects it to the Kubernetes cloud-controller-manager while running Kubernetes.

The various types of cloud controller manager are as follows:

These include the node controller, the route controller, and the service controller.

25) What is Container Resource Monitoring? What are Container Resource Monitoring Tools?

For users, it is really important to understand application performance and resource utilization at all the different abstraction layers. Kubernetes factored the management of the cluster by creating abstractions at different levels, such as container, pod, service, and whole cluster. Each of these levels can now be monitored, and this is what container resource monitoring means.

The various container resource monitoring tools are as follows:

Commonly used tools include cAdvisor, Heapster, Prometheus, Grafana, and InfluxDB.

26) What is the difference between a replica set and replication controller?

Replica Sets and Replication Controllers do almost the same thing. Both of them ensure that a specified number of pod replicas are running at any given time. The difference lies in the selectors used to replicate pods: Replica Sets use set-based selectors, while Replication Controllers use equality-based selectors.

  • Equality-based selectors: This type of selector allows filtering by label key and value. In layman's terms, an equality-based selector will only match pods that carry exactly the specified label.
    Example: Suppose your selector says app=nginx; then it matches only those pods whose label app equals nginx.
  • Set-based selectors: This type of selector allows filtering keys according to a set of values. In other words, a set-based selector will match pods whose label value is mentioned in the set.
    Example: Say your selector says app in (nginx, NPS, Apache). Then, if a pod's app label is equal to any of nginx, NPS, or Apache, the selector treats it as a match.

In simple words, the main difference between Replication Controllers and Replica Sets is the type of selector they support. Also note that Replication Controllers are effectively obsolete in recent versions of Kubernetes; Deployments, which manage Replica Sets, are recommended instead.
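For illustration, both selector styles can be exercised directly with kubectl (the label values here are just examples):

kubectl get pods -l app=nginx                     # equality-based selector
kubectl get pods -l 'app in (nginx,NPS,Apache)'   # set-based selector

The ReplicaSet manifest below uses a matchLabels selector: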

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3

27) What is a Headless Service?

A headless Service is similar to a 'normal' Service but does not have a cluster IP. It enables you to reach the pods directly, without the need to access them through a proxy.
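A minimal sketch of a headless Service (the selector and port values are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None      # this is what makes the Service headless
  selector:
    app: some-app
  ports:
  - port: 80
    targetPort: 80

A DNS lookup of my-headless-service then returns the IPs of the matching pods directly.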

28) What are the best security measures that you can take while using Kubernetes?

The following are the best security measures that you can follow while using Kubernetes:

Some of the best practices include: enabling Role-Based Access Control (RBAC), restricting access to etcd and the kubelet, defining resource quotas, using network policies to segment traffic, running containers as a non-root user, scanning images for vulnerabilities, enabling audit logging, and keeping the cluster version up to date.

29) What are federated clusters?

Multiple Kubernetes clusters can be managed as a single cluster with the help of federated clusters. So, you can create multiple Kubernetes clusters within a data center/cloud and use federation to control/manage them all at one place.

The federated clusters achieve this by doing the following two things:

  • Sync resources across clusters: keeping resources (such as Deployments) in multiple clusters in sync.
  • Cross-cluster discovery: automatically configuring DNS servers and load balancers with backends from all clusters.

30) Scenario-Based Interview Questions you may face in interviews

This section of questions will consist of various scenario-based questions that you may face in your interviews.

Scenario 1: Suppose a company built on monolithic architecture handles numerous products. Now, as the company expands in today’s scaling industry, their monolithic architecture started causing problems.

How do you think the company shifted from monolithic to microservices and deploy their services containers?

Solution:

As the company's goal is to shift from its monolithic application to microservices, it can build the system piece by piece, in parallel, and simply switch configurations in the background. Each of these newly built microservices can then be put on the Kubernetes platform. The company can start by migrating one or two services at a time and monitoring them to make sure everything runs stably. Once everything looks good, the rest of the application can be migrated into the Kubernetes cluster.

Scenario 2: Consider a multinational company with a very much distributed system, with a large number of data centers, virtual machines, and many employees working on various tasks.

How do you think can such a company manage all the tasks in a consistent way with Kubernetes?

Solution:

As we all know, I.T. departments launch thousands of containers, with tasks running across numerous nodes across the world in a distributed system.

In such a situation the company can use something that offers them agility, scale-out capability, and DevOps practice to the cloud-based applications.

So, the company can use Kubernetes to customize its scheduling architecture and support multiple container formats. This makes affinity between container tasks possible, which gives greater efficiency, along with extensive support for various container networking solutions and container storage.

Scenario 3: Consider a situation, where a company wants to increase its efficiency and the speed of its technical operations by maintaining minimal costs.

How do you think the company will try to achieve this?

Solution:

The company can implement the DevOps methodology by building a CI/CD pipeline, but one problem that may occur here is that the configurations may take time to get up and running. So, after implementing the CI/CD pipeline, the company's next step should be to work in a cloud environment. Once it starts working in the cloud environment, it can schedule containers on a cluster and orchestrate them with the help of Kubernetes. This kind of approach will help the company reduce its deployment time and also move faster across various environments.

Scenario 4: Suppose a company wants to revise its deployment methods and build a platform that is much more scalable and responsive.

How do you think this company can achieve this to satisfy their customers?

Solution:

In order to give millions of clients the digital experience they expect, the company needs a platform that is scalable and responsive, so that data quickly reaches the client website. To do this, the company should move from its private data centers (if it is using any) to a cloud environment such as AWS. It should also adopt a microservice architecture so that it can start using Docker containers. Once the base framework is ready, the company can start using the best orchestration platform available, i.e., Kubernetes. This enables the teams to be autonomous in building applications and delivering them very quickly.

Scenario 5: Consider a multinational company with a very much distributed system, looking forward to solving the monolithic code base problem.

How do you think the company can solve their problem?

Solution

Well, to solve the problem, the company can shift its monolithic code base to a microservice design, and then each microservice can be packaged as a container. All these containers can then be deployed and orchestrated with the help of Kubernetes.

Scenario 6: All of us know that the shift from monolithic to microservices solves the problem from the development side, but increases the problem at the deployment side.

How can the company solve the problem on the deployment side?

Solution

The team can experiment with container orchestration platforms such as Kubernetes and run them in their data centers. With this, the company can generate a templated application, deploy it within five minutes, and have actual instances containerized in the staging environment at that point. Such a Kubernetes project will have dozens of microservices running in parallel to improve the production rate; even if a node goes down, the pods on it can be rescheduled immediately without performance impact.

Scenario 7:  Suppose a company wants to optimize the distribution of its workloads, by adopting new technologies.

How can the company achieve this distribution of resources efficiently?

Solution

The solution to this problem is none other than Kubernetes. Kubernetes makes sure that the resources are optimized efficiently, and only those resources are used which are needed by that particular application. So, with the usage of the best container orchestration tool, the company can achieve the distribution of resources efficiently.

Scenario 8: Consider a carpooling company that wants to increase its number of servers while simultaneously scaling its platform.

How do you think will the company deal with the servers and their installation?

Solution

The company can adopt the concept of containerization. Once it deploys all of its application into containers, it can use Kubernetes for orchestration and container monitoring tools like Prometheus to monitor the actions in the containers. Such usage of containers gives better capacity planning in the data center, because the abstraction between the services and the hardware they run on imposes fewer constraints.

Scenario 9: Consider a scenario where a company wants to provide all the required hand-outs to customers that have various environments.

How do you think they can achieve this critical target in a dynamic manner?

Solution

The company can use Docker environments and put together a cross-sectional team to build a web application using Kubernetes. This kind of framework will help the company get the required things into production within the shortest time frame. With such a machinery running, the company can give the hand-outs to all the customers that have various environments.

Scenario 10: Suppose a company wants to run various workloads on different cloud infrastructure from bare metal to a public cloud.

How will the company achieve this in the presence of different interfaces?

Solution

The company can decompose its infrastructure into microservices and then adopt Kubernetes. This will let the company run various workloads on different cloud infrastructures.

31) List out some important Kubectl commands?

The important Kubectl commands are:

  • kubectl annotate
  • kubectl cluster-info
  • kubectl attach
  • kubectl apply
  • kubectl config
  • kubectl autoscale
  • kubectl config current-context
  • kubectl config set
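For illustration (the resource names and the file are examples only):

kubectl cluster-info                             # show cluster endpoint information
kubectl apply -f deployment.yaml                 # create/update resources from a manifest
kubectl annotate pod mypod owner=team-a          # add an annotation to a pod
kubectl autoscale deployment myapp --min=2 --max=5 --cpu-percent=80
kubectl config current-context                   # show the active kubeconfig context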

32) Explain the types of Kubernetes pods usage patterns?

There are two types of pods in Kubernetes:

  • Single-container pods: can be created with the kubectl run command.
  • Multi-container pods: can be created with the kubectl create command and a manifest file that lists multiple containers.

33) What are the labels in Kubernetes?

Labels are key-value pairs attached to objects such as pods, replication controllers, and services. Generally, labels are added to an object at creation time, but they can be modified by users at run time.

34) What are the objectives of the replication controller?

The objectives of the replication controller are:

  • It is responsible for controlling and administering the pod lifecycle.
  • It monitors and verifies whether the allowed number of replicas are running or not.
  • The replication controller helps the user to check the pod status.
  • It enables the user to alter a pod; a pod can be rescheduled or repositioned as required.

35) What do you mean by persistent volume?

A persistent volume is a storage unit in the cluster that is provisioned and controlled by an administrator. Its lifecycle is independent of any individual pod that uses it.
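A minimal sketch of a PersistentVolume (the capacity, access mode, and hostPath are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/example-pv    # hostPath is only suitable for single-node test clusters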

36) What are Secrets in Kubernetes?

Secrets hold sensitive information, such as a user's login credentials. They are Kubernetes objects that store sensitive data like usernames and passwords in base64-encoded form (and can additionally be encrypted at rest if the cluster is configured for it).

37) What is Sematext Docker Agent?

Sematext Docker agent is a log collection agent with events and metrics. It runs as a small container in each Docker host. These agents gather metrics, events, and logs for all cluster nodes and containers.

38) Define Red Hat OpenShift?

OpenShift is a public cloud application development and hosting platform developed by Red Hat. It offers automation for management so that developers can focus on writing the code.

39) Mention the difference between Docker volumes and Kubernetes Volumes

  • Kubernetes volumes: A volume is not limited to any single container; Kubernetes volumes are shared by, and support, all containers deployed in a pod.
  • Docker volumes: A volume is limited to a single container; Docker volumes do not automatically span all containers.

40) What are the ways to provide API-Security on Kubernetes?

The ways to provide API-Security on Kubernetes are:

  • Use the correct authorization mode for the API server, e.g., --authorization-mode=Node to enable the Node authorizer.
  • Make the kubelet protect its API via --authorization-mode=Webhook.
  • Ensure the kube-dashboard uses a restrictive RBAC (Role-Based Access Control) policy.

41) What is ContainerCreating pod?

A pod in the ContainerCreating state is one that has been scheduled on a node but cannot start up properly.

42) What are the types of Kubernetes Volume?

The types of Kubernetes Volume are:

  • emptyDir
  • gcePersistentDisk
  • flocker
  • hostPath
  • nfs
  • iscsi
  • rbd
  • persistentVolumeClaim
  • downwardAPI

43) Explain PVC?

PVC stands for Persistent Volume Claim. It is a request for storage made to Kubernetes on behalf of a pod; the user does not need to know the underlying provisioning. The claim must be created in the same namespace as the pod that uses it.
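A minimal sketch of a PVC (the name and requested size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

A pod then mounts it by referring to spec.volumes[].persistentVolumeClaim.claimName: example-pvc.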

44) What is the Kubernetes Network Policy?

A Network Policy defines how groups of pods in the same namespace are allowed to communicate with each other and with other network endpoints.
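A minimal sketch of a NetworkPolicy that allows ingress to app=db pods only from app=backend pods (the labels are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend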

45) What is Kubernetes proxy service role?

The Kubernetes proxy service (kube-proxy) is a service which runs on each node and helps make services available to external hosts.

46) How can containers within a pod communicate with each other?

Containers within a pod share the same network namespace and can reach each other on localhost. For instance, if a pod has two containers, a MySQL container listening on port 3306 and a PHP container listening on port 80, the PHP container can access the MySQL one through localhost:3306.
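A minimal sketch of such a two-container pod (images and the password value are illustrative placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: php-mysql-pod
spec:
  containers:
  - name: mysql
    image: mysql:8
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "example-password"   # placeholder; use a Secret in practice
    ports:
    - containerPort: 3306
  - name: php
    image: php:apache
    ports:
    - containerPort: 80
    # the PHP app can reach the database at localhost:3306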

47) What does a Pod do?

Pods represent the processes running on a cluster. By limiting pods to a single process, Kubernetes can report on the health of each process running in the cluster. Pods have:

  • a unique IP address (which allows them to communicate with each other)
  • persistent storage volumes (as required)
  • configuration information that determines how a container should run.

Although most pods contain a single container, many will have a few containers that work closely together to execute a desired function.

48) Explain when to use Docker vs Docker Compose vs Docker Swarm vs Kubernetes?

  • Docker is a container engine; it lets you build and run containers, usually one or a few at a time, locally on your PC for development purposes.
  • Docker Compose is a Docker utility to run multiple containers and let them share volumes and networking via the docker engine features, runs locally to emulate service composition and remotely on clusters. Docker Compose is mostly used as a helper when you want to start multiple Docker containers and don’t want to start each one separately using docker run ….
  • Docker Swarm is for running and connecting containers on multiple hosts. It does things like scaling, starting a new container when one crashes, networking containers.
  • Kubernetes is a container orchestration platform; it takes care of running containers and enhancing the engine features so that containers can be composed and scaled to serve complex applications (a sort of PaaS, managed by you or a cloud provider). Kubernetes' goal is very similar to that of Docker Swarm, but it was developed by Google.

49) What are namespaces? What is the problem with using one default namespace?

Namespaces allow you to split your cluster into virtual clusters, where you can group your applications in a way that makes sense and is completely separated from the other groups (so you can, for example, create an app with the same name in two different namespaces).

  • When using the default namespace alone, it becomes hard over time to get an overview of all the applications you manage in your cluster. Namespaces make it easier to organize the applications into groups that makes sense, like a namespace of all the monitoring applications and a namespace for all the security applications, etc.
  • Namespaces can also be useful for managing Blue/Green environments where each namespace can include a different version of an app and also share resources that are in other namespaces (namespaces like logging, monitoring, etc.).
  • Another use case for namespaces is one cluster shared by multiple teams. When multiple teams use the same cluster, they might end up stepping on each other's toes; for example, if two teams create an app with the same name, one team overrides the other team's app, because there cannot be two apps with the same name in the same namespace.
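For illustration (the namespace name is an example):

kubectl create namespace monitoring          # create a namespace
kubectl get pods -n monitoring               # list pods in that namespace
kubectl apply -f app.yaml -n monitoring      # deploy a manifest into it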

50) What does it mean that “pods are ephemeral”?

Pods are ephemeral. They are not designed to run forever, and when a Pod is terminated it cannot be brought back. In general, Pods do not disappear until they are deleted by a user or by a controller.

Pods do not “heal” or repair themselves. For example, if a Pod is scheduled on a node which later fails, the Pod is deleted. Similarly, if a Pod is evicted from a node for any reason, the Pod does not replace itself.

51) What happens when a master fails? What happens when a worker fails?

Kubernetes is designed to be resilient to any individual node failure, master or worker.

When a master fails, the nodes of the cluster keep operating, but there can be no changes, including pod creation or service membership changes, until the master is available again.

When a worker fails, the master stops receiving messages from it. If the master does not receive status updates from the worker, the node is marked as NotReady. If a node stays NotReady for 5 minutes (the default), the master reschedules all pods that were running on the dead node onto other available nodes.

52) What is a StatefulSet in Kubernetes?

When using Kubernetes, most of the time you don’t care how your pods are scheduled, but sometimes you care that pods are deployed in order, that they have a persistent storage volume, or that they have a unique, stable network identifier across restarts and reschedules. In those cases, StatefulSets can help you accomplish your objective. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.

StatefulSets are valuable for applications that require one or more of the following.

  • Stable, unique network identifiers.
  • Stable, persistent storage.
  • Ordered, graceful deployment and scaling.
  • Ordered, automated rolling updates.

53) What is a DaemonSet?

DaemonSets are used in Kubernetes when you need to run one or more pods on all (or a subset of) the nodes in a cluster. The typical use case for a DaemonSet is logging and monitoring for the hosts. For example, a node needs a service (daemon) that collects health or log data and pushes them to a central system or database.

As the name suggests you can use daemon sets for running daemons (and other tools) that need to run on all nodes of a cluster. These can be things like cluster storage daemons (e.g. Quobyte, glusterd, ceph, etc.), log collectors (e.g. fluentd or logstash), or monitoring daemons (e.g. Prometheus Node Exporter, collectd, New Relic agent, etc.)
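A minimal sketch of a DaemonSet that runs a log-collector pod on every node (the image is illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      containers:
      - name: log-collector
        image: fluentd:latest    # illustrative image
        # exactly one pod from this template runs on every schedulable node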

54) When to use StatefulSet?

Some examples of reasons you’d use a StatefulSet include:

  • A Redis pod that has access to a volume, but you want it to maintain access to the same volume even if it is redeployed or restarted
  • A Cassandra cluster and have each node maintain access to its data
  • A webapp that needs to communicate with its replicas using known predefined network identifiers
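A minimal StatefulSet sketch covering the Redis example above (image, storage size, and names are illustrative):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis           # headless Service that gives each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:        # each pod gets its own PVC (data-redis-0, data-redis-1, ...)
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi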

55) How do you do maintenance activity on a K8s node?

Whenever security patches are available, the Kubernetes administrator has to perform maintenance to apply them to the running nodes and prevent vulnerabilities; this is often an unavoidable part of administration. The following two commands are useful to safely drain a K8s node:

  • kubectl cordon <node name>
  • kubectl drain <node name> --ignore-daemonsets

The first command moves the node to maintenance mode (makes it unschedulable); kubectl drain then evicts the pods from the node. After the drain command succeeds, you can perform the maintenance.

Note: If you wish to perform maintenance on a single node, the following two commands can be issued in order:

  • kubectl get nodes: to list all the nodes
  • kubectl drain <node name>: to drain a particular node
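A typical end-to-end sequence looks like this (the node name is illustrative); kubectl uncordon returns the node to service afterwards:

kubectl get nodes                            # find the node to patch
kubectl cordon node-1                        # mark node-1 unschedulable
kubectl drain node-1 --ignore-daemonsets     # evict pods (DaemonSet pods stay)
# ... perform the maintenance / apply patches ...
kubectl uncordon node-1                      # make node-1 schedulable again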

56) How do we control the resource usage of POD?

Resource usage of a pod can be controlled with requests and limits.

Request: the amount of resources requested for a container. If a container exceeds its request for resources, it can be throttled back down to its request.

Limit: an upper cap on the resources a single container can use. If it tries to exceed this predefined limit, it can be terminated if Kubernetes decides that another container needs the resources. If you are sensitive to pod restarts, it makes sense to have the sum of all container resource limits be equal to or less than the total resource capacity of your cluster.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: basic-tech-info-demo   # pod names must be valid DNS labels (no spaces)
spec:
  containers:
  - name: example1
    image: example/example1
    resources:
      requests:
        memory: "64Mi"    # illustrative values
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

57) How to get the central logs from POD?

The architecture depends upon the application and many other factors. The following are the common logging patterns:

  • Node level logging agent.
  • Streaming sidecar container.
  • Sidecar container with the logging agent.
  • Export logs directly from the application.

In one common setup, journalbeat and filebeat run as DaemonSets. The logs they collect are pushed to a Kafka topic, which is eventually consumed by an ELK stack.

The same can be achieved using an EFK stack with Fluent Bit.

58) How to turn the service defined below in the spec into an external one?

spec:
  selector:
    app: some-app
  ports:
    - protocol: UDP
      port: 8080
      targetPort: 8080

Explanation – Add type: LoadBalancer and a nodePort as follows:

spec:
 selector:
   app: some-app
 type: LoadBalancer
 ports:
   - protocol: UDP
     port: 8080
     targetPort: 8080
     nodePort: 32412

59) Complete the following configuration spec file to make it an Ingress

metadata:
  name: someapp-ingress
spec:

Explanation – This is one of several ways to answer this question:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someapp-ingress
spec:
  rules:
  - host: my.host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: someapp-internal-service
            port:
              number: 8080

60) How to configure TLS with Ingress?

Add tls and secretName entries.

spec:
 tls:
 - hosts:
   - some_app.com
   secretName: someapp-secret-tls
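The referenced Secret must exist; it can be created from a certificate and key file like so (file names are illustrative):

kubectl create secret tls someapp-secret-tls --cert=tls.crt --key=tls.key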

61) In the following file, which service, and in which namespace, is being referred to?

apiVersion: v1
kind: ConfigMap
metadata:
  name: some-configmap
data:
  some_url: thebasictechinfo.example

Answer – It references the service "thebasictechinfo" in the namespace called "example".

62) How to run Kubernetes locally?

Kubernetes can be set up locally using the Minikube tool. It runs a single-node cluster in a VM on the computer, which makes it a perfect option for users who have just started learning Kubernetes.
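For illustration:

minikube start         # create and start a local single-node cluster
kubectl get nodes      # verify the cluster is up
minikube dashboard     # open the Kubernetes dashboard (optional)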

63) What does the following in a Deployment configuration file mean?

spec:
  containers:
  - name: some-container    # container name and image added to make the spec valid
    image: some-image
    env:
    - name: USER_PASSWORD
      valueFrom:
        secretKeyRef:
          name: some-secret
          key: password

Explanation – The USER_PASSWORD environment variable will hold the value of the password key in the Secret called "some-secret". In other words, it references a value from a Kubernetes Secret.

64) How to troubleshoot if the POD is not getting scheduled?

In K8s, the scheduler is responsible for placing pods onto nodes. Many factors can lead to an unschedulable pod; the most common one is running out of resources. Use commands like kubectl describe pod <pod-name> -n <namespace> to see the reason why the pod was not scheduled. Also, keep an eye on kubectl get events to see all events coming from the cluster.
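For illustration (pod and namespace names are examples):

kubectl describe pod mypod -n myns    # the Events section shows scheduling failures
kubectl get events -n myns --sort-by=.metadata.creationTimestamp
kubectl top nodes                     # needs metrics-server; shows node resource usage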

65) How to run a POD on a particular node?

Various methods are available to achieve it.

  • nodeName: Specify the name of a node in the pod spec configuration; the pod will then run only on that specific node.
  • nodeSelector: Assign a specific label to the node which has the special resources, and use the same label in the pod spec so that the pod runs only on that node (see the sketch below).
  • nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution are the hard and soft requirements for running a pod on specific nodes. Node affinity is intended to replace nodeSelector over time and depends on node labels.
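A minimal nodeSelector sketch (the disktype=ssd label is illustrative and must first be applied with kubectl label nodes <node name> disktype=ssd):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd
spec:
  nodeSelector:
    disktype: ssd      # schedule only on nodes carrying this label
  containers:
  - name: nginx
    image: nginx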

66) What are the different ways to provide external network connectivity to K8?

By default, a pod can reach the external network, but for traffic in the opposite direction we need to make some changes. The following options are available to connect to a pod from the outer world:

  • Nodeport (it will expose one port on each node to communicate with it)
  • Load balancers (L4 layer of TCP/IP protocol)
  • Ingress (L7 layer of TCP/IP Protocol)

Another method is to use kubectl proxy, which can expose a service that has only a cluster IP on a local port:

$ kubectl proxy --port=8080
$ curl http://localhost:8080/api/v1/proxy/namespaces/<NAMESPACE>/services/<SERVICE-NAME>:<PORT>/

67) How can we forward the port 8080 (container) -> 8080 (service) -> 8080 (ingress) -> 80 (browser) and how it can be done?

The Ingress exposes port 80 externally for the browser to access and connects to a Service that listens on 8080; the Ingress listens on port 80 by default. An "ingress controller" is a pod that receives external traffic and implements the Ingress rules; it is configured by an Ingress resource. For this you need to configure the ingress selector, and if no ingress controller selector is mentioned, no ingress controller will manage the Ingress.

A simple Ingress config will look like:

host: the.basic.tech.info
http:
  paths:
  - path: /
    pathType: Prefix
    backend:
      service:
        name: abc-service
        port:
          number: 8080

Then the Service will look like:

kind: Service
apiVersion: v1
metadata:
  name: abc-service
spec:
  ports:
  - protocol: TCP
    port: 8080          # port on which the Service listens
    targetPort: 8080

68) What is the difference between config map and secret?

Config maps ideally store application configuration in plain text, whereas Secrets store sensitive data such as passwords in base64-encoded form. Both config maps and Secrets can be used as volumes and mounted inside a pod through a pod definition file.

Config map:

 kubectl create configmap myconfigmap --from-literal=env=dev

Secret:

echo -n 'admin' > ./username.txt
echo -n 'abcd1234' > ./password.txt
kubectl create secret generic mysecret --from-file=./username.txt --from-file=./password.txt

69) What is the difference between a Pod and a Job? Differentiate the answers as with examples?

A Pod always ensures that a container keeps running, whereas a Job ensures that its pods run to completion; a Job is meant for a finite task.

Examples:

kubectl run mypod1 --image=nginx --restart=Never
kubectl run mypod2 --image=nginx --restart=OnFailure

○ → kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
mypod1   1/1     Running   0          59s

○ → kubectl get job
NAME     DESIRED   SUCCESSFUL   AGE
mypod2   1         0            19s
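A declarative equivalent of a finite task as a Job manifest (the pi computation is the standard illustrative example):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job
spec:
  template:
    spec:
      restartPolicy: OnFailure    # Job pods must not use restartPolicy: Always
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(200)"]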

70) How do you deploy a feature with zero downtime in Kubernetes?

By default, a Deployment in Kubernetes uses RollingUpdate as its strategy. Let's take an example that creates a deployment in Kubernetes:

kubectl run nginx --image=nginx    # creates a deployment

○ → kubectl get deploy
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            0           7s

Now let's assume we are going to update the nginx image:

kubectl set image deployment nginx nginx=nginx:1.15    # updates the image

Now when we check the replica sets:

kubectl get replicasets    # get replica sets
NAME               DESIRED   CURRENT   READY   AGE
nginx-65899c769f   0         0         0       7m
nginx-6c9655f5bb   1         1         1       13s

From the above, we can notice that one more replica set was added and then the other replica set was brought down.

kubectl rollout status deployment nginx 
# check the status of a deployment rollout

kubectl rollout history deployment nginx
 # check the revisions in a deployment

○ → kubectl rollout history deployment nginx
deployment.extensions/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

71) Having a Pod with two containers, can they ping each other, for example by using the container name?

Containers in the same pod act as if they are on the same machine: every container in a pod shares the same IP and network namespace, so they are localhost to each other, and you can ping localhost inside a pod (you address the peer container by port, not by container name). Cross-pod discovery works like this: Component A's pods -> Service of Component B -> Component B's pods. Services have domain names of the form servicename.namespace.svc.cluster.local, and the DNS search path of pods includes this by default, so a pod in namespace Foo can find a Service bar in the same namespace Foo by simply connecting to 'bar'.

72) What is ETCD?

Etcd is written in the Go programming language and is a distributed key-value store used for coordinating distributed work. Etcd stores the configuration data of the Kubernetes cluster, representing the state of the cluster at any given point in time.

73) Does a rolling update with StatefulSet replicas=1 make sense?

No. Because there is only 1 replica, any change to the StatefulSet results in an outage: a rolling update of a StatefulSet needs to tear down the old pod before replacing it. With 2 replicas backed by a persistent disk, a rolling update may create the second pod but it will not succeed, because the disk is still locked by the first (old) running pod, and the rolling update does not delete the first pod in time to release the lock for the second pod to use it. With a single replica, the rolling update goes 1 -> 0 -> 1. If the app can run with multiple identical instances concurrently, use a Deployment and roll 1 -> 2 -> 1 instead.

74) Is there any other way to update configmap for deployment without pod restarts?

You need to have some way of triggering the reload, and it depends on how the ConfigMap is consumed by the container. If it is exposed as environment variables, the container does not pick up changes: the values stay at the old value until the container is restarted. If the ConfigMap is mounted as a volume, the projected file is updated dynamically (periodically, not in real time) without a container restart, but the application still has to notice that the config on disk has changed and reload it, for example by checking every minute, by watching the file with inotify, or by exposing a reload endpoint in its API.

75) Do rolling updates declared with a deployment take effect if I manually delete pods of the replica set with kubectl delete pods or with the dashboard? Will the minimum required a number of pods be maintained?

Yes, the control plane will make sure (as long as you have the required resources) that the desired number of pods is met; if you delete a pod, the ReplicaSet recreates it. Also, deleting a Service won't delete the ReplicaSet; if you remove a Service or Deployment, you should remove all the resources that were created with it. Note that having a single replica for a deployment is usually not recommended, because you cannot scale out and the single pod has to be treated in a special way.

Any app should typically be structured as 'Ingress' -> 'Service' -> 'Deployment' -> (volume mount or third-party cloud storage).

You can skip the Ingress and just have 'LoadBalancer (Service)' -> 'Deployment' (or a bare Pod, but bare Pods don't restart automatically; Deployments do).

76) In Kubernetes, a Pod is running 2 containers; when one container stops, the other container is still running. On this event, how do I terminate this Pod?

You need to add liveness and readiness probes that query each container; if a probe fails, the entire pod is restarted. Add a liveness probe that calls an API on the other container which returns 200, keeping in mind that both liveness and readiness probes run in an infinite loop. For example, if container X depends on Y, add a liveness probe in X that checks the health of Y. Both readiness and liveness probes always run after the container has been started, and the kubelet component performs the checks. Set initialDelaySeconds appropriately; it can be anything from a few seconds to a few minutes, depending on app start time. Below is the configuration spec:

livenessProbe:
  httpGet:
    path: /path/test/
    port: 10000
  initialDelaySeconds: 30
  timeoutSeconds: 5

readinessProbe:
  httpGet:
    path: /path/test/
    port: 10000
  initialDelaySeconds: 30
  timeoutSeconds: 5

77) Can a DaemonSet be set to listen on a specific interface, given that the Anycast IP will be assigned to a network interface alias?

Yes. hostNetwork for the DaemonSet gets you onto the host, so an interface with an Anycast IP should work; you'll have to proxy the data through the DaemonSet. A DaemonSet allows you to run pods on the host network, so Anycast is possible. At the risk of being pedantic, any pod can be specified to run on the host network; the only thing special about a DaemonSet is that you get exactly one pod per host. Most of the issues with respect to IP space are solved by DaemonSets; kube-proxy, for example, runs as a DaemonSet, and the node has to be Ready for the kube-proxy DaemonSet pod to be up.

78) If you have multiple containers in a Deployment file, does the HorizontalPodAutoscaler scale all of the containers?

Yes, it would scale all of them. Internally, the Deployment creates a ReplicaSet (which does the scaling), and that ReplicaSet creates a set number of pods; each pod is what actually holds both of those containers. If you want to scale the containers independently, they should be in separate pods (and therefore separate ReplicaSets, Deployments, etc.). For the HPA to work, you need to specify the min and max replicas and the threshold percentage of CPU (and/or memory) at which your pods should autoscale. Instead of manually running kubectl autoscale deployment, you can use a YAML file like the one below:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: app
spec:
  maxReplicas: 15
  minReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1    # the target is a Deployment, so apps/v1
    kind: Deployment
    name: app
  targetCPUUtilizationPercentage: 70

79) Do deployments with more than one replica automatically perform rolling updates when a new deployment config is applied?

Yes. A Deployment updates pods in a rolling-update fashion when .spec.strategy.type==RollingUpdate, and you can specify maxUnavailable and maxSurge to control the rolling-update process; rolling update is the default deployment strategy. kubectl rolling-update updates pods and ReplicationControllers in a similar fashion, but Deployments are recommended, since they are declarative and have additional features, such as rolling back to any previous revision even after the rolling update is done. For rolling updates to work as one would expect, a readiness probe is essential. Redeploying deployments is easy, but rolling updates do it nicely without any downtime. The way to define a rolling update for a Deployment and kubectl apply it is shown below:

spec:
  minReadySeconds: 180
  replicas: 9
  revisionHistoryLimit: 20
  selector:
    matchLabels:
      deployment: standard
      name: standard-pod
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate

80) If a pod exceeds its memory “limit” what signal is sent to the process?

SIGKILL: the container is immediately terminated with an OOM error and a new one is spawned. If you use cgroup-based containerization (docker, rkt, etc.), the OS does the OOM killing; Kubernetes simply sets the cgroup limits and is not ultimately responsible for killing the processes. For ordinary termination, SIGTERM is sent to PID 1, and Kubernetes waits for terminationGracePeriodSeconds (default 30 seconds) before sending SIGKILL; you can change that time with terminationGracePeriodSeconds in the pod spec. As long as your container will eventually exit, it is fine to have a long grace period. If you want a graceful restart, it has to be handled inside the pod. If you don't want the process killed, you shouldn't set a memory limit on the pod, and there is no way to disable OOM killing for the whole node. Also, when the liveness probe fails, the container gets SIGTERM and then SIGKILL after the grace period.

81) Let’s say a Kubernetes job should finish in 40 seconds, however on a rare occasion it takes 5 minutes, How can I make sure to stop the application if it exceeds more than 40 seconds?

When we create a job spec, we can set the activeDeadlineSeconds field. This field relates to the running duration of the job: once the job reaches the threshold specified by the field, the job (and its pods) will be terminated.

kind: CronJob
apiVersion: batch/v1
metadata:
  name: mycronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
      name: google-check-job
    spec:
      activeDeadlineSeconds: 200
      template:
        metadata:
          name: mypod
        spec:
          restartPolicy: OnFailure
          containers:
          - name: mycontainer
            image: alpine
            command: ["/bin/sh"]
            args: ["-c", "ping -w 1 google.com"]

82) How do you test a manifest without actually executing it?

Use the --dry-run flag (on recent kubectl versions, --dry-run=client) to test the manifest. This is really useful not only to ensure that the YAML syntax is right for a particular Kubernetes object, but also to ensure that a spec has the required key-value pairs.

kubectl create -f <test.yaml> --dry-run=client

Let us now look at an example Pod spec that will launch an nginx pod

○ → cat example_pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  namespace: mynamespace
spec:
  containers:
    - name: my-nginx
      image: nginx
○ → kubectl create -f example_pod.yaml --dry-run=client
pod/my-nginx created (dry run)

83) How do you initiate a rollback for an application?

Rollbacks and rolling updates are features of the Deployment object in Kubernetes. We roll back to an earlier Deployment revision if the current state of the Deployment is not stable due to the application code or the configuration. Each rollback updates the revision of the Deployment:

○ → kubectl get deploy
NAME    DESIRED  CURRENT UP-TO-DATE   AVAILABLE AGE
nginx   1  1 1            1 15h
○ → kubectl rollout history deploy nginx
deployment.extensions/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
○ → kubectl rollout undo deploy nginx
deployment.extensions/nginx
○ → kubectl rollout history deploy nginx
deployment.extensions/nginx
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
We can check the history of the changes at any time with the below command:

kubectl rollout history deploy <deploymentname>
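
You can also roll back to a specific revision rather than just the previous one:

kubectl rollout undo deploy nginx --to-revision=1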

84) How do you package Kubernetes applications?

Helm is a package manager which allows users to package, configure, and deploy applications and services to the Kubernetes cluster.

helm init  # Helm 2 only: this command creates a deployment in the cluster that installs Tiller, the server side of Helm (Helm 3 removed Tiller entirely)

The packages we install through the client are called charts. They are bundles of templatized manifests. In Helm 2, all the templating work is done by Tiller; in Helm 3 it happens client-side.

helm search redis # searches for a specific application
helm install stable/redis # installs the application
helm ls # list the applications 
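
The rough Helm 3 equivalents, which need no Tiller (the bitnami repository is one common source of the redis chart):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo redis        # searches configured repositories for a chart
helm install my-redis bitnami/redis
helm list                     # list installed releases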

85) What are init containers?

Generally, in Kubernetes, a pod can have many containers. An init container runs to completion before any of the other containers in the pod start. Originally init containers were declared through a beta annotation; since Kubernetes 1.6 they are declared with the spec.initContainers field, as below:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']

86) What is node affinity and pod affinity?

  1. Node Affinity ensures that pods are scheduled onto particular nodes.
  2. Pod Affinity ensures that two pods are co-located on the same node (or, more generally, in the same topology domain).

Node Affinity

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1

Pod Affinity

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone

The pod affinity rule says that the pod can be scheduled onto a node only if that node is in the same zone as at least one already-running pod that has a label with key “security” and value “S1” (the required topologyKey field is what defines “same zone”).

87) How do you drain the traffic from a Pod during maintenance?

When we take a node down for maintenance, the pods inside that node also take a hit. However, we can avoid this by using the below command:

kubectl drain <nodename>

When we run the above command, it marks the node unschedulable for new pods; then the existing pods are evicted if the API Server supports eviction, otherwise it deletes the pods.

Once the node is up and running again and you want to add it back into rotation, we can run the below command:

kubectl uncordon <nodename>

Note: If you prefer not to use kubectl drain (such as to avoid calling to an external command, or to get finer control over the pod eviction process), you can also programmatically cause evictions using the eviction API.

88) I have one pod with two containers running inside it: one is Nginx and the other is WordPress. How can I access these two containers from the browser with an IP address?

Just do a port-forward for each container's port: kubectl port-forward [pod-name] 8080:80 for Nginx and kubectl port-forward [pod-name] 8081:[wordpress-port] for WordPress. Note that the two containers share the pod's network namespace, so they must listen on different ports inside the pod.

To make it permanent, you need to expose those ports through a NodePort Service. Whenever you do kubectl port-forward, it adds a rule to the firewall to allow that traffic across nodes, but by default that isn't allowed, since flannel or the firewall probably blocks it. kubectl proxy tries to connect over the network of the apiserver host, as you correctly found; port-forward, on the other hand, is a mechanism that the node's kubelet exposes over its own API.

89) If I have multiple containers running inside a pod, how do I make one container wait for another to start before starting itself?

One way is init containers: they are for one-shot tasks that start, run, and end, all before the next init container or the main containers start. But if a client in one container wants to consume resources exposed by a server in another container, and that server ever crashes or is restarted, the client will need to retry connections anyway, so the client may as well always retry, even if the server isn't up yet. Another way is the sidecar pattern, where one container is the main one and the other containers expose metrics, logs, an encrypted tunnel, or some such; in these cases, the other containers can be killed when the main one is done, crashed, or evicted.

90) What is the impact of upgrading kubelet if we leave the pods on the worker node – will it break running pods? why?

Restarting the kubelet, which has to happen for an upgrade, can cause the Pods on the node to stop and be started again. It's generally better to drain a node first, because that way Pods can be gracefully migrated and things like Disruption Budgets can be honored. The kubelet keeps track of the state of all running pods; when it goes away, the containers don't necessarily die, but when it comes back up it may kill them to create a clean slate. As the kubelet communicates with the apiserver, if something happens in the middle of the upgrade process, rescheduling of pods may take place and health checks may fail during the process. During the restart, the kubelet will stop querying the API, so it won't start or stop containers, and metrics collectors (such as Heapster) won't be able to fetch system metrics from cAdvisor. Just make sure it's not down for too long, or the node will be marked NotReady and its pods evicted.

91) How do you create a Service that selects apps based on a label and has an externalIP?

A Service selects pods based on labels, so if no pods have the appropriate labels, the Service has nothing to route to; the labels can be anything you like. Since all pod names should be unique, you can just set the labels to the pod name. Since StatefulSets create the same pod template multiple times, they won't be configured with distinct labels you could use to point disparate services at the correct pod; if you give the pods their own labels manually, it will work. The .yaml file of the Grafana dashboard Service below shows a label selector together with an externalIP:

apiVersion: v1
kind: Service
metadata:
  name: grafanaportforward
  namespace: kubeflow
  labels:
    run: grafana-test
spec:
  ports:
  - port: 3000
    protocol: TCP
    name: grafana
  externalIPs:
  - x.y.x.q
  selector:
    app: grafana-test

92) Does the container restart when applying/updating the secret object (kubectl apply -f mysecret.yml)? If not, how is the new password applied to the database?

If you are mounting the secret as a volume into your pod, then when the secret is updated the content will be updated in your pod without the pod restarting. It's up to your application to detect that change and reload, or you can write your own logic that rolls the pods if the secret changes. volumeMounts control what part of the secret volume is mounted into a particular container (it defaults to the root, containing all the secret's files, but can point to a specific file using subPath) and where in the container it should be mounted, with mountPath. An example spec is below:

volumeMounts:
- readOnly: true
  mountPath: /certs/server
  name: server-cert
volumes:
- name: server-cert
  secret:
    secretName: mysecret

Also, it depends on how the secret is consumed by the container. If via environment variables, then no: the value stays the old one until the container is restarted. If via a volumeMount, the file is updated in the container dynamically, ready to be consumed by the service, but the service needs to reload the file; the container does not restart in either case.

93) How should you connect an app pod with a database pod?

By using a Service object. The reason is that if the database pod goes away, it will come up with a different name and IP address, which means the connection string would need to be updated every time; managing that is difficult. The Service proxies traffic to the pods, and it also load-balances traffic if you have multiple pods to talk to. It has its own IP, and as long as the Service exists, a pod referencing this Service upstream will keep working. If the pods behind the Service are not running, a client will not see that; traffic will still be forwarded and fail (for example, with a 502 Bad Gateway from an intermediate proxy). So just define the Service, then bring up your pods with the proper label so the Service picks them up.
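
A minimal sketch, assuming the database pods carry the label app: mysql (the Service name and port are illustrative). The app pod would then connect to mysql.<namespace>.svc.cluster.local, or simply mysql within the same namespace:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql        # must match the labels on the database pod
  ports:
  - port: 3306        # port the Service exposes
    targetPort: 3306  # port the database container listens on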

94) How to configure a default ImagePullSecret for any deployment?

You can attach an image pull secret to a service account. Any pod using that service account (including the default one) can take advantage of the secret. You can also bind the pullSecret directly to your pod, but you're still left with having to create the secret every time you create a namespace:

imagePullSecrets:
- name: test

Also, you can create the rc/deployment manually and either specify the imagePullSecret or a service account that has the secret, or add the imagePullSecret to the default service account, in which case you'd be able to use kubectl run and not have to make any manual changes to the manifest; attaching it to the default service account is sketched below. Depending on your environment and how secret this imagePullSecret is, your approach will change.
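
A hedged sketch of attaching a pull secret to the default service account (the registry address and credentials are placeholders):

kubectl create secret docker-registry test \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "test"}]}'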

95) I have a ConfigMap for 3 files that are going to be mounted in, say, “fluentd/etc/”, and the respective files would be fluent.conf, kubernetes.conf, and systemd.conf, referenced from a ConfigMap in deployment.yaml. Can the mounted files be modified?

No. ConfigMaps are mounted read-only, so you can't touch the files. When the master ConfigMap changes, the mounted files also change; if you were to modify a locally mounted file, it would be overwritten anyway. A sketch of the relevant mount is shown below.
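
A minimal sketch of the relevant parts of the pod spec, assuming the ConfigMap is named fluentd-config and holds the three files as keys:

containers:
- name: fluentd
  image: fluentd           # placeholder image
  volumeMounts:
  - name: fluentd-config
    mountPath: /fluentd/etc
volumes:
- name: fluentd-config
  configMap:
    name: fluentd-config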

96) If you have a pod that is using a ConfigMap which you updated, and you want the container to be updated with those changes, what should you do?

If the ConfigMap is mounted into the pod as a volume, it will update automatically (though not instantly) and the files will change inside the container. If it is consumed as an environment variable, the value stays the old one until the container is restarted. One caveat: a file projected with subPath (as in the pod example below) does not receive ConfigMap updates automatically.

For example: create a new config.yaml with your custom values

apiVersion: v1
kind: ConfigMap
metadata:
  name: testconfig
  namespace: default
data:
  config.yaml: |
    namespaces:
    - default
    labels:
    - "app"
    - "owner"

Then create a pod definition, referencing the ConfigMap

apiVersion: v1
kind: Pod
metadata:
  name: testobject
spec:
  serviceAccountName: testobject
  containers:
  - name: testobject
    image: test/appv1
    volumeMounts:
    - name: config-volume
      mountPath: /app/config.yaml
      subPath: config.yaml
  volumes:
  - name: config-volume
    configMap:
      name: testconfig
  restartPolicy: Never

97) Do all of the nodes have to be the same size in your cluster?

No, they don’t. The Kubernetes components, like kubelet, will take up resources on your nodes, and you’ll still need more capacity for the node to do any work. In a larger cluster, it often makes sense to create a mix of different instance sizes. That way, pods that require a lot of memory with intensive compute workloads can be scheduled by Kubernetes on large nodes, and smaller nodes can handle smaller pods.

98)  What happens when you enter “kubectl get pods”?

Don’t underestimate this question. The simple statement, “It lists the pods on the kubernetes cluster,” is far from the answer an interviewer wants to hear. Let’s go over the steps that are happening when you enter this command, at a high level:

Step 1: Config

When running kubectl, it will first look into the kubernetes configuration file. The default path for the configuration is ~/.kube/config (where ~ is the home directory of the current user). 

Make sure you are familiar with this configuration file. The config file can contain multiple clusters, users, and contexts, but there’s typically only one context activated (the current context). To see the contexts and which one is current, you can use the command “kubectl config get-contexts.” 

For someone with real-world Kubernetes experience, switching from one cluster to another by changing the current context (or using wrapper tools that change the context for you) is a routine operation.
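
For example (my-dev is a hypothetical context name):

kubectl config get-contexts        # list all contexts; the current one is marked with *
kubectl config current-context     # print the active context
kubectl config use-context my-dev  # switch to another cluster/context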

Step 2: Kubernetes API Server

Once the current context is found, the user and server configuration can be retrieved. The server information will then contact the kubernetes API server (running on the kubernetes master), which can be queried to get information, but also to make changes. 

It’s basically a REST API protected by two-way authentication (called mTLS or “mutual TLS”).

Step 3: Authentication

This brings us to the authentication step. First, a TLS handshake will need to take place. To make this happen, you need the Certificate Authority Certificate (CA certificate) which is typically embedded within the kubernetes config (or, there is a reference to the file path). 

This certificate is needed to validate the identity of the Kubernetes API server. If someone else attempted a “man in the middle” attack by setting up a fake API server, the verification would fail, because you need the Certificate Authority Key (CA key) to be able to run a Kubernetes server with the same identity. 

The CA key is never shared with the clients. The client will have access to only the X.509 CA certificate to be able to validate the identity of the server, not create a server with the same identity.

Once kubectl can verify the identity of the Kubernetes API server, it will perform the authentication. For this, we need another X.509 certificate (called the “client certificate”) signed by the Certificate Authority (so we couldn’t just make up any cert+key pair), and a “client key.”

Step 4: REST Endpoint

To get the pods from the Kubernetes API server, kubectl will hit the REST endpoint with the following URL path: /api/v1/namespaces/default/pods?limit=500. 

This endpoint will list the pods in the “default” namespace. If you have another default namespace configured, then it will use that name instead of the “default” namespace. 

Every REST call will use the CA certificate, client key, and client certificate (as explained earlier) to pass the authentication part of the Kubernetes API server. If one of these elements was missing, instead of a HTTP 200 OK code, you’d get a HTTP 403 Permission Denied error.
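
You can observe this REST call yourself by raising kubectl's verbosity, which logs the request URLs and response codes:

kubectl get pods -v=8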

[Figure: chart overview of what happens when you enter “kubectl get pods”]

Step 5: Authorization

Next to authentication, there is also an authorization part. Authorization is controlled by RBAC and is typically turned on by default in any recent Kubernetes cluster. With RBAC, you control the access of an entity (a user, group, or service account) to specific resources. 

For example, let’s say you’re executing “kubectl get pods” with a user. The user needs to be able to do an HTTP GET request on /api/v1/namespaces/default/pods. You’ll need either a ClusterRole (cluster level) or a Role (namespace level) for that. If your user has the predefined cluster-admin role, you’ll have access to read/modify any resource.
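
A minimal sketch of the namespace-level variant (the user name jane is a hypothetical placeholder):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io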

99) How to install Kubernetes on ubuntu?

This is an important topic in Kubernetes interview questions for experienced candidates. The steps to install Kubernetes on Ubuntu are as follows (a command-level sketch appears after the list):

  • Step 1: Install Docker on the system,
  • Step 2: Start and enable Docker,
  • Step 3: Add the Kubernetes signing key,
  • Step 4: Add the Kubernetes software repositories to the system,
  • Step 5: Install the Kubernetes tools, i.e. kubeadm (Kubernetes Admin, the tool for initializing clusters) along with kubelet and kubectl,
  • Step 6: Assign a unique hostname to each server node,
  • Step 7: Initialize Kubernetes on the master node,
  • Step 8: Deploy the pod network to the cluster,
  • Step 9: Join the worker nodes to the cluster. And this is how we install Kubernetes on Ubuntu.
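
A hedged command-level sketch of those steps (package sources and the flannel manifest URL change over time, so treat the exact URLs as illustrative):

sudo apt-get update && sudo apt-get install -y docker.io
sudo systemctl enable --now docker
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubeadm kubelet kubectl
sudo hostnamectl set-hostname master-node   # a unique name per node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # on the master only
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# finally, run the "kubeadm join ..." command printed by kubeadm init on each worker node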

100) Where is the Kubernetes cluster data stored?

The primary datastore of Kubernetes is etcd, a consistent and highly available key-value store that holds all cluster data.

101) How to set a static IP for Kubernetes load balancer?

By default, when you create a Service of type LoadBalancer, the cloud provider assigns a new ephemeral external IP, which can change if the Service is recreated, forcing you to update DNS records every time. To set a static IP instead, reserve one with your cloud provider and reference it in the Service spec (on most providers, via the loadBalancerIP field or a provider-specific annotation).
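
A minimal sketch, assuming 203.0.113.10 is an address you have already reserved with your provider (support for loadBalancerIP varies by provider):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # pre-reserved static IP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080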

Thank you for your time. Keep visiting :), will add more latest Q & A soon.