Guest Author

Kubernetes Best Practices

Vikash Kumar · Ambassador Labs · Nov 7, 2023

According to a report by Statista, around 61% of organizations worldwide have adopted Kubernetes. One reason is the many benefits Kubernetes provides: for instance, it automates self-healing, workload discovery, and the scaling of containerized applications. It also simplifies deploying and running distributed apps at scale, making it an ideal tool for managing microservices.

In this article, I’ll highlight Kubernetes best practices that will help developers improve their project’s performance, security, and costs.

1. Kubernetes best practices for configuration

The first Kubernetes best practices I’ll share in this article are about configuration. When it comes to configuration, many small things matter. Here are some of them:

  • When defining configurations, use the latest API version, or at least a stable one.
  • Store configuration files in version control before pushing them to the cluster. This lets you quickly roll back a configuration change when needed, and it also enables cluster restoration and re-creation.
  • Write configuration files in YAML rather than JSON. YAML is more user-friendly than JSON, making it easier for beginners to work with.
  • Group related objects into a single file wherever it makes sense, because one file is easier to manage than several.
  • Never specify default values unnecessarily. Simple, minimal configuration is processed quickly and is less error-prone.
  • Describe the objects in your configuration files in annotations, as this gives every developer on the team better introspection.

2. Use the latest Kubernetes version

Even though this is one of the unspoken Kubernetes best practices, using the latest version of any technology is the ideal way to access the newest functionality, and the same holds for Kubernetes. Developers are recommended to use the latest Kubernetes versions, which are 1.28, 1.27, and 1.26 at the time of this writing. These three releases carry the most recent upgrades, with improved features and bug fixes; they also remove many deprecated APIs and make the Kubernetes environment more scalable, dependable, and user-friendly.

3. Use namespaces

Namespaces are essential in Kubernetes: they are used to organize objects, enable the creation of logical partitions inside the cluster, and are helpful for security purposes. When it comes to namespaces, two Kubernetes best practices come to mind: 1) Always define the standard size of the containers deployed in a namespace by configuring LimitRange objects against it. 2) Use ResourceQuotas to limit the total resource consumption of all containers in a particular namespace.

For context, Kubernetes ships with three default namespaces in its cluster: kube-public, default, and kube-system. You can create a namespace declaratively using the YAML configuration file below. Once applied to a Kubernetes cluster with the kubectl apply command, it will create a namespace called dev.

apiVersion: v1
kind: Namespace
metadata:
  name: dev
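The LimitRange and ResourceQuota objects mentioned above can be sketched as follows. This is illustrative only: the object names and the specific CPU and memory figures are assumptions for the example, not recommended values.

```yaml
# Illustrative only: caps total resource consumption in the "dev" namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
---
# Illustrative only: default per-container requests and limits in "dev"
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
  - type: Container
    default:            # applied when a container sets no limit
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # applied when a container sets no request
      cpu: 250m
      memory: 256Mi
```

With these in place, a container created in dev without explicit resources gets the defaults from the LimitRange, and the namespace as a whole cannot exceed the quota.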

4. Use kubectl

Using kubectl is one of the most recommended practices shared by experts in the field when implementing Kubernetes best practices. This is because kubectl lets developers quickly create single-container services and deployments, among other things.

Here’s the syntax for running kubectl commands on your terminal:

kubectl [command] [TYPE] [NAME] [flags]

  • command is the operation you want to perform, such as create, delete, or get
  • TYPE describes the resource type you are targeting
  • NAME specifies the name of the resource
  • flags specifies optional flags. For example, the --server or -s flag sets the address and port of the Kubernetes API server.
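A few concrete invocations of this syntax follow; the deployment, pod, and namespace names are hypothetical, and the commands assume access to a running cluster:

```shell
# command + extra arguments: create a deployment imperatively
kubectl create deployment web --image=nginx:1.14.2

# command + TYPE: list pods in a hypothetical "dev" namespace
kubectl get pods -n dev

# command + TYPE + NAME: delete one pod by name
kubectl delete pod web-7f9c6bdb4-abcde -n dev

# flags: point kubectl at a specific API server with -s
kubectl get nodes -s https://127.0.0.1:6443
```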

5. Avoid using hostPort and hostNetwork

It is best to avoid using hostPort and hostNetwork unless it is truly necessary, because both settings tie pods to the host they run on. When you bind a pod to a hostPort, you limit the number of places where the pod can be scheduled, as each <hostIP, hostPort, protocol> combination must be unique.

If you don’t specify the protocol and hostIP, Kubernetes will use TCP as the default protocol and 0.0.0.0 as the default hostIP. If you need to access a port for debugging purposes, you can use the kubectl port-forward command or the CNCF debugging tool, Telepresence.

If you explicitly need to expose a pod’s port on the node, use a NodePort Service before resorting to hostPort. Avoid hostNetwork for the same reason.
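A minimal sketch of the NodePort alternative looks like this; the Service name, app label, and port numbers are assumptions for the example:

```yaml
# Illustrative only: exposes pods labeled app=web on a port of every node,
# instead of pinning a hostPort on individual pods
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # port the Service exposes inside the cluster
    targetPort: 8080  # port the container actually listens on
    nodePort: 30080   # port opened on every node (30000-32767 range)
```

Unlike hostPort, the scheduler remains free to place the pods anywhere, and the Service handles routing from the node port to them.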

6. Use Role-based Access Control (RBAC)

It is recommended that role-based access control (RBAC) rights be assigned to service accounts and to the developers on a Kubernetes project, so that each can use only the services and operations explicitly required for their role. Here are other Kubernetes best practices to keep in mind for RBAC:

  • When assigning permissions that are only required at the namespace level, use RoleBindings rather than ClusterRoleBindings to give users rights. Below is a YAML configuration file using a RoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: Test
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
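The RoleBinding references a Role named pod-reader. A minimal sketch of that Role might look like the following; the choice of verbs here is an assumption for the example:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]        # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]   # read-only access to pods
```

Because it is a Role (not a ClusterRole), the permissions apply only inside the default namespace, which is the point of preferring RoleBindings for namespace-scoped access.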

  • Administrators must not use cluster-admin accounts except for specific requirements.
  • Avoid giving low-privileged accounts impersonation rights, as impersonation lets users act as more privileged accounts and make accidental modifications.
  • Avoid wildcard permissions wherever possible. A wildcard grants access not only to every object type currently in the cluster but also to object types created in the future.
  • Avoid adding more users to the system:masters group. Anyone in this group bypasses all RBAC checks and has unrestricted superuser access to the entire system.

7. Follow the GitOps workflow

Following the GitOps workflow is one of the most essential Kubernetes best practices. When a development team wants to deploy a Kubernetes project successfully, it needs to focus on the workflow processes used by the entire team. A Git-based workflow is essential here, as it enables automation through CI/CD pipelines, which increases the efficiency and speed of application deployment. CI/CD also provides an audit trail for Kubernetes deployments.

In addition, when it comes to Kubernetes deployment, Git should be the single source of truth for all automation, as it enables developers to manage the cluster in a unified manner.

8. Don’t use “Naked” pods

If you can avoid it, don’t use naked pods (pods not bound to a ReplicaSet or Deployment), because naked pods will not be rescheduled in the event of a node failure. Instead, use a Deployment, which creates a ReplicaSet to keep the desired number of pods available and lets you specify a strategy for replacing pods (e.g., RollingUpdate).
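A sketch of such a Deployment follows; the name, label, replica count, and surge settings are assumptions for the example:

```yaml
# Illustrative only: a Deployment manages pods through a ReplicaSet,
# so they are rescheduled if a node fails
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # at most one pod down during an update
      maxSurge: 1        # at most one extra pod created during an update
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.14.2
```

The embedded pod template replaces the naked pod, while the strategy block makes the replacement behavior explicit.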

9. Configure least-privilege access to secrets

When a developer is planning an access control mechanism such as RBAC, these Kubernetes best practices should be followed:

  • Humans: When granting humans access to Secrets, restrict get, list, and watch access; only cluster administrators should be able to access everything. For more complex access control, such as restricting access to Secrets with particular annotations, use a third-party authorization mechanism.
  • Components: A component should only have access to Secrets when its behavior requires it; watch and list access to Secrets should be reserved for the most privileged, system-level components.
  • When a user creates a Pod that uses a Secret, that user can see the Secret’s value. Even if cluster policies don’t allow the user to read the Secret directly, the user can run a Pod that exposes it. To limit the impact of such Secret exposure: 1) implement audit rules that alert administrators on specific events, such as a single user reading multiple Secrets, and 2) use short-lived Secrets.
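One way to express least privilege in RBAC terms is to scope a Role to a single named Secret; the Role and Secret names here are hypothetical:

```yaml
# Illustrative only: grants read access to exactly one Secret,
# with no list or watch permission
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-secret-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-config"]  # only this Secret, not all Secrets
  verbs: ["get"]
```

Note that resourceNames cannot restrict list or watch, which is another reason to withhold those verbs for Secrets.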

10. Use Readiness and Liveness probes

One of the Kubernetes best practices I stand by is using Readiness and Liveness probes for health checks.

For context, a readiness probe lets Kubernetes ensure that requests are only directed to a pod when the pod is ready to serve them; if it is not ready, requests are sent elsewhere. When using readiness probes, it is essential to define one for every container in the project, as they have no default values. For instance, if a pod takes around 20 seconds to start and has no readiness probe, traffic will reach it during startup and those requests will fail. Readiness probes run independently of liveness probes.

The liveness probe, on the other hand, tests whether the application is running properly. For instance, it might test a specific path of the web application to check whether it responds as expected. If it does not, the pod is marked unhealthy, and the probe failure causes the kubelet to restart the container. The restarted container is then probed again to ensure it works correctly.
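Both probes are declared on the container spec. The sketch below is illustrative only; the health-check path, port, and timing values are assumptions for the example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx:1.14.2
    readinessProbe:            # gate traffic until the app can serve
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 20  # allow for slow startup
      periodSeconds: 5
    livenessProbe:             # restart the container if it stops responding
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 10
```

A failed readiness probe only removes the pod from Service endpoints, while a failed liveness probe triggers a container restart.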

11. Monitor the cluster resources

To proactively manage clusters, Kubernetes monitoring is required. It simplifies operating containerized infrastructure by tracking the utilization of cluster resources such as CPU, memory, and storage, and cluster operators receive alerts if the desired number of pods is not running.

Beyond resource utilization, cluster state metrics, API request latency, and resource requests and limits can also be monitored. This is one of the Kubernetes best practices that shouldn’t be ignored.
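Basic resource utilization can be inspected directly with kubectl top, assuming the metrics-server add-on is installed in the cluster:

```shell
# Per-node CPU and memory usage (requires metrics-server)
kubectl top nodes

# Per-pod usage across all namespaces
kubectl top pods --all-namespaces
```

For ongoing monitoring and alerting, a dedicated stack such as Prometheus is the usual choice; kubectl top is best for quick spot checks.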

Conclusion

Kubernetes is a phenomenal open source container orchestration solution that helps development teams manage containers, clusters, nodes, and deployments. But it also comes with its own challenges, and that’s why implementing the Kubernetes best practices shared in this article is essential.

As a reminder, this article highlighted essential Kubernetes best practices to follow when designing, developing, running, and maintaining applications on Kubernetes. Following them will reduce security risks, improve overall performance, and help companies retain existing clients while bringing new ones on board.

Close the developer experience gap of Kubernetes development with Telepresence

Building and testing your microservice-based application becomes complex when you can no longer run everything locally due to resource requirements. Moving to the cloud for testing is a no-brainer, but how do you synchronize your local changes against your remote Kubernetes environment? By using Telepresence!

Telepresence redirects network traffic from a service in the cloud to a service or container on your local machine, merging the benefits of cloud and local test environments. This means you can run integration tests locally instead of waiting on a remote deployment.

Try Telepresence today or learn more about Telepresence.


Vikash Kumar is a Manager at the software development company TatvaSoft. His bylines have also been featured in many other publications, including Entrepreneur and YourStory.