Kubernetes

Table of Contents

Kubernetes

  1. Kubernetes -
    1. https://kubernetes.io/
    2. https://kubernetes.io/docs/home/
  2. Installing Minikube - https://minikube.sigs.k8s.io/docs/start/?arch=%2Flinux%2Fx86-64%2Fstable%2Fbinary+download
  3. Kubernetes Concepts -
    1. https://kubernetes.io/docs/concepts/overview/
    2. https://kubernetes.io/docs/concepts/
  4. Cloud Native Computing Foundation - https://www.cncf.io/
  5. Namespaces
    1. https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
  6. https://spacelift.io/blog/kubernetes-cheat-sheet

kubectl

  1. https://kubernetes.io/docs/reference/kubectl/
  2. https://kubernetes.io/docs/reference/kubectl/quick-reference/
  3. https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/

Is Kubernetes a CI/CD tool?

This question gets to the heart of modern DevOps architecture.

The short answer is: No, Kubernetes itself is not a CI/CD tool, but it is the foundational platform that makes modern CI/CD possible for containerized applications.

Kubernetes serves as the runtime environment and orchestrator where Continuous Delivery (CD) tools perform their deployments.

You don’t use Kubernetes to do CI/CD; you use CI/CD tools to manage deployments onto Kubernetes.

The Role of Kubernetes in CI/CD

To understand its role, it’s helpful to break down the CI/CD acronym:

1. Continuous Integration (CI) - The Build Phase

Kubernetes does not handle CI.

  1. What CI is: This phase is about developing, testing, and packaging the code. It involves tasks like compiling source code, running unit tests, and building a container image (e.g., a Docker image).
  2. Tools used: Dedicated CI tools like GitHub Actions, GitLab CI, Jenkins, or CircleCI handle this.
  3. Kubernetes Connection: The output of this CI phase is a finalized container image pushed to a container registry (like Docker Hub or AWS ECR). Kubernetes takes this completed image as its input for deployment.
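A hedged sketch of the tail end of a CI job (the image name and registry are hypothetical):

# Build the image and push it to a registry for Kubernetes to pull later
docker build -t registry.example.com/my-app:v2.0 .
docker push registry.example.com/my-app:v2.0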

2. Continuous Delivery (CD) - The Deployment Phase

Kubernetes acts as the declarative deployment engine for CD.

  1. What CD is: This phase is about getting the finalized application from the registry into the various environments (Dev, Staging, Production). It involves updating application manifests (YAML files, Helm charts), executing rollouts, and monitoring health.
  2. Kubernetes’ Role: Kubernetes is the system that executes the deployment commands. When a CD tool tells Kubernetes, “The new version of the application is image v2.0,” Kubernetes handles all the complicated steps:
    1. Rolling Updates: Gradually replacing old pods with new ones to ensure zero downtime.
    2. Self-Healing: Continuously monitoring the application’s health and restarting failed containers.
    3. Scaling: Adjusting the number of running instances based on demand (via Horizontal Pod Autoscaler).

Kubernetes simplifies the delivery process by providing a unified, declarative API for application updates.
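For example, a CD tool (or an operator) can trigger a zero-downtime rolling update with one declarative command; the deployment and image names below are hypothetical:

# Point the Deployment at the new image; Kubernetes replaces pods gradually
kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2.0

# Watch the rollout until every pod has been replaced
kubectl rollout status deployment/my-app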

CI/CD Tools That Work With Kubernetes

Because Kubernetes is such a powerful target environment, a whole category of tools has emerged to manage the deployment process onto it.

| Tool Category | Function | Examples |
| --- | --- | --- |
| Traditional CI/CD | Builds, tests, and pushes deployment commands to Kubernetes. | GitHub Actions, Jenkins, GitLab CI/CD |
| Kubernetes-Native CI | Frameworks that run the entire CI pipeline directly on Kubernetes (inside pods). | Tekton |
| GitOps CD | Dedicated controllers that run inside Kubernetes and pull the desired state from Git. | Argo CD, FluxCD |
| Advanced Deployment | Kubernetes controllers that enable advanced deployment strategies (Canary, Blue/Green). | Argo Rollouts |

What is a Kubeconfig File?

The kubeconfig file (usually located at ~/.kube/config) is a YAML file that holds the configuration information for accessing one or more Kubernetes clusters. It contains three main sections:

  1. Clusters: Defines the cluster API server location and certificate authority.
  2. Users: Defines the credentials kubectl should use (like tokens or client certificates).
  3. Contexts: Combines a Cluster, a User, and an optional Namespace into a single, easy-to-reference name.
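A minimal kubeconfig sketch showing all three sections (every name, address, and path below is hypothetical):

# Write an example kubeconfig to a separate file
cat <<'EOF' > ~/.kube/example-config
apiVersion: v1
kind: Config
clusters:
- name: dev-cluster
  cluster:
    server: https://203.0.113.10:6443
    certificate-authority: /home/me/.kube/dev-ca.crt
users:
- name: dev-user
  user:
    client-certificate: /home/me/.kube/dev-user.crt
    client-key: /home/me/.kube/dev-user.key
contexts:
- name: dev-context
  context:
    cluster: dev-cluster
    user: dev-user
    namespace: dev
current-context: dev-context
EOF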

What is the difference between Clusters and Contexts?

  1. A Cluster is the physical server definition.
  2. A Context is the access profile that links a Cluster, a User, and a Namespace.

kubectl - Find out the list of available contexts

How to find your context names:

kubectl config get-contexts

This command lists all the contexts defined in your kubeconfig file, and one of them will be marked with an asterisk (*) indicating it is the currently active context.
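Example output (the context names are hypothetical); the asterisk in the CURRENT column marks the active context:

CURRENT   NAME                       CLUSTER   AUTHINFO    NAMESPACE
*         dev-cluster-us-east-1      dev       dev-user    default
          prod-cluster-europe-west   prod      prod-user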

kubectl - How do I connect to a specific cluster?

Connecting to a specific Kubernetes cluster using kubectl involves ensuring your kubeconfig file is set up correctly and then telling kubectl which context (which includes the cluster, user, and namespace) to use.

Here are the primary ways to connect to a specific cluster:

1. Using the --context Flag for a Single Command

The quickest way to target a specific cluster for a single command is to use the --context flag.

  • Command Structure:
    kubectl --context <context-name> <command>
    
  • Example: If you have a context named dev-cluster-us-east-1 and you want to get the nodes from it, you would run:
    kubectl --context dev-cluster-us-east-1 get nodes
    

2. Setting the Current Context

If you plan to work with a specific cluster for several commands, it's more convenient to set it as your default current context.

  • Command:
    kubectl config use-context <context-name>
    
  • Example: To permanently switch your active cluster to the one defined in the prod-cluster-europe-west context:
    kubectl config use-context prod-cluster-europe-west
    
  • Verification:
    kubectl config current-context
    
    This will confirm which context is now active. Subsequent kubectl commands will target this cluster until you change it again.

3. Using the KUBECONFIG Environment Variable

If you have multiple kubeconfig files (e.g., one for work, one for a personal project), you can specify which file kubectl should read. This is useful for scripting or when you want to avoid merging all configs into ~/.kube/config.

  • Command Structure (for one file):
    export KUBECONFIG=</path/to/your/config/file>
    kubectl get pods
    
  • Command Structure (for multiple files): You can specify multiple files separated by a colon (:) on Linux/macOS or a semicolon (;) on Windows. kubectl will merge them.
    export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/another-config
    kubectl config get-contexts
    

In summary, the best practice is to use kubectl config use-context <name> for extended sessions and the --context <name> flag for one-off commands.

What if I am not seeing all the clusters in my terminal when using kubectl? What am I missing?

This is a very common issue when dealing with multiple Kubernetes environments. If you are not seeing all your expected clusters when you run kubectl config get-contexts, it means kubectl is not looking at all the necessary configuration files.

The problem almost always boils down to one simple concept: How kubectl finds and merges your kubeconfig files.

Here is what you are likely missing, starting with the most common cause:


1. The KUBECONFIG Environment Variable

By default, kubectl only looks for a single configuration file at the standard location:

  • Linux/macOS: ~/.kube/config
  • Windows: %USERPROFILE%\.kube\config

If your other cluster configurations are stored in separate files (e.g., dev-config.yaml, prod-config.yaml), they won’t be seen by default.

The Fix: Merge Kubeconfig Files

You need to set the KUBECONFIG environment variable to include a colon-separated list of all your config files. This tells kubectl to merge the contexts, users, and clusters from all of them.

| Operating System | Command to Set KUBECONFIG |
| --- | --- |
| Linux/macOS | export KUBECONFIG=~/.kube/config:~/path/to/dev-config.yaml:~/path/to/prod-config.yaml |
| Windows (PowerShell) | $env:KUBECONFIG="$HOME\.kube\config;$HOME\path\to\dev-config.yaml;$HOME\path\to\prod-config.yaml" |

After running the export command, immediately run:

kubectl config get-contexts

You should now see all the contexts from all three files listed!

> Note: This export command is only active for the current terminal session. To make it permanent, you must add the export line to your shell’s startup file (e.g., ~/.bashrc, ~/.zshrc, or ~/.profile).
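For example, to persist it for zsh (the paths are hypothetical):

# Append the export to your shell startup file so new terminals inherit it
echo 'export KUBECONFIG=$HOME/.kube/config:$HOME/path/to/dev-config.yaml' >> ~/.zshrc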

2. Incorrect File Format or Content

If the missing cluster’s config file is being used (either because it’s the only one or it’s included in KUBECONFIG), but the contexts still aren’t appearing, the file itself might be invalid.

  • YAML Syntax Errors: kubectl is very strict. An indentation error or a missing colon in the YAML file will cause it to be ignored completely or merged incorrectly.
    • Fix: Use kubectl config view to see the merged config. If the output is empty or missing expected entries, run kubectl config view --kubeconfig=/path/to/problem/file to troubleshoot that specific file.
  • Missing or Corrupted Credentials: The file might be missing the required clusters, users, or contexts sections, or the paths to certificate files/keys might be wrong.
  • Relative Paths: If the configuration file was copied from a different machine, and it uses relative paths for certificates (e.g., client-certificate: ./my-cert.crt), the file will be invalid because the relative path points to a location that no longer exists on your machine.

3. Not Using the Context Name

Remember the difference between Clusters and Contexts.

  • A Cluster is the physical server definition.
  • A Context is the access profile that links a Cluster, a User, and a Namespace.

When you run kubectl config get-contexts, you are listing the available Contexts. If you have a cluster defined, but no context that uses it, you won’t see it in the list.

Fix: Create the Missing Context

If you manually added a cluster and user entry to your config, you must create the context to link them:

# Check which cluster and user entries are already defined
kubectl config view

# Create the new context
kubectl config set-context <new-context-name> \
    --cluster=<cluster-name-from-view> \
    --user=<user-name-from-view>
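For instance, with hypothetical names taken from the kubectl config view output:

# Link the existing cluster and user entries into a usable context
kubectl config set-context dev-context --cluster=dev-cluster --user=dev-user

# Switch to the newly created context
kubectl config use-context dev-context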

If you are certain all your files are correct and you’ve set the KUBECONFIG variable, the best step is to view the merged configuration to confirm what kubectl is actually seeing:

kubectl config view

If you see the missing entries in the output of kubectl config view, but they don’t appear in kubectl config get-contexts, there is a serious formatting issue preventing the context definition from being read correctly.

Connecting to and exploring resources in a cluster using Teleport shell (tsh)

  1. https://goteleport.com/docs/connect-your-client/
  2. https://goteleport.com/docs/connect-your-client/tsh/

Can I add a specific cluster using a kubectl command? What if they are using teleport in the company?

If your company is using Teleport, the standard kubectl commands and direct cloud provider commands (like gcloud container clusters get-credentials) are replaced by the Teleport workflow.

Teleport acts as a Zero Trust Access Proxy that sits in front of all your infrastructure, including Kubernetes clusters, databases, and servers. Its primary goal is to eliminate static, long-lived credentials (like certificate files or API tokens) and provide Single Sign-On (SSO) and auditing.

When you use Teleport, you don't use kubectl config commands to add a cluster; you use tsh login and tsh kube login to authorize and configure your local client to access a cluster that Teleport is already managing.

Here is the secure, standard way to connect to a specific cluster when using Teleport:

The Teleport Connection Workflow

You will use the Teleport Client, tsh (Teleport Shell), which handles the authentication and configuration of your local kubectl.

Step 1: Log in to Teleport (SSO)

First, you need to authenticate yourself to the main Teleport Proxy Service using your company’s identity provider (Okta, Google Workspace, Azure AD, etc.).

tsh login --proxy=<teleport-proxy-address> --user=<your-username>

or

tsh login --proxy=bedrock-company.teleport.sh:443 bedrock-company.teleport.sh --auth=bedrock-sso

  1. The command will open a browser window for you to log in with your SSO credentials.
  2. Upon successful authentication, tsh fetches a short-lived certificate that grants you access to all resources (including Kubernetes clusters) you are authorized for.

Step 2: List Available Kubernetes Clusters

After logging in, you can see all the Kubernetes clusters you have permission to access using the tsh kube ls command.

tsh kube ls
Kube Cluster Name      Labels                     Selected
---------------------  -------------------------  --------
prod-cluster-us-east   env:prod, region:us-east
dev-frontend-cluster   env:dev, app:frontend
staging-cluster-eu     env:staging, region:eu     *

Step 3: Connect and Configure kubectl

To connect to the cluster and update your local kubeconfig file to use the short-lived credentials, you use the tsh kube login command.

tsh kube login <cluster-name>

Example: To connect to the production cluster:

tsh kube login prod-cluster-us-east

  • What this command does:

    1. Updates kubeconfig: It creates a new context in your local ~/.kube/config file (usually prefixed with the Teleport cluster name, e.g., teleport-main-prod-cluster-us-east).
    2. Configures Authentication: It sets the authentication mechanism for that context to use the Teleport-issued short-lived certificate, ensuring that the connection is secure and your access is constrained by your Teleport roles.
    3. Sets Current Context: It automatically runs the equivalent of kubectl config use-context <new-teleport-context>, making it the active cluster in your terminal.
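You can verify the switch immediately (the context name shown is hypothetical; the exact prefix depends on your Teleport cluster name):

kubectl config current-context
# teleport-main-prod-cluster-us-east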

Step 4: Use kubectl Normally

Once Step 3 is complete, your standard kubectl commands will be proxied securely through Teleport:


| Command | Description |
| --- | --- |
| kubectl config get-contexts | List the contexts (clusters) available to you |
| kubectl config use-context a-context-1 | Set a context as the default for subsequent commands |
| kubectl get namespaces | List all namespaces (the singular namespace also works) |
| kubectl auth can-i list pods --namespace a-namespace-1 | Tells you whether you can list the pods in this namespace |
| kubectl get pods | List pods in the current namespace |
| kubectl get pods --namespace a-namespace-1 | List pods in a given namespace |
| kubectl logs --namespace a-namespace-1 api-rollout-12345678 | Get logs for a given pod (api-rollout-12345678) |
| kubectl get deployment --namespace a-namespace-1 | List the current deployments in this namespace |

Key Differences with Teleport

| Feature | Standard kubectl | Teleport-Managed Access |
| --- | --- | --- |
| Authentication | Static certificates, long-lived tokens, or cloud-specific helpers (IAM). | Short-lived, SSO-driven certificates issued by the Teleport Auth Service. |
| Connecting | Requires manual configuration of cluster/user/context or cloud-specific commands (gcloud container clusters get-credentials). | Uses one command, tsh kube login, after a single SSO login. |
| Cluster View | Requires merging multiple kubeconfig files and running kubectl config get-contexts. | Uses a centralized view: tsh kube ls lists all clusters you are authorized for, regardless of location. |
| Audit | Relies on Kubernetes API audit logs (internal to the cluster). | Teleport records and stores a full, centralized audit log of all kubectl sessions, including interactive exec commands. |

kubectl POD or TYPENAME

The phrase “kubectl POD or TYPENAME” likely refers to the syntax used in many kubectl commands, particularly when specifying a target resource.

In Kubernetes kubectl command syntax:

  1. TYPENAME refers to the Resource Type (or kind) of the Kubernetes object you want to interact with.

    • Examples: pod, service, deployment, namespace, node, configmap, etc.
    • You can often use the full name, plural name, or a short alias (e.g., po for pod, svc for service, deploy for deployment).
    • In a command: kubectl get pods lists all pods (short-alias equivalents are shown after this list).
  2. POD (or a specific resource name after the type) refers to the Specific Name of the instance of that resource type.
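A few equivalent pairs, for quick reference:

kubectl get pods         # same as: kubectl get po
kubectl get services     # same as: kubectl get svc
kubectl get deployments  # same as: kubectl get deploy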

Common Syntax Patterns:

You will typically see the arguments structured in one of two ways, especially for commands like get, describe, delete, logs, or exec:

| Syntax | Example Command | Description |
| --- | --- | --- |
| [TYPE] [NAME] | kubectl describe pod my-pod-name | Specifies the resource type and its specific name. |
| [TYPE]/[NAME] | kubectl describe pod/my-pod-name | An alternative, often more explicit, way to specify the type and name. |
| [TYPE] | kubectl get pods | Specifies the resource type to get all instances of that type (e.g., all pods). |
| [POD] (in context) | kubectl logs my-pod-name | For commands like logs or exec, where the type is implicitly a pod, you often only need to provide the pod name directly. |

The full syntax is often:

kubectl [command] [TYPE] [NAME] [flags]

So, the phrase is highlighting the choice in context: you either provide the full TYPENAME (like deployment or service) or, in the case of pod-centric commands, you sometimes only need the POD (the specific name).

Practical Examples

| Command | Syntax Used | Meaning |
| --- | --- | --- |
| kubectl get pods | [TYPE] | Get all resources of type pod. |
| kubectl get pod/nginx-6799fc88d8-kjr86 | [TYPE]/[NAME] | Get the specific pod named nginx-6799fc88d8-kjr86. |
| kubectl describe node worker-01 | [TYPE] [NAME] | Describe the specific node named worker-01. |
| kubectl logs my-app-pod | [POD] (name only) | Get logs for the specific pod named my-app-pod (the type pod is implied). |


What is a Service Mesh and Why is it Needed for Kubernetes?

  1. https://kubernetes.io/blog/2017/05/managing-microservices-with-istio-service-mesh/
  2. https://www.f5.com/glossary/service-mesh
  3. https://linkerd.io/what-is-a-service-mesh/
  4. https://www.dynatrace.com/news/blog/what-is-a-service-mesh/
  5. https://imesh.ai/blog/what-is-a-service-mesh-and-why-is-it-needed-for-kubernetes/
  6. https://www.tigera.io/learn/guides/service-mesh/service-mesh-kubernetes/

A Kubernetes service mesh is an infrastructure layer for managing service-to-service communication (east-west traffic) in a Kubernetes cluster, providing benefits like traffic management, security, and observability. It uses a network of proxies, typically as sidecars in each pod, to handle all communication, decoupling these functions from the application code and providing features like mTLS encryption, fault injection, and detailed metrics. This helps manage the complexity of microservices, making it easier to enforce policies and monitor application performance.

How it works

  1. Sidecar proxies: A service mesh injects a network proxy, called a “sidecar,” into each pod alongside your application’s containers.
  2. Data plane: All network traffic between services is intercepted and routed through these sidecar proxies, forming the “data plane”.
  3. Control plane: A central “control plane” manages and configures the proxies, providing a single point for setting policies, security, and routing rules.
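As a hedged illustration with Istio (one popular mesh): sidecar injection is typically enabled per namespace via a label, after which new pods in that namespace start with an extra istio-proxy (Envoy) container. The namespace name below is hypothetical:

# Enable automatic Envoy sidecar injection for one namespace (Istio)
kubectl label namespace my-app istio-injection=enabled

# New pods in my-app now include the istio-proxy sidecar container
kubectl -n my-app get pods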

Key benefits

  1. Traffic management: Features like load balancing, traffic routing to different versions of a service, and fault injection (e.g., introducing delays) can be controlled by the mesh without code changes.
  2. Security: It can automatically enforce mutual TLS (mTLS) for encrypted communication and manage service-to-service authentication and authorization.
  3. Observability: The mesh provides deep insights into inter-service communication through detailed metrics, distributed tracing, and logging, making it easier to troubleshoot issues.
  4. Decoupling: By moving networking concerns to the infrastructure layer, developers can focus on writing application code, while operations teams manage the service mesh infrastructure.

Potential drawbacks

  1. Increased complexity: It adds another complex component to manage in your cluster.
  2. Resource overhead: The sidecar proxies consume additional CPU and memory resources.
  3. Latency: Routing traffic through an extra proxy hop can introduce a small amount of latency.

What is a kubernetes namespace?

A Kubernetes Namespace is a mechanism that provides a way to virtually partition a single Kubernetes cluster into multiple sub-clusters.

You can think of a Namespace as a dedicated virtual space within your cluster. It is used primarily for:

1. 🗂️ Organization and Scope for Names

  1. Namespaces allow you to organize resources (like Pods, Deployments, and Services) into distinct groups.
  2. Name Scoping: Resource names only need to be unique within a namespace, not across the entire cluster. For example, you can have a "database" service in a dev namespace and another "database" service in a prod namespace without conflict (demonstrated below).
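A quick demonstration of name scoping (all names are hypothetical):

# The same Service name can coexist in two namespaces
kubectl create namespace dev
kubectl create namespace prod
kubectl -n dev create service clusterip database --tcp=5432:5432
kubectl -n prod create service clusterip database --tcp=5432:5432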

2. 🧱 Isolation and Multi-Tenancy

  1. They help isolate groups of resources, making it possible for different teams, projects, or environments (e.g., development, staging, production) to safely share the same physical cluster without interfering with each other’s work.
  2. Actions taken on resources in one namespace do not affect other namespaces.

3. 🛡️ Access Control (RBAC)

  1. Namespaces are a key part of implementing Role-Based Access Control (RBAC). You can define security policies that restrict which users or service accounts can view, modify, or delete resources only within a specific namespace.

4. ⚖️ Resource Quotas

  1. You can set Resource Quotas on a namespace to limit the total amount of CPU and memory that all resources within that space can consume. This prevents one tenant or project from unintentionally consuming all the cluster’s resources.
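A minimal ResourceQuota sketch (the namespace and limits are hypothetical):

# Cap the total CPU and memory the "dev" namespace may consume
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
EOF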

Default Namespaces

Every Kubernetes cluster starts with a few default namespaces:

  1. default: Where resources are placed if you don’t explicitly specify a namespace. Many smaller teams or simple deployments can operate entirely within this namespace.
  2. kube-system: Reserved for objects created by the Kubernetes system itself, such as the control plane components. You should not place your own applications here.
  3. kube-public: Used for public resources that should be readable by all clients, even unauthenticated ones.
  4. kube-node-lease: Holds Lease objects for each node; these node heartbeats let the control plane detect node failures efficiently as the cluster scales.

In summary, namespaces are a tool for managing complexity in larger, shared clusters by providing logical separation, organization, and a scope for applying policies.

Tags

  1. Argo CD