notes

Table of contents generated with markdown-toc

Architecture

Cluster

A group of nodes working together. If one node fails, the application is still accessible through the remaining nodes.

Master (Node)

Machine (virtualized or physical) that watches other nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes.

(Worker) Node/Minion

Worker Machine (virtualized or physical)

Pods

The smallest object that can be created in K8s, and also the unit of scaling. Containers are encapsulated in Pods. Most of the time a Pod has a 1-to-1 mapping with a container, but multi-container Pods are possible, although they are a rare case (e.g. a helper container running alongside the main one).
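A minimal sketch of such a multi-container Pod; the names and images here are illustrative assumptions, not from a specific deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod            # hypothetical name
spec:
  containers:
    - name: main-app         # the primary container (assumed)
      image: nginx
    - name: log-helper       # a helper/sidecar container (assumed)
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]  # keep the helper alive
```

Both containers share the Pod's network namespace and can share volumes, which is the usual reason to co-locate them.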

K8s installation components

What makes a node a specific node in the hierarchy?

Container Runtime

  1. Kubernetes defines the Container Runtime Interface (CRI) to talk to container runtimes.
  2. CRI builds on the Open Container Initiative (OCI) standards. OCI consists of the imagespec and the runtimespec, which define the standards for images and runtimes accordingly.
  3. rkt (Rocket) was a runtime that supported CRI.
  4. Docker doesn't support CRI, so Kubernetes introduced what is known as the dockershim. It was a hacky but temporary way to support Docker outside CRI.
  5. Docker consists of multiple tools: CLI, API, BUILD, VOLUMES, AUTH, SECURITY, and containerd (the runtime daemon).
  6. containerd is a CRI-compatible tool, so it can be used on its own with K8s.
  7. In version 1.24, K8s decided to remove the dockershim entirely.
  8. All images built with Docker keep working, because they follow the imagespec and containerd follows the runtimespec;
  9. however, Docker itself was removed from the supported runtimes.

Command-line interfaces

| CLI | Purpose | Community | Works With |
|---|---|---|---|
| ctr | debugging containerd (limited feature set) | containerd | containerd |
| nerdctl | general-purpose, Docker-like CLI | containerd | containerd |
| crictl | debugging and inspecting CRI runtimes | Kubernetes | all CRI-compatible runtimes |

Objects configuration (YAML)

Required fields (root properties)

apiVersion: <v1|apps/v1>
kind: <Pod|Service|ReplicaSet|Deployment>
metadata:  # Typed Dictionary
  name: myapp-pod
  labels: # Dictionary
    app: myapp
    type: back-end
  
spec:
  containers:  # List/Array
    - name: nginx-container  # dash indicates first item in a list
      image: nginx

Workload Management (Controllers)

Replica Set and Replica Controller

It helps to run multiple instances of a single Pod in the k8s cluster to achieve high availability. Even with only one instance of a Pod, you can still use the controller to make sure there is always at least one running (in case of an application crash). Other reasons to use it are load balancing and scaling: in case of high traffic, the controller can run additional Pod instances on a node or even across nodes.

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: back-end

spec:
  template:
    <copy paste of a pod metadata & spec>
  replicas: 3

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
  labels:
    app: myapp
    type: back-end
spec:
  template:
    <copy paste of a pod metadata & spec>
  replicas: 3
  selector:  # required in a ReplicaSet; the major difference from ReplicationController
    matchLabels:
      type: back-end
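The selector has to match the labels in the Pod template, otherwise the ReplicaSet is rejected. A minimal sketch of the spec with the template expanded (names are illustrative assumptions):

```yaml
spec:
  replicas: 3
  selector:
    matchLabels:
      type: back-end        # must match the template labels below
  template:
    metadata:
      labels:
        type: back-end      # matched by the selector above
    spec:
      containers:
        - name: nginx-container
          image: nginx
```

This matching is also how a ReplicaSet can adopt already-running Pods that carry the same labels.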

Commands cheatsheet

ctr images pull docker.io/library/redis:alpine
ctr run docker.io/library/redis:alpine redis

nerdctl run --name redis redis:alpine
nerdctl run --name webserver -p 80:80 -d nginx

crictl pull busybox
crictl images
crictl ps -a
crictl exec -i -t 743vzcvzasdf... ls
crictl logs 1029348fgjkx902
crictl pods

crictl --runtime-endpoint
export CONTAINER_RUNTIME_ENDPOINT
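Instead of passing the flag on every call, crictl can read the endpoint from this environment variable. A minimal sketch, assuming containerd's default socket path:

```shell
# Point crictl at containerd's CRI socket. The socket path is an assumption;
# adjust it for your runtime (e.g. CRI-O typically uses /var/run/crio/crio.sock).
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
echo "$CONTAINER_RUNTIME_ENDPOINT"
```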

kubectl

kubectl get pods
kubectl describe pod myapp-pod
kubectl get pods -o wide

kubectl run redis --image=redis123 --dry-run=client -o yaml > redis.yaml
kubectl create -f redis.yaml
# Change file
kubectl apply -f redis.yaml

kubectl get replicationcontroller
kubectl create -f replicaset-definition.yml
kubectl delete replicaset myapp-replicaset
kubectl replace -f replicaset-definition.yml
kubectl scale --replicas=6 -f replicaset-definition.yml
kubectl scale --replicas=6 replicaset myapp-replicaset

kubectl edit rs <rs-name>

Docker

docker ps -a
docker ps
docker images
docker volume ls

# run interactively with a TTY (container doesn't exit with 0 immediately)
docker run -it {image}:{version}
# run detached with a TTY allocated (container doesn't exit with 0 immediately)
docker run -td {image}:{version}
# run, pass env
docker run --env-file ./env.list {image-name}:{version}

# stop / kill all running containers
docker stop $(docker ps -a -q)
docker kill $(docker ps -q)

# Enter
docker exec -it {container-id} bash

docker build -t {image-name}:{version} . 

# containers
docker rm -f $(docker ps -a -q)
# volumes
docker volume rm $(docker volume ls -q)
# images
docker rmi -f $(docker images -a -q)
# dangling (untagged) images
docker image prune -f

docker inspect <container-id> | grep {attribute}

docker-compose up -d {service}
docker-compose up
docker-compose -f {docker-compose.yml} up
docker-compose -f {docker-compose.yml} down
docker-compose kill
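A minimal docker-compose.yml sketch the commands above could run against; the service names and images are illustrative assumptions:

```yaml
version: "3.8"
services:
  webserver:              # hypothetical service name, usable as {service} above
    image: nginx:alpine
    ports:
      - "80:80"
  redis:
    image: redis:alpine
```

With this file in the current directory, `docker-compose up -d webserver` starts only the webserver service in the background.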