A group of nodes working together. If one node fails, the application is still accessible through the remaining nodes.
Machine (virtualized or physical) that watches other nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes.
Worker Machine (virtualized or physical)
The smallest object that can be created in K8s, and the unit at which scaling happens. Containers are encapsulated into pods. Most of the time a pod has a 1-to-1 mapping with a container; multi-container pods are possible, but they are a rare case.
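A minimal sketch of that rare multi-container case: a helper (sidecar) container sharing the pod with the main app. The container names, the `log-agent` sidecar, and its image are illustrative assumptions, not from the notes above.

```yaml
# Illustrative multi-container pod: both containers share the pod's
# network namespace and lifecycle.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: myapp-container
      image: nginx
    - name: log-agent # hypothetical sidecar
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]
```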
What makes a node a specific node in the hierarchy?
| CLI | Purpose | Community | Works With |
| --- | --- | --- | --- |
| ctr | Debugging | containerd | containerd |
| nerdctl | General purpose | containerd | containerd |
| crictl | Debugging | K8s | All CRI-compatible runtimes |
Required fields (root properties)
```yaml
apiVersion: <v1|apps/v1>
kind: <Pod|Service|ReplicaSet|Deployment>
metadata: # typed dictionary
  name: myapp-pod
  labels: # dictionary
    app: myapp
    type: back-end
spec:
  containers: # list/array
    - name: nginx-container # dash indicates the first item in a list
      image: nginx
```
It helps to run multiple instances of a single pod in the K8s cluster to achieve high availability. Even with only one instance of a pod, you can still use the controller to make sure there is always at least one running (in case of an application crash). Another reason to use it is load balancing and scaling: in case of high traffic, the controller can run additional pod instances on a node or even across nodes.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: back-end
spec:
  template:
    <copy-paste of a pod's metadata & spec>
  replicas: 3
```
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
  labels:
    app: myapp
    type: back-end
spec:
  template:
    <copy-paste of a pod's metadata & spec>
  replicas: 3
  selector: # required for a ReplicaSet; major difference from ReplicationController
    matchLabels:
      type: back-end
```
```shell
ctr images pull docker.io/library/redis:alpine
ctr run docker.io/library/redis:alpine redis
```
```shell
nerdctl run --name redis redis:alpine
nerdctl run --name webserver -p 80:80 -d nginx
```
```shell
crictl pull busybox
crictl images
crictl ps -a
crictl exec -i -t 743vzcvzasdf... ls
crictl logs 1029348fgjkx902
crictl pods
crictl --runtime-endpoint
export CONTAINER_RUNTIME_ENDPOINT
```
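Setting the environment variable saves repeating the `--runtime-endpoint` flag on every `crictl` call. A sketch, assuming containerd's default socket path (the value shown is an assumption, not from the notes above; adjust it for your runtime):

```shell
# Assumed containerd socket path; crictl reads this variable
# instead of requiring --runtime-endpoint on each command.
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
echo "$CONTAINER_RUNTIME_ENDPOINT"
```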
```shell
kubectl get pods
kubectl describe pod myapp-pod
kubectl get pods -o wide
kubectl run redis --image=redis123 --dry-run=client -o yaml > redis.yaml
kubectl create -f redis.yaml
# Change file
kubectl apply -f redis.yaml
kubectl get replicationcontroller
kubectl create -f replicaset-definition.yml
kubectl delete replicaset myapp-replicaset
kubectl replace -f replicaset-definition.yml
kubectl scale --replicas=6 -f replicaset-definition.yml
kubectl scale --replicas=6 replicaset myapp-replicaset
kubectl edit rs <rs-name>
```
```shell
docker ps -a
docker ps
docker images
docker volume ls

# run, enter, not exited(0)
docker run -it {image}:{version}
# detached, not exited(0)
docker run -td {image}:{version}
# run, pass env
docker run --env-file ./env.list {image-name}:{version}

# stop, kill all running containers
docker stop $(docker ps -a -q) &
docker kill $(docker ps -q)

# enter a running container
docker exec -it {container-id} bash

# build an image
docker build -t {image-name}:{version} .

# remove all containers
docker rm -f $(docker ps -a -q)
# remove all volumes
docker volume rm $(docker volume ls -q)
# remove all images
docker rmi -f $(docker images -a -q)
# remove dangling/corrupted images
docker image prune -f

# inspect a single attribute
docker inspect {container-id} | grep {attribute}
```
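The bulk cleanup one-liners above lean on command substitution: `$(docker ps -a -q)` expands to the list of container IDs before `docker rm` runs. A sketch of that expansion with `printf` standing in for the docker CLI, so no daemon is needed (the IDs are made up for illustration):

```shell
# Simulate the output of `docker ps -a -q`; in practice the full
# command would be: docker rm -f $(docker ps -a -q)
ids=$(printf '%s %s' abc123 def456)
echo docker rm -f $ids
# → docker rm -f abc123 def456
```

If no containers exist, the substitution expands to nothing and `docker rm -f` with no arguments errors out, which is harmless for cleanup purposes.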
```shell
docker-compose up -d {service}
docker-compose up
docker-compose -f {docker-compose.yml} up
docker-compose -f {docker-compose.yml} down
docker-compose kill
```
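The commands above assume a compose file exists. A minimal sketch of one, mirroring the `nerdctl`/`docker run` examples earlier (service names, images, and ports are illustrative assumptions):

```yaml
# docker-compose.yml (illustrative)
version: "3.8"
services:
  webserver:
    image: nginx
    ports:
      - "80:80"
  redis:
    image: redis:alpine
```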