There are advantages and disadvantages to running virtualized VM nodes instead of bare metal. The main disadvantages are:
- over-subscription of host resources
- overhead (roughly 1.5G per VM)
Two topologies:
- Kubernetes cluster on a single MacBook Pro using multiple VMware nodes
- Kubernetes cluster across multiple OSX machines with single VMware nodes
Node | VM |
---|---|
/vms/current/1/ubuntu1604_20180618.vmwarevm | |
Experiment: Run a full-saturation DaemonSet Kubernetes deployment across all nodes in the cluster
see Performance#KubernetesDaemonSet
The following RKE cluster consists of 5 nodes - one 32G MacBook laptop as a non-worker node and four 64G Windows servers - all running Ubuntu virtual machines under VMware.
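The cluster definition consumed by `rke up` is not reproduced on this page; below is a minimal sketch of what a three-node RKE `cluster.yml` for the addresses used here might look like. The `amdocs` user and the `ssh_key_path` are assumptions, not taken from the source.

```shell
# Sketch of a minimal RKE cluster.yml for the three nodes on this page.
# The user and ssh_key_path are assumptions; adjust to the real environment.
cat > cluster.yml <<'EOF'
nodes:
  - address: 192.168.0.59
    user: amdocs
    role: [controlplane, etcd, worker]
  - address: 192.168.0.114
    user: amdocs
    role: [controlplane, etcd, worker]
  - address: 192.168.0.101
    user: amdocs
    role: [controlplane, etcd, worker]
ssh_key_path: ~/.ssh/id_rsa
EOF
```

The cluster is then built (or reconciled after edits) with `sudo ./rke up`, as in the transcripts on this page.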
    sudo ./rke up
    amdocs@obriensystemsu0:~$ kubectl get nodes
    NAME            STATUS   ROLES                      AGE    VERSION
    192.168.0.114   Ready    controlplane,etcd,worker   102s   v1.18.3
    192.168.0.59    Ready    controlplane,etcd,worker   100s   v1.18.3
    amdocs@obriensystemsu0:~$ sudo cp kube_config_cluster.yml ~/.kube/config
    amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces -o wide
    NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE     IP              NODE            NOMINATED NODE   READINESS GATES
    ingress-nginx   default-http-backend-598b7d7dbd-dnjlq     1/1     Running     0          2m30s   10.42.0.3       192.168.0.114   <none>           <none>
    ingress-nginx   nginx-ingress-controller-5pjln            1/1     Running     0          2m30s   192.168.0.59    192.168.0.59    <none>           <none>
    ingress-nginx   nginx-ingress-controller-lqpxc            1/1     Running     0          2m30s   192.168.0.114   192.168.0.114   <none>           <none>
    kube-system     canal-6f8h5                               2/2     Running     0          2m46s   192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     canal-tw64d                               2/2     Running     0          2m46s   192.168.0.114   192.168.0.114   <none>           <none>
    kube-system     coredns-849545576b-vrfrd                  1/1     Running     0          2m37s   10.42.1.3       192.168.0.59    <none>           <none>
    kube-system     coredns-849545576b-wlnzk                  1/1     Running     0          2m40s   10.42.0.2       192.168.0.114   <none>           <none>
    kube-system     coredns-autoscaler-5dcd676cbd-pgctz       1/1     Running     0          2m39s   10.42.1.2       192.168.0.59    <none>           <none>
    kube-system     metrics-server-697746ff48-mk6g2           1/1     Running     0          2m35s   10.42.1.4       192.168.0.59    <none>           <none>
    kube-system     rke-coredns-addon-deploy-job-t9d65        0/1     Completed   0          2m41s   192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     rke-ingress-controller-deploy-job-rh7v7   0/1     Completed   0          2m31s   192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     rke-metrics-addon-deploy-job-rjlxp        0/1     Completed   0          2m36s   192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     rke-network-plugin-deploy-job-6kd94       0/1     Completed   0          2m47s   192.168.0.59    192.168.0.59    <none>           <none>

Add a third node (101) to the existing 59 and 114:

    amdocs@obriensystemsu0:~$ vi cluster_59_114_101.yml
    amdocs@obriensystemsu0:~$ cp cluster_59_114_101.yml cluster.yml
    amdocs@obriensystemsu0:~$ sudo ./rke up
    INFO[0000] Running RKE version: v1.1.3
    INFO[0000] Initiating Kubernetes cluster
    INFO[0000] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates
    INFO[0000] [certificates] Generating Kubernetes API server certificates
    INFO[0000] [certificates] Generating admin certificates and kubeconfig
    INFO[0000] [certificates] Generating kube-etcd-192-168-0-59 certificate and key
    INFO[0000] [certificates] Generating kube-etcd-192-168-0-114 certificate and key
    INFO[0000] [certificates] Generating kube-etcd-192-168-0-101 certificate and key
    INFO[0146] Finished building Kubernetes cluster successfully
    amdocs@obriensystemsu0:~$ sudo cp kube_config_cluster.yml ~/.kube/config
    amdocs@obriensystemsu0:~$ kubectl get nodes
    NAME            STATUS   ROLES                      AGE   VERSION
    192.168.0.101   Ready    controlplane,etcd,worker   84s   v1.18.3
    192.168.0.114   Ready    controlplane,etcd,worker   84s   v1.18.3
    192.168.0.59    Ready    controlplane,etcd,worker   79s   v1.18.3
    amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces -o wide
    NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
    ingress-nginx   default-http-backend-598b7d7dbd-dtpr5     1/1     Running     0          66s   10.42.2.4       192.168.0.59    <none>           <none>
    ingress-nginx   nginx-ingress-controller-27cd4            1/1     Running     0          66s   192.168.0.101   192.168.0.101   <none>           <none>
    ingress-nginx   nginx-ingress-controller-rtnjc            1/1     Running     0          66s   192.168.0.114   192.168.0.114   <none>           <none>
    ingress-nginx   nginx-ingress-controller-v4bcw            1/1     Running     0          66s   192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     canal-7qcbv                               2/2     Running     0          89s   192.168.0.101   192.168.0.101   <none>           <none>
    kube-system     canal-gqjbj                               2/2     Running     0          89s   192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     canal-jc572                               2/2     Running     0          89s   192.168.0.114   192.168.0.114   <none>           <none>
    kube-system     coredns-849545576b-dbzx8                  1/1     Running     0          81s   10.42.2.3       192.168.0.59    <none>           <none>
    kube-system     coredns-849545576b-rjtg2                  1/1     Running     0          84s   10.42.0.4       192.168.0.114   <none>           <none>
    kube-system     coredns-autoscaler-5dcd676cbd-6nmcr       1/1     Running     0          82s   10.42.2.2       192.168.0.59    <none>           <none>
    kube-system     metrics-server-697746ff48-2j4x6           1/1     Running     0          78s   10.42.1.2       192.168.0.101   <none>           <none>
    kube-system     rke-coredns-addon-deploy-job-2crsj        0/1     Completed   0          85s   192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     rke-ingress-controller-deploy-job-bltrl   0/1     Completed   0          74s   192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     rke-metrics-addon-deploy-job-vgxnf        0/1     Completed   0          80s   192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     rke-network-plugin-deploy-job-987zl       0/1     Completed   0          91s   192.168.0.59    192.168.0.59    <none>           <none>
    amdocs@obriensystemsu0:~$ kubectl top nodes
    NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    192.168.0.101   177m         1%     3278Mi          6%
    192.168.0.114   199m         1%     2982Mi          5%
    192.168.0.59    697m         4%     3122Mi          19%
    amdocs@obriensystemsu0:~$ kubectl get nodes
    NAME            STATUS   ROLES                      AGE     VERSION
    192.168.0.101   Ready    controlplane,etcd,worker   4h21m   v1.18.3
    192.168.0.114   Ready    controlplane,etcd,worker   4h21m   v1.18.3
    192.168.0.59    Ready    controlplane,etcd,worker   4h21m   v1.18.3
    amdocs@obriensystemsu0:~$ kubectl top nodes
    NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    192.168.0.101   194m         1%     3485Mi          6%
    192.168.0.114   214m         1%     3257Mi          6%
    192.168.0.59    439m         3%     3691Mi          23%
    amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces -o wide | grep collatz
    amdocs@obriensystemsu0:~$ kubectl apply -f daemonset.yaml
    daemonset.apps/collatz created
    amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces -o wide | grep collatz
    default   collatz-5khsq   1/1   Running   0   6s   10.42.2.9   192.168.0.59    <none>   <none>
    default   collatz-9vrx5   1/1   Running   0   6s   10.42.0.9   192.168.0.114   <none>   <none>
    default   collatz-rksmh   1/1   Running   0   6s   10.42.1.7   192.168.0.101   <none>   <none>
    amdocs@obriensystemsu0:~$ kubectl top nodes
    NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    192.168.0.101   11041m       92%    5626Mi          10%
    192.168.0.114   10613m       88%    6242Mi          11%
    192.168.0.59    11171m       79%    4781Mi          30%
    amdocs@obriensystemsu0:~$ kubectl delete -f daemonset.yaml
    daemonset.apps "collatz" deleted
    amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces -o wide | grep collatz
    default   collatz-5khsq   1/1   Terminating   0   70s   10.42.2.9   192.168.0.59    <none>   <none>
    default   collatz-9vrx5   1/1   Terminating   0   70s   10.42.0.9   192.168.0.114   <none>   <none>
    default   collatz-rksmh   1/1   Terminating   0   70s   10.42.1.7   192.168.0.101   <none>   <none>
    amdocs@obriensystemsu0:~$ kubectl logs -f collatz-4fdwb
    availableProc : 12
    fjps threads : 5,6
    freeMemory() : 855638016
    maxMemory() : 13715374080
    totalMemory() : 857735168
    12796,5,22,1222,8
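The `daemonset.yaml` applied above is not reproduced on this page. Below is a hypothetical sketch of what a full-saturation DaemonSet named `collatz` could look like; the container image name is an assumption, not the real manifest.

```shell
# Hypothetical reconstruction of daemonset.yaml for the collatz saturation test.
# The image name is an assumption - the actual manifest is not shown in the source.
cat > daemonset.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: collatz
spec:
  selector:
    matchLabels:
      app: collatz
  template:
    metadata:
      labels:
        app: collatz
    spec:
      containers:
        - name: collatz
          image: example/collatz:latest   # assumed image
EOF
```

With no resource limits set, the single pod per node is free to saturate all cores, which matches the ~80-92% CPU readings during the experiment.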
Experiment: Increase RAM on the nodes
On one of my OSX machines - the MacBook Pro 16 with 64G - I can verify total VM memory goes from 20G to close to 40G by doubling the size of both nodes.
Both nodes at 10G:

    amdocs@obriensystemsu0:~$ kubectl top nodes
    NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    192.168.75.129   171m         2%     1689Mi          17%
    192.168.75.130   186m         2%     1868Mi          18%
    amdocs@obriensystemsu0:~$ free
              total     used      free      shared   buff/cache   available
    Mem:      10221640  1574784   6971320   27716    1675536      8250376
    Swap:     9289724   0         9289724

Shut down and reconfigure each node for 20G:

    amdocs@obriensystemsu0:~$ free
              total     used      free       shared   buff/cache   available
    Mem:      20543556  1732000   17256404   27716    1555152      18365120
    Swap:     9289724   0         9289724
    amdocs@obriensystemsu0:~$ kubectl top nodes
    NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    192.168.75.129   511m         3%     1785Mi          8%
    192.168.75.130   577m         4%     1924Mi          9%
Experiment: Shut down all nodes, then restart
    amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces -o wide
    NAMESPACE       NAME                                      READY   STATUS              RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
    ingress-nginx   default-http-backend-598b7d7dbd-hwk58     0/1     NodeAffinity        0          17m   <none>           192.168.75.130   <none>           <none>
    ingress-nginx   default-http-backend-598b7d7dbd-kjkfs     0/1     ContainerCreating   0          3s    <none>           192.168.75.129   <none>           <none>
    ingress-nginx   nginx-ingress-controller-q7fmt            0/1     Running             2          17m   192.168.75.129   192.168.75.129   <none>           <none>
    ingress-nginx   nginx-ingress-controller-xwg5g            0/1     Running             1          17m   192.168.75.130   192.168.75.130   <none>           <none>
    kube-system     canal-fxk8k                               2/2     Running             2          17m   192.168.75.130   192.168.75.130   <none>           <none>
    kube-system     canal-t64zb                               2/2     Running             2          17m   192.168.75.129   192.168.75.129   <none>           <none>
    kube-system     coredns-849545576b-6wg5w                  0/1     ContainerCreating   0          3s    <none>           192.168.75.129   <none>           <none>
    kube-system     coredns-849545576b-l9wlh                  0/1     NodeAffinity        0          16m   <none>           192.168.75.130   <none>           <none>
    kube-system     coredns-849545576b-qjq2c                  0/1     NodeAffinity        0          17m   <none>           192.168.75.129   <none>           <none>
    kube-system     coredns-849545576b-s7b8q                  0/1     Running             0          3s    10.42.0.6        192.168.75.130   <none>           <none>
    kube-system     coredns-autoscaler-5dcd676cbd-7prsh       0/1     NodeAffinity        0          17m   <none>           192.168.75.130   <none>           <none>
    kube-system     coredns-autoscaler-5dcd676cbd-x2jh6       0/1     ContainerCreating   0          3s    <none>           192.168.75.129   <none>           <none>
    kube-system     metrics-server-697746ff48-89fch           0/1     NodeAffinity        0          17m   <none>           192.168.75.129   <none>           <none>
    kube-system     metrics-server-697746ff48-knst6           0/1     ContainerCreating   0          3s    <none>           192.168.75.130   <none>           <none>
    kube-system     rke-coredns-addon-deploy-job-p5m85        0/1     Completed           0          17m   192.168.75.130   192.168.75.130   <none>           <none>
    kube-system     rke-ingress-controller-deploy-job-7lv76   0/1     Completed           0          17m   192.168.75.130   192.168.75.130   <none>           <none>
    kube-system     rke-metrics-addon-deploy-job-89w94        0/1     Completed           0          17m   192.168.75.130   192.168.75.130   <none>           <none>
    kube-system     rke-network-plugin-deploy-job-gvbcw       0/1     Completed           0          17m   192.168.75.130   192.168.75.130   <none>           <none>
    amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces -o wide | grep 0/1
    ingress-nginx   default-http-backend-598b7d7dbd-hwk58     0/1     NodeAffinity        0          18m   <none>           192.168.75.130   <none>           <none>
    kube-system     coredns-849545576b-l9wlh                  0/1     NodeAffinity        0          18m   <none>           192.168.75.130   <none>           <none>
    kube-system     coredns-849545576b-qjq2c                  0/1     NodeAffinity        0          18m   <none>           192.168.75.129   <none>           <none>
    kube-system     coredns-autoscaler-5dcd676cbd-7prsh       0/1     NodeAffinity        0          18m   <none>           192.168.75.130   <none>           <none>
    kube-system     metrics-server-697746ff48-89fch           0/1     NodeAffinity        0          18m   <none>           192.168.75.129   <none>           <none>
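The pods left in NodeAffinity after a full restart sit in phase Failed while the controllers schedule replacements; a sketch of cleaning them up (assumes kubectl is pointed at this cluster):

```shell
# Remove pods stranded in NodeAffinity (phase Failed) after the restart.
# Replacement pods are already being created, so this only clears the leftovers.
kubectl delete pods --all-namespaces --field-selector=status.phase=Failed
```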
20211218: Upgrade RKE cluster on OSX from Kubernetes 1.18 to 1.21
Current state
Revisit the NAT setup for port forwarding (SSH on port 2022)
Ubuntu 18 VM, VMware Fusion 12.2.0, RKE 1.1.3:

    vi cluster.yml   # switch ip
    # ssh into the vm
    ssh amdocs@192.168.15.23 -p 2022
    sudo cp kube_config_cluster.yml ~/.kube/config
    kubectl get pods --all-namespaces
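With the NAT forward in place, the kubeconfig can also be pulled out of the VM to the host instead of copied inside it; a sketch, assuming the host-side destination path:

```shell
# Copy the generated kubeconfig from the VM over the NAT-forwarded SSH port 2022
# (destination path on the host is an assumption)
scp -P 2022 amdocs@192.168.15.23:~/.kube/config ~/.kube/config
kubectl get nodes
```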
20220101: RKE 1.3.3 ships with Kubernetes 1.21.7 for OSX 12 Intel under Fusion 12.1.2
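The move from RKE 1.1.3 to 1.3.3 can be sketched as follows; the release-asset URL follows the usual rancher/rke release naming and is an assumption to verify:

```shell
# Fetch the RKE 1.3.3 binary (linux-amd64 asset name assumed from the usual release layout)
wget https://github.com/rancher/rke/releases/download/v1.3.3/rke_linux-amd64
chmod +x rke_linux-amd64
mv rke_linux-amd64 rke
./rke version
# Rerunning against the existing cluster.yml moves the cluster to the bundled v1.21.7
sudo ./rke up
```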
On Mac Mini 2020 Intel: upgrade kubectl
    amdocs@obriensystemsu0:~$ kubectl get nodes
    NAME            STATUS   ROLES                      AGE   VERSION
    192.168.15.21   Ready    controlplane,etcd,worker   98m   v1.21.7
    amdocs@obriensystemsu0:~$ kubectl top nodes
    NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    192.168.15.21   392m         3%     4100Mi          34%
    amdocs@obriensystemsu0:~$ ./rke version
    INFO[0000] Running RKE version: v1.3.3
    Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7", GitCommit:"1f86634ff08f37e54e8bfcd86bc90b61c98f84d4", GitTreeState:"clean", BuildDate:"2021-11-17T14:35:38Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
    amdocs@obriensystemsu0:~$ kubectl version
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7", GitCommit:"1f86634ff08f37e54e8bfcd86bc90b61c98f84d4", GitTreeState:"clean", BuildDate:"2021-11-17T14:35:38Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
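The output above shows a v1.18.3 client talking to a v1.21.7 server. Since the client platform is linux/amd64, one way to clear the skew is to upgrade kubectl inside the Ubuntu VM; the download URL follows the standard dl.k8s.io release path and should be verified before use:

```shell
# Upgrade the kubectl client inside the Ubuntu VM to match the v1.21.7 server
# (standard dl.k8s.io release path; verify the URL before use)
curl -LO https://dl.k8s.io/release/v1.21.7/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version --client
```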