Kubernetes Developer Guide | Helm Development Guide | Reference Architecture
Kubernetes is the de facto container orchestrator and control plane. As of August 2019, VMware is adopting Kubernetes as the control plane for vSphere. Amazon EKS running on Fargate is the state of the art for serverless managed container services. You can also run your own Kubernetes cluster on any cloud or on-premises system using Rancher RKE.
Quickstart
Directly from the source https://github.com/kelseyhightower/kubernetes-the-hard-way
Installing kubectl - alternate
kubectl ships with Docker Desktop. However, if you are running in a constrained environment and can only reference a remote Kubernetes cluster, install kubectl and helm manually.
Manual kubectl install - Windows
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/windows/amd64/kubectl.exe # add the exe to your path
Manual kubectl install - linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# or pin a specific version
curl -LO https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl
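Optionally, verify the binary against its published sha256 digest before trusting it - a hedged sketch pinned to the example version above (it skips itself when there is no network):

```shell
# Download a pinned kubectl and verify its sha256 before installing.
KUBECTL_VERSION="v1.20.0"
BASE="https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64"
if curl -fsSLO "${BASE}/kubectl" && curl -fsSLO "${BASE}/kubectl.sha256"; then
  # kubectl.sha256 contains only the digest, so append the filename for sha256sum -c
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check --status \
    && kubectl_status="verified" || kubectl_status="checksum mismatch"
  chmod +x kubectl
else
  kubectl_status="download skipped (no network)"
fi
echo "kubectl: ${kubectl_status}"
```

Only move the binary into /usr/local/bin once the checksum reports verified.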
Get your public and private keys on the Ubuntu 16.04 VM.
Add your public key to authorized_keys if it is not already there - AWS does this for you, OpenStack may not.
Get the RKE script from Jira, Gerrit, or by cloning https://github.com/obrienlabs/magellan.git for https://github.com/obrienlabs/magellan/blob/master/kubernetes/rke_setup.sh - or download RKE directly from https://github.com/rancher/rke/releases
Versions
Kubernetes 1.16 is out: https://kubernetes.io/blog/2019/09/18/kubernetes-1-16-release-announcement/ - as of 20200511, Docker Desktop 2.3.0.2 supports Kubernetes 1.16.5.
Kubernetes 1.18.8 ships in Docker Desktop 2.4.0.0 as of 20201002.
| Installer | Kubernetes | Helm | Docker | Go | Released |
|---|---|---|---|---|---|
| OKD 3/4 | | | | | |
| K0S - Kubernetes distribution | 1.19.3 | | | | |
| | 1.19.3 | | | | |
| RKE 1.1.9 | 1.18.9 | 3.2.1 | 19.03 | | 202010 |
| RKE 1.1.3 | 1.18.3 | | | | |
| RKE 1.0.8 | 1.17.5 | 3.2.1 | 19.03.8 | 1.13.9 | 202005 |
| RKE 0.12 | 1.14.6 | 2.14.3 | 19.03.2 | | |
| Docker Desktop OSX 2.2.0.5 | 1.15.5 | 3.1.0 | 19.03.5 | 1.13.8 | |
| Docker Desktop OSX 2.3.0.2 | 1.16.5 | 3.2.1 | 19.03.8 | | 20200511 |
| Docker Desktop 2.4.0.0 | 1.18.8 | 3.2.1 | 19.03.12 | 1.13.10 | 20201002 |
| Docker Desktop 2.5.0.1 | 1.19.3 | | 19.03.13 | | |
| Docker Desktop OSX 3.0.3 | 1.19.3 | | | | 20201201 |
| 3.1.0 | 1.19.3 | | 20.10.2 | | |
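To compare a local machine against the table above, a small sketch that reports whichever of the four tools are installed (missing tools are reported rather than treated as an error):

```shell
# Survey the locally installed toolchain for comparison with the version table.
summary=""
for tool in kubectl helm docker go; do
  if command -v "$tool" >/dev/null 2>&1; then
    case "$tool" in
      # --short was removed from newer kubectl, so fall back to the long form
      kubectl) v=$(kubectl version --client --short 2>/dev/null || kubectl version --client 2>/dev/null) ;;
      helm)    v=$(helm version --short 2>/dev/null || helm version 2>/dev/null) ;;
      docker)  v=$(docker --version) ;;
      go)      v=$(go version) ;;
    esac
  else
    v="not installed"
  fi
  summary="${summary}${tool}: ${v}\n"
done
printf "%b" "$summary"
```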
RKE Installations
Verify RKE ports in https://rancher.com/docs/rke/latest/en/os/ and https://github.com/rancher/rke/releases/
Verify Docker versions in https://docs.docker.com/engine/release-notes/
Manual Installation of RKE - Rancher Kubernetes Engine - see Kubernetes RKE Cluster on 4 Intel NUC machines with 64G RAM
or use my automated script Quickstart
Install Docker
https://github.com/rancher/rke/releases/
Ubuntu 22.04
RKE 1.4.3 pairs with Docker 24.10:
sudo curl https://releases.rancher.com/install-docker/24.10.sh | sh
The Rancher install-docker scripts work up to Ubuntu 20.04 but not 22.04 - on 22.04 follow the official Docker install instead:
https://docs.docker.com/engine/install/ubuntu/
sudo apt update
sudo apt upgrade
sudo apt-get install curl
sudo curl https://releases.rancher.com/install-docker/20.10.sh | sh
sudo usermod -aG docker <user>
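A small post-install sanity check - it confirms the docker CLI is on the PATH and whether the current user is in the docker group yet (the usermod above only takes effect after a logout/login):

```shell
# Post-install check: docker CLI presence and docker group membership.
if command -v docker >/dev/null 2>&1; then
  docker_cli="$(docker --version)"
else
  docker_cli="docker CLI not found"
fi
if id -nG 2>/dev/null | tr ' ' '\n' | grep -qx docker; then
  group_status="user is in the docker group"
else
  group_status="user is NOT in the docker group - run: sudo usermod -aG docker \$USER and re-login"
fi
echo "$docker_cli"
echo "$group_status"
```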
Install Docker directly on EC2
ubuntu@ip-10-0-0-129:~$ sudo snap install docker
docker 19.03.11 from Canonical✓ installed
cpus are defaulted only to 2 - fixing
Install Podman as alternative to Docker on Redhat RHEL 8
Redhat Enterprise Linux#InstallingPodmanasanalternativetoDockeronRedhatRHEL8
Single Node Kubernetes cluster running RKE on AWS EC2 with Helm
Quickstart Kubernetes Install using RKE on EC2
Run 20201017
biometric:kubernetes michaelobrien$ sudo scp ~/keys/onap_rsa ubuntu@services.obrienlabs.cloud:~/
onap_rsa 100% 1675 33.6KB/s 00:00
biometric:kubernetes michaelobrien$ ssh ubuntu@services.obrienlabs.cloud
ubuntu@ip-172-31-91-213:~$ ls
onap_rsa
ubuntu@ip-172-31-91-213:~$ sudo chmod 400 onap_rsa
ubuntu@ip-172-31-91-213:~$ sudo cp onap_rsa ~/.ssh
# verify
ubuntu@ip-172-31-91-213:~$ cat ~/.ssh/authorized_keys
sudo curl https://releases.rancher.com/install-docker/19.03.sh | sh
sudo usermod -aG docker ubuntu
wget https://github.com/rancher/rke/releases/download/v1.1.9/rke_linux-amd64
mv rke_linux-amd64 rke
sudo mv ./rke /usr/local/bin/rke
sudo chmod 777 /usr/local/bin/rke
ubuntu@ip-172-31-91-213:~$ rke --version
rke version v1.1.9
ubuntu@ip-172-31-91-213:~$ rke config
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: ~/.ssh/onap_rsa
[+] Number of Hosts [1]:
[+] SSH Address of host (1) [none]: 34.200.202..
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (services.obrienlabs.cloud) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (services.obrienlabs.cloud) [ubuntu]:
[+] Is host (services.obrienlabs.cloud) a Control Plane host (y/n)? [y]: y
[+] Is host (services.obrienlabs.cloud) a Worker host (y/n)? [n]: y
[+] Is host (services.obrienlabs.cloud) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (services.obrienlabs.cloud) [none]:
[+] Internal IP of host (services.obrienlabs.cloud) [none]:
[+] Docker socket path on host (services.obrienlabs.cloud) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.18.9-rancher1]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:
ubuntu@ip-172-31-91-213:~$ vi cluster.yml
ubuntu@ip-172-31-91-213:~$ sudo rke up
INFO[0000] Running RKE version: v1.1.9
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates
INFO[0000] [certificates] Generating admin certificates and kubeconfig
INFO[0000] Successfully Deployed state file at [./cluster.rkestate]
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [34.200.202.57]
INFO[0000] [network] Deploying port listener containers
INFO[0000] Pulling image [rancher/rke-tools:v0.1.65] on host [34.200.202.57], try #1
INFO[0004] Image [rancher/rke-tools:v0.1.65] exists on host [34.200.202.57]
ubuntu@ip-172-31-91-213:~$ sudo curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubectl
ubuntu@ip-172-31-91-213:~$ sudo mv ./kubectl /usr/local/bin/kubectl
ubuntu@ip-172-31-91-213:~$ sudo chmod +x /usr/local/bin/kubectl
ubuntu@ip-172-31-91-213:~$ sudo mkdir ~/.kube
ubuntu@ip-172-31-91-213:~$ sudo cp kube_config_cluster.yml ~/.kube/config
ubuntu@ip-172-31-91-213:~$ sudo chmod 777 ~/.kube/config
ubuntu@ip-172-31-91-213:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx default-http-backend-598b7d7dbd-bzfwh 1/1 Running 0 10m 10.42.0.2 34.200.202.57 <none> <none>
ingress-nginx nginx-ingress-controller-qmpdv 1/1 Running 0 10m 34.200.202.57 34.200.202.57 <none> <none>
kube-system canal-szxkw 2/2 Running 0 11m 34.200.202.57 34.200.202.57 <none> <none>
kube-system coredns-849545576b-j5zn5 1/1 Running 0 11m 10.42.0.3 34.200.202.57 <none> <none>
kube-system coredns-autoscaler-5dcd676cbd-t6dsd 1/1 Running 0 11m 10.42.0.4 34.200.202.57 <none> <none>
kube-system metrics-server-697746ff48-wdr66 1/1 Running 0 11m 10.42.0.5 34.200.202.57 <none> <none>
kube-system rke-coredns-addon-deploy-job-bpdx6 0/1 Completed 0 11m 34.200.202.57 34.200.202.57 <none> <none>
kube-system rke-ingress-controller-deploy-job-6b7s5 0/1 Completed 0 10m 34.200.202.57 34.200.202.57 <none> <none>
kube-system rke-metrics-addon-deploy-job-44jd2 0/1 Completed 0 11m 34.200.202.57 34.200.202.57 <none> <none>
kube-system rke-network-plugin-deploy-job-6m7sc 0/1 Completed 0 11m 34.200.202.57 34.200.202.57 <none> <none>
ubuntu@ip-172-31-91-213:~$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
34.200.202.57 145m 7% 1925Mi 24%
ubuntu@ip-172-31-91-213:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
34.200.202.57 Ready controlplane,etcd,worker 11m v1.18.9
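With the node reporting Ready, an optional smoke test can confirm workloads schedule - this sketch deploys a throwaway nginx deployment, checks it rolls out, then cleans up; it skips itself when no cluster is reachable:

```shell
# Smoke test a freshly built cluster with a disposable nginx deployment.
if command -v kubectl >/dev/null 2>&1 && kubectl get nodes >/dev/null 2>&1; then
  kubectl create deployment smoke-nginx --image=nginx
  kubectl expose deployment smoke-nginx --type=NodePort --port=80
  kubectl rollout status deployment/smoke-nginx --timeout=120s \
    && smoke_result="ok" || smoke_result="rollout failed"
  # clean up after the check
  kubectl delete service/smoke-nginx deployment/smoke-nginx
else
  smoke_result="skipped - no reachable cluster"
fi
echo "smoke test: ${smoke_result}"
```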
Add Grafana dashboard
vi grafana-datasource-config.yaml
ubuntu@ip-172-31-91-213:~$ kubectl create namespace monitoring
namespace/monitoring created
ubuntu@ip-172-31-91-213:~$ kubectl create -f grafana-datasource-config.yaml
configmap/grafana-datasources created
ubuntu@ip-172-31-91-213:~/grafana$ kubectl create -f deployment.yaml
deployment.apps/grafana created
ubuntu@ip-172-31-91-213:~/grafana$ kubectl create -f service.yaml
service/grafana created
ubuntu@ip-172-31-91-213:~/grafana$ kubectl get pods --all-namespaces
monitoring grafana-86b84774bb-xct98 1/1 Running 0 2m21s
ubuntu@ip-172-31-91-213:~/grafana$ kubectl get services --all-namespaces
monitoring grafana NodePort 10.43.36.85 <none> 3000:32000/TCP 52s
http://services.obrienlabs.cloud:32000/
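The service.yaml applied above is not reproduced on this page; a minimal sketch consistent with the NodePort shown (3000:32000) would look like this - the `app: grafana` selector is an assumption about the deployment's pod labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  selector:
    app: grafana          # assumed label on the grafana deployment's pods
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 32000     # matches the 3000:32000/TCP mapping above
```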
Private SSH key
scp your private key to the box - ideally to ~/.ssh - and chmod 400 it; make sure the matching public key is in authorized_keys
Elastic Reserved IP
get a VIP or EIP and assign this to your VM
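On AWS this can be scripted with the CLI - a hedged sketch where the instance id is a placeholder; the block skips itself when aws credentials are not configured:

```shell
# Allocate an Elastic IP and associate it with the VM (AWS CLI).
INSTANCE_ID="i-0123456789abcdef0"   # placeholder - substitute your instance id
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  ALLOC_ID=$(aws ec2 allocate-address --domain vpc --query AllocationId --output text)
  aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id "$ALLOC_ID"
  eip_status="associated $ALLOC_ID with $INSTANCE_ID"
else
  eip_status="skipped - aws CLI not configured"
fi
echo "$eip_status"
```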
generate cluster.yml - optional
cluster.yml will be generated by the script rke_setup.sh
Azure config - no need to hand-build the yml. Watch the path of your 2 keys. Also, don't add an "addon" until you have one, or the config job will fail.
ubuntu@a-rke:~$ rke config --name cluster.yml
# use the updated Kubernetes 1.14.6 cluster.yml in the rke_setup.sh script
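For reference, a minimal single-node cluster.yml of the shape `rke config` produces - values mirror the defaults accepted in the interactive run earlier on this page; the address, user, and key path are placeholders:

```yaml
nodes:
  - address: 172.16.173.130            # placeholder node IP
    user: ubuntu
    role: [controlplane, worker, etcd]
    ssh_key_path: /home/ubuntu/.ssh/onap_rsa
network:
  plugin: canal
authentication:
  strategy: x509
authorization:
  mode: rbac
services:
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
```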
Setup SSH key access
see Developer Guide#Linux-Ubuntu16.04/18.04
# on your laptop/where your cert is
# chmod 777 your cert before you scp it over
scp ~/wse/onap_rsa ubuntu@kub0:~/
# on the host
sudo mkdir ~/.ssh
sudo cp onap_rsa ~/.ssh
sudo chmod 400 ~/.ssh/onap_rsa
sudo chown ubuntu:ubuntu ~/.ssh/onap_rsa
# on the target - add the public key to authorized_keys if not already associated with the VM
$ cat ~/.ssh/onap_rsa.pub
ssh-rsa AAAAB3N......trics
ubuntu@ubuntu:~$ sudo vi ~/.ssh/authorized_keys
# login from another host
$ ssh -i ~/.ssh/onap_rsa ubuntu@192.168.20.137
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-62-generic x86_64)
Last login: Mon Dec 30 08:14:43 2019 from 192.168.20.137
# back on the target - check for the session
ubuntu@ubuntu:~$ who
ubuntu tty7 2019-11-01 04:47 (:0)
ubuntu pts/1 2019-12-30 08:17 (192.168.20.137)
# fix the VM if required
sudo nano /etc/apt/sources.list
# remove any "deb cdrom:
Disable Password Authentication
see Developer Guide#Linux-Ubuntu16.04/18.04
ubuntu@ubuntu:~$ sudo vi /etc/ssh/sshd_config
PasswordAuthentication no
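After the edit, confirm the setting actually landed before closing your session - a small sketch that reports the effective PasswordAuthentication line; restart sshd afterwards (sudo systemctl restart sshd) from a session you keep open as a safety net:

```shell
# Report the effective PasswordAuthentication setting from sshd_config.
if [ -r /etc/ssh/sshd_config ]; then
  pw_setting=$(grep -E '^[[:space:]]*PasswordAuthentication' /etc/ssh/sshd_config \
    || echo "PasswordAuthentication not set (sshd defaults to yes)")
else
  pw_setting="no readable sshd_config on this host"
fi
echo "$pw_setting"
```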
Install Kubernetes RKE script
# this test on a VMWare VM on OSX
git clone --recurse-submodules https://github.com/obrienlabs/magellan.git
cd magellan/kubernetes
#chmod 777 rke_setup.sh
amdocs@obriensystemsu0:~/magellan/kubernetes$ sudo ./rke_setup.sh -b master -s 172.16.173.130 -e obl -k onap_rsa -l ubuntu
please supply your ssh key as provided by the -k keyname - it must be chmod 400 and chown user:user in ~/.ssh/
The RKE version specific cluster.yaml is already integrated in this script for 0.2.8 no need for below generation...
rke config --name cluster.yml
specifically
address: 172.16.173.130
user: ubuntu
ssh_key_path: /home/ubuntu/.ssh/onap_rsa
Installing on 172.16.173.130 for master: RKE: 0.2.8 Kubectl: 1.14.6 Helm: 2.14.3 Docker: 19.03.2 username: ubuntu
Install docker - If you must install as non-root - comment out the docker install below - run it separately, run the user mod, logout/login and continue this script
% Total % Received % Xferd Average Speed Time Time Time Current
100 15429 100 15429 0 0 22864 0 --:--:-- --:--:-- --:--:-- 22891
+ sh -c apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]
Fetched 325 kB in 0s (487 kB/s)
Reading package lists... Done
+ sh -c apt-get install -y -q apt-transport-https ca-certificates curl software-properties-common
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed: libcurl3-gnutls python3-software-properties software-properties-gtk
The following packages will be upgraded: apt-transport-https ca-certificates curl libcurl3-gnutls python3-software-properties software-properties-common software-properties-gtk
7 upgraded, 0 newly installed, 0 to remove and 597 not upgraded.
Need to get 593 kB of archives. After this operation, 55.3 kB disk space will be freed. Get:1 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 curl amd64 7.47.0-1ubuntu2.14 [139 kB] Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcurl3-gnutls amd64 7.47.0-1ubuntu2.14 [184 kB] Get:3 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt-transport-https amd64 1.2.32 [26.5 kB] Get:4 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 ca-certificates all 20170717~16.04.2 [167 kB] Get:5 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 software-properties-common all 0.96.20.9 [9,452 B] Get:6 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 software-properties-gtk all 0.96.20.9 [47.2 kB] Get:7 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-software-properties all 0.96.20.9 [20.1 kB] Fetched 593 kB in 1s (436 kB/s) Preconfiguring packages ... (Reading database ... 182650 files and directories currently installed.) Preparing to unpack .../curl_7.47.0-1ubuntu2.14_amd64.deb ... Unpacking curl (7.47.0-1ubuntu2.14) over (7.47.0-1ubuntu2.2) ... Preparing to unpack .../libcurl3-gnutls_7.47.0-1ubuntu2.14_amd64.deb ... Unpacking libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.14) over (7.47.0-1ubuntu2.2) ... Preparing to unpack .../apt-transport-https_1.2.32_amd64.deb ... Unpacking apt-transport-https (1.2.32) over (1.2.24) ... Preparing to unpack .../ca-certificates_20170717~16.04.2_all.deb ... Unpacking ca-certificates (20170717~16.04.2) over (20160104ubuntu1) ... Preparing to unpack .../software-properties-common_0.96.20.9_all.deb ... Unpacking software-properties-common (0.96.20.9) over (0.96.20.7) ... Preparing to unpack .../software-properties-gtk_0.96.20.9_all.deb ... Unpacking software-properties-gtk (0.96.20.9) over (0.96.20.7) ... Preparing to unpack .../python3-software-properties_0.96.20.9_all.deb ... Unpacking python3-software-properties (0.96.20.9) over (0.96.20.7) ... 
Processing triggers for man-db (2.7.5-1) ... Processing triggers for libc-bin (2.23-0ubuntu9) ... Processing triggers for dbus (1.10.6-1ubuntu3.3) ... Processing triggers for hicolor-icon-theme (0.15-0ubuntu1) ... Processing triggers for shared-mime-info (1.5-2ubuntu0.1) ... Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ... Processing triggers for desktop-file-utils (0.22-1ubuntu5.1) ... Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20160824-0ubuntu1) ... Rebuilding /usr/share/applications/bamf-2.index... Processing triggers for mime-support (3.59ubuntu1) ... Setting up libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.14) ... Setting up curl (7.47.0-1ubuntu2.14) ... Setting up apt-transport-https (1.2.32) ... Setting up ca-certificates (20170717~16.04.2) ... Setting up python3-software-properties (0.96.20.9) ... Setting up software-properties-common (0.96.20.9) ... Setting up software-properties-gtk (0.96.20.9) ... Processing triggers for libc-bin (2.23-0ubuntu9) ... Processing triggers for ca-certificates (20170717~16.04.2) ... Updating certificates in /etc/ssl/certs... 17 added, 42 removed; done. Running hooks in /etc/ca-certificates/update.d... Removing debian:WoSign.pem done. done. + curl -fsSl https://download.docker.com/linux/ubuntu/gpg + sh -c apt-key add - OK + sh -c add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" + [ ubuntu = debian ] + sh -c apt-get update Get:1 https://download.docker.com/linux/ubuntu xenial InRelease [66.2 kB] ... + sh -c apt-get install -y -q docker-ce=5:19.03.2~3-0~ubuntu-xenial ... 1 upgraded, 6 newly installed, 0 to remove and 596 not upgraded. Need to get 87.8 MB of archives. After this operation, 390 MB of additional disk space will be used. Get:1 https://download.docker.com/linux/ubuntu xenial/stable amd64 containerd.io amd64 1.2.6-3 [22.6 MB] ... Processing triggers for ureadahead (0.100.0-19) ... 
+ sh -c docker version
Client: Docker Engine - Community
 Version: 19.03.2
 API version: 1.40
 Go version: go1.12.8
 Git commit: 6a30dfc
 Built: Thu Aug 29 05:28:19 2019
 OS/Arch: linux/amd64
 Experimental: false
Server: Docker Engine - Community
 Engine:
  Version: 19.03.2
  API version: 1.40 (minimum version 1.12)
  Go version: go1.12.8
  Git commit: 6a30dfc
  Built: Thu Aug 29 05:26:54 2019
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.2.6
  GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version: 1.0.0-rc8
  GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version: 0.18.0
  GitCommit: fec3683
If you would like to use Docker as a non-root user, you should now consider adding your user to the "docker" group with something like: sudo usermod -aG docker your-user
Remember that you will have to log out and back in for this to take effect!
...
Install RKE
--2019-09-25 21:03:10-- https://github.com/rancher/rke/releases/download/v0.2.8/rke_linux-amd64
Resolving github.com (github.com)... 140.82.113.3
Connecting to github.com (github.com)|140.82.113.3|:443... connected.
HTTP request sent, awaiting response...
302 Found Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/108337180/839f6f80-c343-11e9-9c3c-49c76b856e47?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190926%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190926T020311Z&X-Amz-Expires=300&X-Amz-Signature=5a94946dcef52d35177ee4b2eba8cb8e5cf58c0f9251cf41ce8a8bf96e06ce00&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Drke_linux-amd64&response-content-type=application%2Foctet-stream [following] --2019-09-25 21:03:11-- https://github-production-release-asset-2e65be.s3.amazonaws.com/108337180/839f6f80-c343-11e9-9c3c-49c76b856e47?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190926%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190926T020311Z&X-Amz-Expires=300&X-Amz-Signature=5a94946dcef52d35177ee4b2eba8cb8e5cf58c0f9251cf41ce8a8bf96e06ce00&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Drke_linux-amd64&response-content-type=application%2Foctet-stream Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.128.11 Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.128.11|:443... connected. HTTP request sent, awaiting response... 
200 OK Length: 40394065 (39M) [application/octet-stream] Saving to: ‘rke_linux-amd64’ rke_linux-amd64 100%[============================================================================================================================>] 38.52M 15.5MB/s in 2.5s 2019-09-25 21:03:13 (15.5 MB/s) - ‘rke_linux-amd64’ saved [40394065/40394065] Install make - required for beijing+ - installed via yum groupinstall Development Tools in RHEL E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable) E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it? % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 41.1M 100 41.1M 0 0 26.5M 0 0:00:01 0:00:01 --:--:-- 26.5M --2019-09-25 21:03:15-- http://storage.googleapis.com/kubernetes-helm/helm-v2.14.3-linux-amd64.tar.gz Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.1.16, 2607:f8b0:400b:801::2010 Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.1.16|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 26533763 (25M) [application/x-tar] Saving to: ‘helm-v2.14.3-linux-amd64.tar.gz’ helm-v2.14.3-linux-amd64.tar.gz 100%[============================================================================================================================>] 25.30M 38.1MB/s in 0.7s 2019-09-25 21:03:16 (38.1 MB/s) - ‘helm-v2.14.3-linux-amd64.tar.gz’ saved [26533763/26533763] linux-amd64/ linux-amd64/helm linux-amd64/README.md linux-amd64/LICENSE linux-amd64/tiller Bringing RKE up - using supplied cluster.yml INFO[0000] Initiating Kubernetes cluster INFO[0000] [dialer] Setup tunnel for host [172.16.173.130] INFO[0000] [state] Pulling image [rancher/rke-tools:v0.1.42] on host [172.16.173.130] INFO[0005] [state] Successfully pulled image [rancher/rke-tools:v0.1.42] on host [172.16.173.130] INFO[0006] [state] Successfully started [cluster-state-deployer] container on host [172.16.173.130] INFO[0006] [certificates] Generating CA kubernetes certificates INFO[0006] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates INFO[0006] [certificates] Generating Kubernetes API server certificates INFO[0006] [certificates] Generating Service account token key INFO[0006] [certificates] Generating Kubernetes API server proxy client certificates INFO[0007] [certificates] Generating etcd-172.16.173.130 certificate and key INFO[0007] [certificates] Generating Kube Controller certificates INFO[0007] [certificates] Generating Kube Scheduler certificates INFO[0007] [certificates] Generating Kube Proxy certificates INFO[0007] [certificates] Generating Node certificate INFO[0007] [certificates] Generating admin certificates and kubeconfig INFO[0008] Successfully Deployed state file at [./cluster.rkestate] INFO[0008] Building Kubernetes cluster ...
INFO[0111] Finished building Kubernetes cluster successfully
wait 2 extra min for the cluster
1 more min
copy kube_config_cluter.yaml generated - to ~/.kube/config
Verify all pods up on the kubernetes system - will return localhost:8080 until a host is added
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx default-http-backend-5954bd5d8c-tqmrq 1/1 Running 0 2m19s
ingress-nginx nginx-ingress-controller-qfr48 1/1 Running 0 2m17s
kube-system canal-fdp4m 2/2 Running 0 2m34s
kube-system coredns-autoscaler-5d5d49b8ff-tnnjh 1/1 Running 0 2m27s
kube-system coredns-bdffbc666-zmqzg 1/1 Running 0 2m28s
kube-system metrics-server-7f6bd4c888-6xqkp 1/1 Running 0 2m22s
kube-system rke-coredns-addon-deploy-job-jbnhb 0/1 Completed 0 2m30s
kube-system rke-ingress-controller-deploy-job-m9xdp 0/1 Completed 0 2m20s
kube-system rke-metrics-addon-deploy-job-pbt2t 0/1 Completed 0 2m25s
kube-system rke-network-plugin-deploy-job-5z2vg 0/1 Completed 0 2m43s
install tiller/helm
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Creating /home/amdocs/.helm
Creating /home/amdocs/.helm/repository
Creating /home/amdocs/.helm/repository/cache
Creating /home/amdocs/.helm/repository/local
Creating /home/amdocs/.helm/plugins
Creating /home/amdocs/.helm/starters
Creating /home/amdocs/.helm/cache/archive
Creating /home/amdocs/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/amdocs/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy. To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available... deployment "tiller-deploy" successfully rolled out upgrade server side of helm in kubernetes Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} sleep 30 $HELM_HOME has been configured at /home/amdocs/.helm. Tiller (the Helm server-side component) has been upgraded to the current version. sleep 30 verify both versions are the same below Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} start helm server sleep 30 Regenerating index. This may take a moment. 
Now serving you on 127.0.0.1:8879 add local helm repo "local" has been added to your repositories NAME URL stable https://kubernetes-charts.storage.googleapis.com local http://127.0.0.1:8879 To enable grafana dashboard - do this after running cd.sh which brings up onap - or you may get a 302xx port conflict kubectl expose -n kube-system deployment monitoring-grafana --type=LoadBalancer --name monitoring-grafana-client to get the nodeport for a specific VM running grafana kubectl get services --all-namespaces | grep graf Client: Docker Engine - Community Version: 19.03.2 API version: 1.40 Go version: go1.12.8 Git commit: 6a30dfc Built: Thu Aug 29 05:28:19 2019 OS/Arch: linux/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 19.03.2 API version: 1.40 (minimum version 1.12) Go version: go1.12.8 Git commit: 6a30dfc Built: Thu Aug 29 05:26:54 2019 OS/Arch: linux/amd64 Experimental: false containerd: Version: 1.2.6 GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb runc: Version: 1.0.0-rc8 GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f docker-init: Version: 0.18.0 GitCommit: fec3683 Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 5m3s ingress-nginx 
default-http-backend ClusterIP 10.43.151.5 <none> 80/TCP 4m9s kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 4m18s kube-system metrics-server ClusterIP 10.43.71.16 <none> 443/TCP 4m13s kube-system tiller-deploy ClusterIP 10.43.194.75 <none> 44134/TCP 108s NAMESPACE NAME READY STATUS RESTARTS AGE ingress-nginx default-http-backend-5954bd5d8c-tqmrq 1/1 Running 0 4m9s ingress-nginx nginx-ingress-controller-qfr48 1/1 Running 0 4m7s kube-system canal-fdp4m 2/2 Running 0 4m24s kube-system coredns-autoscaler-5d5d49b8ff-tnnjh 1/1 Running 0 4m17s kube-system coredns-bdffbc666-zmqzg 1/1 Running 0 4m18s kube-system metrics-server-7f6bd4c888-6xqkp 1/1 Running 0 4m12s kube-system rke-coredns-addon-deploy-job-jbnhb 0/1 Completed 0 4m20s kube-system rke-ingress-controller-deploy-job-m9xdp 0/1 Completed 0 4m10s kube-system rke-metrics-addon-deploy-job-pbt2t 0/1 Completed 0 4m15s kube-system rke-network-plugin-deploy-job-5z2vg 0/1 Completed 0 4m33s kube-system tiller-deploy-7f4d76c4b6-nnts8 1/1 Running 0 108s finished! amdocs@obriensystemsu0:~/magellan/kubernetes$
Manual Installation of Kubernetes via RKE on Ubuntu 16.04 VM - optional
Determine RKE and Docker versions
Don't just use the latest Docker version - check the RKE release page to get the matched version pair - 0.1.15/17.03, 0.1.16/18.06, 1.1.9/19.03 - see https://github.com/docker/docker-ce/releases - currently https://github.com/docker/docker-ce/releases/tag/v18.06.3-ce
ubuntu@a-rke:~$ sudo curl https://releases.rancher.com/install-docker/18.06.sh | sh
ubuntu@a-rke:~$ sudo usermod -aG docker ubuntu
ubuntu@a-rke:~$ sudo docker version
Client:
 Version: 18.06.3-ce
 API version: 1.38
 Go version: go1.10.3
 Git commit: d7080c1
 Built: Wed Feb 20 02:27:18 2019
# install RKE
sudo wget https://github.com/rancher/rke/releases/download/v0.1.16/rke_linux-amd64
mv rke_linux-amd64 rke
sudo mv ./rke /usr/local/bin/rke
ubuntu@a-rke:~$ rke --version
rke version v0.1.16
# for 20201013
sudo wget https://github.com/rancher/rke/releases/download/v1.1.9/rke_linux-amd64
sudo curl https://releases.rancher.com/install-docker/19.03.sh | sh
Add Rancher Chart and cert-manager Chart
see also adding the default google storageclass and provisioner - Asynchronous Messaging using Kafka#AddAdditionalstorageprovisioner
https://certbot.eff.org/lets-encrypt/pip-apache
https://rancher.com/docs/rancher/v2.x/en/installation/k8s-install/helm-rancher/
https://cert-manager.io/docs/installation/kubernetes/
$ sudo helm repo add rancher-latest https://releases.rancher.com/server-charts/latest [sudo] password for amdocs: "rancher-latest" has been added to your repositories $ kubectl create namespace cattle-system namespace/cattle-system created $ kubectl create namespace cert-manager namespace/cert-manager created $ helm repo add jetstack https://charts.jetstack.io Error: open /home/amdocs/.helm/repository/repositories.lock: permission denied $ sudo helm repo add jetstack https://charts.jetstack.io "jetstack" has been added to your repositories $ sudo helm repo update Hang tight while we grab the latest from your chart repositories... ...Skip local chart repository ...Successfully got an update from the "rancher-latest" chart repository ...Successfully got an update from the "incubator" chart repository ...Successfully got an update from the "jetstack" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. $ sudo helm install --name cert-manager --namespace cert-manager --version 0.15.1 jetstack/cert-manager --set installCRDs=true NAME: cert-manager LAST DEPLOYED: Mon Jul 13 15:39:42 2020 NAMESPACE: cert-manager STATUS: DEPLOYED RESOURCES: ==> v1/ClusterRole NAME CREATED AT cert-manager-edit 2020-07-13T20:39:43Z cert-manager-view 2020-07-13T20:39:43Z ==> v1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 0/1 1 0 0s cert-manager-cainjector 0/1 1 0 0s cert-manager-webhook 0/1 1 0 0s ==> v1/Pod(related) NAME READY STATUS RESTARTS AGE cert-manager-5fbcbb85b-c2pvd 0/1 ContainerCreating 0 0s cert-manager-cainjector-8664665f47-d846p 0/1 ContainerCreating 0 0s cert-manager-webhook-65cf7bc9d4-2h5ms 0/1 ContainerCreating 0 0s ==> v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cert-manager ClusterIP 10.43.44.159 <none> 9402/TCP 0s cert-manager-webhook ClusterIP 10.43.75.82 <none> 443/TCP 0s ==> v1/ServiceAccount NAME SECRETS AGE cert-manager 1 0s cert-manager-cainjector 1 0s cert-manager-webhook 1 0s ==> 
v1beta1/ClusterRole NAME CREATED AT cert-manager-cainjector 2020-07-13T20:39:43Z cert-manager-controller-certificates 2020-07-13T20:39:43Z cert-manager-controller-challenges 2020-07-13T20:39:43Z cert-manager-controller-clusterissuers 2020-07-13T20:39:43Z cert-manager-controller-ingress-shim 2020-07-13T20:39:43Z cert-manager-controller-issuers 2020-07-13T20:39:43Z cert-manager-controller-orders 2020-07-13T20:39:43Z ==> v1beta1/ClusterRoleBinding NAME ROLE AGE cert-manager-cainjector ClusterRole/cert-manager-cainjector 0s cert-manager-controller-certificates ClusterRole/cert-manager-controller-certificates 0s cert-manager-controller-challenges ClusterRole/cert-manager-controller-challenges 0s cert-manager-controller-clusterissuers ClusterRole/cert-manager-controller-clusterissuers 0s cert-manager-controller-ingress-shim ClusterRole/cert-manager-controller-ingress-shim 0s cert-manager-controller-issuers ClusterRole/cert-manager-controller-issuers 0s cert-manager-controller-orders ClusterRole/cert-manager-controller-orders 0s ==> v1beta1/CustomResourceDefinition NAME CREATED AT certificaterequests.cert-manager.io 2020-07-13T20:39:43Z certificates.cert-manager.io 2020-07-13T20:39:43Z challenges.acme.cert-manager.io 2020-07-13T20:39:43Z clusterissuers.cert-manager.io 2020-07-13T20:39:43Z issuers.cert-manager.io 2020-07-13T20:39:43Z orders.acme.cert-manager.io 2020-07-13T20:39:43Z ==> v1beta1/MutatingWebhookConfiguration NAME WEBHOOKS AGE cert-manager-webhook 1 0s ==> v1beta1/Role NAME CREATED AT cert-manager-cainjector:leaderelection 2020-07-13T20:39:43Z cert-manager-webhook:dynamic-serving 2020-07-13T20:39:43Z cert-manager:leaderelection 2020-07-13T20:39:43Z ==> v1beta1/RoleBinding NAME ROLE AGE cert-manager-cainjector:leaderelection Role/cert-manager-cainjector:leaderelection 0s cert-manager-webhook:dynamic-serving Role/cert-manager-webhook:dynamic-serving 0s cert-manager:leaderelection Role/cert-manager:leaderelection 0s ==> v1beta1/ValidatingWebhookConfiguration NAME 
WEBHOOKS AGE cert-manager-webhook 1 0s NOTES: cert-manager has been deployed successfully! In order to begin issuing certificates, you will need to set up a ClusterIssuer or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer). More information on the different types of issuers and how to configure them can be found in our documentation: https://cert-manager.io/docs/configuration/ For information on how to configure cert-manager to automatically provision Certificates for Ingress resources, take a look at the `ingress-shim` documentation: https://cert-manager.io/docs/usage/ingress/ $ sudo helm install --name rancher rancher-latest/rancher --namespace cattle-system --set hostname=obriensystemsu0 NAME: rancher LAST DEPLOYED: Mon Jul 13 15:45:25 2020 NAMESPACE: cattle-system STATUS: DEPLOYED RESOURCES: ==> v1/ClusterRoleBinding NAME ROLE AGE rancher ClusterRole/cluster-admin 0s ==> v1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE rancher 0/3 3 0 0s ==> v1/Pod(related) NAME READY STATUS RESTARTS AGE rancher-658cd9fb6b-4hl8f 0/1 ContainerCreating 0 0s rancher-658cd9fb6b-99ph7 0/1 ContainerCreating 0 0s rancher-658cd9fb6b-q2g77 0/1 ContainerCreating 0 0s ==> v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rancher ClusterIP 10.43.81.9 <none> 80/TCP 0s ==> v1/ServiceAccount NAME SECRETS AGE rancher 1 0s ==> v1alpha2/Issuer NAME READY AGE rancher False 0s ==> v1beta1/Ingress NAME CLASS HOSTS ADDRESS PORTS AGE rancher <none> obriensystemsu0 80, 443 0s NOTES: Rancher Server has been installed. NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued and Ingress comes up. Check out our docs at https://rancher.com/docs/rancher/v2.x/en/ Browse to https://obriensystemsu0 Happy Containering! 
$ kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cattle-system cattle-cluster-agent-65f6b88b7b-wpgmx 0/1 Error 153 20h 10.42.0.47 192.168.75.129 <none> <none> cattle-system cattle-node-agent-gsqsk 1/1 Running 0 20h 192.168.75.129 192.168.75.129 <none> <none> cattle-system rancher-658cd9fb6b-4hl8f 1/1 Running 0 20h 10.42.0.44 192.168.75.129 <none> <none> cattle-system rancher-658cd9fb6b-99ph7 1/1 Running 0 20h 10.42.0.45 192.168.75.129 <none> <none> cattle-system rancher-658cd9fb6b-q2g77 1/1 Running 0 20h 10.42.0.46 192.168.75.129 <none> <none> cert-manager cert-manager-5fbcbb85b-c2pvd 1/1 Running 0 21h 10.42.0.43 192.168.75.129 <none> <none> cert-manager cert-manager-cainjector-8664665f47-d846p 1/1 Running 3 21h 10.42.0.41 192.168.75.129 <none> <none> cert-manager cert-manager-webhook-65cf7bc9d4-2h5ms 1/1 Running 0 21h 10.42.0.42 192.168.75.129 <none> <none> default kafka2-0 0/1 Pending 0 20h <none> <none> <none> <none> default kafka2-zookeeper-0 1/1 Running 0 20h 10.42.0.48 192.168.75.129 <none> <none> default kafka2-zookeeper-1 1/1 Running 0 20h 10.42.0.49 192.168.75.129 <none> <none> default kafka2-zookeeper-2 1/1 Running 0 20h 10.42.0.50 192.168.75.129 <none> <none> default local-storageclass-provisioner-lbfs9 1/1 Running 0 21h 10.42.0.34 192.168.75.129 <none> <none> ingress-nginx default-http-backend-598b7d7dbd-7x7k5 1/1 Running 2 4d14h 10.42.0.16 192.168.75.129 <none> <none> ingress-nginx nginx-ingress-controller-zc6rx 1/1 Running 2 4d14h 192.168.75.129 192.168.75.129 <none> <none> kube-system canal-66qlp 2/2 Running 4 4d14h 192.168.75.129 192.168.75.129 <none> <none> kube-system coredns-849545576b-d6rs4 1/1 Running 2 4d14h 10.42.0.14 192.168.75.129 <none> <none> kube-system coredns-autoscaler-5dcd676cbd-sdm6q 1/1 Running 2 4d14h 10.42.0.12 192.168.75.129 <none> <none> kube-system metrics-server-697746ff48-8589b 1/1 Running 2 4d14h 10.42.0.13 192.168.75.129 <none> <none> kube-system 
rke-coredns-addon-deploy-job-h5c45 0/1 Completed 0 4d14h 192.168.75.129 192.168.75.129 <none> <none> kube-system rke-ingress-controller-deploy-job-bcnw5 0/1 Completed 0 4d14h 192.168.75.129 192.168.75.129 <none> <none> kube-system rke-metrics-addon-deploy-job-tf998 0/1 Completed 0 4d14h 192.168.75.129 192.168.75.129 <none> <none> kube-system rke-network-plugin-deploy-job-dgp6l 0/1 Completed 0 4d14h 192.168.75.129 192.168.75.129 <none> <none> kube-system tiller-deploy-5d58456765-w7l8h 1/1 Running 2 4d14h 10.42.0.15 192.168.75.129 <none> <none> kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-wzw2r 1/1 Running 0 4d13h 10.42.0.18 192.168.75.129 <none> <none> kubernetes-dashboard kubernetes-dashboard-7b544877d5-9v65f 1/1 Running 0 4d13h 10.42.0.17 192.168.75.129 <none> <none> $ kubectl get storageclass NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE local-storage kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 22h sc-tcct7 (default) driver.longhorn.io Delete Immediate true 20h
Add Local storage class for PVC provisioning to RKE
Docker Desktop ships with a default hostpath StorageClass. RKE does not create a StorageClass by default, and one is required to resolve PersistentVolumeClaims.
Minikube ships with a standard hostpath StorageClass and provisioner, but the /tmp mapping to /mnt/sda1 does not survive VM reboots - so a second StorageClass can be used for verification.
see https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/, https://kubernetes.io/docs/concepts/storage/storage-classes/#local and https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#user-guide
Add an additional storage provisioner
see also adding the Rancher Longhorn StorageClass/provisioner in "Add Rancher Chart and cert-manager Chart"
$ vi storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
$ kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/local-storage created
$ kubectl get storageclass
NAME            PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  26s
Make the storage class default
see https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/
$ kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/local-storage patched
$ kubectl get storageclass
NAME                      PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage (default)   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  8m50s
# optionally turn off other defaults (here on minikube)
$ kubectl get storageclass
NAME                      PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage (default)   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  27s
standard (default)        k8s.io/minikube-hostpath       Delete          Immediate              false                  56m
$ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
storageclass.storage.k8s.io/standard patched
$ kubectl get storageclass
NAME                      PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage (default)   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  52s
standard                  k8s.io/minikube-hostpath       Delete          Immediate              false                  56m
Add the provisioner
$ git clone --depth=1 https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner.git
$ cd sig-storage-local-static-provisioner/
Edit helm/provisioner/values.yaml (or pass the change in through --set) so hostDir points at the node mount:
#hostDir: /mnt/fast-disks
hostDir: /mnt/sda1
$ sudo helm template ./helm/provisioner -f helm/provisioner/values.yaml --name local-storageclass --namespace default > local-volume-provisioner.generated.yaml
$ kubectl create -f local-volume-provisioner.generated.yaml
configmap/local-storageclass-provisioner-config created
serviceaccount/local-storageclass-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/local-storageclass-provisioner-pv-binding created
clusterrole.rbac.authorization.k8s.io/local-storageclass-provisioner-node-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/local-storageclass-provisioner-node-binding created
daemonset.apps/local-storageclass-provisioner created
$ kubectl get pods --all-namespaces
NAMESPACE   NAME                                   READY   STATUS    RESTARTS   AGE
default     local-storageclass-provisioner-lbfs9   1/1     Running   0          44s
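The hostDir edit above lives in the chart's values file. A sketch of the relevant fragment - field names follow the sig-storage-local-static-provisioner chart of this era, so verify against your chart version:

```yaml
# helm/provisioner/values.yaml (excerpt) - assumed structure, check your chart version
classes:
- name: local-storage        # StorageClass the provisioner manages
  hostDir: /mnt/sda1         # was /mnt/fast-disks; mounts under this directory become PVs
  volumeMode: Filesystem
```

Each filesystem mounted under hostDir on a node is discovered by the DaemonSet and published as a local PersistentVolume.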
Add the mounts
Try a quick StatefulSet to test the PVC
On the default minikube provisioner, validate a StatefulSet:
vi ss_test.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-test
spec:
  serviceName: "local-service"
  replicas: 3
  selector:
    matchLabels:
      app: local-test
  template:
    metadata:
      labels:
        app: local-test
    spec:
      containers:
      - name: test-container
        image: k8s.gcr.io/busybox
        command:
        - "/bin/sh"
        args:
        - "-c"
        - "sleep 100000"
        volumeMounts:
        - name: local-vol
          mountPath: /usr/test-pod
  volumeClaimTemplates:
  - metadata:
      name: local-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
$ kubectl get storageclass
NAME                 PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage        kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  21m
standard (default)   k8s.io/minikube-hostpath       Delete          Immediate              false                  77m
$ kubectl apply -f ss_test.yaml
statefulset.apps/local-test created
$ kubectl describe pod local-test-0
Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "local-test-0": pod has unbound immediate PersistentVolumeClaims
Normal   Scheduled         <unknown>  default-scheduler  Successfully assigned default/local-test-0 to minikube
Normal   Pulling           4s         kubelet, minikube  Pulling image "k8s.gcr.io/busybox"
Normal   Pulled            2s         kubelet, minikube  Successfully pulled image "k8s.gcr.io/busybox"
Normal   Created           2s         kubelet, minikube  Created container test-container
Normal   Started           2s         kubelet, minikube  Started container test-container
$ kubectl get pv --all-namespaces
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
pvc-0571d435-0cfc-4e34-9d58-de6098e61546   1Gi        RWO            Delete           Bound    default/local-vol-local-test-1   standard                20s
pvc-88cbcaa0-2b9b-4e1c-b714-59af1bcbe799   1Gi        RWO            Delete           Bound    default/local-vol-local-test-0   standard                24s
$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     local-vol-local-test-0   Bound    pvc-88cbcaa0-2b9b-4e1c-b714-59af1bcbe799   1Gi        RWO            standard       34s
default     local-vol-local-test-1   Bound    pvc-0571d435-0cfc-4e34-9d58-de6098e61546   1Gi        RWO            standard       30s
$ kubectl get pods --all-namespaces
NAMESPACE   NAME                                   READY   STATUS    RESTARTS   AGE
default     local-storageclass-provisioner-8c8br   1/1     Running   1          35m
default     local-test-0                           1/1     Running   0          42s
default     local-test-1                           1/1     Running   0          38s
Delete everything:
$ kubectl delete -f ss_test.yaml
statefulset.apps "local-test" deleted
Wait until the pods terminate, then:
$ kubectl delete pv --all
persistentvolume "pvc-0571d435-0cfc-4e34-9d58-de6098e61546" deleted
persistentvolume "pvc-88cbcaa0-2b9b-4e1c-b714-59af1bcbe799" deleted
$ kubectl delete pvc --all
persistentvolumeclaim "local-vol-local-test-0" deleted
persistentvolumeclaim "local-vol-local-test-1" deleted
Rerun with storageClassName: "local-storage" in the YAML - not working yet; the PVC stays Pending (still triaging).
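A likely cause of the Pending PVC above: kubernetes.io/no-provisioner means no dynamic provisioning, so a claim against local-storage only binds if a matching PersistentVolume already exists - either pre-created manually or discovered by the static provisioner from hostDir mounts. A sketch of a manually created local PV that would satisfy the claim (PV name, path and node name are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-0
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/sda1/vol0           # directory must already exist on the node
  nodeAffinity:                     # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube                # placeholder node name
```

With WaitForFirstConsumer, the PVC stays Pending until a pod that uses it is scheduled, then binds to a PV whose nodeAffinity matches that pod's node.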
Read:
Kubernetes HA Cluster Production Installation
Minikube Single node Kubernetes cluster
Microk8s Single node Kubernetes cluster
Docker Desktop - Default single node Kubernetes cluster
2020 Oct - Docker Desktop 2.4.0.0 is out with Kubernetes 1.18.8 support
Docker Desktop 2.0.0.3 comes with Kubernetes v1.10.11 on top of Docker 18.09.2 - I would recommend moving up to the far more recent Kubernetes v1.14.3 on top of Docker 19.03.1, shipped as part of Docker Desktop 2.1.0.1.
Docker Desktop will give you a default hostpath storageclass
Kubernetes 1.10 | Kubernetes 1.14 |
---|---|
Docker Desktop comes with a built-in Kubernetes cluster (you must enable it in Preferences).
Enable Kubernetes
20201002 Install Docker Desktop 2.4.0.0 to get Kubernetes 1.18.8
PS F:\wse_helm\reference> docker version Client: Docker Engine - Community Azure integration 0.1.15 Version: 19.03.12 API version: 1.40 Go version: go1.13.10 Git commit: 48a66213fe Built: Mon Jun 22 15:43:18 2020 OS/Arch: windows/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 19.03.12 API version: 1.40 (minimum version 1.12) Go version: go1.13.10 Git commit: 48a66213fe Built: Mon Jun 22 15:49:27 2020 OS/Arch: linux/amd64 Experimental: false containerd: Version: v1.2.13 GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429 runc: Version: 1.0.0-rc10 GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd docker-init: Version: 0.18.0 GitCommit: fec3683 Kubernetes: Version: v1.16.6-beta.0 StackAPI: v1beta2
PS F:\wse_helm\reference> kubectl version Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:18:29Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
PS F:\wse_helm\reference> helm version version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}
After installing Docker Desktop 2.4.0.0 the client upgrades to 19.03.13 - while the daemon is still restarting, docker version fails to connect:
PS F:\wse_helm\reference> docker version Client: Docker Engine - Community Cloud integration 0.1.18 Version: 19.03.13 API version: 1.40 Go version: go1.13.15 Git commit: 4484c46d9d Built: Wed Sep 16 17:00:27 2020 OS/Arch: windows/amd64 Experimental: false error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/version: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
PS F:\wse_helm\reference> docker version Client: Docker Engine - Community Cloud integration 0.1.18 Version: 19.03.13 API version: 1.40 Go version: go1.13.15 Git commit: 4484c46d9d Built: Wed Sep 16 17:00:27 2020 OS/Arch: windows/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 19.03.13 API version: 1.40 (minimum version 1.12) Go version: go1.13.15 Git commit: 4484c46d9d Built: Wed Sep 16 17:07:04 2020 OS/Arch: linux/amd64 Experimental: false containerd: Version: v1.3.7 GitCommit: 8fba4e9a7d01810a393d5d25a3621dc101981175 runc: Version: 1.0.0-rc10 GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd docker-init: Version: 0.18.0 GitCommit: fec3683 Kubernetes: Version: Unknown StackAPI: Unknown
PS F:\wse_helm\reference> kubectl version Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:04:18Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
PS F:\wse_helm\reference> helm version version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}
Install the Kubernetes Dashboard Pods
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc3/aio/deploy/recommended.yaml
kubectl proxy
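Logging in to the dashboard typically requires a bearer token. A minimal sketch of an admin ServiceAccount and binding for a dev cluster - the dashboard-admin name is arbitrary, and granting cluster-admin is for development clusters only:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin             # arbitrary name
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin               # dev clusters only - overly broad for production
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
```

On clusters of this era (where ServiceAccount token Secrets are auto-created), the login token can then be read from the ServiceAccount's Secret in the kubernetes-dashboard namespace.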
Verify the default hostpath storage class
https://kubernetes.io/docs/concepts/storage/storage-classes/
obrienlabs:~ $ kubectl get storageclass NAME PROVISIONER AGE hostpath (default) docker.io/hostpath 60d Check on PVCs: $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE datadir-kafka-0 Bound pvc-6a69f312-055a-4444-9c24-fa8fe4878d3a 1Gi RWO hostpath 58d datadir-kafka-1 Bound pvc-84387536-09be-44ea-b137-2ef40cc6d6f3 1Gi RWO hostpath 58d datadir-kafka-2 Bound pvc-1a0ca822-5bf2-4b11-bffb-39b4f5849d69 1Gi RWO hostpath 58d
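Because hostpath is marked (default), any claim that omits storageClassName - like the kafka datadir claims above - binds against it. A minimal sketch of such a PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim                  # arbitrary name
spec:
  # no storageClassName: the claim falls back to the (default) StorageClass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply it and kubectl get pvc should show it Bound with STORAGECLASS hostpath.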
// In windows MINGW64 ~ $ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES @ MINGW64 ~ $ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c8e1c1dc6b93 docker/kube-compose-controller "/compose-controller…" About a minute ago Up About a minute k8s_compose_compose-74649b4db6-fn6bt_docker_6af81994-caac-11e9-9b7e-00155d663100_0 60df4dcc5951 docker/kube-compose-api-server "/api-server --kubec…" About a minute ago Up About a minute k8s_compose_compose-api-7564f85bcf-pzzst_docker_6ae014d2-caac-11e9-9b7e-00155d663100_0 a21462e44609 k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_compose-74649b4db6-fn6bt_docker_6af81994-caac-11e9-9b7e-00155d663100_0 bce704c0c0ed k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_compose-api-7564f85bcf-pzzst_docker_6ae014d2-caac-11e9-9b7e-00155d663100_0 b3080360fa2b k8s.gcr.io/k8s-dns-sidecar-amd64 "/sidecar --v=2 --lo…" About a minute ago Up About a minute k8s_sidecar_kube-dns-86f4d74b45-8b7n6_kube-system_530cdf8d-caac-11e9-9b7e-00155d663100_0 59863c770eb1 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 "/dnsmasq-nanny -v=2…" About a minute ago Up About a minute k8s_dnsmasq_kube-dns-86f4d74b45-8b7n6_kube-system_530cdf8d-caac-11e9-9b7e-00155d663100_0 bdde470b375d k8s.gcr.io/k8s-dns-kube-dns-amd64 "/kube-dns --domain=…" About a minute ago Up About a minute k8s_kubedns_kube-dns-86f4d74b45-8b7n6_kube-system_530cdf8d-caac-11e9-9b7e-00155d663100_0 398c7f8c6e0d k8s.gcr.io/kube-proxy-amd64 "/usr/local/bin/kube…" About a minute ago Up About a minute k8s_kube-proxy_kube-proxy-nzm9j_kube-system_5309f3c0-caac-11e9-9b7e-00155d663100_0 76992079974c k8s.gcr.io/pause-amd64:3.1 "/pause" 2 minutes ago Up About a minute k8s_POD_kube-dns-86f4d74b45-8b7n6_kube-system_530cdf8d-caac-11e9-9b7e-00155d663100_0 3421d48f9150 k8s.gcr.io/pause-amd64:3.1 "/pause" 2 minutes ago Up About a minute k8s_POD_kube-proxy-nzm9j_kube-system_5309f3c0-caac-11e9-9b7e-00155d663100_0 c23b1b374fd7 
e851a7aeb6e8 "kube-apiserver --ad…" 2 minutes ago Up 2 minutes k8s_kube-apiserver_kube-apiserver-docker-for-desktop_kube-system_bb0ce6461863dda427ec695afd7382b1_1 dd1a5c5b954e k8s.gcr.io/etcd-amd64 "etcd --client-cert-…" 2 minutes ago Up 2 minutes k8s_etcd_etcd-docker-for-desktop_kube-system_48668e6f8eb2c5de8ec8f4109bcc57cc_0 dce159bcbfc3 k8s.gcr.io/kube-scheduler-amd64 "kube-scheduler --le…" 2 minutes ago Up 2 minutes k8s_kube-scheduler_kube-scheduler-docker-for-desktop_kube-system_ecf299f4fa454da5ab299dffcd70c70f_0 ae2aeb910a75 k8s.gcr.io/kube-controller-manager-amd64 "kube-controller-man…" 2 minutes ago Up 2 minutes k8s_kube-controller-manager_kube-controller-manager-docker-for-desktop_kube-system_14d6eb408e956ff69623d89a5202834b_0 73f818622deb k8s.gcr.io/pause-amd64:3.1 "/pause" 3 minutes ago Up 2 minutes k8s_POD_etcd-docker-for-desktop_kube-system_48668e6f8eb2c5de8ec8f4109bcc57cc_0 c8fe8b3804e2 k8s.gcr.io/pause-amd64:3.1 "/pause" 3 minutes ago Up 2 minutes k8s_POD_kube-apiserver-docker-for-desktop_kube-system_bb0ce6461863dda427ec695afd7382b1_0 c7b5de703df4 k8s.gcr.io/pause-amd64:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-controller-manager-docker-for-desktop_kube-system_14d6eb408e956ff69623d89a5202834b_0 81f2d85cee89 k8s.gcr.io/pause-amd64:3.1 "/pause" 3 minutes ago Up 2 minutes k8s_POD_kube-scheduler-docker-for-desktop_kube-system_ecf299f4fa454da5ab299dffcd70c70f_0 @ MINGW64 ~ $ kubectl get nodes NAME STATUS ROLES AGE VERSION docker-for-desktop Ready master 2m v1.10.11 @ MINGW64 ~ $ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE docker compose-74649b4db6-fn6bt 1/1 Running 0 1m docker compose-api-7564f85bcf-pzzst 1/1 Running 0 1m kube-system etcd-docker-for-desktop 1/1 Running 0 1m kube-system kube-apiserver-docker-for-desktop 1/1 Running 1 1m kube-system kube-controller-manager-docker-for-desktop 1/1 Running 0 1m kube-system kube-dns-86f4d74b45-8b7n6 3/3 Running 0 2m kube-system kube-proxy-nzm9j 1/1 Running 0 2m 
kube-system kube-scheduler-docker-for-desktop 1/1 Running 0 1m @ MINGW64 ~ $ kubectl get services --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9m docker compose-api ClusterIP 10.108.148.162 <none> 443/TCP 8m kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 9m Kubernetes 1.14 based PS C:\Windows\system32> docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 8b13d62e7bfc docker/kube-compose-controller "/compose-controller…" About a minute ago Up About a minute k8s_compose_compose-6c67d745f6-4b4bn_docker_cf2269f0-cc2e-11e9-a3cc-00155d663102_0 59d0faa98d85 docker/kube-compose-api-server "/api-server --kubec…" About a minute ago Up About a minute k8s_compose_compose-api-57ff65b8c7-rdlh9_docker_cf1b94c4-cc2e-11e9-a3cc-00155d663102_0 398a3b5e96f9 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_compose-6c67d745f6-4b4bn_docker_cf2269f0-cc2e-11e9-a3cc-00155d663102_0 8107237c9e58 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_compose-api-57ff65b8c7-rdlh9_docker_cf1b94c4-cc2e-11e9-a3cc-00155d663102_0 78a359d43285 eb516548c180 "/coredns -conf /etc…" 2 minutes ago Up 2 minutes k8s_coredns_coredns-fb8b8dccf-6qqnh_kube-system_a33262a3-cc2e-11e9-a3cc-00155d663102_0 427183ed6d57 eb516548c180 "/coredns -conf /etc…" 2 minutes ago Up 2 minutes k8s_coredns_coredns-fb8b8dccf-ltbgd_kube-system_a3312bf3-cc2e-11e9-a3cc-00155d663102_0 2c60fe972a24 004666307c5b "/usr/local/bin/kube…" 2 minutes ago Up 2 minutes k8s_kube-proxy_kube-proxy-qrjvf_kube-system_a30ee329-cc2e-11e9-a3cc-00155d663102_0 60d5e4a4fb17 k8s.gcr.io/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_coredns-fb8b8dccf-6qqnh_kube-system_a33262a3-cc2e-11e9-a3cc-00155d663102_0 77bddf0e283b k8s.gcr.io/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_coredns-fb8b8dccf-ltbgd_kube-system_a3312bf3-cc2e-11e9-a3cc-00155d663102_0 aeebbfadf9c5 k8s.gcr.io/pause:3.1 
"/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-proxy-qrjvf_kube-system_a30ee329-cc2e-11e9-a3cc-00155d663102_0 91e4d986093e 9946f563237c "kube-apiserver --ad…" 2 minutes ago Up 2 minutes k8s_kube-apiserver_kube-apiserver-docker-desktop_kube-system_7c4f3d43558e9fadf2d2b323b2e78235_0 1fcac02063de 2c4adeb21b4f "etcd --advertise-cl…" 2 minutes ago Up 2 minutes k8s_etcd_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_0 7893ab856a39 ac2ce44462bc "kube-controller-man…" 2 minutes ago Up 2 minutes k8s_kube-controller-manager_kube-controller-manager-docker-desktop_kube-system_9c58c6d32bd3a2d42b8b10905b8e8f54_0 4303da6a46a5 953364a3ae7a "kube-scheduler --bi…" 2 minutes ago Up 2 minutes k8s_kube-scheduler_kube-scheduler-docker-desktop_kube-system_124f5bab49bf26c80b1c1be19641c3e8_0 01d222f23f98 k8s.gcr.io/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-scheduler-docker-desktop_kube-system_124f5bab49bf26c80b1c1be19641c3e8_0 0dcc4e343bc0 k8s.gcr.io/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-controller-manager-docker-desktop_kube-system_9c58c6d32bd3a2d42b8b10905b8e8f54_0 ca16b0d85cda k8s.gcr.io/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-apiserver-docker-desktop_kube-system_7c4f3d43558e9fadf2d2b323b2e78235_0 4c921ad85555 k8s.gcr.io/pause:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_0 PS C:\Windows\system32> kubectl get services --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2m43s docker compose-api ClusterIP 10.100.135.112 <none> 443/TCP 81s kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 2m42s PS C:\Windows\system32> kubectl version Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", 
Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} PS C:\Windows\system32> kubectl get nodes NAME STATUS ROLES AGE VERSION docker-desktop Ready master 3m7s v1.14.3 PS C:\Windows\system32> kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE docker compose-6c67d745f6-4b4bn 1/1 Running 0 107s docker compose-api-57ff65b8c7-rdlh9 1/1 Running 0 107s kube-system coredns-fb8b8dccf-6qqnh 1/1 Running 0 3m kube-system coredns-fb8b8dccf-ltbgd 1/1 Running 0 3m1s kube-system etcd-docker-desktop 1/1 Running 0 2m13s kube-system kube-apiserver-docker-desktop 1/1 Running 0 111s kube-system kube-controller-manager-docker-desktop 1/1 Running 0 108s kube-system kube-proxy-qrjvf 1/1 Running 0 3m1s kube-system kube-scheduler-docker-desktop 1/1 Running 0 111s Helm needs to be installed
Multi Node Kubernetes cluster running RKE on VMWare Workstation
Exposing RKE ports on the VMs
Enable the following ports through the NAT or via VMware port forwarding (see "Port Forwarding on VMware Fusion").
https://rancher.com/docs/rancher/v2.x/en/installation/requirements/ports/#commonly-used-ports
port | use |
---|---|
22 | SSH |
80 | ingress HTTP |
443 | ingress HTTPS |
2376 | Docker daemon TLS (node driver) |
2379 | etcd client requests |
2380 | etcd peer communication |
3389 | RDP to the VM |
6443 | Kubernetes API server |
8472 | Canal/Flannel VXLAN overlay |
9099 | Canal/Flannel health checks |
10250 | kubelet |
10254 | ingress-nginx health checks |
30000-32767 | NodePort range |
Turn off the Windows firewall and test SSH access through the NAT
sudo vi /Library/Preferences/VMware\ Fusion/vmnet2/nat.conf

[incomingtcp]
# The format and example are as follows:
#<external port number> = <VM's IP address>:<VM's port number>
#8080 = 172.16.3.128:80
443 = 192.168.199.128:443 #443
10250 = 192.168.199.128:10250
30000 = 192.168.199.128:30000
2380 = 192.168.199.128:2380
2023 = 192.168.199.129:22 #22
8472 = 192.168.199.128:8472
2022 = 192.168.199.128:22 #22
9099 = 192.168.199.128:9099
10254 = 192.168.199.128:10254
2379 = 192.168.199.128:2379
6443 = 192.168.199.128:6443
30001 = 192.168.199.128:30001
3389 = 192.168.199.128:3389
2376 = 192.168.199.128:2376
80 = 192.168.199.128:80 #80

[incomingudp]
# UDP port forwarding example
#6000 = 172.16.3.0:6001
30000 = 192.168.199.128:30000
8472 = 192.168.199.128:8472
30001 = 192.168.199.128:30001

sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start
Restart the VM network via vmnet-cli as described in "Port Forwarding on VMware Fusion"
Helm Charts
Installations
Docker is the lowest layer in our Docker | Kubernetes | Helm orchestration stack. On Ubuntu, installation is a couple of lines; on Windows it is a bit more involved. Docker Desktop for Windows or OSX comes with a Kubernetes stack out of the box.
Docker Installation
Docker Installation on Ubuntu
sudo apt update
sudo apt upgrade
sudo apt-get install curl
sudo curl https://releases.rancher.com/install-docker/19.03.sh | sh
sudo usermod -aG docker ubuntu
Docker Installation on OSX
VMware Fusion and Docker can co-exist on OSX
Docker Installation on M1 Mac OSX - Apple Silicon
As of April 2021, Docker Desktop installs on the ARM M1 chipset using Rosetta 2
https://www.docker.com/blog/released-docker-desktop-for-mac-apple-silicon/
softwareupdate --install-rosetta
The Kubernetes distribution in docker desktop works fine on Apple Silicon M1 chips on the 2021 Mac Mini
michael@Michaels-Mac-mini ~ % kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-f9fd979d6-ghmnj 1/1 Running 0 171m kube-system coredns-f9fd979d6-lhtnp 1/1 Running 0 171m kube-system etcd-docker-desktop 1/1 Running 0 170m kube-system kube-apiserver-docker-desktop 1/1 Running 0 170m kube-system kube-controller-manager-docker-desktop 1/1 Running 0 170m kube-system kube-proxy-m49f4 1/1 Running 0 171m kube-system kube-scheduler-docker-desktop 1/1 Running 0 170m kube-system storage-provisioner 1/1 Running 0 170m kube-system vpnkit-controller 1/1 Running 0 170m michaelMichaels-Mac-mini ~ % docker --version Docker version 20.10.5, build 55c4c88
Docker Installation on ARM Raspberry PI 4
Docker Installation on Windows
Note: until VMware changes their control plane to Kubernetes, Windows installations of Docker Desktop require Hyper-V, which is incompatible with VMware Workstation.
Docker Installation on Windows non-Admin accounts
If you wish to run docker from a non-admin account - do the following first.
Install docker desktop from the admin account.
Add the non-admin account to the docker-users group.
Start Docker.
If you attempt to download Docker image layers from a registry (as part of running a container) from behind a proxy, set the proxy in Docker preferences first.
Restart Docker after any configuration change.
Verify you can start a simple container with no file system shares.
Enable the built-in Kubernetes cluster.
Start a Tomcat container to verify Docker Desktop.
Override any firewall rules blocking port access.
Enable file sharing so persistent volumes can be used from Docker or Kubernetes charts.
Enable file sharing through your firewall for Docker containers.
Open port 445 on 10.0.75.1 as per https://docs.docker.com/docker-for-windows/#firewall-rules-for-shared-drives
Upgrading inside a firewall
An error occurred while sending the request.
 at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
 at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
 at LightweightInstaller.DownloadStep.<DoAsync>d__35.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
 at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
 at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
 at LightweightInstaller.InstallWorkflow.<ProcessAsync>d__23.MoveNext()
Verify versions
PS C:\Users\> docker version
Client: Docker Engine - Community
 Version:           19.03.2
 API version:       1.40
 Go version:        go1.12.8
 Git commit:        6a30dfc
 Built:             Thu Aug 29 05:26:49 2019
 OS/Arch:           windows/amd64
 Experimental:      false
Server: Docker Engine - Community
 Engine:
  Version:          19.03.2
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.8
  Git commit:       6a30dfc
  Built:            Thu Aug 29 05:32:21 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
PS C:\Users\> docker-compose version
docker-compose version 1.24.1, build 4667896b
docker-py version: 3.7.3
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.0.2q 20 Nov 2018
Kubernetes Native Applications - Kubernetes Operator
Operators extend Kubernetes with custom resources and controllers that encode operational knowledge for an application, letting us develop natively for Kubernetes.
https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
Existing operators are catalogued at https://operatorhub.io/
Write your own with https://kudo.dev/ | https://book.kubebuilder.io/ | https://github.com/operator-framework/getting-started
https://github.com/operator-framework/operator-sdk-samples/tree/master/go/memcached-operator/
https://github.com/operator-framework/getting-started
https://medium.com/@mtreacher/writing-a-kubernetes-operator-a9b86f19bfb9
https://medium.com/@cloudark/kubernetes-operator-faq-e018132c6ea2
https://enterprisersproject.com/article/2019/2/kubernetes-operators-plain-english
https://kubernetes.io/docs/concepts/extend-kubernetes/operator/#what-s-next
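As a minimal sketch of what an operator builds on, the snippet below writes a CustomResourceDefinition for a Memcached resource (mirroring the operator-sdk memcached sample linked above; the group `cache.example.com` and field names are placeholder assumptions). Register it with `kubectl apply -f memcached-crd.yaml`, after which the operator's controller watches `Memcached` objects.

```shell
# Hypothetical CRD an operator would watch; apply with:
#   kubectl apply -f memcached-crd.yaml
cat > memcached-crd.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: memcacheds.cache.example.com
spec:
  group: cache.example.com
  names:
    kind: Memcached
    plural: memcacheds
    singular: memcached
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              size:          # desired replica count the controller reconciles
                type: integer
EOF
```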
Golang based Kubernetes Operators
see https://medium.com/@mtreacher/writing-a-kubernetes-operator-a9b86f19bfb9
Containerizing Applications
https://blogs.oracle.com/javamagazine/containerizing-apps-with-jlink
Container Dependencies
Container startup is blocked until all initContainers in the deployment YAML have completed.
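This ordering can be sketched with a deployment whose initContainer polls a dependency; the image, service name `my-db`, and deployment name here are placeholder assumptions. Apply with `kubectl apply -f tomcat-deployment.yaml`.

```shell
# Deployment sketch: the tomcat container will not start until the
# initContainer exits successfully (here: until DNS resolves "my-db")
cat > tomcat-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat-dev
  template:
    metadata:
      labels:
        app: tomcat-dev
    spec:
      initContainers:
      - name: wait-for-db
        image: busybox:1.32
        command: ['sh', '-c', 'until nslookup my-db; do sleep 2; done']
      containers:
      - name: tomcat
        image: tomcat:9.0
        ports:
        - containerPort: 8080
EOF
```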
Kubernetes Chart Customization
Kubernetes ConfigMaps
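A minimal ConfigMap sketch (the map name and `JAVA_OPTS` value are placeholder assumptions); apply with `kubectl apply -f tomcat-config.yaml` and consume it from a pod spec via `envFrom`.

```shell
# ConfigMap manifest with a single key/value (placeholders)
cat > tomcat-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: tomcat-config
data:
  JAVA_OPTS: "-Xmx512m"
EOF

# A pod consumes it as environment variables with:
#   envFrom:
#   - configMapRef:
#       name: tomcat-config
```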
Kubernetes Secrets
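Secret values in manifests are base64-encoded, not encrypted; a quick sketch (the secret name and value are placeholder assumptions) that builds the encoded value and writes the manifest, to be applied with `kubectl apply -f secret.yaml`:

```shell
# Secret data must be base64-encoded (value "admin" is a placeholder)
USER_B64=$(echo -n 'admin' | base64)
echo "$USER_B64"    # prints YWRtaW4=

cat > secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: tomcat-credentials
type: Opaque
data:
  username: ${USER_B64}
EOF

# Decode to verify the round trip
echo "$USER_B64" | base64 --decode    # prints admin
```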
Kubernetes Frameworks Plugins and Tools
Persistent Volumes
Use GlusterFS (or NFS) as the backing store.
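A hedged sketch of an NFS-backed PersistentVolume and matching claim (the server address, export path, and sizes are placeholder assumptions; GlusterFS follows the same PV/PVC pattern). Apply with `kubectl apply -f nfs-pv.yaml`.

```shell
# NFS PersistentVolume plus a claim that binds to it (placeholders)
cat > nfs-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.10       # placeholder NFS server address
    path: /exports/data     # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""      # bind to a pre-provisioned PV, not a StorageClass
EOF
```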
Open Policy Agent
A CNCF project that evaluates policies against JSON input; runs as a sidecar container or as a DaemonSet on cluster VMs.
https://www.openpolicyagent.org/docs/v0.12.2/kubernetes-admission-control/
Kubectl Command Reference
see the Kubernetes Cheat Sheet https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Switching context from Azure AKS back to local Docker Desktop Kubernetes
Connecting to an Azure AKS instance while also running your own local developer Kubernetes install
:reference-nbi $ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://obl-dev-....5.hcp.eastus.azmk8s.io:443
  name: obl-dev
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-for-desktop
- context:
    cluster: obl-dev
    user: clusterUser_obl_dev_aks_obl-dev
  name: obl-dev
current-context: obl-dev
kind: Config
preferences: {}
users:
- name: clusterUser_obl_dev_aks_obl-dev
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 3a0ee7e0....126fe
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
:reference-nbi $ kubectl config current-context
obl-dev
:reference-nbi $ kubectl config get-contexts
CURRENT   NAME                 CLUSTER          AUTHINFO                          NAMESPACE
          docker-desktop       docker-desktop   docker-desktop
          docker-for-desktop   docker-desktop   docker-desktop
*         obl-dev              obl-dev          clusterUser_obl_dev_aks_obl-dev
:reference-nbi $ kubectl config use-context docker-desktop
Switched to context "docker-desktop".
:reference-nbi $ kubectl config get-contexts
CURRENT   NAME                 CLUSTER          AUTHINFO                          NAMESPACE
*         docker-desktop       docker-desktop   docker-desktop
          docker-for-desktop   docker-desktop   docker-desktop
          obl-dev              obl-dev          clusterUser_obl_dev_aks_obl-dev
:reference-nbi $ kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
default       kafka-0                                  1/1     Running   1          62d
default       kafka-1                                  1/1     Running   1          62d
default       kafka-2                                  1/1     Running   1          62d
default       kafka-zookeeper-0                        1/1     Running   0          62d
default       kafka-zookeeper-1                        1/1     Running   0          62d
default       kafka-zookeeper-2                        1/1     Running   0          62d
default       testclient                               1/1     Running   0          62d
default       tomcat-dev-76d87c8fb6-9xjr6              1/1     Running   0          54d
docker        compose-7b7c5cbbcc-6nhng                 1/1     Running   0          62d
docker        compose-api-dbbf7c5db-2lsq2              1/1     Running   0          62d
kube-system   coredns-5c98db65d4-rbggz                 1/1     Running   1          62d
kube-system   coredns-5c98db65d4-txftp                 1/1     Running   1          62d
kube-system   etcd-docker-desktop                      1/1     Running   0          62d
kube-system   kube-apiserver-docker-desktop            1/1     Running   0          62d
kube-system   kube-controller-manager-docker-desktop   1/1     Running   0          62d
kube-system   kube-proxy-7brgm                         1/1     Running   0          62d
kube-system   kube-scheduler-docker-desktop            1/1     Running   0          62d
kube-system   storage-provisioner                      1/1     Running   1          7d1h
Kubernetes Autoscaling
https://www.giantswarm.io/blog/horizontal-autoscaling-in-kubernetes
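Horizontal autoscaling can be sketched as an HPA manifest targeting the tomcat-dev deployment shown in the pod listings above (the CPU threshold and replica bounds are placeholder assumptions); apply with `kubectl apply -f tomcat-hpa.yaml` and inspect with `kubectl get hpa tomcat-dev`.

```shell
# HPA: scale tomcat-dev between 1 and 5 replicas at 50% average CPU
cat > tomcat-hpa.yaml <<'EOF'
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: tomcat-dev
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tomcat-dev
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
EOF
```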
Jiras
https://github.com/kubernetes/kubernetes/issues/83253
Links
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
https://coreos.com/blog/rkt-and-kubernetes.html
3 Comments
Michael O'Brien
20191124 https://medium.com/better-programming/build-your-own-multi-node-kubernetes-cluster-with-monitoring-346a7e2ef6e2
https://collabnix.com/kubernetes-dashboard-on-docker-desktop-for-windows-2-0-0-3-in-2-minutes/
Michael O'Brien
ubuntu@ip-172-31-91-213:~/grafana$ kubectl get services -n monitoring -o json | jq -r '.items[0].spec.ports[0].nodePort'
32000
Michael O'Brien
https://github.com/kubernetes-sigs/kubespray
use NFS
https://kubernetes.io/docs/concepts/storage/storage-classes/#local