
Michael O'Brien


See the Kubernetes Developer Guide for Kubernetes and Helm installation on any type of bare-metal, virtual, or cloud VM.

Gone are the days of setting up your own Java EE distributed cluster implementing ForkJoin through remote EJBs. We now just add machines to a Kubernetes cluster.

Hardware

Four Intel NUC computers (two i7, one i5, one i3, all with 16 GB RAM and 128-512 GB NVMe SSD drives) connected by a private layer 3 switch on a 192.168.15.0/24 subnet.

Software

Baseline: Kubernetes 1.14.6, Helm 2.14.3, Docker 19.03.2, Ubuntu 16.04

20200517: RKE 1.0.8: Kubernetes 1.17.5 on Ubuntu 16.04

https://github.com/rancher/rke/releases/tag/v1.0.8

4 Node Deployment

Prepare Nodes

Add the SSH key: see Kubernetes Developer Guide#SingleNodeKubernetesclusterrunningRKEonAWSEC2withHelm

Install Docker: see Kubernetes Developer Guide#InstallDocker
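The per-node prep can be scripted from the master. The sketch below is a dry run that only prints the command for each node (the user, key name and IPs are this lab's values; remove the echo to actually execute over SSH). It assumes the onap_rsa keypair already exists on kub0.

```shell
#!/bin/sh
# Dry-run sketch: prints the prep commands for each of this lab's four nodes.
# Assumptions: user "ubuntu", key ~/.ssh/onap_rsa already generated on kub0.
NODES="192.168.0.200 192.168.0.201 192.168.0.202 192.168.0.203"

prep_cmds() {
  for ip in $NODES; do
    # push the cluster key so rke can dial in to every node
    echo "ssh-copy-id -i ~/.ssh/onap_rsa.pub ubuntu@$ip"
    # kubelet refuses to start with swap enabled
    echo "ssh -i ~/.ssh/onap_rsa ubuntu@$ip sudo swapoff -a"
  done
}

prep_cmds
```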


Prepare Master

Install Docker: see Kubernetes Developer Guide#InstallDocker

ubuntu@kub0:~$ wget https://github.com/rancher/rke/releases/download/v1.0.8/rke_linux-amd64


ubuntu@kub0:~$ cp rke_linux-amd64 rke
ubuntu@kub0:~$ sudo chmod 777 rke
[sudo] password for ubuntu: 
ubuntu@kub0:~$ ./rke --version
rke version v1.0.8
ubuntu@kub0:~$ ./rke config --name cluster.yml
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: ~/.ssh/onap_rsa
[+] Number of Hosts [1]: 4
[+] SSH Address of host (1) [none]: 192.168.0.200
[+] SSH Port of host (1) [22]: 
[+] SSH Private Key Path of host (192.168.0.200) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (192.168.0.200) [ubuntu]: 
[+] Is host (192.168.0.200) a Control Plane host (y/n)? [y]: y
[+] Is host (192.168.0.200) a Worker host (y/n)? [n]: y
[+] Is host (192.168.0.200) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (192.168.0.200) [none]: 
[+] Internal IP of host (192.168.0.200) [none]: 
[+] Docker socket path on host (192.168.0.200) [/var/run/docker.sock]: 
[+] SSH Address of host (2) [none]: 192.168.0.201
[+] SSH Port of host (2) [22]: 
[+] SSH Private Key Path of host (192.168.0.201) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (192.168.0.201) [ubuntu]: 
[+] Is host (192.168.0.201) a Control Plane host (y/n)? [y]: y
[+] Is host (192.168.0.201) a Worker host (y/n)? [n]: y
[+] Is host (192.168.0.201) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (192.168.0.201) [none]: 
[+] Internal IP of host (192.168.0.201) [none]: 192.168.0.201
[+] Docker socket path on host (192.168.0.201) [/var/run/docker.sock]: 
[+] SSH Address of host (3) [none]: 192.168.0.202
[+] SSH Port of host (3) [22]: 
[+] SSH Private Key Path of host (192.168.0.202) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (192.168.0.202) [ubuntu]: 
[+] Is host (192.168.0.202) a Control Plane host (y/n)? [y]: y
[+] Is host (192.168.0.202) a Worker host (y/n)? [n]: y
[+] Is host (192.168.0.202) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (192.168.0.202) [none]: 
[+] Internal IP of host (192.168.0.202) [none]: 192.168.0.202
[+] Docker socket path on host (192.168.0.202) [/var/run/docker.sock]: 
[+] SSH Address of host (4) [none]: 192.168.0.203
[+] SSH Port of host (4) [22]: 
[+] SSH Private Key Path of host (192.168.0.203) [none]: 
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (192.168.0.203) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (192.168.0.203) [ubuntu]: 
[+] Is host (192.168.0.203) a Control Plane host (y/n)? [y]: y
[+] Is host (192.168.0.203) a Worker host (y/n)? [n]: y
[+] Is host (192.168.0.203) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (192.168.0.203) [none]: 
[+] Internal IP of host (192.168.0.203) [none]: 192.168.0.203
[+] Docker socket path on host (192.168.0.203) [/var/run/docker.sock]: 
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]: 
[+] Authentication Strategy [x509]: 
[+] Authorization Mode (rbac, none) [rbac]: 
[+] Kubernetes Docker image [rancher/hyperkube:v1.17.5-rancher1]: 
[+] Cluster domain [cluster.local]: 
[+] Service Cluster IP Range [10.43.0.0/16]: 
[+] Enable PodSecurityPolicy [n]: 
[+] Cluster Network CIDR [10.42.0.0/16]: 
[+] Cluster DNS Service IP [10.43.0.10]: 
[+] Add addon manifest URLs or YAML files [no]:
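The wizard only fills in a template; cluster.yml can also be authored by hand. A minimal hand-written equivalent for one of these nodes, under the assumption that every omitted field falls back to the RKE 1.x defaults shown in the full file below:

```yaml
nodes:
- address: 192.168.0.200
  internal_address: 192.168.0.200
  user: ubuntu
  role: [controlplane, worker, etcd]
  ssh_key_path: ~/.ssh/onap_rsa
network:
  plugin: canal
```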

ubuntu@kub0:~$ cat cluster.yml    (edited)
# If you intened to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 192.168.0.200
  port: "22"
  internal_address: 192.168.0.200
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: ubuntu
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/onap_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.0.201
  port: "22"
  internal_address: 192.168.0.201
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: ubuntu
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/onap_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.0.202
  port: "22"
  internal_address: 192.168.0.202
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: ubuntu
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/onap_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.0.203
  port: "22"
  internal_address: 192.168.0.203
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: ubuntu
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/onap_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.3-rancher1
  alpine: rancher/rke-tools:v0.1.56
  nginx_proxy: rancher/rke-tools:v0.1.56
  cert_downloader: rancher/rke-tools:v0.1.56
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.56
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  coredns: rancher/coredns-coredns:1.6.5
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  nodelocal: rancher/k8s-dns-node-cache:1.15.7
  kubernetes: rancher/hyperkube:v1.17.5-rancher1
  flannel: rancher/coreos-flannel:v0.11.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher5
  calico_node: rancher/calico-node:v3.13.0
  calico_cni: rancher/calico-cni:v3.13.0
  calico_controllers: rancher/calico-kube-controllers:v3.13.0
  calico_ctl: rancher/calico-ctl:v2.0.0
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.0
  canal_node: rancher/calico-node:v3.13.0
  canal_cni: rancher/calico-cni:v3.13.0
  canal_flannel: rancher/coreos-flannel:v0.11.0
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.0
  weave_node: weaveworks/weave-kube:2.5.2
  weave_cni: weaveworks/weave-npc:2.5.2
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:nginx-0.25.1-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.3
ssh_key_path: ~/.ssh/onap_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
restore:
  restore: false
  snapshot_name: ""
dns: null



Install Docker on the master, and repeat on nodes 1, 2 and 3:

ubuntu@kub0:~$ curl https://releases.rancher.com/install-docker/19.03.sh | sh
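Before running rke it is worth confirming the Docker daemon answers on each node. A small sketch, guarded so it degrades gracefully where Docker is not installed yet:

```shell
#!/bin/sh
# Check that the local Docker daemon is reachable; run this on each node.
if command -v docker >/dev/null 2>&1; then
  # RKE 1.0.8 was validated here against Docker 19.03.x
  docker version --format '{{.Server.Version}}'
else
  echo "docker not installed on this node yet"
fi
```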


Run rke up; as the run progresses, the RKE images appear on each node:

ubuntu@kub3:~$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
rancher/rke-tools   v0.1.56             8c8e0533fa43        7 weeks ago         132MB
ubuntu@kub3:~$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
rancher/hyperkube   v1.17.5-rancher1    ff99d966b0ee        4 weeks ago         1.56GB
rancher/rke-tools   v0.1.56             8c8e0533fa43        7 weeks ago         132MB



ubuntu@kub0:~$ rke --version
rke version v1.0.8
ubuntu@kub0:~$ sudo rke up
INFO[0000] Running RKE version: v1.0.8                  
INFO[0000] Initiating Kubernetes cluster                
INFO[0000] [dialer] Setup tunnel for host [192.168.0.203] 
INFO[0000] [dialer] Setup tunnel for host [192.168.0.200] 
INFO[0000] [dialer] Setup tunnel for host [192.168.0.201] 
INFO[0000] [dialer] Setup tunnel for host [192.168.0.202] 
INFO[0000] Checking if container [cluster-state-deployer] is running on host [192.168.0.200], try #1 
INFO[0000] Pulling image [rancher/rke-tools:v0.1.56] on host [192.168.0.200], try #1 
INFO[0006] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0006] Starting container [cluster-state-deployer] on host [192.168.0.200], try #1 
INFO[0006] [state] Successfully started [cluster-state-deployer] container on host [192.168.0.200] 
INFO[0006] Checking if container [cluster-state-deployer] is running on host [192.168.0.201], try #1 
INFO[0006] Pulling image [rancher/rke-tools:v0.1.56] on host [192.168.0.201], try #1 
INFO[0013] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0013] Starting container [cluster-state-deployer] on host [192.168.0.201], try #1 
INFO[0013] [state] Successfully started [cluster-state-deployer] container on host [192.168.0.201] 
INFO[0013] Checking if container [cluster-state-deployer] is running on host [192.168.0.202], try #1 
INFO[0013] Pulling image [rancher/rke-tools:v0.1.56] on host [192.168.0.202], try #1 
INFO[0020] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0020] Starting container [cluster-state-deployer] on host [192.168.0.202], try #1 
INFO[0021] [state] Successfully started [cluster-state-deployer] container on host [192.168.0.202] 
INFO[0021] Checking if container [cluster-state-deployer] is running on host [192.168.0.203], try #1 
INFO[0021] Pulling image [rancher/rke-tools:v0.1.56] on host [192.168.0.203], try #1 
INFO[0029] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0029] Starting container [cluster-state-deployer] on host [192.168.0.203], try #1 
INFO[0029] [state] Successfully started [cluster-state-deployer] container on host [192.168.0.203] 
INFO[0029] [certificates] Generating CA kubernetes certificates 
INFO[0029] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0029] [certificates] Generating Kubernetes API server certificates 
INFO[0030] [certificates] Generating Service account token key 
INFO[0030] [certificates] Generating Kube Controller certificates 
INFO[0030] [certificates] Generating Kube Scheduler certificates 
INFO[0030] [certificates] Generating Kube Proxy certificates 
INFO[0030] [certificates] Generating Node certificate   
INFO[0030] [certificates] Generating admin certificates and kubeconfig 
INFO[0030] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0031] [certificates] Generating kube-etcd-192-168-0-200 certificate and key 
INFO[0031] [certificates] Generating kube-etcd-192-168-0-201 certificate and key 
INFO[0031] [certificates] Generating kube-etcd-192-168-0-202 certificate and key 
INFO[0031] [certificates] Generating kube-etcd-192-168-0-203 certificate and key 
INFO[0031] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0031] Building Kubernetes cluster                  
INFO[0031] [dialer] Setup tunnel for host [192.168.0.203] 
INFO[0031] [dialer] Setup tunnel for host [192.168.0.202] 
INFO[0031] [dialer] Setup tunnel for host [192.168.0.200] 
INFO[0031] [dialer] Setup tunnel for host [192.168.0.201] 
INFO[0031] [network] Deploying port listener containers 
INFO[0031] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0031] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0031] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0031] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0031] Starting container [rke-etcd-port-listener] on host [192.168.0.201], try #1 
INFO[0032] Starting container [rke-etcd-port-listener] on host [192.168.0.203], try #1 
INFO[0032] Starting container [rke-etcd-port-listener] on host [192.168.0.202], try #1 
INFO[0032] Starting container [rke-etcd-port-listener] on host [192.168.0.200], try #1 
INFO[0032] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.0.201] 
INFO[0032] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.0.203] 
INFO[0032] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.0.200] 
INFO[0032] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.0.202] 
INFO[0032] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0032] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0032] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0032] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0032] Starting container [rke-cp-port-listener] on host [192.168.0.201], try #1 
INFO[0032] Starting container [rke-cp-port-listener] on host [192.168.0.200], try #1 
INFO[0032] Starting container [rke-cp-port-listener] on host [192.168.0.203], try #1 
INFO[0032] Starting container [rke-cp-port-listener] on host [192.168.0.202], try #1 
INFO[0033] [network] Successfully started [rke-cp-port-listener] container on host [192.168.0.201] 
INFO[0033] [network] Successfully started [rke-cp-port-listener] container on host [192.168.0.200] 
INFO[0033] [network] Successfully started [rke-cp-port-listener] container on host [192.168.0.203] 
INFO[0033] [network] Successfully started [rke-cp-port-listener] container on host [192.168.0.202] 
INFO[0033] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0033] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0033] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0033] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0033] Starting container [rke-worker-port-listener] on host [192.168.0.201], try #1 
INFO[0033] Starting container [rke-worker-port-listener] on host [192.168.0.200], try #1 
INFO[0033] Starting container [rke-worker-port-listener] on host [192.168.0.203], try #1 
INFO[0033] Starting container [rke-worker-port-listener] on host [192.168.0.202], try #1 
INFO[0033] [network] Successfully started [rke-worker-port-listener] container on host [192.168.0.201] 
INFO[0033] [network] Successfully started [rke-worker-port-listener] container on host [192.168.0.200] 
INFO[0033] [network] Successfully started [rke-worker-port-listener] container on host [192.168.0.203] 
INFO[0033] [network] Successfully started [rke-worker-port-listener] container on host [192.168.0.202] 
INFO[0033] [network] Port listener containers deployed successfully 
INFO[0033] [network] Running etcd <-> etcd port checks  
INFO[0033] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0033] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0033] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0033] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0033] Starting container [rke-port-checker] on host [192.168.0.201], try #1 
INFO[0033] Starting container [rke-port-checker] on host [192.168.0.200], try #1 
INFO[0033] Starting container [rke-port-checker] on host [192.168.0.203], try #1 
INFO[0034] Starting container [rke-port-checker] on host [192.168.0.202], try #1 
INFO[0034] [network] Successfully started [rke-port-checker] container on host [192.168.0.200] 
INFO[0034] [network] Successfully started [rke-port-checker] container on host [192.168.0.201] 
INFO[0034] Removing container [rke-port-checker] on host [192.168.0.200], try #1 
INFO[0034] Removing container [rke-port-checker] on host [192.168.0.201], try #1 
INFO[0034] [network] Successfully started [rke-port-checker] container on host [192.168.0.203] 
INFO[0034] [network] Successfully started [rke-port-checker] container on host [192.168.0.202] 
INFO[0034] Removing container [rke-port-checker] on host [192.168.0.203], try #1 
INFO[0034] Removing container [rke-port-checker] on host [192.168.0.202], try #1 
INFO[0034] [network] Running control plane -> etcd port checks 
INFO[0034] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0034] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0034] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0034] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0034] Starting container [rke-port-checker] on host [192.168.0.201], try #1 
INFO[0034] Starting container [rke-port-checker] on host [192.168.0.203], try #1 
INFO[0034] Starting container [rke-port-checker] on host [192.168.0.200], try #1 
INFO[0034] Starting container [rke-port-checker] on host [192.168.0.202], try #1 
INFO[0034] [network] Successfully started [rke-port-checker] container on host [192.168.0.201] 
INFO[0035] [network] Successfully started [rke-port-checker] container on host [192.168.0.200] 
INFO[0035] Removing container [rke-port-checker] on host [192.168.0.201], try #1 
INFO[0035] Removing container [rke-port-checker] on host [192.168.0.200], try #1 
INFO[0035] [network] Successfully started [rke-port-checker] container on host [192.168.0.203] 
INFO[0035] [network] Successfully started [rke-port-checker] container on host [192.168.0.202] 
INFO[0035] Removing container [rke-port-checker] on host [192.168.0.203], try #1 
INFO[0035] Removing container [rke-port-checker] on host [192.168.0.202], try #1 
INFO[0035] [network] Running control plane -> worker port checks 
INFO[0035] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0035] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0035] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0035] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0035] Starting container [rke-port-checker] on host [192.168.0.200], try #1 
INFO[0035] Starting container [rke-port-checker] on host [192.168.0.203], try #1 
INFO[0035] Starting container [rke-port-checker] on host [192.168.0.201], try #1 
INFO[0035] Starting container [rke-port-checker] on host [192.168.0.202], try #1 
INFO[0035] [network] Successfully started [rke-port-checker] container on host [192.168.0.200] 
INFO[0035] [network] Successfully started [rke-port-checker] container on host [192.168.0.201] 
INFO[0035] Removing container [rke-port-checker] on host [192.168.0.200], try #1 
INFO[0035] Removing container [rke-port-checker] on host [192.168.0.201], try #1 
INFO[0035] [network] Successfully started [rke-port-checker] container on host [192.168.0.203] 
INFO[0035] [network] Successfully started [rke-port-checker] container on host [192.168.0.202] 
INFO[0035] Removing container [rke-port-checker] on host [192.168.0.203], try #1 
INFO[0035] Removing container [rke-port-checker] on host [192.168.0.202], try #1 
INFO[0036] [network] Running workers -> control plane port checks 
INFO[0036] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0036] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0036] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0036] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0036] Starting container [rke-port-checker] on host [192.168.0.200], try #1 
INFO[0036] Starting container [rke-port-checker] on host [192.168.0.203], try #1 
INFO[0036] Starting container [rke-port-checker] on host [192.168.0.201], try #1 
INFO[0036] Starting container [rke-port-checker] on host [192.168.0.202], try #1 
INFO[0036] [network] Successfully started [rke-port-checker] container on host [192.168.0.200] 
INFO[0036] [network] Successfully started [rke-port-checker] container on host [192.168.0.201] 
INFO[0036] Removing container [rke-port-checker] on host [192.168.0.200], try #1 
INFO[0036] Removing container [rke-port-checker] on host [192.168.0.201], try #1 
INFO[0036] [network] Successfully started [rke-port-checker] container on host [192.168.0.203] 
INFO[0036] [network] Successfully started [rke-port-checker] container on host [192.168.0.202] 
INFO[0036] Removing container [rke-port-checker] on host [192.168.0.203], try #1 
INFO[0036] Removing container [rke-port-checker] on host [192.168.0.202], try #1 
INFO[0036] [network] Checking KubeAPI port Control Plane hosts 
INFO[0036] [network] Removing port listener containers  
INFO[0036] Removing container [rke-etcd-port-listener] on host [192.168.0.203], try #1 
INFO[0036] Removing container [rke-etcd-port-listener] on host [192.168.0.200], try #1 
INFO[0036] Removing container [rke-etcd-port-listener] on host [192.168.0.201], try #1 
INFO[0036] Removing container [rke-etcd-port-listener] on host [192.168.0.202], try #1 
INFO[0036] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.0.200] 
INFO[0036] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.0.201] 
INFO[0036] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.0.203] 
INFO[0037] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.0.202] 
INFO[0037] Removing container [rke-cp-port-listener] on host [192.168.0.200], try #1 
INFO[0037] Removing container [rke-cp-port-listener] on host [192.168.0.201], try #1 
INFO[0037] Removing container [rke-cp-port-listener] on host [192.168.0.202], try #1 
INFO[0037] Removing container [rke-cp-port-listener] on host [192.168.0.203], try #1 
INFO[0037] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.0.201] 
INFO[0037] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.0.200] 
INFO[0037] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.0.203] 
INFO[0037] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.0.202] 
INFO[0037] Removing container [rke-worker-port-listener] on host [192.168.0.200], try #1 
INFO[0037] Removing container [rke-worker-port-listener] on host [192.168.0.201], try #1 
INFO[0037] Removing container [rke-worker-port-listener] on host [192.168.0.203], try #1 
INFO[0037] Removing container [rke-worker-port-listener] on host [192.168.0.202], try #1 
INFO[0037] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.0.200] 
INFO[0037] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.0.201] 
INFO[0037] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.0.203] 
INFO[0037] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.0.202] 
INFO[0037] [network] Port listener containers removed successfully 
INFO[0037] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0037] Checking if container [cert-deployer] is running on host [192.168.0.201], try #1 
INFO[0037] Checking if container [cert-deployer] is running on host [192.168.0.202], try #1 
INFO[0037] Checking if container [cert-deployer] is running on host [192.168.0.200], try #1 
INFO[0037] Checking if container [cert-deployer] is running on host [192.168.0.203], try #1 
INFO[0037] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0037] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0037] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0037] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0037] Starting container [cert-deployer] on host [192.168.0.201], try #1 
INFO[0037] Starting container [cert-deployer] on host [192.168.0.200], try #1 
INFO[0037] Starting container [cert-deployer] on host [192.168.0.203], try #1 
INFO[0037] Starting container [cert-deployer] on host [192.168.0.202], try #1 
INFO[0037] Checking if container [cert-deployer] is running on host [192.168.0.200], try #1 
INFO[0037] Checking if container [cert-deployer] is running on host [192.168.0.201], try #1 
INFO[0038] Checking if container [cert-deployer] is running on host [192.168.0.203], try #1 
INFO[0038] Checking if container [cert-deployer] is running on host [192.168.0.202], try #1 
INFO[0042] Checking if container [cert-deployer] is running on host [192.168.0.200], try #1 
INFO[0042] Removing container [cert-deployer] on host [192.168.0.200], try #1 
INFO[0042] Checking if container [cert-deployer] is running on host [192.168.0.201], try #1 
INFO[0042] Removing container [cert-deployer] on host [192.168.0.201], try #1 
INFO[0043] Checking if container [cert-deployer] is running on host [192.168.0.203], try #1 
INFO[0043] Removing container [cert-deployer] on host [192.168.0.203], try #1 
INFO[0043] Checking if container [cert-deployer] is running on host [192.168.0.202], try #1 
INFO[0043] Removing container [cert-deployer] on host [192.168.0.202], try #1 
INFO[0043] [reconcile] Rebuilding and updating local kube config 
INFO[0043] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0043] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0043] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0043] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0043] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0043] [reconcile] Reconciling cluster state        
INFO[0043] [reconcile] This is newly generated cluster  
INFO[0043] Pre-pulling kubernetes images                
INFO[0043] Pulling image [rancher/hyperkube:v1.17.5-rancher1] on host [192.168.0.200], try #1 
INFO[0043] Pulling image [rancher/hyperkube:v1.17.5-rancher1] on host [192.168.0.201], try #1 
INFO[0043] Pulling image [rancher/hyperkube:v1.17.5-rancher1] on host [192.168.0.203], try #1 
INFO[0043] Pulling image [rancher/hyperkube:v1.17.5-rancher1] on host [192.168.0.202], try #1 
INFO[0188] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.200] 
INFO[0197] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.201] 
INFO[0202] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.202] 
INFO[0204] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.203] 
INFO[0204] Kubernetes images pulled successfully        
INFO[0204] [etcd] Building up etcd plane..              
INFO[0204] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0204] Starting container [etcd-fix-perm] on host [192.168.0.200], try #1 
INFO[0205] Successfully started [etcd-fix-perm] container on host [192.168.0.200] 
INFO[0205] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.200] 
INFO[0205] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.200] 
INFO[0205] Container [etcd-fix-perm] is still running on host [192.168.0.200] 
INFO[0206] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.200] 
INFO[0206] Removing container [etcd-fix-perm] on host [192.168.0.200], try #1 
INFO[0206] [remove/etcd-fix-perm] Successfully removed container on host [192.168.0.200] 
INFO[0206] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [192.168.0.200], try #1 
INFO[0211] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [192.168.0.200] 
INFO[0211] Starting container [etcd] on host [192.168.0.200], try #1 
INFO[0211] [etcd] Successfully started [etcd] container on host [192.168.0.200] 
INFO[0211] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.0.200] 
INFO[0211] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0211] Starting container [etcd-rolling-snapshots] on host [192.168.0.200], try #1 
INFO[0212] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.0.200] 
INFO[0217] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0217] Starting container [rke-bundle-cert] on host [192.168.0.200], try #1 
INFO[0217] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.0.200] 
INFO[0217] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.200] 
INFO[0217] Container [rke-bundle-cert] is still running on host [192.168.0.200] 
INFO[0218] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.200] 
INFO[0218] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.0.200] 
INFO[0218] Removing container [rke-bundle-cert] on host [192.168.0.200], try #1 
INFO[0218] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0219] Starting container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0219] [etcd] Successfully started [rke-log-linker] container on host [192.168.0.200] 
INFO[0219] Removing container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0219] [remove/rke-log-linker] Successfully removed container on host [192.168.0.200] 
INFO[0219] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0219] Starting container [etcd-fix-perm] on host [192.168.0.201], try #1 
INFO[0219] Successfully started [etcd-fix-perm] container on host [192.168.0.201] 
INFO[0219] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.201] 
INFO[0219] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.201] 
INFO[0219] Container [etcd-fix-perm] is still running on host [192.168.0.201] 
INFO[0220] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.201] 
INFO[0220] Removing container [etcd-fix-perm] on host [192.168.0.201], try #1 
INFO[0220] [remove/etcd-fix-perm] Successfully removed container on host [192.168.0.201] 
INFO[0220] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [192.168.0.201], try #1 
INFO[0226] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [192.168.0.201] 
INFO[0226] Starting container [etcd] on host [192.168.0.201], try #1 
INFO[0228] [etcd] Successfully started [etcd] container on host [192.168.0.201] 
INFO[0228] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.0.201] 
INFO[0228] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0229] Starting container [etcd-rolling-snapshots] on host [192.168.0.201], try #1 
INFO[0229] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.0.201] 
INFO[0234] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0234] Starting container [rke-bundle-cert] on host [192.168.0.201], try #1 
INFO[0235] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.0.201] 
INFO[0235] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.201] 
INFO[0235] Container [rke-bundle-cert] is still running on host [192.168.0.201] 
INFO[0236] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.201] 
INFO[0236] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.0.201] 
INFO[0236] Removing container [rke-bundle-cert] on host [192.168.0.201], try #1 
INFO[0236] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0236] Starting container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0236] [etcd] Successfully started [rke-log-linker] container on host [192.168.0.201] 
INFO[0236] Removing container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0236] [remove/rke-log-linker] Successfully removed container on host [192.168.0.201] 
INFO[0236] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0236] Starting container [etcd-fix-perm] on host [192.168.0.202], try #1 
INFO[0237] Successfully started [etcd-fix-perm] container on host [192.168.0.202] 
INFO[0237] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.202] 
INFO[0237] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.202] 
INFO[0237] Container [etcd-fix-perm] is still running on host [192.168.0.202] 
INFO[0238] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.202] 
INFO[0238] Removing container [etcd-fix-perm] on host [192.168.0.202], try #1 
INFO[0238] [remove/etcd-fix-perm] Successfully removed container on host [192.168.0.202] 
INFO[0238] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [192.168.0.202], try #1 
INFO[0244] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [192.168.0.202] 
INFO[0244] Starting container [etcd] on host [192.168.0.202], try #1 
INFO[0244] [etcd] Successfully started [etcd] container on host [192.168.0.202] 
INFO[0244] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.0.202] 
INFO[0244] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0245] Starting container [etcd-rolling-snapshots] on host [192.168.0.202], try #1 
INFO[0245] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.0.202] 
INFO[0250] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0250] Starting container [rke-bundle-cert] on host [192.168.0.202], try #1 
INFO[0250] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.0.202] 
INFO[0250] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.202] 
INFO[0250] Container [rke-bundle-cert] is still running on host [192.168.0.202] 
INFO[0251] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.202] 
INFO[0251] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.0.202] 
INFO[0251] Removing container [rke-bundle-cert] on host [192.168.0.202], try #1 
INFO[0251] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0252] Starting container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0252] [etcd] Successfully started [rke-log-linker] container on host [192.168.0.202] 
INFO[0252] Removing container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0252] [remove/rke-log-linker] Successfully removed container on host [192.168.0.202] 
INFO[0252] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0252] Starting container [etcd-fix-perm] on host [192.168.0.203], try #1 
INFO[0253] Successfully started [etcd-fix-perm] container on host [192.168.0.203] 
INFO[0253] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.203] 
INFO[0253] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.203] 
INFO[0253] Container [etcd-fix-perm] is still running on host [192.168.0.203] 
INFO[0254] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.203] 
INFO[0254] Removing container [etcd-fix-perm] on host [192.168.0.203], try #1 
INFO[0254] [remove/etcd-fix-perm] Successfully removed container on host [192.168.0.203] 
INFO[0254] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [192.168.0.203], try #1 
INFO[0260] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [192.168.0.203] 
INFO[0260] Starting container [etcd] on host [192.168.0.203], try #1 
INFO[0260] [etcd] Successfully started [etcd] container on host [192.168.0.203] 
INFO[0260] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.0.203] 
INFO[0260] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0260] Starting container [etcd-rolling-snapshots] on host [192.168.0.203], try #1 
INFO[0261] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.0.203] 
INFO[0266] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0266] Starting container [rke-bundle-cert] on host [192.168.0.203], try #1 
INFO[0266] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.0.203] 
INFO[0266] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.203] 
INFO[0266] Container [rke-bundle-cert] is still running on host [192.168.0.203] 
INFO[0267] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.203] 
INFO[0267] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.0.203] 
INFO[0267] Removing container [rke-bundle-cert] on host [192.168.0.203], try #1 
INFO[0267] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0267] Starting container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0267] [etcd] Successfully started [rke-log-linker] container on host [192.168.0.203] 
INFO[0267] Removing container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0267] [remove/rke-log-linker] Successfully removed container on host [192.168.0.203] 
INFO[0267] [etcd] Successfully started etcd plane.. Checking etcd cluster health 
INFO[0268] [controlplane] Building up Controller Plane.. 
INFO[0268] Checking if container [service-sidekick] is running on host [192.168.0.200], try #1 
INFO[0268] Checking if container [service-sidekick] is running on host [192.168.0.202], try #1 
INFO[0268] Checking if container [service-sidekick] is running on host [192.168.0.201], try #1 
INFO[0268] Checking if container [service-sidekick] is running on host [192.168.0.203], try #1 
INFO[0268] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0268] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0268] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0268] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0268] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.200] 
INFO[0268] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.203] 
INFO[0268] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.201] 
INFO[0268] Starting container [kube-apiserver] on host [192.168.0.200], try #1 
INFO[0268] Starting container [kube-apiserver] on host [192.168.0.203], try #1 
INFO[0268] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.202] 
INFO[0268] Starting container [kube-apiserver] on host [192.168.0.201], try #1 
INFO[0268] Starting container [kube-apiserver] on host [192.168.0.202], try #1 
INFO[0268] [controlplane] Successfully started [kube-apiserver] container on host [192.168.0.200] 
INFO[0268] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.0.200] 
INFO[0268] [controlplane] Successfully started [kube-apiserver] container on host [192.168.0.201] 
INFO[0268] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.0.201] 
INFO[0268] [controlplane] Successfully started [kube-apiserver] container on host [192.168.0.203] 
INFO[0268] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.0.203] 
INFO[0268] [controlplane] Successfully started [kube-apiserver] container on host [192.168.0.202] 
INFO[0268] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.0.202] 
INFO[0276] [healthcheck] service [kube-apiserver] on host [192.168.0.201] is healthy 
INFO[0276] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0276] [healthcheck] service [kube-apiserver] on host [192.168.0.200] is healthy 
INFO[0276] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0276] Starting container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0276] Starting container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0276] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.201] 
INFO[0276] Removing container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0276] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.200] 
INFO[0276] Removing container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0276] [remove/rke-log-linker] Successfully removed container on host [192.168.0.201] 
INFO[0276] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.201] 
INFO[0276] Starting container [kube-controller-manager] on host [192.168.0.201], try #1 
INFO[0276] [remove/rke-log-linker] Successfully removed container on host [192.168.0.200] 
INFO[0276] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.200] 
INFO[0276] Starting container [kube-controller-manager] on host [192.168.0.200], try #1 
INFO[0277] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.0.201] 
INFO[0277] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.0.201] 
INFO[0277] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.0.200] 
INFO[0277] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.0.200] 
INFO[0277] [healthcheck] service [kube-apiserver] on host [192.168.0.202] is healthy 
INFO[0277] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0277] Starting container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0277] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.202] 
INFO[0277] Removing container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0278] [remove/rke-log-linker] Successfully removed container on host [192.168.0.202] 
INFO[0278] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.202] 
INFO[0278] Starting container [kube-controller-manager] on host [192.168.0.202], try #1 
INFO[0278] [healthcheck] service [kube-apiserver] on host [192.168.0.203] is healthy 
INFO[0278] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0278] Starting container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0278] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.203] 
INFO[0278] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.0.202] 
INFO[0278] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.0.202] 
INFO[0278] Removing container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0278] [remove/rke-log-linker] Successfully removed container on host [192.168.0.203] 
INFO[0278] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.203] 
INFO[0279] Starting container [kube-controller-manager] on host [192.168.0.203], try #1 
INFO[0279] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.0.203] 
INFO[0279] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.0.203] 
INFO[0282] [healthcheck] service [kube-controller-manager] on host [192.168.0.201] is healthy 
INFO[0282] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0282] [healthcheck] service [kube-controller-manager] on host [192.168.0.200] is healthy 
INFO[0282] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0282] Starting container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0282] Starting container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0282] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.201] 
INFO[0282] Removing container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0282] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.200] 
INFO[0282] Removing container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0282] [remove/rke-log-linker] Successfully removed container on host [192.168.0.201] 
INFO[0282] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.201] 
INFO[0282] Starting container [kube-scheduler] on host [192.168.0.201], try #1 
INFO[0282] [remove/rke-log-linker] Successfully removed container on host [192.168.0.200] 
INFO[0282] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.200] 
INFO[0282] Starting container [kube-scheduler] on host [192.168.0.200], try #1 
INFO[0283] [controlplane] Successfully started [kube-scheduler] container on host [192.168.0.201] 
INFO[0283] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.0.201] 
INFO[0283] [controlplane] Successfully started [kube-scheduler] container on host [192.168.0.200] 
INFO[0283] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.0.200] 
INFO[0284] [healthcheck] service [kube-controller-manager] on host [192.168.0.202] is healthy 
INFO[0284] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0284] Starting container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0284] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.202] 
INFO[0284] Removing container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0284] [remove/rke-log-linker] Successfully removed container on host [192.168.0.202] 
INFO[0284] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.202] 
INFO[0284] [healthcheck] service [kube-controller-manager] on host [192.168.0.203] is healthy 
INFO[0284] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0284] Starting container [kube-scheduler] on host [192.168.0.202], try #1 
INFO[0284] Starting container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0285] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.203] 
INFO[0285] Removing container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0285] [controlplane] Successfully started [kube-scheduler] container on host [192.168.0.202] 
INFO[0285] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.0.202] 
INFO[0285] [remove/rke-log-linker] Successfully removed container on host [192.168.0.203] 
INFO[0285] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.203] 
INFO[0285] Starting container [kube-scheduler] on host [192.168.0.203], try #1 
INFO[0285] [controlplane] Successfully started [kube-scheduler] container on host [192.168.0.203] 
INFO[0285] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.0.203] 
INFO[0288] [healthcheck] service [kube-scheduler] on host [192.168.0.201] is healthy 
INFO[0288] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0288] [healthcheck] service [kube-scheduler] on host [192.168.0.200] is healthy 
INFO[0288] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0288] Starting container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0288] Starting container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0288] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.201] 
INFO[0288] Removing container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0288] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.200] 
INFO[0288] Removing container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0288] [remove/rke-log-linker] Successfully removed container on host [192.168.0.201] 
INFO[0288] [remove/rke-log-linker] Successfully removed container on host [192.168.0.200] 
INFO[0290] [healthcheck] service [kube-scheduler] on host [192.168.0.202] is healthy 
INFO[0290] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0290] Starting container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0290] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.202] 
INFO[0290] Removing container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0291] [healthcheck] service [kube-scheduler] on host [192.168.0.203] is healthy 
INFO[0291] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0291] Starting container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0291] [remove/rke-log-linker] Successfully removed container on host [192.168.0.202] 
INFO[0291] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.203] 
INFO[0291] Removing container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0291] [remove/rke-log-linker] Successfully removed container on host [192.168.0.203] 
INFO[0291] [controlplane] Successfully started Controller Plane.. 
INFO[0291] [authz] Creating rke-job-deployer ServiceAccount 
INFO[0291] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0291] [authz] Creating system:node ClusterRoleBinding 
INFO[0291] [authz] system:node ClusterRoleBinding created successfully 
INFO[0291] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding 
INFO[0291] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully 
INFO[0291] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0291] [state] Saving full cluster state to Kubernetes 
INFO[0291] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: cluster-state 
INFO[0291] [worker] Building up Worker Plane..          
INFO[0291] Checking if container [service-sidekick] is running on host [192.168.0.201], try #1 
INFO[0291] Checking if container [service-sidekick] is running on host [192.168.0.200], try #1 
INFO[0291] Checking if container [service-sidekick] is running on host [192.168.0.202], try #1 
INFO[0291] Checking if container [service-sidekick] is running on host [192.168.0.203], try #1 
INFO[0291] [sidekick] Sidekick container already created on host [192.168.0.200] 
INFO[0291] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.200] 
INFO[0291] [sidekick] Sidekick container already created on host [192.168.0.201] 
INFO[0291] [sidekick] Sidekick container already created on host [192.168.0.202] 
INFO[0291] [sidekick] Sidekick container already created on host [192.168.0.203] 
INFO[0291] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.201] 
INFO[0291] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.203] 
INFO[0291] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.202] 
INFO[0291] Starting container [kubelet] on host [192.168.0.203], try #1 
INFO[0291] Starting container [kubelet] on host [192.168.0.201], try #1 
INFO[0291] Starting container [kubelet] on host [192.168.0.200], try #1 
INFO[0291] Starting container [kubelet] on host [192.168.0.202], try #1 
INFO[0291] [worker] Successfully started [kubelet] container on host [192.168.0.200] 
INFO[0291] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.0.200] 
INFO[0291] [worker] Successfully started [kubelet] container on host [192.168.0.201] 
INFO[0291] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.0.201] 
INFO[0291] [worker] Successfully started [kubelet] container on host [192.168.0.203] 
INFO[0291] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.0.203] 
INFO[0291] [worker] Successfully started [kubelet] container on host [192.168.0.202] 
INFO[0291] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.0.202] 
INFO[0297] [healthcheck] service [kubelet] on host [192.168.0.200] is healthy 
INFO[0297] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0297] Starting container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0297] [worker] Successfully started [rke-log-linker] container on host [192.168.0.200] 
INFO[0297] Removing container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0297] [remove/rke-log-linker] Successfully removed container on host [192.168.0.200] 
INFO[0297] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.200] 
INFO[0297] Starting container [kube-proxy] on host [192.168.0.200], try #1 
INFO[0297] [worker] Successfully started [kube-proxy] container on host [192.168.0.200] 
INFO[0297] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.0.200] 
INFO[0297] [healthcheck] service [kube-proxy] on host [192.168.0.200] is healthy 
INFO[0297] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0297] Starting container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0297] [worker] Successfully started [rke-log-linker] container on host [192.168.0.200] 
INFO[0298] Removing container [rke-log-linker] on host [192.168.0.200], try #1 
INFO[0298] [remove/rke-log-linker] Successfully removed container on host [192.168.0.200] 
INFO[0317] [healthcheck] service [kubelet] on host [192.168.0.201] is healthy 
INFO[0317] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0317] Starting container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0317] [healthcheck] service [kubelet] on host [192.168.0.203] is healthy 
INFO[0317] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0317] [worker] Successfully started [rke-log-linker] container on host [192.168.0.201] 
INFO[0317] Removing container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0317] [healthcheck] service [kubelet] on host [192.168.0.202] is healthy 
INFO[0317] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0317] Starting container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0317] Starting container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0317] [remove/rke-log-linker] Successfully removed container on host [192.168.0.201] 
INFO[0317] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.201] 
INFO[0318] Starting container [kube-proxy] on host [192.168.0.201], try #1 
INFO[0318] [worker] Successfully started [rke-log-linker] container on host [192.168.0.203] 
INFO[0318] Removing container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0318] [worker] Successfully started [kube-proxy] container on host [192.168.0.201] 
INFO[0318] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.0.201] 
INFO[0318] [healthcheck] service [kube-proxy] on host [192.168.0.201] is healthy 
INFO[0318] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0318] [worker] Successfully started [rke-log-linker] container on host [192.168.0.202] 
INFO[0318] Removing container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0318] Starting container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0318] [remove/rke-log-linker] Successfully removed container on host [192.168.0.203] 
INFO[0318] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.203] 
INFO[0318] Starting container [kube-proxy] on host [192.168.0.203], try #1 
INFO[0318] [worker] Successfully started [kube-proxy] container on host [192.168.0.203] 
INFO[0318] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.0.203] 
INFO[0318] [worker] Successfully started [rke-log-linker] container on host [192.168.0.201] 
INFO[0318] Removing container [rke-log-linker] on host [192.168.0.201], try #1 
INFO[0318] [remove/rke-log-linker] Successfully removed container on host [192.168.0.202] 
INFO[0318] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.202] 
INFO[0318] Starting container [kube-proxy] on host [192.168.0.202], try #1 
INFO[0318] [healthcheck] service [kube-proxy] on host [192.168.0.203] is healthy 
INFO[0318] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0318] Starting container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0318] [remove/rke-log-linker] Successfully removed container on host [192.168.0.201] 
INFO[0318] [worker] Successfully started [kube-proxy] container on host [192.168.0.202] 
INFO[0318] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.0.202] 
INFO[0318] [worker] Successfully started [rke-log-linker] container on host [192.168.0.203] 
INFO[0318] Removing container [rke-log-linker] on host [192.168.0.203], try #1 
INFO[0318] [healthcheck] service [kube-proxy] on host [192.168.0.202] is healthy 
INFO[0318] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0319] Starting container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0319] [remove/rke-log-linker] Successfully removed container on host [192.168.0.203] 
INFO[0319] [worker] Successfully started [rke-log-linker] container on host [192.168.0.202] 
INFO[0319] Removing container [rke-log-linker] on host [192.168.0.202], try #1 
INFO[0319] [remove/rke-log-linker] Successfully removed container on host [192.168.0.202] 
INFO[0319] [worker] Successfully started Worker Plane.. 
INFO[0319] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.200] 
INFO[0319] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.201] 
INFO[0319] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.203] 
INFO[0319] Image [rancher/rke-tools:v0.1.56] exists on host [192.168.0.202] 
INFO[0319] Starting container [rke-log-cleaner] on host [192.168.0.200], try #1 
INFO[0319] Starting container [rke-log-cleaner] on host [192.168.0.203], try #1 
INFO[0319] Starting container [rke-log-cleaner] on host [192.168.0.201], try #1 
INFO[0319] Starting container [rke-log-cleaner] on host [192.168.0.202], try #1 
INFO[0319] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.0.200] 
INFO[0319] Removing container [rke-log-cleaner] on host [192.168.0.200], try #1 
INFO[0319] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.0.201] 
INFO[0319] Removing container [rke-log-cleaner] on host [192.168.0.201], try #1 
INFO[0319] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.0.203] 
INFO[0319] Removing container [rke-log-cleaner] on host [192.168.0.203], try #1 
INFO[0320] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.0.202] 
INFO[0320] Removing container [rke-log-cleaner] on host [192.168.0.202], try #1 
INFO[0320] [remove/rke-log-cleaner] Successfully removed container on host [192.168.0.200] 
INFO[0320] [remove/rke-log-cleaner] Successfully removed container on host [192.168.0.201] 
INFO[0320] [remove/rke-log-cleaner] Successfully removed container on host [192.168.0.203] 
INFO[0320] [remove/rke-log-cleaner] Successfully removed container on host [192.168.0.202] 
INFO[0320] [sync] Syncing nodes Labels and Taints       
INFO[0320] [sync] Successfully synced nodes Labels and Taints 
INFO[0320] [network] Setting up network plugin: canal   
INFO[0320] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0320] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0320] [addons] Executing deploy job rke-network-plugin 
INFO[0330] [addons] Setting up coredns                  
INFO[0330] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0330] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0330] [addons] Executing deploy job rke-coredns-addon 
INFO[0335] [addons] CoreDNS deployed successfully       
INFO[0335] [dns] DNS provider coredns deployed successfully 
INFO[0335] [addons] Setting up Metrics Server           
INFO[0335] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0335] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0335] [addons] Executing deploy job rke-metrics-addon 
INFO[0340] [addons] Metrics Server deployed successfully 
INFO[0340] [ingress] Setting up nginx ingress controller 
INFO[0340] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0340] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0340] [addons] Executing deploy job rke-ingress-controller 
INFO[0345] [ingress] ingress controller nginx deployed successfully 
INFO[0345] [addons] Setting up user addons              
INFO[0345] [addons] no user addons defined              
INFO[0345] Finished building Kubernetes cluster successfully 
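At this point RKE has written two artifacts next to `cluster.yml`: the cluster state file `cluster.rkestate` (saved during the `[state]` step in the log above) and the kubeconfig `kube_config_cluster.yml` that the steps below copy into `~/.kube/config`. As an alternative to copying, kubectl can be pointed at the generated file directly — a minimal sketch, assuming `rke up` was run in the current directory:

```shell
# Use the RKE-generated kubeconfig in place, without copying it to ~/.kube
export KUBECONFIG="$PWD/kube_config_cluster.yml"
echo "$KUBECONFIG"   # kubectl and helm will now read this file
```

This avoids the permission juggling below, at the cost of having to export the variable in every shell that talks to the cluster.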



ubuntu@kub0:~$ sudo curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.5/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 41.4M  100 41.4M    0     0  9913k      0  0:00:04  0:00:04 --:--:-- 9915k
ubuntu@kub0:~$ sudo chmod +x ./kubectl
ubuntu@kub0:~$ sudo mv ./kubectl /usr/local/bin/kubectl
ubuntu@kub0:~$ sudo mkdir ~/.kube
ubuntu@kub0:~$ wget http://storage.googleapis.com/kubernetes-helm/helm-v3.2.1-linux-amd64.tar.gz
--2020-05-18 16:07:22--  http://storage.googleapis.com/kubernetes-helm/helm-v3.2.1-linux-amd64.tar.gz
Resolving storage.googleapis.com (storage.googleapis.com)... 2607:f8b0:400b:809::2010, 172.217.1.176
Connecting to storage.googleapis.com (storage.googleapis.com)|2607:f8b0:400b:809::2010|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-05-18 16:07:22 ERROR 404: Not Found.
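The 404 above is expected: the legacy `kubernetes-helm` bucket on storage.googleapis.com only hosted Helm v2 tarballs. Helm v3 releases are served from `get.helm.sh`, which is where the `get-helm-3` installer script used next pulls from (its output below confirms the URL). If you would rather skip the script, the tarball can be fetched directly — a sketch, with the version pinned to match the script's download below:

```shell
# Helm v3 tarballs moved off the old GCS bucket to get.helm.sh
VER=v3.2.1
URL="https://get.helm.sh/helm-${VER}-linux-amd64.tar.gz"
echo "$URL"
# then: wget "$URL" && tar -zxf "helm-${VER}-linux-amd64.tar.gz" \
#       && sudo mv linux-amd64/helm /usr/local/bin/helm
```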

ubuntu@kub0:~$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
ubuntu@kub0:~$ chmod 700 get_helm.sh
ubuntu@kub0:~$ ./get_helm.sh 
Downloading https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm


ubuntu@kub0:~$ sudo cp kube_config_cluster.yml ~/.kube/config
ubuntu@kub0:~$ sudo chmod 777 ~/.kube/config
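A note on the two commands above: `chmod 777` works, but kubectl only needs the file to be readable by the invoking user, and recent kubectl/helm versions warn about group- or world-readable kubeconfigs. A tighter equivalent, assuming the same paths (the `touch` stand-in just keeps the sketch runnable when `kube_config_cluster.yml` is absent):

```shell
# Restrict the kubeconfig to the current user instead of chmod 777
KUBECFG="$HOME/.kube/config"
mkdir -p "$HOME/.kube"
[ -f kube_config_cluster.yml ] && sudo cp kube_config_cluster.yml "$KUBECFG"
[ -f "$KUBECFG" ] || touch "$KUBECFG"   # stand-in when the RKE file is absent
chmod 600 "$KUBECFG"                    # owner read/write only
```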
ubuntu@kub0:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
ingress-nginx   default-http-backend-67cf578fc4-b2rh8     1/1     Running     0          21m   10.42.2.2       192.168.0.202   <none>           <none>
ingress-nginx   nginx-ingress-controller-8gh7x            1/1     Running     0          21m   192.168.0.202   192.168.0.202   <none>           <none>
ingress-nginx   nginx-ingress-controller-8wgkl            1/1     Running     0          21m   192.168.0.201   192.168.0.201   <none>           <none>
ingress-nginx   nginx-ingress-controller-hwmjs            1/1     Running     0          21m   192.168.0.203   192.168.0.203   <none>           <none>
ingress-nginx   nginx-ingress-controller-r677w            1/1     Running     0          21m   192.168.0.200   192.168.0.200   <none>           <none>
kube-system     canal-6wgdm                               2/2     Running     0          21m   192.168.0.201   192.168.0.201   <none>           <none>
kube-system     canal-c44qz                               2/2     Running     0          21m   192.168.0.200   192.168.0.200   <none>           <none>
kube-system     canal-hq5ks                               2/2     Running     0          21m   192.168.0.203   192.168.0.203   <none>           <none>
kube-system     canal-lzhhs                               2/2     Running     0          21m   192.168.0.202   192.168.0.202   <none>           <none>
kube-system     coredns-7c5566588d-j4bzm                  1/1     Running     0          19m   10.42.0.2       192.168.0.200   <none>           <none>
kube-system     coredns-7c5566588d-msmzh                  1/1     Running     0          21m   10.42.1.2       192.168.0.201   <none>           <none>
kube-system     coredns-autoscaler-65bfc8d47d-8tbbh       1/1     Running     0          21m   10.42.2.3       192.168.0.202   <none>           <none>
kube-system     metrics-server-6b55c64f86-l2f99           1/1     Running     0          21m   10.42.3.2       192.168.0.203   <none>           <none>
kube-system     rke-coredns-addon-deploy-job-8h7xh        0/1     Completed   0          21m   192.168.0.200   192.168.0.200   <none>           <none>
kube-system     rke-ingress-controller-deploy-job-dqp9q   0/1     Completed   0          21m   192.168.0.200   192.168.0.200   <none>           <none>
kube-system     rke-metrics-addon-deploy-job-nrvfv        0/1     Completed   0          21m   192.168.0.200   192.168.0.200   <none>           <none>
kube-system     rke-network-plugin-deploy-job-s9nh6       0/1     Completed   0          22m   192.168.0.200   192.168.0.200   <none>           <none>

ubuntu@kub0:~$ kubectl get nodes
NAME            STATUS   ROLES                      AGE   VERSION
192.168.0.200   Ready    controlplane,etcd,worker   22m   v1.17.5
192.168.0.201   Ready    controlplane,etcd,worker   22m   v1.17.5
192.168.0.202   Ready    controlplane,etcd,worker   22m   v1.17.5
192.168.0.203   Ready    controlplane,etcd,worker   22m   v1.17.5
ubuntu@kub0:~$ 
ubuntu@kub0:~$ kubectl get services --all-namespaces -o wide
NAMESPACE       NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default         kubernetes             ClusterIP   10.43.0.1       <none>        443/TCP                  33m   <none>
ingress-nginx   default-http-backend   ClusterIP   10.43.72.129    <none>        80/TCP                   32m   app=default-http-backend
kube-system     kube-dns               ClusterIP   10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP   32m   k8s-app=kube-dns
kube-system     metrics-server         ClusterIP   10.43.105.202   <none>        443/TCP                  32m   k8s-app=metrics-server

ubuntu@kub0:~$ kubectl top node
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
192.168.0.200   95m          2%     2612Mi          16%       
192.168.0.201   93m          2%     2195Mi          13%       
192.168.0.202   122m         3%     2225Mi          14%       
192.168.0.203   131m         3%     2172Mi          13%  
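As a quick sanity check, the per-node CPU readings can be totaled with a short awk pass over the `kubectl top node` output. This sketch reuses the sample figures above via a here-doc; on a live cluster, pipe `kubectl top node` straight into the same awk program:

```shell
# Sum the CPU millicores column (field 2) from `kubectl top node` output.
# NR > 1 skips the header row; sub() strips the trailing "m".
awk 'NR > 1 { sub(/m$/, "", $2); total += $2 } END { print total "m total CPU" }' <<'EOF'
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
192.168.0.200   95m          2%     2612Mi          16%
192.168.0.201   93m          2%     2195Mi          13%
192.168.0.202   122m         3%     2225Mi          14%
192.168.0.203   131m         3%     2172Mi          13%
EOF
# prints: 441m total CPU
```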

Install Tomcat Helm chart on the 4-node Kubernetes cluster

ubuntu@kub0:~$ sudo helm repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories
ubuntu@kub0:~$ helm repo list
NAME  	URL                                             
stable	https://kubernetes-charts.storage.googleapis.com
ubuntu@kub0:~$ helm list
NAME	NAMESPACE	REVISION	UPDATED	STATUS	CHART	APP VERSION
ubuntu@kub0:~$ sudo helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 

ubuntu@kub0:~$ sudo helm install tomcat-dev stable/tomcat
[sudo] password for ubuntu: 
NAME: tomcat-dev
LAST DEPLOYED: Sat May 23 14:59:58 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of it by running 'kubectl get svc -w tomcat-dev'
  export SERVICE_IP=$(kubectl get svc --namespace default tomcat-dev -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  echo http://$SERVICE_IP:

# get the node IP - any node IP in the cluster will work, since the pending LoadBalancer falls back to a NodePort open on every node
ubuntu@kub0:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE     IP              NODE            NOMINATED NODE   READINESS GATES
default         tomcat-dev-64d5c484b8-rn6rd               1/1     Running     0          3m49s   10.42.2.5       192.168.0.202   <none>           <none>

# get the port
ubuntu@kub0:~$ kubectl get services --all-namespaces -o wide
NAMESPACE       NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
default         kubernetes             ClusterIP      10.43.0.1       <none>        443/TCP                  4d23h   <none>
default         tomcat-dev             LoadBalancer   10.43.133.127   <pending>     80:32353/TCP             6m57s   app=tomcat,release=tomcat-dev
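Because the LoadBalancer EXTERNAL-IP stays `<pending>` on bare metal, the site is reached through the NodePort instead. A sketch of assembling the test URL; the literal values are copied from the output above, and the jsonpath queries in the comments show how they would be fetched on a live cluster (pod and service names are the ones from this walkthrough):

```shell
# NodePort URL assembly - values copied from the sample output above.
# Live equivalents:
#   NODE_IP=$(kubectl get pod tomcat-dev-64d5c484b8-rn6rd -o jsonpath='{.status.hostIP}')
#   NODE_PORT=$(kubectl get svc tomcat-dev -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=192.168.0.202
NODE_PORT=32353
URL="http://${NODE_IP}:${NODE_PORT}/sample"
echo "$URL"   # prints: http://192.168.0.202:32353/sample
```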

# check server
ubuntu@kub0:~$ kubectl exec -it tomcat-dev-64d5c484b8-rn6rd bash
root@tomcat-dev-64d5c484b8-rn6rd:/usr/local/tomcat# cat logs/catalina.2020-05-23.log 
INFO: Deployment of web application archive [/usr/local/tomcat/webapps/sample.war] has finished in [275] ms
May 23, 2020 8:00:01 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 342 ms

# check the site
ubuntu@kub0:~$ wget http://192.168.0.202:32353/sample
--2020-05-23 15:08:52--  http://192.168.0.202:32353/sample
Connecting to 192.168.0.202:32353... connected.
HTTP request sent, awaiting response... 302 Found
Location: /sample/ [following]
HTTP request sent, awaiting response... 200 OK
Length: 636 [text/html]
2020-05-23 15:08:52 (186 MB/s) - ‘sample’ saved [636/636]

Add 5th OS X node

Move key to master node

biometric:wse_go user$ scp ~/__devops/macmini1/id_rsa ubuntu@192.168.0.200:~/macmini1
# rerun rke with the updated cluster.yml

biometric:wse_go user$ ssh ubuntu@192.168.0.200
ubuntu@kub0:~$ vi cluster.yml
- address: 192.168.0.53
  port: "22"
  internal_address: 192.168.0.53
  role:
  - worker
  hostname_override: ""
  user: user
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/macmini1/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []

ubuntu@kub0:~$ sudo rke up --update-only

top (on the macOS node - the Docker VM is busy)

PID   COMMAND      %CPU  TIME     #TH    #WQ  #PORT MEM    PURG  CMPRS  PGRP PPID STATE    BOOSTS          %CPU_ME %CPU_OTHRS UID  FAULTS   COW  MSGSENT  MSGRECV  SYSBSD   SYSMACH  CSW       PAGEIN IDLEW   POWER INSTRS     CYCLES
971   com.docker.h 144.7 03:57.99 19/5   0    43    11G    0B    8436K  959  963  running  *0[1]           0.00000 0.00000    501  2866956+ 422  364      176      5648246+ 415      3237189+  0      248811+ 154.4 1409125945 4390646170
961   com.docker.v 46.0  00:15.77 10/1   0    27    53M-   0B    848K   959  959  running  *0[1]           0.00000 0.00000    501  41585+   1461 62       29       885359+  2021+    162290+   0      926+    46.3  958398813  1407087845

INFO[0036] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.200] 
INFO[0036] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.201] 
INFO[0036] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.203] 
INFO[0036] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.168.0.202] 
INFO[0036] Pulling image [rancher/hyperkube:v1.17.5-rancher1] on host [192.168.0.53], try #1 

macmini1:~ user$ docker ps
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS               NAMES
1f8f32661b82        rancher/rke-tools:v0.1.56   "nginx-proxy CP_HOST…"   28 seconds ago      Up 5 seconds                            nginx-proxy

# filesystem sharing issue - these paths are not shared with Docker for Mac
INFO[0180] Starting container [kubelet] on host [192.168.0.53], try #1 
WARN[0180] Can't start Docker container [kubelet] on host [192.168.0.53]: Error response from daemon: Mounts denied: 
The paths /var/log/pods and /var/log/containers and /etc/ceph and /var/lib/kubelet and /var/lib/calico and /opt/cni
are not shared from OS X and are not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
. 
INFO[0180] Starting container [kubelet] on host [192.168.0.53], try #2 
WARN[0180] Can't start Docker container [kubelet] on host [192.168.0.53]: Error response from daemon: Mounts denied: 
The paths /var/log/pods and /var/log/containers and /etc/ceph and /var/lib/kubelet and /var/lib/calico and /opt/cni
are not shared from OS X and are not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
. 
INFO[0180] Starting container [kubelet] on host [192.168.0.53], try #3 
WARN[0180] Can't start Docker container [kubelet] on host [192.168.0.53]: Error response from daemon: Mounts denied: 
The paths /var/log/pods and /var/log/containers and /etc/ceph and /var/lib/kubelet and /var/lib/calico and /opt/cni
are not shared from OS X and are not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
. 
FATA[0180] [workerPlane] Failed to bring up Worker Plane: [Failed to start [kubelet] container on host [192.168.0.53]: Error response from daemon: Mounts denied: 
The paths /var/log/pods and /var/log/containers and /etc/ceph and /var/lib/kubelet and /var/lib/calico and /opt/cni
are not shared from OS X and are not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
.] 


# fix: create the missing directories on the macOS host
sudo mkdir /var/log/pods
sudo mkdir /var/log/containers
sudo mkdir /var/lib/kubelet
sudo mkdir /etc/ceph
sudo mkdir /var/lib/calico
sudo mkdir /opt/cni

Note: on macOS, /var and /etc are symlinks into /private, so add the shared paths manually in Docker Desktop (Preferences > File Sharing) - don't use the file picker, which resolves them under /private.
rke up
macmini1:~ user$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e033ff09d39 rancher/hyperkube:v1.17.5-rancher1 "/opt/rke-tools/entr…" 25 minutes ago Up 11 seconds kubelet
1f8f32661b82 rancher/rke-tools:v0.1.56 "nginx-proxy CP_HOST…" 26 minutes ago Up About a minute nginx-proxy

INFO[0053] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.0.53] 
FATA[0106] [workerPlane] Failed to bring up Worker Plane: [Failed to verify healthcheck: Failed to check http://localhost:10248/healthz for service [kubelet] on host [192.168.0.53]: Get http://localhost:10248/healthz: Unable to access the service on localhost:10248. The service might be still starting up. Error: ssh: rejected: connect failed (Connection refused), log: F0621 20:48:23.458119 4084 server.go:253] mkdir /var/lib/kubelet/pki: permission denied]
# likely because of a slow machine (2012 Mac mini - waiting on the 2018 version)
macmini1:~ user$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e033ff09d39 rancher/hyperkube:v1.17.5-rancher1 "/opt/rke-tools/entr…" 26 minutes ago Restarting (255) 12 seconds ago kubelet

macmini1:~ user$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e033ff09d39 rancher/hyperkube:v1.17.5-rancher1 "/opt/rke-tools/entr…" 27 minutes ago Restarting (255) 34 seconds ago kubelet
1f8f32661b82 rancher/rke-tools:v0.1.56 "nginx-proxy CP_HOST…" 27 minutes ago Up 3 minutes nginx-proxy

macmini1:~ user$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e033ff09d39 rancher/hyperkube:v1.17.5-rancher1 "/opt/rke-tools/entr…" 27 minutes ago Up 1 second kubelet
1f8f32661b82 rancher/rke-tools:v0.1.56 "nginx-proxy CP_HOST…" 28 minutes ago Up 3 minutes nginx-proxy

# still restarting
6e033ff09d39 rancher/hyperkube:v1.17.5-rancher1 "/opt/rke-tools/entr…" 28 minutes ago Restarting (255) 47 seconds ago kubelet
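Rather than rerunning `docker ps` by hand to catch the container settling, the restart loop can be watched with a small retry helper. A sketch - `retry_until` is a hypothetical helper name, and the `docker ps` filter in the usage comment assumes the kubelet container shown above:

```shell
# Hypothetical helper: rerun a command every <interval> seconds, up to
# <tries> times, until it succeeds.
retry_until() {
  interval="$1"; tries="$2"; shift 2
  i=1
  while [ "$i" -le "$tries" ]; do
    "$@" && return 0
    sleep "$interval"
    i=$((i + 1))
  done
  return 1
}
# Example (on the macOS node) - wait for the kubelet container to stay up:
#   retry_until 5 60 sh -c 'docker ps --filter name=kubelet --filter status=running | grep -q kubelet'
```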


Add etcd role to the node

macmini1:~ user$ sudo mkdir /var/lib/etcd

INFO[0078] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.53] 
FATA[0078] [etcd] Failed to bring up Etcd Plane: Container [etcd-fix-perm] exited with non-zero exit code [1] on host [192.168.0.53]: stdout: , stderr: chmod: /var/lib/rancher/etcd/: Operation not permitted 

chmod 777 on all 7 dirs
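The six `mkdir` calls above plus `/var/lib/etcd` and the `chmod 777` pass can be collapsed into one loop. A sketch - `prepare_kubelet_dirs` is a hypothetical name, and the prefix argument exists only so the loop can be exercised outside the node; pass an empty string and run as root on the actual macOS host:

```shell
# Create and open up the seven directories RKE needs on the macOS node.
# "$1" is a path prefix used for dry runs; pass "" on the real host (as root).
prepare_kubelet_dirs() {
  prefix="$1"
  for d in /var/log/pods /var/log/containers /etc/ceph \
           /var/lib/kubelet /var/lib/calico /opt/cni /var/lib/etcd; do
    mkdir -p "${prefix}${d}"
    chmod 777 "${prefix}${d}"   # wide-open perms, matching the workaround above
  done
}
# prepare_kubelet_dirs ""
```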

further failures
INFO[0049] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.0.53] 
FATA[0104] [workerPlane] Failed to bring up Worker Plane: [Failed to verify healthcheck: Failed to check http://localhost:10248/healthz for service [kubelet] on host [192.168.0.53]: Get http://localhost:10248/healthz: Unable to access the service on localhost:10248. The service might be still starting up. Error: ssh: rejected: connect failed (Connection refused), log: F0621 21:09:07.155830   20503 kubelet.go:1380] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 105 in cached partitions map] 

# trying the etcd role again
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5d43fef301a7 rancher/coreos-etcd:v3.4.3-rancher1 "/usr/local/bin/etcd…" 12 minutes ago Up 3 minutes etcd
6e033ff09d39 rancher/hyperkube:v1.17.5-rancher1 "/opt/rke-tools/entr…" 48 minutes ago Restarting (255) 10 seconds ago kubelet
1f8f32661b82 rancher/rke-tools:v0.1.56 "nginx-proxy CP_HOST…" 49 minutes ago Up 13 minutes nginx-proxy


FATA[0006] [[network] Host [192.168.0.201] is not able to connect to the following ports: [192.168.0.53:2380, 192.168.0.53:2379]. Please check network policies and firewall rules] 
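The failed port check can be reproduced by hand before rerunning `rke up`. A sketch using bash's `/dev/tcp` - `check_port` is a hypothetical helper, and the etcd ports 2379/2380 are the ones named in the error above:

```shell
# Probe a single TCP port, roughly as the RKE network check does.
check_port() {
  host="$1"; port="$2"
  if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}
# Run from 192.168.0.201 (the host that failed the check):
#   check_port 192.168.0.53 2379
#   check_port 192.168.0.53 2380
```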

Workaround attempted: flipping Kubernetes on/off in Docker Desktop.

1 Comment

  1. RKE 1.1.3, Ubuntu 16 VM on OS X

    wget https://github.com/rancher/rke/releases/download/v1.1.3/rke_linux-amd64
    cp rke_linux-amd64 rke
    sudo chmod 777 rke
    ./rke --version
    
    amdocs@obriensystemsu0:~$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
    amdocs@obriensystemsu0:~$ chmod 700 get_helm.sh
    amdocs@obriensystemsu0:~$ ./get_helm.sh 
    Downloading https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
    Preparing to install helm into /usr/local/bin
    helm installed into /usr/local/bin/helm
    
    amdocs@obriensystemsu0:~$ ./rke config --name cluster.yml
    [+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: ~/.ssh/onap_rsa
    [+] Number of Hosts [1]: 2
    [+] SSH Address of host (1) [none]: 192.168.199.130
    [+] SSH Port of host (1) [22]: 
    [+] SSH Private Key Path of host (192.168.199.130) [none]: ~/.ssh/onap_rsa
    [+] SSH User of host (192.168.199.130) [ubuntu]: amdocs
    [+] Is host (192.168.199.130) a Control Plane host (y/n)? [y]: y
    [+] Is host (192.168.199.130) a Worker host (y/n)? [n]: y
    [+] Is host (192.168.199.130) an etcd host (y/n)? [n]: y
    [+] Override Hostname of host (192.168.199.130) [none]: 
    [+] Internal IP of host (192.168.199.130) [none]: 
    [+] Docker socket path on host (192.168.199.130) [/var/run/docker.sock]: 
    [+] SSH Address of host (2) [none]: 192.168.0.104
    [+] SSH Port of host (2) [22]: 
    [+] SSH Private Key Path of host (192.168.0.104) [none]: ~/.ssh/onap_rsa
    [+] SSH User of host (192.168.0.104) [ubuntu]: amdocs
    [+] Is host (192.168.0.104) a Control Plane host (y/n)? [y]: n
    [+] Is host (192.168.0.104) a Worker host (y/n)? [n]: y
    [+] Is host (192.168.0.104) an etcd host (y/n)? [n]: n
    [+] Override Hostname of host (192.168.0.104) [none]: 
    [+] Internal IP of host (192.168.0.104) [none]: 
    [+] Docker socket path on host (192.168.0.104) [/var/run/docker.sock]: 
    [+] Network Plugin Type (flannel, calico, weave, canal) [canal]: 
    [+] Authentication Strategy [x509]: 
    [+] Authorization Mode (rbac, none) [rbac]: 
    [+] Kubernetes Docker image [rancher/hyperkube:v1.18.3-rancher2]: 
    [+] Cluster domain [cluster.local]: 
    [+] Service Cluster IP Range [10.43.0.0/16]: 
    [+] Enable PodSecurityPolicy [n]: 
    [+] Cluster Network CIDR [10.42.0.0/16]: 
    [+] Cluster DNS Service IP [10.43.0.10]: 
    [+] Add addon manifest URLs or YAML files [no]:
    FATA[0022] [[network] Host [192.168.0.104] is not able to connect to the following ports: [192.168.199.130:6443]. Please check network policies and firewall rules] 
    # fix: use the real IP and the 2022 port - edited cluster.yml
    INFO[0019] Pulling image [rancher/hyperkube:v1.18.3-rancher2] on host [192.168.0.59], try #1 
    INFO[0019] Pulling image [rancher/hyperkube:v1.18.3-rancher2] on host [192.168.0.104], try #1
    
    amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces -o wide
    NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    ingress-nginx default-http-backend-598b7d7dbd-9zvkp 1/1 Running 0 3m13s 10.42.0.3 192.168.0.59 <none> <none>
    ingress-nginx nginx-ingress-controller-dvmrh 1/1 Running 0 3m13s 192.168.0.59 192.168.0.59 <none> <none>
    ingress-nginx nginx-ingress-controller-gx6br 1/1 Running 0 3m13s 192.168.0.104 192.168.0.104 <none> <none>
    kube-system canal-dnpp4 2/2 Running 0 3m26s 192.168.0.104 192.168.0.104 <none> <none>
    kube-system canal-w8lz8 2/2 Running 0 3m26s 192.168.0.59 192.168.0.59 <none> <none>
    kube-system coredns-849545576b-hmx7d 1/1 Running 0 2m51s 10.42.1.3 192.168.0.104 <none> <none>
    kube-system coredns-849545576b-hrxwd 1/1 Running 0 3m23s 10.42.0.4 192.168.0.59 <none> <none>
    kube-system coredns-autoscaler-5dcd676cbd-x7k7l 1/1 Running 0 3m22s 10.42.0.2 192.168.0.59 <none> <none>
    kube-system metrics-server-697746ff48-qltpk 1/1 Running 0 3m18s 10.42.1.2 192.168.0.104 <none> <none>
    kube-system rke-coredns-addon-deploy-job-fwb8z 0/1 Completed 0 3m25s 192.168.0.59 192.168.0.59 <none> <none>
    kube-system rke-ingress-controller-deploy-job-7mfng 0/1 Completed 0 3m15s 192.168.0.59 192.168.0.59 <none> <none>
    kube-system rke-metrics-addon-deploy-job-h9snz 0/1 Completed 0 3m20s 192.168.0.59 192.168.0.59 <none> <none>
    kube-system rke-network-plugin-deploy-job-84t78 0/1 Completed 0 3m35s 192.168.0.59 192.168.0.59 <none> <none>
    
    
    # kubectl top nodes not working with 1 controlplane/etcd node - will set both nodes to controlplane and etcd
    amdocs@obriensystemsu0:~$ kubectl top nodes
    Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
    amdocs@obriensystemsu0:~$ sudo ./rke up
    INFO[0000] Running RKE version: v1.1.3 
    INFO[0000] Initiating Kubernetes cluster 
    INFO[0000] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates 
    INFO[0000] [certificates] Generating Kubernetes API server certificates 
    INFO[0000] [certificates] Generating admin certificates and kubeconfig 
    INFO[0000] [certificates] Generating kube-etcd-192-168-0-59 certificate and key 
    INFO[0000] [certificates] Generating kube-etcd-192-168-0-104 certificate and key 
    INFO[0000] Successfully Deployed state file at [./cluster.rkestate] 
    INFO[0000] Building Kubernetes cluster 
    INFO[0000] [dialer] Setup tunnel for host [192.168.0.104] 
    INFO[0000] [dialer] Setup tunnel for host [192.168.0.59] 
    INFO[0000] [network] Deploying port listener containers 
    INFO[0000] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0000] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0000] Starting container [rke-etcd-port-listener] on host [192.168.0.104], try #1 
    INFO[0000] Starting container [rke-etcd-port-listener] on host [192.168.0.59], try #1 
    INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.0.104] 
    INFO[0001] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0001] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0001] Starting container [rke-cp-port-listener] on host [192.168.0.104], try #1 
    INFO[0001] Starting container [rke-cp-port-listener] on host [192.168.0.59], try #1 
    INFO[0002] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0002] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0002] Starting container [rke-worker-port-listener] on host [192.168.0.104], try #1 
    INFO[0002] Starting container [rke-worker-port-listener] on host [192.168.0.59], try #1 
    INFO[0002] [network] Port listener containers deployed successfully 
    INFO[0002] [network] Running etcd <-> etcd port checks 
    INFO[0002] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0002] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0002] Starting container [rke-port-checker] on host [192.168.0.104], try #1 
    INFO[0002] Starting container [rke-port-checker] on host [192.168.0.59], try #1 
    INFO[0003] [network] Successfully started [rke-port-checker] container on host [192.168.0.59] 
    INFO[0003] [network] Successfully started [rke-port-checker] container on host [192.168.0.104] 
    INFO[0003] Removing container [rke-port-checker] on host [192.168.0.59], try #1 
    INFO[0003] Removing container [rke-port-checker] on host [192.168.0.104], try #1 
    INFO[0003] [network] Running control plane -> etcd port checks 
    INFO[0003] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0003] Starting container [rke-port-checker] on host [192.168.0.59], try #1 
    INFO[0003] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0003] Starting container [rke-port-checker] on host [192.168.0.104], try #1 
    INFO[0003] [network] Successfully started [rke-port-checker] container on host [192.168.0.59] 
    INFO[0003] Removing container [rke-port-checker] on host [192.168.0.59], try #1 
    INFO[0003] [network] Successfully started [rke-port-checker] container on host [192.168.0.104] 
    INFO[0004] Removing container [rke-port-checker] on host [192.168.0.104], try #1 
    INFO[0004] [network] Running control plane -> worker port checks 
    INFO[0004] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0004] Starting container [rke-port-checker] on host [192.168.0.59], try #1 
    INFO[0004] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0004] Starting container [rke-port-checker] on host [192.168.0.104], try #1 
    INFO[0004] [network] Successfully started [rke-port-checker] container on host [192.168.0.59] 
    INFO[0004] Removing container [rke-port-checker] on host [192.168.0.59], try #1 
    INFO[0004] [network] Successfully started [rke-port-checker] container on host [192.168.0.104] 
    INFO[0004] Removing container [rke-port-checker] on host [192.168.0.104], try #1 
    INFO[0004] [network] Running workers -> control plane port checks 
    INFO[0004] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0004] Starting container [rke-port-checker] on host [192.168.0.59], try #1 
    INFO[0004] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0004] Starting container [rke-port-checker] on host [192.168.0.104], try #1 
    INFO[0005] [network] Successfully started [rke-port-checker] container on host [192.168.0.59] 
    INFO[0005] Removing container [rke-port-checker] on host [192.168.0.59], try #1 
    INFO[0005] [network] Successfully started [rke-port-checker] container on host [192.168.0.104] 
    INFO[0005] Removing container [rke-port-checker] on host [192.168.0.104], try #1 
    INFO[0005] [network] Checking KubeAPI port Control Plane hosts 
    INFO[0005] [network] Removing port listener containers 
    INFO[0005] Removing container [rke-etcd-port-listener] on host [192.168.0.59], try #1 
    INFO[0005] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.0.59] 
    INFO[0005] Removing container [rke-etcd-port-listener] on host [192.168.0.104], try #1 
    INFO[0005] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.0.104] 
    INFO[0005] Removing container [rke-cp-port-listener] on host [192.168.0.59], try #1 
    INFO[0005] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.0.59] 
    INFO[0005] Removing container [rke-cp-port-listener] on host [192.168.0.104], try #1 
    INFO[0006] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.0.104] 
    INFO[0006] Removing container [rke-worker-port-listener] on host [192.168.0.59], try #1 
    INFO[0006] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.0.59] 
    INFO[0006] Removing container [rke-worker-port-listener] on host [192.168.0.104], try #1 
    INFO[0006] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.0.104] 
    INFO[0006] [network] Port listener containers removed successfully 
    INFO[0006] [certificates] kube-apiserver certificate changed, force deploying certs 
    INFO[0006] [certificates] Deploying kubernetes certificates to Cluster nodes 
    INFO[0006] Checking if container [cert-deployer] is running on host [192.168.0.104], try #1 
    INFO[0006] Checking if container [cert-deployer] is running on host [192.168.0.59], try #1 
    INFO[0006] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0006] Starting container [cert-deployer] on host [192.168.0.59], try #1 
    INFO[0006] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0006] Starting container [cert-deployer] on host [192.168.0.104], try #1 
    INFO[0006] Checking if container [cert-deployer] is running on host [192.168.0.59], try #1 
    INFO[0006] Checking if container [cert-deployer] is running on host [192.168.0.104], try #1 
    INFO[0011] Checking if container [cert-deployer] is running on host [192.168.0.59], try #1 
    INFO[0011] Removing container [cert-deployer] on host [192.168.0.59], try #1 
    INFO[0011] Checking if container [cert-deployer] is running on host [192.168.0.104], try #1 
    INFO[0011] Removing container [cert-deployer] on host [192.168.0.104], try #1 
    INFO[0011] [reconcile] Rebuilding and updating local kube config 
    INFO[0011] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
    INFO[0011] [reconcile] host [192.168.0.59] is active master on the cluster 
    INFO[0011] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
    INFO[0011] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.0.59] 
    INFO[0011] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0011] Starting container [file-deployer] on host [192.168.0.59], try #1 
    INFO[0012] Successfully started [file-deployer] container on host [192.168.0.59] 
    INFO[0012] Waiting for [file-deployer] container to exit on host [192.168.0.59] 
    INFO[0012] Waiting for [file-deployer] container to exit on host [192.168.0.59] 
    INFO[0012] Container [file-deployer] is still running on host [192.168.0.59] 
    INFO[0013] Waiting for [file-deployer] container to exit on host [192.168.0.59] 
    INFO[0013] Removing container [file-deployer] on host [192.168.0.59], try #1 
    INFO[0013] [remove/file-deployer] Successfully removed container on host [192.168.0.59] 
    INFO[0013] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.0.104] 
    INFO[0013] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0013] Starting container [file-deployer] on host [192.168.0.104], try #1 
    INFO[0013] Successfully started [file-deployer] container on host [192.168.0.104] 
    INFO[0013] Waiting for [file-deployer] container to exit on host [192.168.0.104] 
    INFO[0013] Waiting for [file-deployer] container to exit on host [192.168.0.104] 
    INFO[0014] Container [file-deployer] is still running on host [192.168.0.104] 
    INFO[0015] Waiting for [file-deployer] container to exit on host [192.168.0.104] 
    INFO[0015] Removing container [file-deployer] on host [192.168.0.104], try #1 
    INFO[0015] [remove/file-deployer] Successfully removed container on host [192.168.0.104] 
    INFO[0015] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes 
    INFO[0015] [reconcile] Reconciling cluster state 
    INFO[0015] [reconcile] Check etcd hosts to be deleted 
    INFO[0015] [reconcile] Check etcd hosts to be added 
    INFO[0015] [add/etcd] Adding member [etcd-192.168.0.104] to etcd cluster 
    INFO[0015] [add/etcd] Successfully Added member [etcd-192.168.0.104] to etcd cluster 
    INFO[0015] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0015] Starting container [etcd-fix-perm] on host [192.168.0.104], try #1 
    INFO[0016] Successfully started [etcd-fix-perm] container on host [192.168.0.104] 
    INFO[0016] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.104] 
    INFO[0016] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.104] 
    INFO[0016] Container [etcd-fix-perm] is still running on host [192.168.0.104] 
    INFO[0017] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.104] 
    INFO[0017] Removing container [etcd-fix-perm] on host [192.168.0.104], try #1 
    INFO[0017] [remove/etcd-fix-perm] Successfully removed container on host [192.168.0.104] 
    INFO[0017] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [192.168.0.104], try #1 
    INFO[0020] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [192.168.0.104] 
    INFO[0020] Starting container [etcd] on host [192.168.0.104], try #1 
    INFO[0021] [etcd] Successfully started [etcd] container on host [192.168.0.104] 
    INFO[0021] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0021] Starting container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0022] [etcd] Successfully started [rke-log-linker] container on host [192.168.0.104] 
    INFO[0022] Removing container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0022] [remove/rke-log-linker] Successfully removed container on host [192.168.0.104] 
    INFO[0033] [reconcile] Rebuilding and updating local kube config 
    INFO[0033] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
    INFO[0033] [reconcile] host [192.168.0.59] is active master on the cluster 
    INFO[0033] Restarting container [kube-apiserver] on host [192.168.0.59], try #1 
    INFO[0033] [restart/kube-apiserver] Successfully restarted container on host [192.168.0.59] 
    INFO[0033] Restarting container [kube-controller-manager] on host [192.168.0.59], try #1 
    INFO[0034] [restart/kube-controller-manager] Successfully restarted container on host [192.168.0.59] 
    INFO[0034] Restarting container [etcd] on host [192.168.0.59], try #1 
    INFO[0036] [restart/etcd] Successfully restarted container on host [192.168.0.59] 
    INFO[0036] Restarting container [etcd] on host [192.168.0.104], try #1 
    INFO[0037] [restart/etcd] Successfully restarted container on host [192.168.0.104] 
    INFO[0037] [reconcile] Reconciled cluster state successfully 
    INFO[0037] max_unavailable_worker got rounded down to 0, resetting to 1 
    INFO[0037] Setting maxUnavailable for worker nodes to: 1 
    INFO[0037] Setting maxUnavailable for controlplane nodes to: 1 
    INFO[0037] Pre-pulling kubernetes images 
    INFO[0037] Image [rancher/hyperkube:v1.18.3-rancher2] exists on host [192.168.0.59] 
    INFO[0037] Image [rancher/hyperkube:v1.18.3-rancher2] exists on host [192.168.0.104] 
    INFO[0037] Kubernetes images pulled successfully 
    INFO[0037] [etcd] Building up etcd plane.. 
    INFO[0037] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0037] Starting container [etcd-fix-perm] on host [192.168.0.59], try #1 
    INFO[0038] Successfully started [etcd-fix-perm] container on host [192.168.0.59] 
    INFO[0038] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.59] 
    INFO[0038] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.59] 
    INFO[0038] Container [etcd-fix-perm] is still running on host [192.168.0.59] 
    INFO[0039] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.59] 
    INFO[0039] Removing container [etcd-fix-perm] on host [192.168.0.59], try #1 
    INFO[0039] [remove/etcd-fix-perm] Successfully removed container on host [192.168.0.59] 
    INFO[0039] Checking if container [etcd] is running on host [192.168.0.59], try #1 
    INFO[0039] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [192.168.0.59] 
    INFO[0039] Checking if container [old-etcd] is running on host [192.168.0.59], try #1 
    INFO[0039] Stopping container [etcd] on host [192.168.0.59] with stopTimeoutDuration [5s], try #1 
    INFO[0039] Waiting for [etcd] container to exit on host [192.168.0.59] 
    INFO[0039] Renaming container [etcd] to [old-etcd] on host [192.168.0.59], try #1 
    INFO[0039] Starting container [etcd] on host [192.168.0.59], try #1 
    INFO[0039] [etcd] Successfully updated [etcd] container on host [192.168.0.59] 
    INFO[0039] Removing container [old-etcd] on host [192.168.0.59], try #1 
    INFO[0039] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.0.59] 
    INFO[0039] Removing container [etcd-rolling-snapshots] on host [192.168.0.59], try #1 
    INFO[0039] [remove/etcd-rolling-snapshots] Successfully removed container on host [192.168.0.59] 
    INFO[0039] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0039] Starting container [etcd-rolling-snapshots] on host [192.168.0.59], try #1 
    INFO[0040] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.0.59] 
    INFO[0045] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0045] Starting container [rke-bundle-cert] on host [192.168.0.59], try #1 
    INFO[0045] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.0.59] 
    INFO[0045] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.59] 
    INFO[0045] Container [rke-bundle-cert] is still running on host [192.168.0.59] 
    INFO[0046] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.59] 
    INFO[0046] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.0.59] 
    INFO[0046] Removing container [rke-bundle-cert] on host [192.168.0.59], try #1 
    INFO[0046] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0046] Starting container [rke-log-linker] on host [192.168.0.59], try #1 
    INFO[0046] [etcd] Successfully started [rke-log-linker] container on host [192.168.0.59] 
    INFO[0046] Removing container [rke-log-linker] on host [192.168.0.59], try #1 
    INFO[0047] [remove/rke-log-linker] Successfully removed container on host [192.168.0.59] 
    INFO[0047] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0047] Starting container [etcd-fix-perm] on host [192.168.0.104], try #1 
    INFO[0047] Successfully started [etcd-fix-perm] container on host [192.168.0.104] 
    INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.104] 
    INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.104] 
    INFO[0047] Container [etcd-fix-perm] is still running on host [192.168.0.104] 
    INFO[0048] Waiting for [etcd-fix-perm] container to exit on host [192.168.0.104] 
    INFO[0048] Removing container [etcd-fix-perm] on host [192.168.0.104], try #1 
    INFO[0049] [remove/etcd-fix-perm] Successfully removed container on host [192.168.0.104] 
    INFO[0049] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.0.104] 
    INFO[0049] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0049] Starting container [etcd-rolling-snapshots] on host [192.168.0.104], try #1 
    INFO[0049] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.0.104] 
    INFO[0054] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0054] Starting container [rke-bundle-cert] on host [192.168.0.104], try #1 
    INFO[0055] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.0.104] 
    INFO[0055] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.104] 
    INFO[0055] Container [rke-bundle-cert] is still running on host [192.168.0.104] 
    INFO[0056] Waiting for [rke-bundle-cert] container to exit on host [192.168.0.104] 
    INFO[0056] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.0.104] 
    INFO[0056] Removing container [rke-bundle-cert] on host [192.168.0.104], try #1 
    INFO[0056] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0056] Starting container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0057] [etcd] Successfully started [rke-log-linker] container on host [192.168.0.104] 
    INFO[0057] Removing container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0057] [remove/rke-log-linker] Successfully removed container on host [192.168.0.104] 
    INFO[0057] [etcd] Successfully started etcd plane.. Checking etcd cluster health 
    INFO[0057] [controlplane] Now checking status of node 192.168.0.59, try #1 
    INFO[0057] [controlplane] Now checking status of node 192.168.0.104, try #1 
    INFO[0057] [controlplane] Processing controlplane hosts for upgrade 1 at a time 
    INFO[0057] Processing controlplane host 192.168.0.59 
    INFO[0057] [controlplane] Now checking status of node 192.168.0.59, try #1 
    INFO[0057] [controlplane] Getting list of nodes for upgrade 
    INFO[0057] Upgrading controlplane components for control host 192.168.0.59 
    INFO[0057] Checking if container [service-sidekick] is running on host [192.168.0.59], try #1 
    INFO[0058] [sidekick] Sidekick container already created on host [192.168.0.59] 
    INFO[0058] Checking if container [kube-apiserver] is running on host [192.168.0.59], try #1 
    INFO[0058] Image [rancher/hyperkube:v1.18.3-rancher2] exists on host [192.168.0.59] 
    INFO[0058] Checking if container [old-kube-apiserver] is running on host [192.168.0.59], try #1 
    INFO[0058] Stopping container [kube-apiserver] on host [192.168.0.59] with stopTimeoutDuration [5s], try #1 
    INFO[0058] Waiting for [kube-apiserver] container to exit on host [192.168.0.59] 
    INFO[0058] Renaming container [kube-apiserver] to [old-kube-apiserver] on host [192.168.0.59], try #1 
    INFO[0058] Starting container [kube-apiserver] on host [192.168.0.59], try #1 
    INFO[0058] [controlplane] Successfully updated [kube-apiserver] container on host [192.168.0.59] 
    INFO[0058] Removing container [old-kube-apiserver] on host [192.168.0.59], try #1 
    INFO[0058] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.0.59] 
    INFO[0078] [healthcheck] service [kube-apiserver] on host [192.168.0.59] is healthy 
    INFO[0078] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0079] Starting container [rke-log-linker] on host [192.168.0.59], try #1 
    INFO[0079] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.59] 
    INFO[0079] Removing container [rke-log-linker] on host [192.168.0.59], try #1 
    INFO[0079] [remove/rke-log-linker] Successfully removed container on host [192.168.0.59] 
    INFO[0079] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.0.59] 
    INFO[0079] [healthcheck] service [kube-controller-manager] on host [192.168.0.59] is healthy 
    INFO[0079] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0079] Starting container [rke-log-linker] on host [192.168.0.59], try #1 
    INFO[0080] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.59] 
    INFO[0080] Removing container [rke-log-linker] on host [192.168.0.59], try #1 
    INFO[0080] [remove/rke-log-linker] Successfully removed container on host [192.168.0.59] 
    INFO[0080] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.0.59] 
    INFO[0080] [healthcheck] service [kube-scheduler] on host [192.168.0.59] is healthy 
    INFO[0080] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0080] Starting container [rke-log-linker] on host [192.168.0.59], try #1 
    INFO[0080] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.59] 
    INFO[0080] Removing container [rke-log-linker] on host [192.168.0.59], try #1 
    INFO[0081] [remove/rke-log-linker] Successfully removed container on host [192.168.0.59] 
    INFO[0081] [controlplane] Now checking status of node 192.168.0.59, try #1 
    INFO[0081] Processing controlplane host 192.168.0.104 
    INFO[0081] [controlplane] Now checking status of node 192.168.0.104, try #1 
    INFO[0081] [controlplane] Getting list of nodes for upgrade 
    INFO[0081] Upgrading controlplane components for control host 192.168.0.104 
    INFO[0081] Removing container [nginx-proxy] on host [192.168.0.104], try #1 
    INFO[0082] [remove/nginx-proxy] Successfully removed container on host [192.168.0.104] 
    INFO[0082] Checking if container [service-sidekick] is running on host [192.168.0.104], try #1 
    INFO[0082] [sidekick] Sidekick container already created on host [192.168.0.104] 
    INFO[0082] Image [rancher/hyperkube:v1.18.3-rancher2] exists on host [192.168.0.104] 
    INFO[0082] Starting container [kube-apiserver] on host [192.168.0.104], try #1 
    INFO[0082] [controlplane] Successfully started [kube-apiserver] container on host [192.168.0.104] 
    INFO[0082] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.0.104] 
    INFO[0123] [healthcheck] service [kube-apiserver] on host [192.168.0.104] is healthy 
    INFO[0123] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0123] Starting container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0124] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.104] 
    INFO[0124] Removing container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0124] [remove/rke-log-linker] Successfully removed container on host [192.168.0.104] 
    INFO[0124] Image [rancher/hyperkube:v1.18.3-rancher2] exists on host [192.168.0.104] 
    INFO[0124] Starting container [kube-controller-manager] on host [192.168.0.104], try #1 
    INFO[0125] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.0.104] 
    INFO[0125] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.0.104] 
    INFO[0130] [healthcheck] service [kube-controller-manager] on host [192.168.0.104] is healthy 
    INFO[0130] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0130] Starting container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0131] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.104] 
    INFO[0131] Removing container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0131] [remove/rke-log-linker] Successfully removed container on host [192.168.0.104] 
    INFO[0131] Image [rancher/hyperkube:v1.18.3-rancher2] exists on host [192.168.0.104] 
    INFO[0131] Starting container [kube-scheduler] on host [192.168.0.104], try #1 
    INFO[0132] [controlplane] Successfully started [kube-scheduler] container on host [192.168.0.104] 
    INFO[0132] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.0.104] 
    INFO[0137] [healthcheck] service [kube-scheduler] on host [192.168.0.104] is healthy 
    INFO[0137] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0137] Starting container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0138] [controlplane] Successfully started [rke-log-linker] container on host [192.168.0.104] 
    INFO[0138] Removing container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0138] [remove/rke-log-linker] Successfully removed container on host [192.168.0.104] 
    INFO[0138] Upgrading workerplane components for control host 192.168.0.104 
    INFO[0138] Checking if container [service-sidekick] is running on host [192.168.0.104], try #1 
    INFO[0138] [sidekick] Sidekick container already created on host [192.168.0.104] 
    INFO[0138] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.0.104] 
    INFO[0139] [healthcheck] service [kubelet] on host [192.168.0.104] is healthy 
    INFO[0139] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0139] Starting container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0139] [worker] Successfully started [rke-log-linker] container on host [192.168.0.104] 
    INFO[0139] Removing container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0139] [remove/rke-log-linker] Successfully removed container on host [192.168.0.104] 
    INFO[0139] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.0.104] 
    INFO[0140] [healthcheck] service [kube-proxy] on host [192.168.0.104] is healthy 
    INFO[0140] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0140] Starting container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0140] [worker] Successfully started [rke-log-linker] container on host [192.168.0.104] 
    INFO[0140] Removing container [rke-log-linker] on host [192.168.0.104], try #1 
    INFO[0141] [remove/rke-log-linker] Successfully removed container on host [192.168.0.104] 
    INFO[0141] [controlplane] Now checking status of node 192.168.0.104, try #1 
    INFO[0141] [controlplane] Successfully upgraded Controller Plane.. 
    INFO[0141] [authz] Creating rke-job-deployer ServiceAccount 
    INFO[0141] [authz] rke-job-deployer ServiceAccount created successfully 
    INFO[0141] [authz] Creating system:node ClusterRoleBinding 
    INFO[0141] [authz] system:node ClusterRoleBinding created successfully 
    INFO[0141] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding 
    INFO[0142] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully 
    INFO[0142] Successfully Deployed state file at [./cluster.rkestate] 
    INFO[0142] [state] Saving full cluster state to Kubernetes 
    INFO[0142] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state 
    INFO[0142] [worker] Upgrading Worker Plane.. 
    INFO[0142] [worker] Successfully upgraded Worker Plane.. 
    INFO[0142] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.104] 
    INFO[0142] Starting container [rke-log-cleaner] on host [192.168.0.104], try #1 
    INFO[0142] Image [rancher/rke-tools:v0.1.58] exists on host [192.168.0.59] 
    INFO[0142] Starting container [rke-log-cleaner] on host [192.168.0.59], try #1 
    INFO[0142] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.0.104] 
    INFO[0142] Removing container [rke-log-cleaner] on host [192.168.0.104], try #1 
    INFO[0143] [remove/rke-log-cleaner] Successfully removed container on host [192.168.0.104] 
    INFO[0143] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.0.59] 
    INFO[0143] Removing container [rke-log-cleaner] on host [192.168.0.59], try #1 
    INFO[0143] [remove/rke-log-cleaner] Successfully removed container on host [192.168.0.59] 
    INFO[0143] [sync] Syncing nodes Labels and Taints 
    INFO[0143] [sync] Successfully synced nodes Labels and Taints 
    INFO[0143] [network] Setting up network plugin: canal 
    INFO[0143] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes 
    INFO[0143] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes 
    INFO[0143] [addons] Executing deploy job rke-network-plugin 
    INFO[0144] [addons] Setting up coredns 
    INFO[0144] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes 
    INFO[0144] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes 
    INFO[0144] [addons] Executing deploy job rke-coredns-addon 
    INFO[0144] [addons] CoreDNS deployed successfully 
    INFO[0144] [dns] DNS provider coredns deployed successfully 
    INFO[0144] [addons] Setting up Metrics Server 
    INFO[0144] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
    INFO[0144] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes 
    INFO[0144] [addons] Executing deploy job rke-metrics-addon 
    INFO[0144] [addons] Metrics Server deployed successfully 
    INFO[0144] [ingress] Setting up nginx ingress controller 
    INFO[0144] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes 
    INFO[0144] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes 
    INFO[0144] [addons] Executing deploy job rke-ingress-controller 
    INFO[0144] [ingress] ingress controller nginx deployed successfully 
    INFO[0144] [addons] Setting up user addons 
    INFO[0145] [addons] no user addons defined 
    INFO[0145] Finished building Kubernetes cluster successfully
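With the build finished, RKE drops the admin kubeconfig beside cluster.yml as kube_config_cluster.yml (see the "Successfully Deployed local admin kubeconfig" line earlier in the log). A minimal sanity check, assuming the default RKE file names, might be:

```shell
# Point kubectl at the kubeconfig RKE just wrote next to cluster.yml
export KUBECONFIG=$(pwd)/kube_config_cluster.yml

# Confirm the nodes registered and the control plane answers
kubectl get nodes -o wide
kubectl get pods --all-namespaces

# Keep the generated state file and kubeconfig safe - rke up reuses
# cluster.rkestate on subsequent runs
ls -l cluster.rkestate kube_config_cluster.yml
```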
    
Reverting to a single node for now - the second node did not join cleanly (see the port-forward note below):
    amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces -o wide
    NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
    ingress-nginx   default-http-backend-598b7d7dbd-9zvkp     1/1     Running     0          63m   10.42.0.3      192.168.0.59   <none>           <none>
    ingress-nginx   nginx-ingress-controller-dvmrh            1/1     Running     0          63m   192.168.0.59   192.168.0.59   <none>           <none>
    kube-system     canal-w8lz8                               2/2     Running     0          64m   192.168.0.59   192.168.0.59   <none>           <none>
    kube-system     coredns-849545576b-hrxwd                  1/1     Running     0          64m   10.42.0.4      192.168.0.59   <none>           <none>
    kube-system     coredns-autoscaler-5dcd676cbd-x7k7l       1/1     Running     0          64m   10.42.0.2      192.168.0.59   <none>           <none>
    kube-system     metrics-server-697746ff48-rtw4z           1/1     Running     0          80s   10.42.0.5      192.168.0.59   <none>           <none>
    kube-system     rke-coredns-addon-deploy-job-fwb8z        0/1     Completed   0          64m   192.168.0.59   192.168.0.59   <none>           <none>
    kube-system     rke-ingress-controller-deploy-job-7mfng   0/1     Completed   0          63m   192.168.0.59   192.168.0.59   <none>           <none>
    kube-system     rke-metrics-addon-deploy-job-h9snz        0/1     Completed   0          64m   192.168.0.59   192.168.0.59   <none>           <none>
    kube-system     rke-network-plugin-deploy-job-84t78       0/1     Completed   0          64m   192.168.0.59   192.168.0.59   <none>           <none>
    amdocs@obriensystemsu0:~$ kubectl top nodes
    NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
    192.168.0.59   701m         5%     2962Mi          18% 
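Only 192.168.0.59 reports in kubectl top nodes above. When a node is missing like this, a hedged first-pass diagnosis (node names taken from this cluster) is:

```shell
# Is the missing node registered at all, and is it Ready?
kubectl get nodes -o wide

# If registered but NotReady, the Conditions and Events sections
# usually name the problem (CNI not up, kubelet unreachable, etc.)
kubectl describe node 192.168.0.104

# Overlay-network symptoms show up in the canal pods
kubectl -n kube-system get pods -o wide | grep canal
```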
    
Retry after I forgot to port-forward TCP port 8472 in addition to UDP port 8472 (VXLAN overlay traffic for the Canal CNI) - both nodes now join:
    amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces -o wide
    NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
    ingress-nginx   default-http-backend-598b7d7dbd-9zvkp     1/1     Running     0          71m    10.42.0.3       192.168.0.59    <none>           <none>
    ingress-nginx   nginx-ingress-controller-dvmrh            1/1     Running     1          71m    192.168.0.59    192.168.0.59    <none>           <none>
    ingress-nginx   nginx-ingress-controller-ffdwd            1/1     Running     0          20s    192.168.0.104   192.168.0.104   <none>           <none>
    kube-system     canal-5l6jd                               2/2     Running     0          26s    192.168.0.104   192.168.0.104   <none>           <none>
    kube-system     canal-w8lz8                               2/2     Running     0          71m    192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     coredns-849545576b-4tfvz                  1/1     Running     0          19s    10.42.1.2       192.168.0.104   <none>           <none>
    kube-system     coredns-849545576b-hrxwd                  1/1     Running     0          71m    10.42.0.4       192.168.0.59    <none>           <none>
    kube-system     coredns-autoscaler-5dcd676cbd-x7k7l       1/1     Running     0          71m    10.42.0.2       192.168.0.59    <none>           <none>
    kube-system     metrics-server-697746ff48-rtw4z           1/1     Running     0          9m7s   10.42.0.5       192.168.0.59    <none>           <none>
    kube-system     rke-coredns-addon-deploy-job-fwb8z        0/1     Completed   0          71m    192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     rke-ingress-controller-deploy-job-7mfng   0/1     Completed   0          71m    192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     rke-metrics-addon-deploy-job-h9snz        0/1     Completed   0          71m    192.168.0.59    192.168.0.59    <none>           <none>
    kube-system     rke-network-plugin-deploy-job-84t78       0/1     Completed   0          72m    192.168.0.59    192.168.0.59    <none>           <none>
    amdocs@obriensystemsu0:~$ kubectl top nodes
    NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
    192.168.0.104   217m         1%     2423Mi          4%        
    192.168.0.59    593m         4%     3062Mi          19%
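The port-8472 fix above can be sketched with ufw on the Ubuntu 16.04 nodes (a hedged example: Canal's VXLAN backend itself only uses UDP 8472, but the TCP rule mirrors the port-forward described above, and the other ports are the usual RKE inter-node requirements):

```shell
# Canal/Flannel VXLAN encapsulation runs over port 8472; open it
# between cluster nodes (UDP is what VXLAN actually uses - the TCP
# rule mirrors the port-forward described above)
sudo ufw allow 8472/udp
sudo ufw allow 8472/tcp

# Control-plane ports RKE also needs between nodes
sudo ufw allow 6443/tcp        # kube-apiserver
sudo ufw allow 2379:2380/tcp   # etcd client/peer
sudo ufw allow 10250/tcp       # kubelet

# Verify VXLAN traffic is actually arriving on the node
sudo tcpdump -ni any udp port 8472 -c 5
```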