
Helm Development Guide

TODO: evaluate the Terraform Helm provider: https://www.terraform.io/docs/providers/helm/index.html

Helm V2

Helm 2 and Helm 3 can be run side by side by renaming the Helm 3 binary to helm3 and installing Helm 2 as the default helm.

For the changes from Helm v2 to v3, see https://helm.sh/docs/faq/

  brew uninstall helm
  brew uninstall helm@2
  brew install helm@2
  helm version
  vi ~/.bash_profile 

add
export PATH="/usr/local/opt/helm@2/bin:$PATH"
  source ~/.bash_profile 
  helm version
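The PATH-based selection above can be sanity-checked without brew at all. This sketch fakes both binaries with stub scripts under hypothetical /tmp paths (standing in for the real keg locations such as /usr/local/opt/helm@2/bin) to show how PATH ordering decides which `helm` runs:

```shell
# Stub out two collocated Helm binaries (hypothetical /tmp paths stand in
# for the real brew keg locations).
mkdir -p /tmp/helm2/bin /tmp/helm3/bin
printf '#!/bin/sh\necho v2\n' > /tmp/helm2/bin/helm
printf '#!/bin/sh\necho v3\n' > /tmp/helm3/bin/helm3
chmod +x /tmp/helm2/bin/helm /tmp/helm3/bin/helm3

# Prepending the helm@2 location makes plain `helm` resolve to v2,
# while the renamed helm3 binary stays reachable by its own name.
export PATH="/tmp/helm2/bin:/tmp/helm3/bin:$PATH"
helm     # prints v2
helm3    # prints v3
```

On a real machine the same check is just `command -v helm` after sourcing ~/.bash_profile.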


# enable tiller in minikube

 kubectl -n kube-system create serviceaccount tiller
 kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
 helm init --service-account tiller
 kubectl -n kube-system  rollout status deploy/tiller-deploy
 sudo helm init
 helm serve &
 helm repo add local http://127.0.0.1:8879
 helm repo list


Helm V3 

For Kubernetes installation/development, see the Kubernetes Developer Guide.

See also Asynchronous Messaging using Kafka#KafkaOperatorCharts

Jira: REF-3

Install Helm

Install Helm on OSX

https://helm.sh/docs/intro/install/

Install Brew (the Ruby installer below was current at the time; Homebrew has since switched to a bash installer, see https://brew.sh)

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

biometric:wse_helm michaelobrien$ brew install helm
==> Downloading https://homebrew.bintray.com/bottles/helm-3.0.3.catalina.bottle.tar.gz
==> Downloading from https://akamai.bintray.com/59/5987c80ea21063f3c26a799889ad3e0b35c73275bd3579e5a1f6785d6f3f43d5?__gda__=exp=1581208465~hmac=fcf13391d90275fbcd6c015d86b3a0ec9abeb02aa1583eb3dca3f653da1aa281&response-content-disposition=attachment%3Bfilename%3D%22helm-3.0.3.catali
######################################################################## 100.0%
==> Pouring helm-3.0.3.catalina.bottle.tar.gz
==> Caveats
Bash completion has been installed to:
  /usr/local/etc/bash_completion.d

zsh completions have been installed to:
  /usr/local/share/zsh/site-functions
==> Summary
  /usr/local/Cellar/helm/3.0.3: 7 files, 40.6MB

biometric:wse_helm michaelobrien$ helm version
version.BuildInfo{Version:"v3.0.3", GitCommit:"ac925eb7279f4a6955df663a0128044a8a6b7593", GitTreeState:"clean", GoVersion:"go1.13.7"}

biometric:wse_go michaelobrien$ kubectl get services --all-namespaces
default       tomcat-dev                 LoadBalancer   10.103.194.12    localhost     80:32305/TCP                 75s

biometric:wse_go michaelobrien$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
default       tomcat-dev-76d87c8fb6-9xjr6              1/1     Running   0          106s

biometric:wse_go michaelobrien$ helm list
NAME      	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART       	APP VERSION
tomcat-dev	default  	1       	2020-02-16 11:15:14.196403 -0500 EST	deployed	tomcat-0.4.1	7.0 

Upgrade Helm using Brew

brew upgrade helm

==> Upgrading 1 outdated package:
helm 3.0.3 -> 3.2.1
==> Upgrading helm 3.0.3 -> 3.2.1

biometric:wse_helm $ helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"} 


Override values.yaml parameters in the default helm chart

root@tomcat-dev-76d87c8fb6-7nxjx:/usr/local/tomcat/logs# curl --head http://127.0.0.1:8080/sample
HTTP/1.1 302 Found
Server: Apache-Coyote/1.1
Location: /sample/
Transfer-Encoding: chunked
Date: Sun, 16 Feb 2020 18:15:28 GMT

root@tomcat-dev-76d87c8fb6-7nxjx:/usr/local/tomcat/logs# curl --head http://127.0.0.1:8080/sample/
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"636-1185812788000"
Last-Modified: Mon, 30 Jul 2007 16:26:28 GMT
Content-Type: text/html
Content-Length: 636
Date: Sun, 16 Feb 2020 18:16:02 GMT

curl http://192.168.20.144:80/sample/
192.168.65.3 - - [16/Feb/2020:18:21:18 +0000] "GET /sample/ HTTP/1.1" 200 636

Change the service's external port from 80 to 31111:
helm upgrade tomcat-dev stable/tomcat --set service.externalPort=31111
default       tomcat-dev    LoadBalancer   10.97.22.152   localhost     31111:31962/TCP          26m

curl http://192.168.20.144:31111/sample/
StatusCode        : 200
StatusDescription : OK
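Instead of repeating --set flags on every upgrade, the same override can live in a small values file. A sketch, where the file name myvalues.yaml is an assumption and the keys mirror the stable/tomcat chart's service block:

```yaml
# myvalues.yaml (hypothetical override file for stable/tomcat)
service:
  type: LoadBalancer
  externalPort: 31111
```

then apply it with `helm upgrade tomcat-dev stable/tomcat -f myvalues.yaml`. Values passed via --set take precedence over -f files, which take precedence over the chart's defaults.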


Install Helm on Windows 10

https://github.com/helm/helm/releases/tag/v3.7.0

extract the zip and add the exe to your PATH
F:\opt\helm\helm.exe

$ helm version
version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}

Helm Chart Examples

Install Tomcat from a stable Helm chart

Follow https://github.com/helm/charts/tree/master/stable/tomcat (the stable charts repo has since been deprecated; its successor URL is https://charts.helm.sh/stable)

biometric:wse_go michaelobrien$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories

biometric:wse_go michaelobrien$ helm install tomcat-dev stable/tomcat
NAME: tomcat-dev
LAST DEPLOYED: Sun Feb 16 11:15:14 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w tomcat-dev'
  export SERVICE_IP=$(kubectl get svc --namespace default tomcat-dev -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  echo http://$SERVICE_IP:
biometric:wse_go michaelobrien$ export SERVICE_IP=$(kubectl get svc --namespace default tomcat-dev -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
biometric:wse_go michaelobrien$ echo http://$SERVICE_IP:
http://localhost:


Your own Helm charts

https://helm.sh/docs/using_helm/#quickstart-guide (now https://helm.sh/docs/intro/quickstart/) - the generated example chart deploys a default nginx server

Helm Lifecycle

Create Helm Chart

ubuntu@ip-172-31-81-46:~/obrienlabs$ sudo helm create difference-nbi
Creating difference-nbi
ubuntu@ip-172-31-81-46:~/obrienlabs$ ls difference-nbi/
Chart.yaml  charts  templates  values.yaml

Package Helm Chart

ubuntu@ip-172-31-81-46:~/obrienlabs/difference-nbi$ cd ..
ubuntu@ip-172-31-81-46:~/obrienlabs$ sudo helm package difference-nbi
Successfully packaged chart and saved it to: /home/ubuntu/obrienlabs/difference-nbi-0.1.0.tgz

then publish the .tgz to a local chart repo

Kubernetes Secrets

https://developers.redhat.com/blog/2017/10/04/configuring-spring-boot-kubernetes-secrets#setup

Generating Opaque secret values

https://v1-18.docs.kubernetes.io/docs/concepts/configuration/secret/
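The Opaque data values in a Secret are just base64-encoded strings, so the encoding round-trip can be checked locally before (or after) creating the secret:

```shell
# base64-encode the secret values exactly as kubectl stores them in .data
echo -n 'demo' | base64        # ZGVtbw==
echo -n 'password' | base64    # cGFzc3dvcmQ=

# and decode what `kubectl get secret -o yaml` shows
echo -n 'ZGVtbw==' | base64 --decode    # demo
```

The -n matters: without it, echo appends a newline that gets encoded into the value.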



biometric:reference-helm michaelobrien$ kubectl create secret generic spring-security --from-literal=spring.user.name=demo --from-literal=spring.user.password=password
secret/spring-security created
biometric:reference-helm michaelobrien$ kubectl get secret spring-security -o yaml
apiVersion: v1
data:
  spring.user.name: ZGVtbw==
  spring.user.password: cGFzc3dvcmQ=
kind: Secret
metadata:
  creationTimestamp: "2021-09-21T16:41:47Z"
  name: spring-security
  namespace: default
  resourceVersion: "1554507"
  selfLink: /api/v1/namespaces/default/secrets/spring-security
  uid: 49733d89-4cdf-4d32-b871-7efa63b60c07
type: Opaque


In the deployment yaml:

        - name: {{ .Chart.Name }}

          env:
            - name: DB_USERNAME
              valueFrom: 
                secretKeyRef:
                  name: datasource-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom: 
                secretKeyRef:
                  name: datasource-credentials
                  key: password

          envFrom:
            - secretRef:
                name: spring-security
                #key: spring.user.password

In Spring Boot:
    	String secret = System.getenv("spring.user.password");
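Note that spring.user.password is a legal environment variable name for the container even though shells cannot reference it as a $variable; printenv (or System.getenv in the JVM) can still read it. A quick local check of that behavior:

```shell
# Kubernetes injects the secret keys verbatim as environment variables.
# Dotted names are valid in the environment, just not as shell variables:
env 'spring.user.password=password' printenv spring.user.password   # password
```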

In secrets.yaml:
---
apiVersion: v1
data:
  username: ZGVtbzI=
  # generate with echo -n 'password2' | base64
  password: cGFzc3dvcmQy
kind: Secret
metadata:
  creationTimestamp: "2021-09-21T16:41:47Z"
  name: datasource-credentials
  #namespace: default
  resourceVersion: "1554507"
  selfLink: /api/v1/namespaces/default/secrets/datasource-credentials
  uid: 49733d89-4cdf-4d32-b871-7efa63b60c07
type: Opaque

Redeploy:
helm3 delete reference-nbi
sudo helm3 package reference-nbi

# using the directory not the tgz package
helm3 install --set name=reference-nbi reference-nbi ./reference-nbi
biometric:reference-helm michaelobrien$ kubectl get secrets | grep spring
spring-security                       Opaque                                2      176m

Exercise the endpoint:
http://localhost:30040/nbi/api

Tail the logs:
kubectl logs -f reference-nbi-7c45ff855d-nh4dd
2021-09-21 19:27:43.607  INFO 8 --- [nio-8080-exec-9] c.c.reference.nbi.ApiController          : secret: password

helm3 uninstall reference-nbi



Set readiness and liveness probes in the helm charts

https://github.com/obrienlabs/refarch/commit/99197c874f20a30fcbb2e9db91d24d8adf6abb99

service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "reference-nbi.fullname" . }}
  labels:
{{ include "reference-nbi.labels" . | indent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.port }}
      protocol: TCP
      name: http
      nodePort: {{ .Values.service.nodePort }}
  selector:
    app.kubernetes.io/name: {{ include "reference-nbi.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}

deployment.yaml
apiVersion: apps/v1
#apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ include "reference-nbi.fullname" . }}
  labels:
{{ include "reference-nbi.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "reference-nbi.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "reference-nbi.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
    {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
    {{- end }}
      serviceAccountName: {{ template "reference-nbi.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /nbi/api
              port: http
          readinessProbe:
            httpGet:
              path: /nbi/api
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
    {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
    {{- end }}

values.yaml
# Default values for reference-nbi.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: obrienlabs/reference-nbi
  tag: 0.0.1
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000


service:
# expose externally
  type: NodePort
  port: 8080
  nodePort: 30040

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []

  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}


Install Helm Chart

https://hub.docker.com/repository/docker/obrienlabs/reference-nbi/general



# helm 3
# build/push the image to dockerhub first
biometric:docker michaelobrien$ ./build.sh 

biometric:reference-helm michaelobrien$ helm3 delete reference-nbi
release "reference-nbi" uninstalled
biometric:reference-helm michaelobrien$ sudo helm3 package reference-nbi
Password:
Successfully packaged chart and saved it to: /Users/michaelobrien/wse_github/refarch/reference-helm/reference-nbi-0.1.0.tgz
biometric:reference-helm michaelobrien$ kubectl logs -f reference-nbi-599f679559-xv9v7
Error from server (NotFound): pods "reference-nbi-599f679559-xv9v7" not found
biometric:reference-helm michaelobrien$ helm3 install --set name=reference-nbi reference-nbi ./reference-nbi
NAME: reference-nbi
LAST DEPLOYED: Mon Sep 20 23:16:48 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services reference-nbi)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
biometric:reference-helm michaelobrien$ kubectl get services
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP          78d
reference-nbi   NodePort    10.98.198.47   <none>        8080:30040/TCP   8s
biometric:reference-helm michaelobrien$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
reference-nbi-599f679559-gbxcx   0/1     Running   0          18s
biometric:reference-helm michaelobrien$ kubectl exec -it reference-nbi-599f679559-dj296 bash
root@reference-nbi-599f679559-dj296:/# curl http://127.0.0.1:8080/nbi/api
{"id":1,"content":"1 PASS cloud.containerization.reference.nbi.ApiController URL: http://127.0.0.1:8080/nbi/api URI: /nbi/api path: null referer: null caller: null Host: 127.0.0.1:8080 queryString: null decodedQueryString: null session attributes:  :  remoteAddr: 127.0.0.1 localAddr: 127.0.0.1 remoteHost: 127.0.0.1 serverName: 127.0.0.1"}root@reference-nbi-599f679559-dj296:/# curl http://127.0.0.1:8080/nbi/api
{"id":2,"content":"2 PASS cloud.containerization.reference.nbi.ApiController URL: http://127.0.0.1:8080/nbi/api URI: /nbi/api path: null referer: null caller: null Host: 127.0.0.1:8080 queryString: null decodedQueryString: null session attributes:  :  remoteAddr: 127.0.0.1 localAddr: 127.0.0.1 remoteHost: 127



Healthcheck log output:
2021-09-21 04:12:47.792 DEBUG 8 --- [nio-8080-exec-2] o.s.web.servlet.DispatcherServlet        : Completed 200 OK
2021-09-21 04:12:50.369 DEBUG 8 --- [nio-8080-exec-3] o.s.web.servlet.DispatcherServlet        : GET "/nbi/api", parameters={}
2021-09-21 04:12:50.370 DEBUG 8 --- [nio-8080-exec-3] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped to cloud.containerization.reference.nbi.ApiController#process(String, HttpServletRequest)
2021-09-21 04:12:50.371  INFO 8 --- [nio-8080-exec-3] c.c.reference.nbi.ApiController          : queryString decoded: null
2021-09-21 04:12:50.371  INFO 8 --- [nio-8080-exec-3] c.c.reference.nbi.ApiController          : cloud.containerization.reference.nbi.ApiController 31 PASS cloud.containerization.reference.nbi.ApiController URL: http://10.1.0.62:8080/nbi/api URI: /nbi/api path: null referer: null caller: null Host: 10.1.0.62:8080 queryString: null decodedQueryString: null session attributes:  :  remoteAddr: 10.1.0.1 localAddr: 10.1.0.62 remoteHost: 10.1.0.1 serverName: 10.1.0.62
2021-09-21 04:12:50.371 DEBUG 8 --- [nio-8080-exec-3] m.m.a.RequestResponseBodyMethodProcessor : Using 'application/json', given [*/*] and supported [application/json, application/*+json, application/json, application/*+json, application/x-jackson-smile, application/cbor]
2021-09-21 04:12:50.371 DEBUG 8 --- [nio-8080-exec-3] m.m.a.RequestResponseBodyMethodProcessor : Writing [cloud.containerization.reference.nbi.Api@2568dd58]
2021-09-21 04:12:50.373 DEBUG 8 --- [nio-8080-exec-3] o.s.web.servlet.DispatcherServlet        : Completed 200 OK


# helm 2
ubuntu@ip-172-31-81-46:~/obrienlabs$ helm install difference-nbi --name difference-nbi
NAME:   difference-nbi
LAST DEPLOYED: Mon Jun 10 19:00:59 2019
ubuntu@ip-172-31-81-46:~/obrienlabs$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
default         difference-nbi-5fc754f69-hqkr2            1/1     Running     0          16s
ubuntu@ip-172-31-81-46:~/obrienlabs$ kubectl get services --all-namespaces
NAMESPACE       NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                  AGE
default         difference-nbi                         ClusterIP   10.43.82.98     <none>        80/TCP                                   75s
ubuntu@ip-172-31-81-46:~/charts/stable$ sudo helm ls
NAME          	REVISION	UPDATED                 	STATUS  	CHART                       	APP VERSION  	NAMESPACE
difference-nbi	1       	Mon Jun 10 19:00:59 2019	DEPLOYED	difference-nbi-0.1.0        	1.0          	default   

ubuntu@ubuntu:~$ kubectl port-forward difference-nbi-74955fd75b-9kgb2 8180:80
Forwarding from 127.0.0.1:8180 -> 80
Forwarding from [::1]:8180 -> 80

SCP Helm Chart and git commit it

obrienbiometrics:difference-kubernetes michaelobrien$ scp -rp ubuntu@rke.obrienlabs.cloud:~/obrienlabs/* .
Chart.yaml                                                                                                                                                     100%  110     1.4KB/s   00:00    
.helmignore                                                                                                                                                    100%  342     4.5KB/s   00:00    
values.yaml                                                                                                                                                    100% 1070    13.7KB/s   00:00    
service.yaml                                                                                                                                                   100%  611     7.1KB/s   00:00    
deployment.yaml                                                                                                                                                100% 1581    12.7KB/s   00:00    
ingress.yaml                                                                                                                                                   100% 1070    13.2KB/s   00:00    
_helpers.tpl                                                                                                                                                   100% 1066    14.3KB/s   00:00    
test-connection.yaml                                                                                                                                           100%  585     9.9KB/s   00:00    
NOTES.txt                                                                                                                                                      100% 1513    20.9KB/s   00:00    


Upgrade Helm Chart

Use helm hooks to key into any part of the lifecycle.

ONAP references to helm upgrade

https://gitlab.com/obriensystems/oom/-/blob/master/kubernetes/helm/plugins/deploy/deploy.sh

https://git.onap.org/oom/tree/kubernetes/helm/plugins/deploy/deploy.sh#n184

https://docs.onap.org/en/elalto/submodules/oom.git/docs/oom_quickstart_guide.html

Issues with pod deletion - see https://github.com/obrienlabs/onap-root/blob/master/cd.sh#L63


Uninstall/delete Helm Chart

Delete the chart, delete any hanging pods, then optionally delete the PVC and PV.
sudo helm delete kafka
sudo helm del --purge kafka
kubectl delete pvc/datadir-kafka-0
or delete all in order

kubectl delete pvc --all
kubectl delete pv --all


Helm Backup Plugin

See https://github.com/maorfr/helm-backup (listed at https://v2.helm.sh/docs/related/)

sudo helm plugin install https://github.com/maorfr/helm-backup
helm install --namespace kafka --name kafka kafka/
sudo helm backup kafka
helm delete --purge kafka
sudo helm backup kafka --restore
biometric:incubator michaelobrien$ kubectl get pods --all-namespaces | grep kafka
kafka         kafka-0                                1/1     Running   1          2m53s
kafka         kafka-1                                1/1     Running   0          80s
kafka         kafka-2                                1/1     Running   0          47s
kafka         kafka-zookeeper-0                      1/1     Running   1          2m53s
kafka         kafka-zookeeper-1                      1/1     Running   0          2m17s
kafka         kafka-zookeeper-2                      1/1     Running   0          113s


Kubectl apply from Helm get/pull backup

There are cases where a deployed Helm chart can be backed up to a single YAML file using "helm get". To redeploy this YAML without doing a helm install, the file must first be edited to remove the release metadata and values extract at the top, before the first apiVersion line.

biometric:incubator michaelobrien$ sudo helm get kafka > kafka_get.yaml
biometric:incubator michaelobrien$ helm delete --purge kafka

biometric:incubator michaelobrien$ vi kafka_get.yaml
Remove the release header (everything from REVISION: down through the HOOKS: line) to avoid:
error: error parsing kafka_get.yaml: error converting YAML to JSON: yaml: line 7: could not find expected ':'

REVISION: 1
RELEASED: Thu Jul 23 14:26:00 2020
CHART: kafka-0.21.2
USER-SUPPLIED VALUES: {}
...
HOOKS:
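One way to automate that edit (a sketch, assuming the manifests in the helm get dump begin at the first `---` document separator) is to keep only the lines from the first separator onward:

```shell
# Fake a `helm get` dump: release header followed by the manifests.
cat > /tmp/kafka_get.yaml <<'EOF'
REVISION: 1
RELEASED: Thu Jul 23 14:26:00 2020
CHART: kafka-0.21.2
USER-SUPPLIED VALUES: {}
HOOKS:
---
apiVersion: v1
kind: Service
EOF

# Drop everything before the first YAML document separator.
sed -n '/^---$/,$p' /tmp/kafka_get.yaml > /tmp/kafka_apply.yaml
head -n 2 /tmp/kafka_apply.yaml   # prints: ---  then  apiVersion: v1
```

The resulting file is then suitable for `kubectl apply -f`.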

biometric:incubator michaelobrien$ kubectl apply -f kafka_get.yaml
pod/kafka-test-topic-create-consume-produce created
poddisruptionbudget.policy/kafka-zookeeper created
configmap/kafka-zookeeper created
service/kafka-zookeeper-headless created
service/kafka-zookeeper created
service/kafka created
service/kafka-headless created
statefulset.apps/kafka-zookeeper created
statefulset.apps/kafka created

biometric:incubator michaelobrien$ kubectl get pods --all-namespaces | grep kafka
default       kafka-0                                   1/1     Running   1          4m58s
default       kafka-1                                   1/1     Running   0          3m37s
default       kafka-2                                   1/1     Running   0          3m5s
default       kafka-test-topic-create-consume-produce   0/1     Error     0          4m58s
default       kafka-zookeeper-0                         1/1     Running   1          4m58s
default       kafka-zookeeper-1                         1/1     Running   0          4m24s
default       kafka-zookeeper-2                         1/1     Running   0          3m58s

biometric:incubator michaelobrien$ kubectl delete -f kafka_get.yaml 
pod "kafka-test-topic-create-consume-produce" deleted
poddisruptionbudget.policy "kafka-zookeeper" deleted
configmap "kafka-zookeeper" deleted
service "kafka-zookeeper-headless" deleted
service "kafka-zookeeper" deleted
service "kafka" deleted
service "kafka-headless" deleted
statefulset.apps "kafka-zookeeper" deleted
statefulset.apps "kafka" deleted


Helm v3 API

https://helm.sh/docs/faq/


Other Tools on top of Kubernetes

https://github.com/kubernetes-sigs/kustomize (kustomize is not deprecated; it has been built into kubectl as "kubectl apply -k" since 1.14)


1 Comment

  1. For deployments

    pre-Kubernetes 1.16: apiVersion: extensions/v1beta1
    Kubernetes 1.16+:    apiVersion: apps/v1

    to avoid

    biometric:reference-helm michaelobrien$ helm install --name reference-nbi reference-nbi-0.1.0.tgz
    Error: validation failed: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"


    with apps/v1

    biometric:reference-helm michaelobrien$ helm install --name reference-nbi reference-nbi-0.1.0.tgz
    NAME:   reference-nbi
    LAST DEPLOYED: Wed Jul 15 21:18:54 2020
    NAMESPACE: default
    STATUS: DEPLOYED

    RESOURCES:
    ==> v1/Deployment
    NAME           READY  UP-TO-DATE  AVAILABLE  AGE
    reference-nbi  0/1    1           0          0s

    default       reference-nbi-6bf49d969d-wrhrh     1/1     Running   0          33s