
Michael O'Brien


Architecture

Decoupled microservices need an asynchronous publish/subscribe queue.

We will use Kafka as the messaging queue and streaming processor, with ZooKeeper as the coordinating key/value store.

Cloud Kafka as a Service

AWS: AWS MSK - Managed Streaming for Apache Kafka

Azure: https://azure.microsoft.com/en-ca/blog/announcing-the-general-availability-of-azure-event-hubs-for-apache-kafka/
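For AWS, a hedged sketch of standing up a 3-broker MSK cluster from the CLI (the cluster name, Kafka version, and broker node group JSON file are placeholders):

# create and list MSK clusters via the aws CLI
aws kafka create-cluster \
  --cluster-name demo-msk \
  --kafka-version "2.2.1" \
  --number-of-broker-nodes 3 \
  --broker-node-group-info file://brokernodegroupinfo.json
aws kafka list-clusters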

Kubernetes Cluster

Use any available Kubernetes cluster such as RKE or Docker Desktop, as long as you have kubectl installed.

Install helm
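A couple of hedged options for installing Helm 3 - Homebrew on macOS, or the official install script:

# macOS
brew install helm
# or the generic install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version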

Kafka

Kafka Operator Charts

https://github.com/banzaicloud/kafka-operator

see Helm Development Guide

Kafka Installation using Helm Charts

see https://helm.sh/docs/helm/helm_install/ and the incubator chart https://github.com/helm/charts/tree/master/incubator/kafka
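Once the incubator repo is added (see the commands below), Helm 3 can list the chart and dump its default values before installing - a hedged convenience step:

helm search repo incubator/kafka   # confirm the chart is visible
helm show values incubator/kafka   # inspect defaults (replicas, persistence, image) before overriding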

Run the current kafka helm chart, which consists of a 3-node ZooKeeper StatefulSet and a 3-node Kafka StatefulSet.

# running Kubernetes 1.15.5 and Helm 3.0.3
:wse_helm $ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
:wse_helm $ helm version
version.BuildInfo{Version:"v3.0.3", GitCommit:"ac925eb7279f4a6955df663a0128044a8a6b7593", GitTreeState:"clean", GoVersion:"go1.13.7"}

# add the incubator chart repo and refresh the local index
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm repo update

# helm 3
:wse_helm $ helm install kafka incubator/kafka
# or helm 2
helm install --name kafka incubator/kafka
NAME: kafka
LAST DEPLOYED: Sat Feb  8 19:28:41 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
### Connecting to Kafka from inside Kubernetes
You can connect to Kafka by running a simple pod in the K8s cluster with a configuration like this:

# don't use confluentinc/cp-kafka:5.0.1 - use solsson/kafka:0.11.0.0
apiVersion: v1
kind: Pod
metadata:
  name: testclient
  namespace: default
spec:
  containers:
  - name: kafka
    image: confluentinc/cp-kafka:5.0.1
    command:
      - sh
      - -c
      - "exec tail -f /dev/null"

Once you have the testclient pod above running, you can list all kafka
topics with:
  kubectl -n default exec testclient -- kafka-topics --zookeeper kafka-zookeeper:2181 --list

To create a new topic:
  kubectl -n default exec testclient -- kafka-topics --zookeeper kafka-zookeeper:2181 --topic test1 --create --partitions 1 --replication-factor 1

To listen for messages on a topic:
  kubectl -n default exec -ti testclient -- kafka-console-consumer --bootstrap-server kafka:9092 --topic test1 --from-beginning

To stop the listener session above press: Ctrl+C
To start an interactive message producer session:
  kubectl -n default exec -ti testclient -- kafka-console-producer --broker-list kafka-headless:9092 --topic test1

To create a message in the above session, simply type the message and press "enter"
To end the producer session try: Ctrl+C

If you specify "zookeeper.connect" in configurationOverrides, please replace "kafka-zookeeper:2181" with the value of "zookeeper.connect", or you will get an error.
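As an aside, a hedged example of such an override at install time (the external ZooKeeper host is a placeholder; dots in the key must be escaped in Helm's --set syntax):

helm install kafka incubator/kafka \
  --set configurationOverrides."zookeeper\.connect"=my-zookeeper:2181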

:main $ kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
default       kafka-0                                  0/1     Running   1          72s
default       kafka-zookeeper-0                        1/1     Running   0          72s
default       kafka-zookeeper-1                        0/1     Running   0          23s

:main $ kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
default       kafka-0                                  0/1     Running   1          89s
default       kafka-zookeeper-0                        1/1     Running   0          89s
default       kafka-zookeeper-1                        1/1     Running   0          40s
default       kafka-zookeeper-2                        0/1     Running   0          15s

:main $ kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
default       kafka-0                                  1/1     Running   1          2m14s
default       kafka-1                                  0/1     Running   0          32s
default       kafka-zookeeper-0                        1/1     Running   0          2m14s
default       kafka-zookeeper-1                        1/1     Running   0          85s
default       kafka-zookeeper-2                        1/1     Running   0          60s

:main $ kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
default       kafka-0                                  1/1     Running   1          2m35s
default       kafka-1                                  1/1     Running   0          53s
default       kafka-2                                  0/1     Running   0          16s
default       kafka-zookeeper-0                        1/1     Running   0          2m35s
default       kafka-zookeeper-1                        1/1     Running   0          106s
default       kafka-zookeeper-2                        1/1     Running   0          81s

:main $ kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
default       kafka-0                                  1/1     Running   1          2m58s
default       kafka-1                                  1/1     Running   0          76s
default       kafka-2                                  1/1     Running   0          39s
default       kafka-zookeeper-0                        1/1     Running   0          2m58s
default       kafka-zookeeper-1                        1/1     Running   0          2m9s
default       kafka-zookeeper-2                        1/1     Running   0          104s

:main $ kubectl get services --all-namespaces
NAMESPACE     NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default       kafka                      ClusterIP   10.100.190.143   <none>        9092/TCP                     135m
default       kafka-headless             ClusterIP   None             <none>        9092/TCP                     135m
default       kafka-zookeeper            ClusterIP   10.103.230.77    <none>        2181/TCP                     135m
default       kafka-zookeeper-headless   ClusterIP   None             <none>        2181/TCP,3888/TCP,2888/TCP   135m
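For a quick connectivity check from the host, a port-forward against the kafka ClusterIP service works (a sketch only - the brokers still advertise in-cluster DNS names, so clients outside the cluster need a proper external listener setup):

kubectl port-forward svc/kafka 9092:9092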

Running a Kafka client inside the default Kubernetes namespace


:kafka $ vi kafka-client.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testclient
  namespace: default
spec:
  containers:
  - name: kafka
    image: solsson/kafka:0.11.0.0
    command:
      - sh
      - -c
      - "exec tail -f /dev/null"
                             
:kafka $ kubectl apply -f kafka-client.yaml 
pod/testclient created

default       testclient                               1/1     Running   0          11s

:kafka $ kubectl exec -it testclient bash
root@testclient:/opt/kafka# cd bin
root@testclient:/opt/kafka/bin# ./kafka-topics.sh --zookeeper kafka-zookeeper:2181 --list
root@testclient:/opt/kafka/bin# ./kafka-topics.sh --zookeeper kafka-zookeeper:2181 --topic test1 --create --partitions 1 --replication-factor 1
Created topic "test1".
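To verify the topic layout (partitions, leader, ISR) - a hedged extra check from the same shell:

./kafka-topics.sh --zookeeper kafka-zookeeper:2181 --describe --topic test1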


Run a Kafka producer and consumer across two Kubernetes client pod shells

# producer
root@testclient:/opt/kafka/bin# ./kafka-console-producer.sh --broker-list kafka-headless:9092 --topic test1
>message1
>message2

# consumer
root@testclient:/opt/kafka/bin# ./kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic test1 --from-beginning
message1
message2
^C
Processed a total of 2 messages
root@testclient:/opt/kafka/bin# exit
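Optionally, before exiting, the console consumer's group and its lag can be inspected from the same pod (a hedged extra step; the console consumer's group id is auto-generated, so list the groups first):

./kafka-consumer-groups.sh --bootstrap-server kafka:9092 --list
./kafka-consumer-groups.sh --bootstrap-server kafka:9092 --describe --group <group-id>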

Kafka on Kubernetes 1.16.6 over Docker 19.03.8 via Docker Desktop 2.3.0.2

20200513 rerun after Kubernetes 1.16 upgrade

# refresh helm charts
:charts $ kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:18:29Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

:charts $ helm repo list
NAME        	URL                                                      
incubator   	http://storage.googleapis.com/kubernetes-charts-incubator
stable      	https://kubernetes-charts.storage.googleapis.com         
oteemocharts	https://oteemo.github.io/charts 

Troubleshoot PVC on Kubernetes Cluster without a default Storage Class

see: out of the box, RKE does not ship with a default StorageClass or provisioner; these need to be added after cluster install - Kubernetes Developer Guide#AddLocalstorageclassforPVCprovisioningtoRKE
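A minimal sketch of that post-install addition - a no-provisioner StorageClass plus a manually created hostPath PV (names, path, and size are placeholders; the linked guide is authoritative):

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv-0
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  hostPath:
    path: /mnt/data/kafka-0
EOF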

$ kubectl describe pod kafka-0
Name:           kafka-0

  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-kafka-0
    ReadOnly:   false
  default-token-d2z97:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-d2z97
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
$ kubectl get pv
No resources found in default namespace.
$ kubectl get pvc
NAME              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
datadir-kafka-0   Pending                                      local-storage   98s
$ sudo helm delete kafka
[sudo] password for amdocs: 
release "kafka" deleted
$ kubectl get pvc
NAME              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
datadir-kafka-0   Pending                                      local-storage   7m35s
$ sudo helm del --purge kafka
release "kafka" deleted
$ kubectl delete pvc/datadir-kafka-0
persistentvolumeclaim "datadir-kafka-0" deleted

or delete the PVCs before deleting the PVs:
 kubectl delete pvc --all
 kubectl delete pv --all

Add an additional storage provisioner

see adding the Rancher Longhorn storageclass/provisioner in Kubernetes Developer Guide#AddRancherChartandcert-managerChart
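Once Longhorn is in, marking its StorageClass as the cluster default lets chart PVCs bind with no per-chart overrides (the class name longhorn is the usual one - confirm with kubectl get storageclass):

kubectl patch storageclass longhorn -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'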

Modifying/packaging Charts

Note: if you are packaging the chart yourself rather than installing from the repo, temporarily move the zookeeper chart into a created "charts" directory under kafka/.

:incubator michaelobrien$ cd kafka
:kafka michaelobrien$ sudo helm dependency update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "incubator" chart repository
...Successfully got an update from the "oteemocharts" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading zookeeper from repo https://kubernetes-charts-incubator.storage.googleapis.com/
Deleting outdated charts
:kafka michaelobrien$ cd ..
:incubator michaelobrien$ sudo helm package kafka
Successfully packaged chart and saved it to: /Users/michaelobrien/wse_github/charts/incubator/kafka-0.20.9.tgz
:incubator michaelobrien$ ls *.tgz
kafka-0.20.9.tgz	zookeeper-2.1.3.tgz

:incubator michaelobrien$ ls -la /Users/michaelobrien/Library/Caches/helm/repository
total 20544
drwxr-xr-x  13 michaelobrien  staff      416 13 Jul 23:22 .
drwxr-xr-x   3 michaelobrien  staff       96  8 Feb 19:24 ..
-rw-r--r--   1 root           staff      898 13 Jul 23:05 incubator-charts.txt
-rw-r--r--   1 michaelobrien  staff   843920 13 Jul 23:05 incubator-index.yaml

:incubator michaelobrien$ sudo helm package kafka
:incubator michaelobrien$ sudo helm install kafka kafka-0.20.9.tgz 
NAME: kafka

Check that the PVs/PVCs are coming up Bound:
:incubator michaelobrien$ kubectl get pvc --all-namespaces
NAMESPACE   NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     datadir-kafka-0   Bound    pvc-f1712213-cd18-42f0-a78a-20a1b0acf357   1Gi        RWO            standard       8s
:incubator michaelobrien$ kubectl get pv --all-namespaces
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-f1712213-cd18-42f0-a78a-20a1b0acf357   1Gi        RWO            Delete           Bound    default/datadir-kafka-0   standard                20s

Note that in constrained clusters like minikube you will see some restarts/CrashLoopBackOffs initially.

:incubator michaelobrien$ kubectl get pods --all-namespaces | grep kafka
default        kafka-0                                   0/1     Error     2          2m19s
default        kafka-zookeeper-0                         0/1     Running   2          2m19s
default        kafka-zookeeper-1                         0/1     Running   1          89s
default        kafka-zookeeper-2                         1/1     Running   0          46s
:incubator michaelobrien$ kubectl get pods --all-namespaces | grep kafka
default        kafka-0                                   0/1     CrashLoopBackOff   2          2m37s
default        kafka-zookeeper-0                         1/1     Running            2          2m37s
default        kafka-zookeeper-1                         1/1     Running            1          107s
default        kafka-zookeeper-2                         1/1     Running            0          64s

The pods will eventually start coming up:
:incubator michaelobrien$ kubectl get pods --all-namespaces | grep kafka
default        kafka-0                                   1/1     Running             3          3m26s
default        kafka-1                                   0/1     ContainerCreating   0          2s
default        kafka-zookeeper-0                         1/1     Running             2          3m26s
default        kafka-zookeeper-1                         1/1     Running             1          2m36s
default        kafka-zookeeper-2                         1/1     Running             0          113s
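If the CrashLoopBackOffs above persist on a very small VM, a hedged way to shrink the footprint is a single-broker install (value names assume the incubator charts - verify with helm show values incubator/kafka):

helm install kafka incubator/kafka \
  --set replicas=1 \
  --set zookeeper.replicaCount=1 \
  --set configurationOverrides."offsets\.topic\.replication\.factor"=1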

Check the provisioner events on the PVC:
biometric:incubator michaelobrien$ kubectl describe pvc datadir-kafka-0
  Normal  ExternalProvisioning   5m58s (x2 over 5m58s)  persistentvolume-controller                                    waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
  Normal  Provisioning           5m58s                  k8s.io/minikube-hostpath f31db6e7-c85e-11ea-9ea8-000c2969d6f9  External provisioner is provisioning volume for claim "default/datadir-kafka-0"
  Normal  ProvisioningSucceeded  5m58s                  k8s.io/minikube-hostpath f31db6e7-c85e-11ea-9ea8-000c2969d6f9  Successfully provisioned volume pvc-f1712213-cd18-42f0-a78a-20a1b0acf357

:incubator michaelobrien$ kubectl get pods --all-namespaces | grep kafka
default        kafka-0                                   1/1     Running   3          6m52s
default        kafka-1                                   1/1     Running   0          3m28s
default        kafka-2                                   1/1     Running   0          2m56s
default        kafka-zookeeper-0                         1/1     Running   2          6m52s
default        kafka-zookeeper-1                         1/1     Running   1          6m2s
default        kafka-zookeeper-2                         1/1     Running   0          5m19s

For reference, the same run on a faster VM:

:incubator michaelobrien$ kubectl get pods --all-namespaces | grep kafka
default                kafka-0                                      1/1     Running   1          2m47s
default                kafka-1                                      1/1     Running   0          72s
default                kafka-2                                      1/1     Running   0          33s
default                kafka-zookeeper-0                            1/1     Running   0          2m47s
default                kafka-zookeeper-1                            1/1     Running   0          2m24s
default                kafka-zookeeper-2                            1/1     Running   0          114s

:incubator michaelobrien$ helm list
NAME 	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART       	APP VERSION
kafka	default  	1       	2020-07-13 23:24:05.547084 -0400 EDT	deployed	kafka-0.20.9	5.0.1 


Note: even though there are FailedScheduling events in the log on failed bindings, all 3 kafka pods eventually bind to their PVCs.
You can see that kafka-0 had restarts while kafka-1 did not, but kafka-1 still has initial failure events:
:incubator michaelobrien$ kubectl describe pod kafka-1

  Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "kafka-1": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "kafka-1": pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         <unknown>  default-scheduler  Successfully assigned default/kafka-1 to minikube
  Normal   Pulled            4m54s      kubelet, minikube  Container image "confluentinc/cp-kafka:5.0.1" already present on machine
  Normal   Created           4m54s      kubelet, minikube  Created container kafka-broker
  Normal   Started           4m54s      kubelet, minikube  Started container kafka-broker
:incubator michaelobrien$ kubectl get pv --all-namespaces
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-1f7ea6b6-b222-4b34-8124-ef4da0699c13   1Gi        RWO            Delete           Bound    default/datadir-kafka-2   standard                4m57s
pvc-b015ef5f-5c2a-4aec-80b1-248c1cd674d7   1Gi        RWO            Delete           Bound    default/datadir-kafka-1   standard                5m29s
pvc-f1712213-cd18-42f0-a78a-20a1b0acf357   1Gi        RWO            Delete           Bound    default/datadir-kafka-0   standard                8m53s
:incubator michaelobrien$ kubectl get pvc --all-namespaces
NAMESPACE   NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     datadir-kafka-0   Bound    pvc-f1712213-cd18-42f0-a78a-20a1b0acf357   1Gi        RWO            standard       8m58s
default     datadir-kafka-1   Bound    pvc-b015ef5f-5c2a-4aec-80b1-248c1cd674d7   1Gi        RWO            standard       5m34s
default     datadir-kafka-2   Bound    pvc-1f7ea6b6-b222-4b34-8124-ef4da0699c13   1Gi        RWO            standard       5m2s

Deleting and fully removing the chart

Wait for all pods to stop
:incubator michaelobrien$ helm delete kafka
release "kafka" deleted
:incubator michaelobrien$ kubectl get pods --all-namespaces --watch
NAMESPACE      NAME                                      READY   STATUS        RESTARTS   AGE
cert-manager   cert-manager-7747db9d88-7zsqr             1/1     Running       1          115m
cert-manager   cert-manager-cainjector-87c85c6ff-6g499   1/1     Running       1          115m
cert-manager   cert-manager-webhook-64dc9fff44-rqhlr     1/1     Running       1          115m
default        kafka-1                                   1/1     Terminating   0          7m42s
default        kafka-2                                   1/1     Terminating   0          7m10s


:incubator michaelobrien$ kubectl get pvc --all-namespaces
NAMESPACE   NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     datadir-kafka-0   Bound    pvc-6ea5527c-97a2-4ff4-b5a9-9b267a1b5f70   1Gi        RWO            standard       2d21h
default     datadir-kafka-1   Bound    pvc-07a5222d-a54d-4aae-b909-0858bd1024e0   1Gi        RWO            standard       2d21h
default     datadir-kafka-2   Bound    pvc-7eea5881-0dc2-4f4e-9b47-0b75e6d3541e   1Gi        RWO            standard       2d21h

:incubator michaelobrien$ kubectl get pv --all-namespaces
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-07a5222d-a54d-4aae-b909-0858bd1024e0   1Gi        RWO            Delete           Bound    default/datadir-kafka-1   standard                2d21h
pvc-6ea5527c-97a2-4ff4-b5a9-9b267a1b5f70   1Gi        RWO            Delete           Bound    default/datadir-kafka-0   standard                2d21h
pvc-7eea5881-0dc2-4f4e-9b47-0b75e6d3541e   1Gi        RWO            Delete           Bound    default/datadir-kafka-2   standard                2d21h

:incubator michaelobrien$ kubectl delete pvc/datadir-kafka-0
persistentvolumeclaim "datadir-kafka-0" deleted
:incubator michaelobrien$ kubectl delete pvc/datadir-kafka-1
persistentvolumeclaim "datadir-kafka-1" deleted
:incubator michaelobrien$ kubectl delete pvc/datadir-kafka-2
persistentvolumeclaim "datadir-kafka-2" deleted

:incubator michaelobrien$ kubectl delete pv/pvc-07a5222d-a54d-4aae-b909-0858bd1024e0
persistentvolume "pvc-07a5222d-a54d-4aae-b909-0858bd1024e0" deleted
:incubator michaelobrien$ kubectl delete pv/pvc-6ea5527c-97a2-4ff4-b5a9-9b267a1b5f70
persistentvolume "pvc-6ea5527c-97a2-4ff4-b5a9-9b267a1b5f70" deleted
:incubator michaelobrien$ kubectl delete pv/pvc-7eea5881-0dc2-4f4e-9b47-0b75e6d3541e
persistentvolume "pvc-7eea5881-0dc2-4f4e-9b47-0b75e6d3541e" deleted


or all at once
:incubator michaelobrien$ kubectl delete pvc --all
persistentvolumeclaim "datadir-kafka-0" deleted
persistentvolumeclaim "datadir-kafka-1" deleted
persistentvolumeclaim "datadir-kafka-2" deleted
:incubator michaelobrien$ kubectl delete pv --all
persistentvolume "pvc-1f7ea6b6-b222-4b34-8124-ef4da0699c13" deleted
persistentvolume "pvc-b015ef5f-5c2a-4aec-80b1-248c1cd674d7" deleted
persistentvolume "pvc-f1712213-cd18-42f0-a78a-20a1b0acf357" deleted

verify
:incubator michaelobrien$ kubectl get pv --all-namespaces
No resources found
:incubator michaelobrien$ kubectl get pvc --all-namespaces
No resources found
:incubator michaelobrien$ kubectl get storageclass --all-namespaces
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  3d2h


Links

https://kafka.apache.org/documentation/streams/

https://docs.confluent.io/3.3.0/clients/kafka-jms-client/docs/index.html

20200517: http://cloudurable.com/blog/kafka-architecture/index.html


1 Comment

  1. Debugging on a VMware Fusion Ubuntu 16.04 VM running RKE Kubernetes 1.18.3 - the issue is that RKE no longer ships a StorageClass/provisioner out of the box (it is cloud ready), so a hostpath one is added manually

    amdocs@obriensystemsu0:~$ sudo helm ls --all
    NAME    	REVISION	UPDATED                 	STATUS 	CHART       	APP VERSION	NAMESPACE
    my-kafka	1       	Fri Jul 10 12:47:33 2020	DELETED	kafka-0.21.2	5.0.1      	default  
    amdocs@obriensystemsu0:~$ sudo helm del --purge my-kafka
    release "my-kafka" deleted
    amdocs@obriensystemsu0:~$ sudo helm install --name my-kafka incubator/kafka
    NAME:   my-kafka
    
    amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces -o wide
    NAMESPACE              NAME                                         READY   STATUS      RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
    default                my-kafka-0                                   0/1     Pending     0          3m11s   <none>           <none>           <none>           <none>
    default                my-kafka-zookeeper-0                         1/1     Running     0          3m11s   10.42.0.22       192.168.75.129   <none>           <none>
    default                my-kafka-zookeeper-1                         1/1     Running     0          2m24s   10.42.0.23       192.168.75.129   <none>           <none>
    default                my-kafka-zookeeper-2                         1/1     Running     0          116s    10.42.0.24       192.168.75.129   <none>           <none>
    
    amdocs@obriensystemsu0:~$ kubectl describe pod my-kafka-0
    
    Events:
      Type     Reason            Age        From               Message
      ----     ------            ----       ----               -------
      Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "my-kafka-0": pod has unbound immediate PersistentVolumeClaims
      Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "my-kafka-0": pod has unbound immediate PersistentVolumeClaims
    
    amdocs@obriensystemsu0:~$ kubectl get pv,pvc
    I0710 15:08:57.789872   48270 request.go:621] Throttling request took 1.18003902s, request: GET:https://192.168.75.129:6443/apis/scheduling.k8s.io/v1?timeout=32s
    NAME                                       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    persistentvolumeclaim/datadir-my-kafka-0   Pending                                                     141m
    
    
    amdocs@obriensystemsu0:~$ kubectl describe pvc
    Name:          datadir-my-kafka-0
    Namespace:     default
    StorageClass:  
    Status:        Pending
    Volume:        
    Labels:        app.kubernetes.io/component=kafka-broker
                   app.kubernetes.io/instance=my-kafka
                   app.kubernetes.io/name=kafka
    Annotations:   <none>
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      
    Access Modes:  
    VolumeMode:    Filesystem
    Mounted By:    my-kafka-0
    Events:
      Type    Reason         Age                    From                         Message
      ----    ------         ----                   ----                         -------
      Normal  FailedBinding  114s (x562 over 142m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
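    One quick way to add a hostpath provisioner to RKE is Rancher's local-path-provisioner (an assumption on my part - the linked guide in the page body covers the local-storage alternative):
    
    kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
    kubectl patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'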
    
    As a reference, on my Mac using Docker Desktop we have a hostpath StorageClass provisioner:
    biometric:install michaelobrien$ kubectl get pvc
    NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    datadir-kafka-0   Bound    pvc-6a69f312-055a-4444-9c24-fa8fe4878d3a   1Gi        RWO            hostpath       57d
    datadir-kafka-1   Bound    pvc-84387536-09be-44ea-b137-2ef40cc6d6f3   1Gi        RWO            hostpath       57d
    datadir-kafka-2   Bound    pvc-1a0ca822-5bf2-4b11-bffb-39b4f5849d69   1Gi        RWO            hostpath       57d