A workload that does not deviate from its original automated infrastructure-as-code deployment, at the container, service, and infrastructure levels, can be restarted with minimal impact on the system. This is why Rancher, one of the first implementations of Kubernetes outside of kubeadm, named its orchestration engine around the concept of "cattle": we treat our infrastructure as cattle, not "pets", and do not hand-adjust each instance.
To rapidly plan the FinOps profile before going into more detail, we need a few simple derived formulas covering several architectural scenarios. These require base costs for compute and persistence, plus a derived base-case cost (the overhead adjustment).
| Type | Granularity | Service | Example | Utilization per service | Formula |
|---|---|---|---|---|---|
| compute | 1 vCPU | IaaS EC2 | t3a.micro | 100% | |
| compute | 1 vCPU | PaaS K8s | 3 x t3a.large | 1/12 | |
| persistence | 1 GB | IaaS RDS | | 100% | |
| throughput | 1 Gbps | Network In | | | |
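The table above can be turned into a first-pass cost formula: monthly cost attributed to a service is roughly hourly price x instance count x hours per month x the service's utilization share of the pool. A minimal sketch, assuming illustrative on-demand hourly prices (the `0.0094` and `0.0752` figures are assumptions, not from the source) and the common 730 hours/month approximation:

```python
# Minimal per-service FinOps cost estimate.
# Prices below are illustrative assumptions, not quoted from the source.

HOURS_PER_MONTH = 730  # common FinOps approximation (8760 h / 12 months)

def monthly_cost(hourly_price: float, instance_count: int,
                 utilization_per_service: float) -> float:
    """Monthly cost attributed to one service:
    price/hour x instances x hours/month x share of the pool consumed."""
    return (hourly_price * instance_count * HOURS_PER_MONTH
            * utilization_per_service)

# IaaS EC2: a dedicated t3a.micro, fully attributed to one service (100%).
iaas = monthly_cost(hourly_price=0.0094, instance_count=1,
                    utilization_per_service=1.0)

# PaaS K8s: 3 x t3a.large shared by the cluster; the service is charged
# a 1/12 share of the pool, per the table above.
paas = monthly_cost(hourly_price=0.0752, instance_count=3,
                    utilization_per_service=1 / 12)

print(f"IaaS EC2 per-service monthly cost: ${iaas:.2f}")
print(f"PaaS K8s per-service monthly cost: ${paas:.2f}")
```

With shared PaaS capacity, the utilization share does the FinOps work: the same formula covers a dedicated IaaS instance (share = 100%) and a fractional slice of a Kubernetes node pool.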