Learn DevOps Helm/Helmfile Kubernetes deployment
The main motivation for this course is to give students a comprehensive explanation of deploying applications to a Kubernetes cluster using the Helm chart standard via the helm and helmfile binaries. To achieve this goal, the course works with particular Helm charts such as Grafana and Prometheus. The course does not go into deep detail about every single value specified in the Helm charts used throughout the examples; rather, I will try to explain everything in a practical context so everybody can fit the ideas into her/his own way of thinking.
This is an introduction to the course, in which I will briefly go through the topics that will be covered.
I will briefly explain how to start a Kubernetes cluster in AWS and provide an example of starting the cluster with the kops binary. There are other ways to start your K8s cluster, for example by using kubeadm, but from the simplicity perspective I believe the best approach is a combination of:
1) kops, which generates Terraform files into a folder
2) starting up your K8s cluster by running terraform init && terraform apply
In case you want to stop your cluster overnight (or for any other reason), simply run terraform destroy. When you need your cluster again, simply run terraform apply and it is back up.
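The whole cycle described above can be sketched as follows; the cluster name, state bucket, zone, node count, and key path below are placeholders, not values from this course:

```shell
# Generate SSH keys first if you have none (used later for node access):
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""

# Let kops render Terraform code into the current folder instead of
# creating the AWS resources directly:
kops create cluster \
  --name=k8s.example.com \
  --state=s3://my-kops-state-bucket \
  --zones=eu-central-1a \
  --node-count=2 \
  --ssh-public-key=~/.ssh/id_rsa.pub \
  --target=terraform \
  --out=.

terraform init && terraform apply   # start the cluster
terraform destroy                   # stop it, e.g. overnight
terraform apply                     # bring it back up
```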
A Hosted Zone is a prerequisite for starting up a Kubernetes cluster in AWS, and its name is naturally also one of the flags you need to specify when using the kops command to generate the Terraform code. A new Hosted Zone can easily be created in the Route53 section of AWS.
In this lecture I will explain the importance of the aws binary when we are setting up a Kubernetes cluster via kops and Terraform, and why it is actually needed in the initial phase.
I will guide you through the process of installing the kops binary in this lecture. You can go to the kops web page and follow the installation instructions, or you can use a simple shell function I have prepared as part of the materials for this lecture.
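Such an install function might look roughly like this for Linux; the version number is only an example and the URL follows the naming of the kops releases on GitHub:

```shell
install_kops() {
  local version="v1.18.2"   # example version, adjust as needed
  curl -Lo kops \
    "https://github.com/kubernetes/kops/releases/download/${version}/kops-linux-amd64"
  chmod +x kops
  sudo mv kops /usr/local/bin/kops
}
```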
I will explain why we need to create an S3 bucket in AWS when we want to use kops and Terraform to start/stop and manage our Kubernetes cluster.
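kops keeps its cluster state in that bucket; creating it with the aws binary might look like this (the bucket name is a placeholder):

```shell
aws s3api create-bucket --bucket my-kops-state-bucket --region us-east-1
# Versioning is recommended so you can roll back a broken state:
aws s3api put-bucket-versioning --bucket my-kops-state-bucket \
  --versioning-configuration Status=Enabled
export KOPS_STATE_STORE=s3://my-kops-state-bucket   # so kops finds the state
```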
The main goal of this lecture is to show you guys how to install the terraform binary, which is in fact very easy. I have also prepared a trivial bash function to install terraform if you are using a Linux OS.
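The trivial bash function can look roughly like this; the version is an example and the URL follows the HashiCorp releases naming:

```shell
install_terraform() {
  local version="0.12.29"   # example version, adjust as needed
  curl -LO "https://releases.hashicorp.com/terraform/${version}/terraform_${version}_linux_amd64.zip"
  unzip "terraform_${version}_linux_amd64.zip"
  sudo mv terraform /usr/local/bin/terraform
  rm "terraform_${version}_linux_amd64.zip"
}
```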
I will explain how to install the kubectl binary on your PC and why this binary is important when you want to orchestrate/control your Kubernetes cluster. kubectl is used for controlling a Kubernetes cluster in general and is not something dedicated to AWS: you need kubectl every time you want to interact with your cluster, no matter which cloud or on-premise solution you are using.
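A few everyday kubectl commands that work against any Kubernetes cluster, cloud or on-premise:

```shell
kubectl get nodes                  # list the cluster nodes
kubectl get pods --all-namespaces  # list all pods in the cluster
kubectl describe pod <pod-name>    # inspect a single pod in detail
kubectl logs <pod-name>            # read a container's logs
```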
This lecture explains how to start your Kubernetes cluster in AWS for the first time by running the kops and terraform binaries. We also need to generate SSH keys as the last step before starting up our Kubernetes cluster.
The purpose of this lecture is to start Jupyter Notebooks as a single Docker container. This will help you in the future to understand the difference between running Docker containers "manually" and letting the Kubernetes orchestration platform run the same container for you. If your use case is simply running one Docker image in your infrastructure, you can write some custom scripts to take care of the critical scenarios. However, most of us have a much more complex infrastructure and most likely need to run tens of containers. In such a case Kubernetes will take care of your containers much better than we could take care of them ourselves.
This lecture will provide an overview of deploying Jupyter Notebooks to Kubernetes. This will be achieved by writing a simple YAML file defining two Kubernetes objects: a Deployment and a Service. From that point on, we will always use YAML specifications for deploying to Kubernetes.
In this lecture I will describe the Jupyter Notebooks deployment in a simple YAML file. More specifically, I will talk about the Deployment and Service Kubernetes objects. These will be created in the K8s cluster, and as a result we will be able to access and use Jupyter Notebooks in AWS.
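A minimal shape of such a specification; the image, labels, and ports below are illustrative, not the exact file from the lecture:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter
  template:
    metadata:
      labels:
        app: jupyter
    spec:
      containers:
      - name: jupyter
        image: jupyter/base-notebook
        ports:
        - containerPort: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter
spec:
  type: NodePort
  selector:
    app: jupyter
  ports:
  - port: 8888
    targetPort: 8888
    nodePort: 30088   # NodePort must fall into the 30000-32767 range
```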
In this lecture I will trigger the actual deployment of Jupyter Notebooks by running kubectl apply -f <file_name>.yaml. We will examine the created Kubernetes objects. I will point out the difference between the kubectl and docker commands: kubectl is generally used whenever we want to control a Kubernetes cluster, while docker is used to control Docker containers started on the machine we are currently logged in to.
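The apply-and-inspect flow looks like this (the file name is illustrative):

```shell
kubectl apply -f jupyter.yaml   # create/update the objects from the YAML file
kubectl get deployments         # check the Deployment
kubectl get pods                # check the pod created by the Deployment
kubectl get services            # shows the NodePort assigned to the Service
```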
I will explain how it is possible that we can actually access our Jupyter Notebooks instance via a Service of type NodePort. I will physically SSH to both of the nodes (even to the master server) in the Kubernetes cluster and run netstat -tunlp | grep <NodePort_Number>. I will also emphasize the importance of the firewall (Security Group) settings that allow the NodePort to be accessible via a browser.
I will show you guys how to describe a Kubernetes Pod object. I will also demonstrate how to get inside the pod, and into the actual Docker container running Jupyter Notebooks within it. I will briefly explore the Jupyter Notebooks web interface too, and explain how to open the firewall in the Security Group section of AWS (other clouds have similar settings). I will run a simple example from GitHub. The Docker image I used in this example is missing the matplotlib Python package, so I will access the container inside the pod again and install the missing package, which makes Jupyter Notebooks work properly. Keep in mind that the missing dependency is a problem of the particular Docker image I used; if you wanted to keep using this image, you would need to adjust its Dockerfile and make sure matplotlib is always part of the actual image build.
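Getting inside the container and fixing the dependency can be sketched as follows; the pod name is a placeholder you first have to look up:

```shell
kubectl get pods                                 # find the actual pod name
kubectl exec -it <jupyter-pod-name> -- /bin/bash # open a shell in the container
pip install matplotlib                           # run inside the container
```

Remember that this fix lives only in the running container; a rescheduled pod starts from the original image again, which is why the Dockerfile is the proper place for the fix.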
In this lecture I will compare Jupyter Notebooks running as a single Docker container with Jupyter Notebooks deployed in Kubernetes. One of the main advantages of Kubernetes is resiliency: if the Docker container goes down (for example because one of the physical nodes fails), Kubernetes will take care of it and try to recreate the pod with the Docker container inside. If Jupyter Notebooks were running as a plain Docker container, I would have to notice the failure myself and restart it manually.
Introduction to Helm Charts
In this lecture I will explain some basics of Helm charts and briefly describe why we are using them. I will show you guys how to install the helm binary, which is essential if we want to use Helm charts and run deployments to our Kubernetes cluster. I will also explain the tiller pod deployment to the kube-system namespace in our Kubernetes cluster (tiller is the server-side component of Helm 2).
I will explore Helm charts on the github.com web page and talk about where to find them. I will also explain how to use the helm binary to run the actual deployment from a Helm chart. I will use a simple comparison between installing apps to Kubernetes and installing packages to an OS; Gogs will be used as the example for this purpose. I will also clone all the charts from github.com.
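Assuming Helm 2 with tiller, and a repository that carries the chart, the basic flow is roughly as follows; the release name is just an example and <repo> stands for wherever you actually find the Gogs chart:

```shell
helm init                             # installs tiller into kube-system (Helm 2)
helm search gogs                      # look the chart up in the configured repos
helm install --name gogs <repo>/gogs  # deploy the chart as release "gogs"
helm ls                               # list the deployed releases
```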
In this lecture I will explain the helm Gogs deployment that was triggered in the previous lecture. Gogs depends on PostgreSQL, so I will try to explain what the deployment looks like when one Helm chart (Gogs) depends on another Helm chart (PostgreSQL). I will go through the Kubernetes objects created by this deployment. I will also set up the Security Group in AWS.
In this lecture I will demonstrate how to deploy from locally downloaded Helm charts. I will create the files and directory structure for such a deployment, slightly customize some values in both the Gogs and PostgreSQL Helm charts, and use the helm binary to run the deployment to our Kubernetes cluster.
This lecture is all about persistentVolumeClaim and persistentVolume and how to make our data persistent when we deploy to Kubernetes. We want to make sure that if a pod gets rescheduled to another physical node (EC2) in the Kubernetes cluster for any reason, it will still have all previously collected/saved data available. Nobody wants to lose data; the persistentVolumeClaim Kubernetes object is used for exactly that purpose, and the idea works at all big cloud providers.
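A persistentVolumeClaim is itself a small YAML object; a minimal illustrative example (the name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gogs-data
spec:
  accessModes:
  - ReadWriteOnce       # mountable read-write by a single node
  resources:
    requests:
      storage: 8Gi      # requested volume size
```

On AWS the claim is typically satisfied by a dynamically provisioned EBS volume, which is what keeps the data alive across pod rescheduling.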
This lecture summarizes what has been achieved within the past few lectures, specifically the deployment of Gogs with PostgreSQL as its requirement. I will explain the idea of the templates folder and the values.yaml file as a standard part of each Helm chart, and how variables from the values.yaml file are substituted into the templates stored in the templates folder.
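As a toy illustration of that substitution (the key names are invented for this example, not taken from the Gogs chart):

```yaml
# Given a values.yaml containing:
#   image:
#     repository: gogs/gogs
#     tag: latest
# a fragment in templates/deployment.yaml can reference those values:
containers:
- name: {{ .Chart.Name }}
  image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

When helm renders the chart, the {{ ... }} expressions are replaced with the concrete values, producing plain Kubernetes YAML.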
Exploring Helmfile deployment in Kubernetes
This lecture is dedicated to an introduction to the helmfile binary. I will explain how helmfile relates to the helm binary we have already used: since helmfile uses helm in the background, helmfile is basically a wrapper over helm. I will explain how to structure a helmfile specification for using Helm charts to deploy to a Kubernetes cluster. Helm, helmfile, and Helm charts are not specific to AWS; they are a Kubernetes standard, so you will be able to use this knowledge independently of the cloud provider. I will also show you guys how to install the helmfile binary in this lecture.
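A helmfile specification is itself just a YAML file listing releases; a minimal illustrative skeleton (names and paths are placeholders):

```yaml
# helmfile.yaml
releases:
- name: gogs                # the release name helm will use
  namespace: default        # target namespace in the cluster
  chart: ./gogs             # path to a local chart, or <repo>/<chart>
  values:
  - ./gogs/values.yaml      # values file(s) applied to this release
```

Running helmfile against this file then lets helm deploy every release in the list.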
This lecture is all about deploying the Jenkins Helm chart from stable to the Kubernetes cluster by using helmfile. I will briefly explain how to customize (change) the values to fit your particular deployment, since these are always different depending on your requirements.
I will continue explaining the helmfile specification for the Jenkins deployment; more specifically, I will point out the options for customizing and supplying variables in helmfile specifications:
1) set all the variables via a values.yaml file
2) set some of the variables directly in the helmfile specification
3) most likely, a combination of the previous two options
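Both options can live side by side in one release entry; the keys below follow the helmfile release schema, while the override value itself is only an example:

```yaml
releases:
- name: jenkins
  namespace: default
  chart: stable/jenkins
  values:
  - ./jenkins/values.yaml    # option 1: a whole values file
  set:                       # option 2: individual inline overrides
  - name: master.serviceType
    value: NodePort
```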
I will execute the Jenkins Helm chart deployment via the helmfile binary in this lecture. I will briefly talk about the Jenkins master pod and Jenkins agent pod setup, as this is how the Helm chart is constructed. I will show you guys that the change in persistentVolumeClaim (which we made previously) has taken effect. This lecture is split into two parts since it was getting pretty long.
I will set up Security Group (firewall) access to the Jenkins instance in the AWS console to be able to access Jenkins via a web browser. A simple Jenkins job will be created to:
1) clone a GitHub project (the helm charts repository)
2) count the number of folders in the stable/ folder within the cloned project
3) echo the number of cloned stable helm charts
I will also point out that this job is actually executed by a Jenkins agent, which is started up on the fly as a new pod in our Kubernetes cluster. Once the job finishes, the Jenkins agent pod is destroyed.
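The shell step of such a job boils down to a directory count; here is a self-contained simulation, where a mock charts/stable tree is created locally instead of cloning the real repository:

```shell
# Simulate the cloned repo with a mock directory tree:
mkdir -p charts/stable/grafana charts/stable/prometheus charts/stable/jenkins

# Count the chart folders directly under stable/ and report the result:
CHART_COUNT=$(find charts/stable -mindepth 1 -maxdepth 1 -type d | wc -l)
echo "Number of stable helm charts: ${CHART_COUNT}"
```

In the actual Jenkins job, the mkdir line is replaced by a git clone of the charts repository.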
Grafana and Prometheus HELMFILE deployment
This is an introductory lecture on deploying Prometheus and Grafana to our Kubernetes cluster. The point of this lecture, and actually of this entire section, is not really these two pieces of software; it is more about how to use multiple Helm charts specified in one helmfile specification. The actual deployment will be executed by the helmfile binary, with the helm binary working in the background.
In this lecture I will describe a graphical representation of the Prometheus software and try to map that graphical interpretation to the particular Kubernetes objects defined as YAML specifications within the Helm chart. I will explain how the Prometheus server takes advantage of Prometheus exporters, which provide the data for Prometheus. I will also give a high-level overview of how Grafana, Prometheus, and the exporters fit together.
This lecture is dedicated to preparing the Helm chart for the Grafana deployment to the Kubernetes cluster using helmfile. I will also bring up the idea of namespaces in Kubernetes. I will slightly modify the values.yaml file for the Grafana Helm chart and start writing the helmfile specification for this deployment.
This lecture is all about preparing the Helm chart for the Prometheus deployment to the Kubernetes cluster using helmfile. I will again touch on the idea of namespaces in Kubernetes. I will slightly modify the values.yaml file for the Prometheus Helm chart and add the Prometheus release to the helmfile specification we created in the previous lecture.
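At this point the helmfile specification carries two releases; a sketch of its shape (the namespace and paths are illustrative):

```yaml
releases:
- name: grafana
  namespace: monitoring     # both releases land in one dedicated namespace
  chart: ./grafana
  values:
  - ./grafana/values.yaml
- name: prometheus
  namespace: monitoring
  chart: ./prometheus
  values:
  - ./prometheus/values.yaml
```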
This lecture is all about preparing the Prometheus Node Exporter deployment (part of the Prometheus Helm chart) for the Kubernetes cluster using helmfile. I will explain the DaemonSet kind of Kubernetes object and how it differs from the Deployment kind. I will also explain the meaning of the Prometheus annotations in the Service section for the Prometheus Node Exporter within the Prometheus Helm chart.
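The scrape annotations mentioned above follow a common convention used by the Prometheus chart's service discovery; an illustrative Service fragment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus-node-exporter
  annotations:
    prometheus.io/scrape: "true"   # tell Prometheus to scrape this Service
    prometheus.io/port: "9100"     # the port the node exporter listens on
spec:
  ports:
  - port: 9100
    targetPort: 9100
```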
This lecture recapitulates what I have done in the previous lectures and what customization was done in the Grafana and Prometheus Helm charts. I will transfer the Helm charts from my local machine to the server from which I am going to execute the actual deployment.
In this lecture I will execute the actual deployment to the Kubernetes cluster. This will be achieved by running:
helmfile -f helmfile_specification.yaml sync. This command will be used very often; in fact, every time you adjust the values.yaml files for one or more of the releases specified in helmfile_specification.yaml.
In this lecture I will explore the Prometheus Node Exporter Kubernetes objects created as a result of our deployment with the helmfile binary, the helm binary, and the Helm charts. I will show you guys how to SSH to the physical nodes (EC2 instances) in AWS and run:
netstat -tunlp | grep 9100. We will see that the Prometheus Node Exporter exposes this port on the physical machine because it has hostPort defined within the Prometheus Helm chart.
In this lecture I will explore the Prometheus Kubernetes objects created as a result of our deployment with the helmfile binary, the helm binary, and the Helm charts. I will show you guys how to make the Prometheus web user interface accessible by setting up a NodePort type of Service for Prometheus and configuring the firewall (Security Group) section in AWS. As a result, I will be able to access the Prometheus web interface via a web browser.
In this lecture I will explore the Grafana Kubernetes objects created as a result of our deployment with the helmfile binary, the helm binary, and the Helm charts. I will show you guys how to access the Grafana web user interface. Furthermore, I will explain how to load a Grafana dashboard for the Prometheus Node Exporter to graphically represent the data collected by the Node Exporter, scraped by the Prometheus server, and nicely visualized by Grafana.
Ingress and LoadBalancer type of service for your Kubernetes cluster
In this lecture I will be talking about the Grafana Service Kubernetes object. I will demonstrate how to change the values.yaml file to make the Grafana Service of type LoadBalancer. This change makes Kubernetes ask AWS for an Elastic Load Balancer, and I will receive a DNS name for the Grafana Service. When that DNS name is hit from a browser, the LoadBalancer routes the traffic to the two physical nodes in the Kubernetes cluster.
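The change itself is typically a one-liner in values.yaml; the key path below follows the common chart convention, so verify it against your chart's own values file:

```yaml
service:
  type: LoadBalancer   # was NodePort / ClusterIP before the change
```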
In order to explain the usage of Ingress Kubernetes objects in the Kubernetes cluster, I will deploy one more Helm chart (DokuWiki) into this setup. Making use of the Ingress object in Kubernetes allows us two important things:
1) using one Service of type LoadBalancer instead of many (cost savings)
2) accessing services in the Kubernetes cluster via one common URL with different path prefixes.
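An Ingress object is again plain YAML; an illustrative example with path prefixes (the host, paths, and backend names are invented for this sketch, and the API version matches the older clusters of this era):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # picked up by the nginx controller
spec:
  rules:
  - host: apps.example.com
    http:
      paths:
      - path: /grafana
        backend:
          serviceName: grafana
          servicePort: 80
      - path: /prometheus
        backend:
          serviceName: prometheus-server
          servicePort: 80
```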
In this lecture I will deploy the nginx-ingress Helm chart to our cluster and set up/expose its Service as type LoadBalancer. This will be the only Service object of type LoadBalancer in our solution. This Helm chart runs an Nginx web server inside and is designed to pick up all the Ingress Kubernetes objects available in the current cluster and configure itself to route the traffic to those services. I will also demonstrate the 404 returned by the default backend, which is a part of the nginx-ingress Helm chart.
In this lecture I will configure the Ingress Kubernetes objects for Grafana, Prometheus, and DokuWiki in the Kubernetes cluster (served by the Nginx Ingress Controller pod). I will edit the values.yaml files for Grafana, Prometheus, and DokuWiki.
I will physically access the nginx controller pod and demonstrate the /etc/nginx/nginx.conf file, where we will find the proxy_pass directives that route the traffic to the particular software services (Prometheus, Grafana, and DokuWiki). I will then create three RecordSets in the Route53 section of AWS, which will allow me to log in to the applications (Prometheus, Grafana, and DokuWiki) via their own DNS names.