3.7 out of 5 (105 reviews on Udemy)

Learn DevOps Helm/Helmfile Kubernetes deployment

Learn DevOps Helm/Helmfile Kubernetes deployment with practical HELM CHART examples
Instructor:
Jan Toth
1,043 students enrolled
Learn deployment concepts in Kubernetes by using HELM and HELMFILE.
Learn how to work and interact with the Kubernetes orchestration platform.
Deploy a Kubernetes cluster in AWS by using kops and terraform.
Learn how to use and adjust Helm charts (the standard deployment methodology).

The main motivation for this course was to provide students with a comprehensive explanation of application deployment to a Kubernetes cluster using the HELM CHART standard via the helm and helmfile binaries. To achieve this goal, the course uses particular HELM CHARTS such as:

  • Jupyter Notebooks,

  • Gogs,

  • Jenkins,

  • Grafana and Prometheus,

  • Nginx-Ingress,

    This course does not go into deep detail about every single value specified in the HELM CHARTS used throughout the examples; rather, I will explain everything in a practical context so that everybody can fit the ideas into her/his own way of thinking.

Introduction

1
Welcome to course

This is an introduction to the course in which I will briefly go through the topics that will be covered.

2
Materials: Delete/destroy all the AWS resources every time you do not use them
3
How to start a Kubernetes cluster on AWS

I will briefly explain how to start a Kubernetes cluster in AWS and provide an example of starting a cluster with the kops binary. There are other ways to start your K8s cluster, for example by using kubeadm, but from a simplicity perspective I believe the easiest approach is a combination of:
1) kops, which generates terraform files into a folder,

2) and then starting up your K8s cluster by running terraform init && terraform apply.

In case you want to stop your cluster overnight (or whenever you do not need it), simply run terraform destroy.

If you need your cluster again, simply run terraform apply and it is back up.
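
As a rough sketch of that workflow (the cluster name, state bucket and zone below are hypothetical placeholders, not values from the course):

    # Generate terraform code for the cluster instead of creating it directly
    kops create cluster \
      --name=k8s.course.example.com \
      --state=s3://my-kops-state-bucket \
      --zones=eu-central-1a \
      --target=terraform \
      --out=./terraform-cluster

    # Start the cluster
    cd terraform-cluster
    terraform init && terraform apply

    # Stop the cluster (for example overnight) and bring it back later
    terraform destroy
    terraform apply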

4
How to create Hosted Zone on AWS

A Hosted Zone is a prerequisite for starting up a Kubernetes cluster in AWS, and naturally it is also one of the flags you need to specify when using the kops command to generate terraform code. A new Hosted Zone can be easily created in the Route 53 section of AWS.
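
If you prefer the CLI over the Route 53 console, a hosted zone can also be created roughly like this (the domain name is a placeholder for a domain you own):

    # Create a new public Hosted Zone in Route 53
    aws route53 create-hosted-zone \
      --name course.example.com \
      --caller-reference "$(date +%s)"

    # List hosted zones to confirm it exists
    aws route53 list-hosted-zones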

5
How to set up communication between kops and AWS via the aws CLI

In this lecture I will explain the importance of the aws binary when we are setting up a Kubernetes cluster via kops and terraform, and why it is actually needed in the initial phase.
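
A minimal sketch of that initial setup, assuming you have already created an IAM user with the required permissions (the pip install is just one of several ways to get the CLI):

    # Install the AWS CLI (Linux, via pip)
    pip install awscli

    # Store the access key, secret key and default region locally;
    # kops and terraform will use these credentials to talk to AWS
    aws configure

    # Quick check that the credentials work
    aws s3 ls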

6
Materials: How to install KOPS binary
7
How to install kops

In this lecture I will guide you through the process of installing the kops binary. You can go to the kops web page and follow the instructions on how to install kops, or you can use a simple shell function I have prepared in the form of materials for this lecture.
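
A minimal install sketch for Linux (the version number is a placeholder; the shell function in the materials does essentially the same thing):

    # Download a kops release binary for Linux and put it on the PATH
    curl -Lo kops \
      https://github.com/kubernetes/kops/releases/download/v1.18.0/kops-linux-amd64
    chmod +x kops
    sudo mv kops /usr/local/bin/kops
    kops version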

8
How to create S3 bucket in AWS

I will explain why we need to create an S3 bucket in AWS when we want to use kops and terraform to start/stop and take care of our Kubernetes cluster.
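
A sketch of creating the state bucket and pointing kops at it (the bucket name and region are placeholders):

    # Create an S3 bucket to hold the kops cluster state
    aws s3 mb s3://my-kops-state-bucket --region eu-central-1

    # Optional but useful: keep a history of state changes
    aws s3api put-bucket-versioning \
      --bucket my-kops-state-bucket \
      --versioning-configuration Status=Enabled

    # Tell kops where its state lives
    export KOPS_STATE_STORE=s3://my-kops-state-bucket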

9
Materials: How to install TERRAFORM binary
10
How to install Terraform binary

The main goal of this lecture is to show you how to install the terraform binary, which is in fact very easy. I have also prepared a trivial bash function that installs terraform if you are using a Linux OS.
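
On Linux it boils down to downloading a zip from releases.hashicorp.com and unpacking it (the version below is a placeholder; the bash function in the materials follows the same pattern):

    # Download and unpack a terraform release
    curl -LO https://releases.hashicorp.com/terraform/0.12.31/terraform_0.12.31_linux_amd64.zip
    unzip terraform_0.12.31_linux_amd64.zip
    sudo mv terraform /usr/local/bin/terraform
    terraform version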

11
Materials: How to install KUBECTL binary
12
How to install Kubectl binary

I will explain how to install the kubectl binary on your PC and why it is important when you want to orchestrate/control your Kubernetes cluster. This command is used for orchestrating Kubernetes clusters in general and is not something dedicated to AWS: you need kubectl every time you want to interact with your Kubernetes cluster, no matter which cloud or on-premise solution you are using.
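
A minimal Linux install sketch using the standard Kubernetes release bucket (pick whichever version matches your cluster):

    # Download the latest stable kubectl binary for Linux
    curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
    chmod +x kubectl
    sudo mv kubectl /usr/local/bin/kubectl

    # Verify the client and, once a cluster exists, talk to it
    kubectl version --client
    kubectl get nodes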

13
Materials: How to start Kubernetes cluster
14
How to launch a Kubernetes cluster on AWS by using kops and terraform

This lecture explains how to start your Kubernetes cluster in AWS for the first time by running the kops and terraform binaries. We also need to generate SSH keys as the last step before starting up our Kubernetes cluster.
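
Putting the previous lectures together, a hedged sketch of the first start (names, paths and the state bucket are placeholders):

    # Generate an SSH key pair that kops will install on the nodes
    ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""

    # Pass the public key to kops when generating the terraform code
    # (add this flag to the kops create cluster command shown earlier):
    #   --ssh-public-key=~/.ssh/id_rsa.pub

    # Bring the cluster up and verify it
    cd terraform-cluster
    terraform init && terraform apply
    kops validate cluster --name=k8s.course.example.com \
      --state=s3://my-kops-state-bucket
    kubectl get nodes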

Jupyter Notebooks

1
Materials: How to run Jupyter Notebooks locally as Docker image
2
How to run Jupyter Notebooks in Docker locally

The purpose of this lecture is to start Jupyter Notebooks as a single Docker image. This will help you later on to understand the difference between running Docker containers "manually" and letting the Kubernetes orchestration platform run the same Docker container.

If your use case is simply running one Docker image in your infrastructure, then you can write some custom scripts to take care of the critical scenarios. However, most of us have a much more complex infrastructure and most likely need to run tens of containers. In such a case Kubernetes will take care of your containers much better than we can take care of them ourselves.
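
As a reference point for the later Kubernetes deployment, a sketch of the "manual" way (jupyter/minimal-notebook is a public image used here for illustration; the exact image in the lecture may differ):

    # Run Jupyter Notebooks as a single Docker container on your machine
    docker run -d --name jupyter -p 8888:8888 jupyter/minimal-notebook

    # Grab the login token from the container logs and open
    # http://localhost:8888 in a browser
    docker logs jupyter

    # Cleaning up (and restarting after a crash) is entirely your job
    docker rm -f jupyter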

3
How to deploy Jupyter Notebooks to Kubernetes AWS (Part 1)

This lecture provides an overview of deploying Jupyter Notebooks to Kubernetes. This will be achieved with a simple YAML file that defines two Kubernetes objects: a Deployment and a Service. From this point on we will always be using YAML specifications for deploying things to Kubernetes.

4
Materials: How to deploy Jupyter Notebooks to Kubernetes via YAML file
5
How to deploy Jupyter Notebooks to Kubernetes AWS (Part 2)

In this lecture I will describe the Jupyter Notebooks deployment in a simple YAML file. More specifically, I will talk about the Deployment and Service Kubernetes objects. These will be created in the K8s cluster, and as a result we will be able to access and use Jupyter Notebooks in AWS.
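
A minimal sketch of such a specification, assuming the same public jupyter/minimal-notebook image and a NodePort Service (names, labels and the NodePort number are illustrative, not the exact values from the lecture materials):

    # jupyter.yaml - apply with: kubectl apply -f jupyter.yaml
    cat > jupyter.yaml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jupyter
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: jupyter
      template:
        metadata:
          labels:
            app: jupyter
        spec:
          containers:
            - name: jupyter
              image: jupyter/minimal-notebook
              ports:
                - containerPort: 8888
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: jupyter
    spec:
      type: NodePort
      selector:
        app: jupyter
      ports:
        - port: 8888
          targetPort: 8888
          nodePort: 30888
    EOF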

6
How to deploy Jupyter Notebooks to Kubernetes AWS (Part 3)

In this lecture I will trigger the actual deployment of Jupyter Notebooks by using kubectl apply -f <file_name>.yaml. We will examine the created Kubernetes objects, and I will point out the difference between the kubectl and docker commands: kubectl is used whenever we want to control the Kubernetes cluster, while docker is used to control Docker containers started on the machine we are currently logged in to.

7
Materials: How to SSH to the physical servers in AWS
8
How to deploy Jupyter Notebooks to Kubernetes AWS (Part 4)

I will explain how it is possible that we are able to access our Jupyter Notebooks instance via a Service of type NodePort. I will SSH to both of the nodes (even to the master server) in the Kubernetes cluster and run netstat -tunlp | grep <NodePort_Number>. I will also emphasize the importance of the firewall (Security Group) settings that allow the NodePort to be reachable from a browser.
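
A hedged sketch of those checks and of the firewall change (the SSH user, IPs, security group ID and NodePort number are placeholders):

    # On each node (and the master), check whether the NodePort assigned
    # to the Jupyter Service is listening
    ssh -i ~/.ssh/id_rsa admin@<node_public_ip>
    sudo netstat -tunlp | grep 30888

    # Allow the NodePort through the nodes' Security Group so a browser
    # can reach it
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 30888 \
      --cidr <your_public_ip>/32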

9
How to deploy Jupyter Notebooks to Kubernetes AWS (Part 5)

I will show you how to describe a Kubernetes pod object. I will also demonstrate how to get inside the pod and the actual Docker container running Jupyter Notebooks within it. I will briefly explore the Jupyter Notebooks web interface too, and explain how to open the firewall in the Security Group section of AWS; other clouds have similar settings. I will run a simple example from GitHub. The Docker image I used in this example is missing the matplotlib Python package, so I will access the Docker container within the pod again and install the missing matplotlib package, which makes Jupyter Notebooks work properly. Keep in mind that the missing dependency is a problem of the particular Docker image I used; if you wanted to use this image, you would need to adjust its Dockerfile and make sure matplotlib is always part of the actual Docker image build.
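
A sketch of getting inside the pod and patching the missing dependency by hand (the pod name is whatever kubectl get pods shows for your deployment):

    # Inspect the pod and then open a shell inside its container
    kubectl get pods
    kubectl describe pod <jupyter_pod_name>
    kubectl exec -it <jupyter_pod_name> -- bash

    # Inside the container: install the missing package
    pip install matplotlib

    # Remember: this fix disappears when the pod is recreated -
    # the proper fix is to bake matplotlib into the Docker image itself.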

10
Comparison between Jupyter Notebooks running as a Docker container and in Kubernetes

In this lecture I will compare Jupyter Notebooks running as a plain Docker container with Jupyter Notebooks deployed in Kubernetes. One of the main advantages of Kubernetes over running Jupyter Notebooks as a single Docker container is resiliency: if the Docker container goes down (for example, one of the physical nodes fails), Kubernetes will take care of it and try to recreate the pod with the Docker container inside. If Jupyter Notebooks were running as a plain Docker container, I would have to notice the failure and start it again manually.

Introduction to Helm Charts

1
Materials: Install HELM binary and activate HELM user account in your cluster
2
Introduction to Helm charts

In this lecture I will explain some basics of HELM CHARTS and briefly describe why we use them. I will show you how to install the helm binary, which is essential if we want to use HELM CHARTS to process deployments to our Kubernetes cluster. I will also explain the tiller pod deployment to the kube-system namespace in our Kubernetes cluster.
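
The course uses Helm 2, so Tiller has to run in the cluster; a minimal sketch of that setup (binding Tiller to cluster-admin is a simplification suitable for a throwaway course cluster, not for production):

    # Create a service account for Tiller and give it permissions
    kubectl create serviceaccount tiller -n kube-system
    kubectl create clusterrolebinding tiller \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:tiller

    # Install Tiller into the kube-system namespace (Helm 2)
    helm init --service-account tiller

    # Verify the tiller pod and the helm/tiller versions
    kubectl get pods -n kube-system | grep tiller
    helm version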

3
Materials: Run GOGS helm deployment for the first time
4
How to use Helm for the first time

I will explore HELM CHARTS on the github.com web page and talk about where to find them. I will also explain how to use the helm binary together with HELM CHARTS to process the actual deployment, using a simple comparison between installing apps to Kubernetes and installing packages on an OS. Gogs will be used as the example for this purpose. I will also clone all the charts from github.com.
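
A hedged sketch of that first deployment; at the time of recording the Gogs chart lived in the incubator repository of the (now archived) helm/charts project, so the chart location below is an assumption:

    # Make sure the chart repositories are known and up to date
    helm repo list
    helm repo update

    # Find the chart and install it - similar to "apt-get install" on an OS
    helm search gogs
    helm install --name gogs incubator/gogs

    # See what got deployed
    helm list
    kubectl get pods,svc

    # Clone all the charts locally for the next lectures
    git clone https://github.com/helm/charts.git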

5
How to understand helm Gogs deployment

In this lecture I will explain how to understand the helm Gogs deployment triggered in the previous lecture. Gogs has a dependency on PostgreSQL, so I will explain what the deployment looks like when one HELM CHART (Gogs) depends on another HELM CHART (PostgreSQL). I will go through the Kubernetes objects created within this deployment. I will also set up the Security Group in AWS.

6
Materials: How to use HELM to deploy GOGS from locally downloaded HELM CHARTS
7
How to deploy Gogs from local repository

In this lecture I will demonstrate how to deploy from locally downloaded HELM CHARTS. I will create the files and directory structure for such a deployment, slightly customize some values in both the Gogs and PostgreSQL HELM CHARTS, and use the helm binary to process the deployment to our Kubernetes cluster.
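
A sketch of the local-chart variant, assuming the charts repository was cloned to ./charts and a customized values file sits next to it (the paths, chart location and release name are illustrative):

    # Pull in the PostgreSQL dependency declared by the Gogs chart
    helm dependency update ./charts/incubator/gogs

    # Install from the local directory with our customized values
    helm install --name gogs \
      -f my-gogs-values.yaml \
      ./charts/incubator/gogs

    # Apply further value changes later with an upgrade
    helm upgrade gogs -f my-gogs-values.yaml ./charts/incubator/gogs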

8
Materials: How to understand persistentVolumeClaim and persistentVolumes
9
How to make your data persistent

This lecture is all about persistentVolumeClaim and persistentVolume objects and how to make our data persistent when we deploy to Kubernetes. We want to make sure that if a pod gets rescheduled to some other physical node (EC2 instance) in the Kubernetes cluster for any reason, it will still have all previously collected/saved data available. Nobody wants to lose data. The persistentVolumeClaim Kubernetes object is used for exactly that purpose, and this idea works at all the big cloud providers.
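
For reference, a minimal persistentVolumeClaim sketch of the kind a chart template renders for us on AWS (gp2 is the usual kops default storage class, but treat the names and size as placeholders):

    cat > gogs-data-pvc.yaml <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gogs-data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: gp2
      resources:
        requests:
          storage: 8Gi
    EOF
    kubectl apply -f gogs-data-pvc.yaml

    # The claim gets bound to a dynamically provisioned EBS-backed
    # persistentVolume, which survives pod rescheduling
    kubectl get pvc,pv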

10
Let's summarize the Gogs helm chart deployment

This lecture summarizes what has been achieved within the past few lectures, more specifically the deployment of Gogs and of PostgreSQL as the Gogs requirement. I will explain the idea of the templates folder and the values.yaml file as a standard part of each HELM CHART, and how variables from values.yaml are substituted into the templates stored in the templates folder.
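
A tiny illustration of that substitution mechanism (the chart, files and value names below are made up for this example, not taken from the Gogs chart):

    # values.yaml of a hypothetical chart:
    #   service:
    #     type: NodePort
    #     port: 3000
    #
    # templates/service.yaml of the same chart:
    #   kind: Service
    #   spec:
    #     type: {{ .Values.service.type }}
    #     ports:
    #       - port: {{ .Values.service.port }}

    # Render the templates locally to see the substituted result
    helm template ./my-chart

    # Override a value at install time without editing values.yaml
    helm install --name demo ./my-chart --set service.type=ClusterIP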

Exploring Helmfile deployment in Kubernetes

1
Materials: How to install HELMFILE binary to your machine
2
Introduction to Helmfile

This lecture is dedicated to an introduction to the HELMFILE binary. I will explain how the helmfile binary relates to the helm binary we have already used: since helmfile uses helm in the background, helmfile is basically a wrapper over helm. I will explain how to structure a helmfile specification that uses HELM CHARTS to process a deployment to the Kubernetes cluster. The helm and helmfile binaries and helm charts are not specific to AWS; they are a Kubernetes standard, so you can use this knowledge regardless of the cloud provider. I will also show you how to install the helmfile binary in this lecture.
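
A minimal helmfile specification sketch, assuming the Helm 2 era stable repository (the repository URL is the historical one, now deprecated, and the release details are illustrative):

    cat > helmfile.yaml <<'EOF'
    repositories:
      - name: stable
        url: https://kubernetes-charts.storage.googleapis.com

    releases:
      - name: jenkins
        namespace: jenkins
        chart: stable/jenkins
        values:
          - jenkins-values.yaml
    EOF

    # helmfile calls helm in the background for every release listed above
    helmfile -f helmfile.yaml sync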

3
How to deploy Jenkins by using Helmfile (Part 1)

This lecture is all about deploying the Jenkins HELM CHART from the stable repository to the Kubernetes cluster by using Helmfile. I will briefly explain how to customize (change) values to fit your particular deployment, which will always differ depending on your requirements.

4
How to deploy Jenkins by using Helmfile (Part 2)

I will continue explaining the helmfile specification for the Jenkins deployment, and more specifically I will point out the options for customizing and supplying values in helmfile specifications (a sketch follows the list):
1) set all the values via a values.yaml file
2) set some of the values directly in the helmfile specification
3) most likely you will combine the previous two options
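
A sketch combining both approaches (master.serviceType comes from the stable/jenkins chart of that era and may differ in newer chart versions):

    # helmfile.yaml for the Jenkins release - values file plus inline set
    cat > helmfile.yaml <<'EOF'
    releases:
      - name: jenkins
        namespace: jenkins
        chart: stable/jenkins
        values:
          - jenkins-values.yaml       # bulk of the customization lives here
        set:
          - name: master.serviceType  # individual values set inline
            value: NodePort
    EOF
    helmfile -f helmfile.yaml sync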

5
Materials: Create HELMFILE specification for Jenkins deployment
6
How to use helmfile to deploy Jenkins helm chart for the first time (Part 1)

In this lecture I will execute the Jenkins HELM CHART deployment via the helmfile binary. I will briefly talk about the Jenkins master pod and Jenkins agent pod setup as constructed by this HELM CHART. I will show you that the change in the persistentVolumeClaim (which we previously made) has taken effect. This lecture is split into two parts since it was getting pretty long.

7
Materials: Useful commands for the Jenkins deployment
8
How to use helmfile to deploy Jenkins helm chart for the first time (Part 2)

I will set up Security Group (firewall) access to the Jenkins instance in the AWS console so that Jenkins can be reached via a web browser. A simple Jenkins job will be created that will:
1) clone a GitHub project (the helm charts repository)
2) count the number of folders in the stable/ folder within the cloned project
3) run a simple echo command that returns the number of cloned stable helm charts

I will also point out that this job is actually executed by a Jenkins agent, which is started up on the fly as a new pod in our Kubernetes cluster. Once the job is finished, the Jenkins agent pod gets destroyed.

Grafana and Prometheus HELMFILE deployment

1
Introduction to Prometheus and Grafana deployment by using helmfile (Grafana)

This is an introductory lecture on deploying Prometheus and Grafana to our Kubernetes cluster. The point of this lecture, and actually of this entire section, is not really these two pieces of software; it is more about how to use multiple HELM CHARTS specified in one helmfile specification. The actual deployment will be executed by the helmfile binary with the helm binary in the background.

2
Prometheus and Grafana deployment by using helmfile (Prometheus part)

In this lecture I will describe a graphical representation of the Prometheus piece of software and map that picture onto the particular Kubernetes objects defined in the YAML specifications that are part of the helm chart. I will explain how the Prometheus server takes advantage of Prometheus exporters, which provide the data for Prometheus. I will also try to give a high-level overview of how Grafana, Prometheus and the exporters fit together.

3
Prepare Helm charts for Grafana deployment by using helmfile

This lecture is dedicated to the preparation of the HELM CHART for the Grafana deployment to the Kubernetes cluster by using helmfile. I will also bring up the idea of namespaces in Kubernetes. I will slightly modify the values.yaml file for the Grafana HELM CHART and start writing the helmfile specification for this deployment.

4
Prepare Helm charts for Prometheus deployment by using helmfile

This lecture is all about the preparation of the HELM CHART for the Prometheus deployment to the Kubernetes cluster by using helmfile. I will again touch on the idea of namespaces in Kubernetes. I will slightly modify the values.yaml file for the Prometheus HELM CHART and add the Prometheus release to the helmfile specification we created in the previous lecture.

5
Prepare Helm charts for Prometheus Node Exporter deployment by using helmfile

This lecture is all about the preparation of the HELM CHART for the Prometheus Node Exporter deployment to the Kubernetes cluster by using helmfile. I will explain the DaemonSet kind of Kubernetes object and how it differs from the Deployment kind. I will also explain the meaning of the Prometheus annotations in the Service section for the Prometheus Node Exporter within the Prometheus HELM CHART.

6
Copy Prometheus and Grafana Helm Charts specifications to server

This lecture recapitulates what was done in the previous lectures and which customizations were made to the Grafana and Prometheus HELM CHARTS. I will transfer the HELM CHARTS from my local machine to the server from which I am going to execute the actual deployment.

7
Materials: Helmfile specification for Grafana and Prometheus deployment
8
Process Grafana and Prometheus helmfile deployment

In this lecture I will execute the actual deployment to the Kubernetes cluster. This will be achieved by running:
helmfile -f helmfile_specification.yaml sync. This command will be used very often, actually every time you adjust the values.yaml files for one or more of the releases specified within helmfile_specification.yaml.

9
Exploring Prometheus Node Exporter

In this lecture I will explore the Prometheus Node Exporter Kubernetes objects created as a result of our deployment by using the helmfile binary, the helm binary and HELM CHARTS. I will show you how to SSH to the physical nodes (EC2 instances) in AWS and run:
netstat -tunlp | grep 9100. We will see that the Prometheus Node Exporter exposes this port on the physical machine because it has a hostPort defined within the Prometheus HELM CHART.
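
A quick sketch of that check (the node IP and SSH user are placeholders):

    # SSH to any node (or the master) in the cluster
    ssh -i ~/.ssh/id_rsa admin@<node_public_ip>

    # Port 9100 is bound on the host itself because the node exporter
    # DaemonSet requests a hostPort in the Prometheus chart
    sudo netstat -tunlp | grep 9100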

10
Explore Prometheus Web User Interface

In this lecture I will explore the Prometheus Kubernetes objects created as a result of our deployment by using the helmfile binary, the helm binary and HELM CHARTS. I will show you how to make the Prometheus web user interface accessible by setting up a NodePort type of Service for Prometheus and adjusting the firewall (Security Group) section in AWS. As a result I will be able to access the Prometheus web interface via a web browser.

11
Explore Grafana Web User Interface

In this lecture I will explore the Grafana Kubernetes objects created as a result of our deployment by using the helmfile binary, the helm binary and HELM CHARTS. I will show you how to access the Grafana web user interface. Furthermore, I will explain how to load a Grafana dashboard for the Prometheus Node Exporter to graphically represent data collected by the Prometheus Node Exporter, scraped by the Prometheus server and nicely visualized by Grafana.

Ingress and LoadBalancer type of service for your Kubernetes cluster

1
LoadBalancer Grafana Service

In this lecture I will be talking about the Grafana Service Kubernetes object. I will demonstrate how to change the values.yaml file to make the Grafana Service of type LoadBalancer. This change makes Kubernetes ask AWS for an Elastic Load Balancer, and I will receive a DNS name for the Grafana Service. When that DNS name is hit from a browser, for example, the LoadBalancer routes the traffic to the two physical nodes in the Kubernetes cluster.
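
A hedged sketch of the values change (service.type is the usual key in the stable/grafana chart of that era; verify it against the chart version you use):

    # grafana-values.yaml (fragment)
    cat > grafana-values.yaml <<'EOF'
    service:
      type: LoadBalancer
    EOF

    # Re-sync so helm upgrades the Grafana release; Kubernetes then asks
    # AWS for an ELB and the Service gets an external DNS name
    helmfile -f helmfile_specification.yaml sync
    kubectl get svc --all-namespaces | grep grafana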

2
Materials: Helmfile specification to add DokuWiki deployment
3
Single LoadBalancer service type for all instances in your K8s (DokuWiki)

In order to explain the usage of Ingress Kubernetes objects in a Kubernetes cluster, I will deploy one more HELM CHART (DokuWiki) into this setup. Making use of the Ingress object in Kubernetes gives us two important things:
1) one Service of type LoadBalancer instead of many (cost savings)
2) access to services in the Kubernetes cluster via one common URL with different prefixes:

    - http://grafana.course.<domain_name>.com
    - http://prometheus.course.<domain_name>.com
    - http://dokuwiki.course.<domain_name>.com

4
Materials: Helmfile specification to add nginx-ingress Helm Chart deployment
5
Nginx Ingress Controller Pod

In this lecture I will deploy the nginx-ingress HELM CHART to our cluster and expose its Service as type LoadBalancer. This will be the only Service object of type LoadBalancer in our solution. This HELM CHART runs an Nginx web server inside and is designed so that it picks up all the Ingress Kubernetes objects available in the current cluster and configures itself to route traffic to the corresponding services. I will also demonstrate the 404 returned by the default backend, which is part of the nginx-ingress HELM CHART.
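
A sketch of adding nginx-ingress to the helmfile specification (controller.service.type is the key used by the stable/nginx-ingress chart of that era and the namespace is a placeholder; treat both as assumptions):

    # Fragment to add under the releases: section of helmfile_specification.yaml
    #   - name: nginx-ingress
    #     namespace: ingress
    #     chart: stable/nginx-ingress
    #     set:
    #       - name: controller.service.type
    #         value: LoadBalancer

    # Deploy it and check that its Service got an AWS ELB hostname;
    # requests for unknown hosts hit the default backend and return 404
    helmfile -f helmfile_specification.yaml sync
    kubectl get svc --all-namespaces | grep nginx-ingress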

6
Configure Ingress Kubernetes Objects for Grafana, Prometheus and DokuWiki

In this lecture I will configure Ingress Kubernetes objects for Grafana, Prometheus and DokuWiki in the Kubernetes cluster (behind the Nginx Ingress Controller pod). I will edit the values.yaml files for:

1) grafana
2) prometheus
3) dokuwiki

I will then access the nginx controller pod and show the /etc/nginx/nginx.conf file, where we will find the proxy_pass directives that route the traffic to the particular services (Prometheus, Grafana and DokuWiki). Finally, I will create three record sets in the Route 53 section of AWS, which will allow me to log in to the applications (Prometheus, Grafana and DokuWiki) via the URLs below (a sketch of the values change follows the list):
1) http://prometheus.course.<domain>.com
2) http://grafana.course.<domain>.com
3) http://dokuwiki.course.<domain>.com
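
A hedged sketch of the Grafana part of that change (ingress.enabled and ingress.hosts are the usual keys in the stable/grafana chart of that era; the hostname mirrors the placeholders above):

    # grafana-values.yaml (fragment) - let the Nginx Ingress Controller
    # route grafana.course.<domain>.com to the Grafana Service
    #   ingress:
    #     enabled: true
    #     hosts:
    #       - grafana.course.<domain>.com

    # Apply the change and inspect the generated Ingress objects
    helmfile -f helmfile_specification.yaml sync
    kubectl get ingress --all-namespaces

    # In Route 53, point the three course.<domain>.com record sets
    # at the DNS name of the nginx-ingress LoadBalancer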


7
Important: Clean up Kubernetes cluster and all the AWS resources
8
Congratulations
3.7 out of 5 (105 ratings)

Detailed Rating

5 stars: 34
4 stars: 29
3 stars: 25
2 stars: 8
1 star: 9
30-Day Money-Back Guarantee

Includes

4 hours on-demand video
18 articles
Full lifetime access
Access on mobile and TV
Certificate of Completion