Kubernetes is a market-leading container orchestration technology. Almost all of the major cloud infrastructure providers, such as AWS, Azure, and Google, offer hosted versions of Kubernetes.
Moving to microservices is not an easy transition for developers who have been building applications using more traditional methods. There are a ton of new concepts and details developers need to become familiar with when they design a distributed application. Throw Docker and Kubernetes into the mix and it becomes clear why many developers struggle to adapt to this new world.
Kubernetes is the new infrastructure. If you understand how to use it, it unlocks the benefits of standard continuous delivery for your apps. We are going to set up a Kubernetes cluster on Google Cloud Platform and deploy software in a CI/CD manner to speed up the release cycle like never before.
This comprehensive 4-in-1 course is a step-by-step tutorial which provides in-depth learning of core components and concepts, followed by hands-on experience installing and managing Kubernetes. Furthermore, the course walks you through deploying an app to a local Kubernetes installation, as well as an overview of best practices for deploying app models to Kubernetes. Build modern, cloud-native services and applications using the best of Cloud Native Computing Foundation technologies. Manage and orchestrate a Kubernetes cluster on the Amazon EC2 environment.
Contents and Overview
This training program includes 4 complete courses, carefully chosen to give you the most comprehensive training possible.
The first course, Learning Kubernetes, covers enhancing the operability of your modern software systems with Kubernetes. Extend the opportunities that containerization innovations have brought about in a new and even more effective way. Get started with the basics, explore the fundamental elements of Kubernetes, and find out how to install it on your system, before digging a little deeper into Kubernetes core constructs. Finally, you will learn how to use Kubernetes pods, services, replication controllers, and labels to manage your clusters effectively and also get a feel for how to handle networking with Kubernetes.
The second course, Deploying Software to Kubernetes, covers deploying, managing, and monitoring applications on Kubernetes. This video course starts by explaining the organizational alignment that has to happen in every company that wants to implement DevOps in order to be effective. You will delve into deploying and managing applications on Kubernetes, and also take a look at how Docker Swarm works. Explore how to create a continuous delivery pipeline, and deploy and update a microservice-based system while keeping all the lights on.
The third course, Develop and Operate Microservices on Kubernetes, covers deploying, scaling, and maintaining your distributed applications with Kubernetes. The goal of this course is to walk you through the process of getting familiar with Kubernetes and its way of doing things. By the end of the course, you will have mastered best practices and leveraged some of the latest DevOps technologies to increase agility and decrease time-to-market for the services you have built.
The fourth course, Hands-on Kubernetes on AWS, covers running, deploying, and managing a Kubernetes cluster on AWS. In this course, you’ll jump into Kubernetes architecture, and grasp what the main components and services are, and how they come together to build a production-class container infrastructure. Learn how to install and deploy Kubernetes on several cloud platforms. Explore more advanced topics on Kubernetes, including Continuous Integration, High Availability, and Disaster recovery using Kubernetes.
By the end of the course, you’ll have gained plenty of hands-on experience with Kubernetes on Amazon Web Services. You’ll also have picked up some tips on deploying and managing applications, keeping your cluster and applications secure, and ensuring that your whole system is reliable and resilient to failure.
By the end of the course, you’ll enhance the operability of your modern software systems to deploy, scale, and maintain your distributed applications with Kubernetes.
About the Authors
Braithe E.S. Warnock is currently a managing cloud architect for the financial services division of Ernst & Young. He has had the opportunity to work with several of the largest PCF installations on an international scale. He helped build the framework for the adoption of PCF at top companies such as Ford, Comcast, DISH, HSBC, and Charles Schwab. As a vendor-neutral consultant, Braithe enjoys helping people understand the rapidly-evolving world of cloud and application architecture. Braithe has more than six years’ experience and specialization in global digital transformations. He has expertise in various cloud and cloud platform technologies (PCF, AWS, Azure, VMware, Netflix OSS, Kubernetes, and OpenShift) and also the Java and Spring Boot frameworks. He has developed over 100 microservices using Spring Boot, Java 7/8, Spring Cloud, and Netflix OSS, spanning half a dozen unique cloud-native microservice architectures. He also has experience in developing machine learning models using AWS, Spark, and MLlib to support product recommendations and enhance customer data.
David Gonzalez is an enthusiastic engineer and author of a book called Developing Microservices with Node.js (microservices don’t work without platform automation). He is a Google Developer Expert (a nomination from Google of certain experts in several areas) in Kubernetes (GKE), who enjoys being pushed out of his comfort zone in order to sharpen his skills. Java, Node.js, Python, and DevOps—as well as a holistic approach to security—are part of the skill set that has helped him deliver value across different startups and corporations. Nowadays, he is a consultant at nearForm, enabling companies to deliver the best possible solution to their IT problems or proposals, as well as an avid speaker at conferences such as Rebel Con and Google I/O Extended, among others.
Martin Helmich studied computer science at the University of Applied Sciences in Osnabrück and lives in Rahden, Germany. He works as a software architect, specializing in building distributed applications using web technologies and Microservice Architectures. Besides programming in Go, PHP, Python, and Node.js, he also builds infrastructures using configuration management tools such as SaltStack and container technologies such as Docker and Kubernetes. He is an Open Source enthusiast and likes to make fun of people who are not using Linux. In his free time, you’ll probably find him coding on one of his open source pet projects, listening to music, or reading science-fiction literature.
Alan Rodrigues has been working with software components such as Docker containers and Kubernetes for the last 2 years. He has extensive experience working on the AWS platform, and is currently certified as an AWS Solutions Architect Associate, a SysOps Administrator, and a Developer Associate. He has seen that organizations are moving towards using containers as part of their microservices architecture, and that there is a strong need to have a container orchestration tool in place. Kubernetes is by far the most popular container orchestration tool on the market.
Learning Kubernetes
This video provides an overview of the entire course.
This video aims to provide a high-level overview of installing Kubernetes.
Learn to install Kubernetes
Install Kubernetes (Minikube)
Explore Kubernetes
In this video, we will learn to set up our environment before installing Kubernetes.
Understand what Kubernetes requires
Install dependencies
Confirm whether the environment is correct
In this video, we will install a local Kubernetes cluster.
Explore the installation process
Install Kubernetes
Confirm that the cluster is available
In this video, we will explore our Kubernetes installation.
Learn to interact with Kubernetes
Learn some kubectl commands
Confirm that kubectl is working
In this video, we will understand Kubernetes core concepts before diving into advanced concepts.
Understand how Kubernetes works
Explore core Kubernetes components
Learn the Kubernetes logical architecture
This video explains why we want our cluster to be highly available to reduce downtime, and why we need a deployment pattern.
Learn target state architecture
Understand how to achieve service and data redundancy
Learn to leverage federation
In this video, we will learn about some upper limits and size limitations of Kubernetes.
Explore Kubernetes limitations
Learn scaling an app
Understand scaling an app across clusters
This video explains how to manage multiple clusters using federation.
Learn to sync resources across a cluster
Understand cross cluster discovery
Explore high availability
In this video, we will understand best practices for configuring Kubernetes.
Learn the configuration files
Learn about labels
Explore leveraging version control
In this video, we will understand how to use the kubectl CLI to create and decode secrets.
Use kubectl to encode secrets
Use kubectl to decode secrets
Create secrets manually
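The encoding step can be illustrated with a minimal Secret manifest (the name and value below are made up for illustration; the data field holds base64-encoded values, here produced with `echo -n 's3cr3t' | base64`):

```yaml
# Illustrative Secret manifest; name and value are placeholders.
# "czNjcjN0" is the base64 encoding of "s3cr3t".
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: czNjcjN0
```

Alternatively, `kubectl create secret generic db-credentials --from-literal=password=s3cr3t` creates the same object without hand-encoding, and `kubectl get secret db-credentials -o yaml` shows the encoded data.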
In this video, we will understand how to mount volumes containing secrets to apps.
Use secrets in the app
Learn to attach secrets to a volume
Learn to attach secrets to an app
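Mounting such a secret into a Pod as a volume can be sketched like this (assuming a Secret named `db-credentials`; all other names are illustrative, and each key in the Secret appears as a file under the mount path):

```yaml
# Illustrative Pod mounting the Secret "db-credentials" under /etc/secrets.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret
spec:
  containers:
  - name: app
    image: nginx:alpine
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secrets   # the app reads /etc/secrets/password
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: db-credentials
```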
This video aims to explain what a container engine, a container build tool, and a container registry are.
Build container image
Register container image
Run container image
In this video, we will learn to install Docker, and to build and test-run our Docker image.
Install Docker
Explore Docker commands
Explore the Docker config file
In this video, we will deploy our container image to Kubernetes.
Deploy our container to a pod
Explore deployment methods
Explore deployment commands
In this video, we will test whether our application is running successfully in Kubernetes.
Monitor our running app
View health of the app
Learn to send a test request to our running app
Develop and Operate Microservices on Kubernetes
This video will give you an overview about the course.
Kubernetes is difficult to set up and operate. Minikube offers an easy solution for setting up a local Kubernetes environment for testing and developing.
Find and install the appropriate version and build of Minikube
Start and stop the Minikube VM
Identify and enable required add-ons
This video shows the very first steps on deploying a microservice on Kubernetes and also covers some required core principles.
Learn about Pods and Nodes
Define a Pod using a YAML definition
Use kubectl to list and manage Pods
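A Pod definition of the kind covered here can be sketched as a minimal YAML manifest (the name, label, and image are illustrative, not taken from the course):

```yaml
# pod.yaml -- a minimal Pod definition with illustrative names and image.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```

It would be applied with `kubectl apply -f pod.yaml`; `kubectl get pods` then lists it, and `kubectl delete pod web` removes it.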
Kubernetes Pods are neither resilient nor scalable. This video introduces high-level controllers like ReplicaSets that offer solutions for this.
Learn about shortcomings of single Pods and how a ReplicaSet solves them
See ReplicaSet resiliency in action
Overview on different high-level controllers and when to use them
Practical implementation of concepts already shown in 1.4; create resilient and scalable deployments using ReplicaSets.
Learn about Labeling
Define and create a ReplicaSet
Scale a ReplicaSet
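Under the same illustrative assumptions (an `app: web` label and an nginx image as stand-ins), a ReplicaSet that keeps three such Pods running might look like this:

```yaml
# Illustrative ReplicaSet maintaining three replicas of a labeled Pod.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web      # the ReplicaSet manages Pods carrying this label
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

`kubectl scale replicaset web-rs --replicas=5` scales it, and deleting one of its Pods demonstrates the resiliency: the ReplicaSet immediately creates a replacement.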
Kubernetes Pods do not provide a stable network identity that other applications could connect to. Service objects offer a solution to that problem.
Learn what problems services solve
Define and test a service
Learn about LoadBalancer services
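A Service selecting those Pods can be sketched as follows (names are illustrative; the LoadBalancer type only provisions an external load balancer on cloud providers that support it, and ClusterIP would be used for internal-only access):

```yaml
# Illustrative Service giving Pods labeled app=web a stable network identity.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: LoadBalancer   # ClusterIP for internal-only, NodePort for node-level access
  selector:
    app: web
  ports:
  - port: 80           # port the Service exposes
    targetPort: 80     # port the Pods listen on
```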
Kubernetes service only provides simple TCP/UDP forwarding. Ingress controllers offer more sophisticated HTTP request routing features.
Learn about service shortcomings and how Ingresses solve them
Learn about different ingress controllers
Define and test an Ingress definition
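An Ingress routing HTTP traffic to such a Service might look like the following sketch (using the current networking.k8s.io/v1 API; host and service names are placeholders, and an ingress controller must be installed in the cluster for it to take effect):

```yaml
# Illustrative Ingress routing all paths of one host to a backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.local        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```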
ReplicaSets provide scalable and resilient application deployments; but they do not provide a robust mechanism for providing updates. This problem is solved by Deployment objects.
Learn why ReplicaSets do not provide application updates and how Deployments do
Define and test a Deployment with rolling update
Learn how to roll back a failed deployment to a previous version
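A Deployment with an explicit rolling-update strategy can be sketched like this (names, image, and the surge/unavailability limits are illustrative):

```yaml
# Illustrative Deployment updating Pods one at a time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down during an update
      maxSurge: 1         # at most one extra Pod above the replica count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

`kubectl set image deployment/web web=nginx:1.26` would trigger a rolling update, and `kubectl rollout undo deployment/web` rolls back to the previous revision.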
Up until now, we have only learned how to deploy stateless applications; this section introduces the concept of stateful applications and explains under which circumstances you should choose which kind of storage engine.
Learn about Stateful applications
Learn about Persistent Volumes
Learn about Network Block Devices and Network File Systems
Managing network volumes is a complex task; Kubernetes breaks this down into separately managed PersistentVolumes and PersistentVolumeClaims.
Learn about separation of concerns in working with storage volumes
Define a PersistentVolume and claim it
Define a Pod that uses the PersistentVolume created before
Manually creating PersistentVolumes is time-consuming and tedious. Automatic Volume Provisioning and Storage Classes offer a solution for this.
Learn about Storage Classes and automatic provisioning
Define a Storage Class
Define a PersistentVolumeClaim using a Storage Class
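As a sketch, a StorageClass plus a PersistentVolumeClaim referencing it might look like this (the gce-pd provisioner is just one example; the right provisioner and parameters depend on where your cluster runs):

```yaml
# Illustrative StorageClass using the GCE persistent-disk provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
# A claim against that class; a matching volume is provisioned automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: fast
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```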
Sometimes you will need to deploy clustered applications in which each instance will require its own private persistent volume. This is often the case when deploying databases. This can be done using StatefulSets.
Learn about StatefulSets and Headless Services
Define and create a StatefulSet
Learn about the exact differences between StatefulSets and ReplicaSets
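A minimal sketch of a headless Service plus a StatefulSet with per-Pod volumes follows (names, image, and sizes are illustrative; a real database deployment needs considerably more configuration):

```yaml
# Illustrative headless Service: clusterIP None gives each Pod stable DNS.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None
  selector:
    app: db
  ports:
  - port: 5432
---
# Illustrative StatefulSet; each replica gets its own PersistentVolumeClaim.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:15
        env:
        - name: POSTGRES_PASSWORD
          value: example        # illustrative only; use a Secret in practice
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```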
Often, your application will require various configuration values. While you could just define these as Pod environment variables, ConfigMaps offer a solution that is easier to manage.
Learn about different kinds of configuration data
Define a ConfigMap
Use a ConfigMap to initialize environment variables in a Pod
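A sketch of a ConfigMap and a Pod consuming one of its keys as an environment variable (all names and values are illustrative):

```yaml
# Illustrative ConfigMap holding plain configuration values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
---
# Pod injecting one ConfigMap key into a container environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:alpine
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
```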
Using ConfigMaps to manage environment variables is a useful feature – but often, applications require entire configuration files to run. This can also be solved using ConfigMaps.
Learn how to store large files in a ConfigMap
Use a ConfigMap as a volume in a Pod
Learn how to create a ConfigMap from an existing directory containing configuration files
ConfigMaps are great, but not secure. Secrets work similarly to ConfigMaps, but are designed to store secret data like passwords, API keys, and other sensitive data.
Learn about the difference (and similarities) between ConfigMaps and Secrets
Learn how to create a Secret and to use it in a Pod
Create a secret from existing files or direct user input
Typically, you will want to deploy your application automatically from a source code repository as soon as changes are made in the repository. This video will show you what a typical Continuous Delivery pipeline in Kubernetes looks like.
Learn what Continuous Integration and Delivery is (in a nutshell)
Discuss different possible typical build pipelines
Learn about typical caveats and how to solve them
GitLab is a popular solution for both version control and Continuous Integration and Delivery. This video will show how to implement the deployment pipeline shown earlier using GitLab CI.
Learn about GitLab
Define a deployment pipeline using GitLab CI
See a deployment pipeline in action
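A deployment pipeline of this shape can be sketched as a `.gitlab-ci.yml` (the image tags and the assumption that cluster credentials are already configured for the runner are illustrative; `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHA` are variables GitLab predefines):

```yaml
# Illustrative .gitlab-ci.yml: build and push an image, then roll it out.
stages:
  - build
  - deploy

build:
  stage: build
  image: docker:24.0
  services:
    - docker:24.0-dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy:
  stage: deploy
  image: bitnami/kubectl:latest   # assumes cluster access is configured
  script:
    - kubectl set image deployment/web web=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```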
Often, Kubernetes deployments are complex and consist of multiple objects that are difficult to manage manually. This can be made easier using the Helm package manager.
Learn about Helm and its basic architecture
Define a Helm chart for your application
Deploy that Helm chart into your cluster
This video combines the knowledge from the previous two and shows how to build a continuous delivery pipeline using GitLab CI and Helm.
Use Helm in a GitLab CI pipeline
Release and deploy a new version of your application
Lean back and let GitLab do the rest
Deploying Software to Kubernetes
This video will give you an overview about the course.
DevOps is a philosophy more than a set of tools or a procedure. In this video, we will look at DevOps and corporations.
Understand the DevOps concept
This video walks through the traditional release management concept, including the cost of fixing bugs. The other main part of this video deals with modern release management, where we will take a look at Agile development and communication.
Study the traditional release management system
Understand the importance of Agile development and communication
Microservices are a big trend nowadays. They are small software components that allow companies to manage their systems as vertical slices of functionality. In this video, we will see DevOps organizational alignment.
Understand microservices and DevOps Organizational alignment
Docker is a fantastic tool that follows the most modern architectural principles for running applications packaged as containers. Docker Swarm is the clustered version of Docker. In this video, we will look into clustering as well.
Understand the relationship between development and operations with clustering
Study Docker Swarm
Kubernetes is the jewel in the crown of container orchestration. We will see different ways to set up a Kubernetes cluster, and look into the Kubernetes logical architecture.
Study Kubernetes
Understand the logical architecture of Kubernetes
The first thing to start playing with in Kubernetes is a cluster. In this video, we will see how to set up a cluster in Google Cloud Platform.
Create a cluster
Install kubectl
In this video, we are going to look at some of the most important API objects that Kubernetes provides in order to build our applications. First, we look at pods, the basic element of the Kubernetes API. To overcome the disadvantages of bare pods, Kubernetes provides replica sets. We will also see services in Kubernetes.
Deploy a pod using a number of commands
Create pods using a replica set and deploy them
Create a service on top of a deployment
In Kubernetes, there are many other building blocks that can be used to build more advanced applications. In this video, we will look at other building blocks of Kubernetes such as daemon sets, PetSets, and jobs. We will also see secrets and configuration management in Kubernetes.
Understand the different building blocks of Kubernetes
Create secrets in Kubernetes
In this video, we look at different services, such as the ISO date, UTC date, and aggregator services. We will also push the images to the Google Container Registry.
Understand the test system
Look at the ISO date and UTC date service example
Look at the aggregator service code
Now that we have deployed our images to GCR, we need to automate the process so that we minimize the manual intervention. In this video, we will set up a continuous delivery pipeline for images.
Create three repositories in our GitHub account
Push the code to the remote repositories
Jenkins has become Kubernetes-friendly with a plugin that allows Jenkins to spawn slaves when required in a containerized fashion.
Set up Jenkins
In this video, we are going to set up the continuous delivery pipeline for our code and the Kubernetes infrastructure. This pipeline is going to be actioned by a Jenkins job, which we will trigger manually. We will also see different deployments such as blue-green and canary deployment.
Create a service for our application
Set up a Jenkins job to orchestrate our build
Look at different deployments
In this video, we will look at two types of monitoring: black-box monitoring and white-box monitoring.
Understand the different types of monitoring
Monitoring is usually a good candidate to involve third-party companies. In this video, we are going to take a look at three tools in particular: Pingdom, Logentries and AppDynamics.
Look at the different third-party monitoring tools
Stackdriver can monitor standalone applications by capturing metrics and logs.
Create a simple monitoring application
In this video, we will create a cluster with Stackdriver and then monitor it.
Integrate the cluster with Stackdriver
Add monitoring to the cluster
Create an alerting policy
Hands-on Kubernetes on AWS
This video will give you an overview about the course.
The aim of this video is to develop microservices-based applications.
Look at using containers
Look at isolation of components
Look at integration of various components
This video will look into working with various images.
Understand the images for web containers
Discuss the images for database containers
Learn about the images for OS-based containers
This video will discuss the types of web containers and how to create them.
Use a pre-built image from Docker Hub
Create a custom image using Dockerfile
List the deployment options for the custom image
This video will discuss the types of database containers and various networking options.
Use a pre-built image from Docker Hub
Create a custom image using Dockerfile
Create a custom network if required
The aim of this video is to discuss the types of volumes and storage for images.
Decide on volumes
Decide on bind mounts
Discuss where to store your custom images
This video will take you through how to deploy Kubernetes.
Use manual installation
Use custom tools for on-premise deployment
Use custom tools for cloud-based deployment
The aim of this video is to discuss different components of the Kubernetes cluster.
Discuss kubeadm for building the cluster
Understand kubectl for working with the cluster
Look into the details of kops for working with the cluster
This video will help us decide on the various available deployment options.
Decide whether the replicas are required
Decide how many containers per pod
Ensure images are present in Docker Hub
The aim of this video is to expose your deployment.
Discuss whether the deployment needs to be accessed
Monitor requirements for the cluster
Test, stage, or deploy the environment
The aim of this video is to use AWS for Kubernetes.
Maintain an S3 bucket
Decide on the number of masters required for the cluster
Decide on the number of nodes required for the cluster
The aim of this video is to understand different aspects when deploying an application.
Discuss the images for the containers
Look at the number of replicas required
Look at the number of nodes required
The aim of this video is to discuss the requirement for load balancing.
Expose the deployment
Access the load balancer
Use Route53 for DNS to load balancer
This video discusses the number of nodes required for clustering.
Decide on the number of masters required
Decide on the number of nodes required
Decide on image
This video explains how to observe the performance of your infrastructure.
Use the local dashboard
Monitor the pods
Monitor the nodes
If you need more information for your clusters, this video will help you.
Use local logs
Use custom tools
Use AWS Tools
The aim of this video is to teach about hosting Docker containers.
Discuss managed services
Use AWS Cloud
Use Elastic Container Service
This video will teach you how to work with Clusters.
Decide on tasks
Decide on task definitions
Decide on Services
The aim of this video is to learn about custom requirements for instances.
Choose the AMI
Create your own user data scripts
Install the container agent
The aim of this video is to look at the tasks for clusters.
Decide on task definition
Create the task definition
List the number of tasks
The aim of this video is to understand the need to compose services.
Look at the multiple tasks to create a service
Expose the service
Manage the service
The aim of this video is to add load balancing capabilities.
Choose the type of load balancer
Add Load balancer
Manage the load balancer
The aim of this video is to add autoscaling capabilities to the EC2 cluster.
Add more task instances
Add Autoscaling
Add more container instances
The aim of this video is to understand the utilization of resources.
Monitor the cluster
Use CloudWatch
Monitor tasks
The aim of this video is to look into continuous integration with containers.
Update the app code
Update the container
Have Kubernetes pull the updated container
This video looks at high availability on AWS.
Use Autoscaling
Use Elastic Load Balancer
Use multiple pods
This video looks at disaster recovery on AWS.
Use Route53
Create secondary clusters
Use CloudFormation