Learning Path: Modern DevOps
Ready to get more efficient and effective in overcoming day-to-day IT infrastructure challenges? Let’s take advantage of the DevOps revolution!
Packt’s Video Learning Paths are a series of individual video products put together in a logical and stepwise manner such that each video builds on the skills learned in the video before it.
DevOps looks at software development in a whole new way. It lets you automate the configuration of your infrastructure servers and then address continuous deployment, containers, and monitoring. Git, Docker, and Puppet are among the foremost tools in the modern DevOps world.
This Learning Path begins with a deep dive into DevOps in the Mastering DevOps section. We then cover the basics of version control using Git in the Learning Git section. Further on, we move towards mastering containerization using Docker in the Mastering Docker section. Finally, we learn how to leverage Puppet to ease configuration management in our IT infrastructure.
We have designed this course keeping in mind what modern DevOps engineers require to fully utilize the resources at hand.
About The Author
This Learning Path is authored by some of the best in their fields.
Dave Mangot is the director of operations for Librato and Papertrail and an accomplished systems engineer with over 20 years of experience. He has led transformations of multiple companies both in operational maturity and in a deeper adherence to DevOps thinking.
Shrikrishna Holla is a full-stack developer and entrepreneur based in Bengaluru. He builds and maintains sigalrm.io, a service that provides actionable alerts, allowing your engineers to take immediate remedial measures.
Thomas Uphill is a long-time user of Puppet. He has presented Puppet tutorials at LOPSA-East, Cascadia, and PuppetConf. He has also been a system administrator for over 20 years, working primarily with Red Hat systems. He runs the Puppet User Group of Seattle (PUGS).
This video provides an overview of the entire course.
There is a lot of press about DevOps, but really what's in it for you? In this video, we show the results of the science behind the approach and start looking at software development in a whole new way.
In Gene Kim's DevOps model of the three ways, the first way is about optimizing delivery of our software. In this video, we are exposed to systems thinking and ways we can improve our speed of delivery.
Getting feedback on our improvement is the only way to know if the changes we make are having an effect. In this video, we examine the ways to get feedback and the benefits of collecting feedback.
The fastest and best way to move quickly is to continually experiment with new ideas, concepts, and techniques. This video discusses how continual testing, probing, and analyzing lead to the highest-performing teams.
The components of the three ways, used together, enable us to move faster, learn quicker, and improve more than ever in the past. The principle of Kaizen is the rationale for these improvements.
In this video, we introduce the CAMS model of DevOps and start to examine why Culture was made the very first, and most important, component of the model.
We'll explain why automation is a critical component in DevOps that allows us to get all of the benefits of the three ways in this video.
Just like the second way where we need feedback, the Measurement part of the CAMS model complements the second way; we'll examine what you can measure and how to use that information to your advantage.
In this video, we'll briefly discuss the final component of the CAMS model—Sharing. There are examples of sharing built into DevOps continually; we'll discuss why.
A DevOps Software Development Life Cycle (SDLC) looks very different from a traditional SDLC. We examine how the DevOps one helps us achieve the goals of Gene Kim's three ways.
"You wrote it, you run it." What does that mean? In this video we look at the concept of service ownership and how everyone is responsible for how our software performs in production.
Let's talk about hack events—not only how they are great fun to participate in, but also how to run one and why they are such a key component of the culture of many admired companies.
We take a look at destructive testing and how to incorporate it into the culture of your company in order to deliver high-quality software.
In many large companies, we achieve the goals of DevOps through cross-functional teams. In many small companies, we act as one cross-functional team. We look at why it's such a powerful way to organize teams.
This video will look at the reasons why automation is a necessary part of the maturity of a company. We'll talk about what the normal problems are and how automation empowers us.
Now we fire up a virtual machine and write our first infrastructure code. We'll get introduced to the operating model of SaltStack and use it to configure our virtual machine.
Configuring a single virtual machine is useful, but controlling thousands of machines from a single location is how we leverage automation to do things we could only dream of in the past. In this video, we learn how to do just that.
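As a sketch of what such infrastructure code looks like, here is a minimal SaltStack state (the package and service names are illustrative, not the course's exact example). It declares the desired configuration, and `salt 'web*' state.apply webserver` enforces it on one minion or on thousands:

```yaml
# /srv/salt/webserver.sls -- declare what the machine should look like;
# Salt makes every targeted machine match it.
nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx
```

The same file drives one virtual machine or an entire fleet; only the target pattern in the `salt` command changes.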
Containers are one of the hottest topics in DevOps at the moment. Why? We will look at what came before, what's great about containers now, and how to choose between containers and configuration management.
We've built and run a virtual machine using configuration management. In this video, we do the same for containers as we learn about Docker and build and run our first container.
We know that automation is important for DevOps, and Continuous Delivery is critical to being able to deploy quickly and satisfy the first way of DevOps. We look at how a proper continuous delivery pipeline is formed.
In this video, we are first introduced to Vagrant, which we'll use again and again throughout the course. We'll learn how easy it is to set up a test environment, and to control our virtual machine after we do.
In this video, we go beyond the basics of Vagrant. We learn how to control multiple machines and set up a development environment that leverages both our favorite tools and multiple Vagrant virtual machines.
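A multi-machine `Vagrantfile` might look like this sketch (the box name and IP addresses are illustrative); `vagrant up web` boots just one machine, while `vagrant up` boots both:

```ruby
# Vagrantfile -- two named VMs sharing one base box definition.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.define "web" do |web|
    web.vm.network "private_network", ip: "192.168.33.10"
  end

  config.vm.define "db" do |db|
    db.vm.network "private_network", ip: "192.168.33.11"
  end
end
```

Each defined machine gets its own `vagrant ssh` target, so a realistic multi-tier environment lives in a single file alongside your code.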
Now that we know how to write some configuration management code and how to work with Vagrant virtual machines, we bring the two concepts together and learn how to automate the testing of our configuration management code.
We know what it takes to get our code to be ready for production with automation and testing. Now we look at some different methodologies for delivering software over the last line, into production.
We know that in order to make improvements we need to be able to measure how far we've come. In this video, we look at different types of metrics you can collect, how to collect them, and how they can be used.
Graphite is one of the most popular open-source metrics tools available. We'll set up our very first Graphite server, examine the components, and look at some of the ways we can use it to really understand our data.
Using the built-in data from our example was a great way to get used to the Graphite system. In this video, we're going to go one level deeper and see ways in which we can programmatically submit real data to our metrics system. This will allow us to monitor anything we can measure.
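Graphite's plaintext protocol makes programmatic submission straightforward: each metric is a single `<path> <value> <timestamp>` line sent to the carbon listener. A sketch (the metric name and host are illustrative):

```shell
# Build one metric line in Graphite's plaintext format.
ts=$(date +%s)
metric="myapp.web.requests.count 42 ${ts}"
echo "$metric"

# To submit it for real, pipe the line to carbon (port 2003 by default):
#   echo "$metric" | nc graphite.example.com 2003
```

Anything that can produce such a line, from a cron job to application code, can feed the metrics system.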
There are lots of different pieces of software that will allow us to collect metrics about our systems and applications with very little effort. We look at some popular and simple examples and how we can automate their installation using Puppet.
Being able to collect lots of data is great, but we can't sit and look at dashboards all day long. We'll configure a monitoring and alerting system to take our Graphite data and perform actions based on what it finds.
Agile software development revolutionized the way software was developed, much like how DevOps revolutionizes the way it is delivered. We'll learn about the goals of agile and look at specific examples of its implementation.
We know that retrospectives are valuable ways to collect feedback. We dive even deeper into that concept as we discuss learning reviews, things to enable your success, and pitfalls we need to be cautious of.
Practicing ChatOps is one of the most fun and effective ways of Sharing in a DevOps culture. We'll talk about what makes ChatOps so powerful and then build our own Chatbot to really bring the concept home.
There are lots of other ways the most successful companies practice sharing. We look at a few different techniques you can experiment with to bring sharing deeper and wider into your organization.
We've learned over the duration of the course that DevOps is more than just Dev and Ops. We examine how you can bring other parts of the organization into the DevOps fold and enable the delivery of secure, high-quality software.
Solving the problems presented by auditors and compliance can be some of the most challenging parts of our work in IT. In this video, we'll examine how, armed with our DevOps approach, we can use the things we've learned to make these problems much easier to solve.
A company is more than just Dev, Ops, Security and compliance. It's a living, breathing ecosystem of many different departments and people, all working together towards the same goal. We take a look at how we can bring the entire system together.
Peter Senge's idea of a Learning Organization shares many elements that we describe when talking about DevOps. We examine Peter's idea and present resources you can use to explore the areas of DevOps that interest you the most in greater depth.
Get introduced to Git and learn how it can help developers work more efficiently.
Version control is very important to track changes when several people are working on a single project.
Different team dynamics require different collaboration techniques. Choose the workflow that suits your group.
Collaboration on changing content necessitates the ability to keep a history of modifications. Initialize your Git repository right away to begin tracking changes.
After making changes, your project is in a working state, which you need to save before further modifications. Use "git commit" to check in this set of changes.
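As a sketch of that flow (the repository path and file name are illustrative):

```shell
# Set up a throwaway repository to demonstrate the flow.
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# Make a change, stage it, then check it in with a descriptive message.
echo "hello" > README.md
git add README.md
git commit -q -m "Add README"
```

Each commit is a saved working state you can return to before making further modifications.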
In order to collaborate, other team members need access to your repository. Add a link to an online remote repository where everyone's changes will be gathered.
You need to be able to identify when changes were made to files and who made them. Use "git log" with various options to see the story of how your project was built.
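A couple of the more useful `git log` variations, shown against a tiny throwaway history (author details are illustrative):

```shell
# A tiny history to explore.
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo one  > file.txt && git add file.txt && git commit -q -m "First commit"
echo two >> file.txt && git commit -q -am "Second commit"

# Compact, graphical history:
git log --oneline --graph
# Per-commit file statistics, filtered by author:
git log --stat --author="Dev"
```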
Keep your work streams clean and isolated. Make branches for each feature and let team members work with copies of the original repository.
Keep current with updates from other team members. Track and pull down updates from shared branches.
Submission of work requires an approval process. Use a pull request as a means to discuss and approve reviewable changes.
Small commits can cause noisy history and difficult conflict resolution. Use interactive rebasing to squash a range of commits into one.
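A sketch of squashing with an interactive rebase. Interactively you would run `git rebase -i` and change `pick` to `squash` or `fixup` in the editor; here that editor step is scripted (GNU sed assumed) so the example runs unattended:

```shell
# Three commits, the last two of which are noisy work-in-progress.
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo a >  f.txt && git add f.txt && git commit -q -m "Base"
echo b >> f.txt && git commit -q -am "WIP 1"
echo c >> f.txt && git commit -q -am "WIP 2"

# Replace "pick" with "fixup" on the second todo line, folding WIP 2 into WIP 1.
GIT_SEQUENCE_EDITOR='sed -i -e "2s/^pick/fixup/"' git rebase -i HEAD~2
git log --oneline   # "WIP 1" and "WIP 2" are now a single commit
```

`fixup` discards the folded commit's message; use `squash` instead if you want to combine the messages.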
You want to keep track of specific versions of your project, such as those used in deployments. Use the tagging feature to permanently mark any point in the project's history.
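For example (the tag name and message are illustrative), an annotated tag records the tagger, date, and a message, which makes it a good fit for marking releases:

```shell
# A one-commit repository to tag.
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "v1" > app.txt && git add app.txt && git commit -q -m "Release candidate"

# Permanently mark this point in history with an annotated tag.
git tag -a v1.0.0 -m "First production release"
git tag                 # lists the tag
# Share it with the team later: git push origin v1.0.0
```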
When collaborating, you need a quick insight into who added or removed code and why. Use the blame feature in forward and reverse order to discover where changes originated from.
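A sketch of blame in action (author and file names are illustrative):

```shell
# A small file with tracked history.
cd "$(mktemp -d)"
git init -q
git config user.email "alice@example.com"
git config user.name "Alice"
printf 'line one\nline two\n' > notes.txt
git add notes.txt && git commit -q -m "Add notes"

# Per line: the commit, author, and date that last changed it.
git blame notes.txt
# Restrict to a range of lines:
git blame -L 1,1 notes.txt
# Reverse blame walks history the other way, showing the last revision in
# which each line still existed:  git blame --reverse <start>..<end> notes.txt
```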
You've acquired substantial knowledge on a powerful versioning tool. Review the lessons learned and get some hands-on experience.
This video provides an overview of the entire course.
The aim of this video is to talk about the underlying concepts of Docker. It is critical for us to know how the internals of Docker are laid out so that if we encounter problems whilst using Docker, we will be able to figure out exactly what went wrong and where.
The aim of this video is to revisit some of the more useful Docker CLI commands.
Running setup commands in a running container and then committing it, although possible, is not an efficient solution. It also doesn’t lend itself very well to automation. So, we will look at automating the image creation process using a Dockerfile and the docker build command.
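A minimal Dockerfile sketch of the idea (the base image and packages are illustrative, not the course's exact example); `docker build -t webserver .` turns it into an image:

```dockerfile
# Each instruction creates a layer; the result is a reproducible image
# built from code instead of hand-run commands.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nginx
COPY index.html /var/www/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Because the build is described in a file, it can be versioned, reviewed, and repeated automatically.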
In this section and video, we will learn about Docker Compose. Compose is a tool for orchestrating multi-container Docker applications.
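As a sketch, a hypothetical two-service application might be described in a `docker-compose.yml` like this (image versions and names are illustrative); `docker-compose up` then starts both containers in dependency order:

```yaml
# docker-compose.yml -- each top-level service becomes a container.
version: "2"
services:
  web:
    build: .              # built from the Dockerfile in this directory
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: example
```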
We have set up diaspora a number of times and in various ways over the last few videos. Let us apply this learning to deploy diaspora onto an AWS instance.
The aim of this video is to scale application services across multiple containers in a single host.
The aim of this video is to discuss the default networking drivers available in Docker, and specifically the bridge network.
Discuss and get thoroughly familiar with multi-host networks.
The aim of this video is to explore solutions to service discovery.
In this video, we will be designing infrastructure for the next phase of our diaspora deployment.
Use Swarm to deploy diaspora on a cluster of Docker hosts.
Deploy a Swarm cluster on AWS.
Discover the tools that give more power to operations, with a better ability to scale out. These tools are production-ready, battle-tested, and used in production today at some of the biggest companies.
Explore Kubernetes, the cluster management tool Google uses to back its Container Engine.
We will be setting up Marathon and Mesos locally in a VM.
Discuss security considerations and possible attack vectors in a Docker deployment.
Explore Docker Bench for Security tool and use it for our Docker environment.
This video deals with the issue of content security when transferring objects over an untrusted medium: the Internet.
Discuss the options available to route logs—logging drivers.
Learn how to use volume plugins.
Discover how to extend Docker with network plugins.
Discuss the best practices in a Docker environment.
Discover the tools available to complement workflows in the Docker ecosystem.
We will look at Dockercraft.
Mastering Puppet for Large Infrastructures
This video provides an overview of the entire course.
Puppet is a configuration management tool; you use it to keep the configuration of your machines consistent. A Puppet installation uses several specific terms; we'll introduce those here.
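Those terms map onto code roughly like this minimal manifest sketch (the package, service, and node names are illustrative): a manifest contains a class built from resources, and a node that includes the class converges to that state on every Puppet run.

```puppet
# A class groups related resources into a reusable unit.
class ssh_server {
  package { 'openssh-server':
    ensure => installed,
  }
  service { 'sshd':
    ensure  => running,
    enable  => true,
    require => Package['openssh-server'],
  }
}

# A node definition assigns classes to a specific machine.
node 'web01.example.com' {
  include ssh_server
}
```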
This section provides a detailed understanding of the different Puppet architectures, with emphasis on the difference between installing with Passenger and with Puppet Server.
We will learn how to initialize a code repository for our project and how to create the appropriate branches to be used later in development, builds, and tests.
We will learn to use r10k to manage module dependencies in our project and see how it can help automate deployments in the MPLI Productions Puppet infrastructure.
In this video, we will go through a typical installation of a Puppet Server machine in a standalone configuration. For convenience, we will refer to this machine as puppetca.
In this video, we are going to have one puppetca and two puppet master machines. The puppetca will act as a load balancer for our configuration.
In this video, we will modify our configuration to make the puppetca machine accessible directly.
In this video, we will learn how to tweak parameters in the Puppet configuration files to increase performance in Puppet client runs and in storing and retrieving Puppet metadata.
In this video, we will learn how to install and configure PuppetDB with a PostgreSQL database as the backend for better scalability in the MPLI Productions environment.
In this video, we will introduce the concept of exported resources and show some usage examples.
In this video, we will learn how to utilize the default dashboard available with PuppetDB and how it can be used to visualize infrastructure data for MPLI Productions.
In this video, we will learn about the different API endpoints that PuppetDB provides and the type of information that we can utilize and manipulate from them.
We will learn how to use publicly available, already developed modules and reuse them for tasks in MPLI Productions.
We will learn to create custom facts based on internal requirements for MPLI Productions and use them to develop modules.
We will learn how to go beyond the default types available with Puppet and write custom types for an MPLI Productions environment.
We will understand what Hiera is and how to use it for data management for an MPLI Productions Puppet environment.
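As a sketch (Hiera 3 syntax; the paths and keys are illustrative), the lookup hierarchy is configured in `hiera.yaml`, the data lives in YAML files, and the most specific level wins:

```yaml
# hiera.yaml -- consulted top to bottom; the first match for a key wins.
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::fqdn}"
  - common
:yaml:
  :datadir: /etc/puppetlabs/code/hieradata
```

A value in `nodes/web01.example.com.yaml` then overrides the same key in `common.yaml`, keeping site data out of module code.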
We will configure MCollective on the Puppet client nodes in MPLI Productions so that we can run remote commands and modules on those nodes.
We will learn how to configure Puppet to enable reporting.