Setting up Jenkins and running build jobs is not enough for a production infrastructure. For optimal performance and results, architecting, designing, and implementing a production-grade Jenkins deployment is essential.
This course gets you up and running with Jenkins and enables you to deliver an optimal Jenkins deployment. On your journey, you will explore and configure features such as high availability, security, monitoring, and backup/restore of data, all of which you need to implement a scalable, production-grade infrastructure. You will also learn how to implement distributed builds, automate build pipelines, and integrate your Jenkins deployment with external services, increasing your team's productivity with Pipeline as Code to build advanced pipelines faster and more easily.
By the end of this video course, you will be able to automate, implement, secure, and manage your Jenkins deployment in no time.
About the Author
Anirban Saha is an infrastructure professional with more than seven and a half years of experience in infrastructure management across industries and organizations ranging from early-stage startups to corporate environments. He has worked extensively with configuration management and automation tools including Puppet, Chef, Ansible, SaltStack, and Terraform, to name a few. He has extensive experience in architecting, deploying, and managing large infrastructures and speaks at various conferences on the latest technologies.
Installation and Configuration of Jenkins and Related Components
To work with any technology, it is essential to have a detailed understanding of the concepts and terminology associated with it.
- Learn about Jenkins and its features
- Understand the development stages and Jenkins’ role in supporting them
- Learn about advanced features of Jenkins
Jenkins is a release engineering tool, and there are a few terms related to Jenkins and release engineering that need to be clarified.
- Explore Continuous Integration in detail
- Explore Continuous Delivery in detail
- Learn about Continuous Deployment in detail
To carry out even the most basic development support tasks with Jenkins, some essential components need to be prepared, such as software, repositories, and storage.
- Install Git and the Java Runtime Environment
- Create a GitHub account and add the basic data needed to use repositories
- Set up dedicated storage for Jenkins data
To use Jenkins, an installation is required first. Jenkins is not bound to a specific method and can be installed using different procedures and on different infrastructure platforms.
- Install Jenkins via packages from a Jenkins-managed repository
- Install Jenkins by deploying the Jenkins WAR file in Apache Tomcat
- Install Jenkins in a Docker container
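A rough sketch of these three installation routes is shown below; the repository setup, Tomcat path, and image tag are assumptions, so check the official Jenkins download documentation for current details.

```sh
# 1. Package install on a Debian/Ubuntu host
#    (assumes the Jenkins apt repository has already been configured)
sudo apt-get update && sudo apt-get install -y jenkins

# 2. WAR file deployed into an existing Apache Tomcat instance
#    (hypothetical Tomcat webapps path)
sudo cp jenkins.war /opt/tomcat/webapps/jenkins.war

# 3. Official Docker image with a named volume for JENKINS_HOME
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```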
After installation, Jenkins is accessible in a form that is not very user-friendly. Some configuration steps are needed to make it ready for first-time use and available to users.
- Install nginx and add reverse proxy configuration
- Access Jenkins user interface and explore the setup wizard
- Configure SSH user with keys and git data
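As a rough sketch of the reverse proxy step, the commands below drop a minimal nginx site configuration in front of Jenkins; the server name and configuration path are placeholders chosen for illustration.

```sh
# Minimal nginx reverse proxy in front of Jenkins on port 8080
# (hypothetical server_name and conf.d path; tune headers/timeouts as needed)
sudo tee /etc/nginx/conf.d/jenkins.conf > /dev/null <<'EOF'
server {
    listen 80;
    server_name jenkins.example.com;
    location / {
        proxy_pass         http://127.0.0.1:8080;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx
```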
Preparing, installing, and configuring Jenkins is a long process, and it becomes even more tedious when it has to be repeated for different environments or after failures. It needs automation.
- Disable setup wizard and automate the steps using scripts
- Create Puppet module for Jenkins installation and configuration
- Apply Puppet module to node and automate the process
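One small piece of that automation is skipping the interactive setup wizard so that scripts or a Puppet module can drive the remaining configuration. A minimal sketch, assuming the Debian/Ubuntu package layout (JAVA_ARGS in /etc/default/jenkins):

```sh
# Disable the setup wizard so provisioning can be fully scripted
# (JAVA_ARGS and /etc/default/jenkins assume the Debian/Ubuntu package layout)
echo 'JAVA_ARGS="-Djenkins.install.runSetupWizard=false"' | sudo tee -a /etc/default/jenkins
sudo systemctl restart jenkins
```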
The primary function of Jenkins is to let users configure jobs and run builds from them. Although this can be done from the user interface, larger and more complex deployments require doing it from the command line or with scripts.
- Create a small program and write tests for it
- Create a Jenkins job from the user interface and run builds
- Learn about Jenkins API and create jobs using API
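For reference, job creation and builds can be driven entirely over the REST API using an existing job's config.xml; the host name, credentials, and job name below are placeholders.

```sh
# Create a job from a config.xml using the Jenkins REST API
curl -X POST "http://jenkins.example.com:8080/createItem?name=sample-job" \
  --user admin:API_TOKEN \
  -H "Content-Type: application/xml" \
  --data-binary @config.xml

# Trigger a build of the new job
curl -X POST "http://jenkins.example.com:8080/job/sample-job/build" \
  --user admin:API_TOKEN
```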
High-availability, Monitoring, and Management of Jenkins Deployments
With the growing number of projects in a Jenkins deployment, it is crucial to make sure that it withstands failures without any downtime. A highly available infrastructure tries to achieve this objective.
- Explore high availability support in Jenkins ecosystem
- Create multiple Jenkins master nodes with shared storage
- Use HAProxy as load balancer and test by failing nodes
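A minimal HAProxy sketch of the load-balancing step is shown below, assuming two masters sharing the same JENKINS_HOME storage; the IP addresses are placeholders and the second master is kept as a passive backup.

```sh
# Append a simple frontend/backend pair for Jenkins to the HAProxy config
sudo tee -a /etc/haproxy/haproxy.cfg > /dev/null <<'EOF'
frontend jenkins_front
    bind *:80
    default_backend jenkins_back

backend jenkins_back
    option httpchk GET /login
    server master1 10.0.0.11:8080 check
    server master2 10.0.0.12:8080 check backup
EOF
sudo systemctl reload haproxy
```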
Whether it is build history, job information, or configuration files, it is good production practice to back up Jenkins data so that it can be restored in the event of a failure.
- Explore methods to back up and restore Jenkins data
- Install and configure the Periodic Backup plugin and run a backup of Jenkins data
- Simulate data loss and restore from backup
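Alongside the Periodic Backup plugin, one simple method is archiving JENKINS_HOME directly; the paths and exclusions below are assumptions to adapt to your installation.

```sh
# Archive JENKINS_HOME, skipping bulky workspace data
sudo tar czf /backup/jenkins-$(date +%F).tar.gz \
  --exclude='workspace' --exclude='caches' \
  -C /var/lib/jenkins .

# Restore: stop Jenkins, unpack the archive over JENKINS_HOME, start again
sudo systemctl stop jenkins
sudo tar xzf /backup/jenkins-2024-01-01.tar.gz -C /var/lib/jenkins
sudo systemctl start jenkins
```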
Without proper monitoring of the Jenkins infrastructure and its various components, it would be impossible to learn about emerging problems and fix them in time.
- Monitor Jenkins deployment using the monitoring plugin
- Monitor Jenkins metrics using the Graphite reporting plugin and a Graphite server
- Monitor Jenkins components using "monit"
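As a sketch of the monit piece, the check below restarts Jenkins if the process dies or its web interface stops answering; the pidfile and configuration paths assume a package-based install and should be verified against your setup.

```sh
# Simple monit check for the Jenkins process and its HTTP port
sudo tee /etc/monit/conf.d/jenkins > /dev/null <<'EOF'
check process jenkins with pidfile /var/run/jenkins/jenkins.pid
    start program = "/bin/systemctl start jenkins"
    stop program  = "/bin/systemctl stop jenkins"
    if failed port 8080 protocol http request "/login" for 3 cycles then restart
EOF
sudo monit reload
```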
With bigger deployments comes a bigger risk of data and permission abuse, including accidental actions that cause widespread damage. The only way to keep these situations in check is to implement proper security measures and role-based access control.
- Explore best practices in security and advanced authentication mechanisms
- Configure an LDAP-based security realm for authentication
- Manage access control using role-based authorization strategy
The Jenkins user interface is quite good and very helpful for performing tasks. However, for more automation-savvy people, it is just not enough; they need solutions that can be scripted to achieve the same objectives, only much faster.
- Use API token to make calls and explore API endpoints
- Use Jenkins-cli to perform tasks
- Manage plugins efficiently using Jenkins-cli
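The CLI jar can be downloaded from the running master itself; the calls below are a small sample, with the URL and credentials as placeholders.

```sh
# Fetch the CLI jar from the master
curl -sO http://jenkins.example.com:8080/jnlpJars/jenkins-cli.jar

# Basic calls with API-token authentication
java -jar jenkins-cli.jar -s http://jenkins.example.com:8080 -auth admin:API_TOKEN who-am-i
java -jar jenkins-cli.jar -s http://jenkins.example.com:8080 -auth admin:API_TOKEN list-jobs

# Plugin management from the command line
java -jar jenkins-cli.jar -s http://jenkins.example.com:8080 -auth admin:API_TOKEN install-plugin git -deploy
```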
Implementing Distributed Build Architectures and Code Deployments
No matter how much we scale a Jenkins master vertically, it simply will not be enough to support an ever-growing deployment. At some point, a distributed approach will need to be adopted.
- Understand distributed architecture and slave concepts
- Explore agent launch methods
- Understand labels in detail
An efficient way to run a Jenkins deployment is to offload build tasks to slave nodes while keeping administrative tasks on the master. The most traditional way of running slaves is to add dedicated Jenkins slave nodes.
- Set up slave nodes along with dependencies
- Configure SSH access and add slave node via user interface
- Configure labels and run builds on the slave
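A minimal sketch of preparing a Linux host as an SSH slave is shown below; the user name, key file, and Java package are assumptions, and the node itself is then added from the Jenkins user interface.

```sh
# Prepare a build host for use as an SSH slave
sudo useradd -m -d /home/jenkins jenkins
sudo apt-get install -y openjdk-11-jre-headless git

# Authorize the key that the master will use to connect
sudo mkdir -p /home/jenkins/.ssh
cat jenkins_agent_key.pub | sudo tee -a /home/jenkins/.ssh/authorized_keys
sudo chown -R jenkins:jenkins /home/jenkins/.ssh
sudo chmod 600 /home/jenkins/.ssh/authorized_keys
```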
With cloud platforms running a huge part of global infrastructure, testing is another workload that can be moved to them. Dynamic EC2 instances are an excellent and cost-effective way of running Jenkins slaves.
- Configure AWS IAM user and credentials
- Install EC2 plugin and configure the data to connect to AWS
- Run builds on AWS EC2 slaves
Containers are the newest revolution to take over the infrastructure space. With launch times of a few seconds and a flexible architecture, they are a great choice for running disposable Jenkins slaves.
- Understand containers as Jenkins slaves and select an image for running slaves
- Install and configure the Docker plugin
- Run builds on remote Docker containers
Although standalone containers are quite good for running Jenkins slaves, an even more efficient workflow can be achieved by using a container orchestration and clustering platform such as Kubernetes.
- Understand how Kubernetes runs Jenkins slaves
- Install and configure the Kubernetes plugin and dependencies
- Run builds on remote Kubernetes pods
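With the Kubernetes plugin configured, a pipeline can request a throwaway pod as its agent directly from the Jenkinsfile; the container image and pod spec below are illustrative assumptions.

```groovy
// Declarative pipeline running its build inside a dynamically provisioned pod
// (requires the Kubernetes plugin and a configured Kubernetes cloud)
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3-jdk-11
    command: ["sleep"]
    args: ["infinity"]
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                container('maven') {
                    sh 'mvn -B clean package'
                }
            }
        }
    }
}
```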
To achieve continuous deployment, one of the most important methods of release engineering, an efficient code deployment design is required. With the numerous tools available for deployments, it is crucial that we choose the right one for the complexity and size involved.
- Understand code deployments and involved concepts
- Configure AWS EC2 and CodeDeploy to provide a platform for rapid and efficient deployment
- Install and configure the CodeDeploy plugin and deploy code from Jenkins
Understanding and Implementing Build Pipelines
For complex integration and release scenarios, jobs alone are not enough. Jobs need to work together to create a sequenced workflow called a pipeline. In the new model, Jenkins pipelines are a game changer, with all the features of pipelines integrated into Jenkins itself.
- Understand workflow of release engineering
- Understand legacy pipeline model using Jenkins jobs
- Understand new pipeline model and its features
The new pipeline model of Jenkins has the Jenkinsfile at its core. It is the source of all configuration related to the pipeline and uses a domain-specific language with its own syntax, which, along with the detailed documentation, needs some exploring.
- Understand the Jenkinsfile and basic syntax
- Explore Jenkinsfile syntax documentation and reference materials
- Understand some important sections and directives
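For orientation, a minimal declarative Jenkinsfile using the sections and directives discussed here might look like the following; the shell commands are placeholders for a real build.

```groovy
// Minimal declarative pipeline: agent, environment, stages, steps, and post
pipeline {
    agent any
    environment {
        APP_ENV = 'staging'        // example environment directive
    }
    stages {
        stage('Build') {
            steps {
                sh 'make build'    // hypothetical build command
            }
        }
        stage('Test') {
            steps {
                sh 'make test'     // hypothetical test command
            }
        }
    }
    post {
        failure {
            echo "Build failed for ${env.APP_ENV}"
        }
    }
}
```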
With knowledge of the Jenkinsfile syntax, the next step is to create pipelines using the different methods available. Automating the creation and updating of pipeline jobs is also necessary.
- Create pipeline from Jenkinsfile in pipeline project
- Create pipeline from Jenkinsfile in the code repository
- Automate pipeline creation using scripts
Almost all projects have multiple branches for efficient integration practices, and a general pipeline can only handle one branch. Multi-branch pipelines are essential for automating projects that involve multiple branches.
- Understand the concept of multi-branch pipelines
- Configure prerequisites for multi-branch pipelines
- Create and explore a new multi-branch pipeline
Although Jenkins has been solving release engineering problems for a long time, the learning curve can be steep for some users. With Blue Ocean, even the least technical users are able to view and use Jenkins pipelines with ease.
- Understand concepts and features of Blue Ocean
- Install and enable Blue Ocean
- Create pipelines using Blue Ocean
Integrating Jenkins with External Services
To achieve continuous integration, efficient branching and code commits are not enough. There needs to be a process that automatically triggers pipelines and runs builds continuously. GitHub integration is essential for this process.
- Explore required plugins
- Configure GitHub and Jenkins for communication
- Configure pipeline for workflow and test scenarios
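A sketch of a webhook-driven pipeline is shown below; the githubPush() trigger assumes the GitHub plugin is installed and a webhook points at the Jenkins instance, and the repository URL and commands are placeholders.

```groovy
// Pipeline intended to be started by GitHub push webhooks
pipeline {
    agent any
    triggers {
        githubPush()               // assumes the GitHub plugin's push trigger
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/example/sample-app.git', branch: 'master'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'     // hypothetical test command
            }
        }
    }
}
```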
In modern-day software development, it is not enough just to write code. Efficient and optimized coding practices are essential, and continuous code inspection and analysis become an integral part of the process.
- Install and configure the SonarQube plugin
- Add pipeline configuration for SonarQube analysis
- Run the pipeline and generate an analysis report for the project
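The analysis stage can be expressed roughly as below; 'MySonarServer' must match a SonarQube server configured in Jenkins, and the Maven goal assumes a Java project.

```groovy
// SonarQube analysis followed by a quality gate check
pipeline {
    agent any
    stages {
        stage('Static Analysis') {
            steps {
                withSonarQubeEnv('MySonarServer') {   // server name configured in Jenkins
                    sh 'mvn -B sonar:sonar'
                }
            }
        }
        stage('Quality Gate') {
            steps {
                timeout(time: 10, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }
    }
}
```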
With the numerous programming languages available in the software ecosystem, each type of software needs to be packaged in its own way. It is critical that software is packaged efficiently and made available to end users.
- Install and configure the Artifactory plugin
- Add configuration for package creation and upload to Artifactory
- Run the pipeline, upload to Artifactory, and test on an instance
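A rough sketch of the upload step using the Artifactory plugin's scripted DSL is shown below; the server ID, repository name, and file pattern are assumptions.

```groovy
// Package the application and publish the artifact to Artifactory
node {
    stage('Package') {
        sh 'make package'                                   // hypothetical packaging step
    }
    stage('Publish') {
        def server = Artifactory.server 'my-artifactory'    // server ID from Jenkins configuration
        def uploadSpec = '''{
            "files": [{
                "pattern": "dist/*.tar.gz",
                "target": "generic-local/myapp/"
            }]
        }'''
        server.upload spec: uploadSpec
    }
}
```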
With continuous integration, problems are identified readily and can be fixed early in the process. However, problems differ in complexity and need efficient tracking throughout the fixing process. JIRA integrates with Jenkins in an excellent manner and achieves exactly this objective.
- Install and configure the JIRA plugin
- Configure JIRA, GitHub, and Jenkins for inter-communication
- Configure pipeline for JIRA and test pipeline
With large environments and numerous projects and pipelines, identifying problems and getting notified about them quickly becomes essential. In addition to emails, instant notifications come in handy when the people concerned need to be updated.
- Install and configure the Slack plugin in Jenkins
- Install and configure Jenkins app in Slack
- Add pipeline configuration for Slack and run pipelines
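The notification configuration typically lives in the pipeline's post section; slackSend comes from the Slack Notification plugin, and the channel name below is a placeholder.

```groovy
// Send Slack notifications based on the build result
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'    // hypothetical build command
            }
        }
    }
    post {
        success {
            slackSend channel: '#builds', color: 'good',
                      message: "SUCCESS: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
        failure {
            slackSend channel: '#builds', color: 'danger',
                      message: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
    }
}
```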