The world of Hadoop and “Big Data” can be intimidating – hundreds of different technologies with cryptic names form the Hadoop ecosystem. With this Hadoop tutorial, you’ll not only understand what those systems are and how they fit together – but you’ll go hands-on and learn how to use them to solve real business problems!
Learn and master the most popular big data technologies in this comprehensive course, taught by a former engineer and senior manager from Amazon and IMDb. We’ll go way beyond Hadoop itself, and dive into all sorts of distributed systems you may need to integrate with.
Install and work with a real Hadoop installation right on your desktop with Hortonworks (now part of Cloudera) and the Ambari UI
Manage big data on a cluster with HDFS and MapReduce
Write programs to analyze data on Hadoop with Pig and Spark
Store and query your data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto
Design real-world systems using the Hadoop ecosystem
Learn how your cluster is managed with YARN, Mesos, Zookeeper, Oozie, Zeppelin, and Hue
Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm
Understanding Hadoop is a highly valuable skill for anyone working at companies with large amounts of data.
Almost every large company you might want to work at uses Hadoop in some way, including Amazon, eBay, Facebook, Google, LinkedIn, IBM, Spotify, Twitter, and Yahoo! And it’s not just technology companies that need Hadoop; even the New York Times uses Hadoop for processing images.
This course is comprehensive, covering over 25 different technologies in over 14 hours of video lectures. It’s filled with hands-on activities and exercises, so you get some real experience in using Hadoop – it’s not just theory.
You’ll find a range of activities in this course for people at every level. If you’re a project manager who just wants to learn the buzzwords, there are web UIs for many of the activities in the course that require no programming knowledge. If you’re comfortable with command lines, we’ll show you how to work with them too. And if you’re a programmer, I’ll challenge you to write real scripts on a Hadoop system using Scala, Pig Latin, and Python.
You’ll walk away from this course with a real, deep understanding of Hadoop and its associated distributed systems, and the ability to apply Hadoop to real-world problems. Plus, a valuable completion certificate is waiting for you at the end!
Please note the focus of this course is on application development, not Hadoop administration, although you will pick up some administration skills along the way.
Knowing how to wrangle “big data” is an incredibly valuable skill for today’s top tech employers. Don’t be left behind – enroll now!
“The Ultimate Hands-On Hadoop… was a crucial discovery for me. I supplemented your course with a bunch of literature and conferences until I managed to land an interview. I can proudly say that I landed a job as a Big Data Engineer around a year after I started your course. Thanks so much for all the great content you have generated and the crystal clear explanations.” – Aldo Serrano
“I honestly wouldn’t be where I am now without this course. Frank makes the complex simple by helping you through the process every step of the way. Highly recommended and worth your time, especially the Spark environment. This course helped me achieve a far greater understanding of the environment and its capabilities.” – Tyler Buck
Learn all the buzzwords! And install the Hortonworks Data Platform Sandbox.
How to ask questions, tune the video playback, enable captions, and leave reviews.
After a quick intro, we'll dive right in and install Hortonworks Sandbox in a virtual machine right on your own PC. This is the quickest way to get up and running with Hadoop so you can start learning and experimenting with it. We'll then download some real movie ratings data, and use Hive to analyze it!
What's Hadoop for? What problems does it solve? Where did it come from? We'll learn Hadoop's backstory in this lecture.
We'll take a quick tour of all the technologies we'll cover in this course, and how they all fit together. You'll come out of this lecture knowing all the buzzwords!
Using Hadoop's Core: HDFS and MapReduce
Learn how Hadoop's Distributed Filesystem allows you to store massive data sets across a cluster of commodity computers, in a reliable and scalable manner.
Before we can analyze movie ratings data from GroupLens using Hadoop, we need to load it into HDFS. You don't need to mess with command lines or programming to use HDFS. We'll start by importing some real movie ratings data into HDFS just using a web-based UI provided by Ambari.
Developers might be more comfortable interacting with HDFS via the command line interface. We'll import the same data, this time from a terminal prompt.
Learn how mappers and reducers provide a clever way to analyze massive distributed datasets quickly and reliably.
Learn what makes MapReduce so powerful, by horizontally scaling across a cluster of computers.
Let's look at a very simple example of MapReduce - counting how many of each rating type exists in our movie ratings data.
The quickest and easiest way to get started with MapReduce is Python's MRJob package, which uses Hadoop Streaming to let you write MapReduce code in Python instead of Java. Let's get set up.
We'll study our code for building a breakdown of movie ratings, and actually run it on your system!
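If you'd like a taste ahead of the lecture, here's a minimal sketch of what such a ratings-breakdown job can look like with mrjob. It assumes the MovieLens u.data format (tab-separated userID, movieID, rating, timestamp); the class name is illustrative and not necessarily the course's exact code.

```python
# Illustrative sketch, not the course's exact script.
# Assumes MovieLens u.data: userID<TAB>movieID<TAB>rating<TAB>timestamp.
from mrjob.job import MRJob

class RatingsBreakdown(MRJob):
    def mapper(self, _, line):
        (user_id, movie_id, rating, timestamp) = line.split('\t')
        yield rating, 1            # emit each rating value with a count of 1

    def reducer(self, rating, counts):
        yield rating, sum(counts)  # total occurrences of each rating value

if __name__ == '__main__':
    RatingsBreakdown.run()
```

You can run a job like this locally with `python RatingsBreakdown.py u.data`, or against your sandbox's Hadoop installation using mrjob's `-r hadoop` runner.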
As a challenge, see if you can write your own MapReduce script that sorts movies by how many ratings they received. I'll give you some hints, set you off, and then review my solution to the problem.
Let's see how I solved the challenge from the previous lecture - we'll change our script to count movies instead of ratings, and then review and run my solution for sorting by rating count.
Programming Hadoop with Pig
Ambari is Hortonworks' web-based UI (similar to Hue, which Cloudera uses). We can use it as an easy way to experiment with Pig, so let's take a closer look at it before moving ahead.
An overview of what Pig is used for, who it's for, and how it works.
We'll use Pig to script a chain of queries on MovieLens to solve a more complex problem.
Let's actually run our example from the previous lecture on your Hadoop sandbox, and find some good, old movies!
We covered most of the basics of Pig in our example, but let's look at what else Pig Latin can do.
I'll give you some pointers, and challenge you to write your own Pig script that finds the most popular really bad movie!
Let's look at my code for finding the most popular bad movies, and you can compare my results to yours.
Programming Hadoop with Spark
What's so special about Spark? Learn how its efficiency and versatility make Apache Spark one of the hottest Hadoop-related technologies right now, and how it achieves this under the hood.
The core building block of Spark is the RDD (Resilient Distributed Dataset); learn how RDDs are used and the operations available on them.
As an example, let's write a Spark script to find the movie with the lowest average rating. We'll start by doing it just with RDDs.
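As a rough preview of the approach, here's a hedged sketch of an RDD-based solution; the HDFS path assumes the Hortonworks sandbox layout, and the details may differ from the course's own script.

```python
# Hypothetical sketch; the HDFS path is an assumption about the sandbox setup.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("WorstMovie")
sc = SparkContext(conf=conf)

lines = sc.textFile("hdfs:///user/maria_dev/ml-100k/u.data")

# Map each line to (movieID, (rating, 1.0)) so sums and counts ride together
ratings = lines.map(lambda line: line.split('\t')) \
               .map(lambda f: (int(f[1]), (float(f[2]), 1.0)))

# Add up ratings and counts per movie, then divide to get the average
totals = ratings.reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
averages = totals.mapValues(lambda v: v[0] / v[1])

# The movie with the lowest average rating
print(averages.sortBy(lambda pair: pair[1]).take(1))
```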
Spark 2.0 placed a new emphasis on Datasets and Spark SQL. Learn how Datasets can make your Spark scripts even faster and easier to write.
Let's revisit the previous problem of finding the lowest-rated movies, but this time using DataFrames.
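For comparison, here's a hedged sketch of what the DataFrame version can look like; the path and column handling are assumptions, not the course's exact code.

```python
# Hypothetical DataFrame version; path and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("WorstMovieDF").getOrCreate()

ratings = (spark.read.option("sep", "\t")
           .csv("hdfs:///user/maria_dev/ml-100k/u.data")
           .toDF("userID", "movieID", "rating", "timestamp")
           .withColumn("rating", F.col("rating").cast("float")))

worst = (ratings.groupBy("movieID")
         .agg(F.avg("rating").alias("avgRating"),
              F.count("rating").alias("numRatings"))
         .orderBy("avgRating"))

worst.show(10)
spark.stop()
```

Notice how much of the grouping and averaging boilerplate disappears compared to the RDD version; Spark's optimizer also gets more room to speed things up.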
As an example of the more complicated things Spark is capable of, we'll use Spark's machine learning library to produce movie recommendations using the ALS algorithm.
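To give a flavor of it, here's a minimal sketch using Spark's `pyspark.ml.recommendation.ALS`; the input path, column names, and default hyperparameters are assumptions rather than the course's exact setup.

```python
# Hypothetical sketch with pyspark.ml's ALS; path and columns are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("MovieRecs").getOrCreate()

ratings = (spark.read.option("sep", "\t")
           .csv("hdfs:///user/maria_dev/ml-100k/u.data")
           .toDF("userID", "movieID", "rating", "timestamp")
           .select(F.col("userID").cast("int"),
                   F.col("movieID").cast("int"),
                   F.col("rating").cast("float")))

# Train a collaborative filtering model on the full ratings set
als = ALS(userCol="userID", itemCol="movieID", ratingCol="rating")
model = als.fit(ratings)

# Produce the top 10 movie recommendations for every user
model.recommendForAllUsers(10).show(truncate=False)
spark.stop()
```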
As a very simple exercise, we'll build upon our earlier activity to filter the results by the number of ratings each movie received.
We'll review my solution to the previous exercise, and run the resulting scripts.
Using relational data stores with Hadoop
An introduction to Apache Hive and how it enables relational queries on HDFS-hosted data.
We'll import the MovieLens data set into Hive using the Ambari UI, and run a simple query to find the most popular movies.
Learn how Hive works under the hood of your Hadoop cluster, to efficiently query your data across a cluster using SQL commands. Well, technically it's HiveQL, but it will definitely seem familiar.
As a challenge, use this same Hive database to find the best-rated movie.
Compare your solution to mine for the exercise of finding the highest-rated movies using Hive.
A quick overview of MySQL and how it might fit into your Hadoop-based work.
Let's import the MovieLens data set into MySQL, and run a query to view the most popular movies, just to see that it's working.
Learn how Sqoop works as a way to transfer data from an existing RDBMS like MySQL into Hadoop.
Sqoop can also work the other way - let's build a new table with Hive and export it back into MySQL.
Using non-relational data stores with Hadoop
Learn why "NoSQL" databases are important for efficiently and scalably vending your data.
HBase is a NoSQL columnar data store that sits on top of Hadoop. Learn what it's for and how it works.
We'll import our movie ratings into HBase through a RESTful service interface, using a Python script running on our desktop to both populate and query the table.
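As a hedged illustration of the idea (the course's own script may differ), here's what talking to HBase's REST interface can look like with the `requests` package; the host, port, and table layout below are assumptions about your setup.

```python
# Hedged illustration only; host, port, and table layout are assumptions.
# HBase's REST API base64-encodes row keys, column names, and cell values.
import base64
import json
import requests

BASE = "http://127.0.0.1:8000"  # wherever the HBase REST server is exposed

def b64(s):
    return base64.b64encode(s.encode("utf-8")).decode("ascii")

# Store one rating: row key = userID, column = rating:<movieID>, value = rating
row = {"Row": [{
    "key": b64("1"),
    "Cell": [{"column": b64("rating:50"), "$": b64("5")}],
}]}
resp = requests.put(BASE + "/ratings/1", json=row,
                    headers={"Accept": "application/json"})
resp.raise_for_status()

# Read the row back (values come back base64-encoded too)
resp = requests.get(BASE + "/ratings/1", headers={"Accept": "application/json"})
print(json.dumps(resp.json(), indent=2))
```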
We'll see how Pig can integrate with HBase to store big data into it in a distributed manner.
Cassandra is a popular NoSQL database that is appropriate for vending data at massive scale outside of Hadoop.
Cassandra isn't part of the Hortonworks platform, so we'll need to install it ourselves.
We'll modify our HBase example to write results into a Cassandra database instead, and look at the results.
MongoDB is a popular alternative to Cassandra. Learn what's different about it.
We'll install MongoDB on our virtual machine using Ambari. Then, we'll study and run a script to load up a Spark DataFrame of user data, store it into MongoDB, and query MongoDB to get users under 20 years old.
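As a hedged sketch of what that script can look like with the MongoDB Spark connector (the connector's format string, config keys, URIs, and HDFS path below are assumptions about a Spark 2.x setup):

```python
# Hedged sketch assuming the Spark 2.x mongo-spark connector.
from pyspark.sql import SparkSession, Row

spark = (SparkSession.builder.appName("MongoIntegration")
         .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/movielens.users")
         .config("spark.mongodb.output.uri", "mongodb://127.0.0.1/movielens.users")
         .getOrCreate())

def parse_user(line):
    fields = line.split('|')  # u.user is pipe-delimited
    return Row(userID=int(fields[0]), age=int(fields[1]),
               gender=fields[2], occupation=fields[3], zip=fields[4])

lines = spark.sparkContext.textFile("hdfs:///user/maria_dev/ml-100k/u.user")
users = spark.createDataFrame(lines.map(parse_user))

# Write the DataFrame into MongoDB...
users.write.format("com.mongodb.spark.sql.DefaultSource").mode("append").save()

# ...then read it back and query for users under 20 years old
readback = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
readback.filter(readback.age < 20).show()
spark.stop()
```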
We'll query our movie user data using MongoDB's command line interface, and set up an index on it.
With so many options for choosing a database, how do you decide? We'll look at the requirements of a given problem, such as consistency, latency, and scalability, and how they can inform your decision. Then I'll challenge you to choose a database for a stock trading application.
In the previous lecture, I challenged you to choose a database for a stock trading application. Let's talk about my own thought process in this decision, and see if we reached the same conclusion.
Querying your Data Interactively
What is Drill and what problems does it solve?
We'll install Drill so we can play with it, after setting up Hive and MongoDB databases for it to work with.
We'll use Drill to execute a query that spans data on MongoDB and Hive at the same time!
What is Phoenix for? How does it work?
We'll get our hands dirty with Phoenix and use it to query our HBase database.
We'll use Phoenix with Pig to store and load MovieLens user data, and accelerate queries on it.
What is Presto, and how does it differ from Drill and Phoenix?
We'll install Presto, and issue some queries on Hive through it.
We'll configure Presto to also talk to the Cassandra database we set up earlier, and do a JOIN query that spans data in both Cassandra and Hive!
Managing your Cluster
Learn how YARN works in more depth as it controls and allocates the resources of your Hadoop cluster.
Like Spark, Tez also uses Directed Acyclic Graphs to optimize tasks on your cluster. Learn how it works, and how it's different.
As an example of the power of Tez, we'll execute a Hive query with and without it.
Mesos is an alternative cluster manager to Hadoop YARN. Learn how it differs, who uses Mesos, and why.
Zookeeper is a deceptively simple service for maintaining state across your cluster, such as which servers are in service, in a highly reliable manner. Learn how it works, and which systems depend on Zookeeper for reliable operation.
Let's use ZooKeeper's command line interface to explore how it works.
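The lecture sticks to the CLI, but to give a flavor of programmatic access, here's a small sketch using the third-party kazoo client; the host and znode names are made up for illustration.

```python
# Hedged illustration with the kazoo package; host and znode names are made up.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")  # ZooKeeper's default client port
zk.start()

# Ephemeral znodes disappear when the session ends, which is the trick
# behind master election and service discovery
if not zk.exists("/testmaster"):
    zk.create("/testmaster", b"server1", ephemeral=True)

data, stat = zk.get("/testmaster")
print("Current master:", data.decode(), "| znode version:", stat.version)

zk.stop()
```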
Oozie allows you to set up complex workflows on your cluster using multiple technologies, and schedule them. Let's look at some examples of how it works.
As a hands-on example, we'll use Oozie to import movie data into HDFS from MySQL using Sqoop, then analyze that data using Hive.
Apache Zeppelin provides a notebook-based environment for importing, transforming, and analyzing your data.
We'll set up a Zeppelin notebook to load movie ratings and titles into Spark dataframes, and interactively query and visualize them.
Hue is a popular alternative to Ambari views, especially on Cloudera platforms. Let's see what it offers and how it's different.
Let's talk about Chukwa and Ganglia, just so you know what they are.
Feeding Data to your Cluster
Learn how Kafka provides a scalable, reliable means for collecting data across a cluster of computers and broadcasting it for further processing.
We'll get Kafka running, and set it up to publish and consume some data from a new topic.
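The course drives Kafka from its command-line tools; purely as an illustration, here's what publishing and consuming can look like from Python with the third-party kafka-python package. The broker address and topic name are assumptions (HDP's sandbox typically exposes the broker on port 6667).

```python
# Illustration only; broker address and topic name are assumptions.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:6667")
producer.send("fortune", b"The first message on our new topic")
producer.flush()

consumer = KafkaConsumer("fortune",
                         bootstrap_servers="localhost:6667",
                         auto_offset_reset="earliest",
                         consumer_timeout_ms=5000)  # stop after 5s of silence
for message in consumer:
    print(message.value.decode("utf-8"))
```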
We'll simulate a web server by monitoring an Apache log file using a Kafka connector, and watch Kafka pick up new lines in it.
Flume is another way to publish logs from a cluster. Learn about Flume's architecture of sources, channels, and sinks, and how it differs from Kafka.
As a simple way to get started with Flume, we'll connect a source listening to a telnet connection to a sink that just logs information received.
As something closer to a real-world example, we'll configure Flume to monitor a directory on our local filesystem for new files, and publish their data into HDFS, organized by the time the data was received.
Analyzing Streams of Data
Spark Streaming allows you to write "continuous applications" that process micro-batches of information in real time. Learn how it works, and about DStreams, windowing, and the new Structured Streaming API.
We'll write and run a Spark Streaming application that analyzes web logs as they are streamed in from Flume.
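As a rough sketch of the idea, here's a minimal windowed URL counter; a socket source stands in for the course's Flume connector, and the field positions assume Apache's combined log format.

```python
# Hypothetical sketch; a socket source stands in for the Flume connector.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="LogAnalyzer")
ssc = StreamingContext(sc, batchDuration=1)  # 1-second micro-batches
ssc.checkpoint("/tmp/checkpoint")            # needed for windowed state

lines = ssc.socketTextStream("localhost", 9999)

def extract_url(line):
    fields = line.split()
    return fields[6] if len(fields) > 6 else "-"  # field 6 is the request URL

# Count hits per URL over a 5-minute window that slides every second
counts = (lines.map(extract_url)
          .map(lambda url: (url, 1))
          .reduceByKeyAndWindow(lambda a, b: a + b, lambda a, b: a - b,
                                windowDuration=300, slideDuration=1))

counts.transform(lambda rdd: rdd.sortBy(lambda x: x[1], ascending=False)).pprint()

ssc.start()
ssc.awaitTermination()
```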
As a challenge, extend the previous activity to look for status codes in the web log and aggregate how often different status codes appear. Also, let's fiddle with the slide interval.
Let's review my solution to the previous exercise, and run it.
Storm is an alternative to Spark Streaming. Learn how it differs as a true streaming solution, processing events individually rather than in micro-batches.
We'll walk through, and run, the word count topology sample included with Storm.
Apache Flink is an up-and-coming alternative to Storm that offers a higher-level API. Let's talk about what sets it apart.
Let's install Flink and run a simple example with it.
Designing Real-World Systems
Let's briefly cover other systems you may encounter or need to integrate with, including Impala, NiFi, Falcon, Accumulo, AWS, Kinesis, Redis, Ignite, Elasticsearch, and Slider.