Rated 4.54 out of 5 (14,246 reviews on Udemy)

The Ultimate Hands-On Hadoop – Tame your Big Data!

Hadoop tutorial with MapReduce, HDFS, Spark, Flink, Hive, HBase, MongoDB, Cassandra, Kafka + more! Over 25 technologies.
Instructor: Sundog Education by Frank Kane
74,113 students enrolled
Language: English
  • Design distributed systems that manage “big data” using Hadoop and related technologies
  • Use HDFS and MapReduce for storing and analyzing data at scale
  • Use Pig and Spark to create scripts to process data on a Hadoop cluster in more complex ways
  • Analyze relational data using Hive and MySQL
  • Analyze non-relational data using HBase, Cassandra, and MongoDB
  • Query data interactively with Drill, Phoenix, and Presto
  • Choose an appropriate data storage technology for your application
  • Understand how Hadoop clusters are managed by YARN, Tez, Mesos, Zookeeper, Zeppelin, Hue, and Oozie
  • Publish data to your Hadoop cluster using Kafka, Sqoop, and Flume
  • Consume streaming data using Spark Streaming, Flink, and Storm

The world of Hadoop and “Big Data” can be intimidating – hundreds of different technologies with cryptic names form the Hadoop ecosystem. With this Hadoop tutorial, you’ll not only understand what those systems are and how they fit together – but you’ll go hands-on and learn how to use them to solve real business problems!

Learn and master the most popular big data technologies in this comprehensive course, taught by a former engineer and senior manager from Amazon and IMDb. We’ll go way beyond Hadoop itself, and dive into all sorts of distributed systems you may need to integrate with.

  • Install and work with a real Hadoop installation right on your desktop with Hortonworks (now part of Cloudera) and the Ambari UI

  • Manage big data on a cluster with HDFS and MapReduce

  • Write programs to analyze data on Hadoop with Pig and Spark

  • Store and query your data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto

  • Design real-world systems using the Hadoop ecosystem

  • Learn how your cluster is managed with YARN, Mesos, Zookeeper, Oozie, Zeppelin, and Hue

  • Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm

Understanding Hadoop is a highly valuable skill for anyone working at companies with large amounts of data.

Almost every large company you might want to work at uses Hadoop in some way, including Amazon, eBay, Facebook, Google, LinkedIn, IBM, Spotify, Twitter, and Yahoo! And it’s not just technology companies that need Hadoop; even the New York Times uses Hadoop for processing images.

This course is comprehensive, covering over 25 different technologies in over 14 hours of video lectures. It’s filled with hands-on activities and exercises, so you get some real experience in using Hadoop – it’s not just theory.

You’ll find a range of activities in this course for people at every level. If you’re a project manager who just wants to learn the buzzwords, there are web UIs for many of the activities in the course that require no programming knowledge. If you’re comfortable with command lines, we’ll show you how to work with them too. And if you’re a programmer, I’ll challenge you with writing real scripts on a Hadoop system using Scala, Pig Latin, and Python.

You’ll walk away from this course with a real, deep understanding of Hadoop and its associated distributed systems, and the ability to apply Hadoop to real-world problems. Plus, a valuable completion certificate is waiting for you at the end!

Please note that the focus of this course is on application development, not Hadoop administration, although you will pick up some administration skills along the way.

Knowing how to wrangle “big data” is an incredibly valuable skill for today’s top tech employers. Don’t be left behind – enroll now!

  • “The Ultimate Hands-On Hadoop… was a crucial discovery for me. I supplemented your course with a bunch of literature and conferences until I managed to land an interview. I can proudly say that I landed a job as a Big Data Engineer around a year after I started your course. Thanks so much for all the great content you have generated and the crystal clear explanations.” – Aldo Serrano

  • “I honestly wouldn’t be where I am now without this course. Frank makes the complex simple by helping you through the process every step of the way. Highly recommended and worth your time, especially the Spark environment. This course helped me achieve a far greater understanding of the environment and its capabilities.” – Tyler Buck

Learn all the buzzwords! And install the Hortonworks Data Platform Sandbox.

1
Udemy 101: Getting the Most From This Course

How to ask questions, tune the video playback, enable captions, and leave reviews.

2
Tips for Using This Course
3
If you have trouble downloading Hortonworks Data Platform...
4
Installing Hadoop [Step by Step]

After a quick intro, we'll dive right in and install Hortonworks Sandbox in a virtual machine right on your own PC. This is the quickest way to get up and running with Hadoop so you can start learning and experimenting with it. We'll then download some real movie ratings data, and use Hive to analyze it!

5
Hadoop Overview and History

What's Hadoop for? What problems does it solve? Where did it come from? We'll learn Hadoop's backstory in this lecture.

6
Overview of the Hadoop Ecosystem

We'll take a quick tour of all the technologies we'll cover in this course, and how they all fit together. You'll come out of this lecture knowing all the buzzwords!

Using Hadoop's Core: HDFS and MapReduce

1
HDFS: What it is, and how it works

Learn how Hadoop's Distributed Filesystem allows you to store massive data sets across a cluster of commodity computers, in a reliable and scalable manner.

2
Installing the MovieLens Dataset

Before we can analyze movie ratings data from GroupLens using Hadoop, we need to load it into HDFS. You don't need to mess with command lines or programming to use HDFS. We'll start by importing some real movie ratings data into HDFS just using a web-based UI provided by Ambari.

3
[Activity] Install the MovieLens dataset into HDFS using the command line

Developers might be more comfortable interacting with HDFS via the command line interface. We'll import the same data, this time from a terminal prompt.

4
MapReduce: What it is, and how it works

Learn how mappers and reducers provide a clever way to analyze massive distributed datasets quickly and reliably.

5
How MapReduce distributes processing

Learn what makes MapReduce so powerful, by horizontally scaling across a cluster of computers.

6
MapReduce example: Break down movie ratings by rating score

Let's look at a very simple example of MapReduce - counting how many of each rating type exists in our movie ratings data.

7
[Activity] Installing Python, MRJob, and nano

The quickest and easiest way to get started with MapReduce is by using Python's MRJob package, which uses Hadoop's streaming feature to let you write MapReduce code in Python instead of Java. Let's get set up.

8
[Activity] Code up the ratings histogram MapReduce job and run it

We'll study our code for building a breakdown of movie ratings, and actually run it on your system!
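To give you a feel for the kind of code involved, here's a minimal sketch of a ratings-histogram job with MRJob. It assumes MovieLens' u.data layout (tab-separated userID, movieID, rating, timestamp); the class and variable names are just illustrative, not necessarily the course's exact script.

```python
from mrjob.job import MRJob

class RatingsHistogram(MRJob):
    def mapper(self, _, line):
        # u.data is tab-separated: userID, movieID, rating, timestamp
        user_id, movie_id, rating, timestamp = line.split('\t')
        yield rating, 1

    def reducer(self, rating, counts):
        # Total up how many times each rating score appears
        yield rating, sum(counts)

if __name__ == '__main__':
    RatingsHistogram.run()
```

You can run something like this locally with `python RatingsHistogram.py u.data`, or add `-r hadoop` to submit it to a real cluster.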

9
[Exercise] Rank movies by their popularity

As a challenge, see if you can write your own MapReduce script that sorts movies by how many ratings they received. I'll give you some hints, set you off, and then review my solution to the problem.

10
[Activity] Check your results against mine!

Let's see how I solved the challenge from the previous lecture - we'll change our script to count movies instead of ratings, and then review and run my solution for sorting by rating count.
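One possible shape for a solution (not necessarily the lecture's exact script) chains two MapReduce steps with MRJob's MRStep: the first counts ratings per movie, and the second leans on the shuffle's sorting to order the counts. Zero-padding the count is a classic trick so string order matches numeric order.

```python
from mrjob.job import MRJob
from mrjob.step import MRStep

class MoviesByPopularity(MRJob):
    def steps(self):
        return [
            MRStep(mapper=self.mapper_get_movies,
                   reducer=self.reducer_count_ratings),
            MRStep(reducer=self.reducer_output_sorted),
        ]

    def mapper_get_movies(self, _, line):
        user_id, movie_id, rating, timestamp = line.split('\t')
        yield movie_id, 1

    def reducer_count_ratings(self, movie_id, counts):
        # Zero-pad so the shuffle's string sort matches numeric order
        yield str(sum(counts)).zfill(5), movie_id

    def reducer_output_sorted(self, count, movie_ids):
        for movie_id in movie_ids:
            yield movie_id, int(count)

if __name__ == '__main__':
    MoviesByPopularity.run()
```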

Programming Hadoop with Pig

1
Introducing Ambari

Ambari is Hortonworks' web-based UI (similar to Hue, which Cloudera uses). We can use it as an easy way to experiment with Pig, so let's take a closer look at it before moving ahead.

2
Introducing Pig

An overview of what Pig is used for, who it's for, and how it works.

3
Example: Find the oldest movie with a 5-star rating using Pig

We'll use Pig to script a chain of queries on MovieLens to solve a more complex problem.

4
[Activity] Find old 5-star movies with Pig

Let's actually run our example from the previous lecture on your Hadoop sandbox, and find some good, old movies!

5
More Pig Latin

We covered most of the basics of Pig in our example, but let's look at what else Pig Latin can do.

6
[Exercise] Find the most-rated one-star movie

I'll give you some pointers, and challenge you to write your own Pig script that finds the most popular really bad movie!

7
Pig Challenge: Compare Your Results to Mine!

Let's look at my code for finding the most popular bad movies, and you can compare my results to yours.

Programming Hadoop with Spark

1
Why Spark?

What's so special about Spark? Learn how its efficiency and versatility make Apache Spark one of the hottest Hadoop-related technologies right now, and how it achieves this under the hood.

2
The Resilient Distributed Dataset (RDD)

The core building block of Spark is the RDD; learn how RDDs are used and the functions available on them.

3
[Activity] Find the movie with the lowest average rating - with RDDs

As an example, let's write a Spark script to find the movie with the lowest average rating. We'll start by doing it just with RDDs.
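In PySpark terms, the approach might look like the sketch below: map each rating to a (movieID, (rating, 1.0)) pair, reduce to per-movie sums, then divide. The HDFS path assumes the sandbox's maria_dev account; adjust it to wherever you loaded ml-100k.

```python
from pyspark import SparkConf, SparkContext

def parse_line(line):
    fields = line.split('\t')  # u.data: userID, movieID, rating, timestamp
    return (int(fields[1]), (float(fields[2]), 1.0))

conf = SparkConf().setAppName("WorstMovies")
sc = SparkContext(conf=conf)

ratings = sc.textFile("hdfs:///user/maria_dev/ml-100k/u.data").map(parse_line)

# Sum up (ratingTotal, count) per movie, then divide to get the average
sums = ratings.reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
averages = sums.mapValues(lambda v: v[0] / v[1])

for movie_id, avg in averages.sortBy(lambda x: x[1]).take(10):
    print(movie_id, avg)
```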

4
Datasets and Spark 2.0

Spark 2.0 placed a new emphasis on Datasets and SparkSQL. Learn how Datasets can make your Spark scripts even faster and easier to write.

5
[Activity] Find the movie with the lowest average rating - with DataFrames

Let's revisit the previous problem of finding the lowest-rated movies, but this time using DataFrames.
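With DataFrames, the same computation collapses into a groupBy and an aggregate. Here's a rough sketch (the schema and path are assumptions, as before):

```python
from pyspark.sql import SparkSession, Row, functions as F

spark = SparkSession.builder.appName("WorstMoviesDF").getOrCreate()

def parse_line(line):
    fields = line.split('\t')
    return Row(movieID=int(fields[1]), rating=float(fields[2]))

lines = spark.sparkContext.textFile("hdfs:///user/maria_dev/ml-100k/u.data")
ratings = spark.createDataFrame(lines.map(parse_line))

# Average rating per movie, worst first
worst = (ratings.groupBy("movieID")
         .agg(F.avg("rating").alias("avgRating"),
              F.count("rating").alias("numRatings"))
         .orderBy("avgRating"))
worst.show(10)

spark.stop()
```

The numRatings column is also the natural hook for the later exercise on filtering out rarely-rated movies.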

6
[Activity] Movie recommendations with MLLib

As an example of the more complicated things Spark is capable of, we'll use Spark's machine learning library to produce movie recommendations using the ALS algorithm.
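For flavor, here's roughly what ALS looks like through Spark's DataFrame-based ML API; the column names and DDL-style schema string are assumptions for the MovieLens layout, not the lecture's exact script.

```python
from pyspark.ml.recommendation import ALS
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("MovieRecs").getOrCreate()

# Read the tab-separated ratings file with an explicit schema
ratings = spark.read.option("sep", "\t").csv(
    "hdfs:///user/maria_dev/ml-100k/u.data",
    schema="userID INT, movieID INT, rating FLOAT, ts LONG")

# Train a collaborative-filtering model on the full ratings set
als = ALS(userCol="userID", itemCol="movieID", ratingCol="rating")
model = als.fit(ratings)

# Top-10 recommendations for every user (available in Spark 2.2+)
model.recommendForAllUsers(10).show(5, truncate=False)

spark.stop()
```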

7
[Exercise] Filter the lowest-rated movies by number of ratings

As a very simple exercise, we'll build upon our earlier activity to filter the results by movies with a given number of ratings.

8
[Activity] Check your results against mine!

We'll review my solution to the previous exercise, and run the resulting scripts.

Using relational data stores with Hadoop

1
What is Hive?

An introduction to Apache Hive and how it enables relational queries on HDFS-hosted data.

2
[Activity] Use Hive to find the most popular movie

We'll import the MovieLens data set into Hive using the Ambari UI, and run a simple query to find the most popular movies.

3
How Hive works

Learn how Hive works under the hood of your Hadoop cluster, to efficiently query your data across a cluster using SQL commands. Well, technically it's HiveQL, but it will definitely seem familiar.

4
[Exercise] Use Hive to find the movie with the highest average rating

As a challenge, use this same Hive database to find the best-rated movie.

5
Compare your solution to mine.

Compare your solution to mine for the exercise of finding the highest-rated movies using Hive.

6
Integrating MySQL with Hadoop

A quick overview of MySQL and how it might fit into your Hadoop-based work.

7
[Activity] Install MySQL and import our movie data

Let's import the MovieLens data set into MySQL, and run a query to view the most popular movies just to see that it's working.

8
[Activity] Use Sqoop to import data from MySQL to HDFS/Hive

Learn how Sqoop works as a way to transfer data from an existing RDBMS like MySQL into Hadoop.

9
[Activity] Use Sqoop to export data from Hadoop to MySQL

Sqoop can also work the other way - let's build a new table with Hive and export it back into MySQL.

Using non-relational data stores with Hadoop

1
Why NoSQL?

Learn why "NoSQL" databases are important for efficiently and scalably vending your data.

2
What is HBase?

HBase is a NoSQL columnar data store that sits on top of Hadoop. Learn what it's for and how it works.

3
[Activity] Import movie ratings into HBase

We'll import our movie ratings into HBase through a RESTful service interface, using a Python script running on our desktop to both populate and query the table.
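As a taste of what that REST interaction can look like, here's a hedged sketch using the requests package. HBase's REST gateway wants column names and values base64-encoded in its JSON format; the table name, row key, column, and port below are all assumptions for illustration, not the course's exact setup.

```python
import base64
import requests

BASE = "http://127.0.0.1:8000"  # hypothetical REST gateway address

def b64(value):
    return base64.b64encode(str(value).encode()).decode()

# Store one cell: table 'users', row key '1', column rating:50, value 5
row = {"Row": [{
    "key": b64("1"),
    "Cell": [{"column": b64("rating:50"), "$": b64("5")}],
}]}
resp = requests.post(BASE + "/users/1", json=row,
                     headers={"Accept": "application/json"})
resp.raise_for_status()

# Read the whole row back as JSON (values come back base64-encoded too)
print(requests.get(BASE + "/users/1",
                   headers={"Accept": "application/json"}).json())
```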

4
[Activity] Use HBase with Pig to import data at scale.

We'll see how HBase can integrate with Pig to store big data into HBase in a distributed manner.

5
Cassandra overview

Cassandra is a popular NoSQL database that's appropriate for vending data at massive scale outside of Hadoop.

6
[Activity] Installing Cassandra

Cassandra isn't a part of Hortonworks, so we'll need to install it ourselves.

7
[Activity] Write Spark output into Cassandra

We'll modify our HBase example to write results into a Cassandra database instead, and look at the results.

8
MongoDB overview

MongoDB is a popular alternative to Cassandra. Learn what's different about it.

9
[Activity] Install MongoDB, and integrate Spark with MongoDB

We'll install MongoDB on our virtual machine using Ambari. Then, we'll study and run a script to load up a Spark DataFrame of user data, store it into MongoDB, and query MongoDB to get users under 20 years old.
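Outside of Spark, that same under-20 query is only a few lines with pymongo; the database, collection, and field names below are assumptions matching the MovieLens users data.

```python
from pymongo import MongoClient

# Assumes MongoDB is reachable on its default port
client = MongoClient("mongodb://127.0.0.1:27017")
db = client["movielens"]  # hypothetical database name

# Find users under 20 years old, as in the Spark example
for user in db.users.find({"age": {"$lt": 20}}).limit(5):
    print(user)

# An index makes repeated lookups on user_id cheap (see the next lecture)
db.users.create_index("user_id")
```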

10
[Activity] Using the MongoDB shell

We'll query our movie user data using MongoDB's command line interface, and set up an index on it.

11
Choosing a database technology

With so many options for choosing a database, how do you decide? We'll look at the requirements of given problems, such as consistency, latency, and scalability, and how that can inform your decision.

12
[Exercise] Choose a database for a given problem

In the previous lecture, I challenged you to choose a database for a stock trading application. Let's talk about my own thought process in this decision, and see if we reached the same conclusion.

Querying your Data Interactively

1
Overview of Drill

What is Drill and what problems does it solve?

2
[Activity] Setting up Drill

We'll install Drill so we can play with it, after setting up Hive and MongoDB databases to work with.

3
[Activity] Querying across multiple databases with Drill

We'll use Drill to execute a query that spans data on MongoDB and Hive at the same time!

4
Overview of Phoenix

What is Phoenix for? How does it work?

5
[Activity] Install Phoenix and query HBase with it

We'll get our hands dirty with Phoenix and use it to query our HBase database.
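If you'd rather drive Phoenix from Python than from its sqlline shell, the phoenixdb package talks to the Phoenix Query Server. This sketch assumes the query server is running on its default port 8765 and uses Phoenix's classic us_population sample table; none of it is the lecture's exact script.

```python
import phoenixdb  # pip install phoenixdb

conn = phoenixdb.connect("http://127.0.0.1:8765/", autocommit=True)
cur = conn.cursor()

cur.execute("CREATE TABLE IF NOT EXISTS us_population ("
            "  state CHAR(2) NOT NULL,"
            "  city VARCHAR NOT NULL,"
            "  population BIGINT"
            "  CONSTRAINT my_pk PRIMARY KEY (state, city))")

# Phoenix uses UPSERT rather than INSERT
cur.execute("UPSERT INTO us_population VALUES ('NY', 'New York', 8143197)")

cur.execute("SELECT * FROM us_population")
print(cur.fetchall())
```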

6
[Activity] Integrate Phoenix with Pig

We'll use Phoenix with Pig to store and load MovieLens users data, and accelerate queries on it.

7
Overview of Presto

What is Presto, and how does it differ from Drill and Phoenix?

8
[Activity] Install Presto, and query Hive with it.

We'll install Presto, and issue some queries on Hive through it.

9
[Activity] Query both Cassandra and Hive using Presto.

We'll configure Presto to also talk to our Cassandra database that we set up earlier, and do a JOIN query that spans both data in Cassandra and Hive!
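To make the cross-store idea concrete, here's a sketch using the presto-python-client package; the host, port, user, and the hive/cassandra catalog, schema, and table names are all assumptions standing in for whatever you configured.

```python
import prestodb  # pip install presto-python-client

conn = prestodb.dbapi.connect(host="127.0.0.1", port=8080, user="demo",
                              catalog="hive", schema="default")
cur = conn.cursor()

# One query joining data that lives in two entirely different stores
cur.execute("""
    SELECT u.occupation, count(*) AS num_ratings
    FROM hive.movielens.ratings r
    JOIN cassandra.movielens.users u ON r.user_id = u.user_id
    GROUP BY u.occupation
""")
print(cur.fetchall())
```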

Managing your Cluster

1
YARN explained

Learn how YARN works in more depth as it controls and allocates the resources of your Hadoop cluster.

2
Tez explained

Like Spark, Tez also uses Directed Acyclic Graphs to optimize tasks on your cluster. Learn how it works, and how it's different.

3
[Activity] Use Hive on Tez and measure the performance benefit

As an example of the power of Tez, we'll execute a Hive query with and without it.

4
Mesos explained

Mesos is an alternative cluster manager to Hadoop YARN. Learn how it differs, who uses Mesos, and why.

5
ZooKeeper explained

Zookeeper is a deceptively simple service for maintaining state across your cluster, such as which servers are in service, in a highly reliable manner. Learn how it works, and what systems depend on Zookeeper for reliable operation.

6
[Activity] Simulating a failing master with ZooKeeper

Let's use ZooKeeper's command line interface to explore how it works.
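The same idea is easy to see from Python with the kazoo client: an ephemeral znode disappears the moment its creator's session dies, which is exactly how a failed master gets detected. The host, path, and data below are assumptions for illustration.

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")  # ZooKeeper's default port
zk.start()

# An ephemeral node only lives as long as this session does
zk.create("/testmaster", b"master1.example.com:1234",
          ephemeral=True, makepath=True)

data, stat = zk.get("/testmaster")
print("Current master:", data.decode())

# Closing the session deletes the ephemeral node; any watchers are
# notified, and a standby master could then claim /testmaster itself
zk.stop()
```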

7
Oozie explained

Oozie allows you to set up complex workflows on your cluster using multiple technologies, and schedule them. Let's look at some examples of how it works.

8
[Activity] Set up a simple Oozie workflow

As a hands-on example, we'll use Oozie to import movie data into HDFS from MySQL using Sqoop, then analyze that data using Hive.

9
Zeppelin overview

Apache Zeppelin provides a notebook-based environment for importing, transforming, and analyzing your data.

10
[Activity] Use Zeppelin to analyze movie ratings, part 1

We'll set up a Zeppelin notebook to load movie ratings and titles into Spark dataframes, and interactively query and visualize them.
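A Zeppelin paragraph for this kind of analysis might look like the following sketch (the interpreter binding, HDFS path, and column names are assumptions); registering a temp view is what lets later %sql paragraphs chart the data interactively.

```python
%pyspark
# Load ratings into a DataFrame inside a Zeppelin notebook paragraph
lines = sc.textFile("hdfs:///user/maria_dev/ml-100k/u.data")
ratings = (lines.map(lambda l: l.split('\t'))
                .map(lambda f: (int(f[1]), float(f[2])))
                .toDF(["movieID", "rating"]))

# Expose it to the %sql interpreter for interactive charts
ratings.createOrReplaceTempView("ratings")
# A following paragraph could then run:
#   %sql SELECT movieID, count(*) AS cnt FROM ratings GROUP BY movieID ORDER BY cnt DESC
```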

11
[Activity] Use Zeppelin to analyze movie ratings, part 2

Continuing from part 1, we'll keep working in our Zeppelin notebook, interactively querying and visualizing the movie ratings and titles we loaded into Spark dataframes.

12
Hue overview

Hue is a popular alternative to Ambari views, especially on Cloudera platforms. Let's see what it offers and how it's different.

13
Other technologies worth mentioning

Let's talk about Chukwa and Ganglia, just so you know what they are.

Feeding Data to your Cluster

1
Kafka explained

Learn how Kafka provides a scalable, reliable means for collecting data across a cluster of computers and broadcasting it for further processing.

2
[Activity] Setting up Kafka, and publishing some data.

We'll get Kafka running, and set it up to publish and consume some data from a new topic.
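From Python, the kafka-python package makes the publish/consume round trip only a few lines; the broker address and topic name here are assumptions (stock Kafka listens on 9092, though sandbox builds have used other ports).

```python
from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

BROKER = "localhost:9092"  # adjust for your sandbox

producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send("test-topic", b"hello from the producer")
producer.flush()

consumer = KafkaConsumer("test-topic",
                         bootstrap_servers=BROKER,
                         auto_offset_reset="earliest",
                         consumer_timeout_ms=5000)  # stop after 5 idle seconds
for message in consumer:
    print(message.value.decode())
```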

3
[Activity] Publishing web logs with Kafka

We'll simulate a web server by monitoring an Apache log file with a Kafka connector, and watch Kafka pick up new lines in it.

4
Flume explained

Flume is another way to publish logs from a cluster. Learn about sinks and Flume's architecture, and how it differs from Kafka.

5
[Activity] Set up Flume and publish logs with it.

As a simple way to get started with Flume, we'll connect a source listening to a telnet connection to a sink that just logs information received.

6
[Activity] Set up Flume to monitor a directory and store its data in HDFS

As something closer to a real-world example, we'll configure Flume to monitor a directory on our local filesystem for new files, and publish their data into HDFS, organized by the time the data was received.

Analyzing Streams of Data

1
Spark Streaming: Introduction

Spark Streaming allows you to write "continuous applications" that process micro-batches of information in real time. Learn how it works, and learn about DStreams, windowing, and the new Structured Streaming API.

2
[Activity] Analyze web logs published with Flume using Spark Streaming

We'll write and run a Spark Streaming application that analyzes web logs as they are streamed in from Flume.
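A rough PySpark sketch of the idea follows. Recent Spark releases dropped the built-in Flume receiver, so this stand-in reads from a socket instead; the regex assumes Apache's combined log format, and the window and slide values are just for illustration.

```python
import re
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="LogStream")
ssc = StreamingContext(sc, 1)      # 1-second micro-batches
ssc.checkpoint("/tmp/checkpoint")  # required for windowed state

# Stand-in source; the lecture wires this up to Flume instead
lines = ssc.socketTextStream("localhost", 9999)

# Pull the HTTP status code out of each log line, e.g. '... HTTP/1.1" 200 ...'
statuses = lines.flatMap(lambda line: re.findall(r'" (\d{3}) ', line))

counts = statuses.map(lambda s: (s, 1)).reduceByKeyAndWindow(
    lambda a, b: a + b,   # add counts from new batches entering the window
    lambda a, b: a - b,   # subtract counts from old batches leaving it
    300, 1)               # 5-minute window, 1-second slide
counts.pprint()

ssc.start()
ssc.awaitTermination()
```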

3
[Exercise] Monitor Flume-published logs for errors in real time

As a challenge, extend the previous activity to look for status codes in the web log and aggregate how often different status codes appear. Also, let's fiddle with the slide interval.

4
Exercise solution: Aggregating HTTP access codes with Spark Streaming

Let's review my solution to the previous exercise, and run it.

5
Apache Storm: Introduction

Storm is an alternative to Spark Streaming. Learn how it differs, and how it's a true event-at-a-time streaming solution.

6
[Activity] Count words with Storm

We'll walk through, and run, the word count topology sample included with Storm.

7
Flink: An Overview

Apache Flink is an up-and-coming alternative to Storm that offers a higher-level API. Let's talk about what sets it apart.

8
[Activity] Counting words with Flink

Let's install Flink and run a simple example with it.

Designing Real-World Systems

1
The Best of the Rest

Let's briefly cover other systems you may encounter or need to integrate with, including Impala, NiFi, Falcon, Accumulo, AWS, Kinesis, Redis, Ignite, Elasticsearch, and Slider.

4.5 out of 5 (14,246 ratings)

Detailed Rating

5 stars: 8,062
4 stars: 4,810
3 stars: 1,077
2 stars: 177
1 star: 120
30-Day Money-Back Guarantee

Includes

15 hours on-demand video
2 articles
Full lifetime access
Access on mobile and TV
Certificate of Completion