4.29 out of 5
380 reviews on Udemy

Apache Spark Hands on Specialization for Big Data Analytics

In-depth course to master Apache Spark Development using Scala for Big Data (with 30+ real-world & hands-on examples)
Instructor:
Irfan Elahi
11,478 students enrolled
English [Auto-generated]
Understand the relationship between Apache Spark and Hadoop Ecosystem
Understand Apache Spark use-cases and advanced characteristics
Understand Apache Spark Architecture and how it works
Understand how Apache Spark on YARN (Hadoop) works in multiple modes
Understand development life-cycle of Apache Spark Applications in Python and Scala
Learn the foundations of Scala programming language
Understand Apache Spark's primary data abstraction (RDDs)
Understand and use RDDs advanced characteristics (e.g. partitioning)
Learn nuances in loading files in Hadoop Distributed File system in Apache Spark
Learn the implications of delimiters in text files and their processing in Spark
Create and use RDDs by parallelizing Scala collection objects, and understand the implications
Learn the usage of Spark and YARN Web UI to gain in-depth operational insights
Understand Spark's Directed Acyclic Graph (DAG) based execution model and its implications
Learn Transformations and their lazy execution semantics
Learn Map transformation and master its applications in real-world challenges
Learn Filter transformation and master its usage in real-world challenges
Learn Apache Spark's advanced Transformations and Actions
Learn and use RDDs of different JVM objects including collections and understanding critical nuances
Learn and use Apache Spark for statistical analysis
Learn and master Key Value Pair RDDs and their applications in complex Big Data problems
Learn and master Join Operations on complex Key Value Pair RDDs in Apache Spark
Learn how RDDs caching works and use it for advanced performance optimization
Learn how to use Apache Spark for Data Ranking problems
Learn how to use Apache Spark for handling and processing structured and unstructured data
Learn how to use Apache Spark for advanced Business Analytics
Learn how to use Apache Spark for advanced data integrity and quality checks
Learn how to use Scala's advanced features like functional programming and pattern matching
Learn how to use Apache Spark for logs processing

What if you could catapult your career in one of the most lucrative domains, i.e. Big Data, by learning a state-of-the-art Hadoop-ecosystem technology (Apache Spark) that is considered a must-have in current job postings in this industry?

What if you could develop your skill-set in one of the hottest Big Data technologies, Apache Spark, through one of the most comprehensive courses out there (with 10+ hours of content), packed with dozens of hands-on real-world examples, use-cases, challenges and best practices?

What if you could learn from an instructor who works at the world’s largest consultancy firm, has worked end-to-end on Australia’s biggest Big Data projects to date, and has a proven track record on Udemy with highly positive reviews and thousands of students already enrolled in his previous course(s)?

If you have such aspirations and goals, then you and this course are a match made in heaven!

Why Apache Spark?

Apache Spark has revolutionised and disrupted the way big data processing and machine learning are done by virtue of its unprecedented in-memory and optimised computational model. It has been widely hailed as the future of Big Data. It’s the tool of choice around the world, allowing data scientists, engineers and developers to acquire and process data for a number of use-cases such as scalable machine learning, stream processing and graph analytics, to name a few. Leading organisations like Amazon, eBay and Yahoo, among many others, have embraced this technology to address their Big Data processing requirements.

Additionally, Gartner has repeatedly highlighted Apache Spark as a leader in Data Science platforms. The certification programs of Hadoop vendors like Cloudera and Hortonworks, which are held in high esteem in the industry, have oriented their curricula to focus heavily on Apache Spark. Almost all of the jobs in the Big Data and Machine Learning space demand proficiency in Apache Spark.

This is what John Tripier, Alliances and Ecosystem Lead at Databricks has to say, “The adoption of Apache Spark by businesses large and small is growing at an incredible rate across a wide range of industries, and the demand for developers with certified expertise is quickly following suit”.

All of these facts support the notion that learning this amazing technology will give you a strong competitive edge in your career.

Why this course?

Firstly, this is the most comprehensive and in-depth course ever produced on Apache Spark. I’ve carefully and critically surveyed the resources out there, and almost all of them fail to cover this technology in the depth it truly deserves. Some lack coverage of Apache Spark’s theoretical concepts, like its architecture and how it works in conjunction with Hadoop; some fall short in thoroughly describing how to use the Apache Spark APIs optimally for complex big data problems; some ignore the hands-on aspects of Apache Spark programming on real-world use-cases; and almost all of them skip the industry best practices and the mistakes that many professionals make in the field.

This course addresses all of the limitations that are prevalent in currently available courses. Apart from that, as I have attended training from leading Big Data vendors like Cloudera (for which they charge thousands of dollars), I’ve ensured that the course is aligned with the educational patterns and best practices followed in that training, so that you get the best and most effective learning experience.

Each section of the course covers concepts in extensive detail and from scratch, so that you won’t face any challenges in learning even if you are new to this domain. Also, each section has an accompanying assignment section, where we will work together on a number of real-world challenges and use-cases employing real-world data-sets. The data-sets themselves belong to different niches, ranging from retail and web server logs to telecommunications, and some of them come from Kaggle (the world’s leading Data Science competition platform).

The course leverages Scala instead of Python. Wherever possible, reference to Python development is also given, but the course is primarily based on Scala. This decision was made for a number of rational reasons. Scala is the de-facto language for development in Apache Spark: Apache Spark itself is developed in Scala, and as a result all new features are made available in Scala first and only then in other languages like Python. Additionally, there is a significant performance difference when using Apache Spark with Scala compared to Python. Scala is also one of the highest-paid programming languages, and you will be developing strong skills in that language along the way as well.

The course also has a number of quizzes to further test your skills. For further support, you can always ask questions, to which you will get a prompt response. I will also be sharing best practices and tips with my students on a regular basis.

What are you going to learn in this course?

The course consists of two major sections:

  • Section – 1:

We’ll start off with an introduction to Apache Spark and understand its potential and business use-cases in the context of the overall Hadoop ecosystem. We’ll then focus on how Apache Spark actually works and take a deep dive into Spark’s architectural components, as a thorough understanding of these is crucial.

  • Section – 2:

After developing an understanding of Spark’s architecture, we will move to the next section of the course, where we will use the Scala language with Apache Spark’s APIs to develop distributed computation programs. Please note that you don’t need prior knowledge of Scala for this course, as I will start with the very basics of Scala; as a result, you will also be developing your skills in one of the highest-paying programming languages.

In this section, we will comprehensively understand how Spark performs distributed computation using abstractions like RDDs, the caveats in loading data into Apache Spark, the different ways to create RDDs, how to leverage parallelism, and much more.
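As a rough illustration of the two RDD-creation routes mentioned above, here is a minimal sketch; the app name, file path and `local[*]` master are placeholders, and in `spark-shell` the `sc` object is already provided for you:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Standalone apps build their own SparkContext; spark-shell provides `sc`.
val conf = new SparkConf().setAppName("rdd-basics").setMaster("local[*]")
val sc = new SparkContext(conf)

// Route 1: distribute an in-memory Scala collection across 4 partitions.
val nums = sc.parallelize(1 to 100, numSlices = 4)

// Route 2: load a text file from HDFS (placeholder URI) as an RDD of lines.
val lines = sc.textFile("hdfs:///data/sample.txt")

// RDD creation is lazy; an action such as count() triggers execution.
println(nums.count())   // prints 100
```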

Furthermore, as transformations and actions constitute the gist of Apache Spark’s APIs, it’s imperative to have a sound understanding of them. We will therefore focus on a number of Spark transformations and actions that are heavily used in industry, and will go into the details of each. Each API’s usage will be complemented with a series of real-world examples and datasets, e.g. retail, web server logs, customer churn, and datasets from Kaggle. Each section of the course will have a number of assignments where you will practically apply the learned concepts to further consolidate your skills.
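To give a flavour of the lazy-transformation versus eager-action split described above, here is a small sketch; the retail-style records are invented for illustration, and a `SparkContext` named `sc` (as provided by `spark-shell`) is assumed:

```scala
// Hypothetical retail lines in "invoice,product,quantity,unitPrice" form.
val raw = sc.parallelize(Seq(
  "536365,WHITE LANTERN,6,3.39",
  "536366,RED MUG,8,2.10",
  "536367,BLUE TEAPOT,2,4.95"
))

// Transformations are lazy: nothing executes until an action is called.
val bigOrders = raw
  .map(_.split(","))                                // parse each line
  .map(f => (f(1), f(2).toInt * f(3).toDouble))     // (product, line total)
  .filter { case (_, total) => total > 10.0 }       // keep larger orders

// collect() is an action: it triggers the DAG and returns results to the driver.
bigOrders.collect().foreach(println)
```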

A significant section of the course will also be dedicated to key-value RDDs, which form the basis of working optimally on a number of big data problems.
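As a taste of the key-value style, here is a hedged sketch; the (platform, sales) pairs are invented, and `sc` (as provided by `spark-shell`) is assumed:

```scala
// Invented (platform, sales-in-millions) pairs, in the spirit of the
// video-games assignment later in the course.
val sales = sc.parallelize(Seq(
  ("Wii", 41.49), ("NES", 29.08), ("Wii", 15.85), ("NES", 3.58)
))

// reduceByKey combines values per key, performing a map-side combine on
// each partition before the shuffle -- usually preferable to groupByKey.
val totalByPlatform = sales.reduceByKey(_ + _)

// A simple ranking: platforms ordered by total sales, highest first.
val ranked = totalByPlatform.sortBy(_._2, ascending = false).collect()
```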

In addition to covering the crux of the Spark APIs, I will also highlight a number of valuable best practices based on my experience and exposure, and will point out mistakes that many people make in the field. You will rarely find such information anywhere else.

Each topic will be covered in a lot of detail, with a strong emphasis on being hands-on, ensuring that you learn Apache Spark in the best possible way.

The course is applicable to and valid for all versions of Spark, i.e. 1.6 and 2.0.

After completing this course, you will develop a strong foundation and an extended skill-set for using Spark on complex big data processing tasks. Big data is one of the most lucrative career domains, in which data engineers command high salaries. This course will also substantially help in your job interviews. And if you are looking to excel further in your big data career by passing Hadoop certifications like those of Cloudera and Hortonworks, this course will prove extremely helpful in that context as well.

Lastly, once enrolled, you will have life-time access to the lectures and resources. It’s a self-paced course, and you can watch the lecture videos on any device, like a smartphone or laptop. You are also backed by Udemy’s rock-solid 30-day money-back guarantee. So if you are serious about learning Apache Spark, enrol in this course now and let’s start this amazing journey together!

Introduction

1
Breaking the Ice with Warm Welcome!
2
Course's Curriculum - Journey to the excellence!

Section 1 - Apache Spark Introduction and Architecture Deep Dive

1
Apache Spark in the context of Hadoop Evolution
2
Say Hello to Apache Spark - Thorough Dissemination of Capabilities
3
In-Depth Understanding of Spark's Ecosystem of High Level Libraries
4
Apache Spark and its integration within Enterprise Lambda Architecture
5
Apache Spark and where it fits in whole Hadoop Ecosystem

Working with Text Files to create Resilient Distributed Datasets (RDDs) in Spark

1
Setting up development Environment
2
Better Development Environment Employing Databricks - Part 1 (**New Lecture**)
3
Better Development Environment Employing Databricks - Part 2 (**New Lecture**)
4
Loading Text Files (in HDFS) in Spark to create RDDs
  • How to copy a text file from Linux File System to Hadoop Distributed File System (HDFS)
  • How to load text file in HDFS to Spark using Spark Context object's function
  • Analyzing the type of RDD created when RDD is loaded in Spark using SparkContext's function
5
Loading All Directory Files (in HDFS) simultaneously in Spark and implications
6
Loading Text Files (in HDFS) in Spark - Continued
7
Using Wildcards to selectively load text files (in HDFS) in Spark and use-cases
8
Real Life Challenge: Different Record Delimiters in Text Files in Spark
9
Solution: Handling Different Record Delimiters in Text Files in Spark
10
Quiz: Testing your understanding of how Spark works with Text Files

Creating RDDs by Distributing Scala Collections in Spark

1
The semantics and implications behind parallelizing Scala Collections
2
Hands-on: Distributing/Parallelizing Scala Collections

Understanding the Partitioning and Distributed Nature of RDDs in Spark

1
How Data gets Partitioned and Distributed in Spark Cluster
2
Accessing Hadoop YARN RM and AM Web UIs to understand RDDs Partitioning
3
Manually Changing Partitions of RDDs in Spark and Implications

Developing Mastery in Spark's Map Transformations and lazy DAG Execution Model

1
Demystifying Spark's Directed Acyclic Graph (DAG) and Lazy Execution Model
2
Introducing Map Transformation - the Swiss Army Knife of Transformations
3
Hands-on: Map Transformation via Scala's Functional Programming constructs
4
Understanding the Potential of Map Transformation to alter RDDs Types
5
Using Your Own Functions, in addition to Anonymous ones, in Map Transformations

Assignment - Using Map Transformation on Real World Big Data Retail Analytics

1
Introducing the Real World Online Retail Data-set and Assignment Challenges
2
Detailed Hands-on Comprehension of Assignment Challenges' Solutions
3
Conceptual Understanding of Distributing Scala Collections and Implications
4
Hands-on Understanding of Distributing Scala Collections and use-cases

Developing Mastery in Spark's Filter Transformation

1
Introducing Filter Transformation and its Powerful Use-Cases
2
Hands on: Spark's Filter Transformation in Action

Assignment - Using Filter and Map on Apache Web Server Logs and Retail Dataset

1
Introducing the Data-sets and Real-World Assignment Challenges
2
Challenge 1: Removing Empty Lines in Web Logs Data-set
3
Challenge 2: Removing Header Line in Retail Data-set
4
Challenge 3: Selecting rows in Retail Data-set Containing Specific Countries

Developing Mastery in RDD of Scala Collections

1
Introducing RDDs of Scala Collections and their Relational Analytics use-cases
2
Transforming Scala Collections using Functional Programming Constructs
3
Creating and Manipulating RDDs of Arrays of String from Different Data Sources

Assignment - Customer Churn Analytics using Apache Spark

1
Introducing the Context, Challenges and Data-set of Customer Churn Use-Case
2
Challenge 1: Finding Number of Unique States in the Data-set
3
Challenge 2: Performing Data Integrity Check on Individual Columns of Data-Set
4
Challenge 3: Finding Summary Statistics on number of Voice Mail Messages
5
Challenge 4: Finding Summary Statistics on Voice Mail in Selected States
6
Challenge 5: Finding Average Value of Total Night Calls Minutes
7
Challenge 6: Finding conditioned Total day calls for customers
8
Challenge 7: Using Scala Functions and Pattern Matching for advanced processing
9
Challenge 8: Finding Churned Customers with International and Voice Mail Plan
10
Challenge 9: Performing Data Quality and Type Checks on Individual Columns

Developing Mastery in Spark's Key-Value (Pair) RDDs

1
Introduction
2
Developing Intuition for Solving Big Data Problems using KeyValue Pair Construct
3
Developing Hands-on Understanding of working with KeyValue RDDs in Spark
4
Proof - Transformations' exclusivity to KeyValue RDDs
5
Transforming Text File Data to Pair RDDs for KeyValue based Data Processing
6
The Case of Different Data Types of "Values" in KeyValue RDDs
7
Transforming Complex Delimited Text File to Pair RDDs for KeyValue Processing

Assignment - Analyzing Video Games (Kaggle Dataset) using Spark's KeyValue RDDs

1
Challenge 1: Determining Frequency Distribution of Video Games Platforms
2
Challenge 2: Finding Total Sales of Each Video Games Platform
3
Challenge 3: Finding Global Sales of Video Games Platform
4
Challenge 4: Maximum Sales Value of Each Gaming Console
5
Challenge 5: Data Ranking - Top 10 platforms by global sales

Developing Mastery in Join Operations on Key Value Pair RDDs in Apache Spark

1
Introducing Join Operations on Relational Data with Examples
2
Getting started with join operation in Spark with Key Value Pair RDDs
3
Working towards complex Join Operations in Apache Spark with advanced indexing

Assignment - A Real Life Relational Dataset about Retail Customers

1
Setting context and developing understanding of relationships in the dataset
2
Challenge 1 - Top 5 states with Most Orders' Status as Cancelled
3
Challenge 2 - Top 5 Cities from CA State with Orders Status as Cancelled

Apache Spark - Advanced Concepts

1
Introducing Caching in RDDs, Motivation and Relation to DAG Based Execution
2
Caching and Persistence in RDDs in Action
3
Technique: Finding and Filtering Dirty Records in Data-Set using Apache Spark

Bonus Section

1
My lecture to University of Tromso students - When Databases Meet Hadoop
2
Bonus Lecture: Exceptional Discount on My Course(s)/Book(s)
Detailed Rating: 4.3 out of 5 (380 ratings)

5 stars: 218
4 stars: 89
3 stars: 40
2 stars: 19
1 star: 14
30-Day Money-Back Guarantee

Includes

12 hours on-demand video
3 articles
Full lifetime access
Access on mobile and TV
Certificate of Completion