4 out of 5 (10 reviews on Udemy)

Hands-On PySpark for Big Data Analysis

Use PySpark to productionize analytics over Big Data and easily crush messy data at scale
Instructor: Packt Publishing
31 students enrolled
English [Auto-generated]
Work on real-life messy datasets with PySpark to get practical Big Data experience
Design for both offline and online use cases with Spark Notebooks to increase productivity
Analyse and discover patterns with Spark SQL to improve your business intelligence
Get rapid-fire feedback with PySpark’s interactive shell to speed up development time
Quickly iterate through your solution by setting up PySpark for your own computer
Use Spark Notebooks to quickly iterate through your new ideas

Data is an incredible asset, especially when there is a lot of it. Exploratory data analysis, business intelligence, and machine learning all depend on processing and analyzing Big Data at scale.

How do you go from working on prototypes on your local machine, to handling messy data in production and at scale? 

This is a practical, hands-on course that shows you how to use Spark and its Python API to create performant analytics with large-scale data. Don’t reinvent the wheel, and wow your clients by building robust and responsible applications on Big Data.

About the Author

Colibri Digital is a technology consultancy company founded in 2015 by James Cross and Ingrid Funie. The company works to help their clients navigate the rapidly changing and complex world of emerging technologies, with deep expertise in areas such as Big Data, Data Science, Machine Learning, and Cloud Computing. Over the past few years, they have worked with some of the world’s largest and most prestigious companies, including a tier 1 investment bank, a leading management consultancy group, and one of the world’s most popular soft drinks companies, helping each of them to better make sense of their data, and process it in more intelligent ways.

The company lives by their motto: Data -> Intelligence -> Action.

Rudy Lai is the founder of QuantCopy, a sales acceleration startup that uses AI to write sales emails to prospects. By taking in leads from your pipelines, QuantCopy researches them online and generates sales emails from that data. It also has a suite of email automation tools to schedule, send, and track email performance – key analytics that all feed back into how its AI generates content.

Prior to founding QuantCopy, Rudy ran HighDimension.IO, a machine learning consultancy, where he experienced first-hand the frustrations of outbound sales and prospecting. As a founding partner, he helped startups and enterprises with HighDimension.IO’s Machine-Learning-as-a-Service, allowing them to scale up data expertise in the blink of an eye.

In the first part of his career, Rudy spent 5+ years in quantitative trading at leading investment banks such as Morgan Stanley. This valuable experience allowed him to witness the power of data, but also the pitfalls of automation using data science and machine learning. Quantitative trading was also a great platform to learn deeply about reinforcement learning and supervised learning topics in a commercial setting. 

Rudy holds a Computer Science degree from Imperial College London, where he was part of the Dean’s List, and received awards such as the Deutsche Bank Artificial Intelligence prize.

Install PySpark and Set Up Your Development Environment

1
The Course Overview

This video provides an overview of the entire course.

2
Core Concepts in Spark and PySpark

This video illustrates the main tenets of PySpark using the documentation (a short sketch follows the list below).

  • Understand what Spark and PySpark are

  • Learn what RDDs (resilient distributed datasets) are

  • Understand what Spark SQL, DataFrames, and Datasets are
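
A minimal sketch of how these pieces relate in code (a local setup is assumed; the app name is illustrative):

    from pyspark.sql import SparkSession

    # One SparkSession per application; master("local[*]") runs Spark locally.
    spark = SparkSession.builder.master("local[*]").appName("concepts").getOrCreate()
    sc = spark.sparkContext

    # An RDD is a low-level, partitioned collection of Python objects.
    rdd = sc.parallelize([("Alice", 34), ("Bob", 45)])

    # A DataFrame adds a schema on top, which is what Spark SQL queries.
    df = spark.createDataFrame(rdd, ["name", "age"])
    df.show()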

3
Setting Up Spark on Windows and PySpark

This video walks you through setting up all the software required for this course; a verification sketch follows the list below.

  • Set up Spark on Windows

  • Verify that Spark has been set up correctly

  • Set up PySpark and verify the installation
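
Once everything is installed, a quick smoke test along these lines (a sketch, assuming pyspark is importable, for example via pip install pyspark) confirms the setup:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("smoke-test").getOrCreate()
    print(spark.version)                                      # the installed Spark version
    print(spark.sparkContext.parallelize([1, 2, 3]).count())  # should print 3
    spark.stop()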

4
SparkContext, SparkConf and Spark Shell

Through this video, you will learn the key concepts of Spark.

  • Understand what SparkContext is

  • Learn about SparkConf and Spark Shell
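
A minimal sketch of how the two fit together (the app name and master URL are illustrative; in the interactive Spark shell, sc is created for you):

    from pyspark import SparkConf, SparkContext

    # SparkConf holds key-value configuration for the application.
    conf = SparkConf().setAppName("my-app").setMaster("local[*]")

    # SparkContext is the entry point for RDD operations.
    sc = SparkContext(conf=conf)
    print(sc.appName)
    sc.stop()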

Getting Your Big Data into the Spark Environment Using RDDs

1
Loading Data onto Spark RDDs

This video introduces the UCI Machine Learning Repository, walks you through getting data from the repository into Python, and shows you how to load that data into Spark (sketched below).

  • Get data from the repository into Python

  • Get data into Spark
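
As a sketch of the workflow (the dataset, URL, and file name here are assumptions; the KDD Cup ’99 data hosted by UCI is one commonly used example):

    import urllib.request
    from pyspark import SparkContext

    # Download the file to local disk first (assumed URL and file name).
    url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
           "kddcup99-mld/kddcup.data_10_percent.gz")
    urllib.request.urlretrieve(url, "kddcup.data_10_percent.gz")

    sc = SparkContext.getOrCreate()
    raw = sc.textFile("./kddcup.data_10_percent.gz")  # Spark reads gzipped text directly
    print(raw.take(1))                                # peek at the first record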

2
Parallelization with Spark RDDs

Now that you know how to load data onto your Spark RDDs, let’s go ahead and learn how to parallelize Spark RDDs.

  • Understand what parallelization is.

  • Parallelize Spark RDDs
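
A minimal sketch (assuming sc is an existing SparkContext):

    # parallelize() distributes a local Python collection across the cluster;
    # numSlices controls how many partitions the data is split into.
    rdd = sc.parallelize(range(1, 101), numSlices=4)
    print(rdd.getNumPartitions())  # 4
    print(rdd.sum())               # 5050, computed across partitions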

3
RDD Operation Basics

In this video, we will learn the basic transformation and action operations on a dataset.

  • Explore the map, filter, and collect operations
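
A minimal sketch of all three operations together (assuming sc is an existing SparkContext):

    rdd = sc.parallelize(range(10))

    squares = rdd.map(lambda x: x * x)            # transformation: lazy
    evens = squares.filter(lambda x: x % 2 == 0)  # transformation: lazy
    print(evens.collect())                        # action: [0, 4, 16, 36, 64]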

Big Data Cleaning and Wrangling with Spark Notebooks

1
Using Spark Notebooks for Quick Iteration of Ideas

In this video, we will get a quick introduction to Spark Notebooks and then explore their features.

  • Explore Spark notebooks

  • Get started with Spark notebooks

  • Use Spark notebooks

2
Sampling/Filtering RDDs to Pick-Out Relevant Data Points

Through this video, we will extend our basic knowledge and learn how to sample and filter RDDs.

  • Understand the sample and takeSample operations
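
A minimal sketch of the difference between the two (assuming sc is an existing SparkContext):

    rdd = sc.parallelize(range(1000))

    # sample() is a transformation: it returns a new RDD of roughly 10% of the data.
    tenth = rdd.sample(False, 0.1, seed=42)
    print(tenth.count())

    # takeSample() is an action: it returns an exact-size local Python list.
    print(rdd.takeSample(False, 5, seed=42))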

3
Splitting Datasets and Creating New Combinations with Set Operations

This final video of the section teaches you how to split a dataset and create completely new combinations using set operations.

  • Learn the subtract operation

  • Learn to use cartesian
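
A minimal sketch of both operations (assuming sc is an existing SparkContext):

    a = sc.parallelize([1, 2, 3, 4, 5])
    b = sc.parallelize([4, 5, 6])

    # subtract(): the elements of a that do not appear in b.
    print(a.subtract(b).collect())  # [1, 2, 3] (order not guaranteed)

    # cartesian(): every (a, b) pairing, for building new combinations.
    print(a.cartesian(b).take(5))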

Aggregating and Summarizing Data into Useful Reports

1
Calculating Averages with Map and Reduce

How do we calculate averages with map and reduce?

  • Calculate averages

  • Learn about map and reduce functions
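
A minimal sketch (assuming sc is an existing SparkContext): map each value to a (value, 1) pair, then reduce pairwise into a running (sum, count).

    marks = sc.parallelize([70.0, 80.0, 90.0])

    total, count = marks.map(lambda x: (x, 1)).reduce(
        lambda a, b: (a[0] + b[0], a[1] + b[1]))
    print(total / count)  # 80.0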

2
Faster Average Computation with Aggregate

This video will give you a good understanding of how to speed up average computation.

  • Understand what aggregate is

  • Learn to use aggregate and get an average faster
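
A minimal sketch (assuming sc is an existing SparkContext). aggregate() folds each partition with seqOp and then merges the per-partition results with combOp, so the sum and count are built in a single pass:

    marks = sc.parallelize([70.0, 80.0, 90.0])

    total, count = marks.aggregate(
        (0.0, 0),                                 # zero value: (sum, count)
        lambda acc, x: (acc[0] + x, acc[1] + 1),  # seqOp: fold in one element
        lambda a, b: (a[0] + b[0], a[1] + b[1]))  # combOp: merge partitions
    print(total / count)  # 80.0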

3
Pivot Tabling with Key-Value Paired Data Points

Let’s take a step ahead and explore pivot tables and how to use them.

  • Know what a pivot table is

  • Create pivot tables in PySpark
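
One way to sketch this is with the DataFrame pivot API (the column names and records below are hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame(
        [("tcp", "normal", 10), ("tcp", "attack", 2), ("udp", "normal", 5)],
        ["protocol", "label", "cnt"])

    # Distinct protocols become rows, distinct labels become columns.
    df.groupBy("protocol").pivot("label").sum("cnt").show()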

Powerful Exploratory Data Analysis with MLlib

1
Computing Summary Statistics with MLlib

How do we compute summary statistics with MLlib?

  • What are summary statistics?

  • How do we use MLlib to create summary statistics?
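
A minimal sketch with MLlib's colStats (assuming sc is an existing SparkContext):

    import numpy as np
    from pyspark.mllib.stat import Statistics

    rows = sc.parallelize([
        np.array([1.0, 10.0]),
        np.array([2.0, 20.0]),
        np.array([3.0, 30.0])])

    summary = Statistics.colStats(rows)  # computed in one pass over the data
    print(summary.mean())                # per-column means: [2.0, 20.0]
    print(summary.variance())            # per-column variances
    print(summary.numNonzeros())         # per-column non-zero counts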

2
Using Pearson and Spearman to Discover Correlations

How do we explore correlations?

  • What is Pearson correlation?

  • What is Spearman correlation?
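
A minimal sketch of both methods (assuming sc is an existing SparkContext). Pearson measures linear correlation; Spearman measures rank (monotonic) correlation:

    from pyspark.mllib.stat import Statistics

    x = sc.parallelize([1.0, 2.0, 3.0, 4.0])
    y = sc.parallelize([2.0, 4.0, 6.0, 9.0])

    print(Statistics.corr(x, y, method="pearson"))
    print(Statistics.corr(x, y, method="spearman"))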

3
Testing Your Hypotheses on Large Datasets

How do we test hypotheses?

  • What is hypothesis testing?

  • How do we test hypotheses using PySpark?
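
A minimal sketch using MLlib's chi-squared goodness-of-fit test (the observed frequencies are made up for illustration):

    from pyspark.mllib.linalg import Vectors
    from pyspark.mllib.stat import Statistics

    # Null hypothesis: the observed frequencies follow a uniform distribution.
    observed = Vectors.dense([13.0, 47.0, 40.0])
    result = Statistics.chiSqTest(observed)
    print(result.statistic)
    print(result.pValue)  # a small p-value argues for rejecting the null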

Putting Structure on Your Big Data with SparkSQL

1
Manipulating DataFrames with SparkSQL Schemas

How do we manipulate DataFrames with SparkSQL?

  • What are DataFrames?

  • How do we use SparkSQL?
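
A minimal sketch of defining a schema and querying through SQL (the table and column names are illustrative):

    from pyspark.sql import SparkSession
    from pyspark.sql.types import StructType, StructField, StringType, IntegerType

    spark = SparkSession.builder.getOrCreate()

    # An explicit schema: column names, types, and nullability.
    schema = StructType([
        StructField("name", StringType(), True),
        StructField("age", IntegerType(), True)])

    df = spark.createDataFrame([("Alice", 34), ("Bob", 45)], schema)
    df.createOrReplaceTempView("people")  # expose the DataFrame to SQL
    spark.sql("SELECT name FROM people WHERE age > 40").show()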

2
Using the Spark DSL to Build Queries for Structured Data Operations

How do we use the Spark DSL?

  • What is the Spark DSL?

  • How do we build queries?
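
A minimal sketch of the same query expressed with the DataFrame DSL instead of an SQL string (data and names are illustrative):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([("Alice", 34), ("Bob", 45)], ["name", "age"])

    # Column expressions replace the SQL text; Spark optimizes both the same way.
    df.select("name", "age").filter(F.col("age") > 40).show()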

Detailed Rating (4 out of 5, 10 ratings)

  5 stars: 3
  4 stars: 4
  3 stars: 1
  2 stars: 2
  1 star: 0
30-Day Money-Back Guarantee

Includes

2 hours on-demand video
Full lifetime access
Access on mobile and TV
Certificate of Completion