Hands-On PySpark for Big Data Analysis
Data is an incredible asset, especially when there is a lot of it. Exploratory data analysis, business intelligence, and machine learning all depend on processing and analyzing Big Data at scale.
How do you go from working on prototypes on your local machine, to handling messy data in production and at scale?
This is a practical, hands-on course that shows you how to use Spark and its Python API to create performant analytics with large-scale data. Don’t reinvent the wheel, and wow your clients by building robust and responsible applications on Big Data.
About the Author
Colibri Digital is a technology consultancy company founded in 2015 by James Cross and Ingrid Funie. The company works to help their clients navigate the rapidly changing and complex world of emerging technologies, with deep expertise in areas such as Big Data, Data Science, Machine Learning, and Cloud Computing. Over the past few years, they have worked with some of the world’s largest and most prestigious companies, including a tier 1 investment bank, a leading management consultancy group, and one of the world’s most popular soft drinks companies, helping each of them to better make sense of their data, and process it in more intelligent ways.
The company lives by their motto: Data -> Intelligence -> Action.
Rudy Lai is the founder of QuantCopy, a sales acceleration startup using AI to write sales emails to prospects. By taking in leads from your pipelines, QuantCopy researches them online and generates sales emails from that data. It also has a suite of email automation tools to schedule, send, and track email performance – key analytics that all feed back into how its AI generates content.
Prior to founding QuantCopy, Rudy ran HighDimension.IO, a machine learning consultancy, where he experienced firsthand the frustrations of outbound sales and prospecting. As a founding partner, he helped startups and enterprises with HighDimension.IO’s Machine-Learning-as-a-Service, allowing them to scale up data expertise in the blink of an eye.
In the first part of his career, Rudy spent 5+ years in quantitative trading at leading investment banks such as Morgan Stanley. This valuable experience allowed him to witness the power of data, but also the pitfalls of automation using data science and machine learning. Quantitative trading was also a great platform to learn deeply about reinforcement learning and supervised learning topics in a commercial setting.
Rudy holds a Computer Science degree from Imperial College London, where he was part of the Dean’s List, and received awards such as the Deutsche Bank Artificial Intelligence prize.
Install PySpark and Setup Your Development Environment
This video provides an overview of the entire course.
Illustrate the main tenets of PySpark using the documentation
Understand what Spark and PySpark are
Learn what RDDs (resilient distributed datasets) are
Find out what Spark SQL, DataFrames, and Datasets are
This video walks you through the setup of all the software required for this course.
Set up Spark on Windows
Verify that Spark has been set up correctly
Set up PySpark and verify the installation
Through this video, you will learn the key concepts of Spark
Understand what SparkContext is
Learn about SparkConf and Spark Shell
Getting Your Big Data into the Spark Environment Using RDDs
This video introduces the UCI Machine Learning Repository and shows you how to get data from the repository into Python, and from there into Spark.
Get data from the repository into Python
Get data into Spark
Now that you know how to load data into your Spark RDDs, let’s go ahead and learn how to parallelize Spark RDDs.
Understand what parallelization is.
Parallelize Spark RDDs
In this video, we will learn different operations for transformations and actions on the dataset.
Explore the map, filter, and collect operations
Big Data Cleaning and Wrangling with Spark Notebooks
In this video, we will get a quick introduction to Spark notebooks and then move ahead to explore their features.
Explore Spark notebooks
Get started with Spark notebooks
Use Spark notebooks
Through this video, we will extend our basic knowledge to learn how to sample and filter RDDs.
Understand the sample and takeSample operations
This final video of the section will teach you to split the dataset and create completely new combinations using the set operations.
Learn what subtract is
Learn to use cartesian
Aggregating and Summarizing Data into Useful Reports
How do we calculate averages with map and reduce?
Learn about map and reduce functions
This video will give you a good understanding of how we speed up computation.
Understand what aggregate is
Learn to use aggregate and get an average faster
Let’s take a step ahead and explore pivot tables and how to use them.
Know what a pivot table is
Create pivot tables in PySpark
Powerful Exploratory Data Analysis with MLlib
How do we compute summary statistics with MLlib?
What are summary statistics?
How do we use MLlib to create summary statistics?
How do we explore correlations?
What is Pearson correlation?
What is Spearman correlation?
How do we test hypotheses?
What is hypothesis testing?
How do we test hypotheses using PySpark?
Putting Structure on Your Big Data with SparkSQL
How do we manipulate DataFrames with SparkSQL?
What are DataFrames?
How do we use SparkSQL?
How do we use the Spark DSL?
What is the Spark DSL?
How do we build queries?