3.5 out of 5 (24 reviews on Udemy)

Learning PySpark

Building and deploying data-intensive applications at scale using Python and Apache Spark
Instructor:
Packt Publishing
90 students enrolled
English [Auto-generated]
Learn about Apache Spark and the Spark 2.0 architecture.
Understand schemas for RDD, lazy executions, and transformations.
Explore the sorting and saving elements of RDD.
Build and interact with Spark DataFrames using Spark SQL.
Create and explore various APIs to work with Spark DataFrames.
Learn how to change the schema of a DataFrame programmatically.
Explore how to aggregate, transform, and sort data with DataFrames.

Apache Spark is an open-source distributed engine for querying and processing data. In this tutorial, we provide a brief overview of Spark and its stack. This tutorial presents effective, time-saving techniques on how to leverage the power of Python and put it to use in the Spark ecosystem. You will start by getting a firm understanding of the Apache Spark architecture and how to set up a Python environment for Spark.

You’ll learn about different techniques for collecting data, and how to distinguish between (and understand) the main approaches to processing it. Next, we provide an in-depth review of RDDs and contrast them with DataFrames. We provide examples of how to read data from files and from HDFS, and how to specify schemas using reflection or programmatically (in the case of DataFrames). The concept of lazy execution is described, and we outline various transformations and actions specific to RDDs and DataFrames.

Finally, we show you how to use SQL to interact with DataFrames. By the end of this tutorial, you will know how to process data using Spark DataFrames and will have mastered techniques for collecting and processing data in a distributed setting.

About the Author

Tomasz Drabas is a Data Scientist working for Microsoft and currently residing in the Seattle area. He has over 12 years’ international experience in data analytics and data science in numerous fields: advanced technology, airlines, telecommunications, finance, and consulting.

Tomasz started his career in 2003 with LOT Polish Airlines in Warsaw, Poland while finishing his Master’s degree in strategy management. In 2007, he moved to Sydney to pursue a doctoral degree in operations research at the University of New South Wales, School of Aviation; his research crossed boundaries between discrete choice modeling and airline operations research. During his time in Sydney, he worked as a Data Analyst for Beyond Analysis Australia and as a Senior Data Analyst/Data Scientist for Vodafone Hutchison Australia among others. He has also published scientific papers, attended international conferences, and served as a reviewer for scientific journals.

In 2015 he relocated to Seattle to begin his work for Microsoft. While there, he has worked on numerous projects involving solving problems in high-dimensional feature space.

A Brief Primer on PySpark

1
The Course Overview

This video gives an overview of the entire course.

2
Brief Introduction to Spark

The aim of this video is to explain Spark and its Python interface.

3
Apache Spark Stack

The aim of this video is to provide a brief overview of Apache Spark stack components.

4
Spark Execution Process

The aim of this video is to briefly review the execution process.

5
Newest Capabilities of PySpark 2.0+

The aim of this video is to briefly review the newest features of Spark 2.0+.

6
Cloning GitHub Repository

The aim of this video is to clone the GitHub repository for the course. Doing this will set up everything we need for the following videos.

Resilient Distributed Datasets

1
Brief Introduction to RDDs

In this video, we will provide a brief overview of one of the fundamental data structures of Spark – the RDDs.

2
Creating RDDs

In this video, we will learn how to create RDDs in many different ways.

3
Schema of an RDD

In this video, we explore the advantages and disadvantages of RDD’s lack of schema.

4
Understanding Lazy Execution

Spark processes data lazily. In this video, we will learn why this is an advantage.

5
Introducing Transformations – .map(…)

In this video, we will introduce lambdas and the .map(…) transformation.

6
Introducing Transformations – .filter(…)

In this video, we will learn how to filter data from RDDs. 

7
Introducing Transformations – .flatMap(…)

In this video, we will explain the difference between the .flatMap(…) and .map(…) transformations, and we will learn how to use .flatMap(…) to filter out malformed records.

8
Introducing Transformations – .distinct(…)

In this video, we will explore what the .distinct(…) transformation does.

9
Introducing Transformations – .sample(…)

In this video, we will learn how to sample data from RDDs.

10
Introducing Transformations – .join(…)

In this video, we will learn how to join two RDDs.

11
Introducing Transformations – .repartition(…)

In this video, we will explore how to effectively use repartitioning.

Resilient Distributed Datasets and Actions

1
Introducing Actions – .take(…)

In this video, we will focus on one of the most fundamental tools any data scientist can use: the .take(…) action. 

2
Introducing Actions – .collect(…)

In this video, we will learn when to use the .collect(…) action and when to avoid it. 

3
Introducing Actions – .reduce(…) and .reduceByKey(…)

In this video, we will learn another fundamental pair of methods from the MapReduce paradigm: .reduce(…) and .reduceByKey(…).

4
Introducing Actions – .count()

In this video, we will learn how to count the number of records in an RDD.

5
Introducing Actions – .foreach(…)

In this video, we will learn how to execute an action on each element of an RDD in each of its partitions.

6
Introducing Actions – .aggregate(…) and .aggregateByKey(…)

In this video, we will explore how to aggregate the data within each partition first before collecting the results on the driver for the final aggregation.

7
Introducing Actions – .coalesce(…)

In this video, we will learn when and why to use the .coalesce(…) method instead of the .repartition(…). 

8
Introducing Actions – .combineByKey(…)

In this video, we will learn about the most flexible data reduction action.

9
Introducing Actions – .histogram(…)

In this video, we will learn how to bin data into buckets.

10
Introducing Actions – .sortBy(…)

In this video, we will learn how to sort data within an RDD.

11
Introducing Actions – Saving Data

In this video, we will explore how to save data from an RDD.

12
Introducing Actions – Descriptive Statistics

In this video, we will explore some basic descriptive statistics.

DataFrames and Transformations

1
Introduction

In this video, we will provide a brief introduction to Spark DataFrames. 

2
Creating DataFrames

In this video, we will learn how to create DataFrames.

3
Specifying Schema of a DataFrame

In this video, we will learn how to specify schema of a DataFrame.

4
Interacting with DataFrames

In this video, we will discuss different ways of interacting with DataFrames.

5
The .agg(…) Transformation

In this video, we will learn how to use the .agg(…) method to aggregate data.

6
The .sql(…) Transformation

In this video, we will learn how to use the .sql(…) transformation to interact with the data in a DataFrame.

7
Creating Temporary Tables

In this video, we will learn how to create temporary views over a DataFrame.

8
Joining Two DataFrames

In this video, we will learn how to join two DataFrames. 

9
Performing Statistical Transformations

In this video, we will learn how to calculate descriptive statistics in DataFrames.

10
The .distinct(…) Transformation

In this video, we will learn how to retrieve distinct values from a DataFrame.

Data Processing with Spark DataFrames

1
Schema Changes

In this video, we will learn how to drop, rename, and handle missing observations.

2
Filtering Data

In this video, we will learn how to filter data.

3
Aggregating Data

In this video, we will learn how to aggregate data.

4
Selecting Data

In this video, we will learn how to select data from a DataFrame. 

5
Transforming Data

In this video, we will learn how to transform data. 

6
Presenting Data

In this video, we will learn how to present data.

7
Sorting DataFrames

In this video, we will learn how to sort data contained within a DataFrame.

8
Saving DataFrames

In this video, we will learn how to save DataFrames in a number of file formats.

9
Pitfalls of UDFs

In this video, we will discuss the pitfalls of using pure Python user defined functions. 

10
Repartitioning Data

In this video, we will learn how to repartition the data.

How long will I have access to the course?
You can view and review the lecture materials indefinitely, like an on-demand channel.

Can I access the course on any device?
Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don't have an internet connection, some instructors also let their students download course lectures. That's up to the instructor though, so make sure you get on their good side!

Detailed Rating

5 stars: 5
4 stars: 9
3 stars: 5
2 stars: 1
1 star: 4
30-Day Money-Back Guarantee

Includes

2 hours on-demand video
Full lifetime access
Access on mobile and TV
Certificate of Completion