3.86 out of 5 (371 reviews on Udemy)

Apache Spark Streaming with Python and PySpark

Add Spark Streaming to your Data Science and Machine Learning Python Projects
Instructor: Level Up Big Data Program
22,772 students enrolled
English [Auto-generated]
  • Create big data streaming pipelines with Spark using Python
  • Run analytics on live tweet data from Twitter
  • Integrate Spark Streaming with tools like Apache Kafka, used by Fortune 500 companies
  • Work with the new features of the most recent version of Spark: 2.3

What is this course about? 

This course covers all the fundamentals of Apache Spark Streaming with Python and teaches you everything you need to know about developing Spark Streaming applications using PySpark, the Python API for Spark. By the end of this course, you will have in-depth knowledge of Spark Streaming and the general big data manipulation skills to help your company adopt Spark Streaming for building big data processing pipelines and data analytics applications. These skills will be absolutely critical to anyone trying to make it in data science today.

What will you learn from this Apache Spark streaming course? 

In this Apache Spark streaming course, you’ll learn the following:

  • An overview of the architecture of Apache Spark.
  • How to develop Apache Spark Streaming applications with PySpark using RDD transformations and actions and Spark SQL.
  • How to work with Spark’s primary abstraction, resilient distributed datasets (RDDs), to process and analyze large data sets.
  • Advanced techniques to optimize and tune Apache Spark jobs by partitioning, caching, and persisting RDDs.
  • How to analyze structured and semi-structured data using Datasets and DataFrames, and develop a thorough understanding of Spark SQL.
  • How to scale up Spark Streaming applications for both bandwidth and processing speed.
  • How to integrate Spark Streaming with messaging systems like Apache Kafka.
  • How to connect your Spark stream to a data source like Amazon Web Services (AWS) Kinesis.
  • Best practices for working with Apache Spark Streaming in the field.
  • An overview of the big data ecosystem.

Why should you learn Apache Spark streaming? 

Spark Streaming is becoming incredibly popular, and with good reason. According to IBM, ninety percent of the data in the world today has been created in the last two years alone. Our current output of data is roughly 2.5 quintillion bytes per day. The world is being immersed in data, more so each and every day. As a result, analyzing static snapshots of data is becoming the less practical approach to more and more problems. This is where data streaming comes in: the ability to process data almost as soon as it’s produced, recognizing the time-dependency of the data.

Apache Spark Streaming gives us unlimited ability to build cutting-edge applications. It is also one of the most compelling technologies of the last decade in terms of its disruption of the big data world. Spark provides in-memory cluster computing, which greatly boosts the speed of iterative algorithms and interactive data mining tasks.

Spark is a powerful engine for processing data as well as streaming it, and the synergy between the two makes Spark an ideal tool for processing gargantuan data firehoses.

Tons of companies, including Fortune 500 companies, are adopting Apache Spark Streaming to extract meaning from massive data streams. Today, you have access to that same big data technology right on your desktop.

What programming language is this Apache Spark streaming course taught in? 

This Apache Spark Streaming course is taught in Python. Python is currently one of the most popular programming languages in the world! Its rich data community, offering vast amounts of toolkits and features, makes it a powerful tool for data processing. Using PySpark (the Python API for Spark), you will be able to interact with Apache Spark Streaming’s main abstraction, RDDs, as well as other Spark components, such as Spark SQL, and much more!

Let’s learn how to write Apache Spark streaming programs with PySpark Streaming to process big data sources today!

30-day Money-back Guarantee!

You will get a 30-day money-back guarantee from Udemy for this Apache Spark Streaming course.
If you are not satisfied, simply ask for a refund within 30 days. You will get a full refund, no questions asked.
Are you ready to take your big data analysis skills and career to the next level? Take this course now!
You will go from zero to Spark streaming hero in 4 hours.

Getting started with Apache Spark Streaming

1
Course Overview

Course Instructor Introduction, and Course Overview.

2
How to Take this Course and How to Get Support

Some quick tips and guidelines on how to get the most out of this course.

3
Text Lecture: How to Take this Course and How to Get Support
4
Introduction to Streaming

Introduction to streaming, and what makes Spark Streaming unique.

5
PySpark Setup Tutorial

Tutorial for setting up an Ubuntu virtual machine (if you're using Windows or Mac), installing Spark and Scala, and installing PySpark and making it compatible with Jupyter Notebooks.

6
PySpark Setup Tutorial Text Lecture
7
Example Twitter Application

(We know this is advanced, but we figured we'd give you a glimpse of the kinds of applications you can create with PySpark Streaming.)

8
Twitter Tutorial Text Lecture

PySpark Basics

1
What are Discretized Streams?

This lecture introduces the basic abstractions of PySpark Streaming: Discretized Streams (DStreams) and RDDs.

2
How to Create Discretized Streams

A demo of how to actually create Discretized Streams.
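
As a taste of what this looks like, here is a minimal sketch of creating a DStream from a TCP socket source; the host and port are placeholders for a test source such as `nc -lk 9999`.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "CreateDStream")   # at least 2 local threads
ssc = StreamingContext(sc, 1)                    # 1-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)  # DStream of text lines
lines.pprint()                                   # print a few records per batch

ssc.start()
ssc.awaitTermination()
```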

3
Transformations on DStreams

Examples of the basic operations and transformations on DStreams.
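
For instance, the classic streaming word count chains a few of these transformations together; this sketch assumes the `lines` DStream from the setup sketch above.

```python
words = lines.flatMap(lambda line: line.split(" "))  # one record per word
pairs = words.map(lambda w: (w, 1))                  # (word, 1) pairs
counts = pairs.reduceByKey(lambda a, b: a + b)       # per-batch word counts
counts.pprint()
```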

4
Transformation Operation

An overview of the more versatile `transform()` operation, and how to use it.
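
A sketch of the idea: `transform()` exposes each batch as a plain RDD, so any RDD-to-RDD operation can be applied. The spam-word list is a hypothetical example, and `counts` is the (word, count) DStream from the word-count sketch above.

```python
# Hypothetical static RDD of words we want to drop from the stream.
spam_words = sc.parallelize([("spam", True), ("ads", True)])

def drop_spam(rdd):
    # rdd holds (word, count) pairs; keep keys absent from spam_words
    return (rdd.leftOuterJoin(spam_words)
               .filter(lambda kv: kv[1][1] is None)
               .map(lambda kv: (kv[0], kv[1][0])))

clean_counts = counts.transform(drop_spam)
```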

5
Window Operations

A general overview of the Window operations in PySpark Streaming.

6
Window

How to use the specific `window()` function.
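
A minimal sketch, assuming a 1-second batch interval; both durations must be multiples of it.

```python
windowed = lines.window(30, 10)  # 30 s window, recomputed every 10 s
windowed.count().pprint()        # number of records currently in the window
```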

7
countByWindow

How to use the specific `countByWindow()` function.
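
Roughly as follows; note that `countByWindow()` requires a checkpoint directory because it maintains its count incrementally.

```python
ssc.checkpoint("checkpoint/")         # placeholder checkpoint path
lines.countByWindow(30, 10).pprint()  # record count over the last 30 s
```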

8
reduceByKeyAndWindow

How to use the specific `reduceByKeyAndWindow()` function.
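
A sketch, assuming the `pairs` DStream of (word, 1) tuples from earlier; supplying an inverse function lets Spark subtract the batch that slides out of the window instead of recomputing the whole window.

```python
ssc.checkpoint("checkpoint/")  # required when an inverse function is used
windowed_counts = pairs.reduceByKeyAndWindow(
    lambda a, b: a + b,   # fold in values entering the window
    lambda a, b: a - b,   # remove values leaving the window
    30, 10)               # 30 s window, sliding every 10 s
windowed_counts.pprint()
```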

9
countByValueAndWindow

How to use the specific `countByValueAndWindow()` function.
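
In the same vein (checkpointing is again required):

```python
ssc.checkpoint("checkpoint/")
# Frequency of each distinct word over the last 30 s, refreshed every 10 s:
words.countByValueAndWindow(30, 10).pprint()
```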

10
Output Operations on DStreams

Options for output operations on the data you process in a DStream.

11
foreachRDD

A more in-depth look at the `foreachRDD()` output operation.
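
A common pattern is pushing each batch to an external sink. This sketch stands in `print` for a real connection, which you would open once per partition on the executors rather than on the driver; `create_connection` is hypothetical.

```python
def send_partition(records):
    # conn = create_connection()  # hypothetical; open once per partition
    for record in records:
        print(record)             # stand-in for conn.send(record)
    # conn.close()

counts.foreachRDD(lambda rdd: rdd.foreachPartition(send_partition))
```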

12
SQL Operations

How to use SQL-like queries on the data in your Spark stream.
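
The usual recipe converts each RDD to a DataFrame inside `foreachRDD()` and queries it with a lazily created SparkSession; `words` is the DStream of single words from the earlier sketches.

```python
from pyspark.sql import Row, SparkSession

def process(time, rdd):
    if rdd.isEmpty():
        return
    # Reuse one SparkSession built from the RDD's SparkContext
    spark = SparkSession.builder.config(conf=rdd.context.getConf()).getOrCreate()
    df = spark.createDataFrame(rdd.map(lambda w: Row(word=w)))
    df.createOrReplaceTempView("words")
    spark.sql("SELECT word, COUNT(*) AS total FROM words GROUP BY word").show()

words.foreachRDD(process)
```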

13
Reviewing the Basics

This lecture reviews everything we've learned in section 2. We also have an exercise that utilizes many of the concepts from the previous lectures.

Advanced Spark Concepts

1
Join Operations

How to use join operations on multiple data streams, as well as between data streams and static DataFrames.
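
In sketch form, with `stream1` and `stream2` as hypothetical DStreams of (key, value) pairs:

```python
# Stream-stream join, computed batch by batch: -> (key, (value1, value2))
joined = stream1.join(stream2)

# A stream-to-static join goes through transform():
static_rdd = sc.parallelize([("user1", "US"), ("user2", "CA")])
enriched = stream1.transform(lambda rdd: rdd.join(static_rdd))
```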

2
Stateful Transformations

How to use stateful transformations (like `updateStateByKey()`), which retain memory of previous streaming operations.
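
For example, a running word count that persists across batches might look like this (checkpointing is mandatory for stateful transformations):

```python
ssc.checkpoint("checkpoint/")

def update_count(new_values, running):
    # new_values: this batch's values for the key;
    # running: previous state, None the first time a key appears
    return sum(new_values) + (running or 0)

running_counts = pairs.updateStateByKey(update_count)
running_counts.pprint()
```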

3
Checkpointing

A more in-depth look at how to use checkpointing in PySpark Streaming.
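
The driver-recovery pattern, sketched with a placeholder checkpoint path:

```python
def create_context():
    sc = SparkContext("local[2]", "CheckpointDemo")
    ssc = StreamingContext(sc, 1)
    ssc.checkpoint("/tmp/checkpoints/demo")  # placeholder path
    # ... define sources and transformations here ...
    return ssc

# Rebuild from the checkpoint if one exists, otherwise start fresh.
ssc = StreamingContext.getOrCreate("/tmp/checkpoints/demo", create_context)
```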

4
Accumulators

A look at a more esoteric option (but nonetheless useful for debugging) in PySpark Streaming: Accumulators.
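
A small sketch of the debugging use case, counting malformed records as they flow past:

```python
bad_records = sc.accumulator(0)  # driver-visible counter

def tag(record):
    if not record.strip():
        bad_records.add(1)  # executors add; only the driver reads .value
    return record

def report(rdd):
    rdd.count()  # force the batch so the accumulator gets updated
    print("empty lines so far:", bad_records.value)

lines.map(tag).foreachRDD(report)
```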

5
Fault Tolerance

We've been talking a lot about fault tolerance; in this lecture we explore what exactly it means at a higher conceptual level.

PySpark Streaming at Scale

1
Performance Tuning

Some considerations to keep in mind when tuning your Spark stream for greater performance, or operating at a larger scale.
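
A few of the knobs involved, with illustrative values only; the right settings depend on your cluster and workload:

```python
from pyspark import SparkConf

conf = (SparkConf()
        .set("spark.streaming.backpressure.enabled", "true")  # adapt ingest rate
        .set("spark.default.parallelism", "8")                # match core count
        .set("spark.serializer",
             "org.apache.spark.serializer.KryoSerializer"))   # faster serialization
```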

2
PySpark Streaming with Apache Kafka

Basics of integrating PySpark with Apache Kafka (an incredibly useful distributed messaging system created at LinkedIn) as a streaming source.
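
A sketch using the direct approach from the spark-streaming-kafka-0-8 package (add it via `spark-submit --packages`); the broker address and topic name are placeholders.

```python
from pyspark.streaming.kafka import KafkaUtils

kafka_stream = KafkaUtils.createDirectStream(
    ssc, ["my-topic"], {"metadata.broker.list": "localhost:9092"})

# Each record is a (key, value) pair; keep just the message body:
messages = kafka_stream.map(lambda kv: kv[1])
messages.pprint()
```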

3
Integration with Kafka Text Lecture
4
PySpark Streaming with Amazon Kinesis

Amazon Kinesis is very useful for large-scale streaming, especially for highly data-intensive applications such as video streaming. We'll go into how to get AWS Kinesis connected to your Spark stream.
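
Roughly as follows, assuming the spark-streaming-kinesis-asl package is on the classpath; the stream, app, region, and endpoint names below are placeholders, and credentials come from the default AWS provider chain.

```python
from pyspark.streaming.kinesis import InitialPositionInStream, KinesisUtils

kinesis_stream = KinesisUtils.createStream(
    ssc,
    kinesisAppName="my-kinesis-app",   # also names the DynamoDB offset table
    streamName="my-stream",
    endpointUrl="https://kinesis.us-east-1.amazonaws.com",
    regionName="us-east-1",
    initialPositionInStream=InitialPositionInStream.LATEST,
    checkpointInterval=10)
kinesis_stream.pprint()
```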

5
Integration with Kinesis Text Lecture

Structured Streaming

1
Introduction to Structured Streaming

As of Spark 2.0, PySpark has had a new paradigm for streaming large amounts of data: Structured Streaming. This brings performance enhancements for certain applications, as well as streams that closely resemble SQL data tables.
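
The canonical Structured Streaming word count gives the flavor; host and port are placeholders for a test socket source.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("StructuredDemo").getOrCreate()

lines = (spark.readStream.format("socket")
         .option("host", "localhost").option("port", 9999).load())

words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()  # an unbounded, continuously updated table

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```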

2
Operations on Streaming Dataframes and DataSets

Since Structured Streaming makes streams look so much like DataFrames, we'll look at how you can use Structured Streaming to operate on both streaming DataFrames and streaming Datasets.
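
For instance, ordinary DataFrame operations carry straight over; `events` is a hypothetical streaming DataFrame with `device`, `signal`, and `time` columns.

```python
errors = events.filter(events.signal < 10).select("device", "time")
avg_signal = events.groupBy("device").avg("signal")
```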

3
Window Operations

Remember window operations? Let's look at how they work for Structured Streaming.
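
In Structured Streaming, windowing becomes a grouping expression over an event-time column; the `words` DataFrame is assumed to carry a `timestamp` column here.

```python
from pyspark.sql.functions import window

windowed_counts = words.groupBy(
    window(words.timestamp, "10 minutes", "5 minutes"),  # sliding windows
    words.word
).count()
```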

4
Handling Late Data and Watermarking

We mentioned watermarking in the previous lecture; here we'll go into what it involves, and how it relates to handling data that arrives late from your streaming source.
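
Sketched on the windowed count above, a 10-minute watermark tells Spark how long to keep old windows open for stragglers before dropping their state.

```python
late_tolerant_counts = (words
    .withWatermark("timestamp", "10 minutes")  # tolerate data up to 10 min late
    .groupBy(window(words.timestamp, "10 minutes", "5 minutes"), words.word)
    .count())
```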

Course Conclusion

1
Final Lecture

Congratulations! You've made it to the end of this course! This video will show you what an amazing achievement this is.

We'll also give you some tips on how to continue your learning.

2
Final Text Lecture
3.9 out of 5
371 Ratings

Detailed Rating

5 stars: 162
4 stars: 130
3 stars: 54
2 stars: 7
1 star: 18
30-Day Money-Back Guarantee

Includes

3 hours on-demand video
6 articles
Full lifetime access
Access on mobile and TV
Certificate of Completion