Big Data Processing using Apache Spark
Every year, the volume of data we need to store and analyze grows. When we aggregate all the data about our users and analyze it for insights, terabytes of data undergo processing. To process data at that scale, we need a technology that can distribute computation across many machines and make it efficient. Apache Spark is such a technology: it lets us process big data quickly and scalably.
In this course, we will learn how to leverage Apache Spark to process big data quickly. We will cover the basics of the Spark API and its architecture in detail. In the second section of the course, we will learn about data mining and data cleaning, looking at the structure of the input data and how it is loaded. In the third section, we will write actual jobs that analyze data. By the end of the course, you will have a sound understanding of the Spark framework, which will help you both write the code and understand how big data is processed.
About the Author
Tomasz Lelek is a Software Engineer who programs mostly in Java and Scala. He is a fan of microservices architecture and functional programming, and he dedicates considerable time and effort to being better every day. He recently dived into big data technologies such as Apache Spark and Hadoop. Tomasz is passionate about nearly everything associated with software development. He has recently spoken at conferences in Poland – Confitura and JDD (Java Developers Day) – as well as at the Krakow Scala User Group, and he has conducted a live coding session at the Geecon Conference.
Writing Big Data Processing Using Apache Spark
This video gives an overview of the entire course.
In this video, we will cover the Spark Architecture.
This video focuses on creating a project.
This video shows how to install spark-submit on our machine.
In this video, we will look at the Spark API.
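As a taste of the API covered here: Spark's core RDD operations (map, filter, reduce) mirror the methods on Scala's own collections. The following is a minimal sketch of that pattern using a plain in-memory Seq, so it runs without a cluster; it only illustrates the style of computation that Spark distributes.

```scala
// Illustrative sketch: the map/filter/reduce pattern on a local Scala
// collection. Spark's RDD API offers the same operations, distributed.
val numbers = Seq(1, 2, 3, 4, 5)

val squared = numbers.map(n => n * n)    // map: transform every element
val even    = squared.filter(_ % 2 == 0) // filter: keep matching elements
val sum     = even.reduce(_ + _)         // reduce: combine into one value

println(sum) // prints 20
```

On an RDD, the same chain of calls would be evaluated lazily and executed in parallel across the cluster.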
Data Mining and Data Cleaning
In this video, we think about what problem we want to solve.
In this video, we will learn about Spark API to load data.
In this video, we will cover how to load input data.
In this video, we look at how to tokenize the input data.
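One common tokenization scheme, sketched below in plain Scala: lowercase each line and split on runs of non-word characters. This is an illustration, not necessarily the exact scheme used in the course's job.

```scala
// Sketch of a simple tokenizer: lowercase the line, split on any run
// of non-word characters, and drop empty tokens.
def tokenize(line: String): List[String] =
  line.toLowerCase
    .split("\\W+")
    .filter(_.nonEmpty)
    .toList

println(tokenize("To be, or not to be!"))
```

In a Spark job, this function would typically be applied with `flatMap` over an RDD of input lines.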
Writing Job Logic
This video shows how to implement the word-counting logic.
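The core of word counting is grouping tokens and summing occurrences. The sketch below shows that logic on a local Scala collection; in a Spark job, `reduceByKey` performs the equivalent grouping and summing across the cluster. The function name is illustrative.

```scala
// Sketch: count occurrences of each word in a local collection.
// Spark's (word, 1) pairs with reduceByKey(_ + _) compute the same result
// in a distributed fashion.
def countWords(words: Seq[String]): Map[String, Int] =
  words.groupBy(identity).map { case (word, occurrences) =>
    (word, occurrences.size)
  }

println(countWords(List("to", "be", "or", "not", "to", "be")))
```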
In this video, we will focus on solving problems.
This video shows how to write a robust Spark test suite.
This video shows how to start our Apache Spark job on two textbooks.