This video is a comprehensive tutorial to help you learn all the
fundamentals of Apache Spark, one of the trending big data processing
frameworks on the market today. We will introduce you to the various
components of the Spark framework to efficiently process, analyze, and
visualize data.
You will also get a brief introduction to Apache Hadoop and the Scala
programming language before you start writing Spark programs. You will
learn Apache Spark programming fundamentals such as Resilient
Distributed Datasets (RDDs) and see which operations can be used to
perform transformations or actions on an RDD. We'll show you how to
load and save data from various data sources, including different
types of files, NoSQL stores, and RDBMS databases. We'll also explain
advanced Spark programming concepts such as managing key-value pairs
and accumulators. Finally, you'll discover how to create an effective
Spark application and execute it on a Hadoop cluster to process data
and gain insights that support informed business decisions.
By the end of this video, you will be well-versed in the fundamentals of Apache Spark and able to apply them in your own Spark applications.
About The Author
Nishant Garg has over 16 years of software architecture and
development experience in various technologies, such as Java Enterprise
Edition, SOA, Spring, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, YARN,
Impala, Kafka, Storm, Solr/Lucene, NoSQL databases (such as HBase,
Cassandra, and MongoDB), and MPP databases (such as GreenPlum).
He received his MS in software systems from the Birla Institute of
Technology and Science, Pilani, India, and is currently working as a
senior technical architect for the Big Data R&D Labs with Impetus
Infotech Pvt. Ltd. Previously, Nishant enjoyed working with some of
the most recognizable names in IT services and the financial industry,
employing full software life cycle methodologies such as Agile and
Scrum.
Nishant has also undertaken many speaking engagements on big data
technologies and is the author of Learning Apache Kafka and HBase
Essentials, both published by Packt Publishing.
Introducing Spark
This video provides an overview of the entire course.
What are the origins of Apache Spark and what are its uses?
What are the various components in Apache Spark?
Hadoop and Spark
This video explains the historical journey from the Nutch project to Apache Hadoop: how the Hadoop project was started, which research papers influenced it, and so on. Finally, it describes the goals achieved by developing Hadoop.
In this video, we are going to look at the JVM processes that run in the background of Apache Hadoop: NameNode, DataNode, ResourceManager, and NodeManager. It also provides an overview of the Hadoop components: HDFS, YARN, and the MapReduce programming model.
This video shares more details about the Hadoop Distributed File System (HDFS): its goals, its components, and how it works. It also explains another Hadoop component, YARN: its components, lifecycle, and use cases.
This video provides an overview of MapReduce, the Hadoop programming model, and its execution behavior at various stages.
Scala from 30,000 feet
The aim of this video is to introduce the Scala language and its features, and by the end of this video, you should be able to get started with Scala.
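For reference, a minimal Scala program looks like the following (this sketch is not taken from the video; the object name is illustrative):

    // A minimal Scala program: a singleton object with a main method.
    object Hello {
      def main(args: Array[String]): Unit = {
        val greeting = "Hello, Scala!"   // val declares an immutable value
        println(greeting)
      }
    }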
The aim of this video is to explain the fundamentals of Scala programming: Scala classes, fields, and methods, and the different types of arguments, such as default and named arguments, passed to class constructors and methods.
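As a rough sketch of these ideas (class and argument names are illustrative, not from the video):

    // A class with fields, a method, and default arguments.
    class Employee(val name: String, val department: String = "Engineering") {
      def describe(prefix: String = "Employee"): String =
        s"$prefix: $name ($department)"
    }

    val e1 = new Employee("Asha")                              // default department is used
    val e2 = new Employee(name = "Ravi", department = "Data")  // named arguments
    println(e1.describe())
    println(e2.describe(prefix = "Architect"))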
The aim of this video is to explain objects in the Scala language and the singleton object in Scala, and to outline the usage of objects in Scala applications. It also describes companion objects.
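A minimal sketch of a class with its companion object (names are illustrative):

    // The companion object shares the class's name and file and can act as a factory.
    class Counter private (val start: Int) {
      def next: Int = start + 1
    }

    object Counter {
      def apply(start: Int): Counter = new Counter(start)  // companions can call the private constructor
    }

    val c = Counter(10)   // invokes Counter.apply, so no 'new' is needed
    println(c.next)       // 11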
The aim of this video is to explain the structure of the Scala collections hierarchy. It looks at examples of different collection types, such as Array, Set, and Map, covers how to apply functions to data in collections, and outlines the basics of structural sharing.
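A short sketch of the collection types mentioned above (the sample data and port numbers are illustrative):

    val numbers = Array(1, 2, 3, 4, 5)
    val doubled = numbers.map(_ * 2)                 // apply a function to every element
    val evens   = numbers.filter(_ % 2 == 0)

    val languages = Set("Scala", "Java", "Python")   // a Set keeps only unique elements
    val ports     = Map("hdfs" -> 9000, "yarn" -> 8088)

    println(doubled.mkString(", "))
    println(evens.mkString(", "))
    println(languages.contains("Scala"))
    println(ports.getOrElse("spark", 7077))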
Spark Programming
The aim of this video is to start your learning of Apache Spark fundamentals. It introduces you to the Spark component architecture and how different components are stitched together for Spark execution.
The aim of this video is to take the first step towards Spark programming. It explains the SparkContext and the need for Resilient Distributed Datasets (RDDs). It also explains how RDDs change the execution approach used in MapReduce.
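A minimal sketch of creating a SparkContext and a first RDD (the application name, master URL, and file path are illustrative, and the sketch assumes the RDD-based Scala API):

    import org.apache.spark.{SparkConf, SparkContext}

    // The SparkContext is the entry point for the RDD API.
    val conf = new SparkConf().setAppName("FirstRDD").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // Build RDDs from an in-memory collection and from a text file.
    val numbersRdd = sc.parallelize(1 to 100)
    val linesRdd   = sc.textFile("hdfs:///data/input.txt")  // illustrative path

    println(numbersRdd.count())
    sc.stop()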
The aim of this video is to explain the operations that can be applied to RDDs. These operations come in two forms: transformations and actions. It explains various operations under both categories with examples.
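As a quick sketch of the distinction (the sample data is illustrative, and `sc` is an existing SparkContext as shown in the earlier sketch):

    val words = sc.parallelize(Seq("spark", "hadoop", "spark", "scala"))

    // Transformations are lazy: they only describe a new RDD.
    val lengths  = words.map(_.length)
    val longOnly = words.filter(_.length > 4)

    // Actions trigger execution and return results to the driver.
    println(lengths.collect().mkString(", "))
    println(longOnly.count())
    println(words.distinct().collect().mkString(", "))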
Advanced Spark Programming
The aim of this video is to explain and demonstrate data loading and storing in Spark from different file types, such as text, CSV, JSON, and sequence files; different filesystems, such as the local filesystem, Amazon S3, and HDFS; and different databases, such as MySQL, Postgres, HBase, and so on.
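A rough sketch of loading and saving with the RDD API (all paths and URIs are illustrative, `sc` is an existing SparkContext, and the video's JSON, CSV, and database examples may use additional libraries or connectors):

    // Plain text from the local filesystem.
    val textRdd = sc.textFile("file:///tmp/input.txt")

    // The same call reads from HDFS or Amazon S3 by changing the URI scheme.
    // val hdfsRdd = sc.textFile("hdfs:///data/input.txt")
    // val s3Rdd   = sc.textFile("s3a://my-bucket/input.txt")

    // Save results back out as plain text and as a sequence file of key-value pairs.
    val counts = textRdd.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    counts.saveAsTextFile("file:///tmp/word-counts")
    counts.saveAsSequenceFile("file:///tmp/word-counts-seq")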
The aim of this video is to explain the motivations behind key-value-based RDDs and the creation of such RDDs. Next, it explains the various transformations and actions that can be applied to key-value-based RDDs. Finally, it explains data partitioning techniques in Spark.
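A minimal sketch of a key-value (pair) RDD and explicit partitioning (the sample data is illustrative, and `sc` is an existing SparkContext):

    import org.apache.spark.HashPartitioner

    val sales = sc.parallelize(Seq(("north", 100), ("south", 250), ("north", 75)))

    // Pair-specific transformations and actions.
    val totals = sales.reduceByKey(_ + _)   // transformation: combine values per key
    println(totals.collectAsMap())          // action: bring results to the driver
    println(sales.countByKey())             // action

    // Control how the data is partitioned across the cluster.
    val partitioned = sales.partitionBy(new HashPartitioner(4))
    println(partitioned.partitioner)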
The aim of this video is to explain a few more advanced concepts, such as accumulators, broadcast variables, and passing data to external programs using pipes.
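A brief sketch of an accumulator and a broadcast variable (it assumes the Spark 2.x accumulator API; the data, `sc`, and the external script path are illustrative):

    // Accumulator: executors add to it, only the driver reads the final value.
    val badRecords = sc.longAccumulator("badRecords")

    // Broadcast variable: a read-only lookup shipped once to every executor.
    val countryNames = sc.broadcast(Map("IN" -> "India", "US" -> "United States"))

    val codes = sc.parallelize(Seq("IN", "US", "??", "IN"))
    val resolved = codes.map { code =>
      if (!countryNames.value.contains(code)) badRecords.add(1)
      countryNames.value.getOrElse(code, "Unknown")
    }

    println(resolved.collect().mkString(", "))
    println(badRecords.value)

    // pipe() streams each partition through an external program, for example:
    // codes.pipe("/usr/local/bin/cleanup.sh")   // illustrative script path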
The aim of this video is to demonstrate the writing of Spark jobs using the Eclipse-based Scala IDE, creating Spark job JAR files, and, finally, copying and executing the Spark job on a Hadoop cluster.
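The packaged job typically looks like a self-contained object with a main method; the outline below and the spark-submit command are illustrative sketches, not the exact code built in the video:

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCountJob {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("WordCountJob"))
        val counts = sc.textFile(args(0))
          .flatMap(_.split("\\s+"))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
        counts.saveAsTextFile(args(1))
        sc.stop()
      }
    }

    // Submitted to the cluster roughly like this (paths are illustrative):
    // spark-submit --class WordCountJob --master yarn wordcount.jar /input /output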