Developer Training for Apache Spark and Hadoop

United Arab Emirates:
USD 3,195.00 excl. VAT

Duration: 4 Days

Who Should Attend

  • This course is best suited to developers and engineers who have programming experience.
  • Apache Spark examples and hands-on exercises are presented in Scala and Python. The ability to program in one of these languages is required.
  • Basic familiarity with the Linux command line is assumed.
  • Basic knowledge of SQL is helpful.

Prior knowledge of Hadoop is not required.

Associated Certification(s):

Upon completion of the course, attendees are encouraged to continue their study and register for the CCA Spark and Hadoop Developer exam. Certification is a great differentiator: it helps establish you as a leader in the field, providing employers and customers with tangible evidence of your skills and expertise.

Course Content

This four-day hands-on training course delivers the key concepts and expertise participants need to ingest and process data on a Hadoop cluster using the most up-to-date tools and techniques. Employing Hadoop ecosystem projects such as Spark (including Spark Streaming and Spark SQL), Flume, Kafka, and Sqoop, this training course is the best preparation for the real-world challenges faced by Hadoop developers. With Spark, developers can write sophisticated parallel applications that enable faster decisions, better decisions, and real-time actions, applied to a wide variety of use cases, architectures, and industries.

Course Objective

Through expert-led discussion and interactive, hands-on exercises, participants will learn how to: 

  • Distribute, store, and process data in a Hadoop cluster 
  • Write, configure, and deploy Apache Spark applications on a Hadoop cluster 
  • Use the Spark shell for interactive data analysis 
  • Process and query structured data using Spark SQL 
  • Use Spark Streaming to process a live data stream 
  • Use Flume and Kafka to ingest data for Spark Streaming
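To give a flavour of the hands-on exercises, a Spark shell session covering the first objectives might look like the following. This is a minimal sketch, not course material: it assumes a running `spark-shell` (which provides the `spark` session), and the HDFS path is purely hypothetical.

```scala
// Inside spark-shell: `spark` (SparkSession) is predefined.
// The input path is a hypothetical example file on HDFS.
val lines = spark.sparkContext.textFile("hdfs:///user/training/weblogs.txt")

// Classic RDD word count: split lines into words, pair each
// word with 1, then sum the counts per word across the cluster.
val counts = lines.flatMap(_.split(" "))
                  .map(word => (word, 1))
                  .reduceByKey(_ + _)

// Bring a small sample back to the driver for inspection.
counts.take(5).foreach(println)
```

The same exercise is typically repeated in PySpark, since the course supports both Scala and Python.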

Course Outline
  • Course Introduction
  • Introduction to Apache Hadoop and the Hadoop Ecosystem 
  • Apache Hadoop File Storage 
  • Data Processing on an Apache Hadoop Cluster 
  • Importing Relational Data with Apache Sqoop 
  • Apache Spark Basics 
  • Working with RDDs 
  • Aggregating Data with Pair RDDs
  • Writing and Running Apache Spark Applications 
  • Configuring Apache Spark Applications 
  • Parallel Processing in Apache Spark 
  • RDD Persistence 
  • Common Patterns in Apache Spark Data Processing 
  • DataFrames and Spark SQL 
  • Message Processing with Apache Kafka 
  • Capturing Data with Apache Flume 
  • Integrating Apache Flume and Apache Kafka 
  • Apache Spark Streaming: Introduction to DStreams 
  • Apache Spark Streaming: Processing Multiple Batches 
  • Apache Spark Streaming: Data Sources
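The "DataFrames and Spark SQL" topic above centres on querying structured data with SQL syntax. As a hedged illustration (file path, view name, and column names are all hypothetical, and a `spark` session from `spark-shell` is assumed):

```scala
// Load a hypothetical JSON dataset into a DataFrame.
val people = spark.read.json("hdfs:///user/training/people.json")

// Register the DataFrame as a temporary view so it can be
// queried with standard SQL via Spark SQL.
people.createOrReplaceTempView("people")

// Run a SQL query against the view and print the result.
spark.sql("SELECT name, age FROM people WHERE age > 21").show()
```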

Further information

Individual course planning is also possible for this course. If you would like to know more, please call us on +971 4 42 89 440 or contact us by email.