Watch the video with slide synchronization on InfoQ.com! http://www.infoq.com/presentations/apache-spark-big-data
InfoQ.com: News & Community Site
• 750,000 unique visitors/month
• Published in 4 languages (English, Chinese, Japanese and Brazilian Portuguese)
• Post content from our QCon conferences
• News 15-20 / week • Articles 3-4 / week • Presentations (videos) 12-15 / week • Interviews 2-3 / week • Books 1 / month
Presented at QCon San Francisco (www.qconsf.com)
Purpose of QCon: to empower software development by facilitating the spread of knowledge and innovation
Strategy: a practitioner-driven conference designed for you, the influencers of change and innovation in your teams; speakers and topics driving the evolution and innovation; connecting and catalyzing the influencers and innovators
Highlights: attended by more than 12,000 delegates since 2007; held in 9 cities worldwide
Unified Big Data Processing with Apache Spark Matei Zaharia @matei_zaharia
What is Apache Spark? Fast & general engine for big data processing Generalizes MapReduce model to support more types of processing Most active open source project in big data
About Databricks
Founded by the creators of Spark in 2013
Continues to drive open source Spark development, and offers a cloud service (Databricks Cloud)
Partners with Cloudera, MapR, Hortonworks and DataStax to support Spark
Spark Community
[Charts: activity in the past 6 months across MapReduce, YARN, HDFS, Storm and Spark, measured as number of commits (0-2000) and lines of code changed (0-350,000).]
Community Growth
[Chart: contributors per month to Spark, 2010-2014, growing to roughly 100.]
2-3x more activity than Hadoop, Storm, MongoDB, NumPy, D3, Julia, …
Overview Why a unified engine? Spark execution model Why was Spark so general? What’s next
History: Cluster Programming Models (2004 onward)
MapReduce A general engine for batch processing
Beyond MapReduce MapReduce was great for batch processing, but users quickly needed to do more: > More complex, multi-pass algorithms > More interactive ad-hoc queries > More real-time stream processing Result: many specialized systems for these workloads
Big Data Systems Today
General batch processing: MapReduce
Specialized systems for new workloads: Pregel, Giraph, Presto, Storm, Dremel, Drill, Impala, S4, …
Problems with Specialized Systems More systems to manage, tune, deploy Can’t combine processing types in one application > Even though many pipelines need to do this! > E.g. load data with SQL, then run machine learning In many pipelines, data exchange between engines is the dominant cost!
Big Data Systems Today
General batch processing: MapReduce
Specialized systems for new workloads: Pregel, Giraph, Presto, Storm, Dremel, Drill, Impala, S4, …
Unified engine?
Overview Why a unified engine? Spark execution model Why was Spark so general? What’s next
Background
Recall the three workloads that were problems for MapReduce:
> More complex, multi-pass algorithms
> More interactive ad-hoc queries
> More real-time stream processing
While these look different, all three need one thing that MapReduce lacks: efficient data sharing
Data Sharing in MapReduce
[Diagram: an iterative job does an HDFS read and an HDFS write around every iteration; queries 1, 2 and 3 each repeat an HDFS read of the input to produce their results.]
Slow due to data replication and disk I/O
What We'd Like
[Diagram: one-time processing loads the input into distributed memory; iterations and queries then share it there.]
10-100× faster than network and disk
Spark Model
Resilient Distributed Datasets (RDDs)
> Collections of objects that can be stored in memory or on disk across a cluster
> Built via parallel transformations (map, filter, …)
> Fault-tolerant without replication
Example: Log Mining
Load error messages from a log into memory, then interactively search for various patterns.

  lines = spark.textFile("hdfs://...")                      # base RDD
  errors = lines.filter(lambda s: s.startswith("ERROR"))    # transformed RDD
  messages = errors.map(lambda s: s.split('\t')[2])
  messages.cache()

  messages.filter(lambda s: "foo" in s).count()             # action
  messages.filter(lambda s: "bar" in s).count()
  . . .

[Diagram: the driver ships tasks to three workers; each worker reads one block of the file (Block 1-3), caches its partition (Cache 1-3), and returns results to the driver.]
Full-text search of Wikipedia in <1 sec (vs 20 sec for on-disk data)
Fault Tolerance
RDDs track lineage info to rebuild lost data.

  file.map(lambda rec: (rec.type, 1))
      .reduceByKey(lambda x, y: x + y)
      .filter(lambda (type, count): count > 10)

[Diagram: input file → map → reduce → filter; a lost partition is rebuilt by replaying this lineage.]
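To make lineage concrete, here is a small Scala sketch (not from the talk; the file path and field layout are illustrative) that builds the same map/reduceByKey/filter chain and prints its lineage via the real RDD method toDebugString, which shows what Spark would replay to rebuild a lost partition:

  val counts = sc.textFile("hdfs://.../events")
    .map(rec => (rec.split("\t")(0), 1))           // (type, 1) pairs, assuming type is the first field
    .reduceByKey(_ + _)                            // count per type
    .filter { case (_, count) => count > 10 }
  println(counts.toDebugString)                    // lineage: textFile, map, reduceByKey, filter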
Example: Logistic Regression
[Chart: running time (s) vs number of iterations (1, 5, 10, 20, 30) for Hadoop and Spark.]
Hadoop: 110 s / iteration
Spark: first iteration 80 s, later iterations 1 s
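The gap comes from caching: the first iteration pays the cost of loading data from HDFS, while every later iteration scans the in-memory copy. A minimal sketch of such an iterative job using MLlib's Spark 1.x API (not the benchmark code itself; the path and the LIBSVM format are assumptions):

  import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
  import org.apache.spark.mllib.util.MLUtils

  // Load labeled points and cache them; only the first pass touches disk
  val points = MLUtils.loadLibSVMFile(sc, "hdfs://.../points.libsvm").cache()
  val model = LogisticRegressionWithSGD.train(points, 100)   // 100 passes over the cached RDD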
Spark in Scala and Java

  // Scala:
  val lines = sc.textFile(...)
  lines.filter(s => s.contains("ERROR")).count()

  // Java:
  JavaRDD<String> lines = sc.textFile(...);
  lines.filter(s -> s.contains("ERROR")).count();
How General Is It?
Libraries Built on Spark
On top of Spark Core: Spark SQL (relational), Spark Streaming (real-time), MLlib (machine learning), GraphX (graph)
Spark SQL
Represents tables as RDDs: Tables = Schema + Data = SchemaRDD

From Hive:
  c = HiveContext(sc)
  rows = c.sql("select text, year from hivetable")
  rows.filter(lambda r: r.year > 2013).collect()

From JSON:
  c.jsonFile("tweets.json").registerTempTable("tweets")
  c.sql("select text, user.name from tweets")

  tweets.json:
  {"text": "hi", "user": { "name": "matei", "id": 123 }}
Spark Streaming
Represents streams as a series of RDDs over time.
[Diagram: one RDD per time interval.]

  val spammers = sc.sequenceFile("hdfs://spammers.seq")
  sc.twitterStream(...)
    .filter(t => t.text.contains("QCon"))
    .transform(tweets => tweets.map(t => (t.user, t)).join(spammers))
    .print()
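As a runnable counterpart to the schematic snippet above, here is a minimal word-count sketch against the Spark Streaming API (assuming spark-streaming is on the classpath and a text source is listening on localhost:9999):

  import org.apache.spark.streaming.{Seconds, StreamingContext}

  val ssc = new StreamingContext(sc, Seconds(1))      // one RDD per 1-second batch
  val lines = ssc.socketTextStream("localhost", 9999)
  lines.flatMap(_.split(" "))
       .map(word => (word, 1))
       .reduceByKey(_ + _)                            // word counts within each batch's RDD
       .print()
  ssc.start()
  ssc.awaitTermination()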
MLlib
Vectors, Matrices = RDD[Vector]
Iterative computation

  points = sc.textFile("data.txt").map(parsePoint)
  model = KMeans.train(points, 10)
  model.predict(newPoint)
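The slide leaves parsePoint undefined; a hedged Scala version of the same flow, assuming data.txt holds space-separated coordinates, might look like:

  import org.apache.spark.mllib.clustering.KMeans
  import org.apache.spark.mllib.linalg.Vectors

  val points = sc.textFile("data.txt")
    .map(line => Vectors.dense(line.split(' ').map(_.toDouble)))   // the assumed parsePoint
    .cache()                                                       // KMeans iterates over this RDD
  val model = KMeans.train(points, 10, 20)       // k = 10 clusters, up to 20 iterations
  val cluster = model.predict(Vectors.dense(1.0, 2.0))             // newPoint, invented here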
GraphX Represents graphs as RDDs of edges and vertices
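A minimal sketch of what that representation looks like in code (the vertex and edge data are invented for illustration):

  import org.apache.spark.graphx.{Edge, Graph}

  val vertices = sc.parallelize(Seq((1L, "alice"), (2L, "bob")))   // RDD of (VertexId, attribute)
  val edges = sc.parallelize(Seq(Edge(1L, 2L, "follows")))         // RDD of edges with attributes
  val graph = Graph(vertices, edges)                               // a property graph built from two RDDs
  println(graph.inDegrees.collect().mkString(", "))                // graph ops run as RDD ops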
Combining Processing Types

  // Load data using SQL
  val points = ctx.sql(
    "select latitude, longitude from historic_tweets")

  // Train a machine learning model
  val model = KMeans.train(points, 10)

  // Apply it to a stream
  sc.twitterStream(...)
    .map(t => (model.closestCenter(t.location), 1))
    .reduceByWindow("5s", _ + _)
Composing Workloads
Separate systems: each stage (ETL, train, query) does its own HDFS read and its own HDFS write.
Spark: one HDFS read, then ETL, train and query run back-to-back in one engine, then one HDFS write.
Performance vs Specialized Systems
[Charts: SQL response time in seconds for Hive, Impala (disk), Impala (mem), Spark (disk), Spark (mem); ML response time in minutes for Mahout, GraphLab, Spark; streaming throughput in MB/s/node for Storm and Spark.]
On-Disk Performance: Petabyte Sort
Spark beat last year's Sort Benchmark winner, Hadoop, by 3× using 10× fewer machines.

               2013 Record (Hadoop)   Spark 100 TB   Spark 1 PB
  Data Size    102.5 TB               100 TB         1000 TB
  Time         72 min                 23 min         234 min
  Nodes        2100                   206            190
  Cores        50400                  6592           6080
  Rate/Node    0.67 GB/min            20.7 GB/min    22.5 GB/min

tinyurl.com/spark-sort
Overview Why a unified engine? Spark execution model Why was Spark so general? What’s next
Why was Spark so General? In a world of growing data complexity, understanding this can help us design new tools / pipelines Two perspectives: > Expressiveness perspective > Systems perspective
1. Expressiveness Perspective Spark ≈ MapReduce + fast data sharing
1. Expressiveness Perspective
MapReduce can emulate any distributed system: one MR step gives local computation plus all-to-all communication, and chaining steps covers the rest. The real question is how quickly data can be shared across steps. Spark: RDDs. How low is this latency? Spark: ~100 ms
2. Systems Perspective
Main bottlenecks in clusters are the network and I/O. Any system that lets apps control these resources can match the speed of specialized ones. In Spark:
> Users control data partitioning & caching (see the sketch below)
> We implement the data structures and algorithms of specialized systems within Spark
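A small sketch of that control, with a hypothetical file and key layout: explicit partitioning co-locates records by key the way a specialized engine would, and caching keeps the partitioned data in memory across jobs.

  import org.apache.spark.HashPartitioner

  val pairs = sc.textFile("hdfs://.../edges")
    .map { line => val Array(src, dst) = line.split("\t"); (src, dst) }
    .partitionBy(new HashPartitioner(128))   // user-chosen partitioning: records with the same key land together
    .cache()                                 // user-chosen caching: later jobs reuse the in-memory layout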
Examples Spark SQL > A SchemaRDD holds records for each chunk of data (multiple rows), with columnar compression GraphX > GraphX represents graphs as an RDD of HashMaps so that it can join quickly against each partition
Result Spark can leverage most of the latest innovations in databases, graph processing, machine learning, … Users get a single API that composes very efficiently More info: tinyurl.com/matei-thesis
Overview Why a unified engine? Spark execution model Why was Spark so general? What’s next
What's Next for Spark
While Spark has been around since 2009, many pieces are just beginning: 300 contributors, 2 whole libraries new this year, and big features in the works.
Spark 1.2 (coming in December)
New machine learning pipelines API
> Featurization & parameter search, similar to scikit-learn
Python API for Spark Streaming
Spark SQL pluggable data sources
> Hive, JSON, Parquet, Cassandra, ORC, …
Scala 2.11 support
Beyond Hadoop
[Diagram: batch, interactive and streaming workloads, plus "your application here", on one API, running over Hadoop, Cassandra, Mesos, public clouds, …]
Unified API across workloads, storage systems and environments
Learn More Downloads and tutorials: spark.apache.org Training: databricks.com/training (free videos) Databricks Cloud: databricks.com/cloud
www.spark-summit.org