High Performance Spark. Best Practices for Scaling and Optimizing Apache Spark
- Authors:
- Holden Karau, Rachel Warren
- Rating:
- Be the first to rate this book
- Pages:
- 358
- Available formats:
- ePub, Mobi
Ebook description: High Performance Spark. Best Practices for Scaling and Optimizing Apache Spark
Apache Spark is amazing when everything clicks. But if you haven’t seen the performance improvements you expected, or still don’t feel confident enough to use Spark in production, this practical book is for you. Authors Holden Karau and Rachel Warren demonstrate performance optimizations to help your Spark queries run faster and handle larger data sizes, while using fewer resources.
Ideal for software engineers, data engineers, developers, and system administrators working with large-scale data applications, this book describes techniques that can reduce data infrastructure costs and developer hours. Not only will you gain a more comprehensive understanding of Spark, you’ll also learn how to make it sing.
With this book, you’ll explore:
- How Spark SQL’s new interfaces improve performance over Spark’s RDD data structure
- The choice between data joins in Core Spark and Spark SQL
- Techniques for getting the most out of standard RDD transformations
- How to work around performance issues in Spark’s key/value pair paradigm
- Writing high-performance Spark code without Scala or the JVM
- How to test for functionality and performance when applying suggested improvements
- Using Spark MLlib and Spark ML machine learning libraries
- Spark’s Streaming components and external community packages
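As a taste of the kind of optimization the book covers, here is a minimal, hypothetical sketch of a broadcast hash join in Spark SQL, written in Scala. It assumes a live `SparkSession` named `spark`; the file paths, table contents, and the `countryCode` column are illustrative, not taken from the book.

```scala
// Broadcast hash join sketch: ship the small table to every executor
// instead of shuffling both sides of the join by key.
import org.apache.spark.sql.functions.broadcast

val transactions = spark.read.parquet("/data/transactions") // large fact table
val countries    = spark.read.parquet("/data/countries")    // small lookup table

// The broadcast() hint tells the optimizer to replicate `countries`
// to all executors, so `transactions` is joined locally with no shuffle.
val joined = transactions.join(broadcast(countries), Seq("countryCode"))
```

Whether a broadcast join actually pays off depends on the small table fitting comfortably in executor memory; Chapter 4 of the book discusses how to choose between this and a shuffle-based join.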
You can read the ebook "High Performance Spark. Best Practices for Scaling and Optimizing Apache Spark" on:
- Inkbook, Kindle, Pocketbook, Onyx Boox, and other e-readers
- Windows, MacOS, and other systems
- Windows, Android, iOS, and HarmonyOS
- any device or application that supports the PDF, ePub, or Mobi formats
Questions? See the Help section »
Ebook details
- Ebook ISBN:
- 978-1-491-94315-1, 9781491943151
- Ebook publication date:
- 2017-05-25. The ebook publication date is often the day the title goes on sale and may not match the print edition's publication date. You can find additional details in the free sample; if in doubt, contact us at sklep@ebookpoint.pl.
- Publication language:
- English
- ePub file size:
- 4.1MB
- Mobi file size:
- 10.0MB
Ebook table of contents
- Preface
- First Edition Notes
- Supporting Books and Materials
- Conventions Used in This Book
- Using Code Examples
- O'Reilly Safari
- How to Contact the Authors
- How to Contact Us
- Acknowledgments
- 1. Introduction to High Performance Spark
- What Is Spark and Why Performance Matters
- What You Can Expect to Get from This Book
- Spark Versions
- Why Scala?
- To Be a Spark Expert You Have to Learn a Little Scala Anyway
- The Spark Scala API Is Easier to Use Than the Java API
- Scala Is More Performant Than Python
- Why Not Scala?
- Learning Scala
- Conclusion
- 2. How Spark Works
- How Spark Fits into the Big Data Ecosystem
- Spark Components
- Spark Model of Parallel Computing: RDDs
- Lazy Evaluation
- Performance and usability advantages of lazy evaluation
- Lazy evaluation and fault tolerance
- Lazy evaluation and debugging
- In-Memory Persistence and Memory Management
- Immutability and the RDD Interface
- Types of RDDs
- Functions on RDDs: Transformations Versus Actions
- Wide Versus Narrow Dependencies
- Spark Job Scheduling
- Resource Allocation Across Applications
- The Spark Application
- Default Spark Scheduler
- The Anatomy of a Spark Job
- The DAG
- Jobs
- Stages
- Tasks
- Conclusion
- 3. DataFrames, Datasets, and Spark SQL
- Getting Started with the SparkSession (or HiveContext or SQLContext)
- Spark SQL Dependencies
- Managing Spark Dependencies
- Avoiding Hive JARs
- Basics of Schemas
- DataFrame API
- Transformations
- Simple DataFrame transformations and SQL expressions
- Specialized DataFrame transformations for missing and noisy data
- Beyond row-by-row transformations
- Aggregates and groupBy
- Windowing
- Sorting
- Multi-DataFrame Transformations
- Set-like operations
- Plain Old SQL Queries and Interacting with Hive Data
- Data Representation in DataFrames and Datasets
- Tungsten
- Data Loading and Saving Functions
- DataFrameWriter and DataFrameReader
- Formats
- JSON
- JDBC
- Parquet
- Hive tables
- RDDs
- Local collections
- Additional formats
- Save Modes
- Partitions (Discovery and Writing)
- Datasets
- Interoperability with RDDs, DataFrames, and Local Collections
- Compile-Time Strong Typing
- Easier Functional (RDD like) Transformations
- Relational Transformations
- Multi-Dataset Relational Transformations
- Grouped Operations on Datasets
- Extending with User-Defined Functions and Aggregate Functions (UDFs, UDAFs)
- Query Optimizer
- Logical and Physical Plans
- Code Generation
- Large Query Plans and Iterative Algorithms
- Debugging Spark SQL Queries
- JDBC/ODBC Server
- Conclusion
- 4. Joins (SQL and Core)
- Core Spark Joins
- Choosing a Join Type
- Choosing an Execution Plan
- Speeding up joins by assigning a known partitioner
- Speeding up joins using a broadcast hash join
- Partial manual broadcast hash join
- Spark SQL Joins
- DataFrame Joins
- Self joins
- Broadcast hash joins
- Dataset Joins
- Conclusion
- 5. Effective Transformations
- Narrow Versus Wide Transformations
- Implications for Performance
- Implications for Fault Tolerance
- The Special Case of coalesce
- What Type of RDD Does Your Transformation Return?
- Minimizing Object Creation
- Reusing Existing Objects
- Using Smaller Data Structures
- Iterator-to-Iterator Transformations with mapPartitions
- What Is an Iterator-to-Iterator Transformation?
- Space and Time Advantages
- An Example
- Set Operations
- Reducing Setup Overhead
- Shared Variables
- Broadcast Variables
- Accumulators
- Reusing RDDs
- Cases for Reuse
- Iterative computations
- Multiple actions on the same RDD
- If the cost to compute each partition is very high
- Deciding if Recompute Is Inexpensive Enough
- Types of Reuse: Cache, Persist, Checkpoint, Shuffle Files
- Persist and cache
- Checkpointing
- Checkpointing example
- Alluxio (nee Tachyon)
- LRU Caching
- Shuffle files
- Noisy Cluster Considerations
- Interaction with Accumulators
- Conclusion
- 6. Working with Key/Value Data
- The Goldilocks Example
- Goldilocks Version 0: Iterative Solution
- How to Use PairRDDFunctions and OrderedRDDFunctions
- Actions on Key/Value Pairs
- What's So Dangerous About the groupByKey Function
- Goldilocks Version 1: groupByKey Solution
- Why GroupByKey fails
- Choosing an Aggregation Operation
- Dictionary of Aggregation Operations with Performance Considerations
- Preventing out-of-memory errors with aggregation operations
- Multiple RDD Operations
- Co-Grouping
- Partitioners and Key/Value Data
- Using the Spark Partitioner Object
- Hash Partitioning
- Range Partitioning
- Custom Partitioning
- Preserving Partitioning Information Across Transformations
- Using narrow transformations that preserve partitioning
- Leveraging Co-Located and Co-Partitioned RDDs
- Dictionary of Mapping and Partitioning Functions PairRDDFunctions
- Dictionary of OrderedRDDOperations
- Sorting by Two Keys with SortByKey
- Secondary Sort and repartitionAndSortWithinPartitions
- Leveraging repartitionAndSortWithinPartitions for a Group by Key and Sort Values Function
- How Not to Sort by Two Orderings
- Goldilocks Version 2: Secondary Sort
- Defining the custom partitioner
- Filtering on each partition
- Combine the elements associated with one key
- Performance
- A Different Approach to Goldilocks
- Map to (cell value, column index) pairs
- Sort and count values on each partition
- Determine location of rank statistics on each partition
- Filter for rank statistics
- Goldilocks Version 3: Sort on Cell Values
- Straggler Detection and Unbalanced Data
- Back to Goldilocks (Again)
- Goldilocks Version 4: Reduce to Distinct on Each Partition
- Aggregate to ((cell value, column index), count) on each partition
- Sort and find rank statistics
- Goldilocks postmortem
- Conclusion
- 7. Going Beyond Scala
- Beyond Scala within the JVM
- Beyond Scala, and Beyond the JVM
- How PySpark Works
- PySpark RDDs
- PySpark DataFrames and Datasets
- Accessing the backing Java objects and mixing Scala code
- PySpark dependency management
- Installing PySpark
- How SparkR Works
- Spark.jl (Julia Spark)
- How Eclair JS Works
- Spark on the Common Language Runtime (CLR): C# and Friends
- Calling Other Languages from Spark
- Using Pipe and Friends
- JNI
- Java Native Access (JNA)
- Underneath Everything Is FORTRAN
- Getting to the GPU
- The Future
- Conclusion
- 8. Testing and Validation
- Unit Testing
- General Spark Unit Testing
- Factoring your code for testability
- Regular Spark jobs (testing with RDDs)
- Streaming
- Mocking RDDs
- Testing DataFrames
- Getting Test Data
- Generating Large Datasets
- Sampling
- Property Checking with ScalaCheck
- Computing RDD Difference
- Integration Testing
- Choosing Your Integration Testing Environment
- Local mode
- Docker-based
- Yarn MiniCluster
- Verifying Performance
- Spark Counters for Verifying Performance
- Projects for Verifying Performance
- Job Validation
- Conclusion
- 9. Spark MLlib and ML
- Choosing Between Spark MLlib and Spark ML
- Working with MLlib
- Getting Started with MLlib (Organization and Imports)
- MLlib Feature Encoding and Data Preparation
- Working with Spark vectors
- Preparing textual data
- Preparing data for supervised learning
- Feature Scaling and Selection
- MLlib Model Training
- Predicting
- Serving and Persistence
- Saveable (internal format)
- PMML
- Custom
- Model Evaluation
- Working with Spark ML
- Spark ML Organization and Imports
- Pipeline Stages
- Explain Params
- Data Encoding
- Data Cleaning
- Spark ML Models
- Putting It All Together in a Pipeline
- Training a Pipeline
- Accessing Individual Stages
- Data Persistence and Spark ML
- Automated model selection (parameter search)
- Extending Spark ML Pipelines with Your Own Algorithms
- Custom transformers
- Custom estimators
- Model and Pipeline Persistence and Serving with Spark ML
- General Serving Considerations
- Conclusion
- 10. Spark Components and Packages
- Stream Processing with Spark
- Sources and Sinks
- Receivers
- Repartitioning
- Batch Intervals
- Data Checkpoint Intervals
- Considerations for DStreams
- Output operations
- Considerations for Structured Streaming
- Data sources
- Output operations
- Custom sinks
- Machine learning with Structured Streaming
- Stream status and debugging
- High Availability Mode (or Handling Driver Failure or Checkpointing)
- GraphX
- Using Community Packages and Libraries
- Creating a Spark Package
- Conclusion
- A. Tuning, Debugging, and Other Things Developers Like to Pretend Don't Exist
- Spark Tuning and Cluster Sizing
- How to Adjust Spark Settings
- How to Determine the Relevant Information About Your Cluster
- Basic Spark Core Settings: How Many Resources to Allocate to the Spark Application?
- Calculating Executor and Driver Memory Overhead
- How Large to Make the Spark Driver
- A Few Large Executors or Many Small Executors?
- Many small executors
- Many large executors
- Allocating Cluster Resources and Dynamic Allocation
- Restrictions on dynamic allocation
- Dividing the Space Within One Executor
- Number and Size of Partitions
- Serialization Options
- Kryo
- Spark settings conclusion
- Some Additional Debugging Techniques
- Out of Disk Space Errors
- Logging
- Configuring logging
- Accessing logs
- Attaching debuggers
- Debugging in notebooks
- Python debugging
- Debugging conclusion
- Index