
Unleashing the power of Apache Spark™ on a K8s cluster with Fyrefuse

2 minute read

Fyrefuse brings scale and parallelism to your data pipelines by leveraging the power of Apache Spark™ on a K8s cluster, without complexity.

What is Spark?

In this blog post, we will introduce Apache Spark and explore the areas where its particular set of capabilities shows the most potential. We will also discuss why your teams should trust Fyrefuse to run Apache Spark™ on Kubernetes to maximum advantage without technical complexity.

Apache Spark is a general-purpose distributed data processing engine adopted in a wide range of use cases. Alongside its core processing engine, Spark ships with libraries for SQL, machine learning, and stream processing that can be used together in the same application.

Spark supports Java, Python, Scala, and R. Application developers and data scientists incorporate Spark into their applications to rapidly query, analyze, and transform data at scale, particularly in ETL and SQL batch jobs across large data sets.

Spark for data processing

Spark builds on the MapReduce programming model but with less expensive shuffles during data processing. Its main advantages over competing engines are in-memory data storage and near-real-time processing, which can make it several times faster than other big data technologies. Spark optimizes the steps in data processing workflows when computing big data queries and provides a higher-level API to improve productivity.
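To see what the MapReduce model Spark inherits actually does, here is a toy word count in plain Python (not Spark itself): a map phase emits key-value pairs, a shuffle phase groups them by key, and a reduce phase combines each group. The shuffle is the expensive step that Spark works to make cheaper.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in every line."""
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle_phase(pairs):
    """Shuffle: group intermediate pairs by key; in a real engine this
    moves data between workers and is the costly step."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into a final result."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["spark on k8s", "spark on yarn"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
# counts == {"spark": 2, "on": 2, "k8s": 1, "yarn": 1}
```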

Spark holds intermediate results in memory rather than writing them to disk, which is especially valuable when the same data is reused multiple times, but it can operate both in memory and on disk. When a dataset does not fit in memory, Spark operators spill to disk: Spark stores as much data in memory as possible and writes the remainder to disk, while continuing to execute in parallel.
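As a toy sketch of the memory-then-disk idea (plain Python, not Spark internals; the class name and size limit are made up), here is a counter that aggregates in memory until a size limit is reached, spills partial results to a temporary file, and merges all spills at the end:

```python
import json
import os
import tempfile

class SpillingCounter:
    """Count keys in memory; spill partial counts to disk past a limit."""

    def __init__(self, max_in_memory=2):
        self.max_in_memory = max_in_memory
        self.counts = {}
        self.spill_files = []

    def add(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1
        if len(self.counts) > self.max_in_memory:
            self._spill()

    def _spill(self):
        # Write the in-memory partial counts to a temp file and reset.
        fd, path = tempfile.mkstemp(suffix=".json")
        with os.fdopen(fd, "w") as f:
            json.dump(self.counts, f)
        self.spill_files.append(path)
        self.counts = {}

    def result(self):
        # Merge the in-memory counts with every spilled partial result.
        merged = dict(self.counts)
        for path in self.spill_files:
            with open(path) as f:
                for key, count in json.load(f).items():
                    merged[key] = merged.get(key, 0) + count
            os.remove(path)
        return merged

counter = SpillingCounter(max_in_memory=2)
for key in ["a", "b", "c", "a", "b", "a"]:
    counter.add(key)
totals = counter.result()
# totals == {"a": 3, "b": 2, "c": 1}
```

Spark's real spill machinery is far more sophisticated, but the shape is the same: work in memory as long as possible, fall back to disk when you must, and keep producing correct results either way.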

Benefits of running Apache Spark on Kubernetes

  • Containerization

The concept of containerization is inherited from traditional software engineering and applies to Spark too. This is the starting point if you decide to run Spark on Kubernetes: it increases portability, simplifies dependency management, and helps build reliable workflows.

To summarize, containerization speeds up the development iteration cycle to help deliver better applications and services.
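A hypothetical Dockerfile sketch of what this looks like in practice (the base image, tag, and file names are illustrative, not Fyrefuse specifics): the job and all of its dependencies are baked into a single image that runs identically everywhere.

```dockerfile
# Illustrative only: base image and tag are placeholders.
FROM apache/spark:3.5.0

WORKDIR /app

# Bake Python dependencies into the image so every executor
# starts with exactly the same environment.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Ship the job code itself in the image.
COPY job.py .
```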

  • Efficient parallel processing

Most cluster managers rely on isolation if you want to reuse the same cluster for concurrent Spark applications. Indeed, many platforms suggest running transient clusters for production jobs and terminating them once each job finishes. The problem with this approach is the compute and economic resources it wastes.

Spark on Kubernetes instead runs a cluster of computing processes: a group of separate workers that can accomplish more, and more efficiently, than a single worker. They share information, break down tasks, and combine their outputs into a single set of results.
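A hedged sketch of how a Spark application is typically submitted to a Kubernetes cluster with the stock `spark-submit` tool (the API server host, image name, and namespace below are placeholders): Kubernetes schedules the driver and executor pods, and the `spark.executor.instances` setting controls how many parallel workers run.

```
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:443 \
  --deploy-mode cluster \
  --name demo-pipeline \
  --conf spark.executor.instances=4 \
  --conf spark.kubernetes.container.image=<your-registry>/spark-job:latest \
  --conf spark.kubernetes.namespace=spark-jobs \
  local:///app/job.py
```

Fyrefuse's point is that users should not have to assemble commands like this by hand; the platform handles the submission and cluster configuration behind the scenes.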

Conclusion

Running Spark on Kubernetes is more flexible and can even be cheaper. Fyrefuse enables non-technical users to quickly set up Kubernetes clusters for optimized execution of Spark in-memory jobs in almost any supported programming language, scaling quickly and efficiently.