Understanding persist() in Spark
In Spark, the persist() method saves a dataset (an RDD or a DataFrame) in memory or on disk so it can be reused multiple times without being recomputed each time. If you need the same data repeatedly, persist() can speed up your work and add some fault tolerance.
Syntax of persist()
| Usage | Syntax (DataFrame) | Syntax (RDD) |
|---|---|---|
| No Argument | dfPersist = df.persist() | rdd.persist() |
| With Argument | dfPersist = df.persist(StorageLevel.XXXXXXX) | rdd.persist(StorageLevel.XXXXXXX) |
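As a quick, self-contained sketch of both forms (the local master, the app name, and the sample data below are placeholders for this example):

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("PersistSyntax").getOrCreate()

# DataFrame, no argument: uses the DataFrame default storage level
df = spark.range(100)  # a small illustrative DataFrame
dfPersist = df.persist()

# RDD, with argument: the storage level is passed explicitly
rdd = spark.sparkContext.parallelize(range(100))
rdd.persist(StorageLevel.DISK_ONLY)
```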
Storage Levels
Note: The default storage level is different for DataFrames and RDDs:
- DataFrames: The default is MEMORY_AND_DISK. (DataFrame API Documentation)
- RDDs: The default is MEMORY_ONLY. (RDD Persistence Documentation)
Different Values for StorageLevel
StorageLevel values are available in the pyspark.StorageLevel class. Here is the complete list:
- DISK_ONLY: Store data on disk only.
- DISK_ONLY_2: Store data on disk with replication to two nodes.
- DISK_ONLY_3: Store data on disk with replication to three nodes.
- MEMORY_AND_DISK: Store data in memory and spill to disk if necessary.
- MEMORY_AND_DISK_2: Store data in memory and spill to disk if necessary, with replication to two nodes.
- MEMORY_AND_DISK_SER: Store data in JVM memory as serialized objects and spill to disk if necessary.
- MEMORY_AND_DISK_DESER: Store data in JVM memory as deserialized objects and spill to disk if necessary.
- MEMORY_ONLY: Store data as deserialized objects in JVM memory only.
- MEMORY_ONLY_2: Store data in memory only, with replication to two nodes.
- NONE: No storage level. (Note: This can’t be used as an argument.)
- OFF_HEAP: Store data in off-heap memory (experimental).
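Under the hood, each of these names is simply a StorageLevel object whose flags describe where and how the data is stored. The short sketch below prints a couple of them and builds a custom level with the same constructor (the exact printed form may vary between Spark versions):

```python
from pyspark import StorageLevel

# Each predefined level encodes (useDisk, useMemory, useOffHeap, deserialized, replication)
print(StorageLevel.MEMORY_ONLY)        # e.g. StorageLevel(False, True, False, False, 1)
print(StorageLevel.MEMORY_AND_DISK_2)  # e.g. StorageLevel(True, True, False, False, 2)

# A custom level built with the same constructor:
# memory + disk, deserialized, replicated to two nodes (illustration only)
custom_level = StorageLevel(True, True, False, True, 2)
print(custom_level)
```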
Note: The official Spark documentation states that the default storage level for RDD.persist() is MEMORY_ONLY. For df.persist(), it is MEMORY_AND_DISK, and starting from version 3.4.0, it is MEMORY_AND_DISK_DESER. (Spark Persistence Documentation)
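You can verify these defaults yourself by persisting without an argument and inspecting the resulting storage level. A minimal sketch, assuming an existing SparkSession named spark:

```python
df = spark.range(10)
rdd = spark.sparkContext.parallelize(range(10))

print(df.persist().storageLevel)        # DataFrame default (MEMORY_AND_DISK, or MEMORY_AND_DISK_DESER on 3.4+)
print(rdd.persist().getStorageLevel())  # RDD default (MEMORY_ONLY)
```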
Example
Imagine you have a list of numbers and you want to run some calculations on it multiple times. Here's how you can use persist():
```python
from pyspark import SparkContext

# Initialize the Spark Context
sc = SparkContext("local", "PersistExample")

# Create an RDD from a Python list
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
rdd = sc.parallelize(numbers)

# Persist the RDD in memory (MEMORY_ONLY is the RDD default)
rdd.persist()

# Run some actions; the first one materializes the persisted data
print(rdd.count())    # Count the numbers
print(rdd.collect())  # Collect all the numbers

# This will show the storage level being used
print(rdd.getStorageLevel())
```
In this example, persist() keeps the rdd in memory, so when you run more actions like count() or collect(), Spark doesn't have to recalculate the data each time.
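The same idea applies to DataFrames. Here is a sketch of a DataFrame version of the example above (the column name and sample data are made up for illustration):

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("PersistDataFrameExample").getOrCreate()

# A small illustrative DataFrame
df = spark.createDataFrame([(n,) for n in range(1, 11)], ["number"])

# Persist with an explicit storage level
df.persist(StorageLevel.MEMORY_AND_DISK)

# Reuse the persisted DataFrame in several actions
print(df.count())
df.groupBy().sum("number").show()

# Inspect the storage level in use
print(df.storageLevel)
```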
Removing Saved Data
When you don't need the saved data anymore, you can remove it from memory or disk using unpersist():

```python
rdd.unpersist()
```
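unpersist() is non-blocking by default, so the data is removed asynchronously. Recent Spark versions also accept a blocking flag if you want to wait until the blocks are actually freed; a small sketch (the flag works for DataFrames as well):

```python
# Default: non-blocking removal
rdd.unpersist()

# Block until the cached data is actually removed (recent Spark versions)
rdd.unpersist(blocking=True)

# The RDD no longer reports a useful storage level
print(rdd.getStorageLevel())
```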
When to use persist()
So, we have learned about persist(). Does that mean you should use persist() every time you have a DataFrame (df)? No. Here are the situations where it is recommended (a concrete sketch follows this list):

- Running Multiple Machine Learning Models: When you train several machine learning models on the same dataset, persist() can save time.
- Interactive Data Analysis: If you are using a notebook for interactive analysis and run multiple queries against the same data, persist() makes the results come back faster.
- ETL Processes: In an ETL (Extract, Transform, Load) pipeline, if you reuse the same cleaned DataFrame in many steps, persist() can help.
- Graph Processing: When processing graphs with libraries like GraphX, persist() can improve performance.
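As a concrete sketch of the interactive-analysis case (assuming an existing SparkSession named spark and a made-up sales DataFrame), you persist once and then run several independent queries against the same data:

```python
from pyspark import StorageLevel

# Hypothetical cleaned dataset reused by several queries
sales = spark.createDataFrame(
    [("north", 100.0), ("south", 250.0), ("north", 75.0)],
    ["region", "amount"],
)
sales.persist(StorageLevel.MEMORY_AND_DISK)

# Each action reuses the persisted data instead of recomputing it
print(sales.count())
sales.groupBy("region").sum("amount").show()
sales.filter(sales.amount > 90).show()

sales.unpersist()
```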
Remember, persist() comes later in your performance-tuning toolbox. For significant improvements, focus first on better data partitioning (for example, fixing data skew) and on broadcast variables for joins. Don't expect persist() to magically improve speed.

Also, if the DataFrame (df) you persist is very large, it will consume a lot of memory on the worker nodes, leaving them without enough memory for their other tasks. To avoid this, first assess the workers' memory and the size of the df you are persisting.
For smaller DataFrames, use MEMORY_ONLY, and for larger ones, use MEMORY_AND_DISK (this spills some data to disk if memory is low). Remember that MEMORY_AND_DISK also has a performance cost because it increases I/O operations. Additionally, large DataFrames can increase the frequency of garbage collection in the JVM, which affects overall performance.
Conclusion
Using persist() in Spark is like telling Spark to remember your data so it doesn't have to start from scratch every time you need it. This can make your work much faster, especially when working with large datasets, and it also adds fault tolerance.
Understanding cache() in Spark

The cache() function is a shorthand for persist() with the default storage level. For RDDs, this means cache() is the same as persist(StorageLevel.MEMORY_ONLY); for DataFrames, it is the same as persist() with the default MEMORY_AND_DISK level (MEMORY_AND_DISK_DESER from Spark 3.4.0 onward).
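A short sketch of this equivalence (assuming a local SparkSession; the printed levels should match the defaults described earlier):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("CacheExample").getOrCreate()

rdd = spark.sparkContext.parallelize(range(10))
df = spark.range(10)

rdd.cache()  # same as rdd.persist(StorageLevel.MEMORY_ONLY)
df.cache()   # same as df.persist() with the DataFrame default level

print(rdd.getStorageLevel())  # expected: MEMORY_ONLY
print(df.storageLevel)        # expected: MEMORY_AND_DISK (MEMORY_AND_DISK_DESER on 3.4+)
```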