Spark-frame is a library that super-charges your Spark DataFrames!
It brings several utility methods and transformation functions for PySpark DataFrames.
Here is a quick list of the most exciting features 😎
`spark_frame.data_diff.compare_dataframes`
: Compare two SQL tables or DataFrames and generate an HTML report to view the result. And yes, this is completely free, open source, and it works even with complex data structures! It also detects column reordering and can handle type changes. Go check it out ❗

`spark_frame.nested`
: Did you ever think manipulating complex data structures in SQL or Spark was a nightmare 🎃? You just found the solution! The `nested` module makes those manipulations much cleaner and simpler. Get started over there 🚀

`spark_frame.transformations`
: A wide collection of generic DataFrame transformations.
  - Ever wanted to apply a transformation to every field of a DataFrame depending on its name or type? Easy as pie 🍰
  - Ever wanted to rename every field of a DataFrame, including the deeply nested ones? Done 👌
  - Ever wanted to analyze the content of a DataFrame, but `DataFrame.describe()` does not work with complex data types? You're welcome 🙏

`spark_frame.schema_utils`
: Need to dump the schema of a DataFrame somewhere to be able to load it later? We got you covered 👍

`spark_frame.graph.ascending_forest_traversal`
: Need an algorithm that takes the adjacency matrix of a tree 🌳 (or forest) graph and associates each node to its corresponding root node? But that other algorithm you tried went into an infinite loop ∞ because your graph isn't really a tree and occasionally contains cycles? Try this 🌲
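To give a first taste, here is a minimal sketch of the data-diff feature (the toy data and the `join_cols` value are made up for illustration):

```python
from pyspark.sql import SparkSession
from spark_frame.data_diff import compare_dataframes

spark = SparkSession.builder.appName("data-diff-demo").getOrCreate()

# Hypothetical toy data: two versions of the same table
df1 = spark.createDataFrame([(1, "a"), (2, "b")], "id INT, name STRING")
df2 = spark.createDataFrame([(1, "a"), (2, "c")], "id INT, name STRING")

diff_result = compare_dataframes(df1, df2, join_cols=["id"])
diff_result.display()         # print a diff report in the terminal
diff_result.export_to_html()  # generate an HTML report (requires data-diff-viewer)
```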
Visit the official Spark-frame documentation website for use cases, examples, and the full reference.
spark-frame is available on PyPI.
pip install spark-frame
This library does not depend on any other library; even PySpark must be installed separately to use it. It is compatible with the following versions:
- Python: requires 3.8.1 or higher (tested against Python 3.8, 3.9, 3.10, 3.11 and 3.12)
- PySpark: requires 3.3.0 or higher (tested against PySpark 3.3, 3.4 and 3.5)
This library is tested on Windows, macOS and Linux.
However, testing for the following combinations has been disabled because they fail:
- PySpark 3.3 with Python >= 3.11
- PySpark 3.4 with Python >= 3.12
- PySpark 3.5 with Python 3.12 on Windows
Some features require extra libraries to be installed alongside this project. We chose to not include them as direct dependencies for security and flexibility reasons. This way, users who are not using these features don't need to worry about these dependencies.
| Feature | Method | spark-frame version | Dependency required |
|---|---|---|---|
| Generating HTML reports for data diff | `DiffResult.export_to_html` | >= 0.5.0 | `data-diff-viewer==0.3.*` (>=0.3.2) |
| Generating HTML reports for data diff | `DiffResult.export_to_html` | 0.4.* | `data-diff-viewer==0.2.*` |
| Generating HTML reports for data diff | `DiffResult.export_to_html` | < 0.4 | `jinja2` |
Since version 0.4, the code used to generate HTML diff reports has been moved to data-diff-viewer, from the same author. It comes with a dependency on duckdb, which is used to store the diff results and embed them in the HTML page.
These methods were initially part of the karadoc project used at Younited, but since they were fully independent of karadoc, it made more sense to keep them as a standalone library.
Several of these methods were my initial inspiration for making the cousin project bigquery-frame, which was first made to illustrate this blog article. This is why you will find similar methods in both spark_frame and bigquery_frame, except that the former runs on PySpark while the latter runs on BigQuery (obviously). I try to keep both projects consistent, and will eventually port new developments made on one project to the other.
Bugfixes:
- data-diff:
  - Fix export of HTML report not working in cluster mode.
Please do not use.
New features:
- data-diff:
  - Full sample rows in data-diff: in the data-diff HTML report, you can now click on one of a column's most frequent values or changes to display the full content of a row where that change occurs.
Breaking Changes:
- data-diff:
  - The names of the keys of the `DiffResult.diff_df_shards` dict have changed: all keys except the root key (`""`) now have a REPETITION_MARKER (`"!"`) appended. This will make future manipulations easier. This should not impact users, as it is a very advanced mechanic.
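A hedged illustration of what this looks like (the field name `s` is made up; the keys and types come from the notes above):

```python
# diff_result is the object returned by compare_dataframes.
# Its diff_df_shards attribute is a Dict[str, DataFrame]; the root key is ""
# and, since this change, every other key ends with the repetition marker "!".
for key, shard_df in diff_result.diff_df_shards.items():
    print(repr(key))  # e.g. '' for the root shard, 's!' for a repeated field "s"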
Fixes and improvements on data_diff.
Improvements:
- data-diff:
  - Now supports complex data types. Declaring a repeated field (e.g. `"s!.id"`) in `join_cols` will explode the corresponding array and perform the diff on it (see the sketch after this list).
  - When columns are removed or renamed, they are still displayed in the per-column diff report.
  - Refactored and improved the HTML report: it is now fully standalone and can be opened without any internet connection.
  - The HTML report can now be generated directly on any remote file system accessible by Spark (e.g. "hdfs", "s3", etc.).
  - A user-friendly error is now raised when one of the `join_cols` does not exist.
- Added a package `spark_frame.filesystem` that can be used to read and write files directly from the driver, using the Java FileSystem from Spark's JVM.
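Here is the sketch mentioned above, showing the repeated-field syntax in `join_cols` (the schema and the DataFrames are hypothetical):

```python
from spark_frame.data_diff import compare_dataframes

# df1 and df2 are assumed to share this hypothetical schema:
#   id INT, s ARRAY<STRUCT<id INT, value STRING>>
# Declaring the repeated field "s!.id" in join_cols explodes the array "s"
# and performs the diff on its elements, joined on ("id", "s!.id").
diff_result = compare_dataframes(df1, df2, join_cols=["id", "s!.id"])
diff_result.display()
```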
Breaking Changes:
- data-diff:
  - The `spark_frame.data_diff.DataframeComparator` object has been removed. Please use the method `spark_frame.data_diff.compare_dataframes` directly.
  - The package `spark_frame.data_diff.diff_results` has been renamed to `diff_result`.
  - Generating HTML reports for data diff no longer requires jinja2, but it now requires the installation of the library data-diff-viewer; please check the Compatibilities and requirements section to know which version to use.
  - The DiffResult object returned by the `compare_dataframes` method has evolved. In particular, the type of `diff_df_shards` changed from a single `DataFrame` to a `Dict[str, DataFrame]`.
  - The `DiffFormatOptions.max_string_length` option has been removed.
  - `DiffFormatOptions.nb_diffed_rows` has been renamed to `nb_top_values_kept_per_column`.
  - `spark_frame.data_diff.compare_dataframes_impl.DataframeComparatorException` was replaced with `spark_frame.exceptions.DataFrameComparisonException`.
  - `spark_frame.data_diff.compare_dataframes_impl.CombinatorialExplosionError` was replaced with `spark_frame.exceptions.CombinatorialExplosionError`.
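In practice, the migration looks like this (a sketch based on the notes above; the DataFrames are placeholders):

```python
from pyspark.sql import DataFrame
from spark_frame.data_diff import compare_dataframes

df1: DataFrame = ...
df2: DataFrame = ...

# Before: diff_result = DataframeComparator().compare_df(df1, df2)
diff_result = compare_dataframes(df1, df2)

# diff_df_shards is now a Dict[str, DataFrame] instead of a single DataFrame
for key, shard_df in diff_result.diff_df_shards.items():
    shard_df.show()
```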
QA:
- Spark: Added tests to ensure compatibility with PySpark versions 3.3, 3.4 and 3.5
- Replaced flake8 and isort with ruff
Fixes and improvements on data_diff
- Fix: the automatic detection of join_col was sometimes selecting the wrong column
- Visual improvements to the HTML diff report:
  - The names of the columns used for the join are now displayed in bold
  - The total number of columns is now displayed when the diff is OK
- Fix: incorrect HTML diff display when one of the DataFrames is empty
Fixes and improvements on data_diff
- The `export_html_diff_report` method now accepts arguments to specify the path and encoding of the output HTML report.
- The data-diff join now works correctly with null values.
- Visual improvements to the HTML diff report.
Fixes and improvements on data_diff
- Fixed incorrect diff results:
  - Column values are no longer truncated at all, as truncation was causing incorrect results. The possibility to limit the size of the column values will be added back in a later version.
  - Made sure that the most frequent values per column are now displayed in decreasing order of frequency.
Two new exciting features: analyze and data_diff. They are still at an experimental stage and will be improved in future releases.
- Added a new transformation: `spark_frame.transformations.analyze` (a sketch follows the data_diff example below).
- Added a new data_diff feature. Example:
from pyspark.sql import DataFrame
from spark_frame.data_diff import DataframeComparator
df1: DataFrame = ...
df2: DataFrame = ...
diff_result = DataframeComparator().compare_df(df1, df2) # Produces a DiffResult object
diff_result.display() # Prints a diff report in the terminal
diff_result.export_to_html() # Generates an HTML diff report file named diff_report.html
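And here is the sketch for `analyze` (I am assuming here that it takes a DataFrame and returns a DataFrame of per-column statistics; check the reference for the exact signature):

```python
from spark_frame.transformations import analyze

# Assumption: analyze(df) returns a DataFrame of per-column statistics that
# can be displayed with show(); df is a placeholder DataFrame.
analyze(df).show()
```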
- Added a new transformation: `spark_frame.transformations.flatten_all_arrays`.
- Added support for multi-arg transformations to `nested.select` and `nested.with_fields`. With this feature, we can now access parent fields from higher levels when applying a transformation. Example:
>>> nested.print_schema(df)
"""
root
|-- id: integer (nullable = false)
|-- s1!.average: integer (nullable = false)
|-- s1!.values!: integer (nullable = false)
"""
>>> df.show(truncate=False)
+---+--------------------------------------+
|id |s1 |
+---+--------------------------------------+
|1 |[{2, [1, 2, 3]}, {3, [1, 2, 3, 4, 5]}]|
+---+--------------------------------------+
>>> new_df = df.transform(nested.with_fields, {
...     "s1!.values!": lambda s1, value: value - s1["average"]  # This transformation takes 2 arguments
... })
>>> new_df.show(truncate=False)
+---+-----------------------------------------+
|id |s1                                       |
+---+-----------------------------------------+
|1  |[{2, [-1, 0, 1]}, {3, [-2, -1, 0, 1, 2]}]|
+---+-----------------------------------------+
- Added a new amazing module called `spark_frame.nested`, which makes manipulation of nested data structures much easier! Make sure to check out the reference and the use-cases.
- Also added a new module called `spark_frame.nested_functions`, which contains aggregation methods for nested data structures (see Reference).
- New transformations (a sketch of `transform_all_field_names` follows this list):
  - `spark_frame.transformations.transform_all_field_names`
  - `spark_frame.transformations.transform_all_fields`
  - `spark_frame.transformations.unnest_field`
  - `spark_frame.transformations.unnest_all_fields`
  - `spark_frame.transformations.union_dataframes`
- New transformation: `spark_frame.transformations.convert_all_maps_to_arrays`.
- New transformation: `spark_frame.transformations.sort_all_arrays`.
- New transformation: `spark_frame.transformations.harmonize_dataframes`.
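For example, a hedged sketch of `transform_all_field_names` (assuming it takes a DataFrame and a function from field name to field name, applied to every field, including deeply nested ones):

```python
from spark_frame.transformations import transform_all_field_names

# Assumption: the callback receives each field name (deeply nested fields
# included) and returns the new name; df is a placeholder DataFrame.
renamed_df = transform_all_field_names(df, lambda name: name.lower())
renamed_df.printSchema()
```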