Investigate if we should skip zipping of parquet dependency table
In #372 we introduced storing the dependency table as a PARQUET file, instead of a CSV file.
When the file is uploaded to the server, a ZIP file is still created first. As PARQUET already comes with built-in compression, we should check:
- Is the file size still reduced by using ZIP?
- How much code is affected if we skip using ZIP before uploading?
- If we use a similar compression algorithm directly in PARQUET, do we lose speed compared to the current approach?
Related to #181
To answer the first question, we create a PARQUET file and a corresponding ZIP file, and compare their sizes.
NOTE: the following example requires the dev branch of audb at the moment.
import audb
import audeer
import os
deps = audb.dependencies("musan", version="1.0.0")
parquet_file = "deps.parquet"
zip_file = "deps.zip"
# store the dependency table as PARQUET and additionally wrap it in a ZIP archive
deps.save(parquet_file)
audeer.create_archive(".", parquet_file, zip_file)
parquet_size = os.stat(parquet_file).st_size
zip_size = os.stat(zip_file).st_size
print(f"Parquet file size: {parquet_size >> 10:.0f} kB")
print(f"Zip file size: {zip_size >> 10:.0f} kB")
returns
Parquet file size: 175 kB
Zip file size: 130 kB
I repeated it with librispeech 3.1.0 from our internal repository to have an example of a bigger dataset:
Parquet file size: 21848 kB
Zip file size: 16163 kB
Regarding the second question, we would need to change the following code block in audb/core/publish.py:
Lines 753 to 759 in fa14acc
There we could simply use put_file() instead of put_archive() to avoid zipping the file, as in the sketch below.
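For illustration, a minimal sketch of the upload without the intermediate ZIP; the helper name, the local file name, and the remote path handling are assumptions and not the actual code in publish.py:

```python
import os


def upload_dependency_table(backend, db_root, remote_path, version):
    # Hypothetical helper: upload the PARQUET dependency table directly,
    # without wrapping it in a ZIP archive first.
    local_path = os.path.join(db_root, "db.parquet")  # assumed file name
    backend.put_file(local_path, remote_path, version)
```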
Slightly more complicated is the case of loading the dependency table, as there might be either a ZIP file or a PARQUET file on the server, which is not ideal. The affected code block is in audb/core/api.py, in the definition of audb.dependencies():
Lines 275 to 282 in fa14acc
There we could first try to load the PARQUET file (or check if it exists) and fall back to the ZIP file otherwise, as sketched below.
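A minimal sketch of that fallback, assuming audbackend's exists(), get_file(), and get_archive() methods, illustrative file names, and that Dependencies.load() can read PARQUET (part of #372); the actual code in audb.dependencies() will look different:

```python
import os

import audb


def download_dependency_table(backend, db_root, remote_parquet, remote_zip, version):
    # Prefer the plain PARQUET file; fall back to the legacy ZIP archive
    # for database versions published before the change.
    local_path = os.path.join(db_root, "db.parquet")  # assumed local file name
    if backend.exists(remote_parquet, version):
        backend.get_file(remote_parquet, local_path, version)
    else:
        backend.get_archive(remote_zip, db_root, version)
        # the archive may contain a CSV or a PARQUET file,
        # depending on the audb version that published the database
        for candidate in ("db.parquet", "db.csv"):
            if os.path.exists(os.path.join(db_root, candidate)):
                local_path = os.path.join(db_root, candidate)
                break
    deps = audb.Dependencies()
    deps.load(local_path)
    return deps
```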
An alternative approach would be to still use ZIP, but without compressing the file, as proposed in #181 (comment).
Then there are also two affected parts inside audb/core/api.py, inside remove_media():
Lines 492 to 500 in fa14acc
Lines 550 to 560 in fa14acc
To answer the third question, I created the benchmark script shown below, which tests different ways to store and load the dependency table of a dataset containing 292,381 files. Running the script returns:
parquet snappy
Writing time: 0.2501 s
Reading time: 0.1112 s
File size: 21848 kB
parquet snappy + zip no compression
Writing time: 0.2985 s
Reading time: 0.1290 s
File size: 21848 kB
parquet snappy + zip
Writing time: 1.1113 s
Reading time: 0.2630 s
File size: 16163 kB
parquet gzip
Writing time: 1.5897 s
Reading time: 0.1205 s
File size: 13524 kB
The zipped CSV file, currently used to store the dependency table of the same dataset, has a size of 14390 kB.
Benchmark script
import os
import time
import zipfile
import pyarrow.parquet
import audb
import audeer
parquet_file = "deps.parquet"
zip_file = "deps.zip"
def clear():
    # remove leftover files from a previous benchmark run
    for file in [parquet_file, zip_file]:
        if os.path.exists(file):
            os.remove(file)
deps = audb.dependencies("librispeech", version="3.1.0")
print("parquet snappy")
clear()
t0 = time.time()
# _dataframe_to_table() and _table_to_dataframe() are private audb helpers
# converting between the dependency dataframe and a pyarrow table
table = deps._dataframe_to_table(deps._df, file_column=True)
pyarrow.parquet.write_table(table, parquet_file, compression="snappy")
t = time.time() - t0
print(f"Writing time: {t:.4f} s")
t0 = time.time()
table = pyarrow.parquet.read_table(parquet_file)
df = deps._table_to_dataframe(table)
t = time.time() - t0
print(f"Reading time: {t:.4f} s")
size = os.stat(parquet_file).st_size
print(f"File size: {size >> 10:.0f} kB")
print()
print("parquet snappy + zip no compression")
clear()
t0 = time.time()
table = deps._dataframe_to_table(deps._df, file_column=True)
pyarrow.parquet.write_table(table, parquet_file, compression="snappy")
# ZIP_STORED: archive without compression
with zipfile.ZipFile(zip_file, "w", zipfile.ZIP_STORED) as zf:
    full_file = audeer.path(".", parquet_file)
    zf.write(full_file, arcname=parquet_file)
t = time.time() - t0
print(f"Writing time: {t:.4f} s")
t0 = time.time()
audeer.extract_archive(zip_file, ".")
table = pyarrow.parquet.read_table(parquet_file)
df = deps._table_to_dataframe(table)
t = time.time() - t0
print(f"Reading time: {t:.4f} s")
size = os.stat(zip_file).st_size
print(f"File size: {size >> 10:.0f} kB")
print()
print("parquet snappy + zip")
clear()
t0 = time.time()
table = deps._dataframe_to_table(deps._df, file_column=True)
pyarrow.parquet.write_table(table, parquet_file, compression="snappy")
# ZIP_DEFLATED: archive with deflate compression (current approach)
with zipfile.ZipFile(zip_file, "w", zipfile.ZIP_DEFLATED) as zf:
    full_file = audeer.path(".", parquet_file)
    zf.write(full_file, arcname=parquet_file)
t = time.time() - t0
print(f"Writing time: {t:.4f} s")
os.remove(parquet_file)
t0 = time.time()
audeer.extract_archive(zip_file, ".")
table = pyarrow.parquet.read_table(parquet_file)
df = deps._table_to_dataframe(table)
t = time.time() - t0
print(f"Reading time: {t:.4f} s")
size = os.stat(zip_file).st_size
print(f"File size: {size >> 10:.0f} kB")
print()
print("parquet gzip")
clear()
t0 = time.time()
table = deps._dataframe_to_table(deps._df, file_column=True)
pyarrow.parquet.write_table(table, parquet_file, compression="GZIP")
t = time.time() - t0
print(f"Writing time: {t:.4f} s")
t0 = time.time()
table = pyarrow.parquet.read_table(parquet_file)
df = deps._table_to_dataframe(table)
t = time.time() - t0
print(f"Reading time: {t:.4f} s")
size = os.stat(parquet_file).st_size
print(f"File size: {size >> 10:.0f} kB")
"zip no compression" is referring to the solution proposed in #181, to still be able to upload the files as ZIP files to the server. In #181 we discuss media files, for which it is important to store them in a ZIP file, as we also have to preserve the underlying folder structure. This is not the case for the dependency table, and also the file extension will always be the same for the dependency table.
Our current approach is "parquet snappy + zip". Switching to any of the other approaches would roughly halve the reading time.
We can either use GZIP directly when creating the PARQUET file, which increases writing time but reduces the file size, or switch to plain snappy compression without ZIP, which decreases writing time but results in a larger file.
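For reference, the difference between the two options comes down to the codec passed to pyarrow when the dependency table is written; a hypothetical helper (not the actual audb.Dependencies.save() implementation) could look like this:

```python
import pyarrow
import pyarrow.parquet


def save_dependency_table(table: pyarrow.Table, path: str, codec: str = "snappy"):
    # codec="snappy": fast writing, larger file (21848 kB in the benchmark above)
    # codec="gzip":   slower writing, smaller file (13524 kB in the benchmark above)
    pyarrow.parquet.write_table(table, path, compression=codec)
```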
@ChristianGeng any preference?
In general I think that disk storage is normally quite cheap, so being able to read the data faster would be a good move. I would therefore be open to departing from "parquet snappy + zip" and optimizing for reading time by going in the snappy direction.
The SO post here also suggests that heavy compression is meant for cold data. I think we have something in between, lukewarm data, but CPU is normally more expensive. Apart from that, the post discusses "splittability". Concerning determinism, which we would need in order to be able to md5sum a file, I have not been able to find an answer.
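One way to probe the determinism question empirically would be to write the same table twice with the same pyarrow version and codec and compare checksums; a minimal sketch with a toy table (a matching checksum here is only an indication, not a proof):

```python
import hashlib

import pyarrow
import pyarrow.parquet


def md5(path):
    with open(path, "rb") as fp:
        return hashlib.md5(fp.read()).hexdigest()


# toy table standing in for a dependency table
table = pyarrow.table({"file": ["a.wav", "b.wav"], "duration": [1.0, 2.0]})
pyarrow.parquet.write_table(table, "a.parquet", compression="snappy")
pyarrow.parquet.write_table(table, "b.parquet", compression="snappy")
print(md5("a.parquet") == md5("b.parquet"))
```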