pandas DataFrame.to_parquet get killed while writing to hdfs3
eromoe opened this issue · 5 comments
My code is like below:
from os.path import dirname

from hdfs3 import HDFileSystem

hdfs = HDFileSystem(host=HDFS_HOST, port=HDFS_PORT)

# read_stockquantitylogs / process_df / process_stock_quantity_log are custom helpers, not hdfs3 API
df = hdfs.read_stockquantitylogs(input_path)
df = process_df(df, process_stock_quantity_log, stack_hour=False)

output_path = input_path.replace('/arch', '/clean', 1)
hdfs.makedirs(dirname(output_path))
with hdfs.open(output_path, 'wb') as f:
    df.to_parquet(f)
I am not using dask for now, just pandas. Here df is [31909929 rows x 3 columns]. I found that if I write only 1000 rows it works, but it prints Killed when writing the whole df.
I changed to put, but still got Killed.
I switched to https://hdfscli.readthedocs.io/ and the error is gone, though I have to save the file locally and then upload it to HDFS.
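Roughly, the workaround looks like this (simplified sketch; the WebHDFS URL, user name and temp-file handling are placeholders to adapt to your cluster):

import os
import tempfile

from hdfs import InsecureClient  # the hdfscli package

# WebHDFS/HttpFS endpoint (HTTP port, not the RPC port that hdfs3 uses); adjust host/port/user
client = InsecureClient('http://namenode:9870', user='hadoop')

# write the parquet file to local disk first, then upload it
with tempfile.NamedTemporaryFile(suffix='.parquet', delete=False) as tmp:
    local_path = tmp.name
df.to_parquet(local_path)
client.upload(output_path, local_path, overwrite=True)
os.remove(local_path)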
It is hard to diagnose what "killed" might mean - presumably some buffer overrun in the C layer. It would be interesting to know the data size of the file you are trying to write.
You might want to try arrow's hdfs interface, which seems to be less error prone.
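Untested, but roughly something like this (needs libhdfs and the usual HADOOP/JAVA environment variables; newer pyarrow versions expose the same thing as pyarrow.fs.HadoopFileSystem):

import pyarrow as pa
import pyarrow.parquet as pq

# connect through libhdfs; host/port are whatever you currently pass to hdfs3
fs = pa.hdfs.connect(host=HDFS_HOST, port=HDFS_PORT)

# convert the DataFrame and stream the parquet bytes straight to HDFS
table = pa.Table.from_pandas(df)
with fs.open(output_path, 'wb') as f:
    pq.write_table(table, f)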
Can't hdfs3 catch such errors?
This message is not being created by hdfs3, but by the OS when the process does something illegal. It will be in the C layer, because python produces nicer messages, so there is no opportunity for python to catch it. You could perhaps invoke gdb to find out what is going wrong, but such investigations are very hard.