Performance issue with the unflattening process
Anna050689 opened this issue · 1 comment
This issue concerns performance on large datasets.
I used the `unflatten_list` method for the unflattening process.
The dataset has 80,750 rows and 1,051 columns, and unflattening took 6 hours and 5 minutes.
Have you faced this issue? How might the unflattening process be optimized?
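For reference, a simplified illustration of the kind of call in question, assuming `unflatten_list` from the `flatten_json` package with an underscore separator (the actual data and separator aren't shown in this issue; `flat_row` below is made up):

```python
# Simplified illustration only; the real dataset is far larger and its
# separator isn't stated here. Assumes flatten_json's unflatten_list.
from flatten_json import unflatten_list  # pip install flatten-json

# One flattened record: nesting and list indices are encoded in the keys.
flat_row = {"name": "a", "tags_0": "x", "tags_1": "y", "meta_id": 1}

nested = unflatten_list(flat_row, separator="_")
print(nested)  # -> {'name': 'a', 'tags': ['x', 'y'], 'meta': {'id': 1}}
```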
it's a single-threaded implementation, so it doesn't surprise me that it's not super scalable.
off the top of my head, you might be able to do a parallel map-reduce type of computation: first, in parallel, apply unflatten to each {key: value} in the input (or to partitions of the input), and then reduce the results into the final output. whether that saves you time or not would really depend on how independent the keys are in your input dict; a rough sketch of what I mean is below.
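Something like this (untested; it assumes an underscore separator, that the top level of the unflattened result is a dict rather than a list, and that partitioning by top-level key prefix keeps related keys together):

```python
# Rough, untested sketch of the parallel map-reduce idea. Assumes
# flatten_json's unflatten_list, an underscore separator, and that the
# top level unflattens to a dict (not a list) -- all assumptions here.
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor
from functools import partial

from flatten_json import unflatten_list  # pip install flatten-json


def partition_by_prefix(flat, separator="_"):
    """Group flat keys by their top-level prefix so each partition
    can be unflattened independently of the others."""
    parts = defaultdict(dict)
    for key, value in flat.items():
        parts[key.split(separator, 1)[0]][key] = value
    return list(parts.values())


def unflatten_parallel(flat, separator="_", workers=4):
    """Map: unflatten each partition in its own process.
    Reduce: merge the nested results; top-level prefixes are
    disjoint by construction, so a plain dict update suffices."""
    partitions = partition_by_prefix(flat, separator)
    worker = partial(unflatten_list, separator=separator)
    result = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for nested in pool.map(worker, partitions):
            result.update(nested)
    return result
```

one caveat: with a ProcessPoolExecutor each partition gets pickled over to a worker, so for very wide rows the serialization overhead can eat the gains. worth benchmarking against the single-threaded baseline before committing to this.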
happy to take a closer look if you share a snippet of your data, or at least something that looks very similar to it.