Spark Broadcast exceeding executor memory with large training data set
shaunswanson opened this issue · 4 comments
My X variable is between 1 and 4 GB. The pre_dispatch arg when initializing GridSearchCV doesn't appear to be used or have any effect, preventing me from parallelizing a decently sized param_grid on my Spark cluster.
The X variable is being broadcast to all combinations of param_grid (48 in our case), causing memory to fill up.
Is there a way that the code above could pull the broadcast variables, rather than pushing them eagerly to every task?
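For context, here is a minimal sketch of roughly the setup being described, assuming spark_sklearn's GridSearchCV; the estimator, grid values, dataset shape, and names are illustrative rather than the exact code from our job:

```python
# Roughly the setup being described (estimator, grid, and sizes are illustrative).
import numpy as np
from pyspark import SparkContext
from sklearn.ensemble import RandomForestClassifier
from spark_sklearn import GridSearchCV  # Spark-backed variant of sklearn's GridSearchCV

sc = SparkContext.getOrCreate()

# X is a large dense matrix, roughly 1-4 GB in memory (here ~1.6 GB of float32).
X = np.random.rand(2_000_000, 200).astype(np.float32)
y = np.random.randint(0, 2, size=X.shape[0])

param_grid = {
    "n_estimators": [50, 100, 200, 400],
    "max_depth": [4, 8, 16],
    "max_features": ["sqrt", "log2", None, 0.5],
}  # 4 * 3 * 4 = 48 combinations

# pre_dispatch is accepted but appears to have no effect on how tasks are scheduled.
search = GridSearchCV(sc, RandomForestClassifier(), param_grid, pre_dispatch="2*n_jobs")
search.fit(X, y)  # X and y are broadcast to the cluster before the 48 tasks run
```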
Yeah, pre_dispatch doesn't have any effect; it's there to make this a drop-in replacement for scikit-learn's implementation, but the execution context is quite different.
A broadcast will only be copied once to each executor in Spark. Spark effectively 'pulls', but that won't matter here, as all of the tasks need this data. The grid here reuses the single instance of the X broadcast, so I don't think that's the issue unless there's something subtler at work.
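For reference, the broadcast pattern being described looks roughly like this; it is a simplified illustration rather than the library's actual internals, and it reuses `sc`, `X`, `y`, and `param_grid` from the sketch above:

```python
# Simplified illustration of the broadcast pattern (not the library's actual code).
# sc, X, y, and param_grid as in the earlier sketch.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ParameterGrid

X_bc = sc.broadcast(X)   # copied to each executor at most once
y_bc = sc.broadcast(y)

def evaluate(params):
    # Every task dereferences the same per-executor copy; there is no
    # additional copy per parameter combination.
    X_local, y_local = X_bc.value, y_bc.value
    est = RandomForestClassifier(**params).fit(X_local, y_local)
    return params, est.score(X_local, y_local)

results = sc.parallelize(list(ParameterGrid(param_grid)), 48).map(evaluate).collect()
```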
However, this fix, which would come in the next version, might be relevant: 03df085
Otherwise, maybe you can say more about what runs out of memory, and where.
PS: I merged #100, which removes "pre_dispatch". Reopen or comment if the next release doesn't resolve this, as I think the issue is broadcasting the split indices.
@srowen There is a restriction on the size of X due to cloudpickle limitations. I'm seeing the following error when the size of X exceeds 2 GiB.
PicklingError: Could not serialize broadcast: OverflowError: cannot serialize a string larger than 2 GiB
How do we use GridSearchCV with larger training datasets?
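One rough way to anticipate the limit is to check the serialized size of the data before broadcasting; this is only a sanity check (using `X` and `y` from above), since the actual limit applies to the payload Spark/cloudpickle produces:

```python
import pickle

# Rough pre-check: the broadcast fails once the serialized payload exceeds 2 GiB.
payload = pickle.dumps((X, y), protocol=pickle.HIGHEST_PROTOCOL)
print(f"serialized size: {len(payload) / 2**30:.2f} GiB")
if len(payload) >= 2**31:
    print("too large to broadcast with this Spark/cloudpickle combination")
```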
One option I can think of is to not use a broadcast variable for X and y. Instead, load them as a pandas DataFrame and write it to a file; every executor would then read from that file.
Yeah, in that case you would need to side-load the data somehow. This is a limitation of Spark/cloudpickle, as you say.
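A sketch of that side-loading approach, under the assumption that there is storage every executor can read; the path and helper choices here are illustrative and not part of the library (again reusing `sc`, `X`, `y`, and `param_grid` from the first sketch):

```python
# Sketch of the side-loading workaround; the path is illustrative and assumes
# storage every executor can read (e.g. an NFS mount, an HDFS fuse mount, or S3 via s3fs).
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ParameterGrid, cross_val_score

DATA_PATH = "/mnt/shared/train_data.joblib"  # hypothetical shared location

# Driver: write X and y once instead of broadcasting them.
joblib.dump((X, y), DATA_PATH)

def evaluate(params):
    # Each task reloads the data from shared storage rather than from a broadcast.
    X_local, y_local = joblib.load(DATA_PATH)
    est = RandomForestClassifier(**params)
    return params, cross_val_score(est, X_local, y_local, cv=3).mean()

results = (sc.parallelize(list(ParameterGrid(param_grid)), numSlices=48)
             .map(evaluate)
             .collect())
best_params, best_score = max(results, key=lambda pr: pr[1])
```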