scipr-lab/dizk

Rust question

burdges opened this issue · 4 comments

How much of the code here duplicates code in other scipr-lab repositories? I suppose the base differs, and then everything built on top of it must change as a result?

I'm asking because one Rust clone of Spark is being developed at https://github.com/rajasekarv/native_spark, although I suspect they should figure out async (à la rajasekarv/vega#71) before one uses it for much.

We've been keeping an eye on native_spark; I think it would be fascinating to use it. I haven't had a chance to experiment with the library yet: does it support custom partitioners? I believe most of the other MapReduce operations are well supported.

Separately, are there any performance benchmark comparisons between Spark and native_spark? In DIZK, most of the time in practice is spent on compute and shuffling; it turns out that GC, despite being expensive, hardly makes a dent compared to the expensive operations we require.

@howardwu Custom partitioners are indeed supported and the usage is very similar to Apache Spark.

For narrow-dependency tasks, the performance is close to hand-written custom code (30-50% behind) despite using boxed iterators between combinators. From my tests, we are generally 3-10 times faster than Spark's DataFrame APIs on pure CPU-intensive tasks. We are working on converting the boxed iterators to monomorphic ones. Shuffling is probably well behind Spark for now and is very basic; shuffle data is not even compressed. Improving that is our priority.
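The boxed-versus-monomorphic distinction above can be sketched concretely. This is an illustrative toy, not native_spark code: the boxed version erases the iterator type behind a trait object, so every `next()` call is a virtual dispatch, while the monomorphic version preserves the concrete type, letting the compiler inline and optimize the whole chain.

```rust
// Boxed style: each combinator chain is returned as a trait object.
// Flexible (any chain fits one return type), but every .next() call
// goes through dynamic dispatch.
fn boxed_pipeline(data: Vec<u64>) -> Box<dyn Iterator<Item = u64>> {
    Box::new(data.into_iter().map(|x| x * 2).filter(|x| x % 3 != 0))
}

// Monomorphic style: `impl Iterator` keeps the concrete adapter type,
// so the map/filter chain can be fully inlined by the compiler.
fn mono_pipeline(data: Vec<u64>) -> impl Iterator<Item = u64> {
    data.into_iter().map(|x| x * 2).filter(|x| x % 3 != 0)
}

fn main() {
    let data: Vec<u64> = (0..10).collect();
    let a: Vec<u64> = boxed_pipeline(data.clone()).collect();
    let b: Vec<u64> = mono_pipeline(data).collect();
    // Same results either way; only the dispatch mechanism differs.
    assert_eq!(a, b);
}
```

The trade-off is that monomorphic chains produce a distinct concrete type per pipeline, which complicates APIs that must store or pass heterogeneous pipelines, which is presumably why boxed iterators were the starting point.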

A priori, I doubt compression improves the DIZK use case much, except perhaps arkworks-rs/snark#122.

I'll close this issue: I suspect this code does not overlap much with other Rust code, and this was only my curiosity, not an actual issue.