xtensor-stack/xtensor-sparse

Handle runtime scheme policy

tkloczko opened this issue · 11 comments

Hi guys,

Firstly, I would like to say that the work you have already done is very interesting. I read the paper about the different formats mentioned in #21; it gives nice ideas to drive the architecture.
I also read the sources; the notion of scheme is a great way to customize the different storage models. However, when using such a design in, for instance, an FE code, some drawbacks can arise.

Typically, it is natural to build the matrix of the linear system using a map scheme. Then, depending on the solver of the linear system, one needs to convert to CSR, COO, or another format. This is not very difficult. But if the choice of the solver is made at runtime, the current design cannot handle it, since you don't know at compile time the type of scheme required by the solver that will be used.

This is the same drawback that led to the design of the std polymorphic allocator: you don't want to recompile the code every time you need to change the allocation policy. It is the same problem with the sparse scheme. In our experience, it is generally the solver that drives the scheme policy.
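
Just to make the analogy concrete, here is the kind of runtime flexibility std::pmr gives for allocation (a plain standard-library example, nothing to do with xtensor-sparse itself):

```cpp
#include <cstddef>
#include <memory_resource>
#include <vector>

// The allocation policy is picked at runtime through a memory_resource pointer,
// while the container type std::pmr::vector<double> never changes, so nothing
// has to be recompiled when the policy changes.
std::pmr::vector<double> make_buffer(bool use_pool, std::size_t n)
{
    static std::pmr::monotonic_buffer_resource pool;
    std::pmr::memory_resource* res =
        use_pool ? &pool : std::pmr::new_delete_resource();
    return std::pmr::vector<double>(n, 0.0, res);
}
```
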
In terms of architecture, it is of course more involved. The sparse array or tensor can be composed with an abstract scheme (no template parameter would be needed, I think) implemented by the concrete schemes, but you then need abstract nz iterators, which can lead to a loss of performance. However, it is still possible to "extract" the concrete scheme when performance is required.
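
To give an idea of what I mean, here is a rough sketch of the type-erased design (the names xabstract_scheme and xsparse_array are made up for the example and are not meant to be the actual xtensor-sparse API):

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <memory>

// Hypothetical abstract scheme: the sparse container holds a pointer to it,
// so the scheme is no longer a template parameter of the container.
struct xabstract_scheme
{
    using index_type = std::array<std::size_t, 2>;

    virtual ~xabstract_scheme() = default;
    virtual void insert(const index_type& idx, double value) = 0;
    // Visiting the non-zero entries goes through a virtual call, which is
    // where the type-erasure cost shows up compared to static nz iterators.
    virtual void visit_nz(const std::function<void(const index_type&, double)>& f) const = 0;
};

// The container is scheme-agnostic; a concrete scheme (map, CSR, COO, ...)
// implements xabstract_scheme, and dynamic_cast can recover the concrete
// type when raw performance is required.
class xsparse_array
{
public:
    explicit xsparse_array(std::unique_ptr<xabstract_scheme> s)
        : m_scheme(std::move(s))
    {
    }

    xabstract_scheme& scheme() { return *m_scheme; }

private:
    std::unique_ptr<xabstract_scheme> m_scheme;
};
```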

We did a similar design a few years ago in the context of distributed sparse linear algebra (in the MPI sense). It would be great to see how this former work could fit with your brand-new architecture.

See you,
Cheers,
Thibaud.

Hi @tkloczko,

thanks for your message!

Each scheme should be able to give the sparse N-D array in another scheme. I agree with you that it's generally the solver which dictates the scheme to use. But since we have conversion tools, we should be able to communicate with the different layers that you can find in linear algebra. There are a lot of linear algebra libraries (umfpack, superlu, hypre, ...) and it's a difficult topic. So the idea is not to write linear solvers, but to build interfaces to these libraries with the right scheme and the right memory layout.
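
Just to illustrate what such a conversion boils down to (a hand-written sketch, not the actual xtensor-sparse API; map_scheme and csr_matrix are placeholder names), converting a map scheme to the CSR arrays that most solver interfaces expect looks like:

```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Hypothetical map scheme: (row, col) -> value, convenient for assembly.
using map_scheme = std::map<std::pair<std::size_t, std::size_t>, double>;

// CSR arrays as expected by most solver interfaces (umfpack, superlu, ...).
struct csr_matrix
{
    std::vector<std::size_t> row_ptr;
    std::vector<std::size_t> col_idx;
    std::vector<double> values;
};

// Conversion map -> CSR: since std::map iterates in (row, col) order,
// a single pass is enough to fill the three CSR arrays.
csr_matrix to_csr(const map_scheme& m, std::size_t nrows)
{
    csr_matrix csr;
    csr.row_ptr.assign(nrows + 1, 0);
    for (const auto& [idx, v] : m)
    {
        csr.row_ptr[idx.first + 1] += 1;
        csr.col_idx.push_back(idx.second);
        csr.values.push_back(v);
    }
    for (std::size_t i = 0; i < nrows; ++i)
    {
        csr.row_ptr[i + 1] += csr.row_ptr[i];
    }
    return csr;
}
```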

Let me know if I have misunderstood your message.

Hi @gouarin ,

Thanks a lot for the reply. Maybe the first thing would be to clarify the main objective of xtensor-sparse. I am quite available at the end of the week for a phone call or a video call. I have also started to code an xsparse_polymorphic_scheme to illustrate what I have in mind in terms of runtime modularity.

Here is a link to a presentation we gave 5 years ago that deals with a distributed version of sparse matrices. The goal was to plug matrices coming from an MPI finite-volume (or FE) code into linear solvers such as Hypre, Mumps, MaPhys... It seems that this is a very similar problem to the one you are trying to address.

Let me know,
Cheers,
Thib

Hi @tkloczko,

Happy to see you here! As @gouarin said, the idea is to provide different schemes that implement a common API, together with conversion tools that make it easy to switch from one scheme to another.

Even with statically encoded schemes, these conversion tools will let you avoid rebuilding your software if you change the solver dynamically, since the conversion code will always be compiled.
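
To make that point concrete, the runtime choice of the solver then boils down to a dispatch over conversion paths that are all compiled in; here is a rough sketch with placeholder names (solver_kind, to_csr, to_coo and the solve_* wrappers are not the actual API):

```cpp
#include <stdexcept>

// Hypothetical CSR/COO views and solver wrappers; they stand for the
// conversion tools and the interfaces to umfpack, hypre, etc.
struct csr_view {};
struct coo_view {};

template <class SparseArray> csr_view to_csr(const SparseArray&) { return {}; }
template <class SparseArray> coo_view to_coo(const SparseArray&) { return {}; }

void solve_umfpack(const csr_view&) {}
void solve_hypre(const coo_view&) {}

// The solver is chosen at runtime; every conversion branch below is already
// compiled, so switching solver never requires rebuilding the program.
enum class solver_kind { umfpack, hypre };

template <class SparseArray>
void solve(const SparseArray& a, solver_kind kind)
{
    switch (kind)
    {
        case solver_kind::umfpack:
            solve_umfpack(to_csr(a));   // CSR-based interface
            break;
        case solver_kind::hypre:
            solve_hypre(to_coo(a));     // COO-based interface
            break;
        default:
            throw std::invalid_argument("unknown solver");
    }
}
```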

I'm on holiday until the end of the week, but if you're available next week I'll be happy to join a call with you and Loic. Otherwise I will catch up with Loic so he can give me some feedback on your discussion.

Cheers,

Johan

Hi @JohanMabille ,

I will be available next week too, at more regular hours ;-)! Enjoy your holidays!

Let me know about your availability as soon as you are back.

Cheers,

Thib

I'm also on vacation this week. But I will be available next week to discuss with you.

Enjoy the rest of your vacation then :-)! See you next week!

I'm back and available any time except from 11:30 to 12:00 every day, and Wednesday the whole day.

Hi Johan !

Tomorrow between 12:45 and 14:30?

BTW, I opened a PR just to illustrate what I had in mind.
Thib.

@tkloczko Yes, I saw your PR but I haven't had the time to get into the details (I have more than 70 notifications to go through ;)).

Tomorrow between 13:30 and 14:30 works for me.

Ok, let's go for that time!

Can we do that using Discord?

@JohanMabille No problem, I know you are very busy :-)!