mn416/QPULib

Not an issue!

E3V3A opened this issue · 4 comments

E3V3A commented

Wow, very cool to find this and learn about the vector processing capabilities of the RPi!
Now I just need to clarify further what applications this would be great for, i.e. what types of programs would benefit from vectorisation?

wimrijnders commented

I'll speak for myself here. I'm currently deep into massively parallel distributed programming. The programs I make are along the lines of Support Vector Machines (SVMs), i.e. given a set of images, create a model for classification [1].

This approach is compute-intensive, and I am looking for a platform with a good trade-off between cost and performance. The Raspberry Pi running only on the ARM chip is not adequate, but combined with the vector processing on the VideoCore it might just cut it.
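To make that concrete: a linear SVM classifier ultimately computes dot products, score = w·x + b, which is exactly the kind of elementwise arithmetic that vectorises well. Below is a minimal sketch modelled on the Rot3D example in the QPULib README; the mul kernel, the array sizes, and the ARM-side reduction are my own illustration, not code from the library:

```cpp
#include <stdio.h>
#include "QPULib.h"

// Hypothetical kernel: elementwise products r[i] = w[i] * x[i].
// Each QPU operation below processes a 16-element vector at once.
void mul(Int n, Ptr<Float> w, Ptr<Float> x, Ptr<Float> r)
{
  For (Int i = 0, i < n, i = i+16)
    Float a = w[i];
    Float b = x[i];
    r[i] = a * b;
  End
}

int main()
{
  const int N = 192;               // array length, a multiple of 16

  // Construct the QPU kernel
  auto k = compile(mul);

  // Arrays shared between the ARM CPU and the GPU
  SharedArray<float> w(N), x(N), r(N);
  for (int i = 0; i < N; i++) {
    w[i] = 0.5f;                   // toy model weights
    x[i] = (float) i;              // toy feature vector
  }

  // Run on the QPUs: r[i] = w[i] * x[i]
  k(N, &w, &x, &r);

  // Final reduction on the ARM side: score = w . x
  float score = 0.0f;
  for (int i = 0; i < N; i++) score += r[i];
  printf("score = %f\n", score);

  return 0;
}
```

The per-element work runs 16 lanes wide on the GPU; the final sum is done on the ARM here only to keep the sketch short.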

Using the Raspberry Pi's vector processing is just one part of this. My vision is to use this in a cluster of Pi's working on the same calculation together. The distributed part I've got more or less figured out, so tinkering with the GPU is a logical next step for me.

So the long-term goal for me with QPULib is to make such an application manageable. In the short term, I'm just having fun contributing to an open-source project.


[1] There are undoubtedly people who will exclaim that 'deep learning' with neural networks is a much better approach to classification than SVMs. I disagree; deep learning is ultimately a dead end, for various reasons. Also, I'm using SVM as an example of the type of application I am aiming at, not the thing I am implementing.

E3V3A commented

@wimrijnders

Thanks for your answer, but now it raises more questions:

I disagree; deep learning is ultimately a dead end, for various reasons.

  1. Can you tell us a few reasons why this is so?

I'm using SVM as an example of the type of application I am aiming at...

  2. Can you please give some other examples of what you think would benefit from this?

wimrijnders commented

Can you tell us a few reasons why this is so?

  • The obvious ones: the massive resources required, in hardware and in time.
  • The killer: even though a deep learning solution may work, you have no clue how it solves the problem. There is no insight gained whatsoever.

Can you please give some other examples of what you think would benefit from this?

In general, what comes to mind spontaneously:

  • Rendering for movies
  • Finding optimal solutions for complex functions (see the sketch after this list). This happens in:
    • medical industry
    • chemistry
    • physics
  • Finding proofs for formal systems.
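
For the optimisation item above, the pattern is usually "evaluate an expensive function at many candidate points and keep the best one", which maps naturally onto the QPUs' 16-element vectors. Here is a minimal sketch in the same QPULib style; the cost function, its coefficients, and the grid size are all made up for illustration:

```cpp
#include <stdio.h>
#include "QPULib.h"

// Hypothetical cost function, evaluated at 16 candidate points per
// vector operation: cost(x) = (x - a) * (x - b).
void cost(Int n, Float a, Float b, Ptr<Float> xs, Ptr<Float> ys)
{
  For (Int i = 0, i < n, i = i+16)
    Float x = xs[i];
    ys[i] = (x - a) * (x - b);
  End
}

int main()
{
  const int N = 64;                // number of candidate points

  auto k = compile(cost);

  SharedArray<float> xs(N), ys(N);
  for (int i = 0; i < N; i++)
    xs[i] = 0.1f * (float) i;      // candidate grid 0.0 .. 6.3

  // Evaluate all candidates on the QPUs
  k(N, 1.0f, 3.0f, &xs, &ys);

  // Pick the best candidate on the ARM side
  int best = 0;
  for (int i = 1; i < N; i++)
    if (ys[i] < ys[best]) best = i;
  printf("minimum near x = %f\n", xs[best]);

  return 0;
}
```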

However: for sheer performance, you're better off using a high-end NVIDIA GPU. A cluster of Raspberry Pi's might be more energy efficient, but I haven't made that calculation yet. On price/performance, NVIDIA cards beat anything the Pi's can do.

E3V3A commented

Thanks. I appreciate your feedback.