CUDA_Floyd_Warshall_

CUDA implementation of the Floyd-Warshall all-pairs shortest path graph algorithm (with path reconstruction)

UPDATE: Added a new table with times for a Tesla GPU. Make sure you run in release mode for full speed!

This is a simple implementation of the Floyd-Warshall all-pairs shortest path algorithm, written in two versions: a standard serial CPU version and a CUDA GPU version. Both implementations also compute and store the full edge path, with the respective edge weights, from the start vertex to the end vertex (if such a path exists).

Uses two adjacency matrices: one for path values, one for path reconstruction.
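A minimal serial sketch of this two-matrix scheme (hypothetical names `dist` and `next_v`; the repo's own identifiers and code layout may differ): the distance matrix is relaxed through each intermediate vertex k, while the next-vertex matrix records the first hop of each shortest path so the full edge sequence can be rebuilt afterwards.

```cpp
#include <vector>
#include <limits>

// Halved max avoids overflow when two 'infinite' distances are added.
const int INF = std::numeric_limits<int>::max() / 2;

// Serial Floyd-Warshall over row-major n x n matrices.
// dist holds path weights; next_v[i*n+j] is the next vertex after i
// on the current best path from i to j.
void floyd_warshall(int n, std::vector<int>& dist, std::vector<int>& next_v) {
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                int through_k = dist[i * n + k] + dist[k * n + j];
                if (through_k < dist[i * n + j]) {
                    dist[i * n + j] = through_k;
                    next_v[i * n + j] = next_v[i * n + k]; // path goes via k
                }
            }
}

// Walks the next-vertex matrix to recover the vertex sequence i -> ... -> j.
// Returns an empty vector if no path exists.
std::vector<int> get_path(int n, const std::vector<int>& dist,
                          const std::vector<int>& next_v, int i, int j) {
    std::vector<int> path;
    if (dist[i * n + j] >= INF) return path;
    path.push_back(i);
    while (i != j) {
        i = next_v[i * n + j];
        path.push_back(i);
    }
    return path;
}
```

Initialization sets `dist[i][j]` to the edge weight (0 on the diagonal, INF where no edge exists) and `next_v[i][j] = j` wherever an edge exists. The GPU version parallelizes the inner i/j loops across threads, launching one pass per value of k.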

Hello to Belgorod and Volgograd!

To my Japanese friends in Chofu: please star my code!

NOTE: no overclocking of the GPU; it is running at the stock 700 MHz.

Running times, CPU vs. GPU, for Floyd-Warshall APSP with full path cache:


| Total Vertices | Size of Adjacency Matrices | GPU Time (s) |
| --- | --- | --- |
| 1,000 | 1,000,000 | 0.061 |
| 2,000 | 4,000,000 | 0.392 |
| 4,000 | 16,000,000 | 2.8 |
| 8,000 | 64,000,000 | 22.6 |
____

Tesla K40x initial tests

| Total Vertices | Size of Adjacency Matrices | GPU Time (s) |
| --- | --- | --- |
| 15,000 | 225,000,000 | 211.4 |
| 20,000 | 400,000,000 | 490.2 |

This type of dynamic programming algorithm generally does not lend itself well to the parallel computing model, but it still achieves a consistent 37x to 51x speedup over the CPU version (including all host-to-device and device-to-host memory allocations and copies for the CUDA version). This implementation also seems to scale well, and so far has generated the same results as the CPU version for all tested data sets.

This algorithm is intended to be used on directed graphs with non-negative edge weights.

The testing generates an adjacency matrix in row-major form, with initial random weights assigned to approximately 25% of the edges and M[i][i] = 0. All other entries are set to 'infinity' to indicate no known path from vertex i to vertex j.
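A sketch of that test-graph generation (a hypothetical generator; the repo's own code may differ in weight range and RNG): diagonal entries are zeroed, roughly one in four off-diagonal entries gets a random weight, and everything else stays at 'infinity'.

```cpp
#include <cstdlib>
#include <vector>
#include <limits>

// Same convention as the solver: halved max so INF + INF does not overflow.
const int INF = std::numeric_limits<int>::max() / 2;

// Row-major n x n adjacency matrix: M[i][i] = 0, ~25% of off-diagonal
// entries get a random weight in [1, max_weight], the rest are INF.
std::vector<int> make_random_graph(int n, int max_weight = 100) {
    std::vector<int> m(static_cast<size_t>(n) * n, INF);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            if (i == j)
                m[i * n + j] = 0;             // zero-cost self path
            else if (rand() % 4 == 0)         // ~25% edge density
                m[i * n + j] = 1 + rand() % max_weight;
        }
    return m;
}
```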

Since no sparse format is used to store the matrix, this algorithm is best suited to highly connected graphs. For graphs with a low level of connectivity, a CUDA implementation of BFS is better suited.

The test hardware is an Intel Core i7-3770 CPU at 3.5 GHz (3.9 GHz turbo) and a single Nvidia GTX 680 2 GB GPU.
