gpu-parallelization

There are 4 repositories under the gpu-parallelization topic.

  • LLNL/inq

    This is a mirror. Please check our main website on GitLab.

    Language: C++
  • oekosheri/pytorch_unet_scaling

    Scaling U-Net in PyTorch

    Language: Jupyter Notebook
  • Vivek-Tate/Performance-Analysis-of-Parallel-Computing-Algorithms-using-CUDA-and-OpenMP

    This is an academic experiment comparing CPU and GPU performance using CUDA and OpenMP. It implements three algorithms: standard deviation calculation, image convolution, and a histogram-based data structure, each optimised for parallel execution to demonstrate performance improvements on different hardware architectures. (A minimal CUDA reduction sketch in this spirit appears after this list.)

    Language: Cuda
  • Ferdib-Al-Islam/gpu_parallelization

    Co-occurrence matrices act as the input to many unsupervised learning algorithms, including those that learn word embeddings and modern spectral topic models. However, computing these inputs often takes longer than the inference itself. While much thought has been given to implementing fast learning algorithms, the co-occurrence matrix computation is itself a task well suited to GPU parallelization, yet GPUs and other specialized hardware have not previously been used to explicitly compute word-to-word co-occurrence matrices. (A hypothetical CUDA kernel illustrating this computation appears after this list.)

    Language: Jupyter Notebook
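
To give a flavour of the kind of GPU kernel the Vivek-Tate experiment benchmarks against its CPU/OpenMP counterpart, here is a minimal CUDA sketch of the standard deviation calculation. It is an assumption that the repository uses this block-reduction pattern; the kernel name (sum_and_sumsq), the fixed block size of 256, and the single-pass formula sqrt(E[x^2] - E[x]^2) are illustrative choices, not taken from the repository.

```cuda
// Minimal sketch (assumed pattern, not the repository's code): each block
// reduces a grid-stride slice of the input into shared memory, then one
// atomicAdd per block accumulates the partial sum and sum of squares.
#include <cstdio>
#include <cmath>
#include <vector>
#include <cuda_runtime.h>

__global__ void sum_and_sumsq(const float* x, int n, float* sum, float* sumsq)
{
    __shared__ float s_sum[256];  // sized for a fixed block size of 256
    __shared__ float s_sq[256];

    float local_sum = 0.0f, local_sq = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x) {
        float v = x[i];
        local_sum += v;
        local_sq  += v * v;
    }
    s_sum[threadIdx.x] = local_sum;
    s_sq[threadIdx.x]  = local_sq;
    __syncthreads();

    // Tree reduction within the block (blockDim.x must be a power of two).
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) {
            s_sum[threadIdx.x] += s_sum[threadIdx.x + stride];
            s_sq[threadIdx.x]  += s_sq[threadIdx.x + stride];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) {
        atomicAdd(sum,   s_sum[0]);
        atomicAdd(sumsq, s_sq[0]);
    }
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> h(n);
    for (int i = 0; i < n; ++i) h[i] = static_cast<float>(i % 100);

    float *d_x, *d_sum, *d_sumsq;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_sum, sizeof(float));
    cudaMalloc(&d_sumsq, sizeof(float));
    cudaMemcpy(d_x, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(d_sum, 0, sizeof(float));
    cudaMemset(d_sumsq, 0, sizeof(float));

    sum_and_sumsq<<<256, 256>>>(d_x, n, d_sum, d_sumsq);

    float sum = 0.0f, sumsq = 0.0f;
    cudaMemcpy(&sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(&sumsq, d_sumsq, sizeof(float), cudaMemcpyDeviceToHost);

    float mean = sum / n;
    float stddev = std::sqrt(sumsq / n - mean * mean);
    printf("mean = %f, stddev = %f\n", mean, stddev);

    cudaFree(d_x); cudaFree(d_sum); cudaFree(d_sumsq);
    return 0;
}
```

The single-pass variance formula keeps the sketch short but can lose precision in single-precision floats; a two-pass mean-then-variance reduction is the numerically safer choice for a real benchmark, and the same kernel structure maps directly onto an OpenMP reduction loop for the CPU side of the comparison.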
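For the Ferdib-Al-Islam/gpu_parallelization repository, the following is a hypothetical sketch of how a word-to-word co-occurrence matrix can be filled on a GPU. The repository itself is a Jupyter notebook, so this standalone CUDA kernel, its name (cooccurrence), and the dense vocab x vocab count layout are assumptions made purely for illustration.

```cuda
// Hypothetical illustration (not taken from the repository): one thread per
// token position scans a symmetric context window and bumps the dense
// vocab x vocab count matrix with atomicAdd, so all pairs are counted in parallel.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void cooccurrence(const int* tokens, int n, int window,
                             int vocab, unsigned int* counts)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    int w_i = tokens[i];
    // Scan only to the right of position i so each unordered pair is
    // visited exactly once.
    for (int j = i + 1; j < n && j <= i + window; ++j) {
        int w_j = tokens[j];
        atomicAdd(&counts[w_i * vocab + w_j], 1u);
        atomicAdd(&counts[w_j * vocab + w_i], 1u);  // keep the matrix symmetric
    }
}

int main()
{
    const int vocab = 8, window = 2;
    std::vector<int> h_tokens = {0, 1, 2, 1, 3, 0, 2, 4, 1, 0};
    const int n = static_cast<int>(h_tokens.size());

    int* d_tokens;
    unsigned int* d_counts;
    cudaMalloc(&d_tokens, n * sizeof(int));
    cudaMalloc(&d_counts, vocab * vocab * sizeof(unsigned int));
    cudaMemcpy(d_tokens, h_tokens.data(), n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(d_counts, 0, vocab * vocab * sizeof(unsigned int));

    cooccurrence<<<(n + 255) / 256, 256>>>(d_tokens, n, window, vocab, d_counts);

    std::vector<unsigned int> h_counts(vocab * vocab);
    cudaMemcpy(h_counts.data(), d_counts, vocab * vocab * sizeof(unsigned int),
               cudaMemcpyDeviceToHost);

    // Print the co-occurrence matrix row by row.
    for (int r = 0; r < vocab; ++r) {
        for (int c = 0; c < vocab; ++c)
            printf("%3u ", h_counts[r * vocab + c]);
        printf("\n");
    }

    cudaFree(d_tokens);
    cudaFree(d_counts);
    return 0;
}
```

The right-only window scan with two atomicAdd calls counts every pair once while keeping the matrix symmetric; for realistic vocabularies the dense vocab x vocab array would be replaced by a sparse accumulation, since most word pairs never co-occur.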