neka-nat/cupoch

dbscan out of memory, but array smaller than GPU RAM

stratomaster31 opened this issue · 3 comments

When I try to run this code:

import numpy as np
import cupoch

X = np.random.randn(20000000, 3)
print(f"Sise of X: {X.nbytes / 1E9} GB")
pcd = cupoch.geometry.PointCloud()
pcd.points = cupoch.cupoch.utility.Vector3fVector(X)
labels = np.array(pcd.cluster_dbscan(eps=0.15, min_points=10, print_progress=True).cpu())

I get CUDA out of memory, but:

  • The array size is only 0.48 GB
  • My GPU has 6 GB of memory

Platform:

  • Microsoft Windows 10 Enterprise 10.0.19042 N/A Build 19042
  • Intel i7 10700
  • 64 GB RAM
  • NVIDIA GTX 1660 SUPER (6 GB VRAM)
  • Python 3.8.10

Thanks for the report!
You can keep memory usage down by setting max_edges, which limits the number of neighbor edges stored per point:

labels = np.array(pcd.cluster_dbscan(eps=0.15, min_points=10, print_progress=True, max_edges=20).cpu())
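
For intuition, here is a rough back-of-envelope estimate of why the point array itself is not the problem. This is a sketch assuming the clustering keeps up to max_edges int32 neighbor indices per point; the exact internal buffers in cupoch may differ:

import numpy as np

N = 20_000_000  # number of points in the report above

# The input array: 3 float64 coordinates per point (~0.48 GB, as printed).
points_gb = N * 3 * 8 / 1e9

# Hypothetical neighbor-list cost: max_edges int32 indices per point.
# Without a cap, dense regions can store far more edges than this.
for max_edges in (20, 100, 1000):
    edges_gb = N * max_edges * 4 / 1e9
    print(f"max_edges={max_edges:4d}: ~{edges_gb:.1f} GB of edge indices")

Even ~400 edges per point would exceed 6 GB on its own, so the neighbor bookkeeping, not the 0.48 GB point array, is what exhausts the GPU.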

Thanks for your reply! Does max_edges have an impact on the clustering results?

The denser the point cloud, the more it affects the result.
Please try adjusting max_edges for your data, e.g. as in the sketch below.
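
If it helps, here is a minimal sketch for sweeping a few max_edges values and checking when the clustering stabilizes. It assumes noise points are labeled -1 (the Open3D convention); the values are illustrative, not recommendations:

import numpy as np
import cupoch

X = np.random.randn(1_000_000, 3)  # smaller cloud so every run fits in 6 GB
pcd = cupoch.geometry.PointCloud()
pcd.points = cupoch.cupoch.utility.Vector3fVector(X)

for max_edges in (10, 20, 40):
    labels = np.array(
        pcd.cluster_dbscan(eps=0.15, min_points=10, max_edges=max_edges).cpu()
    )
    n_clusters = labels.max() + 1        # assumes cluster ids are 0..K-1
    noise_frac = np.mean(labels == -1)   # assumes noise is labeled -1
    print(f"max_edges={max_edges}: {n_clusters} clusters, {noise_frac:.1%} noise")

If the cluster count and noise fraction stop changing as max_edges grows, the smaller value is likely safe for that point cloud.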