HAL-42/FMA-WSSS

upload the files to OneDrive, Google Drive, Dropbox?

ProjectDisR opened this issue · 7 comments


These links are expired. Please provide updated links.

Thanks for your work! But these links are expired. Could you please provide updated links? Thank you!

Hi, may I ask how much GPU memory is needed to generate the SAM-based quasi-superpixels? I use a 3090 with 24 GB and have to decrease '-b' and '-w' for some complex images, but I think decreasing these two parameters will reduce the quasi-superpixel quality.
Could you please provide the quasi-superpixels?

These two parameters do not affect performance.

Also, try modifying src/libs/sam/custom_sam/sam_auto.py with the following _mask_intersection_areas implementation. This version significantly reduces memory usage:

    @staticmethod
    def _mask_intersection_areas(mask1: torch.Tensor, mask2: torch.Tensor, einsum: bool=True) -> torch.Tensor:
        """Computes the intersection area between two masks.

        Args:
            mask1: Binary masks of shape (N1, H, W).
            mask2: Binary masks of shape (N2, H, W).
            einsum: Whether to use einsum for the computation.

        Returns:
            A tensor of shape (N1, N2) containing the intersection area between the two masks.
        """
        if not einsum:
            # * Compute the intersection of the two masks.
            intersection = mask1[:, None, :, :] & mask2[None, :, :, :]  # (N1, N2, H, W)

            # * Compute the area of each intersection.
            intersection_areas = intersection.sum(dim=(2, 3), dtype=torch.long)  # (N1, N2)
        else:
            intersection_areas = torch.einsum('nij,mij->nm',
                                              mask1.to(torch.float32), mask2.to(torch.float32)).to(torch.long)

        return intersection_areas

    @staticmethod
    def _chunk_mask_intersection_areas(mask1: torch.Tensor, mask2: torch.Tensor, chunksize: int=0) -> torch.Tensor:
        """Computes the intersection area between two masks.

        Args:
            mask1: Binary masks of shape (N1, H, W).
            mask2: Binary masks of shape (N2, H, W).
            chunksize: Number of mask1 masks per chunk; 0 disables chunking.

        Returns:
            A tensor of shape (N1, N2) containing the intersection area between the two masks.
        """
        if chunksize == 0:
            return SamAuto._mask_intersection_areas(mask1, mask2)

        ret = []

        for i in range(0, mask1.shape[0], chunksize):
            chunk = mask1[i:i + chunksize, :, :]  # (chunksize, H, W)
            ret.append(SamAuto._mask_intersection_areas(chunk, mask2))  # (chunksize, N2)

        ret = torch.cat(ret, dim=0)  # (N1, N2)

        return ret
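For reference, here is a minimal, self-contained sketch of the same chunked-einsum idea with made-up mask counts and resolution (the helper name and shapes below are mine, not from the repo), just to illustrate how chunking over mask1 bounds peak memory while still producing the full (N1, N2) result:

    import torch

    def chunked_intersection_areas(mask1: torch.Tensor, mask2: torch.Tensor,
                                   chunksize: int = 64) -> torch.Tensor:
        """Standalone sketch: pairwise intersection areas via chunked einsum."""
        mask2_f = mask2.to(torch.float32)  # (N2, H, W), converted once
        out = []
        for i in range(0, mask1.shape[0], chunksize):
            chunk = mask1[i:i + chunksize].to(torch.float32)  # (c, H, W)
            # Each step only materializes a (c, H, W) float copy and a (c, N2) result,
            # never the full (N1, N2, H, W) broadcasted intersection tensor.
            out.append(torch.einsum('nij,mij->nm', chunk, mask2_f).to(torch.long))
        return torch.cat(out, dim=0)  # (N1, N2)

    # Hypothetical mask stacks; real ones come from SAM's automatic mask generation.
    mask1 = torch.rand(500, 128, 128) > 0.5
    mask2 = torch.rand(300, 128, 128) > 0.5
    print(chunked_intersection_areas(mask1, mask2).shape)  # torch.Size([500, 300])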

The links are expired. Could you provide updated links? Thank you!