continue-revolution/sd-webui-segment-anything

HELP ~ GroundingDINO doesn't work on my Mac ~ I have 2 problems

coong4 opened this issue · 16 comments

coong4 commented

I installed the extension in WebUI by URL, set up the params in the txt2img panel, and downloaded the SAM model & GroundingDINO model. Everything was fine until I ran Preview Segmentation; here is the result:


Here are the console logs. I think there are two issues:

1 - git clone error ( but I can visit GitHub in a browser AND clone things from the command line; this can be solved by turning on "local groundingdino" in settings, but I wonder why the download fails )
2 - cannot use torch.cuda ( but I already enabled "use CPU for SAM" )

And I'm definitely a newbie ~ hoping for a response ~

Start SAM Processing
Installing sd-webui-segment-anything requirement: groundingdino
Traceback (most recent call last):
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/dino.py", line 67, in install_goundingdino
    launch.run_pip(
  File "/Users/coong/SD/stable-diffusion-webui/modules/launch_utils.py", line 138, in run_pip
    return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
  File "/Users/coong/SD/stable-diffusion-webui/modules/launch_utils.py", line 115, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install sd-webui-segment-anything requirement: groundingdino.
Command: "/Users/coong/SD/stable-diffusion-webui/venv/bin/python3.10" -m pip install git+https://github.com/IDEA-Research/GroundingDINO --prefer-binary
Error code: 1
stdout: Looking in indexes: https://mirrors.aliyun.com/pypi/simple
Collecting git+https://github.com/IDEA-Research/GroundingDINO
  Cloning https://github.com/IDEA-Research/GroundingDINO to /private/var/folders/c_/wlyq1dss07lckqcg418k7msr0000gn/T/pip-req-build-8l0rw7ic
  Resolved https://github.com/IDEA-Research/GroundingDINO to commit 60d796825e1266e56f7e4e9e00e88de662b67bd3
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'error'

stderr:   Running command git clone --filter=blob:none --quiet https://github.com/IDEA-Research/GroundingDINO /private/var/folders/c_/wlyq1dss07lckqcg418k7msr0000gn/T/pip-req-build-8l0rw7ic
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [17 lines of output]
      Traceback (most recent call last):
        File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
        File "/private/var/folders/c_/wlyq1dss07lckqcg418k7msr0000gn/T/pip-build-env-fk61km89/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
        File "/private/var/folders/c_/wlyq1dss07lckqcg418k7msr0000gn/T/pip-build-env-fk61km89/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 325, in _get_build_requires
          self.run_setup()
        File "/private/var/folders/c_/wlyq1dss07lckqcg418k7msr0000gn/T/pip-build-env-fk61km89/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 507, in run_setup
          super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
        File "/private/var/folders/c_/wlyq1dss07lckqcg418k7msr0000gn/T/pip-build-env-fk61km89/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 341, in run_setup
          exec(code, locals())
        File "<string>", line 27, in <module>
      ModuleNotFoundError: No module named 'torch'
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

GroundingDINO install failed. Will fall back to local groundingdino this time. Please permanently switch to local groundingdino on Settings/Segment Anything or submit an issue to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues.
Running GroundingDINO Inference
Initializing GroundingDINO GroundingDINO_SwinT_OGC (694MB)
/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/modeling_utils.py:884: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
  warnings.warn(
Initializing SAM to cpu
Traceback (most recent call last):
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 204, in sam_predict
    sam = init_sam_model(sam_model_name)
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 129, in init_sam_model
    sam_model_cache[sam_model_name] = load_sam_model(sam_model_name)
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 80, in load_sam_model
    sam = sam_model_registry[model_type](checkpoint=sam_checkpoint_path)
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/sam_hq/build_sam_hq.py", line 29, in build_sam_hq_vit_l
    return _build_sam_hq(
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/sam_hq/build_sam_hq.py", line 122, in _build_sam_hq
    return _load_sam_checkpoint(sam, checkpoint)
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/sam_hq/build_sam_hq.py", line 67, in _load_sam_checkpoint
    state_dict = torch.load(f)
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1024, in load
    return _load(opened_zipfile,
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1432, in _load
    result = unpickler.load()
  File "/opt/homebrew/Cellar/python@3.10/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pickle.py", line 1213, in load
    dispatch[key[0]](self)
  File "/opt/homebrew/Cellar/python@3.10/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pickle.py", line 1254, in load_binpersid
    self.append(self.persistent_load(pid))
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1402, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1376, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 391, in default_restore_location
    result = fn(storage, location)
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 266, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 250, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

There is an option to use local groundingdino in Settings/SegmentAnything.
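For context on the pip failure: GroundingDINO's setup.py imports torch at build time, while pip compiles the wheel in an isolated build environment that has no torch in it, hence the "No module named 'torch'" error even though the venv has torch installed. A commonly suggested workaround (not this extension's official fix) is to install it manually with build isolation disabled, using the webui venv's python:

```shell
# Build isolation disabled, so setup.py can import the venv's torch.
"/Users/coong/SD/stable-diffusion-webui/venv/bin/python3.10" -m pip install --no-build-isolation "git+https://github.com/IDEA-Research/GroundingDINO"
```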

coong4 commented

Ooooooooh!
Finally done!
Thank you ( cry...

I will resolve this issue along with a major update later.

coong4 commented

Oooooooh! Finally done! Thanks!

Did I modify it correctly? But why? I didn't write any CUDA/GPU code.

coong4 commented

torch.load(model_checkpoint, map_location="cpu")
( Why is my reply missing... anyway... )
Oooooooh! Finally done! Thanks!

Did I modify it correctly? But why? I didn't write any CUDA/GPU code.

USA is switching to standard time from daylight time, so the comment order is quite messed up. I've received all your comments via email.

Your change is correct. You can submit a PR, even though it forces CPU. I will most likely not merge it, but it can serve as a reminder for me to fix this in the major update later.

coong4 commented

There is an option to use local groundingdino in Settings/SegmentAnything.

Yeah, the first problem can be bypassed with local groundingdino, but I still cannot run the Preview because of the CUDA problem.
Is there anything else I need to do?

coong4 commented

With local groundingdino in use, I ran Preview Segmentation again ( with "use CPU for SAM" ticked ) and got the same CUDA error.

P.S. Mac mini M2

Start SAM Processing
Using local groundingdino.
Running GroundingDINO Inference
Initializing GroundingDINO GroundingDINO_SwinT_OGC (694MB)
/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/modeling_utils.py:884: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
  warnings.warn(
Initializing SAM to cpu
Traceback (most recent call last):
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 204, in sam_predict
    sam = init_sam_model(sam_model_name)
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 129, in init_sam_model
    sam_model_cache[sam_model_name] = load_sam_model(sam_model_name)
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 80, in load_sam_model
    sam = sam_model_registry[model_type](checkpoint=sam_checkpoint_path)
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/sam_hq/build_sam_hq.py", line 29, in build_sam_hq_vit_l
    return _build_sam_hq(
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/sam_hq/build_sam_hq.py", line 122, in _build_sam_hq
    return _load_sam_checkpoint(sam, checkpoint)
  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/sam_hq/build_sam_hq.py", line 67, in _load_sam_checkpoint
    state_dict = torch.load(f)
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1024, in load
    return _load(opened_zipfile,
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1432, in _load
    result = unpickler.load()
  File "/opt/homebrew/Cellar/python@3.10/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pickle.py", line 1213, in load
    dispatch[key[0]](self)
  File "/opt/homebrew/Cellar/python@3.10/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pickle.py", line 1254, in load_binpersid
    self.append(self.persistent_load(pid))
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1402, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1376, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 391, in default_restore_location
    result = fn(storage, location)
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 266, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 250, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

There is a checkbox to the right of the model selection - "use CPU", if I remember correctly. Check it and you should be fine.

coong4 commented

There is a checkbox to the right of the model selection - "use CPU", if I remember correctly. Check it and you should be fine.

I always have it turned on, and I read the main.py code; at first I thought it should work, but apparently not.

I don't understand why torch.load is trying to load weights to CUDA. You may try to force torch.load to load to CPU or some Mac device. Follow the line at

  File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/sam_hq/build_sam_hq.py", line 67, in _load_sam_checkpoint
    state_dict = torch.load(f)
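For reference, torch.load restores each tensor storage to the device it was saved from unless map_location overrides it, which is why a checkpoint saved on a CUDA machine fails to load on a CPU-only machine. A minimal sketch of the remedy (an in-memory buffer stands in for the SAM checkpoint file):

```python
import io

import torch

# Serialize a small state dict, standing in for the SAM checkpoint.
buf = io.BytesIO()
torch.save({"w": torch.ones(2, 2)}, buf)
buf.seek(0)

# map_location="cpu" remaps every storage to CPU, so loading works even
# when torch.cuda.is_available() is False (e.g. on Apple Silicon).
state_dict = torch.load(buf, map_location="cpu")
print(state_dict["w"].device)  # prints "cpu"
```

On an M-series Mac the loaded model can afterwards be moved to the GPU with `.to("mps")` where supported.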
coong4 commented

There is a checkbox to the right of the model selection - "use CPU", if I remember correctly. Check it and you should be fine.

I find it weird too; the console log clearly shows "Initializing SAM to cpu"...

Use the method I proposed above anyway. It should solve your problem.

coong4 commented

Use the method I proposed above anyway. It should solve your problem.

You mean I should force it to 'cpu'? How? Forgive me for asking...

torch.load(model_checkpoint, map_location="cpu")

coong4 commented

USA is switching to standard time from daylight time, so the comment order is quite messed up. I've received all your comments via email.

Your change is correct. You can submit a PR, even though it forces CPU. I will most likely not merge it, but it can serve as a reminder for me to fix this in the major update later.

Hi, I submitted a PR for this issue.