[Bug]: RuntimeError: MPS support binary op with uint8 natively starting from macOS 13.0
websepia opened this issue · 9 comments
Is there an existing issue for this?
- I have searched the existing issues and checked the recent builds/commits of both this extension and the webui
Have you updated WebUI and this extension to the newest version?
- I have updated WebUI and this extension to the most up-to-date version
Do you understand that you should go to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues if you cannot install GroundingDINO?
- My problem is not about installing GroundingDINO
Do you know that you should use the newest ControlNet extension and enable external control if you want SAM extension to control ControlNet?
- I have updated ControlNet extension and enabled "Allow other script to control this extension"
What happened?
RuntimeError: MPS support binary op with uint8 natively starting from macOS 13.0
Steps to reproduce the problem
- Select any SAM Model like sam_vit_b_01ec64.pth.
- Upload an image.
- Press Preview segmentation.
What should have happened?
SAM should run under macOS Monterey 12.6.6 (21G646) using the PyTorch CPU backend.
Commit where the problem happens
webui:
extension:
sd-webui-segment-anything %
git branch -v
- master 89a2213 update readme to fix typo
What browsers do you use to access the UI ?
Microsoft Edge
Command Line Arguments
export COMMANDLINE_ARGS="--api --skip-torch-cuda-test --no-half --no-half-vae --upcast-sampling --opt-split-attention-v1 --disable-nan-check --disable-safe-unpickle"
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0
Console logs
Start SAM Processing
Initializing SAM
Running SAM Inference (811, 650, 3)
Traceback (most recent call last):
File "/Users/xixili/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 408, in run_predict
output = await app.get_blocks().process_api(
File "/Users/xixili/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1315, in process_api
result = await self.call_function(
File "/Users/xixili/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1043, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Users/xixili/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Users/xixili/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/Users/xixili/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/Users/xixili/AI/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 208, in sam_predict
predictor.set_image(image_np_rgb)
File "/Users/xixili/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/segment_anything/predictor.py", line 60, in set_image
self.set_torch_image(input_image_torch, image.shape[:2])
File "/Users/xixili/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/xixili/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/segment_anything/predictor.py", line 88, in set_torch_image
input_image = self.model.preprocess(transformed_image)
File "/Users/xixili/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/segment_anything/modeling/sam.py", line 167, in preprocess
x = (x - self.pixel_mean) / self.pixel_std
RuntimeError: MPS support binary op with uint8 natively starting from macOS 13.0
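The failing line is `x = (x - self.pixel_mean) / self.pixel_std`, where `x` is still a uint8 image tensor: on MPS, a binary op with a uint8 operand is only supported natively from macOS 13.0 onward. A minimal sketch of a workaround (this is a hypothetical helper, not the actual `segment_anything` code) is to cast the tensor to float32 before the binary op so MPS never sees a uint8 operand:

```python
import torch

def preprocess_safe(x: torch.Tensor,
                    pixel_mean: torch.Tensor,
                    pixel_std: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: normalize an image tensor the way
    Sam.preprocess does, but cast uint8 input to float32 first so the
    subtraction never runs a uint8 binary op on MPS (macOS < 13.0)."""
    x = x.to(torch.float32)  # cast *before* the binary op
    return (x - pixel_mean) / pixel_std
```

With this cast in place, the subtraction and division run entirely in float32, which MPS handles on older macOS versions as well.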
Additional information
My physical environment:
On the latest torch 2.0.1, "stable-diffusion-webui" works on my old iMac (Retina 5K, 27-inch, Late 2015) with a quad-core i5 CPU, 32GB of RAM, and AMD Radeon R9 M395 2 GB graphics.
stable-diffusion-webui:
git branch -v
- master 89f9faa6 Merge branch 'release_candidate'
All python libs:
absl-py==1.4.0
accelerate==0.18.0
addict==2.4.0
aenum==3.1.12
aiofiles==23.1.0
aiohttp==3.8.4
aiosignal==1.3.1
altair==4.2.2
antlr4-python3-runtime==4.9.3
anyio==3.6.2
astunparse==1.6.3
async-timeout==4.0.2
attrs==23.1.0
basicsr==1.4.2
beautifulsoup4==4.12.2
bitsandbytes==0.35.4
blendmodes==2022
boltons==23.0.0
cachetools==5.3.0
certifi==2022.12.7
chardet==4.0.0
charset-normalizer==3.1.0
clean-fid==0.1.29
click==8.1.3
clip @ git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1
coloredlogs==15.0.1
contourpy==1.0.7
cycler==0.11.0
dadaptation==1.5
deprecation==2.1.0
diffusers==0.14.0
discord-webhook==1.1.0
einops==0.4.1
entrypoints==0.4
facexlib==0.3.0
fastapi==0.94.1
ffmpy==0.3.0
filelock==3.12.0
filterpy==1.4.5
flatbuffers==23.3.3
font-roboto==0.0.1
fonts==0.0.3
fonttools==4.39.3
frozenlist==1.3.3
fsspec==2023.4.0
ftfy==6.1.1
future==0.18.3
gast==0.4.0
gdown==4.7.1
gfpgan==1.3.8
gitdb==4.0.10
GitPython==3.1.31
google-auth==2.17.3
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
gradio==3.23.0
grpcio==1.54.0
h11==0.12.0
h5py==3.8.0
httpcore==0.15.0
httpx==0.24.0
huggingface-hub==0.14.1
humanfriendly==10.0
idna==2.10
imageio==2.28.0
importlib-metadata==6.6.0
inflection==0.5.1
invisible-watermark==0.1.5
jax==0.4.8
Jinja2==3.1.2
jsonmerge==1.8.0
jsonschema==4.17.3
keras==2.12.0
kiwisolver==1.4.4
kornia==0.6.7
lark==1.1.2
lazy_loader==0.2
libclang==16.0.0
lightning-utilities==0.8.0
linkify-it-py==2.0.0
lion-pytorch==0.0.7
llvmlite==0.39.1
lmdb==1.4.1
lpips==0.1.4
Markdown==3.4.3
markdown-it-py==2.2.0
MarkupSafe==2.1.2
matplotlib==3.7.1
mdit-py-plugins==0.3.3
mdurl==0.1.2
ml-dtypes==0.1.0
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
networkx==3.1
numba==0.56.4
numpy==1.23.3
oauthlib==3.2.2
omegaconf==2.2.3
onnx==1.13.1
onnxruntime==1.14.1
open-clip-torch @ git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b
opencv-contrib-python==4.7.0.72
opencv-python==4.7.0.72
opt-einsum==3.3.0
orjson==3.8.10
packaging==23.1
pandas==2.0.1
piexif==1.1.3
Pillow==9.4.0
protobuf==4.22.3
psutil==5.9.5
pyasn1==0.5.0
pyasn1-modules==0.3.0
pydantic==1.10.7
pyDeprecate==0.3.2
pydub==0.25.1
Pygments==2.15.1
pyparsing==3.0.9
pyre-extensions==0.0.23
pyrsistent==0.19.3
PySocks==1.7.1
python-dateutil==2.8.2
python-multipart==0.0.6
pytorch-lightning==1.9.4
pytz==2023.3
PyWavelets==1.4.1
PyYAML==6.0
realesrgan==0.3.0
regex==2023.3.23
rehash==1.0.1
requests
requests-oauthlib==1.3.1
resize-right==0.0.2
rich==13.3.5
rsa==4.9
safetensors==0.3.0
scikit-image==0.19.2
scipy==1.10.1
semantic-version==2.10.0
Send2Trash==1.8.2
sentencepiece==0.1.98
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
soupsieve==2.4.1
starlette==0.26.1
sympy==1.11.1
tb-nightly==2.13.0a20230317
tensorboard==2.12.0
tensorboard-data-server==0.7.0
tensorboard-plugin-wit==1.8.1
tensorflow==2.12.0
tensorflow-estimator==2.12.0
tensorflow-io-gcs-filesystem==0.32.0
termcolor==2.3.0
tifffile==2023.4.12
timm==0.6.7
tokenizers==0.13.3
tomli==2.0.1
toolz==0.12.0
torch==1.13.1
torchdiffeq==0.2.3
torchmetrics==0.11.4
torchsde==0.2.5
torchvision==0.14.1
tqdm==4.64.1
trampoline==0.1.2
transformers==4.26.1
typing-inspect==0.8.0
typing_extensions==4.5.0
tzdata==2023.3
uc-micro-py==1.0.1
urllib3==1.26.15
uvicorn==0.21.1
wcwidth==0.2.6
websockets==11.0.2
Werkzeug==2.3.1
wrapt==1.14.1
yapf==0.33.0
yarl==1.9.2
zipp==3.15.0
At this moment, I am not able to help Mac users since I do not have access to macOS myself. However, judging from the console log, updating your macOS version to >= 13.0 might help.
Thanks, I will consider buying a new Mac. 😓
Also, since the problem is inside the segment_anything package, you may want to search https://github.com/facebookresearch/segment-anything/issues — it is quite likely other people have hit the same problem.
Yes, I wonder if it would work if the segment_anything web UI extension could use 'cpu' instead of MPS. As a feature enhancement, that would cover my case:
sam_checkpoint = "./models/sam_vit_h_4b8939.pth"
device = 'cpu'
model_type = "default"
sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
sam.to(device=device)
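The snippet above hard-codes `device = 'cpu'`. A slightly more general sketch (a hypothetical helper, assuming only standard PyTorch APIs, not the extension's actual implementation) would let the user force CPU while still preferring MPS or CUDA when they are available:

```python
import torch

def pick_sam_device(force_cpu: bool = False) -> str:
    """Hypothetical helper: choose a device string for SAM.
    Falls back to CPU when the user requests it or when no
    accelerator backend is available (e.g. MPS on macOS < 13.0)."""
    if force_cpu:
        return "cpu"
    if torch.backends.mps.is_available():
        return "mps"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"
```

The model would then be moved with `sam.to(device=pick_sam_device(force_cpu=True))` on machines where MPS misbehaves.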
OK. I have added it to my TODO list, and it will be implemented very soon.
Thx, great!
Should be available in the new update. Let me know if the problem still exists.
Great! I gave it a try this morning, and the CPU feature works for me now!
Start SAM Processing
Initializing SAM to cpu
Running SAM Inference (512, 512, 3)
SAM inference with 0 box, 13 positive prompts, 0 negative prompts
Creating output image
Also attaching a screenshot. This feature is very useful, thanks!
Doesn't work for me.