opensearch.xml request URL in Docker is just http://searx/
JackMcCrack opened this issue · 7 comments
Hi,
The opensearch.xml contains the right FQDN as the Image URL, but the POST request URL for search requests is just the hostname of the Docker container, 'searx'.
What happens:
- Firefox tries to resolve the hostname/URL 'searx' and then redirects it to searx.com (which is not a searx instance). Firefox does not allow editing the search engine URL manually. 😟
- Chromium returns 'This site can’t be reached. Check if there is a typo in searx. DNS_PROBE_FINISHED_NXDOMAIN', but I can edit the search engine URL in the Chromium settings.
tested on version: 0.18.0-156-1e35c3cc (installed via docker)
How to reproduce:
Firefox
- open searx instance
- add custom search via URL-bar menu (3 dots) or search field (green plus)
- enter search term and select newly added searx
Chromium:
- open searx instance
- open in another tab chrome://settings/searchEngines
- below 'Other search engines' select 'make default' in the Menu behind searx instance
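A quick check, independent of the browser, is to inspect which hostname the instance advertises. The example line below mirrors the broken XML quoted later in this thread:

```shell
# The broken element from the instance's opensearch.xml, saved as a shell variable:
line='<Url rel="results" type="text/html" method="post" template="http://searx/search">'
# Extract just the template attribute to see which hostname is advertised:
printf '%s\n' "$line" | grep -o 'template="[^"]*"'
# → template="http://searx/search"
```

Against a live instance (URL is a placeholder): `curl -s https://searx.example.org/opensearch.xml | grep -o 'template="[^"]*"'`.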
Workaround
- let nginx deliver a handcrafted opensearch.xml
location ~ ^/opensearch\.xml.*$ {
    root /usr/share/nginx/html;
    try_files /opensearch.xml =404;
}
suggested fix
Use base_url instead of the hostname in /searx/templates/common/opensearch.xml:
Maybe like this?
- <Url rel="suggestions" type="application/x-suggestions+json" template="{{ host }}autocompleter?q={searchTerms}"/>
+ <Url rel="suggestions" type="application/x-suggestions+json" template="{{ url_for('search', _external=True) }}autocompleter?q={searchTerms}"/>
Ah, if this would fit better in the searx/searx issue tracker, please feel free to move it.
Thanks
Are you using searx-docker scripts or just the searx docker image?
I use the systemd service which calls the start/stop.sh
For your main webserver is it only Caddy, which is the one from searx-docker? Or is it NGINX which is acting as a reverse proxy for caddy?
I run the searx-docker container behind a nginx.
This is my nginx config on the Docker host:
upstream searx {
server 127.0.0.1:4040;
keepalive 64;
}
upstream morty {
server 127.0.0.1:3000;
keepalive 64;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name searx.example.org;
ssl_certificate /etc/dehydrated/certs/searx.example.org/fullchain.pem;
ssl_certificate_key /etc/dehydrated/certs/searx.example.org/privkey.pem;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/dehydrated/certs/searx.example.org/fullchain.pem;
add_header Strict-Transport-Security "max-age=15768000" always; # six months
add_header X-Frame-Options "DENY";
add_header Content-Security-Policy "default-src 'self'; img-src 'self'; script-src 'self'; style-src 'self'; object-src 'none'; frame-ancestors 'none'" always;
location / {
proxy_buffering off;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_pass http://searx;
}
location /morty {
proxy_buffering off;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_pass http://morty;
}
# REMOVE this location when opensearch.xml is fixed
# from here.
location ~ ^/opensearch\.xml.*$ {
    root /usr/share/nginx/html;
    try_files /opensearch.xml =404;
}
# until here.
}
The problem is not nginx, it's the wrong hostname in opensearch.xml.
When I open the URL https://searx.HOSTNAME/opensearch.xml in a webbrowser there should be a line like this:
<Url rel="results" type="text/html" method="post" template="http://searx.HOSTNAME/search">
<Param name="q" value="{searchTerms}"/>
</Url>
but instead there is this:
<Url rel="results" type="text/html" method="post" template="http://searx/search">
<Param name="q" value="{searchTerms}"/>
</Url>
the difference is the HOSTNAME part.
You forgot to pass the Host header. Please see searx/searx#2547 (comment) for more info.
Btw, your setup is unsupported by searx-docker, so this means you are on your own because we haven't tested your case.
The Host header fixed it. Thank you for that hint.
proxy_set_header Host $http_host;
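For completeness, this is the proxy location from the config above with the missing directive added; `$http_host` is the Host header as sent by the client, so searx can build correct external URLs from it:

```nginx
location / {
    proxy_buffering off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    # Forward the original Host header so searx generates the public FQDN
    # in opensearch.xml instead of the container hostname:
    proxy_set_header Host $http_host;
    proxy_pass http://searx;
}
```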