zmap/zgrab2

'panic' error

Yulian83 opened this issue · 6 comments

An error occurs when running the following command:
echo '192.168.20.1' | zgrab2 http --raw-headers --max-redirects=10 --with-body-size --use-https --port=443

And the error itself looks like this:

INFO[0000] started grab at 2024-08-20T08:49:57Z
panic: runtime error: slice bounds out of range [:-3535]

goroutine 1025 [running]:
github.com/zmap/zgrab2/lib/http.(*TeeConn).Bytes(...)
/opt/zgrab2-0.1.8/lib/http/transport.go:1166
github.com/zmap/zgrab2/lib/http.readResponse(0xc00007e1c0, 0xc0000d0800)
/opt/zgrab2-0.1.8/lib/http/response.go:227 +0x6e5
github.com/zmap/zgrab2/lib/http.ReadResponseTee(...)
/opt/zgrab2-0.1.8/lib/http/response.go:170
github.com/zmap/zgrab2/lib/http.(*persistConn).readResponse(0xc000168240, {0xc0000d0800, 0xc000100a80, 0x1, 0x0, 0xc000100a20}, 0x0)
/opt/zgrab2-0.1.8/lib/http/transport.go:1708 +0x8a
github.com/zmap/zgrab2/lib/http.(*persistConn).readLoop(0xc000168240)
/opt/zgrab2-0.1.8/lib/http/transport.go:1550 +0x39a
created by github.com/zmap/zgrab2/lib/http.(*Transport).dialConn in goroutine 23
/opt/zgrab2-0.1.8/lib/http/transport.go:1138 +0x12c8

However, the error disappears when I remove the --raw-headers flag, or when I scan plain HTTP with the flag still set:
echo '192.168.20.1' | zgrab2 http --max-redirects=10 --with-body-size --use-https --port=443
echo '192.168.20.9' | zgrab2 http --raw-headers --max-redirects=10 --with-body-size --port=80 | jq

OS: Ubuntu 22.04
Go version: go1.22.2 linux/amd64

Huh, I didn't even realize there was a --raw-headers flag. It makes sense that you only see the behavior with that flag, since the logic below only triggers when it is set.

Specifying --raw-headers sets a bool (RawHeaders):

RawHeaders bool `long:"raw-headers" description:"Extract raw response up through headers"`

That is passed along here:

RawHeaderBuffer: scanner.config.RawHeaders,

I'll take a look and add a little more detail. I'm on mobile, so I can't easily look at all of the code at once.

The http.Transport gets that bool:

RawHeaderBuffer bool

That causes a member of the TeeConn instance to get set:

pconn.tee.enabled = t.RawHeaderBuffer

That tee is set here:

pconn.tee = &TeeConn{}

TeeConn is defined here:

type TeeConn struct {
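
For anyone following along without the source open, the pattern those snippets describe is roughly the following. This is a simplified sketch, not zgrab2's actual TeeConn: a reader wrapper that, when the option is enabled, copies every byte it reads from the connection into a side buffer so the raw header bytes can be reported later.

package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// sketchTee is a simplified stand-in: when enabled, it copies everything it
// reads from the underlying connection into a side buffer.
type sketchTee struct {
	r       io.Reader
	buf     bytes.Buffer
	enabled bool // wired from a config flag, like RawHeaders above
}

func (t *sketchTee) Read(p []byte) (int, error) {
	n, err := t.r.Read(p)
	if t.enabled && n > 0 {
		t.buf.Write(p[:n]) // capture the raw bytes as the HTTP reader consumes them
	}
	return n, err
}

func main() {
	resp := "HTTP/1.1 200 OK\r\nServer: demo\r\n\r\nhello"
	t := &sketchTee{r: strings.NewReader(resp), enabled: true}
	io.Copy(io.Discard, t) // stand-in for the HTTP response reader draining the connection
	fmt.Printf("captured %d raw bytes\n", t.buf.Len())
}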

I wonder if, in some case, TeeConn.tb or TeeConn.bt isn't actually filled by a write (or has already been drained by a read somewhere else), so when it's accessed as a slice, it panics because it actually holds 0 bytes, not the 3535 bytes it seems to expect, based on your stack trace.

I'll have to look more once I'm at a computer, but it may be as simple as a missing write into the TeeConn buffer.
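
To make the failure mode I have in mind concrete, here is a hypothetical sketch (made-up field names, not the real TeeConn): if the side buffer never received a write but some counter assumes 3535 bytes were captured, the end offset computed by subtraction goes negative and the slice expression panics.

package main

import "fmt"

// hypothetical fields, not zgrab2's real TeeConn; just the shape of the suspected bug
type fakeTee struct {
	buf      []byte // side buffer; empty if nothing was ever written into it
	consumed int    // bytes some other bookkeeping believes were captured
}

func (t *fakeTee) bytes() []byte {
	end := len(t.buf) - t.consumed // 0 - 3535 = -3535 when buf is empty
	return t.buf[:end]             // panic: slice bounds out of range [:-3535]
}

func main() {
	t := &fakeTee{consumed: 3535}
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	_ = t.bytes()
}

Running that prints recovered: runtime error: slice bounds out of range [:-3535], which matches the message in your trace.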

Can you elaborate a little on what is at 192.168.20.1? And can you confirm you're using a build from the master branch of git? I'm trying to reproduce but haven't been able to do so on a few randomly chosen web servers

Or, even better, if you can reproduce it with plaintext HTTP and provide a packet capture, that would be best
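
For example, something along these lines would do it for the plaintext case (the interface name and capture filename are just examples; run the tcpdump in one terminal and the scan in another):

sudo tcpdump -i any -w zgrab2-repro.pcap host 192.168.20.9 and tcp port 80
echo '192.168.20.9' | zgrab2 http --raw-headers --max-redirects=10 --with-body-size --port=80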

(@Yulian83)

Here is a set of commands from my installation script:
sudo wget -c -O zgrab2.tar.gz https://github.com/zmap/zgrab2/archive/refs/tags/v0.1.8.tar.gz
sudo rm -rf /opt/zgrab2-* && sudo tar -C /opt/ -xzf zgrab2.tar.gz
cd /opt/zgrab2-*
sudo make
sudo ln -s /opt/zgrab2-*/zgrab2 /usr/bin/zgrab2
I repeated the test: it fails when accessing port 443 on an HP router's web page, and it fails against a virtual machine, but it works against the page served on the machine running zgrab2 itself.

I see, thanks for those details

I wouldn't expect the location of the service (VM, LAN, local, WAN) to be all that important for this case; I think what matters most is how the service on the target behaves and what the response looks like. I'm assuming they are different HTTP implementations, which could be a major factor.

My main interest is in reproducing the issue on my system

Do you see any additional output when adding --debug?

If you can attach output from --debug or pipe it through a TLS terminating proxy (such as BurpSuite) and attach the request/response, I will take a deeper look and try to reproduce/diagnose/propose a fix
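
For example, something like this should capture the debug log (assuming the extra output goes to stderr; the filename is just an example):

echo '192.168.20.1' | zgrab2 http --raw-headers --max-redirects=10 --with-body-size --use-https --port=443 --debug 2> zgrab2-debug.log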

Unfortunately, I'm not comfortable proposing any changes to the code until I'm able to reproduce the issue. And I'm not strong enough in golang to confidently diagnose the issue via only static analysis