RFC: go-exploit intercommunication
j-baines opened this issue · 0 comments
go-exploit was designed to create stand-alone scanners/exploits. Stand-alone operation is, and always will be, a core part of go-exploit and what makes it unique. That will never change.
However.
Hypothetically, let's say an organization has hundreds of go-exploit implementations and wants to use them all to scan a single HTTP server. Should those exploits send hundreds of HTTP GET requests to /, or should they send one? Obviously, they should send one and share the response amongst themselves; otherwise, they're being wildly inefficient.
So, in an ideal world, go-exploit would support a mechanism for individual exploits to share data between themselves. A (read: one possible) solution would be to introduce an optional parameter that takes in an SQLite database. HTTP responses that can be cached (or that the developer explicitly asks to cache) can be stuck in there, the database can be passed around between exploits/scanners, and cached information can be pulled out as needed.
This isn't too dissimilar from how Nessus works. It has a SQLite (? I forget) DB that stores all sorts of things (including some HTTP responses). I'd say the main difference is that they will store things as granular as "this version I extracted from the HTTP interface of a Synology DSM". That's sort of neat in that a plugin can be quickly written that basically says "Do we have the version from a Synology DSM? Cool. Is it a vulnerable version?" But actually running that plugin requires figuring out all of its dependencies, which can be sort of a nightmare (and that, again, is why stand-alone go-exploit shines).
I think that's a reasonable path forward. We could introduce specific HTTP functions that ask for caching (or just respect caching headers in normal responses? That sounds like a bad idea, but it's worth looking at). The DB could also be used for passing around credentials, storing secrets, etc.