[bug] An error occurs when running nuclei with multiple instances
hktalent opened this issue · 15 comments
fatal error: concurrent map iteration and map write
test code:
package main

import (
	"bytes"
	"net/http"
	_ "net/http/pprof"
	"os"
	"sync"

	"github.com/hktalent/scan4all/nuclei_Yaml"
)

func DoNuclei(buf *bytes.Buffer, wg *sync.WaitGroup, oOpts *map[string]interface{}) {
	defer wg.Done()
	xx := make(chan bool)
	// hand the shared options map to the runner; xx signals completion
	go nuclei_Yaml.RunNuclei(buf, xx, oOpts)
	<-xx
}

func main() {
	// expose pprof for debugging
	go func() {
		http.ListenAndServe(":6060", nil)
	}()
	buf := bytes.Buffer{}
	var wg sync.WaitGroup
	buf.WriteString("http://192.168.10.31:8888\n")
	pwd, _ := os.Getwd()
	m1 := map[string]interface{}{
		"UpdateTemplates":    false,
		"Templates":          []string{pwd + "/config/nuclei-templates"},
		"TemplatesDirectory": pwd + "/config/nuclei-templates",
		"NoUpdateTemplates":  true,
	}
	// three concurrent nuclei instances sharing the same options map
	wg.Add(1)
	go DoNuclei(&buf, &wg, &m1)
	wg.Add(1)
	go DoNuclei(&buf, &wg, &m1)
	wg.Add(1)
	go DoNuclei(&buf, &wg, &m1)
	wg.Wait()
}
error:
[0:00:25] | Templates: 3661 | Hosts: 1 | RPS: 102 | Matched: 19 | Errors: 92 | Requests: 2574/5043 (51%)
fatal error: concurrent map iteration and map write
goroutine 36114 [running]:
runtime.throw({0x52c27f3?, 0x1?})
/usr/local/Cellar/go/1.18.4/libexec/src/runtime/panic.go:992 +0x71 fp=0xc010f3f288 sp=0xc010f3f258 pc=0x4038b51
runtime.mapiternext(0x5031840?)
/usr/local/Cellar/go/1.18.4/libexec/src/runtime/map.go:871 +0x4eb fp=0xc010f3f2f8 sp=0xc010f3f288 pc=0x4013aeb
github.com/projectdiscovery/nuclei/v2/pkg/protocols/http.(*Request).executeRequest(0xc00ebe1680, {0xc0007cac60, 0x19}, 0xc01385f540, 0xc008d9d680?, 0x0, 0xc008d9d6b0, 0x0?)
/Users/51pwn/MyWork/scan4all/vendor/github.com/projectdiscovery/nuclei/v2/pkg/protocols/http/request.go:573 +0x28fb fp=0xc010f3ff38 sp=0xc010f3f2f8 pc=0x4d0f59b
github.com/projectdiscovery/nuclei/v2/pkg/protocols/http.(*Request).executeTurboHTTP.func1(0xc008412b40?)
/Users/51pwn/MyWork/scan4all/vendor/github.com/projectdiscovery/nuclei/v2/pkg/protocols/http/request.go:210 +0x9f fp=0xc010f3ffc8 sp=0xc010f3ff38 pc=0x4d0bbdf
github.com/projectdiscovery/nuclei/v2/pkg/protocols/http.(*Request).executeTurboHTTP.func2()
/Users/51pwn/MyWork/scan4all/vendor/github.com/projectdiscovery/nuclei/v2/pkg/protocols/http/request.go:216 +0x2a fp=0xc010f3ffe0 sp=0xc010f3ffc8 pc=0x4d0bb0a
runtime.goexit()
/usr/local/Cellar/go/1.18.4/libexec/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc010f3ffe8 sp=0xc010f3ffe0 pc=0x406b861
created by github.com/projectdiscovery/nuclei/v2/pkg/protocols/http.(*Request).executeTurboHTTP
/Users/51pwn/MyWork/scan4all/vendor/github.com/projectdiscovery/nuclei/v2/pkg/protocols/http/request.go:207 +0x377
goroutine 1 [chan receive]:
main.DoNuclei(0xc00093bd40, 0xc000869e30?, 0x5280fd1?)
/Users/51pwn/MyWork/scan4all/test/nuclei/testNuclei.go:29 +0xca
main.main()
/Users/51pwn/MyWork/scan4all/test/nuclei/testNuclei.go:45 +0x2c5
options:
{
"Tags": [],
"ExcludeTags": [],
"Workflows": [],
"WorkflowURLs": [],
"Templates": [
"/Users/51pwn/MyWork/scan4all/config/nuclei-templates"
],
"TemplateURLs": [],
"RemoteTemplateDomainList": [
"api.nuclei.sh"
],
"ExcludedTemplates": [],
"ExcludeMatchers": null,
"CustomHeaders": [],
"Vars": {},
"Severities": null,
"ExcludeSeverities": null,
"Authors": [],
"Protocols": [
1,
2,
3,
4,
5,
6,
7,
8,
9
],
"ExcludeProtocols": null,
"IncludeTags": [],
"IncludeTemplates": [],
"IncludeIds": [],
"ExcludeIds": [],
"InternalResolversList": null,
"ProjectPath": "/var/folders/_l/pnb2t_9s0f192bqlz1348vpr0000gn/T/",
"InteractshURL": "",
"InteractshToken": "",
"Targets": [
"http://192.168.10.31:8888"
],
"TargetsFilePath": "",
"Resume": "",
"Output": "",
"ProxyInternal": false,
"Proxy": [],
"TemplatesDirectory": "/Users/51pwn/MyWork/scan4all/config/nuclei-templates",
"TraceLogFile": "",
"ErrorLogFile": "",
"ReportingDB": "",
"ReportingConfig": "",
"MarkdownExportDirectory": "",
"SarifExport": "",
"ResolversFile": "",
"StatsInterval": 5,
"MetricsPort": 9092,
"MaxHostError": 30,
"BulkSize": 64,
"TemplateThreads": 64,
"HeadlessBulkSize": 10,
"HeadlessTemplateThreads": 10,
"Timeout": 5,
"Retries": 1,
"RateLimit": 150,
"RateLimitMinute": 0,
"PageTimeout": 20,
"InteractionsCacheSize": 5000,
"InteractionsPollDuration": 5,
"InteractionsEviction": 60,
"InteractionsCoolDownPeriod": 5,
"MaxRedirects": 10,
"FollowRedirects": false,
"OfflineHTTP": false,
"StatsJSON": false,
"Headless": false,
"ShowBrowser": false,
"UseInstalledChrome": false,
"SystemResolvers": false,
"Metrics": false,
"Debug": false,
"DebugRequests": false,
"DebugResponse": false,
"LeaveDefaultPorts": false,
"AutomaticScan": false,
"Silent": false,
"Version": false,
"Validate": false,
"NoStrictSyntax": false,
"Verbose": false,
"VerboseVerbose": false,
"NoColor": false,
"UpdateTemplates": false,
"JSON": false,
"JSONRequests": false,
"EnableProgressBar": true,
"TemplatesVersion": false,
"TemplateList": false,
"HangMonitor": false,
"Stdin": false,
"StopAtFirstMatch": false,
"Stream": false,
"NoMeta": false,
"NoTimestamp": false,
"Project": false,
"NewTemplates": false,
"NewTemplatesWithVersion": null,
"NoInteractsh": false,
"UpdateNuclei": false,
"NoUpdateTemplates": true,
"EnvironmentVariables": false,
"MatcherStatus": false,
"ClientCertFile": "",
"ClientKeyFile": "",
"ClientCAFile": "",
"ZTLS": false,
"ShowMatchLine": false,
"EnablePprof": false,
"StoreResponse": false,
"StoreResponseDir": "output",
"DisableRedirects": true,
"SNI": "",
"HealthCheck": false,
"InputReadTimeout": 0,
"DisableStdin": false
}
@ehsandeep
github.com/projectdiscovery/nuclei/v2/pkg/protocols/http/request.go
Analysis shows that the `previousEvent` and `finalEvent` maps in this file are read and written without synchronization, a thread-safety bug. It ultimately traces back to the unguarded map type:
type InternalEvent map[string]interface{}
The same bug exists in v2.7.4:
fatal error: concurrent map writes
goroutine 21282 [running]:
runtime.throw({0x529fe07?, 0x40f2905?})
/usr/local/Cellar/go/1.18.4/libexec/src/runtime/panic.go:992 +0x71 fp=0xc00776d290 sp=0xc00776d260 pc=0x403a471
runtime.mapassign_faststr(0x5261772?, 0x5?, {0xc00f19c510, 0xe})
/usr/local/Cellar/go/1.18.4/libexec/src/runtime/map_faststr.go:212 +0x39c fp=0xc00776d2f8 sp=0xc00776d290 pc=0x40187fc
github.com/projectdiscovery/nuclei/v2/pkg/protocols/http.(*Request).executeRequest(0xc0095b2b60, {0xc0087c07a0, 0x19}, 0xc01253ef50, 0xc0088ba1b0?, 0x0, 0xc0088ba1e0, 0x0?)
/Users/51pwn/MyWork/scan4all/vendor/github.com/projectdiscovery/nuclei/v2/pkg/protocols/http/request.go:592 +0x2aef fp=0xc00776df38 sp=0xc00776d2f8 pc=0x4d1b94f
github.com/projectdiscovery/nuclei/v2/pkg/protocols/http.(*Request).executeTurboHTTP.func1(0xc0096665a0?)
/Users/51pwn/MyWork/scan4all/vendor/github.com/projectdiscovery/nuclei/v2/pkg/protocols/http/request.go:210 +0x9f fp=0xc00776dfc8 sp=0xc00776df38 pc=0x4d1791f
github.com/projectdiscovery/nuclei/v2/pkg/protocols/http.(*Request).executeTurboHTTP.func2()
/Users/51pwn/MyWork/scan4all/vendor/github.com/projectdiscovery/nuclei/v2/pkg/protocols/http/request.go:216 +0x2a fp=0xc00776dfe0 sp=0xc00776dfc8 pc=0x4d1784a
runtime.goexit()
/usr/local/Cellar/go/1.18.4/libexec/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00776dfe8 sp=0xc00776dfe0 pc=0x406d181
created by github.com/projectdiscovery/nuclei/v2/pkg/protocols/http.(*Request).executeTurboHTTP
/Users/51pwn/MyWork/scan4all/vendor/github.com/projectdiscovery/nuclei/v2/pkg/protocols/http/request.go:207 +0x377
goroutine 1 [chan receive]:
main.DoNuclei(0xc005cfc9c0, 0xc005cfc9f0?, 0xc000010f00)
/Users/51pwn/MyWork/scan4all/test/nuclei/testNuclei.go:29 +0xe5
main.main()
/Users/51pwn/MyWork/scan4all/test/nuclei/testNuclei.go:45 +0x277
Could you provide an HTTP interface for submitting asynchronous tasks?
1. Start an HTTP server that receives tasks.
2. A task contains a URL, template tags, and option parameters; for example, you could specify template data per URL (yaml, zip).
POST http://127.0.0.1/nuclei/v1/api/addTask
Parameter example:
url=http://127.0.0.1/test
templatesIds=[]string{}
tags=[]string{}
templates=[]string{"./config/xx1.yaml","http://127.0.0.1/pocs/xx1.yaml","http://127.0.0.1/pocs/all.zip"}
Same error here. How can it be fixed? Thanks.
I also encountered the same problem. Could you fix it as soon as possible?
@ehsandeep
Bug 2: with the same three-goroutine launch as above,
go DoNuclei(&buf, &wg, &m1)
wg.Add(1)
go DoNuclei(&buf, &wg, &m1)
wg.Add(1)
go DoNuclei(&buf, &wg, &m1)
the console log shows progress above 100%:
[0:01:00] | Templates: 3680 | Hosts: 1 | RPS: 89 | Matched: 26 | Errors: 186 | Requests: 5380/5094 (105%)
[0:01:05] | Templates: 3680 | Hosts: 1 | RPS: 58 | Matched: 17 | Errors: 174 | Requests: 3784/5094 (74%)
[0:01:05] | Templates: 3680 | Hosts: 1 | RPS: 75 | Matched: 23 | Errors: 136 | Requests: 4905/5094 (96%)
[0:01:05] | Templates: 3680 | Hosts: 1 | RPS: 82 | Matched: 26 | Errors: 189 | Requests: 5383/5094 (105%)
[0:01:10] | Templates: 3680 | Hosts: 1 | RPS: 54 | Matched: 17 | Errors: 174 | Requests: 3784/5094 (74%)
[0:01:10] | Templates: 3680 | Hosts: 1 | RPS: 70 | Matched: 23 | Errors: 136 | Requests: 4905/5094 (96%)
[0:01:10] | Templates: 3680 | Hosts: 1 | RPS: 76 | Matched: 26 | Errors: 192 | Requests: 5386/5094 (105%)
[0:01:12] | Templates: 3680 | Hosts: 1 | RPS: 67 | Matched: 23 | Errors: 136 | Requests: 4905/5094 (96%)
[0:01:15] | Templates: 3680 | Hosts: 1 | RPS: 50 | Matched: 17 | Errors: 174 | Requests: 3784/5094 (74%)
[0:01:15] | Templates: 3680 | Hosts: 1 | RPS: 71 | Matched: 26 | Errors: 194 | Requests: 5388/5094 (105%)
[0:01:20] | Templates: 3680 | Hosts: 1 | RPS: 47 | Matched: 17 | Errors: 174 | Requests: 3784/5094 (74%)
[0:01:20] | Templates: 3680 | Hosts: 1 | RPS: 67 | Matched: 26 | Errors: 196 | Requests: 5390/5094 (105%)
[0:01:25] | Templates: 3680 | Hosts: 1 | RPS: 44 | Matched: 17 | Errors: 174 | Requests: 3784/5094 (74%)
[0:01:25] | Templates: 3680 | Hosts: 1 | RPS: 63 | Matched: 26 | Errors: 196 | Requests: 5390/5094 (105%)
[0:01:25] | Templates: 3680 | Hosts: 1 | RPS: 44 | Matched: 17 | Errors: 174 | Requests: 3784/5094 (74%)
[0:01:25] | Templates: 3680 | Hosts: 1 | RPS: 63 | Matched: 26 | Errors: 196 | Requests: 5390/5094 (105%)
@ehsandeep
request.go (proposed patch; lines marked + are additions):

+ var someMapMutex = sync.RWMutex{}

// executeRequest executes the actual generated request and returns error if occurred
func (request *Request) executeRequest(reqURL string, generatedRequest *generatedRequest, previousEvent output.InternalEvent, hasInteractMatchers bool, callback protocols.OutputEventCallback, requestCount int) error {
	request.setCustomHeaders(generatedRequest)
	.....
+	someMapMutex.Lock()
	for k, v := range previousEvent {
		finalEvent[k] = v
	}
	for k, v := range outputEvent {
		finalEvent[k] = v
	}
	// Add to history the current request number metadata if asked by the user.
	if request.ReqCondition {
		for k, v := range outputEvent {
			key := fmt.Sprintf("%s_%d", k, requestCount)
			previousEvent[key] = v
			finalEvent[key] = v
		}
	}
+	someMapMutex.Unlock()

This fixes the crash at request.go:573.
@hktalent Running multiple instances of nuclei at the same time from the same process is not expected in general, but we will look into the crash.
@ehsandeep
I would like concurrent multi-instance nuclei to be supported. It is very valuable for multi-tasking scenarios where targets are added dynamically.
Thanks.
@ehsandeep
I recommend not placing the runner under internal/. I have hit a situation where I need to call the nuclei runner's Close() externally to terminate the current task, and because the package is internal, importing it violates Go's visibility rules.
Code example:
(x1.(*runner2.Runner)).Close()
@ehsandeep After each package update this has to be patched manually, otherwise memory errors and abnormal exits occur. Could you merge my PR #2308?
request.go: the same someMapMutex patch as posted above; it fixes the crash at request.go:573.
@ehsandeep
With concurrent multi-instance runs I also see a large number of "leveldb: closed" errors:
0xProject/0x-mesh#319
@hktalent It seems that https://github.com/hktalent/scan4all/nuclei_Yaml no longer exists.
Is this issue still reproducible? Can you provide me a link to where the nuclei runner is being used?
@Mzack9999 maybe https://github.com/hktalent/scan4all/tree/main/projectdiscovery/nuclei_Yaml is the new path for it?