shuLhan/go-bindata

Stack overflow if the data is too large

grantstephens opened this issue · 3 comments

If the data file you're loading is too large then the following error occurs:

runtime: goroutine stack exceeds 1000000000-byte limit
runtime: sp=0xc02c6f0418 stack=[0xc02c6f0000, 0xc04c6f0000]
fatal error: stack overflow

runtime stack:
runtime.throw(0x764e87, 0xe)
	/usr/local/go/src/runtime/panic.go:1112 +0x72
runtime.newstack()
	/usr/local/go/src/runtime/stack.go:1034 +0x6ce
runtime.morestack()
	/usr/local/go/src/runtime/asm_amd64.s:449 +0x8f

goroutine 1 [running]:
go/types.(*Checker).exprInternal(0xc00010a000, 0xc00c6f0600, 0x7da340, 0xc0086e6300, 0x0, 0x0, 0x0)
	/usr/local/go/src/go/types/expr.go:1014 +0x3e49 fp=0xc02c6f0428 sp=0xc02c6f0420 pc=0x61b329
go/types.(*Checker).rawExpr(0xc00010a000, 0xc00c6f0600, 0x7da340, 0xc0086e6300, 0x0, 0x0, 0x0)
	/usr/local/go/src/go/types/expr.go:981 +0x81 fp=0xc02c6f04c8 sp=0xc02c6f0428 pc=0x617201

This only happens when you run the program that uses the data, not during the go-bindata generation itself. It also does not happen if you add the -nocompress option (I can't explain why yet).
I have found that if the byte slice is not split into multiple lines, i.e. by removing lines https://github.com/shuLhan/go-bindata/blob/master/stringwriter.go#L41-L46, then the program is able to run again.
I would like to add a flag to stop the splitting of the []byte data. Would the maintainers be OK with that, or is there another approach?
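To illustrate the workaround, here is a minimal sketch of emitting the data as a single unbroken `[]byte` literal instead of splitting it across many lines. The helper name `writeOneLine` is hypothetical and not part of go-bindata's actual API; it only demonstrates the one-line emission idea.

```go
package main

import (
	"bytes"
	"fmt"
)

// writeOneLine renders a Go []byte variable declaration on a single
// line, with no per-chunk line breaks. (Hypothetical helper for
// illustration; go-bindata's real writer lives in stringwriter.go.)
func writeOneLine(name string, data []byte) string {
	var buf bytes.Buffer
	fmt.Fprintf(&buf, "var %s = []byte{", name)
	for i, b := range data {
		if i > 0 {
			buf.WriteString(", ")
		}
		fmt.Fprintf(&buf, "0x%02x", b)
	}
	buf.WriteString("}\n")
	return buf.String()
}

func main() {
	// Prints: var asset = []byte{0x68, 0x69}
	fmt.Print(writeOneLine("asset", []byte("hi")))
}
```

The generated source is less readable, but it avoids the deeply nested expression structure that the type checker appears to recurse over.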

@grantstephens ,

Thanks for the report.

My gut says we should revert commit accfe6a, because it only provides a nicer format for the generated Go file. It may introduce changes to other people's builds, but that is better than having this bug.

Let me test first.

@grantstephens what Go version did you use?

I was on Go 1.14 but didn't test other versions. Thank you, the revert fixed the problem; very much appreciated!
For reference, we are using goimports, and it takes a very long time to reformat the data file when it contains so many newlines; it is much quicker when the data is all on one line.