generalui/s3p

stdout maxBuffer length exceeded

cryptowhizzard opened this issue · 5 comments

Error:
class: class RangeError
stack:
RangeError [ERR_CHILD_PROCESS_STDIO_MAXBUFFER]: stdout maxBuffer length exceeded
at new NodeError (node:internal/errors:372:5)
at Socket.onChildStdout (node:child_process:461:14)
at Socket.emit (node:events:527:28)
at Socket.emit (node:domain:475:12)
at addChunk (node:internal/streams/readable:315:12)
at readableAddChunk (node:internal/streams/readable:285:11)
at Socket.Readable.push (node:internal/streams/readable:228:10)
at Pipe.onStreamRead (node:internal/stream_base_commons:190:23)

Can you provide more details? Your full s3p command line with all the options would help a great deal in tracking this down. Also, if you can, share your s3p summarize results. That'll give me a hint about where to look for the bug.

same problem here

duration:       0.3522348999977112
items:          192
itemsPerSecond: 545
requests:       2
size:           2755628011680
maxSize:        14902834199
minSize:        14168713555
maxSizeKey:
  ...

minSizeKey:
  ...

sizeHistogram: gigabytes: items:  192,     size:    2755628011680, 1gB: 0, 2gB: 0, 4gB: 0, 8gB: 0, 16gB: 192, 32gB: 0, 64gB: 0, 128gB: 0, 256gB: 0, 512gB: 0
averageSize:   1467327339
human:         size:      2.51tB, maxSize: 13.88gB, minSize:       13.20gB, averageSize: 1.37gB

Can you provide more details? Your full s3p command-line with all the options would help a great deal tracking this down.

I did a little research. The error comes from Node's exec function, which I use to copy larger files via the AWS CLI. The default maxBuffer is 1 megabyte; I'm surprised that isn't enough.
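
For context, here's a minimal sketch of how that limit comes into play, assuming s3p shells out to something like aws s3 cp (the exact command isn't shown in this thread): exec collects the child's stdout into an in-memory buffer and rejects with ERR_CHILD_PROCESS_STDIO_MAXBUFFER once that buffer grows past maxBuffer (1 MiB by default in current Node versions). Raising maxBuffer in the exec options is the straightforward workaround. The copyWithAwsCli helper and its command string below are illustrative, not the actual s3p code:

```js
// Sketch only: how exec's maxBuffer limit applies (not s3p's real implementation).
// If the aws CLI writes more than maxBuffer bytes to stdout, exec terminates the
// child and rejects with ERR_CHILD_PROCESS_STDIO_MAXBUFFER.
const { exec } = require("child_process");
const { promisify } = require("util");
const execAsync = promisify(exec);

const copyWithAwsCli = async (sourceUri, targetUri) => {
  // Hypothetical command; the real s3p invocation may differ.
  const command = `aws s3 cp ${sourceUri} ${targetUri}`;
  // Default maxBuffer is 1024 * 1024 bytes (1 MiB); raising it avoids the error
  // as long as the CLI's output stays under the new limit.
  const { stdout, stderr } = await execAsync(command, {
    maxBuffer: 64 * 1024 * 1024, // 64 MiB, generous headroom for progress output
  });
  return { stdout, stderr };
};
```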

Does this happen every time? It sounds more like there was an error that wasn't properly reported.

If it does happen every time, I can try upping the maxBuffer setting; then you can try it and let me know whether that helps.
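
Another option, sketched below and not what s3p currently does, would be to switch from exec to spawn, which streams the child's stdio instead of buffering it, so no maxBuffer limit applies at all. The helper name and the aws s3 cp arguments are illustrative assumptions:

```js
// Alternative sketch (not s3p's actual code): spawn streams the child's stdio
// instead of buffering it, so ERR_CHILD_PROCESS_STDIO_MAXBUFFER cannot occur.
const { spawn } = require("child_process");

const copyWithAwsCliStreaming = (sourceUri, targetUri) =>
  new Promise((resolve, reject) => {
    const child = spawn("aws", ["s3", "cp", sourceUri, targetUri]);

    let stderr = "";
    child.stdout.resume(); // drain progress output without storing it
    child.stderr.on("data", (chunk) => (stderr += chunk));

    child.on("error", reject);
    child.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`aws s3 cp failed (${code}): ${stderr}`))
    );
  });
```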