mikespook/gearman-go

Sending large job data

Opened this issue · 7 comments

kdar commented

I noticed with my application that if I send a large amount of job data, it takes a long time to process while gearman-go keeps spewing "Not enough data" errors back at me. There are multiple solutions to this problem, but the one I tested can be found here:

https://github.com/kdar/gearman-go/compare/big-data

It basically reads all of the data upfront, before it ever gets to decodeInPack(), so decodeInPack() won't return an error. Another solution is to have the caller of decodeInPack() notice when there isn't enough data yet and wait until a sufficient amount has arrived before continuing. You would also need to increase bufferSize, since a size of 1024 is extremely small and would still make processing slow.
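To illustrate the first approach outside of the library's code, here is a minimal sketch (readPacket and headerLen are just illustrative names; it assumes the standard Gearman binary framing with a 12-byte header and a big-endian payload size in bytes 8..12):

```go
import (
	"encoding/binary"
	"io"
)

// headerLen is the fixed Gearman binary header size:
// 4 bytes magic, 4 bytes packet type, 4 bytes payload size.
const headerLen = 12

// readPacket reads one complete packet (header + payload) before it is
// handed to the decoder, so the decoder never sees a truncated buffer
// no matter how large the payload is.
func readPacket(r io.Reader) ([]byte, error) {
	header := make([]byte, headerLen)
	if _, err := io.ReadFull(r, header); err != nil {
		return nil, err
	}
	// Payload size is a big-endian uint32 in bytes 8..12 of the header.
	payloadLen := binary.BigEndian.Uint32(header[8:12])

	packet := make([]byte, headerLen+int(payloadLen))
	copy(packet, header)
	if _, err := io.ReadFull(r, packet[headerLen:]); err != nil {
		return nil, err
	}
	return packet, nil
}
```

io.ReadFull blocks until exactly that many bytes have arrived, so the only way this fails is a genuine connection error.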

Let me know what you think.

Your solution should be right, but I need a few days to think about this issue.
Once I can make sure there are no other problems, would you please make a pull request for me?

kdar commented

Yup. No problem.

Any update on this?

I've made a pull request and merged it. Could you please test this?

Unfortunately this is now causing more problems than it fixed. It was working fine for a few jobs when I tested it, but in production with a higher volume it eventually hangs on a read from the connection and blocks all incoming jobs. For my use case, I will likely just revert this commit and increase the buffer size to fit my needs.

I've no idea why it would hang on reading.

Could you tell me which line is blocked, L171 or L181?

ndhfs commented

I reverted de91c99
and then fixed the read func like so:

```go
func (a *agent) read(length int) (data []byte, err error) {
	var headerBuf []byte
	var n int

	// Peek at the fixed-size header without consuming it, so the total
	// packet length can be taken from bytes 8..12 (big-endian payload size).
	if headerBuf, err = a.rw.Peek(minPacketLength); err != nil {
		return
	}

	dl := int(binary.BigEndian.Uint32(headerBuf[8:12])) + minPacketLength

	buf := make([]byte, dl)

	// Read until the whole packet (header + payload) has arrived, but never
	// read past its end, so bytes of the next packet are not consumed.
	for len(data) < dl {
		if n, err = a.rw.Read(buf[:dl-len(data)]); err != nil {
			return
		}
		data = append(data, buf[:n]...)
	}

	return
}
```

And it seems to work stably.
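(For context: bufio's Peek returns the header bytes without consuming them, so the loop afterwards still reads the full packet, header included, and stops exactly at the length announced in the header, which should keep decodeInPack() from ever seeing a partial packet.)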