Lots of ClientAbortExceptions on compression
anant2614 opened this issue · 19 comments
Seeing a lot of ClientAbortException instances while performing compression on the data. Below is the stack trace we are observing:
```
Wrapped by: o.a.c.c.ClientAbortException: java.io.IOException: Broken pipe
    at o.a.c.c.OutputBuffer.realWriteBytes(OutputBuffer.java:353)
    at o.a.c.c.OutputBuffer.flushByteBuffer(OutputBuffer.java:783)
    at o.a.c.c.OutputBuffer.append(OutputBuffer.java:688)
    at o.a.c.c.OutputBuffer.writeBytes(OutputBuffer.java:388)
    at o.a.c.c.OutputBuffer.write(OutputBuffer.java:366)
    at o.a.c.c.CoyoteOutputStream.write(CoyoteOutputStream.java:96)
    at j.n.c.Channels$WritableByteChannelImpl.write(Unknown Source)
    at c.a.b.e.Encoder.pushOutput(Encoder.java:161)
    at c.a.b.e.Encoder.encode(Encoder.java:186)
    at c.a.b.e.Encoder.flush(Encoder.java:203)
    at c.a.b.e.BrotliOutputStream.flush(BrotliOutputStream.java:92)
    at g.c.s.w.f.b.BrotliServletOutputStream.flush(BrotliServletOutputStream.java:45)
```
Is this happening due to the timeouts at the client being low? -> I checked; the timeout on the client side is sufficient.
Able to replicate? -> No, it's only happening on our production deployments.
I need more context to figure this out, but from a quick look it appears the connection was closed while the `Encoder` was compressing data. Can you show the source code of `BrotliServletOutputStream`?
Yeah, it seems like that, but the timeout at our Envoy level is large enough, so it shouldn't take that long.
Here's the code:
```java
import com.aayushatharva.brotli4j.encoder.BrotliOutputStream;
import com.aayushatharva.brotli4j.encoder.Encoder;
import lombok.extern.slf4j.Slf4j;

import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.io.OutputStream;

@Slf4j
public class BrotliServletOutputStream extends ServletOutputStream {

    private BrotliOutputStream brotliOutputStream;
    private HttpServletRequest request;

    /**
     * @param outputStream outputStream.
     * @param parameters   brotli compression parameters
     * @param request      the current request, used for logging context
     */
    public BrotliServletOutputStream(OutputStream outputStream, Encoder.Parameters parameters,
                                     HttpServletRequest request) {
        try {
            brotliOutputStream = new BrotliOutputStream(outputStream, parameters);
            this.request = request;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void write(int byteToWrite) throws IOException {
        brotliOutputStream.write(byteToWrite);
    }

    @Override
    public void write(byte[] buffer) throws IOException {
        brotliOutputStream.write(buffer);
    }

    @Override
    public void write(byte[] buffer, int offset, int len) throws IOException {
        brotliOutputStream.write(buffer, offset, len);
    }

    @Override
    public void flush() throws IOException {
        try {
            brotliOutputStream.flush();
        } catch (IOException ex) {
            log.error("Exception occurred in BrotliOutputStream for req uri: {}, req params: {},"
                            + " req api key: {}, exception: {}",
                    request.getRequestURI(), request.getQueryString(), request.getHeader("x-api-key"),
                    ex.getMessage());
            throw ex;
        }
    }

    @Override
    public void close() throws IOException {
        brotliOutputStream.close();
    }

    @Override
    public boolean isReady() {
        return false;
    }

    @Override
    public void setWriteListener(WriteListener writeListener) {
        throw new UnsupportedOperationException("WriteListener support is not yet implemented.");
    }
}
```
It looks like you're not calling the `flush` method periodically. You're compressing data but not writing those chunks to the client, so the client times out and this exception is thrown. You have to call the `flush` method so the compressed data is actually written to the `OutputStream`. I'd suggest calling `flush` after every write.
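Something along these lines (an untested sketch based on the class above; note that flushing after every write pushes each compressed block out immediately, at some cost to compression ratio):

```java
// Sketch only: flush after each write so compressed chunks reach the
// client promptly instead of sitting in the encoder's internal buffer.
@Override
public void write(byte[] buffer, int offset, int len) throws IOException {
    brotliOutputStream.write(buffer, offset, len);
    brotliOutputStream.flush(); // force the encoder to emit what it has so far
}
```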
I tried this solution as well, but I'm still getting the same error. Could this be an issue with the buffer size?
What is the timeout duration for a client?
The timeout is 5 seconds, though we mostly get responses within 800 ms at the 99th percentile. The failures are around 500 per hour at 3000 QPS.
Are the test cases passing for your implementation locally?
Yes, my service tests pass, and the Brotli4J tests pass too. Compression itself is working fine. It's just that we're seeing these errors on a fraction of calls.
Your best bet would be to debug those clients then. I can't help much with this. :(
Yeah I'm looking into that.
I see that our existing Gzip compression is working without any such errors; it's not using `Transfer-Encoding: chunked` but setting the `Content-Length` in the response headers instead.
Do you think avoiding chunked encoding and setting `Content-Length` instead might avoid these errors?
`Transfer-Encoding: chunked` is used when you're compressing data on the fly and don't know the exact number of bytes. If you have pre-compressed data with Brotli, then use `Content-Length` instead of `Transfer-Encoding`.
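For illustration, a rough sketch of the pre-compressed path (`compressWithBrotli` is a hypothetical helper that compresses the full payload up front, not a Brotli4J API):

```java
// Sketch: compress the whole payload first so the exact size is known,
// then send it with Content-Length instead of chunked transfer encoding.
byte[] compressed = compressWithBrotli(payload); // hypothetical helper
response.setHeader("Content-Encoding", "br");
response.setContentLength(compressed.length);    // exact byte count is known
response.getOutputStream().write(compressed);
```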
Can you create a repo with a reproducer so I can debug easily?
Also, is the client throwing any errors while decompressing Brotli data?
No, the client isn't seeing any errors.
Regarding on-the-fly compression: we currently do it with gzip and still set `Content-Length` by counting the number of bytes written. I think the same has to be done for Brotli, but the `BrotliOutputStream` class in Brotli4J doesn't have a method that returns the number of bytes written.
And I'm just not able to replicate the broken-pipe error even once, so I'm not sure how anyone can debug it.
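(One possible workaround, sketched below; `CountingOutputStream` here is illustrative, not a Brotli4J class: interpose a byte-counting wrapper between the Brotli stream and its target, and read the count once compression finishes.)

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative wrapper: counts every byte the Brotli encoder writes to the
// underlying stream, since BrotliOutputStream itself doesn't expose this.
public class CountingOutputStream extends FilterOutputStream {
    private long count;

    public CountingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);
        count += len;
    }

    public long getCount() {
        return count;
    }
}
```

Usage would be `new BrotliOutputStream(new CountingOutputStream(target), parameters)`; after `close()`, `getCount()` gives the compressed size, though the output would still need to be buffered if `Content-Length` must be set before the body is sent.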
Here you go :)
```java
import com.aayushatharva.brotli4j.Brotli4jLoader;
import com.aayushatharva.brotli4j.encoder.BrotliOutputStream;
import com.aayushatharva.brotli4j.encoder.Encoder;

import java.io.ByteArrayOutputStream;

public class Main {

    static {
        // First things first :)
        // Let's load the native library
        Brotli4jLoader.ensureAvailability();
    }

    public static void main(String[] args) throws Exception {
        try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
             BrotliOutputStream brotliOutputStream = new BrotliOutputStream(outputStream, Encoder.Parameters.DEFAULT)) {

            // Data to compress as byte[]
            byte[] data = "I like cats but ducks are more adorable in my opinion :)".getBytes();
            System.out.println("Original Length: " + data.length);

            // Compress and flush
            brotliOutputStream.write(data);
            brotliOutputStream.flush();

            // Now we have the compressed data as byte[]
            // We can do anything we want with it :)
            byte[] compressed = outputStream.toByteArray();
            System.out.println("Compressed Length: " + compressed.length);
        }
    }
}
```
I tried compressing the final byte[] response as in your code above, but now the client is unable to decompress the Brotli response.
I have included a sample here:
https://github.com/anant2614/brotli4j-sample
It's a Spring Boot app, and the /hello API should return a Brotli response. I can see it being compressed, but Postman and OkHttp now fail to decompress it.
Looks like my original implementation writes one extra byte to the output stream compared to the code above. I need to figure out what byte it is.
Show the decompressor code.
> Looks like my original implementation writes one extra byte to the output stream compared to the code above. I need to figure out what byte it is.
Any update?
Closing due to lack of further detail. Feel free to reopen if needed.