HTTP2 using Proxygen
SteveSelva opened this issue · 16 comments
I have configured Proxygen to work with HTTP/2 over TLS 1.3 on both the downstream and upstream sides.
Downstream connections use a single connection for multiple requests, and that works fine without any issues.
But the upstream connection is closed after each request, and a new connection is established for every new request, even though it is speaking HTTP/2.
How do I configure Proxygen to keep the upstream connection open and reuse the same connection for multiple requests?
Do you have sample code for how you are configuring/using the session? Also, are you using a SessionPool or not?
No, I am not using a SessionPool.
I am using Proxygen's ProxyHandler.cpp and HTTPServerAcceptor as references to create a proxy server.
I have made some changes in those two files to make them work as a proxy server.
ProxyHandler.cpp
void ProxyHandler::onRequest(std::unique_ptr<HTTPMessage> headers) noexcept {
  if (headers->getMethod() == HTTPMethod::CONNECT) {
    LOG(INFO) << "NO-OP";
  } else {
    serverName_ = headers->getHeaders().rawGet("Host");
    request_ = std::move(headers);
    stats_->recordRequest();
    const folly::SocketOptionMap opts{{{SOL_SOCKET, SO_REUSEADDR}, 1}};
    downstream_->pauseIngress();
    if (remoteAddress_.empty()) {
      proxygen::URL url(request_->getURL());
      folly::SocketAddress addr;
      addr.setFromHostPort(url.getHost(), url.getPort());
      remoteAddress_ = addr;
    }
    LOG(INFO) << "Trying to connect to " << remoteAddress_ << " : " << serverName_;
    if (remoteAddress_.getPort() == IPPORT_HTTPS) {
      connector_.connectSSL(folly::EventBaseManager::get()->getEventBase(),
                            remoteAddress_,
                            ctx_,
                            nullptr,
                            std::chrono::milliseconds(10000),
                            opts,
                            folly::AsyncSocket::anyAddress(),
                            serverName_);
    } else {
      connector_.connect(folly::EventBaseManager::get()->getEventBase(),
                         remoteAddress_,
                         std::chrono::milliseconds(FLAGS_proxy_connect_timeout),
                         opts);
    }
  }
}
bool ProxyHandler::checkForShutdown() {
  if (clientTerminated_ && !txn_ &&
      (!upstreamSock_ || (sockStatus_ == CLOSED && !upstreamEgressPaused_))) {
    // delete this;
    LOG(INFO) << "Connection Closed";
    return true;
  }
  return false;
}
HTTPServerAcceptor.cpp
HTTPTransactionHandler* HTTPServerAcceptor::newHandler(HTTPTransaction& txn,
                                                       HTTPMessage* msg) noexcept {
  SocketAddress clientAddr, vipAddr;
  txn.getPeerAddress(clientAddr);
  txn.getLocalAddress(vipAddr);
  msg->setClientAddress(clientAddr);
  msg->setDstAddress(vipAddr);
  proxygen::URL url(msg->getURL());
  // HTTPHeaders headers = msg->getHeaders();
  LOG(INFO) << "Creating New Handler";
  if (msg->getMethod() == proxygen::HTTPMethod::CONNECT) {
    try {
      folly::SocketAddress addr;
      addr.setFromHostPort(url.getHost(), url.getPort());
      remoteAddress = addr;
      LOG(INFO) << remoteAddress.describe();
    } catch (const Exception& ex) {
      LOG(INFO) << "Exception Code : " << ex.getCode() << " Exception : " << ex.what();
    }
    sendConnectResponse(txn, url.getHost(), clientAddr);
  }
  RequestHandler* h = nullptr;  // declaration missing in the snippet as pasted
  for (auto& factory : handlerFactories_) {
    h = factory->onRequest(h, msg);
  }
  LOG(INFO) << "Created Handler for : " << vipAddr;
  txn.describe(LOG(INFO));
  return new RequestHandlerAdaptor(h);
}
void HTTPServerAcceptor::onDoneAcceptingConnection(
    int fd,
    const folly::SocketAddress& clientAddr,
    std::chrono::steady_clock::time_point acceptTime,
    const folly::AsyncServerSocket::AcceptCallback::AcceptInfo& info) noexcept {
  LOG(INFO) << "Preprocessing Connection : " << clientAddr.describe();
  wangle::TransportInfo tinfo;
  tinfo.timeBeforeEnqueue = info.timeBeforeEnqueue;
  processConnection(fd, clientAddr, acceptTime, tinfo);
}
void HTTPServerAcceptor::processConnection(
    int fd,
    const folly::SocketAddress& clientAddr,
    std::chrono::steady_clock::time_point acceptTime,
    wangle::TransportInfo& tinfo) noexcept {
  acceptTime_ = acceptTime;
  tinfo_ = tinfo;
  if (!isSSL) {
    LOG(INFO) << "HTTP Connection : " << clientAddr.describe();
    tinfo.secure = false;
    tinfo.acceptTime = acceptTime;
    folly::AsyncSocket::UniquePtr sock(makeNewAsyncSocket(base_, fd, &clientAddr));
    tinfo.tfoSucceded = sock->getTFOSucceded();
    plaintextConnectionReady(std::move(sock), clientAddr, tinfo);
  } else {
    LOG(INFO) << "HTTPS Connection : " << clientAddr.describe();
    Acceptor::processEstablishedConnection(fd, clientAddr, acceptTime, tinfo);
  }
}
void HTTPServerAcceptor::sendConnectResponse(HTTPTransaction& txn,
                                             const std::string& serverName,
                                             folly::SocketAddress& addr) {
  std::string response = "HTTP/1.1 200 Connection established\r\n\r\n";
  const folly::AsyncTransport* transport = txn.getTransport().getUnderlyingTransport();
  const folly::AsyncSocket* asyncSock =
      transport->getUnderlyingTransport<folly::AsyncSocket>();
  WSABUF buffer = {};
  buffer.len = response.length();
  buffer.buf = (CHAR*)response.c_str();
  SOCKET socket = asyncSock->getNetworkSocket().data;
  DWORD bytesSent = 0;
  if (WSASend(socket, &buffer, 1, &bytesSent, 0, NULL, NULL) != 0) {
    LOG(ERROR) << "Error while sending Connection Established response for "
               << "Connect Request : " << std::to_string(WSAGetLastError());
    return;
  } else {
    LOG(INFO) << "Sent Connection Established Response : " << std::to_string(bytesSent);
  }
  isSSL = true;
  Acceptor::processEstablishedConnection(
      asyncSock->getNetworkSocket().toFd(), addr, acceptTime_, tinfo_);
}
I am converting the HTTP CONNECT request into an HTTPS connection in the HTTPServerAcceptor::newHandler() function by sending the "HTTP/1.1 200 Connection established\r\n\r\n" message in response to the CONNECT request and setting isSSL to true (isSSL is false by default).
Here two ProxyHandler objects are created:
- A ProxyHandler object for the CONNECT request, in which I perform no operation, as I have already sent the response using WSASend(). Is there any way to stop creating this object?
- A ProxyHandler object with the HTTPS connection/H2 codec, over which the request/response is sent.
I have made these changes so that it works with HTTP CONNECT requests, but a downside is that I can't delete the ProxyHandler object in the ProxyHandler::checkForShutdown() function, as doing so causes memory access violation errors.
As a result, the memory consumed by the application increases with every request it processes.
I also don't know how to implement proxygen::SessionPool or where to implement it.
Can you give me an example of proxygen::SessionPool and how to implement it in my code?
You need a persistent upstream session in order for subsequent requests to use the same upstream connection. One handler maps to one downstream request, and for each handler you are calling the connector to get a new upstream session.
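As an illustration only, here is a minimal sketch of that idea against the ProxyHandler code above; the cached upstreamSession_ member and the reuse check are assumptions, not the sample's actual structure:

// Hedged sketch: cache the upstream session instead of reconnecting per request.
// `upstreamSession_` is a hypothetical member; error/lifetime handling omitted.
void ProxyHandler::connectSuccess(proxygen::HTTPUpstreamSession* session) {
  upstreamSession_ = session;  // keep it around for later requests
  sendRequestUpstream();
}

void ProxyHandler::sendRequestUpstream() {
  if (upstreamSession_ != nullptr && upstreamSession_->isReusable()) {
    // HTTP/2: open a new stream on the existing connection.
    txn_ = upstreamSession_->newTransaction(this);
    txn_->sendHeaders(*request_);
    return;
  }
  // No reusable session yet: connect first, exactly as in onRequest() above,
  // then call this method again from connectSuccess().
}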
ProxyHandler object for CONNECT Request and I am performing no operation in it, as I have sent the response already using WSASend(), is there anyway to stop creating this object?
You really should not be doing this. You should be using proxygen's functionality for writing responses (i.e. leveraging the HTTPTransaction), not extracting the raw socket from the downstream session and using WSASend.
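For example, a hedged sketch of replying to the CONNECT through the transaction rather than the raw socket, using the standard HTTPMessage/HTTPTransaction flow (here txn is the downstream transaction from newHandler):

// Hedged sketch: answer the CONNECT request via proxygen instead of WSASend,
// so the session's codec and state machine stay consistent.
proxygen::HTTPMessage resp;
resp.setStatusCode(200);
resp.setStatusMessage("Connection established");
txn.sendHeaders(resp);  // proxygen serializes and writes it on the transport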
Where should I implement SessionPooling to store upstream sessions in Proxygen?
Session pooling is implemented here:
https://github.com/facebook/proxygen/tree/main/proxygen/lib/http/connpool
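As a rough sketch of how the pool is used (hedged: the method names below, putSession and getTransaction, are from that directory's API; verify the exact signatures and constructor parameters against SessionPool.h):

#include <proxygen/lib/http/connpool/SessionPool.h>

// Hedged sketch of the connpool API: donate sessions with putSession(),
// and ask for a pooled stream with getTransaction().
class UpstreamPool {
 public:
  // Called from HTTPConnector::connectSuccess with the new upstream session.
  void add(proxygen::HTTPUpstreamSession* session) {
    pool_.putSession(session);
  }

  // Returns a transaction on a pooled session, or nullptr if none is
  // reusable (in which case connect first, then add() the new session).
  proxygen::HTTPTransaction* tryGetTransaction(
      proxygen::HTTPTransaction::Handler* handler) {
    return pool_.getTransaction(handler);
  }

 private:
  proxygen::SessionPool pool_;  // constructor takes optional limits/timeouts
};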
I have tried to implement SessionPooling using SessionPool class, and I have some doubts regarding it.
Does the Session will be maintained even if no active transaction is present for the idleTimeout set?
Or the session gets destroyed if there is no active transactions?
I want the session to be sustained till the timeout ends but when I implemented it and tested it, the session is getting destroyed when there is no active transactions in the session.
Am I doing it wrong, or it works like that and is there any work around to make it work like I wish?
The session will stay active until one of the following:
- You initiate closure using one of the following APIs: drain, notifyPendingShutdown, closeWhenIdle, dropConnection.
- The peer initiates closure by sending an HTTP GOAWAY, a TLS shutdown, and/or a TCP FIN/RST.
- The idle timeout expires and all transactions have completed.
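For reference, a hedged sketch of how those closure APIs look on a session pointer (verify the exact declarations in HTTPSessionBase.h):

#include <proxygen/lib/http/session/HTTPSessionBase.h>

// Hedged sketch of the closure APIs named above.
void shutdownExamples(proxygen::HTTPSessionBase* session, bool graceful) {
  if (graceful) {
    session->notifyPendingShutdown();  // advertise shutdown (e.g. an H2 GOAWAY)
    session->closeWhenIdle();          // then close once all streams complete
    // session->drain() combines the two calls above.
  } else {
    session->dropConnection();         // immediate, hard close
  }
}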
Using the example given in Proxygen, I have made some changes so it works the way I want.
The flow of control is as follows:
- The HTTPServer is listening for connections on port X.
- When a connection is accepted, it is processed in HTTPServerAcceptor.
- HTTPServerAcceptor establishes the downstream connection.
- While creating a downstream session, it asks for a RequestHandler.
- HTTPServerAcceptor creates a RequestHandler and hands it to the downstream session.
- The request is then sent to the RequestHandler.
- The RequestHandler establishes an upstream connection and creates an upstream session.
In this flow, I implemented session pooling in HTTPServerAcceptor and pass that session pool's pointer to the RequestHandler.
The created upstream session is stored in that SessionPool.
Is this approach correct?
At the end, the RequestHandler is deleted from the heap.
While deleting the RequestHandler, I sometimes get a memory access violation error because the address being deleted is NULL, and when I checked the logs there were some circular dependency errors. I think this may be causing the memory access violation.
How do I fix the circular dependency error, and what is causing it?
Note: the proxy server is running HTTP/2 over TLS 1.3.
I haven't thought about what component should "own" the session pool. In our production reverse proxy, there are global data structures, one for each upstream server we want a pool for, that contain the session pool.
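A hedged illustration of that arrangement (all names hypothetical): a map from upstream authority to its SessionPool, owned per worker EventBase since sessions must stay on the thread that created them; SessionPool's constructor defaults are assumed here.

#include <map>
#include <memory>
#include <string>
#include <proxygen/lib/http/connpool/SessionPool.h>

// Hedged sketch of pool ownership: one SessionPool per upstream server,
// kept alive independently of any single RequestHandler. Keep one instance
// per worker thread/EventBase.
class PoolManager {
 public:
  proxygen::SessionPool& poolFor(const std::string& hostAndPort) {
    auto& pool = pools_[hostAndPort];
    if (!pool) {
      pool = std::make_unique<proxygen::SessionPool>();
    }
    return *pool;
  }

 private:
  std::map<std::string, std::unique_ptr<proxygen::SessionPool>> pools_;
};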
I'm not sure what you mean by circular dependency errors -- possibly in the HTTP/2 priority tree?
Can you paste the logs?
Logs Snippet
I20230712 01:06:38.414695 4756 HTTPUpstreamSession.cpp:50] isClosing: proto=http/2, local=10.0.3.15:49805, 35.84.120.63:443=upstream, sock_->good()=1, draining_=0, readsShutdown()=0, writesShutdown()=0, writesDraining_=0, resetAfterDrainingWrites_=0
I20230712 01:06:38.414695 4692 AsyncSSLSocket.cpp:1511] AsyncSSLSocket::performRead() this=0000027F38ED49C0, buf=0000027F38F38B70, buflen=4000
I20230712 01:06:38.414695 4756 HTTP2Codec.cpp:1683] upstream generating 5 settings
I20230712 01:06:38.414695 4692 AsyncSocket.cpp:2989] this=0000027F38ED49C0, AsyncSocket::handleRead() got 9 bytes
I20230712 01:06:38.414695 4756 FlowControlFilter.cpp:56] Incrementing default conn-level recv window by 0
I20230712 01:06:38.414695 4692 HTTP2Codec.cpp:798] parsing SETTINGS frame for stream=0 length=0
I20230712 01:06:38.414695 4756 HTTPSession.cpp:2257] proto=http/2, local=10.0.3.15:49805, 35.84.120.63:443=upstream scheduling write callback
I20230712 01:06:38.422797 4692 HTTPSession.cpp:1273] proto=http/2, local=10.0.3.15:49804, 35.84.120.63:443=upstream received settings ack
I20230712 01:06:38.422797 4756 HTTPSession.cpp:2903] proto=http/2, local=10.0.3.15:49805, 35.84.120.63:443=upstream: resuming reads
I20230712 01:06:38.430685 4692 AsyncSocket.cpp:2605] AsyncSocket::handleErrMessages() this=0000027F38ED49C0, fd=folly::NetworkSocket(708), state=2
I20230712 01:06:38.430685 4756 HTTP2PriorityQueue.cpp:512] Adding id=1 with parent=0 and weight=16
I20230712 01:06:38.430685 4692 AsyncSSLSocket.cpp:1488] AsyncSSLSocket::handleRead() this=0000027F38ED49C0, fd=folly::NetworkSocket(708), state=2, sslState=5, events=2
I20230712 01:06:38.430685 4756 HTTPSession.cpp:2707] proto=http/2, local=10.0.3.15:49805, 35.84.120.63:443=upstream adding streamID=1, liveTransactions_ was 0
I20230712 01:06:38.430685 4692 AsyncSocket.cpp:3034] AsyncSocket::handleRead() this=0000027F38ED49C0, fd=folly::NetworkSocket(708), state=2
I20230712 01:06:38.430685 4756 ProxyHandler.cpp:194] Forwarding client request to server
I20230712 01:06:38.446571 4692 AsyncSocket.cpp:2957] prepareReadBuffer() buf=0000027F38F38B70, buflen=4000
E20230712 01:06:38.446571 4756 HTTPSession.cpp:1559] Attempted to create circular dependency txn=proto=http/2, local=10.0.3.15:49805, 35.84.120.63:443=upstream
I20230712 01:06:38.446571 4692 AsyncSSLSocket.cpp:1511] AsyncSSLSocket::performRead() this=0000027F38ED49C0, buf=0000027F38F38B70, buflen=4000
I20230712 01:06:38.446571 4756 HTTP2Codec.cpp:1261] generating HEADERS for stream=1
I20230712 01:06:38.462096 4692 AsyncSocket.cpp:2989] this=0000027F38ED49C0, AsyncSocket::handleRead() got 170 bytes
E20230712 01:06:38.462096 4756 HTTP2Codec.cpp:1291] Overwriting circular dependency for stream=1
I20230712 01:06:38.462096 4692 HTTP2Codec.cpp:390] parsing HEADERS frame for stream=1 length=109
I20230712 01:06:38.470036 4756 HTTPSession.cpp:1597] proto=http/2, local=10.0.3.15:49805, 35.84.120.63:443=upstream sending headers, size=358, uncompressedSize=548
I20230712 01:06:38.477783 4692 HPACKDecoderBase.cpp:83] Received table size update, new size=0
I20230712 01:06:38.477783 4756 HTTPTransaction.cpp:1456] resumeIngress request proto=http/2, UA=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36, downstream=127.0.0.1:49796, 127.0.0.1:5555=local, streamID=3
I20230712 01:06:38.493747 4692 HTTPSession.cpp:693] processing new msg streamID=1 proto=http/2, local=10.0.3.15:49804, 35.84.120.63:443=upstream
I20230712 01:06:38.493747 4756 HTTPSession.cpp:1481] proto=http/2, UA=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36, downstream=127.0.0.1:49796, 127.0.0.1:5555=local resuming streamID=3, liveTransactions_ was 1
I20230712 01:06:38.493747 4692 HTTPSession.cpp:842] processing ingress headers complete for proto=http/2, local=10.0.3.15:49804, 35.84.120.63:443=upstream, streamID=1
I20230712 01:06:38.493747 4756 HTTPTransaction.cpp:1486] Processing deferred ingress callback of type body proto=http/2, UA=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36, downstream=127.0.0.1:49796, 127.0.0.1:5555=local, streamID=3
I20230712 01:06:38.501487 4692 ProxyHandler.cpp:212] Forwarding 404 response to client
I20230712 01:06:38.501487 4756 HTTPSessionBase.cpp:141] proto=http/2, UA=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36, downstream=127.0.0.1:49796, 127.0.0.1:5555=local Dequeued 701 bytes of ingress. Ingress buffer uses 6026 of 65536 bytes.
I20230712 01:06:38.532986 4692 HTTP2Codec.cpp:1261] generating HEADERS for stream=1
I20230712 01:06:38.532986 4756 FlowControlFilter.cpp:73] processed 701 toAck_=1403 bytes, will ack=0
I20230712 01:06:38.532986 4692 HPACKEncoderBase.cpp:19] Encoding table size update size=65536
I20230712 01:06:38.540827 4756 ProxyHandler.cpp:134] Forwarding 701 body bytes to server
Complete Logs File
ServerLogs.txt
After this, when the RequestHandler is deleted once the request has completed, it throws a memory access violation error.
Error.png
How do I avoid the circular dependency, and what is the cause of it?
You can see the line numbers in the log file, and peeking at the code we see:
if (pri.streamDependency == txn->getID()) {
  LOG(ERROR) << "Attempted to create circular dependency txn=" << *this;
} else {
So the client is sending an HTTP/2 message with a priority field marking the stream as dependent on itself. Don't do that. That said, I don't think this error is fatal, and when we forward the H2 message upstream we overwrite the priority field so that it is not a circular dependency.
From the full logs, though, it doesn't appear that the program crashed there. Do you have a log snippet just from a request that crashes the sample?
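For illustration, a hedged sketch of scrubbing the client's priority before forwarding; verify the exact getters/setters in HTTPMessage.h, where the priority tuple is (streamDependency, exclusive, weight) with the weight stored as the on-the-wire value (weight minus one):

// Hedged sketch: drop any client-supplied H2 priority on the proxied request
// so a self-dependency is never forwarded. Depending on stream 0 (the root)
// with default weight 16 (encoded as 15) matches the protocol default.
if (request_->getHTTP2Priority()) {
  request_->setHTTP2Priority(std::make_tuple(0, false, 15));
}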
Logs Snippet
I20230712 01:06:41.591816 4756 AsyncSocket.cpp:2605] AsyncSocket::handleErrMessages() this=0000027F38F00C50, fd=folly::NetworkSocket(768), state=2
I20230712 01:06:41.591816 4756 AsyncSocket.cpp:3034] AsyncSocket::handleRead() this=0000027F38F00C50, fd=folly::NetworkSocket(768), state=2
I20230712 01:06:41.591816 4756 AsyncSocket.cpp:2957] prepareReadBuffer() buf=0000027F38E9AA90, buflen=4000
I20230712 01:06:41.591816 4756 AsyncSocket.cpp:2481] AsyncSocket::performReadInternal() this=0000027F38F00C50, iovs=000000DE872FEE98, num=1
I20230712 01:06:41.591816 4756 AsyncSocket.cpp:2989] this=0000027F38F00C50, AsyncSocket::handleRead() got 89 bytes
I20230712 01:06:41.591816 4756 HTTPSession.cpp:497] read completed on proto=http/2, UA=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36, downstream=127.0.0.1:49814, 127.0.0.1:5555=local, bytes=13
I20230712 01:06:41.591816 4756 InfoCallback.cpp:43] Read Bytes : DOWNSTREAM : 13, local:127.0.0.1:5555, peer:127.0.0.1:49814
I20230712 01:06:41.591816 4756 HTTP2Codec.cpp:774] parsing RST_STREAM frame for stream=1 length=4
I20230712 01:06:41.591816 4756 HTTPSession.cpp:1097] stream abort on proto=http/2, UA=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36, downstream=127.0.0.1:49814, 127.0.0.1:5555=local, streamID=1, code=CANCEL
I20230712 01:06:41.599727 4756 HTTPTransaction.cpp:558] Marking egress complete on proto=http/2, UA=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36, downstream=127.0.0.1:49814, 127.0.0.1:5555=local, streamID=1
I20230712 01:06:41.599727 4756 HTTPTransaction.cpp:551] Marking ingress complete on proto=http/2, UA=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36, downstream=127.0.0.1:49814, 127.0.0.1:5555=local, streamID=1
E20230712 01:06:41.599727 4756 ProxyHandler.cpp:260] Client error: StreamAbort
E20230712 01:06:48.066387 4756 ProxyHandler.cpp:263] Aborting server txn: proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream, streamID=1
I20230712 01:06:48.066387 4692 AsyncSocket.cpp:2605] AsyncSocket::handleErrMessages() this=0000027F38EDF040, fd=folly::NetworkSocket(804), state=2
I20230712 01:06:48.066387 4756 HTTPTransaction.cpp:551] Marking ingress complete on proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream, streamID=1
I20230712 01:06:48.074054 4692 AsyncSSLSocket.cpp:1488] AsyncSSLSocket::handleRead() this=0000027F38EDF040, fd=folly::NetworkSocket(804), state=2, sslState=4, events=2
I20230712 01:06:48.074054 4756 HTTPTransaction.cpp:558] Marking egress complete on proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream, streamID=1
I20230712 01:06:48.074054 4692 AsyncSSLSocket.cpp:1363] AsyncSSLSocket::handleConnect() this=0000027F38EDF040, fd=folly::NetworkSocket(804), state=2, sslState=4, events=2
I20230712 01:06:48.078928 4756 HTTPTransaction.cpp:1389] aborting transaction proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream, streamID=1
I20230712 01:06:48.079878 4692 AsyncSocket.cpp:3734] AsyncSocket::updateEventRegistration(this=0000027F38EDF040, fd=folly::NetworkSocket(804), evb=0000027F370755F0, state=2, events=0
I20230712 01:06:48.079878 4756 HTTPSession.cpp:1750] proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream sending abort for streamID=1
I20230712 01:06:48.079878 4692 AsyncSSLSocket.cpp:1411] AsyncSSLSocket 0000027F38EDF040: fd folly::NetworkSocket(804) successfully connected; state=2, sslState=5, events=0
I20230712 01:06:48.079878 4756 HTTP2Codec.cpp:1526] sending RST_STREAM for stream=1 with code=CANCEL
I20230712 01:06:48.079878 4692 HTTP2Codec.cpp:49] creating upstream HTTP/2 codec
I20230712 01:06:48.087246 4756 HTTPSession.cpp:2257] proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream scheduling write callback
I20230712 01:06:48.087246 4692 HTTP2Codec.cpp:1116] generating connection preface
I20230712 01:06:48.093806 4756 HTTPTransaction.cpp:150] destroying transaction proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream, streamID=1
I20230712 01:06:48.093806 4692 ProxyHandler.cpp:187] Established proto=http/2, local=10.0.3.15:49819, 18.65.39.64:443=upstream : brave-core-ext.s3.brave.com
I20230712 01:06:48.093806 4756 ProxyHandler.cpp:232] Detaching Server Transaction
I20230712 01:06:48.093806 4756 ProxyHandler.cpp:58] Time taken to process / request is 10005682us (10005.7 ms)
I20230712 01:06:48.093806 4692 HTTPUpstreamSession.cpp:50] isClosing: proto=http/2, local=10.0.3.15:49819, 18.65.39.64:443=upstream, sock_->good()=1, draining_=0, readsShutdown()=0, writesShutdown()=0, writesDraining_=0, resetAfterDrainingWrites_=0
I20230712 01:06:48.093806 4756 ProxyHandler.cpp:59] Deleting ProxyHandler
I20230712 01:06:48.093806 4692 HTTPUpstreamSession.cpp:50] isClosing: proto=http/2, local=10.0.3.15:49819, 18.65.39.64:443=upstream, sock_->good()=1, draining_=0, readsShutdown()=0, writesShutdown()=0, writesDraining_=0, resetAfterDrainingWrites_=0
I20230712 01:06:48.101152 4756 HTTPSession.cpp:1896] proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream removing streamID=1, liveTransactions was 1
I20230712 01:06:48.093806 4692 HTTPUpstreamSession.cpp:28] isReusable: proto=http/2, local=10.0.3.15:49819, 18.65.39.64:443=upstream, liveTransactions_=0, isClosing()=0, sock_->connecting()=0, codec_->isReusable()=1, codec_->isBusy()=0, numActiveWrites_=0, writeTimeout_.isScheduled()=0, ingressError_=0, hasMoreWrites()=1, codec_->supportsParallelRequests()=1
I20230712 01:06:48.101152 4756 HTTP2PriorityQueue.h:200] scheduling expiration for node=1
I20230712 01:06:48.101152 4692 HTTPUpstreamSession.cpp:50] isClosing: proto=http/2, local=10.0.3.15:49819, 18.65.39.64:443=upstream, sock_->good()=1, draining_=0, readsShutdown()=0, writesShutdown()=0, writesDraining_=0, resetAfterDrainingWrites_=0
I20230712 01:06:48.101152 4756 HTTPUpstreamSession.cpp:50] isClosing: proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream, sock_->good()=1, draining_=0, readsShutdown()=0, writesShutdown()=0, writesDraining_=0, resetAfterDrainingWrites_=0
I20230712 01:06:48.101152 4692 HTTPUpstreamSession.cpp:50] isClosing: proto=http/2, local=10.0.3.15:49819, 18.65.39.64:443=upstream, sock_->good()=1, draining_=0, readsShutdown()=0, writesShutdown()=0, writesDraining_=0, resetAfterDrainingWrites_=0
I20230712 01:06:48.101152 4756 HTTPUpstreamSession.cpp:50] isClosing: proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream, sock_->good()=1, draining_=0, readsShutdown()=0, writesShutdown()=0, writesDraining_=0, resetAfterDrainingWrites_=0
I20230712 01:06:48.108247 4692 HTTPUpstreamSession.cpp:50] isClosing: proto=http/2, local=10.0.3.15:49819, 18.65.39.64:443=upstream, sock_->good()=1, draining_=0, readsShutdown()=0, writesShutdown()=0, writesDraining_=0, resetAfterDrainingWrites_=0
I20230712 01:06:48.101152 4756 HTTPUpstreamSession.cpp:28] isReusable: proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream, liveTransactions_=0, isClosing()=0, sock_->connecting()=0, codec_->isReusable()=1, codec_->isBusy()=0, numActiveWrites_=0, writeTimeout_.isScheduled()=0, ingressError_=0, hasMoreWrites()=1, codec_->supportsParallelRequests()=1
I20230712 01:06:48.108247 4692 HTTPUpstreamSession.cpp:28] isReusable: proto=http/2, local=10.0.3.15:49819, 18.65.39.64:443=upstream, liveTransactions_=0, isClosing()=0, sock_->connecting()=0, codec_->isReusable()=1, codec_->isBusy()=0, numActiveWrites_=0, writeTimeout_.isScheduled()=0, ingressError_=0, hasMoreWrites()=1, codec_->supportsParallelRequests()=1
I20230712 01:06:48.108247 4756 HTTPUpstreamSession.cpp:50] isClosing: proto=http/2, local=10.0.3.15:49815, 142.250.205.228:443=upstream, sock_->good()=1, draining_=0, readsShutdown()=0, writesShutdown()=0, writesDraining_=0, resetAfterDrainingWrites_=0
I20230712 01:06:48.108247 4692 HTTPUpstreamSession.cpp:50] isClosing: proto=http/2, local=10.0.3.15:49819, 18.65.39.64:443=upstream, sock_->good()=1, draining_=0, readsShutdown()=0, writesShutdown()=0, writesDraining_=0, resetAfterDrainingWrites_=0
I20230712 01:06:48.108247 4692 HTTP2Codec.cpp:1683] upstream generating 5 settings
Error image showing that this statement had already been executed.
Error.png
This is the set of logs just before the program was interrupted by the error while debugging.
While checking the logs and the debugger, I found that the ProxyHandler object had already been deleted in thread 4756.
It deleted the handler because an RST message was sent by the peer, as is clear from the logs.
But checkForShutdown() is called again for the same RST message (I found this while checking the call stack), and it deletes the handler again, even though it has already been deallocated from the heap, causing a double deletion of the same address and crashing.
I don't know how this is possible, since one handler is used per request, yet the handler is being deallocated twice, resulting in a memory access violation error.
How can I resolve this issue?
Looks like a bug in the sample's handling of client errors. I suspect the problem is that checkForShutdown => delete this is called twice in this pattern:
ProxyHandler::onError
  clientTerminated_ = true;
  HTTPTransaction::sendAbort -> ProxyHandler::detachServerTransaction
    txn_ = nullptr
    checkForShutdown()   // (1) deletes the handler
  checkForShutdown()     // (2) runs on the already-deleted handler
There are several solutions to this. One is to have ProxyHandler inherit from folly::DelayedDestruction and create a DestructorGuard in any stack frame that uses member variables after a state-changing call.
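A minimal sketch of that pattern, assuming the sample's existing members (txn_, clientTerminated_) and eliding ProxyHandler's other base classes:

#include <folly/io/async/DelayedDestruction.h>

// Hedged sketch: guard the handler so a nested checkForShutdown() cannot
// destroy it while an outer stack frame still touches member variables.
class ProxyHandler : /* existing bases, */ public folly::DelayedDestruction {
 public:
  void onError(proxygen::ProxygenError err) noexcept {
    // The guard keeps `this` alive until this frame returns, even if
    // sendAbort() re-enters checkForShutdown() via detachServerTransaction().
    DestructorGuard dg(this);
    clientTerminated_ = true;
    if (txn_) {
      txn_->sendAbort();
    }
    checkForShutdown();
  }

  bool checkForShutdown() {
    if (clientTerminated_ && !txn_) {  // simplified shutdown condition
      destroy();  // instead of `delete this`; deferred while guards exist
      return true;
    }
    return false;
  }

 protected:
  // Non-public destructor: a DelayedDestruction object is only deleted
  // through destroy(), once no DestructorGuards remain on the stack.
  ~ProxyHandler() override = default;
};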
PR forthcoming
Let me try implementing ProxyHandler inheriting from folly::DelayedDestruction and check for any issues.