anthonygauthier/jmeter-elasticsearch-backend-listener

es.timout.ms doesn't seem to work

swiaam opened this issue · 6 comments

Hi @delirius325,

I'm using the latest version of your awesome plugin, but it seems that "es.timout.ms" doesn't work as expected:

On the one hand, the requests to Elasticsearch seem to time out at arbitrary times, regardless of the configured timeout or how high it is.

For example, this log shows that requests were aborted after the number of seconds in the 5th column, even though es.timout.ms was set to 5000 or 10000:

$request_uri|$upstream_status|$status|$upstream_response_time|$request_time
/jmeter-bl-v2-20200403/_bulk - 499 - 1.562 13935.7
/jmeter-bl-v2-20200403/_bulk - 499 - 1.741 19293
/jmeter-bl-v2-20200403/_bulk - 400 - 0.267 23402.4
/jmeter-bl-v2-20200403/_bulk - 499 - 1.426 11307.9
/jmeter-bl-v2-20200403/_bulk - 499 - 1.685 14761.6
/jmeter-bl-v2-20200403/_bulk - 499 - 1.447 12907.5
/jmeter-bl-v2-20200403/_bulk - 499 - 1.642 18483
/jmeter-bl-v2-20200403/_bulk - 499 - 1.363 10910.3
/jmeter-bl-v2-20200403/_bulk - 499 - 1.589 16667.1
/jmeter-bl-v2-20200403/_bulk - 499 - 1.513 14099.5
/jmeter-bl-v2-20200403/_bulk - 499 - 1.629 18034.6
/jmeter-bl-v2-20200403/_bulk - 499 - 3.018 46422.3

(this is a snippet from NGINX log which is proxying requests to Elasticsearch)
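For context, this is roughly what my Backend Listener configuration amounts to, sketched through the JMeter API rather than the GUI (the connection parameters are omitted; the class name and the parameter spelling are the ones the plugin uses):

import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.visualizers.backend.BackendListener;

public class ListenerConfigSketch {
    static BackendListener buildListener() {
        // Only the timeout parameter is shown; host, port, index, etc. are left out.
        Arguments args = new Arguments();
        args.addArgument("es.timout.ms", "10000"); // also tried 5000

        BackendListener listener = new BackendListener();
        // Plugin class name, matching the logger names in the errors further down.
        listener.setClassname(
                "io.github.delirius325.jmeter.backendlistener.elasticsearch.ElasticsearchBackendClient");
        listener.setArguments(args);
        return listener;
    }
}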

In addition, whenever that happens, I get these errors in the log:

ERROR i.g.d.j.b.e.ElasticsearchBackendClient: Error with node: [host=http://elastic:9200]
ERROR i.g.d.j.b.e.ElasticSearchMetricSender: Exceptionjava.net.SocketTimeoutException: 200 milliseconds timeout on connection http-outgoing-1 [ACTIVE]
ERROR i.g.d.j.b.e.ElasticSearchMetricSender: Elastic Search Request End Point: /jmeter-bl-v2-20200403/_bulk
ERROR i.g.d.j.b.e.ElasticSearchMetricSender: ElasticSearch Backend Listener was unable to perform request to the ElasticSearch engine. Check your JMeter console for more info.

(notice the "200 milliseconds timeout", even though what's configured is much higher)

The only way I could get it to work okay-ish under high load (5 remote servers, 1600 threads per server) was to lower the "Async Queue size" of the Backend Listener in the Test Plan all the way down to 75. The trade-off is that it kills my TPS: without the Backend Listener I can reach 17K TPS, while with it enabled and the Async Queue size set to 75 I can only reach ~3.5K TPS.
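In the same style as the sketch above, this is the one change that helped (the 5000 default mentioned in the comment is JMeter's standard Backend Listener queue size):

import org.apache.jmeter.visualizers.backend.BackendListener;

public class QueueSizeSketch {
    static void shrinkQueue(BackendListener listener) {
        // "Async Queue size" field of the Backend Listener (JMeter defaults to 5000).
        // 75 keeps the run stable under load but caps my throughput at ~3.5K TPS.
        listener.setQueueSize("75");
    }
}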

Am I missing something or is this a bug?

Any help would be appreciated.

Cheers.

Hi @delirius325, has this been resolved? I am also encountering this problem and am not sure how to fix it.
Please advise.

Hi everyone, I will look into this and throw out a fix very soon. I've been busy with a lot of other things lately (both work-related and personal stuff) - but I'm back!

@swiaam

So at first glance, the issue seems to come from your ElasticSearch engine not being available.

https://github.com/delirius325/jmeter-elasticsearch-backend-listener/blob/0aa6b59425f2bca1254404164f1640c961828bf3/src/main/java/io/github/delirius325/jmeter/backendlistener/elasticsearch/ElasticsearchBackendClient.java#L116-L120

As you can see, this error is the first one your JMeter instance hits, so I believe that's what you should look into first. If you're able to supply your ES engine's log, that'd be nice!
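For anyone not following the link, the linked lines are essentially the client's failure listener; paraphrased as a sketch (not the exact source), they do something like this:

import org.elasticsearch.client.Node;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;

public class FailureListenerSketch {
    static RestClientBuilder withFailureLogging(RestClientBuilder builder) {
        return builder.setFailureListener(new RestClient.FailureListener() {
            @Override
            public void onFailure(Node node) {
                // Produces the "Error with node: [host=...]" line seen in the JMeter log.
                // It only reports that the client marked the node as failed; the cause
                // (timeout, refused connection, ...) shows up in the following exception log.
                System.err.println("Error with node: " + node.toString());
            }
        });
    }
}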

Also, just to be sure: in the logs the URL shows as http://elastic:9200; you changed it to elastic for privacy purposes, right?

@delirius325

In my setup I have only 1 ES node, fronted by NGINX, so under high load indexing takes time, which can make the node appear unavailable. That's why I wanted to increase the timeout (this approach works well with e.g. Filebeat).

Regarding your second question: indeed, I manually replaced the URL with "elastic:9200" for privacy reasons :)

@swiaam

I have released a SNAPSHOT version of 2.6.11. Please upgrade your JAR manually: remove the currently used version of the plugin from your lib/ext folder and add the SNAPSHOT jar.

Tell me if you're having more success with this 😄 !

https://github.com/delirius325/jmeter-elasticsearch-backend-listener/releases/tag/2.6.11-SNAPSHOT

@swiaam Did you get the chance to test this out?