Larger test data files
When trying to start a test with a ~1 MB test data (CSV) file, I've noticed that Kangal silently rejects the test. An error message is printed to the controller logs, but the POST request still yields a 201 response, and the failure can only be detected externally by examining the response to a subsequent GET to /load-test/:test_name and noticing that it does not include a test name.
Besides flagging that some kind of error message should be returned to the end user, I'd like to start a discussion here: should test data size be limited at all? One could argue that we don't need much test data, since most load testing can be done just by repeating requests. On the other hand, there are very legitimate use cases for large data sets in a load-test scenario, such as cache busting.
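For reference, this is roughly how the failure shows up from the outside (a minimal sketch; the `loadTestName` field and the response shape are my assumptions from poking at the API, not a documented contract):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// checkTestExists probes GET /load-test/:test_name after a POST that
// returned 201. With a too-large data file, the response comes back
// without a test name even though the POST "succeeded".
func checkTestExists(baseURL, testName string) (bool, error) {
	resp, err := http.Get(baseURL + "/load-test/" + testName)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var body map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return false, err
	}
	// "loadTestName" is my guess at the field name; the point is that
	// the name is absent when the ConfigMap creation failed silently.
	name, _ := body["loadTestName"].(string)
	return name != "", nil
}

func main() {
	ok, err := checkTestExists("http://kangal-proxy.local", "loadtest-example")
	if err != nil {
		panic(err)
	}
	fmt.Println("test actually created:", ok)
}
```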
Hey @thiagoarrais, thanks for reporting.
Kangal stores test data in a Kubernetes ConfigMap, and there is a 1 MiB limit on those resources.
> A ConfigMap is not designed to hold large chunks of data. The data stored in a ConfigMap cannot exceed 1 MiB. If you need to store settings that are larger than this limit, you may want to consider mounting a volume or use a separate database or file service.

https://kubernetes.io/docs/concepts/configuration/configmap/
So there is not much to be done unless we change the whole approach and use Volumes instead.
But we could improve the experience a bit and fail fast when the file exceeds the size limit.
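For example, the fail-fast check could be as simple as validating the payload size before creating the ConfigMap (a rough sketch, not the actual controller code; the function name and where it hooks in are illustrative):

```go
package main

import "fmt"

// maxConfigMapBytes is the hard cap Kubernetes enforces on ConfigMap data.
const maxConfigMapBytes = 1 << 20 // 1 MiB

// validateTestData rejects oversized payloads up front, so the proxy can
// answer with a 4xx instead of a 201 followed by a silent controller error.
func validateTestData(data []byte) error {
	if len(data) > maxConfigMapBytes {
		return fmt.Errorf("test data is %d bytes, exceeding the %d-byte ConfigMap limit",
			len(data), maxConfigMapBytes)
	}
	return nil
}

func main() {
	big := make([]byte, 2<<20) // 2 MiB of test data
	if err := validateTestData(big); err != nil {
		fmt.Println("reject before creating the ConfigMap:", err)
	}
}
```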
I see. Would Kangal be willing to accept a PR that moves the data to Volumes, then?
Definitely @thiagoarrais,
If you prefer, you can propose the solution first and then we can discuss it.
I think the main question is: where will those files be stored?
The Proxy already has S3 integration, so maybe we could rely on that.
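As a sketch of what relying on S3 could look like (illustrative only; the bucket name, key layout, and the choice of the aws-sdk-go v1 uploader are assumptions, not how the Proxy's integration is actually wired):

```go
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// uploadTestData streams a CSV of any size to S3, sidestepping the
// ConfigMap limit entirely; the test runner would then fetch it by key.
func uploadTestData(bucket, key, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	sess := session.Must(session.NewSession())
	uploader := s3manager.NewUploader(sess)
	_, err = uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   f,
	})
	return err
}

func main() {
	// Bucket and key layout are made up for the example.
	if err := uploadTestData("kangal-test-data", "loadtest-example/testdata.csv", "testdata.csv"); err != nil {
		log.Fatal(err)
	}
}
```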
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had any activity in the last 21 days. Feel free to re-open in case you would like to follow up.
@diegomarangoni @thiagoarrais Can we reopen this issue? I would like to discuss the possibility of using the S3 integration, wdyt?
Another idea that I'm working on: compress the CSV to a gzipped, base64-encoded file and convert it back using an init container.
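Roughly the encoding half of that idea (a sketch, assuming the encoded string still has to fit within the ConfigMap limit; gzip typically shrinks CSV a lot, though base64 adds about a third back):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"fmt"
	"os"
)

// encodeCSV gzips a CSV file and base64-encodes the result so it can be
// stored as a ConfigMap value; an init container would reverse the two
// steps (base64 -d | gunzip) before the test runner starts.
func encodeCSV(path string) (string, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	if _, err := gz.Write(raw); err != nil {
		return "", err
	}
	if err := gz.Close(); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
}

func main() {
	encoded, err := encodeCSV("testdata.csv")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("encoded payload: %d bytes\n", len(encoded))
}
```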