paketo-buildpacks/java

Override available memory config for memory-calculator

Closed this issue · 10 comments

We use the included Gradle task that comes with the Spring Boot Gradle plugin to build a Java container image. We are very happy with how smoothly the process works, but we have recently hit a roadblock when it comes to setting the available memory for the memory calculator.

We would really like to avoid setting the JVM configuration ourselves, so the only way to give the available memory to the container is usually via the docker run command:

```
docker run -m 4g xxx
```

The catch is: we deploy to AWS Elastic Beanstalk without a multi-container environment, so we unfortunately have no control over the parameters passed to the docker run command. The result is that the memory calculator assumes a default of 1 GB, which does not use the whole 4 GB attached to our instances. Normally we would just downscale to 1 GB and scale horizontally, but we have a user-triggered business process that consumes a lot of memory, and as a result the container is killed. No good.

We would really like to have an environment variable where we could set something like:

```
BPL_TOTAL_AVAILABLE_MEMORY=4g
```

We have not found anything along those lines, though. Is there something we can do? Is this the right repo to ask this question?

Thanks in advance!

@HendrikJanssen First, thanks for reaching out with your question; this is a perfectly fine place to start. I’ll likely transfer the issue to another repo once I get a bit more information and have an idea of how we’ll want to solve it.

The memory calculator uses the /sys/fs/cgroup/memory/memory.limit_in_bytes file system location to find out a container’s “total memory”. Given that this is populated by the kernel’s cgroup functionality, it’s broadly considered to be a dependable standard. Both Docker and Kubernetes ensure that this value is set properly (heck, even Cloud Foundry’s Diego container scheduler does it).

I don’t know much about Beanstalk, but I am surprised that you’re experiencing this, as I’d have expected them to set that value properly as well. If you don’t mind, could you please do a bit of experimentation for me? Could you write a tiny application that prints the contents of that file (System.out.println(Files.readString(Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes"))) should work) and run it a couple of times with different memory configurations?
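For reference, here is that one-liner as a self-contained program; the class name is just an illustration:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Minimal sketch: print the cgroup v1 memory limit as seen from inside the container.
public class CgroupLimitProbe {
    public static void main(String[] args) throws Exception {
        // Populated by the kernel's cgroup functionality; `docker run -m 4g` sets it.
        System.out.println(Files.readString(
                Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes")).trim());
    }
}
```

Built into an image and run with docker run -m 4g, this should print 4294967296 (4 GiB in bytes).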

@nebhale When running applications on AWS Elastic Beanstalk with the Docker platform, it is not possible to configure the maximum memory for the Docker container.
Inside the container, /sys/fs/cgroup/memory/memory.limit_in_bytes exists, but it contains 9223372036854771712, which effectively means “unset” (that value is Long.MAX_VALUE rounded down to the 4 KiB page size, the kernel’s sentinel for “no limit”).
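For illustration only (this is not what the buildpack does), a consumer of that file could distinguish a real limit from the unset sentinel like so; the class name and threshold handling are assumptions:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Illustrative sketch: treat the implausibly large cgroup v1 value as "no limit set".
public class MemoryLimitCheck {
    // Long.MAX_VALUE rounded down to the 4 KiB page size: 9223372036854771712.
    // The kernel reports this when no memory limit was configured for the cgroup.
    private static final long UNSET_SENTINEL = Long.MAX_VALUE - 4095;

    public static void main(String[] args) throws Exception {
        long limit = Long.parseLong(Files.readString(
                Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes")).trim());
        if (limit >= UNSET_SENTINEL) {
            System.out.println("no container memory limit set");
        } else {
            System.out.println("container memory limit: " + limit + " bytes");
        }
    }
}
```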

@jansauer So how is the 4 GB limit described in

"... which does not use the whole 4 GB attached to our instances."

surfaced to applications? What enforces the limit, and what is the Beanstalk-native way for an application to determine what that limit is?

Currently there is no memory limit set on the running Docker container. The 4 GB in our case is the total memory of our virtual machine instance.
My understanding is that with AWS Beanstalk's Docker platform there is no way to configure a memory limit for the Docker image we hand to AWS to run for us. The buildpack uses the configured limit of the container; since we have no configured limit, it falls back to calculating the JVM memory settings for 1 GB of memory. This results in our application not using all of the available memory on the machine, and we have not found any other way to override this.

@nebhale While thinking about what a good PR could look like, I realised that the current implementation focuses on setups where multiple containers run on the same machine and limits are used to partition the available memory between them.
In our setup we run only a single container per virtual machine. Aside from the OS and perhaps a security or logging agent, all remaining memory could be used by the application inside the Docker container.

@jansauer How do you feel about falling back to /proc/meminfo's MemAvailable value? It changes over time and we would only read it once at JVM startup, but might that be accurate enough for your purposes?
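A rough sketch of reading MemAvailable out of /proc/meminfo follows; the class name and the parsing are illustrative assumptions, not the actual buildpack code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Illustrative sketch of the proposed fallback: read MemAvailable from
// /proc/meminfo when no cgroup memory limit has been set.
public class MemAvailableProbe {
    public static void main(String[] args) throws IOException {
        // Lines in /proc/meminfo look like: "MemAvailable:    3867004 kB"
        for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
            if (line.startsWith("MemAvailable:")) {
                long kib = Long.parseLong(line.replaceAll("\\D", ""));
                System.out.printf("MemAvailable: %d kB (~%.2f GiB)%n",
                        kib, kib / (1024.0 * 1024.0));
                return;
            }
        }
        // MemAvailable was added in Linux 3.14; older kernels won't have it.
        System.err.println("MemAvailable not found in /proc/meminfo");
    }
}
```

As noted above, this is a snapshot taken once at startup; the value will drift as other processes on the machine allocate and free memory.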

Sounds like a good solution.

Is this still an issue?
I can't seem to solve the memory issues I experience when deploying a Paketo buildpack to AWS Beanstalk. I just posted it on Stack Overflow. If anybody can help, please let me know; I love to use Paketo, but I'm getting frustrated that I can't seem to get it working. Stack Overflow post: LINK

@edbras This was fixed in libjvm 1.25.0. Any JVM provider buildpack using libjvm 1.25.0+ should have this fix.

This buildpack uses Bellsoft Liberica, which incorporated libjvm 1.25.0 in version 6.0.0. Bellsoft Liberica 6.0.0 is incorporated into version 4.6.0+ of this buildpack.

I'm going to close out this issue because it has been addressed. I'll take a look at your SO post shortly.

Ok @dmikusa-pivotal, thanks for the quick answer. I will check the version of the libjvm being used; I suppose it will be at least 1.25.0, as I am using almost the latest spring-boot-maven-plugin, which uses Paketo to build the image.