Add parameter --log-file
Tell us more about this new feature.
We are using mountpoint-s3 inside Docker containers running as ECS Tasks in a regulated environment where only very minimalistic solutions are possible.
All logs must go to STDOUT or STDERR.
The AWS documentation also recommends this:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-considerations.html
Configure containerized applications to write logs to stdout and stderr.
Decoupling log handling from your application code gives you flexibility to adjust log handling at the infrastructure level. One example of this is to change your logging system. Instead of modifying your services, and building and deploying a new container image, you can adjust the settings.
Another AWS article recommends this:
https://repost.aws/knowledge-center/ecs-eks-troubleshoot-container-logs
Build your container with application log files linked to STDOUT and STDERR. Or, configure your application to directly log to /proc/1/fd/1 (stdout) and /proc/1/fd/2 (stderr). For examples, see nginx and httpd official container images on the Docker Hub website.
The official nginx Docker image also uses this approach:
# forward request and error logs to docker log collector
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log \
Current situation for mountpoint-s3:
https://github.com/awslabs/mountpoint-s3/blob/main/doc/LOGGING.md
Logging to a file
You can redirect logs to a file instead of syslog by providing a destination directory using the -l, --log-directory command-line argument with mount-s3.
mount-s3 <MOUNT_PATH> --log-directory <LOG_DIRECTORY>
The directory will be created if it doesn't exist. A new log file will be created for each execution of mount-s3. Both the directory and log files are created with read/write access for the process owner and read access for the process owner's group. Log files are not automatically rotated or cleaned up.
Section "Logging to a file" should be rather renamed to "Logging to directory".
I need mountpoint-s3 to log all into STDOUT or STDERR.
It is not possible to redirect logging into one specific file.
Why you do not offer that basic option?
WE NEED parameter for logging to STDOUT or STDERR (as mentioned earlier in this post).
Give us just one simple parameter: --log-file
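For illustration, with the requested (hypothetical) --log-file parameter we could simply write:

mount-s3 "${BUCKET_NAME}" /home/s3dir --log-file /proc/1/fd/1
# or equivalently:
mount-s3 "${BUCKET_NAME}" /home/s3dir --log-file /dev/stdout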
Hi, thanks for creating this issue.
In the logging documentation, it says that to log to stdout, you can run mount-s3 in foreground mode (the -f, --foreground command-line argument). If you need to, you can redirect stdout to stderr with mount-s3 -f ... 1>&2.
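For example (bucket name and mount path are placeholders):

mount-s3 my-bucket /mnt/s3 --foreground         # logs go to stdout
mount-s3 my-bucket /mnt/s3 --foreground 1>&2    # logs redirected to stderr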
Please let us know if that doesn't work for your use case.
Hello @muddyfish,
this does not work for our use case.
We have a Dockerfile that calls an entrypoint.sh shell script.
I have tried mount-s3 with --foreground:
sudo -E mount-s3 --foreground --debug --debug-crt --log-metrics --allow-other --allow-delete --allow-overwrite --dir-mode 0777 --file-mode 0666 --region eu-central-1 "${BUCKET_NAME}" /home/s3dir
The logging of mount-s3 works, but after mount-s3 we need to start Java, and that will NOT happen, because mount-s3 stays in the foreground. The main process in the foreground should be Java; that is the purpose of the container. Java uses the files mounted from S3.
java -Xmx4000M -XX:+UseZGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=99 -XX:AdaptiveSizePolicyWeight=90 -XX:+ExitOnOutOfMemoryError -Djava.security.egd=file:/dev/./urandom -jar "./${APP_NAME}.jar"
If I start mount-s3 in the foreground, Java is never started, the health check for my Java app fails, and the container is then killed.
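Our entrypoint.sh is essentially this (options shortened; the full commands are above):

#!/bin/sh
# mount-s3 in foreground mode blocks here and never returns...
sudo -E mount-s3 --foreground --region eu-central-1 "${BUCKET_NAME}" /home/s3dir
# ...so this line is never reached, the health check fails,
# and the container is killed.
java -jar "./${APP_NAME}.jar"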
I need to tell mount-s3 that there is exactly one log file; then I can redirect that log file to stdout.
Right now you only offer running mount-s3 as the main process in the foreground, and that is not our use case.
We can only define a log directory. Many new files are then created inside the Docker container, and that is very tricky to handle. We do not want to keep log files inside the container.
In that case, as a workaround, I can suggest that you redirect stdout and run Mountpoint in the background with &:
mount-s3 -f ... > /dev/stdout &
java ...
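Applied to an entrypoint.sh like yours, that would look roughly like this (options shortened, paths illustrative):

#!/bin/sh
# Foreground mode so logs go to the container's stdout, but backgrounded
# by the shell so the script can continue.
mount-s3 --foreground --region eu-central-1 "${BUCKET_NAME}" /home/s3dir > /dev/stdout &
# Java then runs as the long-lived main process of the container.
java -jar "./${APP_NAME}.jar"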
I hope that's able to help you move forward. I'll leave this open to gather interest in the feature for full support (👍 the main issue).
With the workaround above, a failure to mount will not be detected, as the container will no longer fail when the mount fails. (The main application may fail, of course, if it depends on the content of the mount.)
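If you do need to catch a failed mount, one possible mitigation (a rough sketch, not an officially supported pattern; the path is illustrative) is to wait for the mount to appear before starting the application:

# Wait up to 10 seconds for the FUSE mount to appear before starting Java.
i=0
until grep -qs ' /home/s3dir ' /proc/mounts; do
  i=$((i+1))
  [ "$i" -ge 10 ] && { echo 'mount-s3 failed to mount' 1>&2; exit 1; }
  sleep 1
done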
We'd probably recommend here that Mountpoint should be run inside a separate container, with the mount shared between the Mountpoint container and your application container. This is described in our documentation on Docker under "Running as a service": https://github.com/awslabs/mountpoint-s3/tree/main/docker#running-as-a-service
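As a very rough sketch with plain docker commands (image names, host paths, and flags here are illustrative assumptions, not taken from our docs; see the link above for the supported setup):

# Container 1: Mountpoint needs FUSE access and shares the mount with the host.
docker run -d --name mountpoint \
  --device /dev/fuse --cap-add SYS_ADMIN \
  -v /mnt/s3:/mnt/s3:rshared \
  my-mountpoint-image mount-s3 -f --allow-other my-bucket /mnt/s3
# Container 2: the application consumes the shared mount.
docker run -d --name app -v /mnt/s3:/mnt/s3:rslave my-app-image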
As Simon said, we'll leave this issue open to gather interest in the feature.