tiredofit/docker-nginx-php-fpm

NGINX server won't start up

chuckienorton opened this issue · 4 comments

Summary

We have a production server, running in an AWS ECS container, which restarted last week and started giving us the error below:

[STARTING] ** [nginx] [33] Starting nginx 1.23.0
nginx: [emerg] open() "/etc/nginx/nginx.conf.d/site_optimization.conf" failed (2: No such file or directory) in /etc/nginx/sites.available/default.conf:29
This happens at the very beginning of the startup cycle. We have tried restarting the container.

Here is our Dockerfile:

FROM tiredofit/nginx-php-fpm:debian-8.1

# Set Working Directory
WORKDIR /www/html

RUN php-ext enable pdo_sqlite \
    && php-ext enable sqlite3 \
    && php-ext enable zip \
    && php-ext enable sockets

# Install NPM
RUN curl -sL https://deb.nodesource.com/setup_16.x | bash - \
    && apt-get install -y nodejs \
    && npm install -g npm

# Copy Files
COPY --chown=www-data:www-data . /www/html
RUN chmod -R 775 /www/html/storage

# Run Composer Install
RUN composer install \
    --no-interaction \
    --no-plugins \
    --no-scripts \
    --no-dev \
    --prefer-dist

# RUN NPM - removing in case this is an issue on staging. 
RUN npm install --only=production --prefer-offline --no-audit \
    && npm run production

Two follow-up questions:

Is there an immediate fix within the container (i.e., steps to remove the site_optimization reference and restart nginx - which we're having a hard time doing)?

What would be the long-term fix?

Thanks in advance!

PS - here are a few env variables just in case it's helpful...

NGINX_ENABLE_COMPRESSION_GZIP=true
NGINX_LISTEN_PORT=8080
NGINX_WEBROOT=/www/html/public
NODE_ENV=production
PHP_ENABLE_OPENSSL=false
PHP_FPM_POST_INIT_SCRIPT=/www/html/docker/dcc/post-init-script
PORT=8080

Hi, are you using a custom nginx site configuration file (default.conf or the like)? As part of a major change to the base image (tiredofit/nginx), the folder /etc/nginx/nginx.conf.d/ was changed to /etc/snippets, and /etc/nginx/conf.d was phased out in favour of /etc/nginx/sites.available, for more flexibility in automation and for running multiple sites under the same container.
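To illustrate, this is roughly how that include line would change in a custom site configuration - a sketch only, since I'm inferring the include from your startup error (default.conf:29) rather than from your actual file:

# Hypothetical excerpt from a custom site config (e.g. default.conf / application.conf)
# Old base image layout - this path no longer exists and triggers the [emerg] at startup:
#   include /etc/nginx/nginx.conf.d/site_optimization.conf;
# New base image layout:
include /etc/snippets/site_optimization.conf;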

In this new model I would do the following:

  • Update any references to /etc/nginx/nginx.conf.d in your nginx configuration to point at the new snippets location
  • Move your default.conf, or whatever your nginx configuration file is called, to install/etc/nginx/sites.available/
  • Rename your configuration to something other than default.conf, let's say application.conf
  • Add NGINX_SITE_ENABLED=application to your Dockerfile or to your runtime environment variables (see the Dockerfile sketch after this list)
  • Rebuild
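As a minimal Dockerfile sketch of steps 2-4 (the docker/nginx/ source path is just a placeholder for wherever the file lives in your build context):

# Copy the renamed site configuration into the new sites.available location
COPY docker/nginx/application.conf /etc/nginx/sites.available/application.conf
# Enable it by name (the filename without the .conf extension)
ENV NGINX_SITE_ENABLED=application

Setting NGINX_SITE_ENABLED in your ECS task definition instead of the Dockerfile works just as well.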

Apologies for the delay, I was on holiday. Let me know if this gets you back up and running on the more modernized images.

OK, thanks @tiredofit, that is helpful! We fixed it right away by no longer referencing the latest tag, which was a mistake on our part.

Question - since your Docker image relies on so many other Docker images up the chain, is it possible to daisy-chain your documentation? Or at least link within the README to all the other images it relies on, so it's easy to know which environment variables are available and to watch each repo's releases tab for changes. In last week's case we did look at the releases for this repo and one other, and just missed the tiredofit/nginx release. Just a thought, but not necessary, as I know you already have tons to manage.

Thanks!

The base image situation is messy, I agree - when I first started building these I didn't see how tangled a web of base images I would get myself into. Some of what I have is five images deep for some clients, but it's a method to the madness that I've come to rely on and that lets me churn things out quickly. When big changes happen upstream, however - that's when things get fun.

Luckily, for now I don't see anything major changing in the two base images this one relies on, unless a client requests something in the next year.

I do list the base images used for this in the README (https://github.com/tiredofit/docker-nginx-php-fpm#base-images-used), but you are right that it does force you to keep on top of what is happening upstream. A couple of months ago I started posting GitHub Releases instead of just tags, which lets you "watch" the repository for new releases and get notifications when they happen. My team goes through similar pains and gets a wave of emails when I go on a tear and add some new features.

Glad you are up and running again. Apologies for not building some sort of legacy migration steps or warnings into the images for the nginx.conf.d folder, as I did for conf.d.