jbossdemocentral/fuse-financial-cicd

3scale provisioning fails due to missing PVs


Hi,

I ran the 3scale PV creation in part A, then moved on to part B, and I am left with a lot of faulty pods.

$ oc create -f support/amptemplates/pv.yml
persistentvolume "pv01" created
persistentvolume "pv02" created
persistentvolume "pv03" created
persistentvolume "pv04" created

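For reference, each entry in support/amptemplates/pv.yml is a plain PersistentVolume object (the create output above shows four of them, pv01 through pv04). A single entry looks roughly like the sketch below; the capacity, access modes, and NFS backing here are assumptions for illustration, not the repo's actual values:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv01
  spec:
    capacity:
      storage: 10Gi                        # assumed size
    accessModes:
      - ReadWriteOnce
      - ReadWriteMany
    persistentVolumeReclaimPolicy: Recycle
    nfs:                                   # hypothetical backing store; the template may use another type
      server: nfs.example.com
      path: /exports/pv01
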
$ oc status -v

Errors:
  * pod/backend-cron-1-jt4vh is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs backend-cron-1-jt4vh -c backend-cron
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/backend-redis-1-5xwzx is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs backend-redis-1-5xwzx -c backend-redis
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/backend-worker-1-n3f22 is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs backend-worker-1-n3f22 -c backend-worker
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/system-mysql-1-g88h5 is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs system-mysql-1-g88h5 -c system-mysql
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/system-redis-1-20gzb is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs system-redis-1-20gzb -c system-redis
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/system-sidekiq-1-x3fq9 is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs system-sidekiq-1-x3fq9 -c system-sidekiq
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    
  * pod/system-sphinx-1-wskr8 is crash-looping

    The container is starting and exiting repeatedly. This usually means the container is unable
    to start, misconfigured, or limited by security restrictions. Check the container logs with
    
      oc logs system-sphinx-1-wskr8 -c system-sphinx
    
    Current security policy prevents your containers from being run as the root user. Some images
    may fail expecting to be able to change ownership or permissions on directories. Your admin
    can grant you access to run containers that need to run as the root user with this command:
    
      oadm policy add-scc-to-user anyuid -n threescaleonprem -z default
    

Warnings:
  * pod/apicast-production-1-dfs9g has restarted within the last 10 minutes
  * pod/system-app-1-hook-pre has restarted within the last 10 minutes
  * container "system-resque" in pod/system-resque-1-g7tjn has restarted within the last 10 minutes
  * container "system-scheduler" in pod/system-resque-1-g7tjn has restarted within the last 10 minutes

Info:
  * pod/apicast-production-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/apicast-production-1-deploy --liveness ...
  * pod/backend-redis-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/backend-redis-1-deploy --liveness ...
  * pod/system-app-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/system-app-1-deploy --liveness ...
  * pod/system-app-1-hook-pre has no liveness probe to verify pods are still running.
    try: oc set probe pod/system-app-1-hook-pre --liveness ...
  * pod/system-mysql-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/system-mysql-1-deploy --liveness ...
  * pod/system-redis-1-deploy has no liveness probe to verify pods are still running.
    try: oc set probe pod/system-redis-1-deploy --liveness ...
  * dc/backend-cron has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/backend-cron --readiness ...
  * dc/backend-cron has no liveness probe to verify pods are still running.
    try: oc set probe dc/backend-cron --liveness ...
  * dc/backend-worker has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/backend-worker --readiness ...
  * dc/backend-worker has no liveness probe to verify pods are still running.
    try: oc set probe dc/backend-worker --liveness ...
  * dc/system-resque has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/system-resque --readiness ...
  * dc/system-resque has no liveness probe to verify pods are still running.
    try: oc set probe dc/system-resque --liveness ...
  * dc/system-sidekiq has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/system-sidekiq --readiness ...
  * dc/system-sidekiq has no liveness probe to verify pods are still running.
    try: oc set probe dc/system-sidekiq --liveness ...
  * dc/system-sphinx has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/system-sphinx --readiness ...

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

Sorry, I just noticed that the cluster I'm working with doesn't actually have working persistent volumes.
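
For anyone hitting the same symptoms: the crash-looping pods are the data stores (MySQL, Redis) and the components that depend on them, so it is worth confirming whether the claims created by the AMP template ever bound to the volumes before following the anyuid suggestion in the output. A quick check (standard oc commands; the project name and pod name are taken from the output above):

  $ oc get pv
  $ oc get pvc -n threescaleonprem
  $ oc logs backend-redis-1-5xwzx -c backend-redis -n threescaleonprem
  $ oc get events -n threescaleonprem

If the PVCs are stuck in Pending, or the pod logs show the containers unable to use their data directories, the problem is the underlying storage rather than the images themselves, which matches what I found on this cluster.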