Unable to install 3scale
pre-yein opened this issue · 2 comments
Hello, I want to install 3scale on OpenShift.
I followed the guide linked below:
https://github.com/3scale/3scale-operator/blob/master/doc/template-user-guide.md#eval
but the installation did not succeed, and I can't tell what went wrong.
OpenShift version: 4.9
3scale version: 2.11
Any help would be appreciated.
Thank you.
[admin@localhost ~]$ eval $(crc oc-env)
[admin@localhost ~]$ oc new-project 3scale
Now using project "3scale" on server "https://api.crc.testing:6443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app rails-postgresql-example
to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname
[admin@localhost ~]$ oc project
Using project "3scale" on server "https://api.crc.testing:6443".
[admin@localhost ~]$ git clone https://github.com/3scale/3scale-operator.git
Cloning into '3scale-operator'...
remote: Enumerating objects: 17465, done.
remote: Counting objects: 100% (2057/2057), done.
remote: Compressing objects: 100% (748/748), done.
remote: Total 17465 (delta 1242), reused 1870 (delta 1145), pack-reused 15408
Receiving objects: 100% (17465/17465), 5.07 MiB | 10.16 MiB/s, done.
Resolving deltas: 100% (11769/11769), done.
[admin@localhost ~]$ ls
3scale-operator Documents openshift-nfs-server pv0001 pv.yaml
bin Downloads Pictures pv0002 Templates
Desktop Music Public pvc.yaml Videos
[admin@localhost ~]$ rm -rf openshift-nfs-server/
[admin@localhost ~]$ ls
3scale-operator Desktop Downloads Pictures pv0001 pvc.yaml Templates
bin Documents Music Public pv0002 pv.yaml Videos
[admin@localhost ~]$ oc project
Using project "3scale" on server "https://api.crc.testing:6443".
[admin@localhost ~]$ cd 3scale-operator/
[admin@localhost 3scale-operator]$ oc new-app --file pkg/3scale/amp/auto-generated-templates/amp/amp-eval.yml \
> --param WILDCARD_DOMAIN=lvh.me
--> Deploying template "3scale/3scale-api-management-eval" for "pkg/3scale/amp/auto-generated-templates/amp/amp-eval.yml" to project 3scale
3scale API Management
---------
3scale API Management main system (Evaluation)
Login on https://3scale-admin.lvh.me as admin/try62dma
* With parameters:
* AMP_RELEASE=master
* APP_LABEL=3scale-api-management
* TENANT_NAME=3scale
* RWX_STORAGE_CLASS=null
* AMP_BACKEND_IMAGE=quay.io/3scale/apisonator:latest
* AMP_ZYNC_IMAGE=quay.io/3scale/zync:latest
* AMP_APICAST_IMAGE=quay.io/3scale/apicast:latest
* AMP_SYSTEM_IMAGE=quay.io/3scale/porta:latest
* ZYNC_DATABASE_IMAGE=centos/postgresql-10-centos7
* MEMCACHED_IMAGE=memcached:1.5
* IMAGESTREAM_TAG_IMPORT_INSECURE=false
* SYSTEM_DATABASE_IMAGE=centos/mysql-57-centos7
* REDIS_IMAGE=centos/redis-5-centos7
* System MySQL User=mysql
* System MySQL Password=1cn48a6o # generated
* System MySQL Database Name=system
* System MySQL Root password.=3tibco5w # generated
* WILDCARD_DOMAIN=lvh.me
* SYSTEM_BACKEND_USERNAME=3scale_api_user
* SYSTEM_BACKEND_PASSWORD=5yxvo5jj # generated
* SYSTEM_BACKEND_SHARED_SECRET=vgsj30ys # generated
* SYSTEM_APP_SECRET_KEY_BASE=a85d1c2e17dbc8bd32c3eca8600cac65a0dcd628560eeccb06631ed62172637c717b83787886c584d5d78383a815db0320bedcd80ba17281341e04bb0be3c5c4 # generated
* ADMIN_PASSWORD=try62dma # generated
* ADMIN_USERNAME=admin
* ADMIN_EMAIL=
* USER_SESSION_TTL=
* ADMIN_ACCESS_TOKEN=mpmw2tbvmiifk47t # generated
* MASTER_NAME=master
* MASTER_USER=master
* MASTER_PASSWORD=5jr4m71b # generated
* MASTER_ACCESS_TOKEN=tpmyhipu # generated
* RECAPTCHA_PUBLIC_KEY=
* RECAPTCHA_PRIVATE_KEY=
* SYSTEM_REDIS_URL=redis://system-redis:6379/1
* SYSTEM_REDIS_NAMESPACE=
* Zync Database PostgreSQL Connection Password=iCpmeXmFTVW10BL2 # generated
* ZYNC_SECRET_KEY_BASE=CqO5XLD1CKBJ1DHm # generated
* ZYNC_AUTHENTICATION_TOKEN=Wghdt7s6DIqbMRbY # generated
* APICAST_ACCESS_TOKEN=n6fg26er # generated
* APICAST_MANAGEMENT_API=status
* APICAST_OPENSSL_VERIFY=false
* APICAST_RESPONSE_CODES=true
* APICAST_REGISTRY_URL=http://apicast-staging:8090/policies
--> Creating resources ...
imagestream.image.openshift.io "amp-backend" created
imagestream.image.openshift.io "amp-zync" created
imagestream.image.openshift.io "amp-apicast" created
imagestream.image.openshift.io "amp-system" created
imagestream.image.openshift.io "zync-database-postgresql" created
imagestream.image.openshift.io "system-memcached" created
serviceaccount "amp" created
imagestream.image.openshift.io "system-mysql" created
deploymentconfig.apps.openshift.io "backend-redis" created
service "backend-redis" created
configmap "redis-config" created
persistentvolumeclaim "backend-redis-storage" created
imagestream.image.openshift.io "backend-redis" created
secret "backend-redis" created
secret "system-redis" created
deploymentconfig.apps.openshift.io "system-redis" created
persistentvolumeclaim "system-redis-storage" created
service "system-redis" created
imagestream.image.openshift.io "system-redis" created
deploymentconfig.apps.openshift.io "backend-cron" created
deploymentconfig.apps.openshift.io "backend-listener" created
service "backend-listener" created
route.route.openshift.io "backend" created
deploymentconfig.apps.openshift.io "backend-worker" created
configmap "backend-environment" created
secret "backend-internal-api" created
secret "backend-listener" created
deploymentconfig.apps.openshift.io "system-mysql" created
service "system-mysql" created
configmap "mysql-main-conf" created
configmap "mysql-extra-conf" created
persistentvolumeclaim "mysql-storage" created
secret "system-database" created
deploymentconfig.apps.openshift.io "system-memcache" created
persistentvolumeclaim "system-storage" created
service "system-provider" created
service "system-master" created
service "system-developer" created
service "system-sphinx" created
service "system-memcache" created
configmap "system" created
secret "system-smtp" created
configmap "system-environment" created
deploymentconfig.apps.openshift.io "system-app" created
deploymentconfig.apps.openshift.io "system-sidekiq" created
deploymentconfig.apps.openshift.io "system-sphinx" created
secret "system-events-hook" created
secret "system-master-apicast" created
secret "system-seed" created
secret "system-recaptcha" created
secret "system-app" created
secret "system-memcache" created
role.rbac.authorization.k8s.io "zync-que-role" created
serviceaccount "zync-que-sa" created
rolebinding.rbac.authorization.k8s.io "zync-que-rolebinding" created
deploymentconfig.apps.openshift.io "zync" created
deploymentconfig.apps.openshift.io "zync-que" created
deploymentconfig.apps.openshift.io "zync-database" created
service "zync" created
service "zync-database" created
secret "zync" created
deploymentconfig.apps.openshift.io "apicast-staging" created
deploymentconfig.apps.openshift.io "apicast-production" created
service "apicast-staging" created
service "apicast-production" created
configmap "apicast-environment" created
--> Success
Access your application via route 'backend-3scale.lvh.me'
Run 'oc status' to view your app.
[admin@localhost 3scale-operator]$ oc status
In project 3scale on server https://api.crc.testing:6443
svc/apicast-production - 10.217.4.63 ports 8080, 8090
dc/apicast-production deploys istag/amp-apicast:master
deployment #1 pending 9 seconds ago
svc/apicast-staging - 10.217.4.187 ports 8080, 8090
dc/apicast-staging deploys istag/amp-apicast:master
deployment #1 pending 9 seconds ago
https://backend-3scale.lvh.me (and http) to pod port http (svc/backend-listener)
dc/backend-listener deploys istag/amp-backend:master
deployment #1 pending 17 seconds ago
svc/backend-redis - 10.217.4.179:6379
dc/backend-redis deploys istag/backend-redis:master
deployment #1 pending 17 seconds ago
svc/system-master - 10.217.5.144:3000 -> master
svc/system-provider - 10.217.4.180:3000 -> provider
svc/system-developer - 10.217.5.53:3000 -> developer
dc/system-app deploys istag/amp-system:master
deployment #1 pending 12 seconds ago
svc/system-memcache - 10.217.4.154:11211
dc/system-memcache deploys istag/system-memcached:master
deployment #1 pending 15 seconds ago
svc/system-mysql - 10.217.4.240:3306
dc/system-mysql deploys istag/system-mysql:master
deployment #1 pending 16 seconds ago
svc/system-redis - 10.217.4.122:6379
dc/system-redis deploys istag/system-redis:master
deployment #1 pending 17 seconds ago
svc/system-sphinx - 10.217.5.84:9306
dc/system-sphinx deploys istag/amp-system:master
deployment #1 pending 13 seconds ago
svc/zync - 10.217.5.29:8080
dc/zync deploys istag/amp-zync:master
deployment #1 pending 11 seconds ago
svc/zync-database - 10.217.4.241:5432
dc/zync-database deploys istag/zync-database-postgresql:master
deployment #1 pending 9 seconds ago
dc/backend-cron deploys istag/amp-backend:master
deployment #1 pending 17 seconds ago
dc/backend-worker deploys istag/amp-backend:master
deployment #1 pending 18 seconds ago
dc/system-sidekiq deploys istag/amp-system:master
deployment #1 pending 12 seconds ago
dc/zync-que deploys istag/amp-zync:master
deployment #1 pending 11 seconds ago
8 infos identified, use 'oc status --suggest' to see details.
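When deployments stay pending like this, a few read-only oc commands usually reveal the cause, e.g. a PersistentVolumeClaim stuck Pending because no storage class can satisfy its access mode. These are standard oc subcommands; the PVC name comes from the template output above, but the actual failure reason on your cluster may differ:

```shell
# List pods and look for Pending / ImagePullBackOff / CrashLoopBackOff states
oc get pods -n 3scale

# PVCs stuck in Pending often mean no StorageClass can provide the
# requested access mode (e.g. ReadWriteMany)
oc get pvc -n 3scale
oc describe pvc system-storage -n 3scale

# Recent events usually name the exact scheduling or volume-binding failure
oc get events -n 3scale --sort-by=.lastTimestamp | tail -n 20
```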
[admin@localhost 3scale-operator]$ oc status
In project 3scale on server https://api.crc.testing:6443
svc/apicast-production - 10.217.4.63 ports 8080, 8090
dc/apicast-production deploys istag/amp-apicast:master
deployment #1 running for 3 minutes - 0/1 pods
svc/apicast-staging - 10.217.4.187 ports 8080, 8090
dc/apicast-staging deploys istag/amp-apicast:master
deployment #1 running for 3 minutes - 0/1 pods
https://backend-3scale.lvh.me (and http) to pod port http (svc/backend-listener)
dc/backend-listener deploys istag/amp-backend:master
deployment #1 running for 3 minutes - 0/1 pods
svc/backend-redis - 10.217.4.179:6379
dc/backend-redis deploys istag/backend-redis:master
deployment #1 running for 3 minutes - 0/1 pods
svc/system-master - 10.217.5.144:3000 -> master
svc/system-provider - 10.217.4.180:3000 -> provider
svc/system-developer - 10.217.5.53:3000 -> developer
dc/system-app deploys istag/amp-system:master
deployment #1 running for 3 minutes
svc/system-memcache - 10.217.4.154:11211
dc/system-memcache deploys istag/system-memcached:master
deployment #1 running for 3 minutes - 0/1 pods
svc/system-mysql - 10.217.4.240:3306
dc/system-mysql deploys istag/system-mysql:master
deployment #1 running for 3 minutes - 0/1 pods
svc/system-redis - 10.217.4.122:6379
dc/system-redis deploys istag/system-redis:master
deployment #1 running for 3 minutes - 0/1 pods
svc/system-sphinx - 10.217.5.84:9306
dc/system-sphinx deploys istag/amp-system:master
deployment #1 running for 3 minutes - 0/1 pods
svc/zync - 10.217.5.29:8080
dc/zync deploys istag/amp-zync:master
deployment #1 running for 3 minutes - 0/1 pods
svc/zync-database - 10.217.4.241:5432
dc/zync-database deploys istag/zync-database-postgresql:master
deployment #1 running for 3 minutes - 0/1 pods
dc/backend-cron deploys istag/amp-backend:master
deployment #1 running for 3 minutes - 0/1 pods
dc/backend-worker deploys istag/amp-backend:master
deployment #1 running for 3 minutes - 0/1 pods
dc/system-sidekiq deploys istag/amp-system:master
deployment #1 running for 3 minutes - 0/1 pods
dc/zync-que deploys istag/amp-zync:master
deployment #1 running for 3 minutes - 0/1 pods
8 infos identified, use 'oc status --suggest' to see details.
[admin@localhost 3scale-operator]$ oc get project
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get projects.project.openshift.io)
[admin@localhost 3scale-operator]$ oc get project
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get projects.project.openshift.io)
[admin@localhost 3scale-operator]$ oc get project
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get projects.project.openshift.io)
[admin@localhost 3scale-operator]$ oc projects --loglevel=8
I0111 04:40:17.893119 133237 loader.go:372] Config loaded from file: /home/admin/.kube/config
I0111 04:40:17.895097 133237 round_trippers.go:432] GET https://api.crc.testing:6443/apis/project.openshift.io/v1/projects/3scale
I0111 04:40:17.895124 133237 round_trippers.go:438] Request Headers:
I0111 04:40:17.895135 133237 round_trippers.go:442] User-Agent: oc/4.9.0 (linux/amd64) kubernetes/96e95ce
I0111 04:40:17.895147 133237 round_trippers.go:442] Authorization: Bearer <masked>
I0111 04:40:17.895158 133237 round_trippers.go:442] Accept: application/json, */*
I0111 04:40:21.128573 133237 round_trippers.go:457] Response Status: 401 Unauthorized in 3233 milliseconds
I0111 04:40:21.128622 133237 round_trippers.go:460] Response Headers:
I0111 04:40:21.128633 133237 round_trippers.go:463] Audit-Id: 89fc8679-302a-4fbf-8631-dc68cdde3bfd
I0111 04:40:21.128644 133237 round_trippers.go:463] Cache-Control: no-cache, private
I0111 04:40:21.128654 133237 round_trippers.go:463] Content-Type: application/json
I0111 04:40:21.128660 133237 round_trippers.go:463] Content-Length: 129
I0111 04:40:21.128666 133237 round_trippers.go:463] Date: Tue, 11 Jan 2022 09:40:21 GMT
I0111 04:40:21.128733 133237 request.go:1181] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0111 04:40:21.129368 133237 round_trippers.go:432] GET https://api.crc.testing:6443/apis/project.openshift.io/v1/projects
I0111 04:40:21.129389 133237 round_trippers.go:438] Request Headers:
I0111 04:40:21.129401 133237 round_trippers.go:442] Authorization: Bearer <masked>
I0111 04:40:21.129409 133237 round_trippers.go:442] Accept: application/json, */*
I0111 04:40:21.129416 133237 round_trippers.go:442] User-Agent: oc/4.9.0 (linux/amd64) kubernetes/96e95ce
I0111 04:40:24.157711 133237 round_trippers.go:457] Response Status: 401 Unauthorized in 3028 milliseconds
I0111 04:40:24.157753 133237 round_trippers.go:460] Response Headers:
I0111 04:40:24.157764 133237 round_trippers.go:463] Content-Type: application/json
I0111 04:40:24.157771 133237 round_trippers.go:463] Content-Length: 129
I0111 04:40:24.157816 133237 round_trippers.go:463] Date: Tue, 11 Jan 2022 09:40:24 GMT
I0111 04:40:24.157823 133237 round_trippers.go:463] Audit-Id: 2b198d10-bb0c-4203-9fa5-d903fcb74037
I0111 04:40:24.157830 133237 round_trippers.go:463] Cache-Control: no-cache, private
I0111 04:40:24.157863 133237 request.go:1181] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0111 04:40:24.158167 133237 helpers.go:217] server response object: [{
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}]
F0111 04:40:24.158297 133237 helpers.go:116] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000144001, 0xc0005a4a00, 0x68, 0x1f6)
/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x5bf1460, 0xc000000003, 0x0, 0x0, 0xc0002ac380, 0x2, 0x4bc92bc, 0xa, 0x74, 0x11da600)
/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
k8s.io/klog/v2.(*loggingT).printDepth(0x5bf1460, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0009bb910, 0x1, 0x1)
/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:735 +0x185
k8s.io/klog/v2.FatalDepth(...)
/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:1500
k8s.io/kubectl/pkg/cmd/util.fatal(0xc000787a80, 0x39, 0x1)
/go/src/github.com/openshift/oc/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
k8s.io/kubectl/pkg/cmd/util.checkErr(0x443d880, 0xc00088cdc0, 0x4133c90)
/go/src/github.com/openshift/oc/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935
k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
/go/src/github.com/openshift/oc/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
github.com/openshift/oc/pkg/cli/projects.NewCmdProjects.func1(0xc0009a3b80, 0xc000684ca0, 0x0, 0x1)
/go/src/github.com/openshift/oc/pkg/cli/projects/projects.go:83 +0x165
github.com/spf13/cobra.(*Command).execute(0xc0009a3b80, 0xc000684c90, 0x1, 0x1, 0xc0009a3b80, 0xc000684c90)
/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:856 +0x2c2
github.com/spf13/cobra.(*Command).ExecuteC(0xc0009a2500, 0x2, 0xc0009a2500, 0x2)
/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:960 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:897
main.main()
/go/src/github.com/openshift/oc/cmd/oc/oc.go:93 +0x645
goroutine 21 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x5bf1460)
/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
created by k8s.io/klog/v2.init.0
/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:420 +0xdf
goroutine 25 [select]:
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4133b88, 0x443db80, 0xc0003ae000, 0x1, 0xc000116360)
/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4133b88, 0x12a05f200, 0x0, 0xc000496001, 0xc000116360)
/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/apimachinery/pkg/util/wait.Forever(0x4133b88, 0x12a05f200)
/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
created by k8s.io/component-base/logs.InitLogs
/go/src/github.com/openshift/oc/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
goroutine 12 [select]:
io.(*pipe).Read(0xc000a534a0, 0xc000266000, 0x1000, 0x1000, 0x386fca0, 0x1, 0xc000266000)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/io/pipe.go:57 +0xcb
io.(*PipeReader).Read(0xc000206278, 0xc000266000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/io/pipe.go:134 +0x4c
bufio.(*Scanner).Scan(0xc0003ecb00, 0x0)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/bufio/scan.go:214 +0xa9
github.com/openshift/oc/pkg/cli/admin/mustgather.newPrefixWriter.func1(0xc0003ecb00, 0x443f400, 0xc000144008, 0x3df4d01, 0x17)
/go/src/github.com/openshift/oc/pkg/cli/admin/mustgather/mustgather.go:495 +0x13e
created by github.com/openshift/oc/pkg/cli/admin/mustgather.newPrefixWriter
/go/src/github.com/openshift/oc/pkg/cli/admin/mustgather/mustgather.go:494 +0x1d0
goroutine 54 [IO wait]:
internal/poll.runtime_pollWait(0x7f429cc612d8, 0x72, 0xffffffffffffffff)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc0008d2d98, 0x72, 0x1400, 0x1465, 0xffffffffffffffff)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc0008d2d80, 0xc000178000, 0x1465, 0x1465, 0x0, 0x0, 0x0)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/internal/poll/fd_unix.go:166 +0x1d5
net.(*netFD).Read(0xc0008d2d80, 0xc000178000, 0x1465, 0x1465, 0x1408, 0xc000178058, 0x5)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc000144018, 0xc000178000, 0x1465, 0x1465, 0x0, 0x0, 0x0)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/net/net.go:183 +0x91
crypto/tls.(*atLeastReader).Read(0xc0008f2c90, 0xc000178000, 0x1465, 0x1465, 0x1408, 0xc00007b000, 0x0)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/crypto/tls/conn.go:776 +0x63
bytes.(*Buffer).ReadFrom(0xc00026a278, 0x4439640, 0xc0008f2c90, 0x11d7d25, 0x39e98e0, 0x3cf26e0)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/bytes/buffer.go:204 +0xbe
crypto/tls.(*Conn).readFromUntil(0xc00026a000, 0x443ef20, 0xc000144018, 0x5, 0xc000144018, 0x8a)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/crypto/tls/conn.go:798 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc00026a000, 0x0, 0x0, 0x0)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/crypto/tls/conn.go:605 +0x115
crypto/tls.(*Conn).readRecord(...)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/crypto/tls/conn.go:573
crypto/tls.(*Conn).Read(0xc00026a000, 0xc00091f000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/crypto/tls/conn.go:1276 +0x165
bufio.(*Reader).Read(0xc00018d080, 0xc0009022d8, 0x9, 0x9, 0x178ccab, 0xc000967c78, 0x11d2fc5)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/bufio/bufio.go:227 +0x222
io.ReadAtLeast(0x44393c0, 0xc00018d080, 0xc0009022d8, 0x9, 0x9, 0x9, 0xc0009bb4f0, 0x7ef2edff319900, 0xc0009bb4f0)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/io/io.go:328 +0x87
io.ReadFull(...)
/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/io/io.go:347
golang.org/x/net/http2.readFrameHeader(0xc0009022d8, 0x9, 0x9, 0x44393c0, 0xc00018d080, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/openshift/oc/vendor/golang.org/x/net/http2/frame.go:237 +0x89
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0009022a0, 0xc0009da030, 0x0, 0x0, 0x0)
/go/src/github.com/openshift/oc/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000967fa8, 0x0, 0x0)
/go/src/github.com/openshift/oc/vendor/golang.org/x/net/http2/transport.go:1821 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0003f2780)
/go/src/github.com/openshift/oc/vendor/golang.org/x/net/http2/transport.go:1743 +0x6f
created by golang.org/x/net/http2.(*Transport).newClientConn
/go/src/github.com/openshift/oc/vendor/golang.org/x/net/http2/transport.go:695 +0x6c5
[admin@localhost 3scale-operator]$ oc login -u kubeadmin -p QKnP8-AFAU9-mzssj-mbGsU https://api.crc.testing:6443
The connection to the server oauth-openshift.apps-crc.testing was refused - did you specify the right host or port?
[admin@localhost 3scale-operator]$ crc start
INFO A CodeReady Containers VM for OpenShift 4.9.10 is already running
Started the OpenShift cluster.
The server is accessible via web console at:
https://console-openshift-console.apps-crc.testing
Log in as administrator:
Username: kubeadmin
Password: QKnP8-AFAU9-mzssj-mbGsU
Log in as user:
Username: developer
Password: developer
Use the 'oc' command line interface:
$ eval $(crc oc-env)
$ oc login -u developer https://api.crc.testing:6443
[admin@localhost 3scale-operator]$ crc status
CRC VM: Running
OpenShift: Degraded (v4.9.10)
Disk Usage: 26.14GB of 32.74GB (Inside the CRC VM)
Cache Usage: 15.65GB
Cache Directory: /home/admin/.crc/cache
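A CRC cluster reporting Degraded often does not recover in place; the usual workaround is to recreate the VM. A sketch, assuming you can afford to lose all cluster state (crc delete destroys everything deployed inside the VM):

```shell
# Stop and discard the broken VM (this deletes the cluster and all workloads in it)
crc stop
crc delete --force

# Start a fresh cluster and point oc at it again
crc start
eval $(crc oc-env)

# Prints the current kubeadmin credentials and login command
crc console --credentials
```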
[admin@localhost 3scale-operator]$ oc verssion
Error: unknown command "verssion" for "oc"
Did you mean this?
version
Run 'oc --help' for usage.
[admin@localhost 3scale-operator]$ oc version
Client Version: 4.9.10
error: You must be logged in to the server (Unauthorized)
[admin@localhost 3scale-operator]$ oc login -u kubeadmin -p QKnP8-AFAU9-mzssj-mbGsU https://api.crc.testing:6443
The connection to the server oauth-openshift.apps-crc.testing was refused - did you specify the right host or port?
[admin@localhost 3scale-operator]$ crc stop
INFO Stopping kubelet and all containers...
ERRO Failed to stop all containers: ssh command error:
command : sudo -- sh -c 'crictl stop $(crictl ps -q)'
err : wait: remote command exited without exit status or exit signal
-
ssh command error:
command : sudo -- sh -c 'crictl stop $(crictl ps -q)'
err : wait: remote command exited without exit status or exit signal
[admin@localhost 3scale-operator]$ crictl stop $(crictl ps -q)
bash: crictl: command not found...
bash: crictl: command not found...
[admin@localhost 3scale-operator]$ sudo -- sh -c 'crictl stop $(crictl ps -q)'
[sudo] password for admin:
sh: crictl: command not found
sh: crictl: command not found
[admin@localhost 3scale-operator]$ oc version
Client Version: 4.9.10
Unable to connect to the server: dial tcp 192.168.130.11:6443: connect: no route to host
First, the 3scale templates are only supported on OCP 3.11. On OCP 4.x, only the operator-based install is supported.
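On OCP 4.x, the supported flow is to install the 3scale operator (for example from OperatorHub) and then create an APIManager custom resource rather than processing a template. A minimal sketch — the wildcardDomain value is the one used in this session; check the operator's user guide for the fields supported by your operator version:

```shell
# After installing the 3scale operator into the target namespace,
# create an APIManager resource; the operator deploys all components from it.
cat <<'EOF' | oc apply -f -
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
  namespace: 3scale
spec:
  wildcardDomain: lvh.me
EOF
```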
Second, make sure your OCP cluster provides an RWX (ReadWriteMany) persistent-volume storage class. The template pkg/3scale/amp/auto-generated-templates/amp/amp-eval.yml requires one; the S3 templates do not. If the cluster's default storage class does not provide RWX PVs, you can set the storage class to use via the RWX_STORAGE_CLASS template parameter.
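If you do stay on the template path (on a 3.11 cluster), the RWX class can be passed explicitly. The storage class name below is a placeholder; substitute one from oc get storageclass whose provisioner supports ReadWriteMany volumes:

```shell
# Find a storage class on the cluster that can provision RWX volumes
oc get storageclass

# Pass it to the eval template explicitly (my-rwx-class is a placeholder)
oc new-app --file pkg/3scale/amp/auto-generated-templates/amp/amp-eval.yml \
    --param WILDCARD_DOMAIN=lvh.me \
    --param RWX_STORAGE_CLASS=my-rwx-class
```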
Thank you!