openshift/console-plugin-template

`yarn run start-console` fails on Mac M1 with HTTP proxy error

AjayJagan opened this issue · 10 comments

Our team is using the console-plugin-template as the base for our project: https://github.com/artemiscloud/activemq-artemis-self-provisioning-plugin

We are using M1 MacBooks, and when we run `yarn run start-console` and access http://localhost:9000, the following error comes up.

More information

  • macOS M1
  • Hypervisor: Podman (qemu)
  • Did you run crc setup before starting it (Yes)
  • Running CRC on: Laptop

CRC version

CRC version: 2.10.2+065f0741
OpenShift version: 4.11.7
Podman version: 4.2.0

CRC status

DEBU CRC version: 2.10.2+065f0741
DEBU OpenShift version: 4.11.7
DEBU Podman version: 4.2.0
DEBU Running 'crc status'
CRC VM:          Running
OpenShift:       Running (v4.11.7)
RAM Usage:       8.943GB of 16.34GB
Disk Usage:      15.18GB of 32.74GB (Inside the CRC VM)
Cache Usage:     73.53GB
Cache Directory: /Users/ajay/.crc/cache

CRC config

- consent-telemetry                     : no

Host Operating System

ProductName:		macOS
ProductVersion:		13.0.1
BuildVersion:		22A400

Steps to reproduce

  1. clone https://github.com/artemiscloud/activemq-artemis-self-provisioning-plugin.git
  2. cd activemq-artemis-self-provisioning-plugin
  3. run `yarn run build` (yarn must be installed)
  4. run `yarn run start` in a separate terminal
  5. run `yarn run start-console` in another terminal

Expected

The UI should be accessible at http://localhost:9000

Actual

The console logs show an HTTP proxy error and the UI is not accessible.

Logs

Starting local OpenShift console...
API Server: https://api.crc.testing:6443
Console Image: quay.io/openshift/origin-console:latest
Console URL: http://localhost:9000
Trying to pull quay.io/openshift/origin-console:latest...
Getting image source signatures
Copying blob sha256:b1a581a9885c08c968f83a70f6d6940564f288cb9fc4b039e4743e7be8c9c8cc
Copying blob sha256:69788635f90cb66139f7d62ff431c784b4fd7b98632bb18e7cc10bebab55205e
Copying blob sha256:9578bde7a5452e36c2da7dca455705194e76d896b3d3371358fc3fc027e203d4
Copying blob sha256:dfaf6e5198c10186790f5f02d3d4e18941704c0f29b0dda3e7a1df12563a5c5a
Copying blob sha256:230a4f53a4a69f1aed929af369883fd1563117b0643f0c8bf1ad1578d41a895e
Copying config sha256:e47e7f353b8fa136ba24f53037156b0b6a5c735c5f9a638f575ca66f686becca
Writing manifest to image destination
Storing signatures
WARNING: image platform ({amd64 linux [] }) does not match the expected platform ({arm64 linux [] })
W1121 05:28:19.153195 1 main.go:228] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!
I1121 05:28:19.155403 1 main.go:238] The following console plugins are enabled:
I1121 05:28:19.155896 1 main.go:240] - activemq-artemis-self-provisioning-plugin
W1121 05:28:19.156592 1 main.go:351] cookies are not secure because base-address is not https!
W1121 05:28:19.158006 1 main.go:682] running with AUTHENTICATION DISABLED!
I1121 05:28:19.168760 1 main.go:798] Binding to 0.0.0.0:9000...
I1121 05:28:19.169235 1 main.go:803] not using TLS
2022/11/21 05:28:45 http: proxy error: dial tcp: lookup api.crc.testing on 192.168.127.1:53: no such host
2022/11/21 05:28:45 http: proxy error: dial tcp: lookup api.crc.testing on 192.168.127.1:53: no such host
2022/11/21 05:28:45 http: proxy error: dial tcp: lookup api.crc.testing on 192.168.127.1:53: no such host
2022/11/21 05:28:45 http: proxy error: dial tcp: lookup api.crc.testing on 192.168.127.1:53: no such host
2022/11/21 05:28:45 http: proxy error: dial tcp: lookup api.crc.testing on 192.168.127.1:53: no such host
2022/11/21 05:28:45 http: proxy error: dial tcp: lookup api.crc.testing on 192.168.127.1:53: no such host
2022/11/21 05:28:45 http: proxy error: dial tcp: lookup api.crc.testing on 192.168.127.1:53: no such host

It would be great if you could assist us in finding the exact issue. Thanks in advance.

Hey @AjayJagan,

can you check whether your CRC configuration in /etc/hosts matches the output of `crc ip`?
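
For example, a quick comparison (assuming a default CRC setup):

```sh
# the address CRC expects the cluster hostnames to resolve to
crc ip

# the entries that crc setup wrote for those hostnames
grep crc.testing /etc/hosts
```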

Hi @jerolimov, thanks for your reply. The output of crc ip is 127.0.0.1, and my /etc/hosts file has the following entry:

127.0.0.1 localhost api.crc.testing canary-openshift-ingress-canary.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing downloads-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing

I'm also just a CRC user and not involved in the details there.

On my Linux machine, crc ip returns a 192.168.x.x address, and I have a matching line in my /etc/hosts. But when checking the CRC issue tracker (https://github.com/crc-org/crc/issues?q=crc+ip+localhost), it looks like localhost is fine on Windows and Mac.

But this console output points to the problem:

2022/11/21 05:28:45 http: proxy error: dial tcp: lookup api.crc.testing on 192.168.127.1:53: no such host

start-console.sh uses oc whoami --show-server to determine which server the console should proxy to (see the sketch below). Is the CRC server the last server you logged into with oc or kubectl?

Please check

oc whoami
oc whoami --show-server
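
For context, the relevant line in start-console.sh looks roughly like this (a sketch; the exact variable handling may differ between template versions):

```sh
# The endpoint comes from your current oc/kubectl login context,
# so the console proxies to whatever server you last logged into.
BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT=$(oc whoami --show-server)
```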

I use this login command before starting the console:

oc login -u kubeadmin -p $(cat ~/.crc/machines/crc/kubeadmin-password) --server https://api.crc.testing:6443

But I'm unsure whether the CRC kubeadmin password is saved in the same spot on a Mac. You can try replacing it with the password that CRC prints at the end of crc start.
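
If that file isn't there on your Mac, crc console --credentials prints the current credentials, e.g.:

```sh
# prints the kubeadmin password and ready-made login commands
crc console --credentials

# then, with the printed password (placeholder below):
oc login -u kubeadmin -p <kubeadmin-password> --server https://api.crc.testing:6443
```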

Hey @jerolimov! Thanks a lot for the reply! So when I ran:
oc whoami -> kubeadmin
oc whoami --show-server -> https://api.crc.testing:6443

Also, what I do to log in is use the web console: there is a copy-login-command option in the top-right drop-down, which gives a token to log in. Maybe that is the issue?

When checking the error again:

2022/11/21 05:28:45 http: proxy error: dial tcp: lookup api.crc.testing on 192.168.127.1:53: no such host

It says 192.168.127.1:53: no such host, and port 53 is DNS. So the lookup fails, and the reason is that it is not a lookup on your machine directly: the name is resolved inside the console container.

At the end of start-console.sh you can see that it uses podman or docker, depending on which is installed. And inside this console container image, the hostname "api.crc.testing" is unknown.
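
The runtime detection is roughly this (a sketch; check your copy of the script for the exact flags):

```sh
# Prefer podman if it is installed, otherwise fall back to docker.
if [ -x "$(command -v podman)" ]; then
    podman run --pull always --rm -p "$CONSOLE_PORT":9000 --env-file <(set | grep BRIDGE) "$CONSOLE_IMAGE"
else
    docker run --pull always --rm -p "$CONSOLE_PORT":9000 --env-file <(set | grep BRIDGE) "$CONSOLE_IMAGE"
fi
```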

As a workaround or test, you can set the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT variable in start-console.sh to an IP address of your Mac (in the form https://127.0.0.1:6443 or https://192.168.x.y:6443), so that the console container has a route to your CRC.
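
For example (hypothetical address; substitute one that is reachable from the container):

```sh
# replace the dynamic `oc whoami --show-server` lookup with a fixed endpoint
BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT="https://192.168.x.y:6443"
```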

Hi, thanks for the suggestions again :) but the issue remains. I tried replacing BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT with two of the options and got the following results:

  1. https://127.0.0.1:6443 - 2022/12/19 08:08:38 http: proxy error: dial tcp 127.0.0.1:6443: connect: connection refused
  2. https://192.168.127.1:6443 -
2022/12/19 08:13:55 http: proxy error: dial tcp 192.168.127.1:6443: i/o timeout
2022/12/19 08:14:00 http: proxy error: dial tcp 192.168.127.1:6443: i/o timeout
E1219 08:14:03.204147       1 handlers.go:165] GET request for "activemq-artemis-self-provisioning-plugin" plugin failed: Get "http://host.containers.internal:9001/plugin-manifest.json": dial tcp 192.168.127.254:9001: connect: connection refused
2022/12/19 08:14:03 http: panic serving 10.88.0.6:53364: runtime error: invalid memory address or nil pointer dereference
goroutine 137 [running]:
net/http.(*conn).serve.func1()
	/usr/local/go/src/net/http/server.go:1825 +0xbf
panic({0x2dce5c0, 0x48eaa30})
	/usr/local/go/src/runtime/panic.go:844 +0x258
github.com/openshift/console/pkg/plugins.(*PluginsHandler).proxyPluginRequest(0xc0003f2800, 0x2?, {0xc0006288b1, 0x29}, {0x35374b8, 0xc0000fe540}, 0xc000624990?)
	/go/src/github.com/openshift/console/pkg/plugins/handlers.go:166 +0x582
github.com/openshift/console/pkg/plugins.(*PluginsHandler).HandlePluginAssets(0xc000076c00?, {0x35374b8, 0xc0000fe540}, 0xc000579800)
	/go/src/github.com/openshift/console/pkg/plugins/handlers.go:148 +0x265
github.com/openshift/console/pkg/server.(*Server).HTTPHandler.func31({0x35374b8?, 0xc0000fe540?}, 0x102d901?)
	/go/src/github.com/openshift/console/pkg/server/server.go:591 +0x33
net/http.HandlerFunc.ServeHTTP(0x0?, {0x35374b8?, 0xc0000fe540?}, 0xc000076c00?)
	/usr/local/go/src/net/http/server.go:2084 +0x2f
net/http.StripPrefix.func1({0x35374b8, 0xc0000fe540}, 0xc00018e900)
	/usr/local/go/src/net/http/server.go:2127 +0x330
net/http.HandlerFunc.ServeHTTP(0xcd0000c000609020?, {0x35374b8?, 0xc0000fe540?}, 0xc0005a79c8?)

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale