aerogear-attic/mobile-core

Can't install mobile services on Fedora 28 in a virtual machine

Closed this issue · 10 comments

Description

SSIA, using 1.0.0 branch

TASK [oc-cluster-up : debug] **************************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "Executing oc cluster up command - oc cluster up --service-catalog=true --host-data-dir=/home/asdf/mobile-core/ui/openshift-data --routing-suffix=192.168.124.25.nip.io --public-hostname=192.168.124.25 --version=v3.9.0 --image=openshift/origin"
}

TASK [oc-cluster-up : Cluster up] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "oc cluster up --service-catalog=true --host-data-dir=/home/asdf/mobile-core/ui/openshift-data --routing-suffix=192.168.124.25.nip.io --public-hostname=192.168.124.25 --version=v3.9.0 --image=openshift/origin", "delta": "0:00:00.111072", "end": "2018-08-07 13:02:52.168680", "msg": "non-zero return code", "rc": 1, "start": "2018-08-07 13:02:52.057608", "stderr": "Error: unknown flag: --service-catalog\n\n\nUsage:\n  oc cluster up [flags]\n\nExamples:\n  # Start OpenShift using a specific public host name\n  oc cluster up --public-hostname=my.address.example.com\n\nOptions:\n      --base-dir='': Directory on Docker host for cluster up configuration\n      --enable=[*]: A list of components to enable.  '*' enables all on-by-default components, 'foo' enables the component named 'foo', '-foo' disables the component named 'foo'.\nAll components: automation-service-broker, centos-imagestreams, persistent-volumes, registry, rhel-imagestreams, router, sample-templates, service-catalog, template-service-broker, web-console\nDisabled-by-default components: automation-service-broker, rhel-imagestreams, service-catalog, template-service-broker\n      --forward-ports=false: Use Docker port-forwarding to communicate with origin container. 
Requires 'socat' locally.\n      --http-proxy='': HTTP proxy to use for master and builds\n      --https-proxy='': HTTPS proxy to use for master and builds\n      --image='openshift/origin-${component}:${version}': Specify the images to use for OpenShift\n      --no-proxy=[]: List of hosts or subnets for which a proxy should not be used\n      --public-hostname='': Public hostname for OpenShift cluster\n      --routing-suffix='': Default suffix for server routes\n      --server-loglevel=0: Log level for OpenShift server\n      --skip-registry-check=false: Skip Docker daemon registry check\n      --write-config=false: Write the configuration files into host config dir\n\nUse \"oc options\" for a list of global command-line options (applies to all commands).", "stderr_lines": ["Error: unknown flag: --service-catalog", "", "", "Usage:", "  oc cluster up [flags]", "", "Examples:", "  # Start OpenShift using a specific public host name", "  oc cluster up --public-hostname=my.address.example.com", "", "Options:", "      --base-dir='': Directory on Docker host for cluster up configuration", "      --enable=[*]: A list of components to enable.  '*' enables all on-by-default components, 'foo' enables the component named 'foo', '-foo' disables the component named 'foo'.", "All components: automation-service-broker, centos-imagestreams, persistent-volumes, registry, rhel-imagestreams, router, sample-templates, service-catalog, template-service-broker, web-console", "Disabled-by-default components: automation-service-broker, rhel-imagestreams, service-catalog, template-service-broker", "      --forward-ports=false: Use Docker port-forwarding to communicate with origin container. 
Requires 'socat' locally.", "      --http-proxy='': HTTP proxy to use for master and builds", "      --https-proxy='': HTTPS proxy to use for master and builds", "      --image='openshift/origin-${component}:${version}': Specify the images to use for OpenShift", "      --no-proxy=[]: List of hosts or subnets for which a proxy should not be used", "      --public-hostname='': Public hostname for OpenShift cluster", "      --routing-suffix='': Default suffix for server routes", "      --server-loglevel=0: Log level for OpenShift server", "      --skip-registry-check=false: Skip Docker daemon registry check", "      --write-config=false: Write the configuration files into host config dir", "", "Use \"oc options\" for a list of global command-line options (applies to all commands)."], "stdout": "", "stdout_lines": []}
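The root cause is visible in the error output above: the playbook passes the oc 3.9-era `--service-catalog` flag, but the installed oc 3.10 client no longer accepts it. Per the 3.10 usage text quoted in the log, optional components are now toggled with `--enable`, and neither `--host-data-dir` nor `--version` appears in that flag list anymore (`--base-dir` replaces the host data directory). A rough sketch of an equivalent 3.10 invocation, inferred from that usage text and not tested here:

```shell
# Sketch only: flag mapping inferred from the oc 3.10 usage output above.
# service-catalog is disabled by default in 3.10, so enable it explicitly
# alongside the default ('*') components. Paths and IPs match the log above.
oc cluster up \
  --enable="*,service-catalog" \
  --base-dir=/home/asdf/mobile-core/ui/openshift-data \
  --routing-suffix=192.168.124.25.nip.io \
  --public-hostname=192.168.124.25 \
  --image='openshift/origin-${component}:${version}'
```

Note the `--image` value: 3.10 expects the templated `origin-${component}:${version}` form shown as the default in the usage text, not the bare `openshift/origin` the playbook passes.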

Environment

Operating system:
Fedora 28
OpenShift versions:
openshift-origin-client-tools-v3.10.0-dd10d17-linux-64bit
Docker version
Docker version 18.06.0-ce, build 0ffa825

[asdf@localhost mobile-core]$ ansible --version
ansible 2.6.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/asdf/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.14 (default, Mar 14 2018, 16:45:33) [GCC 8.0.1 20180222 (Red Hat 8.0.1-0.16)]

[asdf@localhost mobile-core]$ oc version
oc v3.10.0+dd10d17
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO

[asdf@localhost mobile-core]$ docker --version
Docker version 1.13.1, build bdb8293-unsupported

[asdf@localhost mobile-core]$ uname -a
Linux localhost.localdomain 4.16.3-301.fc28.x86_64 #1 SMP Mon Apr 23 21:59:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Steps to reproduce

  1. Install Fedora 28 in QEMU/KVM
  2. Follow docs.aerogear.org

@camilamacedo86 hey, I have updated the issue with the system info, thanks

@pmacko1,

A few observations:

  1. When you add a log, please format it as code.

  2. The log shows that you are not using the latest version of the installer at all. Please remove the aerogear-mobile dir and clone it again.

  3. After that, please run it again, both without and with --debug, and add the output here.

@camilamacedo86 ah, OK, I used a single backquote and it did not look good; I've edited it, thanks.

tested with aerogear:master and camilamacedo/aerogear:FEDORA

Starting OpenShift using openshift/origin-control-plane:v3.10 ...
I0808 15:16:16.453528    5161 flags.go:30] Running "create-kubelet-flags"
Error: error creating node config: could not run "create-kubelet-flags": Docker run error rc=139; caused by: Docker run error rc=139
Error to run 'oc cluster up'. 
See https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md#getting-started. 
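An aside on the `rc=139` in the output above (my reading, not stated in the thread): exit statuses above 128 conventionally encode 128 plus a fatal signal number, so 139 corresponds to signal 11, SIGSEGV, i.e. the process inside the container segfaulted. A quick way to decode such a status:

```shell
# Exit statuses above 128 encode a fatal signal: status = 128 + signo.
rc=139
sig=$((rc - 128))                          # 11
echo "signal $sig: SIG$(kill -l "$sig")"   # prints: signal 11: SIGSEGV
```

A segfault inside the origin container usually points at a host-level incompatibility (kernel, Docker, SELinux) rather than a bug in the installer itself.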

@camilamacedo86 Could you tell me what shows that I'm not using the latest installer, please?

Hi @pmacko1,

You were not using the correct repo and branches. According to the description above, the installer itself is working as expected. Note that the failure happens when running oc cluster up, and the output points you to the OpenShift documentation you need to check in order to fix your local environment. The problem is with your setup.

Try running oc cluster down and then oc cluster up. If that does not work, check the error against that documentation and fix it before running the installer again.
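The down/up cycle suggested above can be sketched as follows. The `--server-loglevel` flag comes from the oc 3.10 usage output quoted earlier in this issue; raising it to surface the failing step is my suggestion, not part of the original advice:

```shell
# Tear down any half-started local cluster state, then bring it up again
# with more verbose server logging so the underlying failure is visible.
oc cluster down
oc cluster up \
  --public-hostname=192.168.124.25 \
  --routing-suffix=192.168.124.25.nip.io \
  --server-loglevel=2
```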

Following our conversation, let's resolve it next week.

Hi @pmacko1,

I am closing this one since we verified that it is working; however, feel free to re-open it. Below are the main problems tracked so far that affect installation on Fedora. If we come across other scenarios/issues that affect the Fedora installation, we need to track them too.

  • The OS did not have libselinux-python installed, which is required (#140)

  • If an error occurs during execution of the Ansible scripts, the run still finishes with a success message, when the expected behaviour is for the process to be killed and an error message reported. (#205)

PS: Please feel free to reopen if you disagree.

Hey @camilamacedo86, thanks. I would keep this open, because it is still in progress as a Trello card. WDYT?

@pmacko1 the task where this issue is properly described is #205. The description of this one does not track an issue in the mobile installer at all; it just shows that your cluster was not set up/working properly.