Could not install module error
pguerin3 opened this issue · 12 comments
Error installing with high availability enabled
There is no problem if I don't enable the high-availability feature.

But if I enable the feature (i.e. 3 master nodes), then the last master node fails (see the attached screenshot).

Environment:
- Host OS: Fedora 36 with 32GB of physical memory
- Kernel version (for Linux host): fedora 5.19.12-200
- Vagrant version: 2.2.19
- Vagrant provider: VirtualBox
- VirtualBox version: 6.1.38r153438
- Vagrant project: OLCNE
Additional information
Correction: the last master node does install, so 3 masters in total are installed.
But something else is being installed after the 3rd master node, and that step is what fails.
Installing the Kubernetes module is failing. The likely cause is too little memory on your worker nodes.
If you're using the vagrant-env plug-in, copy your .env file to .env.local and increase the WORKER_MEMORY value. If you don't use the plug-in, edit the Vagrantfile directly (not recommended).
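As a rough sketch of that workflow (the 2048 figure is only an illustrative value, not a recommendation from this thread):
cp .env .env.local
echo "WORKER_MEMORY=2048" >> .env.local
vagrant destroy -f && vagrant up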
I suggest closing this issue. I can replicate the failure by setting WORKER_MEMORY in .env.local to a low value.
What value did you use for the WORKER_MEMORY?
To fail, usually less than 680 MB.
To work, at least 1024 MB, though 2048 MB or higher would make more sense to run a useful workload on your cluster and/or to test.
@pguerin3 - I just took a closer look at your screenshot. How exactly did you enable HA? In your screenshot, the K8s module deployment is happening on master1, which shouldn't be the case in an HA setup.
To enable HA, simply set MULTI_MASTER=true in .env.local (assuming you have the vagrant-env plugin). This will:
- auto-set STANDALONE_OPERATOR=true;
- provision a new VM named operator.
The K8s modules will then be installed once the operator VM is up and running.
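As a quick sanity check (these are standard Vagrant commands, not steps taken from this thread), you can verify the plugin is present and that the HA provisioning kicked in:
vagrant plugin install vagrant-env
vagrant status
After vagrant up, the machine list from vagrant status should include an operator VM alongside the masters.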
I have now installed the plugin and created the following .env.local file:
MULTI_MASTER=true
WORKER_MEMORY=2048
SUBNET=192.168.56
Plus the new files mentioned in issue #468.
I'll report back on the result soon.
I would need more information to help. Could you:
- Log in to your operator node and share the log file in question:
vagrant ssh operator
cat /var/tmp/cmd_OXQEP.log
- Log in to your master1 node and check which repo provides conmon:
vagrant ssh master1
dnf list installed conmon
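If conmon turns out not to be installed, a standard dnf query (my suggestion, not part of the original instructions) shows which enabled repository would supply it:
dnf provides conmon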
Also, I would recommend you set VERBOSE=true in your .env.local and try again:
vagrant destroy -f
vagrant up
(again, I'm assuming you have a current clone of this Git repo)
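For reference, that would make the .env.local shown earlier look something like this (same values as before, with the verbose flag added):
MULTI_MASTER=true
WORKER_MEMORY=2048
SUBNET=192.168.56
VERBOSE=true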
My issue is that a host with 24GB of memory is barely enough, so I had to find the right memory balance for the VMs via trial and error.
My .env.local looks like this:
MULTI_MASTER=true
WORKER_MEMORY=1048
OPERATOR_MEMORY=3128
MASTER_MEMORY=5128
SUBNET=192.168.56
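For rough context, assuming three master nodes and two workers (the node counts are my assumption, not stated above), those settings add up to roughly:
3 x 5128 MB (masters)  = 15384 MB
1 x 3128 MB (operator) =  3128 MB
2 x 1048 MB (workers)  =  2096 MB
total                  = 20608 MB, about 20 GB
which leaves only a few GB for the host itself on a 24GB machine.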
To do anything useful with the cluster, I would say that you need a host with at least 32GB.
I'm glad your issue is resolved. I run all my tests on my old Intel MacBook with 16GB of RAM, and you're right, more RAM would be nice.

