The SOCK variable in the ./podman/minichris.sh file
Opened this issue · 11 comments
I'm having a problem with a particular line in the minichris.sh file. I'm trying to store its value as a variable in my Ansible playbook, but it keeps returning an error related to Jinja2 templating.
Under the constants section there is a SOCK variable, which is set to the output of a command that checks whether the Podman socket is active.
In my playbook I used the pipe lookup, which normally works for capturing the output of a command and storing it in a variable, but here it raises an error that, from what I found online, comes from Jinja2 templating.
Above is where I used the pipe lookup.
Above is the error it returns.
I googled it and learnt that it's a Jinja2 templating engine error, and the way to resolve it is to tell Ansible to treat the value as raw. I can accomplish that in one of two ways:
- Placing !unsafe before the value
- Wrapping the command in {% raw %}...{% endraw %}
Above is the test file I created to specifically work on this issue.
You may observe that I am using the "!unsafe" option.
Below the "sock" variable, I also created another one, "chronos", to check that the pipe lookup itself works.
Now when I run the file, you will see that the "chronos" variable outputs the result of the date command.
But the "sock" variable outputs the command itself and not its result.
The command is:
podman info --format "{{ .Host.RemoteSocket.Path }}"
The result should be:
/run/user/1000/podman/podman.sock
The other option, {% raw %}...{% endraw %}, also returns an error.
You can replicate the error and better understand what I am talking about by running the test file below:
---
- name: Test file for the SOCK variable
  hosts: localhost
  vars:
    sock: "{{ lookup('pipe', 'podman info --format {{.Host.RemoteSocket.Path}}') }}"
    chronos: "{{ lookup('pipe', 'date') }}"
  tasks:
    - name: Checking my vars
      debug:
        msg: "{{ item }}"
      loop:
        - "{{ sock }}"
        - "{{ chronos }}"
Another problem I've been having is that, when I try to run the application with ./minichris.sh up, it often returns the following error. In fact it's been doing so for a while now, except for one time when it worked fine. Usually what happens is:
- I run the application.
- It complains about me not having enough space.
- I stop the process and free up some space.
- I manually create the pods that hadn't been created yet and run ./minichris.sh down so it can delete everything.
But then when I try to bring the application up again with ./minichris.sh up, it returns the following error:
I'm wondering if something got corrupted when the process was interrupted by the lack of space. Is there anything I need to delete to stop the error?
A third issue I'm currently having is figuring out which command originally starts the INIT_CONTAINER in the ./podman/minichris.sh file when you run minichris.sh up.
I need to know this so that I can create a dedicated task for it in my Ansible playbooks, but I can't quite figure out how it gets started yet.
@devbird007 podman kube down usually fails whenever podman kube play failed, so it's necessary for us to clean up our state ourselves. Manually creating the pods probably won't work, since podman kube play does many things and it would be hard to replicate all of them.
The minichris.sh script and related files were originally copied from my repo here: https://github.com/FNNDSC/minichris-k8s
In the README.md I provide advice on how to recover from errors. If miniChRIS is the only thing you're running using Podman, then you can remove everything by running
podman pod rm -af
podman volume prune -f
The INIT_CONTAINER is started by podman kube play. None of the code inside minichris.sh related to INIT_CONTAINER creates state, i.e. for your purposes it's probably safe to ignore it all.
As for why the code is there if it's supposedly safe to ignore, read my documentation here: https://github.com/FNNDSC/miniChRIS-k8s/wiki/Podman-initContainers
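If the goal is just to reproduce that step from an Ansible playbook, a minimal sketch would be a task that runs podman kube play against the same Kubernetes YAML that minichris.sh uses, for example via the containers.podman.podman_play module (assuming the containers.podman collection is installed; the file path below is a placeholder, not the actual name used by minichris.sh):

- name: Run podman kube play on the CUBE manifest (sketch)
  containers.podman.podman_play:
    # placeholder path: substitute whichever YAML file minichris.sh passes to `podman kube play`
    kube_file: /path/to/cube-pod.yml
    state: started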
Okay I will definitely try this.
Hey @jennydaman. Now I cannot even bring up the application at all. It keeps returning the error.
That is, it displays the output of the command podman pod logs minichris-cube-pod.
I have tried removing and re-downloading the cube image itself, but that hasn't helped.
I'm not sure what the state of your system is. If I were to guess, your working setup depended on some pre-existing state (such as the swift pod) which is not created reproducibly by your code. Please ask @veniceofcode for guidance on your specific issue.
Keep in mind that the code here in this repo ChRIS-in-a-box is an outdated copy of https://github.com/FNNDSC/miniChRIS-k8s. miniChRIS-k8s has my support.
If you want to run ChRIS on Podman, make sure your system meets the requirements and that you are starting from a clean state. A clean state can be achieved by running
# warning: this command is destructive!
podman system reset
Follow the instructions here. If anything doesn't work then create a detailed bug report and I will fix it.
If you want to use the code from miniChRIS-k8s in your own project/ChRIS-in-a-box, make sure you keep your code up-to-date with miniChRIS-k8s.
Hi @jennydaman. I ran the podman system reset command. I also decided to try removing and reinstalling Podman, and then I'll try it again.
The problem is that Debian-based distros are stuck at Podman version 3.4.4, so if I want to use a 4.* version, I have to build it from source.
Do you think this could have contributed to the problem I kept encountering whenever I tried to run minichris?
Also, do you know how I can tackle the very first problem I listed in this issue, about the SOCK variable?
Hi @jennydaman
I had to build Podman from source, so removing all traces of it wasn't simple, but I was eventually able to do it. I removed all images too. I then rebuilt Podman and tried to run the minichris.sh file again, and it still returned the same errors as in the screenshots I sent earlier.
You should install Podman from the Kubic repository as described here https://podman.io/docs/installation#ubuntu
Hi @jennydaman, thank you so much for your help.
But is there a reason my minichris-cube-pod pod is always coming out degraded?
When I run podman pod inspect minichris-cube-pod, it returns this:
{
  "Id": "f3c3205211785d7a531bcd4f028518476fb400bb3e33e38dcc2512e396c8e8d6",
  "Name": "minichris-cube-pod",
  "Created": "2023-07-17T17:26:00.235370096+01:00",
  "ExitPolicy": "stop",
  "State": "Degraded",
  "Hostname": "minichris-cube-pod",
  "Labels": {
    "app": "minichris-cube",
    "org.chrisproject.role": "ChRIS ultron backEnd"
  },
  "CreateCgroup": true,
  "CgroupParent": "user.slice",
  "CgroupPath": "user.slice/user-libpod_pod_f3c3205211785d7a531bcd4f028518476fb400bb3e33e38dcc2512e396c8e8d6.slice",
  "CreateInfra": true,
  "InfraContainerID": "8dc5bc5e678fad362a629572e14e5bd6b963e866ab949bcc4f047fb5459b8121",
  "InfraConfig": {
    "PortBindings": {
      "8000/tcp": [
        {
          "HostIp": "",
          "HostPort": "8000"
        }
      ]
    },
    "HostNetwork": false,
    "StaticIP": "",
    "StaticMAC": "",
    "NoManageResolvConf": false,
    "DNSServer": null,
    "DNSSearch": null,
    "DNSOption": null,
    "NoManageHosts": false,
    "HostAdd": null,
    "Networks": [
      "podman-default-kube-network"
    ],
    "NetworkOptions": null,
    "pid_ns": "private",
    "userns": "host",
    "uts_ns": "private"
  },
  "SharedNamespaces": [
    "ipc",
    "net",
    "uts"
  ],
  "NumContainers": 6,
  "Containers": [
    {
      "Id": "3328bb9fa62f4a28463bba25e02a3a0f37ba8a4986bdda4d7bee6db73b558433",
      "Name": "minichris-cube-pod-cube-worker-periodic",
      "State": "created"
    },
    {
      "Id": "3b1b950b73b14acd991084da10a2cc9df97b19cc98e8bb637fb4ae0aef14a278",
      "Name": "minichris-cube-pod-cube-worker",
      "State": "created"
    },
    {
      "Id": "76d45c144a88c53cc628584edcb7e34c8cd9575323eca68e0e1bc7e61fc8b0e6",
      "Name": "minichris-cube-pod-server",
      "State": "created"
    },
    {
      "Id": "7a6d164289bfd2dbfefef86e827f2bdb56760f17b3cbcdcdab7c2c3b0b0f285f",
      "Name": "minichris-cube-pod-cube-celery-beat",
      "State": "created"
    },
    {
      "Id": "83fdbd8b20d60de32a7ed85ed0aa427bb45444df32567f9e7f99b36aaebe95d8",
      "Name": "minichris-cube-pod-migratedb",
      "State": "exited"
    },
    {
      "Id": "8dc5bc5e678fad362a629572e14e5bd6b963e866ab949bcc4f047fb5459b8121",
      "Name": "f3c320521178-infra",
      "State": "running"
    }
  ]
}
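From what I understand, a pod reports Degraded when some of its containers are running and others are not; in this output only the infra container is running, the cube containers are still just created, and migratedb has exited. I'm guessing the next step is to check the logs of the exited container with podman logs minichris-cube-pod-migratedb, but I'm not sure.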