immortal/immortal

Question - can we do one retry using `retries`

alexzaytsev-newsroomly opened this issue · 6 comments

We use immortal to run some code inside our Docker containers.
Depending on some external configuration, we may want our service to always be restarted, or to run just once.

I tried setting `retries` to 0 in the immortal configuration, but that didn't seem to do it.

Is there any way I can use immortal so that the supervisor exits when the service exits, using the run.yml approach?
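For reference, this is roughly the shape of the run.yml we are using (a minimal sketch with placeholder command and log path, not our real configuration):

```yaml
# minimal sketch, placeholder command and log path
cmd: /some/executable
log:
  file: /var/log/some-executable.log
retries: 0   # hoped this would make the supervisor exit once the service exits
```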

Actually, updating to immortal 0.20 and setting the IMMORTAL_EXIT variable almost did it, except that on exit immortal crashes with the following Go stack trace:

```
2018/09/11 06:08:31 PID 25 (/some/executable) terminated, exit status 0 [1.484205s user 99.079ms sys 1m4.953751881s up]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x4ff1ce]

goroutine 1 [running]:
log/syslog.(*Writer).Close(0x0, 0x0, 0x0)
	/usr/local/go/src/log/syslog/syslog.go:180 +0x2e
main.main()
	/go/src/github.com/immortal/immortal/cmd/immortal/main.go:125 +0x8f8
```
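From the trace it looks like `Close` is being called on a nil `*syslog.Writer` (the receiver printed as `0x0`). The standalone sketch below is not immortal's code, just an illustration of that failure mode and the nil guard that would avoid it:

```go
package main

import (
	"fmt"
	"log/syslog"
)

func main() {
	// In immortal's case the writer would presumably be nil when syslog
	// logging was never set up; here we just declare it as nil to illustrate.
	var w *syslog.Writer

	// Calling w.Close() unconditionally on a nil *syslog.Writer dereferences
	// the nil pointer and panics, matching the SIGSEGV in the trace above.
	// Guarding the call with a nil check avoids the crash on shutdown.
	if w != nil {
		if err := w.Close(); err != nil {
			fmt.Println("error closing syslog writer:", err)
		}
	}
	fmt.Println("exited cleanly")
}
```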
nbari commented

Hi @alexzaytsev-newsroomly, `retries 1` should do the job if you are using immortaldir. Running from the command line with `immortal -r 1`, I notice it will start the process and, if it fails, restart it only once and then exit. `retries 0` would be like running the app without a supervisor, but for consistency it would probably be nice to have, since it would also be the only way to exit when the app exits.

For now I will check how to make the CLI and immortaldir behave the same. They do indeed behave the same; I was testing the wrong way.

Could you describe in more detail how you are using IMMORTAL_EXIT? I would like to look into why it is crashing. Thanks in advance 👍

nbari commented

@alexzaytsev-newsroomly can you reproduce the "crash"? I am currently testing with `retries 0`, `-r 0`, and IMMORTAL_EXIT, and it is working as expected.
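For reference, these are roughly the invocations I am testing with; the binary path is a placeholder, and I am assuming any non-empty IMMORTAL_EXIT value is enough to enable it:

```sh
# placeholder binary path
immortal -r 0 /some/executable                  # CLI, retries 0
IMMORTAL_EXIT=1 immortal -r 0 /some/executable  # plus the exit variable
immortal -c run.yml                             # run.yml containing "retries: 0"
```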

nbari commented

@alexzaytsev-newsroomly please try the latest version, 0.21.0.

nbari commented

Closing this for now; please feel free to reopen the issue if the problem persists.