tbrand/neph

Dynamic environment per job

waghanza opened this issue · 8 comments

Hi @tbrand,

neph is missing some features for the evolution of https://github.com/the-benchmarker/web-frameworks:

  • Create an environment per job
  • Execute jobs sequentially
  • ...

Do you plan to / have time to implement them?

Regards,

Sorry for the late reply.
Yup, I'm still motivated to keep updating neph.

Create an environment per job

I don't understand this feature. What do you want to do, specifically?

Execute jobs sequentially

How would you use it, specifically?
I mean, you know, you can already do this in neph.yaml, like:

job0:
  depends_on:
    - job1
job1:
  depends_on:
    - job2
...

Or do you mean at the command-line level?

Hi @tbrand,

Let me clarify my points / needs:

  • It could be useful to have before and after hooks (see the sketch below)
    • before could be used to set up the env (variables shared with the commands)
    • after could be used for garbage collection (dropping temp files, cleaning memory, ...)
  • sequentially means having a --seq flag, because running in parallel could be counter-productive on low-resource machines
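
For illustration, a hypothetical neph.yaml using such hooks (the before / after keys and the script names are made up; they don't exist in neph today):

sinatra:
  before:
    - bin/setup_env      # hypothetical: prepare shared variables
  commands:
    - bin/run_benchmark
  after:
    - bin/cleanup        # hypothetical: drop temp files, clean memory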

Am I clearer?

@waghanza OK, I'll work on it. About the latter one: you don't care about the order of execution if the job dependencies are at the same level?

@tbrand Not sure I understand

  1. before / after: create an environment, i.e. have specific vars per job (dynamically created by a script)
  2. sequentially: use a sequential mode (option) instead of the parallel one (the default). For our benchmark tool, parallel is useful since I create all targets in parallel, but in the case of bare metal (only one target) we have to create / compute / destroy one by one (sequentially); see the command sketch below
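
Something like this (the flag is only a proposal; it doesn't exist in neph yet):

neph --seq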

I hope I am clearer

@waghanza I think I understand now. I'll submit a commit soon.

@waghanza I've submitted it.
#80

@tbrand In fact, I was thinking about sharing variables between the commands in a job. I mean that each line in commands could share some variables, but those would not be available to other jobs (see the sketch after the job tree below).

main:
  depends_on:
    - ruby

ruby:
  depends_on:
    - sinatra

sinatra:
  commands:
    - bin/droplet create
    - bin/droplet upload
    - bin/droplet exec
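
To make the idea concrete, a purely hypothetical sketch (the job, variable, and value are made up; today each command runs in its own process, so this doesn't actually work):

setup:
  commands:
    # hypothetical semantics: all commands in one job share one environment
    - export TARGET_IP=203.0.113.10   # the first command sets a var
    - echo $TARGET_IP                 # a later command could read it; other jobs could not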

Hi @tbrand,

I'm re-opening this issue since what I had in mind differs from the implementation. In
https://github.com/waghanza/http-benchmark/blob/cloudify/tools/jobs/cloud/digitalocean.yml#L58-L64 there is a description of what I wanted (but I've done it another way).

The command bin/droplet create -l ruby -f sinatra -w sets up an environment (creates a droplet in this case). My first thought was to share some vars between jobs in the same family.

I mean using a var (an IP, for example) in the job above and in bin/benchmark extract -l ruby -f sinatra, because they are in the same "family".

Actually, I use a global database to share variables between jobs, but I have been thinking of using a shared env (like in https://crystal-lang.org/api/0.27.0/Process.html#new%28command%3AString%2Cargs%3Dnil%2Cenv%3AEnv%3Dnil%2Cclear_env%3ABool%3Dfalse%2Cshell%3ABool%3Dfalse%2Cinput%3AStdio%3DRedirect%3A%3AClose%2Coutput%3AStdio%3DRedirect%3A%3AClose%2Cerror%3AStdio%3DRedirect%3A%3AClose%2Cchdir%3AString%3F%3Dnil%29-class-method) to ensure a proper and secure exchange of vars between processes.
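
A minimal Crystal sketch of the idea, assuming neph first collected the vars of a job family into a hash (the var name and value are made up):

# Hypothetical: a variable produced by an earlier job in the same family
job_env = {"DROPLET_IP" => "203.0.113.10"} of String => String?

# Hand it to the child through the env parameter of Process.new,
# so jobs exchange vars without a global database
child = Process.new("bin/benchmark extract -l ruby -f sinatra",
                    env: job_env,
                    shell: true,
                    output: Process::Redirect::Inherit,
                    error: Process::Redirect::Inherit)
child.wait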

What do you think?