insitro/redun

How to execute in local containers?

Closed this issue · 2 comments

Melkaz commented

Hi!

When attempting to run Docker containers locally, I face this error:

  File "/home/user/.local/lib/python3.8/site-packages/redun/executors/aws_utils.py", line 64, in get_aws_env_vars
    creds = session.get_credentials().get_frozen_credentials()
AttributeError: 'NoneType' object has no attribute 'get_frozen_credentials'
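For context, this failure mode comes from boto3's Session.get_credentials() returning None when no AWS credentials can be resolved on the machine; calling a method on that None raises the AttributeError above. A minimal stand-in sketch of just that pattern (not redun or boto3 code, purely illustrative):

```python
class Session:
    """Minimal stand-in for a boto3 Session on a machine with no AWS credentials."""

    def get_credentials(self):
        # boto3 returns None when no credentials can be resolved
        return None


session = Session()
creds = session.get_credentials()
try:
    creds.get_frozen_credentials()  # fails: creds is None
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'get_frozen_credentials'
```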

Do we have to set up AWS environment variables if we want to run locally?

Config:

[executors.my_executor]
type = docker
image = 'ubuntu:20.04'
scratch = scratch

Code:

from redun import script, task, Scheduler

@task(executor="my_executor", version='1')
def task1():
    return script(
        "echo hello"
    )

@task(version='1')
def main():
    return task1()

if __name__ == "__main__":
    scheduler = Scheduler()
    result = scheduler.run(main())
    print(result)

Thanks :)

@Melkaz no need to set up AWS variables. You can add include_aws_env = false to your redun.ini file within the executor config like:

[executors.my_executor]
type = docker
image = 'ubuntu:20.04'
scratch = scratch
include_aws_env = false
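For completeness, a minimal .redun/redun.ini for a purely local Docker run might look like the sketch below. The [backend] section with a sqlite db_uri follows redun's documented defaults, but treat the exact values as an assumption to check against the docs:

```ini
# Sketch of a minimal .redun/redun.ini for local-only Docker runs
[backend]
db_uri = sqlite:///redun.db

[executors.my_executor]
type = docker
image = 'ubuntu:20.04'
scratch = scratch
include_aws_env = false
```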

@mattrasmus do you think this fallback should default to false and be left to the AWS executor to set to true in its Docker config, like the GCP executor currently does (though it sets it to false)? Regardless, we should probably add this config option to the executor docs so it is clearer to users what it controls and how to enable/disable it, depending on which default we settle on.

Thanks for developing redun. We like it. I have a question related to running docker_executors locally.

If I run redun on a local server using docker_executors, is there a way to control the number of concurrent jobs/processes? The max_workers setting doesn't seem to apply, since it is for local_executors. How can I specify a maximum number of jobs/processes for docker_executors running on a local server? I searched around but couldn't figure it out. Thanks!
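One workflow-level workaround (not a redun feature, just a plain-Python sketch of the idea): partition the fan-out into fixed-size batches and have the workflow evaluate one batch before launching the next, so at most that many containers run at once. The chunked helper below is hypothetical, not part of redun:

```python
def chunked(items, size):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


# With at most 4 containers at a time, a driver loop would launch one
# batch, wait for it to finish, then launch the next.
inputs = list(range(10))
print(list(chunked(inputs, 4)))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```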