Jc2k/pytest-docker-tools

Keep containers alive after test completion


Sometimes it is expensive to start new containers from scratch over and over again, so it might be useful to have an option to keep the containers generated for testing alive after the test is over.

Perhaps this could be done by adding a keepalive option to the container function call, and exposing it as a CLI option as well.

If we added the option to keep containers alive, we would need to loop through the live containers to check whether one with the right specification already exists and, if not, start a new one.

It might also be a good idea to let the user decide, per fixture, which callable to wait for before the container is considered ready. So it would be great to include callable as another argument.

So overall something like this:

def container(request, docker_client, **kwargs):
    ''' Docker container: image={image} '''

    # defaults; anything the caller passes explicitly wins
    # (the default readiness check is evaluated lazily, after `container`
    # is assigned below, so the late-bound reference is fine)
    kwargs.setdefault('callable', lambda: container.reload() or container.ready())
    kwargs.setdefault('keepalive', False)
    kwargs.setdefault('detach', True)

    keepalive = kwargs.pop('keepalive')
    callable = kwargs.pop('callable')

    raw_container = None

    # loop through all existing containers
    for c in docker_client.containers.list():
        for k, v in kwargs.items():

            # was the live container initialised with the options we require?
            # each k should be available in the container's `attrs`
            if c.attrs.get(k) != v:
                raw_container = None
                break
            raw_container = c

        # if we found the right container we can proceed
        if raw_container:
            break

    # if the found container is sleeping, restart it
    if raw_container and raw_container.status != 'running':
        raw_container.restart()
    # if after looping we didn't find anything, start a new container
    elif raw_container is None:
        raw_container = docker_client.containers.run(**kwargs)

    # clean up unless the fixture or the CLI asked to keep the container alive
    if not (keepalive or request.config.getoption('keepalive')):
        request.addfinalizer(...)

    container = Container(raw_container)

    # run the user-supplied (or default) readiness check
    wait_for_callable('Waiting for container to be ready', callable)

    return container
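As a usage sketch of the proposed options (hypothetical: it publishes a fixed port so that a plain socket probe can act as the readiness callable):

import socket

from pytest_docker_tools import container, fetch

redis_image = fetch(repository='redis:latest')

def redis_ready():
    # readiness probe: succeeds once the published port accepts connections
    try:
        socket.create_connection(('127.0.0.1', 6379), timeout=1).close()
        return True
    except OSError:
        return False

redis = container(
    image='{redis_image.id}',
    ports={'6379/tcp': 6379},
    # the two options proposed in this issue
    keepalive=True,
    callable=redis_ready,
)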

I'm happy to implement those changes and submit a PR.

Jc2k commented

Hi! Thanks for the feedback!

Just to check that this isn't already enough for you: In theory you should be able to use the same scope modifiers as normal py.test fixtures. That was always the plan, but I haven't tried it yet. So do something like:

redis_image = fetch(repository='redis:latest')

redis = container(
    image='{redis_image.id}',
    scope='session',
)

That would create a redis container once (the first time a test needs one). That same container would then be reused by any other tests that need a redis. The module and package scopes should also work.
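For example, two tests sharing it might look like this (hypothetical tests; I'm assuming the Container wrapper exposes the underlying docker container's status):

def test_redis_starts(redis):
    # the first test to request the fixture starts the container
    assert redis.status == 'running'

def test_redis_is_reused(redis):
    # later tests in the same session get the same container back
    assert redis.status == 'running'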

This is definitely something that I need to make more obvious in the README - and add tests for - so I'm curious if it works for you.

Thanks for the feedback. Using the fixture scoped at the session level is very useful, and I have been doing that already.

However, my use case is one where I develop my codebase and run tests iteratively. Since any containers and/or images created during the tests, even at session scope, are discarded once the session is over, I have to re-download the image and re-run the container whenever I re-run my tests.

For instance, suppose that initialising a container takes a long time because it requires a very expensive setup operation. Then it would be useful to persist that container across sessions.

Let me know if that makes sense.

Jc2k commented

OK - I was kind of worried you might say that :-) I totally get the use case, though I think we will have to think through how to make it fit cleanly, as I have a few reservations about the right way to approach this.

E.g.

from pytest_docker_tools import container, fetch, volume


minio_image = fetch(repository='minio/minio:latest')

minio_volume = volume(
    initial_content={
        'bucket-1': None,
        'bucket-2/example.txt': b'Test file 1',
    }
)

minio = container(
    image='{minio_image.id}',
    command=['server', '/data'],
    volumes={
        '{minio_volume.name}': {'bind': '/data'},
    },
    environment={
        'MINIO_ACCESS_KEY': 'minio',
        'MINIO_SECRET_KEY': 'minio123',
    },
    keepalive=True,
)

def test_volume_is_seeded(minio):
    files = minio.get_files('/data')
    assert files['data/bucket-2/example.txt'] == b'Test file 1'
    assert files['data/bucket-1'] is None

Looking at this and thinking about the issue my first thoughts are:

  • Any support containers (like postgres, redis, etc.) would also have to be marked keepalive - otherwise a new instance of this container is created (and never destroyed) every time the support container is recreated. With the default scope, that is once per test. This would be surprising and annoying to clean up.
  • The default scope of the volume is function. So a new volume would be created for every test, which in turn would create a new container for every test.
  • It would try to delete the volume whilst the container it was just attached to is still running. This will fail.
  • The same two points apply to networks as well.
  • This won't work well with pytest-xdist - unless your tests are designed to share a single environment, which I don't think can be reasonably expected.

From a user experience point of view I worry about what the clean up story is. When do you stop these containers? How do you identify and stop them (we don't name them, on the expectation that everything is ephemeral and parallelisable)? With the current implementation the clean up story is that you don't have to clean up. With docker-compose the clean up story is that you do docker-compose down -v.

With keepalive it feels like we are building a mini docker-compose. In which case the big question is: what does this fixture (with keepalive) get you over a docker-compose environment? If you had a fixture that did a docker-compose up for you, would you need this plugin? If we can answer that question, maybe we can pull that bit out and make it useful outside of the normal fixture lifecycle.

You made very good points; in fact, you made a good case that it might be better after all to simply maintain a docker-compose file for managing the containers that must outlive the test session.

On the other hand, I like the idea of being able to run the tests simply by calling pytest and letting it handle fixture set-up and clean-up. In fact, there is already a plugin for managing docker-compose files.

In both plugins, the containers are destroyed after the test. I think there is no way for pytest to know that you are done with your iterative testing session, so one might ask: why would pytest create persistent Docker containers but not destroy them? Ideally, the environment before and after the test run should be kept the same.

Overall, I think my initial problem might be solved after the discussions we had here.

Jc2k commented

For your specific case you might be able to do a cheeky:

@pytest.fixture
def docker_environment():
    import subprocess
    subprocess.check_call(['docker-compose', 'up', '-d'])

Because docker-compose is idempotent this is a fairly safe operation. You can just run py.test and be happy the right thing will happen, and only have to do docker-compose down when you need to.
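If you wanted pytest to clean up by default but still be able to keep the environment around, something like this conftest.py sketch should work (the --keep-compose flag is a made-up name):

# conftest.py - sketch; --keep-compose is a made-up option name
import subprocess

import pytest


def pytest_addoption(parser):
    parser.addoption(
        '--keep-compose', action='store_true', default=False,
        help='leave the docker-compose environment running after the session',
    )


@pytest.fixture(scope='session')
def docker_environment(request):
    # idempotent: only brings up services that are not already running
    subprocess.check_call(['docker-compose', 'up', '-d'])
    yield
    if not request.config.getoption('--keep-compose'):
        subprocess.check_call(['docker-compose', 'down', '-v'])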

I think you could even support xdist with something like:

@pytest.fixture
def docker_environment(worker_id):
    import subprocess
    # note: -p is a docker-compose global option, so it goes before 'up'
    subprocess.check_call(['docker-compose', '-p', worker_id, 'up', '-d'])

(The worker_id fixture ships with pytest-xdist and gives each worker a unique name - so you could have an environment per CPU core.)

But it sounds like we can close this ticket? Or do you have any more ideas/thoughts?

I think we can close this. Thanks for the tips!