ElSnoMan/pyleniumio

Allow reusing a `Pylenium` instance across tests

rnestler opened this issue · 4 comments

Re-creating the whole Pylenium session for every test is really expensive. Since the `py` fixture is bound to function scope, it encourages putting multiple checks into one test function to improve performance. But this is bad for result reporting if things go wrong.

For example, I sometimes loop over different values inside a single test instead of using `pytest.mark.parametrize`, to avoid the performance penalty of recreating the Pylenium session for every parameter. The downside is that if one value fails, the test is aborted and the remaining values aren't tested, so we get no results for them.
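To illustrate the trade-off, here is a minimal sketch of the parametrized variant (the URL, locales, and selector are made up): every value is reported as its own test and a failure doesn't stop the remaining values, but each value also pays for a fresh function-scoped `py` session.

import pytest

# Hypothetical parametrized test: each value gets its own result entry,
# but also its own browser session because `py` is function-scoped.
@pytest.mark.parametrize("locale", ["en", "de", "fr"])
def test_homepage_heading(py, locale):
    py.visit(f"https://example.com/{locale}")
    assert py.get("h1").should().be_visible()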

I'm not sure yet what an ideal solution would look like. I experimented locally with a class-scoped `_py` fixture that the `py` fixture uses:

import pytest
from pylenium.driver import Pylenium


@pytest.fixture(scope="class")
def _py(_py_config):
    # One Pylenium instance shared by all tests in the same class
    py = Pylenium(_py_config)
    yield py
    py.quit()


@pytest.fixture(scope="function")
def py(_py, test_case, py_config, request, rp_logger):
    # Per-test wrapper that hands out the class-scoped instance
    ...

This reuses the Pylenium instance across tests in the same class. Of course, such behavior would need to be documented well, and there is probably a better approach that I'm missing.
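For illustration, tests using that setup could look like this (class name, URL, and selectors are made up); all three tests share the same browser because `_py` is class-scoped, while the function-scoped `py` wrapper is still evaluated per test:

# Hypothetical test class: one browser session for the whole class.
class TestCheckout:
    def test_open_cart(self, py):
        py.visit("https://example.com/cart")

    def test_add_item(self, py):
        py.get("#add-to-cart").click()

    def test_remove_item(self, py):
        py.get("#remove-item").click()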

We definitely want a function-scoped Pylenium because that is the ideal way to write tests (i.e. modular, atomic, etc.). However, there are certainly cases where you may want a session-scoped or class-scoped instance, and in those cases you can already do that just like you did.

If we want to add this as a fixture for all users, then yeah, we just need to come up with a good solution.

Related to the expensive part of creating a new session with `py`:

I'm wondering if a flag or config value could be added to avoid checking the current browser version and then reaching out to the internet for a newer one every time a `py` session is created.

It was sort of funny: when I first started using Pylenium, I ran into an issue where WebDriver Manager took forever to go through its steps at the beginning of each test. It didn't happen all the time, but by the end of the day it was very slow. I thought my network was occasionally slow, but then figured out it was actually GitHub's API rate limiting kicking in. I tried for a while to work around the WDM functionality in my own pytest framework, but without success.

I realize there are workarounds such as:

  • pinning the browser version (see the config sketch after this list)
  • setting the local key value in the config
  • using selenium server/grid
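If I read the default config right, the first workaround boils down to pinning a concrete version in pylenium.json so WebDriver Manager stops asking the internet for "latest" on every session. A sketch only, with the key names assumed from the default config and the version number made up; both are worth double-checking against your Pylenium release:

{
  "driver": {
    "browser": "chrome",
    "version": "96.0.4664.45"
  }
}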

@rnestler Sorry for taking so long to get to this. I'm thinking of adding two more Pylenium fixtures:

  • pys - session-scoped Pylenium instance
  • pyc - class-scoped Pylenium instance

What do you think?
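To make the idea concrete, here is a rough sketch of how such fixtures could be wired up. This is not the actual implementation: the config handling is simplified and assumes `PyleniumConfig` can be constructed with its defaults, rather than being loaded from pylenium.json the way the existing `py` fixture does it.

import pytest
from pylenium.config import PyleniumConfig
from pylenium.driver import Pylenium


@pytest.fixture(scope="session")
def pys():
    # One Pylenium instance for the entire test run
    py = Pylenium(PyleniumConfig())
    yield py
    py.quit()


@pytest.fixture(scope="class")
def pyc():
    # One Pylenium instance per test class
    py = Pylenium(PyleniumConfig())
    yield py
    py.quit()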

Sounds reasonable and useful 🙂