eficode/robotframework-imagehorizonlibrary

Locate improvement: ability to give hints where to look for a pattern


I think that on big-resolution screens, to improve performance, it would be nice to have an option to give a hint about where to start looking for the pattern; when the hint fails, resume the standard approach.
I see two solutions:

  • give a rectangle where the search should start, e.g. (10, 10, 200, 200)
  • or give general information about a sector, like TOP, CENTER, BOTTOM, LEFT or RIGHT, e.g. BOTTOM-RIGHT or TOP-CENTER (see the sketch below)

In addition, there should be information about which display to search.
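
A minimal sketch of how a sector hint could map to a search region, assuming pyautogui's real size() helper; the sector_to_region name and the thirds-based split are hypothetical, not part of any library:

```python
import pyautogui

def sector_to_region(sector):
    """Hypothetical helper: map a sector name like 'BOTTOM-RIGHT' to a
    (left, top, width, height) region tuple covering 1/9 of the screen."""
    width, height = pyautogui.size()  # primary screen only
    rows = {'TOP': 0, 'CENTER': 1, 'BOTTOM': 2}
    cols = {'LEFT': 0, 'CENTER': 1, 'RIGHT': 2}
    vertical, _, horizontal = sector.upper().partition('-')
    row = rows.get(vertical, 1)                # default to CENTER
    col = cols.get(horizontal or vertical, 1)  # 'LEFT'/'RIGHT' alone also work
    return (col * width // 3, row * height // 3, width // 3, height // 3)

print(sector_to_region('BOTTOM-RIGHT'))  # (1280, 720, 640, 360) on a 1920x1080 screen
```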

A nice idea, although there are many implementation concerns:

  • Can pyautogui even target screens other than the main monitor?
  • How to deal with the fact that pyautogui handles coordinates differently between OSes?
  • What should happen if the user gives incorrect coordinates or an incorrect sector? Should the library then just search the whole screen area? What is the worst-case performance hit in that case?
  • What is the performance gain in the best case?
  • How to enhance debugging so that it is clear where the reference image was not found (in the hinted coordinates/sector or on the whole screen)?
  • Should this be a separate keyword? At the very least, it is backwards incompatible with the current implementation. You could do this by having different "modes" for locating, but that pattern is usually not a good idea because it sacrifices the usability of the library.

I think the best option would be to add kwargs to Locate.
I see that support for multiple screens is still only on the pyautogui roadmap, so maybe that part will have to wait (https://github.com/asweigart/pyautogui/blob/master/docs/roadmap.rst).

As for when the search with a hint fails: I think it should fall back to searching the whole screen, since it is only a hint.
As for the performance gain: since the area where we expect to find the pattern is 1/9 of a typical full-screen search (when selecting a sector such as TOP-RIGHT), the decrease in search time should be significant. Even the worst-case scenario is not that bad: the failed sector search (1/9) plus the full-screen fallback (9/9) adds up to 10/9, i.e. about 111% of a typical search. So I think there is much to gain. A sketch of the fallback logic follows below.
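
A minimal sketch of the hint-plus-fallback search, using pyautogui's real locateOnScreen and its region parameter; locate_with_hint is a hypothetical name, and note that older pyautogui returns None on a miss while newer versions raise pyautogui.ImageNotFoundException instead:

```python
import pyautogui

def locate_with_hint(image_path, region_hint=None):
    """Hypothetical helper: search the hinted region first, then fall back
    to the whole screen. Worst case is ~10/9 of a plain full-screen search."""
    if region_hint is not None:
        # region is (left, top, width, height); a miss returns None here
        match = pyautogui.locateOnScreen(image_path, region=region_hint)
        if match is not None:
            return match
    return pyautogui.locateOnScreen(image_path)  # full-screen fallback

# Usage with the sector helper sketched earlier:
# locate_with_hint('reference.png', sector_to_region('TOP-CENTER'))
```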

There are also two other options that I noticed when I started looking for answers (http://pyautogui.readthedocs.io/en/latest/screenshot.html):

  • Grayscale matching (grayscale=True): with this enabled, search time decreases by around 30%.
  • Pixel matching (tolerance, default 0): it could help in cases where colors change a little, but it is much slower. See the usage sketch below.
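
For reference, a short sketch of both options against pyautogui's documented API; the image path and pixel coordinates are illustrative only:

```python
import pyautogui

# Grayscale matching: the docs state grayscale=True can give roughly a 30%
# speedup by desaturating both the screenshot and the reference image.
box = pyautogui.locateOnScreen('button.png', grayscale=True)

# Pixel matching: checks a single pixel against an expected RGB color;
# tolerance lets each channel differ by up to that amount (default 0).
same = pyautogui.pixelMatchesColor(100, 200, (130, 25, 215), tolerance=10)
```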

Thanks for your input, but we will measure the performance hits or gains empirically while implementing, and answer the rest of the questions then.

Grayscale matching in pyautogui is next to useless; in my experience, it just leads to not being able to find anything. Pixel matching is a completely different thing and, the way I understand it, is not applicable to reference image recognition in any way.