dvlab-research/PFENet

Preprocessing

steveazzolin opened this issue · 1 comment

Hi, I'm reading your code and I have a question:

is there a particular reason why you decided to implement the transformations (in transform.py) by hand instead of relying on the `torchvision.transforms` package?
Do you think this could be a source of slowdowns in the code?

Thanks in advance!

Thanks for your interest in our work.

We mainly followed the implementation of https://github.com/hszhao/semseg, which customizes the transformation functions in ways that might be more suitable for semantic segmentation than the functions available in `torchvision.transforms` at that time.
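To illustrate the general motivation (this is a hedged sketch, not the actual code in transform.py): segmentation transforms typically take the image *and* its label mask together, so that any random decision, such as a coin flip for horizontal flipping, is applied identically to both. The single-input transforms in `torchvision.transforms` at the time did not support this paired interface directly.

```python
import random
import numpy as np

def paired_hflip(image, label, p=0.5):
    """Randomly horizontal-flip an (H, W, C) image and its (H, W) label
    mask *together*, so they stay spatially aligned.

    This paired interface is a hypothetical example of the style used
    in segmentation codebases; it is not the exact function from
    PFENet's transform.py.
    """
    if random.random() < p:
        image = image[:, ::-1].copy()   # flip along the width axis
        label = label[:, ::-1].copy()   # apply the SAME flip to the mask
    return image, label
```

With independent single-input transforms, the image and mask could receive different random flips, silently corrupting the training labels; the paired design rules that out by construction.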