microsoft/UniVL

Run Without Distributed

Maddy12 opened this issue · 3 comments

Hello, I am trying to run your code but I keep running into issues with the distributed learning. Is it possible to run without this?

Hi @Maddy12, what is the error when you run with distributed launch? It can be run on a single GPU. Otherwise, I think you would need to modify the code to a non-distributed version.

I apologize for wasting your time. There was an error, but it was not in the distributed part. The error is in metrics.py, in the compute_metrics function.

My hack is:

if isinstance(x, list):
    x = np.concatenate(x)
sx = np.sort(-x, axis=1)

I do not know if this is expected behavior or if I am now implementing something incorrectly.
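For context, here is a minimal sketch of how that hack could sit inside a compute_metrics-style retrieval routine. The ranking and metric lines after the sort are my assumption about what such a function typically computes (R@1/R@5/R@10/median rank over a similarity matrix), not a copy of the repo's code, and the names are only illustrative.

import numpy as np

def compute_metrics(x):
    # x may arrive as a list of per-batch score arrays; stack it into one
    # 2D similarity matrix (queries x candidates) before ranking.
    if isinstance(x, list):
        x = np.concatenate(x)
    sx = np.sort(-x, axis=1)              # scores per row, best match first
    d = np.diag(-x)[:, None]              # score of the ground-truth pair
    ind = (sx - d == 0).argmax(axis=1)    # rank index of the ground truth
    return {
        "R1": float(np.sum(ind == 0)) * 100 / len(ind),
        "R5": float(np.sum(ind < 5)) * 100 / len(ind),
        "R10": float(np.sum(ind < 10)) * 100 / len(ind),
        "MR": float(np.median(ind)) + 1,
    }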

Hi @Maddy12, what is the error? Can you paste it here? Or you can test x = np.concatenate(tuple(x), axis=0) as follows:

if isinstance(x, list): 
    x = np.concatenate(tuple(x), axis=0)
sx = np.sort(-x, axis=1)
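For a list of equally shaped 2D NumPy arrays, np.concatenate(x) and np.concatenate(tuple(x), axis=0) should produce the same stacked matrix, since axis=0 is the default and np.concatenate accepts either a list or a tuple. A quick sanity check (the array shapes here are just illustrative):

import numpy as np

# Two fake per-batch score blocks, e.g. 2 queries x 4 candidates each.
chunks = [np.random.rand(2, 4), np.random.rand(2, 4)]

a = np.concatenate(chunks)                   # default axis=0
b = np.concatenate(tuple(chunks), axis=0)    # the variant suggested above
assert a.shape == (4, 4) and np.array_equal(a, b)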