graph4ai/graph4nlp

Why is adjusting the learning rate commented out?

code2graph opened this issue · 7 comments

Questions and Help

I see that in the NMT example, adjusting the learning rate is disabled:

```python
for epoch in range(200):
    self.model.train()
    self.train_epoch(epoch, split="train")

    # self._adjust_lr(epoch)
    if epoch >= 0:
```

I am wondering whether that was done purposefully. Were you getting bad results when it was enabled?

I would appreciate it if you could share your insight.

No, I just commented it out during debugging since I didn't tune it.

@AlanSwift what do I have to do to tune it? I'm not sure I understood. It seemed to me that the same learning rate adjustment was used in the other applications as well. My questions are:

  1. What do you mean when you say you haven't tuned it?
  2. I would be happy to follow the steps if you share them with me, and I can contribute back to the library.

Early stopping is a good trick to get better performance, but you have to tune the stopping time, and I haven't tuned it. A PR is welcome!
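For reference, "early stopping" here means tracking a validation metric and stopping once it has not improved for a fixed number of epochs (the patience, i.e. the stopping time to tune). Below is a minimal generic sketch, assuming a maximize-style validation score such as BLEU; the function and its `train_one_epoch` / `evaluate` callbacks are illustrative placeholders, not graph4nlp APIs.

```python
def train_with_early_stopping(train_one_epoch, evaluate, max_epochs=200, patience=10):
    """Generic patience-based early stopping (illustrative, not graph4nlp code).

    train_one_epoch(epoch) runs one training epoch; evaluate() returns a
    validation score where higher is better (e.g. BLEU for NMT).
    """
    best_score, best_epoch, bad_epochs = float("-inf"), -1, 0
    for epoch in range(max_epochs):
        train_one_epoch(epoch)
        score = evaluate()
        if score > best_score:
            # New best validation score: reset the patience counter.
            best_score, best_epoch, bad_epochs = score, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # the "stopping time" to tune
                break
    return best_epoch, best_score
```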

@AlanSwift can you please give me some pointers on how I can tune it? I would be happy to work on it and contribute a PR.

@AlanSwift can you please provide any pointers? 🙏

@AlanSwift sorry for pinging you again with this issue.

Can you please provide me with some pointers? I am happy to work on that, tune it, and then contribute back to the library. I would really appreciate your help.

I'm really sorry for the late reply; I've been busy for the past few days.
You can uncomment that line and tune these parameters: lr_start_decay_epoch, lr_decay_per_epoch, lr_decay_rate.
The current lr schedule implementation is an exponential decay (ExponentialLR).
I suggest you use the official PyTorch implementation, torch.optim.lr_scheduler.ExponentialLR. There are also other learning rate schedulers if you are interested.
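For anyone following along, here is a minimal, self-contained sketch of how those three parameters could be wired to the official torch.optim.lr_scheduler.ExponentialLR. The model and optimizer are placeholders, and the way lr_start_decay_epoch and lr_decay_per_epoch gate the scheduler is only an assumption about their intended meaning, not the actual graph4nlp implementation.

```python
import torch
from torch.optim.lr_scheduler import ExponentialLR

# Placeholder model/optimizer; in the NMT example these come from the trainer.
model = torch.nn.Linear(16, 16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

lr_decay_rate = 0.9          # multiplicative decay factor (gamma) -- value is illustrative
lr_start_decay_epoch = 20    # epoch at which decay begins (assumed semantics)
lr_decay_per_epoch = 5       # decay once every N epochs after that (assumed semantics)

scheduler = ExponentialLR(optimizer, gamma=lr_decay_rate)

for epoch in range(200):
    # ... training for this epoch (e.g. self.train_epoch(epoch, split="train")) ...
    if epoch >= lr_start_decay_epoch and (epoch - lr_start_decay_epoch) % lr_decay_per_epoch == 0:
        scheduler.step()     # lr <- lr * gamma
```

Tuning would then amount to sweeping gamma (lr_decay_rate), the start epoch, and the decay interval against the validation metric.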