codekansas/keras-language-modeling

Question on top-1 precision evaluation formula

wailoktam opened this issue · 1 comment

Hi, I am wondering why you calculate the top-1 precision by checking whether the answer assigned the maximum score by the model (over all candidates) is the same as the good answer assigned the maximum score (among only the good answers), if my interpretation is not wrong.

I tried replacing the computation of c_1 with:

c_1 += 1 if max_r in d['good'] else 0 

, which I think is more appropriate. But it seems to go wrong, as it always ends up being zero. Can anyone give me any insight into this? Many thanks.

    indices = d['good'] + d['bad']
    answers = self.pada([self.answers[i] for i in indices])
    question = self.padq([d['question']] * len(indices))
    n_good = len(d['good'])
    sims = model.predict([question, answers], batch_size=500).flatten()
    r = rankdata(sims, method='max')
    max_r = np.argmax(r)           # position of the top-ranked answer among all candidates
    max_n = np.argmax(r[:n_good])  # position of the top-ranked answer among the good ones
    c_1 += 1 if max_r == max_n else 0          # top-1 precision: best overall answer is a good one
    c_2 += 1 / float(r[max_r] - r[max_n] + 1)  # reciprocal-rank term: 1.0 when a good answer ranks first
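
For anyone tracing through this, here is a minimal, self-contained toy run of the ranking step (hypothetical similarity scores, no model or repo code involved) showing when max_r == max_n holds:

    import numpy as np
    from scipy.stats import rankdata

    # Hypothetical scores for 2 good answers (listed first) followed by 3 bad ones.
    sims = np.array([0.2, 0.9, 0.5, 0.1, 0.3])
    n_good = 2

    r = rankdata(sims, method='max')  # r = [2, 5, 4, 1, 3]
    max_r = np.argmax(r)              # 1: best candidate over all answers
    max_n = np.argmax(r[:n_good])     # 1: best candidate among the good ones

    print(max_r == max_n)                      # True -> counts toward top-1 precision
    print(1 / float(r[max_r] - r[max_n] + 1))  # 1.0 -> reciprocal-rank term

    # If a bad answer had the highest score instead, max_r would point past the
    # first n_good positions, max_r == max_n would be False, and c_1 would not
    # increment.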

Your formula probably wouldn't work because max_r is the position (argmax) of the highest-ranked answer within indices, not an actual answer ID from d['good'] (i.e. not np.max), so max_r in d['good'] compares a positional index against answer IDs and essentially never matches. I guess max_r should really be called argmax_r... The logic of 1 if max_r == max_n else 0 is that the prediction counts as correct if the index of the maximum over all of d['good'] + d['bad'] is also the index of the maximum over just d['good'] (the first n_good entries), i.e. the top-ranked answer overall is a good one. Hope this makes sense.
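
If you did want the check written as a membership test, something like this (untested sketch, with indices, r, d, and np as in the snippet above) should work, by mapping the positional argmax back to an answer ID before testing membership:

    predicted_id = indices[np.argmax(r)]          # answer ID of the top-ranked candidate
    c_1 += 1 if predicted_id in d['good'] else 0  # top-1 check via answer IDs

Since indices = d['good'] + d['bad'], this is equivalent (up to rank ties) to the max_r == max_n comparison.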

Edit: I just changed some parts in the repository