yeeeqichen/KGQA

The embedding models perform poorly

Closed this issue · 2 comments

Hi, I deduplicated and shuffled kb.txt, split it 8:1:1 into train/validation/test sets, and then trained RotatE (1000 epochs) and TransE (2000 epochs) separately (no type constraint). The results are nowhere near what you recorded at the end of notes.txt (I can only reach the numbers in notes when I use the training set as the test set). Did you do any additional processing?
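(For reference, the dedup/shuffle/8:1:1 split described above could look something like this minimal sketch; the output filenames, triple format, and fixed seed are my assumptions, not necessarily this repository's actual preprocessing.)

```python
import random

# Read kb.txt (assumed: one "head\trelation\ttail" triple per line),
# deduplicate, shuffle, and split 8:1:1 into train/valid/test files.
def split_kb(path="kb.txt", seed=0):
    with open(path, encoding="utf-8") as f:
        triples = list({line.strip() for line in f if line.strip()})  # dedup
    random.Random(seed).shuffle(triples)
    n = len(triples)
    n_train, n_valid = int(n * 0.8), int(n * 0.1)
    splits = {
        "train.txt": triples[:n_train],
        "valid.txt": triples[n_train:n_train + n_valid],
        "test.txt":  triples[n_train + n_valid:],
    }
    for name, rows in splits.items():
        with open(name, "w", encoding="utf-8") as f:
            f.write("\n".join(rows) + "\n")
    return {name: len(rows) for name, rows in splits.items()}
```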
Here are the TransE results:
```
no type constraint results:
metric:           MRR       MR           hit@10    hit@3     hit@1
l(raw):           0.059048  4849.178711  0.113640  0.058467  0.031442
r(raw):           0.187911  3171.404785  0.360683  0.216125  0.104582
averaged(raw):    0.123479  4010.291748  0.237161  0.137296  0.068012

l(filter):        0.142708  4566.729980  0.207516  0.160428  0.108774
r(filter):        0.238894  3169.721924  0.377976  0.260293  0.171732
averaged(filter): 0.190801  3868.226074  0.292746  0.210361  0.140253
0.292746
0.2927459180355072
```
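(As a sanity check on what those columns mean: given the rank of the correct entity for each test triple, the metrics above can be computed as in this minimal sketch. In the "filter" setting the ranks are computed after removing all other known-true triples from the candidates, which is why the filtered numbers are higher than the raw ones.)

```python
# Compute MRR, MR, and hit@k from a list of ranks
# (rank 1 = the correct entity was scored best).
def link_prediction_metrics(ranks, ks=(1, 3, 10)):
    n = len(ranks)
    metrics = {
        "MRR": sum(1.0 / r for r in ranks) / n,  # mean reciprocal rank
        "MR": sum(ranks) / n,                    # mean rank
    }
    for k in ks:
        metrics[f"hit@{k}"] = sum(r <= k for r in ranks) / n
    return metrics
```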

Hi, may I ask roughly how long it took to train these two models, and would you mind sharing the relevant hyperparameters?

RotatE took two hours for 1500 epochs in total, and TransE took two hours for 4000 epochs.
The RotatE setup was:

```python
rotate = RotatE(
    ent_tot = train_dataloader.get_ent_tot(),
    rel_tot = train_dataloader.get_rel_tot(),
    dim = 256,
    margin = 6.0,
    epsilon = 2.0,
)

# define the loss function
model = NegativeSampling(
    model = rotate,
    loss = SigmoidLoss(adv_temperature = 2),
    batch_size = train_dataloader.get_batch_size(),
    regul_rate = 0.0
)

# train the model
trainer = Trainer(model = model, data_loader = train_dataloader, train_times = 500, alpha = 2e-5, use_gpu = True, opt_method = "adam")
```

The TransE setup was:

```python
transe = TransE(
    ent_tot = train_dataloader.get_ent_tot(),
    rel_tot = train_dataloader.get_rel_tot(),
    dim = 256,
    p_norm = 1,
    norm_flag = True)

# define the loss function
model = NegativeSampling(
    model = transe,
    loss = MarginLoss(margin = 5.0),
    batch_size = train_dataloader.get_batch_size()
)

# train the model
trainer = Trainer(model = model, data_loader = train_dataloader, train_times = 4000, alpha = 1.0, use_gpu = True)
```
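(For context on what `MarginLoss(margin = 5.0)` optimizes with `p_norm = 1`: TransE scores a triple by ‖h + r − t‖₁, and the margin ranking loss pushes each positive triple's score at least `margin` below its negative sample's. A minimal NumPy sketch of that idea, not OpenKE's actual implementation:)

```python
import numpy as np

# TransE score: L1 distance ||h + r - t||_1 (lower = more plausible triple).
def transe_score(h, r, t, p_norm=1):
    return np.linalg.norm(h + r - t, ord=p_norm, axis=-1)

# Margin ranking loss: max(0, margin + pos_score - neg_score), averaged,
# i.e. positives must score at least `margin` below negatives to incur no loss.
def margin_loss(pos_score, neg_score, margin=5.0):
    return np.maximum(0.0, margin + pos_score - neg_score).mean()
```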

I ran it with both dim = 1024 and dim = 256 and the results were about the same. Is there anything else I should tune?