PyTorch DataLoaders are used; now how to instantiate the model training as before?
rishikeshraj5 opened this issue · 2 comments
rishikeshraj5 commented
Hi sir,
As you suggested, I have used PyTorch DataLoaders due to CUDA memory issues. Now please guide me with code on how to train, test, and validate the model so I can replicate your output.
I have tried the code below.
# Imports (not shown in the original snippet); Dataset, HybridModel, EMOTIONS,
# make_train_step and make_validate_fnc are assumed to come from the repository code.
import time
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader as DL
from sklearn.metrics import accuracy_score

train_set = Dataset(X=X_train, y=Y_train, mode="train")
tr_loader = DL(train_set, batch_size=8, num_workers=0, shuffle=True)
test_set = Dataset(X=X_test, y=Y_test, mode="train")  # note: the held-out set is also built with mode="train" here
ts_loader = DL(test_set, batch_size=8, num_workers=0, shuffle=False)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Selected device is {}'.format(device))

model = HybridModel(num_emotions=len(EMOTIONS)).to(device)
print('Number of trainable params: ', sum(p.numel() for p in model.parameters()))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-3, momentum=0.8)
#%%
# The original snippet passed an undefined name loss_fnc; criterion defined above is used instead.
# Note: train_step and validate are created here but not used in the loop below.
train_step = make_train_step(model, criterion, optimizer=optimizer)
validate = make_validate_fnc(model, criterion)
#%%
verbose = True
Losses = []
Accuracies = []
epochs = 50
DLS = {"train": tr_loader, "valid": ts_loader}

start_time = time.time()
for e in range(epochs):
    epochLoss = {"train": 0, "valid": 0}
    epochAccs = {"train": 0, "valid": 0}

    for phase in ["train", "valid"]:
        if phase == "train":
            model.train()
        else:
            model.eval()

        lossPerPass = []
        accuracy = []

        for X, y in DLS[phase]:
            X, y = X.to(device), y.to(device).view(-1)
            optimizer.zero_grad()
            alpha = 1.0
            beta = 1.0
            # Gradients are only tracked during the training phase
            with torch.set_grad_enabled(phase == "train"):
                pred_emo, output_softmax = model(X)
                emotion_loss = criterion(pred_emo, y)
                total_loss = alpha * emotion_loss
                if phase == "train":
                    total_loss.backward()
                    optimizer.step()
            lossPerPass.append(total_loss.item())
            accuracy.append(accuracy_score(torch.argmax(torch.exp(output_softmax.detach().cpu()), dim=1), y.cpu()))

        # Checkpoint after each phase (path truncated in the original post)
        torch.save(model.state_dict(), "E................................/Epoch_{}.pt".format(e + 1))
        epochLoss[phase] = np.mean(np.array(lossPerPass))
        epochAccs[phase] = np.mean(np.array(accuracy))

    # Epoch Checkpoint // All or Best
    Losses.append(epochLoss)
    Accuracies.append(epochAccs)

    if verbose:
        print("Epoch : {} | Train Loss : {:.5f} | Valid Loss : {:.5f} "
              "| Train Accuracy : {:.5f} | Valid Accuracy : {:.5f}".format(
                  e + 1, epochLoss["train"], epochLoss["valid"],
                  epochAccs["train"], epochAccs["valid"]))
Is this okay? I am using a batch size of 8, but the model is performing very poorly with this training: over 50 epochs the accuracy fluctuates between 25% and 30%. For how many epochs did you run it?
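For the "test" part of the question, a minimal sketch of how the held-out loader could be evaluated after training is shown below. It assumes the same HybridModel, Dataset, DL, EMOTIONS, and ts_loader names used above, and the checkpoint filename "Epoch_50.pt" is a hypothetical placeholder for whichever saved epoch is chosen.

# Minimal evaluation sketch (assumes the objects defined in the training snippet above;
# the checkpoint path "Epoch_50.pt" is a hypothetical placeholder).
import torch
from sklearn.metrics import accuracy_score

model = HybridModel(num_emotions=len(EMOTIONS)).to(device)
model.load_state_dict(torch.load("Epoch_50.pt", map_location=device))
model.eval()

all_preds, all_targets = [], []
with torch.no_grad():  # no gradients are needed for evaluation
    for X, y in ts_loader:
        X, y = X.to(device), y.to(device).view(-1)
        pred_emo, output_softmax = model(X)
        all_preds.append(torch.argmax(output_softmax, dim=1).cpu())
        all_targets.append(y.cpu())

test_acc = accuracy_score(torch.cat(all_targets), torch.cat(all_preds))
print("Test accuracy: {:.4f}".format(test_acc))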
y-mk-yc commented
Hello sir,
Thanks for your code. I used PyTorch DataLoaders to run your code for about 1500 epochs, and the accuracy is 70%. I wonder how many epochs you ran it for?
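Since both reports here are about slow convergence with plain SGD, a learning-rate scheduler is one thing that can sometimes reduce the number of epochs needed. The sketch below attaches ReduceLROnPlateau to the optimizer from the snippet above; the factor and patience values are illustrative assumptions, not values from the repository.

from torch.optim.lr_scheduler import ReduceLROnPlateau

# Illustrative assumption: halve the learning rate when the validation loss stops improving
scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.5, patience=20)

# Inside the epoch loop, after the "valid" phase has finished:
# scheduler.step(epochLoss["valid"])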