Simple NN example issue
goodle06 opened this issue · 3 comments
Hello! I'm trying to recreate the simple net example, but I'm stuck here:
for( int epoch = 1; epoch < 15; ++epoch ) {
    float epochLoss = 0; // total loss for the epoch
    for( int iter = 0; iter < iterationPerEpoch; ++iter ) {
        // trainData methods are used to transmit the data into the blob
        trainData.GetSamples( iter * batchSize, dataBlob );
        trainData.GetLabels( iter * batchSize, labelBlob );
        net.RunAndLearnOnce(); // run the learning iteration
        epochLoss += loss->GetLastLoss(); // add the loss value on the last step
    }
    ::printf( "Epoch #%02d avg loss: %f\n", epoch, epochLoss / iterationPerEpoch );
    trainData.ReShuffle( random ); // reshuffle the data
}
I don't understand what class trainData (and testData) is, and I can't find the GetSamples and GetLabels functions on my own. Please help.
Hello!
The purpose of this sample was to show the general logic of the training loop.
The class itself wasn't included because it is too big: it's used in tests inside the company and contains too much additional code and too many dependencies.
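For illustration only, here is a minimal sketch of what such a dataset helper could look like (this is not the internal class; CMemoryDataset and everything in it are assumptions): flat float samples and one-hot labels stored contiguously, with batch windows copied into the blobs, and assuming CRandom::UniformInt returns an int in [min, max].

#include <NeoML/NeoML.h>
using namespace NeoML;

// Hypothetical stand-in for the internal trainData class (not the real one).
// Samples are flat float vectors of length imageSize; labels are one-hot
// rows of length classCount; both are stored contiguously, row after row.
class CMemoryDataset {
public:
    CMemoryDataset( int imageSize, int classCount ) :
        imageSize( imageSize ), classCount( classCount ) {}

    int Count() const { return labels.Size() / classCount; }

    // Fills the dataset from prepared arrays (produced by whatever loader you use)
    void SetData( const CArray<float>& newSamples, const CArray<float>& newLabels )
        { newSamples.CopyTo( samples ); newLabels.CopyTo( labels ); }

    // Copies a window of samples starting at sample index pos into the blob;
    // CopyFrom reads exactly blob->GetDataSize() floats from the pointer
    void GetSamples( int pos, const CPtr<CDnnBlob>& blob ) const
        { blob->CopyFrom( samples.GetPtr() + pos * imageSize ); }

    // Copies the matching one-hot labels starting at sample index pos
    void GetLabels( int pos, const CPtr<CDnnBlob>& blob ) const
        { blob->CopyFrom( labels.GetPtr() + pos * classCount ); }

    // Applies the same random permutation to samples and labels (Fisher-Yates)
    void ReShuffle( CRandom& random )
    {
        for( int i = Count() - 1; i > 0; i-- ) {
            const int j = random.UniformInt( 0, i );
            swapRows( samples, imageSize, i, j );
            swapRows( labels, classCount, i, j );
        }
    }

private:
    int imageSize;
    int classCount;
    CArray<float> samples; // Count() * imageSize values
    CArray<float> labels; // Count() * classCount values

    static void swapRows( CArray<float>& data, int rowSize, int i, int j )
    {
        for( int k = 0; k < rowSize; k++ ) {
            const float t = data[i * rowSize + k];
            data[i * rowSize + k] = data[j * rowSize + k];
            data[j * rowSize + k] = t;
        }
    }
};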
@FedyuninV, thank you. I guess my question isn't really an issue, but it would still be nice to have a working example. I've tried to adapt the code, but it doesn't work for me. Could you take a look?
// Load the serialized training images as a flat byte array
CArray<uchar> src_uchar;
CArchiveFile file( ( filename + ".ser" ).toStdString().c_str(), CArchive::TDirection::load );
CArchive archive( &file, CArchive::TDirection::load );
src_uchar.Serialize( archive );
archive.Close();
// Convert the bytes to floats
CArray<float> src;
src.SetSize( src_uchar.Size() );
for( int i = 0; i < src_uchar.Size(); i++ ) {
    src[i] = (float)src_uchar[i];
}
// Load the class code of each sample
CArray<int> codes;
CArchiveFile file2( ( filename + ".codes.ser" ).toStdString().c_str(), CArchive::TDirection::load );
CArchive archive2( &file2, CArchive::TDirection::load );
codes.Serialize( archive2 );
archive2.Close();
// Build one-hot label vectors from the codes
int codes_count = OCRKeys::count();
CArray<float> labels;
labels.SetSize( codes.Size() * codes_count );
for( int i = 0; i < labels.Size(); i++ ) {
    labels[i] = 0.0f;
}
for( int i = 0; i < codes.Size(); i++ ) {
    labels[i * codes_count + codes[i]] = 1.0f;
}
IMathEngine& mathEngine = GetDefaultCpuMathEngine();
CRandom random( 451 );
CDnn net( random, mathEngine );
CPtr<CSourceLayer> data = new CSourceLayer( mathEngine );
data->SetName( "data" );
net.AddLayer( *data );
CPtr<CSourceLayer> label = new CSourceLayer( mathEngine );
label->SetName( "label" );
net.AddLayer( *label );
// The first fully-connected layer of size 1024
CPtr<CFullyConnectedLayer> fc1 = new CFullyConnectedLayer( mathEngine );
fc1->SetName( "fc1" );
fc1->SetNumberOfElements( 1024 ); // set the number of elements
fc1->Connect( *data ); // connect to the previous layer
net.AddLayer( *fc1 );
// The activation function
CPtr<CReLULayer> relu1 = new CReLULayer( mathEngine );
relu1->SetName( "relu1" );
relu1->Connect( *fc1 );
net.AddLayer( *relu1 );
// The second fully-connected layer of size 512
CPtr<CFullyConnectedLayer> fc2 = new CFullyConnectedLayer( mathEngine );
fc2->SetName( "fc2" );
fc2->SetNumberOfElements( 512 );
fc2->Connect( *relu1 );
net.AddLayer( *fc2 );
// The activation function
CPtr<CReLULayer> relu2 = new CReLULayer( mathEngine );
relu2->SetName( "relu2" );
relu2->Connect( *fc2 );
net.AddLayer( *relu2 );
// The third fully-connected layer, of size equal to the number of classes
CPtr<CFullyConnectedLayer> fc3 = new CFullyConnectedLayer( mathEngine );
fc3->SetName( "fc3" );
fc3->SetNumberOfElements( codes_count );
fc3->Connect( *relu2 );
net.AddLayer( *fc3 );
// Cross-entropy loss function; this layer already calculates softmax
// on its inputs, so there is no need to add a softmax layer before it
CPtr<CCrossEntropyLossLayer> loss = new CCrossEntropyLossLayer( mathEngine );
loss->SetName( "loss" );
loss->Connect( 0, *fc3 ); // first input: the network response
loss->Connect( 1, *label ); // second input: the correct classes
net.AddLayer( *loss );
const int total = codes.Size();
const int batchSize = total; // the whole set goes in as a single batch
const int iterationPerEpoch = total / batchSize; // equals 1 here, since batchSize == total
CPtr<CDnnBlob> dataBlob = CDnnBlob::CreateListBlob( mathEngine, CT_Float, 1, batchSize, 402, 1 );
dataBlob->CopyFrom( src.GetPtr() );
CPtr<CDnnBlob> labelBlob = CDnnBlob::CreateDataBlob( mathEngine, CT_Float, 1, batchSize, codes_count );
labelBlob->CopyFrom( labels.GetPtr() );
data->SetBlob( dataBlob );
label->SetBlob( labelBlob );
for( int epoch = 1; epoch < 15; ++epoch ) {
    float epochLoss = 0; // total loss for the epoch
    net.RunAndLearnOnce(); // run the learning iteration
    epochLoss += loss->GetLastLoss(); // add the loss value on the last step
    ::printf( "Epoch #%02d avg loss: %f\n", epoch, epochLoss / iterationPerEpoch );
}
float testDataLoss = 0;
// NB: RunOnce here still uses the training blobs set above
net.RunOnce();
testDataLoss += loss->GetLastLoss();
::printf( "\nTest data loss: %f\n", testDataLoss / 100 );
Loading and copying the serialized data into the blobs works fine, but RunAndLearnOnce throws an exception with no error message.
This is what the debugger says:
:-1: warning: Debugger encountered an exception: Exception at 0x7ffbe8db3e49, code: 0xe06d7363: C++ exception, flags=0x1 (execution cannot be continued)
(If this is still relevant) Replace
CDnnBlob::CreateListBlob( mathEngine, CT_Float, 1, batchSize, 402, 1 )
with
CDnnBlob::CreateDataBlob( mathEngine, CT_Float, 1, batchSize, 402 )
and get rid of iterationPerEpoch to avoid the division by zero.
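For later readers, a sketch of how the fragment might look with this fix applied (all names come from the code above; I'm assuming each sample really is a flat vector of 402 floats):

// Blob creation with the fix: ordinary data blobs instead of a list blob
CPtr<CDnnBlob> dataBlob = CDnnBlob::CreateDataBlob( mathEngine, CT_Float, 1, batchSize, 402 );
dataBlob->CopyFrom( src.GetPtr() );
CPtr<CDnnBlob> labelBlob = CDnnBlob::CreateDataBlob( mathEngine, CT_Float, 1, batchSize, codes_count );
labelBlob->CopyFrom( labels.GetPtr() );
data->SetBlob( dataBlob );
label->SetBlob( labelBlob );

// The whole set is a single batch, so each epoch is one learning iteration
for( int epoch = 1; epoch < 15; ++epoch ) {
    net.RunAndLearnOnce();
    ::printf( "Epoch #%02d loss: %f\n", epoch, loss->GetLastLoss() );
}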