Fine Tuning
Armin234 opened this issue · 1 comment
Armin234 commented
Hi,
is there an example of how to train / retrain / fine-tune a network created with the "Neural Network Console"?
I tried with the result.nnp, net.nntxt, and model.nnp files, but I couldn't find the right way.
It would be great if I could take the networks developed with the "Neural Network Console", use them with the NNabla library in a C++ program on my Windows 10 computer, retrain or fine-tune them with other data sets, and then run inference from the same C++ program.
Regards
Armin
TomonobuTsujikawa commented
Here is sample code showing how to train a model and monitor its performance:
// Copyright 2018,2019,2020,2021 Sony Group Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <nbla_utils/nnp.hpp>
#ifdef WITH_CUDA
#include <nbla/cuda/cudnn/init.hpp>
#include <nbla/cuda/init.hpp>
#endif
#ifdef TIMING
#include <chrono>
#endif
#include <cassert>
#include <cstdlib>
#include <fstream>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>
using namespace nbla;
/******************************************/
// Example of mnist training
/******************************************/
int main(int argc, char *argv[]) {
  if (argc != 5) {
    std::cerr << "Usage: " << argv[0] << " nnp_file optimizer monitor iter_num"
              << std::endl;
    std::cerr << std::endl;
    std::cerr << "Positional arguments: " << std::endl;
    std::cerr << "  nnp_file  : .nnp file created by "
                 "examples/vision/mnist/save_nnp_classification.py."
              << std::endl;
    std::cerr << "  optimizer : Optimizer name in the nnp file." << std::endl;
    std::cerr << "  monitor   : Monitor name in the nnp file." << std::endl;
    std::cerr << "  iter_num  : Number of training iterations." << std::endl;
    return -1;
  }
  const std::string nnp_file(argv[1]);
  const std::string optimizer_name(argv[2]);
  const std::string monitor_name(argv[3]);
  const int iter_num = std::atoi(argv[4]);
  // Create a context (the following setting is recommended).
  nbla::Context cpu_ctx{{"cpu:float"}, "CpuCachedArray", "0"};
#ifdef WITH_CUDA
  nbla::init_cudnn();
  nbla::Context ctx{
      {"cudnn:float", "cuda:float", "cpu:float"}, "CudaCachedArray", "0"};
#else
  nbla::Context ctx = cpu_ctx;
#endif
  // Create an Nnp object and load the .nnp file into it.
  nbla::utils::nnp::Nnp nnp(ctx);
  nnp.add(nnp_file);
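  // Note: add() also accepts .nntxt and parameter files and can be called
  // more than once, so the network definition and pretrained parameters can
  // be loaded from separate files when fine-tuning.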
  // Fetch the optimizer and monitor defined in the .nnp file by name.
  auto optimizer = nnp.get_optimizer(optimizer_name);
  auto monitor = nnp.get_monitor(monitor_name);
#ifdef TIMING
#ifdef WITH_CUDA
  nbla::cuda_device_synchronize("0");
#endif
  // Timing starts.
  auto start = std::chrono::steady_clock::now();
#endif
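  // Training loop: each update(i) draws the next batch from the optimizer's
  // data iterator, runs forward/backward, updates the parameters with the
  // configured solver, and returns the loss for that batch.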
  for (int i = 0; i < iter_num; ++i) {
    float loss = optimizer->update(i);
    printf("loss=%f\n", loss);
  }
  // Evaluate after training: monitor_epoch() runs one pass over the monitor
  // dataset and returns the averaged metric.
  float avg = monitor->monitor_epoch();
  printf("epoch_avg=%f\n", avg);
#ifdef TIMING
#ifdef WITH_CUDA
  nbla::cuda_device_synchronize("0");
#endif
  // Timing ends.
  auto end = std::chrono::steady_clock::now();
  auto elapsed_us =
      std::chrono::duration_cast<std::chrono::microseconds>(end - start)
          .count();
  std::cout << "Elapsed time: " << elapsed_us * 0.001 << " [ms]." << std::endl;
#endif
  return 0;
}
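To run inference with the same .nnp afterwards, fetch an executor from the same Nnp object, the way the mnist_runtime example does. Below is a minimal sketch under some assumptions: the executor name "Executor", the single input/output variables, and the helper name infer are placeholders; the actual names and shapes depend on your Neural Network Console project.

#include <nbla_utils/nnp.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

// Hypothetical helper: feed one sample to the network and print the outputs.
// "Executor" is a placeholder for the executor name in your .nnp file.
void infer(nbla::utils::nnp::Nnp &nnp, const std::vector<float> &input) {
  nbla::Context cpu_ctx{{"cpu:float"}, "CpuCachedArray", "0"};
  auto executor = nnp.get_executor("Executor");
  executor->set_batch_size(1); // One sample at a time.

  // Copy the input into the executor's input variable.
  nbla::CgVariablePtr x = executor->get_data_variables().at(0).variable;
  float *in = x->variable()->cast_data_and_get_pointer<float>(cpu_ctx);
  std::copy(input.begin(), input.end(), in);

  // Forward pass, then read the output.
  executor->execute();
  nbla::CgVariablePtr y = executor->get_output_variables().at(0).variable;
  const float *out = y->variable()->get_data_pointer<float>(cpu_ctx);
  const int out_size = static_cast<int>(y->variable()->size());
  for (int i = 0; i < out_size; ++i)
    std::cout << "output[" << i << "] = " << out[i] << std::endl;
}

Build the program against the nbla and nbla_utils libraries (plus nbla_cuda when WITH_CUDA is defined), and pass the optimizer and monitor names exactly as they appear in your nnp file. If you want the fine-tuned weights to survive the run, recent NNabla versions also provide parameter save/load helpers in <nbla_utils/parameters.hpp>; availability depends on your version, so please check the headers of your installation.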