uber/neuropod

Error after upgrade: "The model was not loaded before calling `infer`"

Closed this issue · 2 comments

I tried to upgrade my project to the latest neuropod, and one of the TensorFlow tests (TestInferenceExampleTensorFlowAdditionModel) failed with this error:

    model_test.go:115:
        	Error Trace:	model_test.go:115
        	            				model_test.go:151
        	Error:      	Received unexpected error:
        	            	Neuropod Error: The model was not loaded before calling `infer`. This usually means that `load_model_at_construction` was set to false and `load_model()` was not explicitly called

In my project we don't set `load_model_at_construction`, and I see its default value is `true`:

  struct RuntimeOptions
  {
      // The device to run this Neuropod on.
      // Some devices are defined in the namespace above. For machines with more
      // than 8 GPUs, passing in an index will also work (e.g. `9` for `GPU9`).
      //
      // To attempt to run the model on CPU, set this to `Device::CPU`
      NeuropodDevice visible_device = Device::GPU0;

      // Sometimes, it's important to be able to instantiate a Neuropod without
      // immediately loading the model. If this is set to `false`, the model will
      // not be loaded until the `load_model` method is called on the Neuropod.
      bool load_model_at_construction = true;
  };

We use this constructor:

Neuropod(const std::string &neuropod_path, const RuntimeOptions &options = {});

This means the regression is not caused by the caller code.

I will try to reset the OSX cache and see if it still fails; I remember that after one of the previous upgrades, some problems disappeared after resetting the cache.

After resetting the cache and rebuilding, it hits a different compile-time problem:

include/neuropod/internal/error_utils.hh:7:10: fatal error: 'fmt/format.h' file not found

Will close this one for now.