drprojects/superpoint_transformer

Running from a single inference script

Closed this issue · 3 comments

20IR commented

Hi,

Does anyone have a way to perform inference on a dataset, e.g. a LAS point cloud?

Ideally, I'd love to run inference from a single Python script that has all the needed code embedded within it. I could then edit the embedded YAML configuration and file paths from within that one file.

I have trained many custom models with excellent performance and accuracy, but I need a good way to use and deploy them without storing the entire project. This is mainly for encryption purposes.

Any thoughts on the best local deployment method are greatly appreciated.

Thank you.

drprojects commented

Hi @20IR

We do not support instantiation and inference without the hydra and lightning dependencies. The main reason is that even a "simple" inference pipeline requires providing many parameters to instantiate the necessary pre-transforms, transforms, on-device transforms, model, dataset, and dataloader. Although it would be possible to write a hydra-free script that explicitly and exhaustively sets these hyperparameters to your needs, this would not scale to the diversity of models and datasets in this project. Hence, this is more of a user-specific scenario, which we leave to users to implement themselves.
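That said, to give a rough idea of the direction, a single-file script that keeps the hydra and lightning dependencies could look something like the sketch below. This is illustrative only: the config location, config name, node names (`datamodule`, `model`), and checkpoint path are assumptions you would have to adapt to your own experiment, not guaranteed to match this repository.

```python
# Minimal single-file inference sketch. Assumptions to adapt: configs live
# under `configs/`, the top-level config is `eval.yaml`, and the composed
# config exposes `datamodule` and `model` nodes that hydra can instantiate.
import torch
from hydra import initialize, compose
from hydra.utils import instantiate

CKPT_PATH = "path/to/your/checkpoint.ckpt"  # illustrative path

with initialize(version_base="1.2", config_path="configs"):
    cfg = compose(config_name="eval", overrides=[f"ckpt_path={CKPT_PATH}"])

# Build the datamodule, which carries the pre-transforms and transforms
datamodule = instantiate(cfg.datamodule)
datamodule.prepare_data()
datamodule.setup(stage="test")

# Build the model and load the Lightning checkpoint weights
model = instantiate(cfg.model)
state = torch.load(CKPT_PATH, map_location="cpu")
model.load_state_dict(state["state_dict"])
model.eval()

with torch.no_grad():
    for batch in datamodule.test_dataloader():
        # Any on-device transforms would also need to be applied here
        output = model(batch)
```

Even then, note that this script only *drives* the project; it does not remove the dependency on the rest of the codebase.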

Besides, what do you mean by "single Python script that has all the needed code embedded within it" and "without needing to store the entire project"? Are you implying that you would like to summarize the entire project into a single Python file? Running a "simple" inference requires many different parts of the project, and I do not see how a single script file of reasonable size could capture all of the codebase's complexity.

PS: If you ❤️ or use this project, don't forget to give it a ⭐, it means a lot to us !

20IR commented

Thank you for the response and the awesome project!

I want to implement a secure way to use this within my encrypted C++ app.

Currently, my app is encrypted and decrypted in memory at runtime. My proposed solution is to store the entire Python inference script as a resource; by updating the relevant paths and running the script from within C++, I keep the Python code inaccessible to the end user. The model file will also be encrypted and only decrypted when needed.
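For the model file, I am picturing something along these lines (a rough sketch; the Fernet scheme, key handling, and file names are placeholders for whatever my app actually uses):

```python
# Sketch of loading an encrypted checkpoint without writing the plaintext to
# disk. Assumptions: the checkpoint was encrypted with Fernet (symmetric) and
# the key is supplied by the host C++ app at runtime.
import io
import torch
from cryptography.fernet import Fernet

def load_encrypted_checkpoint(enc_path: str, key: bytes) -> dict:
    with open(enc_path, "rb") as f:
        ciphertext = f.read()
    plaintext = Fernet(key).decrypt(ciphertext)  # decrypted in memory only
    # torch.load accepts any file-like object, so no temp file is needed
    return torch.load(io.BytesIO(plaintext), map_location="cpu")

# checkpoint = load_encrypted_checkpoint("model.ckpt.enc", key)
# model.load_state_dict(checkpoint["state_dict"])
```

Since `torch.load` accepts any file-like object, the plaintext checkpoint never has to touch the disk.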

I understand that a single Python script will not be "simple" given the complexity of the model, pre-transforms, transforms, and so on, as you mentioned. However, it would simplify deploying the model and code locally while keeping everything away from the end user.

This is my initial idea for the implementation. I am open to any questions and suggestions you may have. 👍

Thank you.

drprojects commented

I see. Sorry, but I cannot help with this endeavor; it is too specific and too far removed from our use case.