nexport

A Python package for exporting the weights and biases of neural networks.


Overview

nexport is a lightweight Python 3.10+ package that lets deep learning developers export the trainable parameters of deep neural networks to human-readable, transferable file types.
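For context, here is a minimal, nexport-independent sketch of what "exporting trainable parameters to a human-readable file" means, written in plain PyTorch. The two-layer model and the weights.json filename are illustrative assumptions, and the output format shown is not nexport's own.

import json

import torch.nn as nn

# Illustrative two-layer network; any torch.nn.Module would do.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# A network's trainable parameters are its weight and bias tensors.
# Converting them to nested lists makes them JSON-serializable and
# therefore human-readable. This only illustrates the idea; nexport's
# actual output format is different.
params = {name: tensor.tolist() for name, tensor in model.state_dict().items()}

with open("weights.json", "w") as f:
    json.dump(params, f, indent=2)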


Current support

Filetype       PyTorch export   PyTorch import   Keras/TensorFlow export   Keras/TensorFlow import
Text (.txt)    🚧
JSON (.json)   🚧
CSV (.csv)
XML (.xml)

Install & use

  1. From the terminal: pip install nexport, or clone the repository, navigate to the project folder, and run pip install .
  2. From a Python environment: import nexport
  3. Call the export function: nexport.export(...)

An example of this function call is located in test/src/inference_example.py. To run it, provide a network trained with PyTorch and place it under the test folder.
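As a hedged illustration of that setup (the stand-in architecture, the model.pt filename, and its exact location under test/ are assumptions; check inference_example.py for the path and loading convention it actually expects):

import torch
import torch.nn as nn

# Placeholder for a network you have already trained.
trained_model = nn.Sequential(nn.Linear(80, 64), nn.GELU(), nn.Linear(64, 31))

# Assumed destination; inference_example.py defines the real path it reads from.
torch.save(trained_model, "test/model.pt")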

For the current version, it is recommended to specify all of the following parameters:

nexport.export(
    model=YOUR_PYTORCH_MODEL,
    filetype="json_exp",
    input_size=80, output_size=31,
    intercept=0.0, slope=1.0,
    include_metadata=True,
    model_name="YOUR_PYTORCH_MODEL_NAME",
    model_author="YOUR_PYTORCH_MODEL_AUTHOR",
    activation_function="gelu",
    using_skip_connections=False,
)

model: Required. A PyTorch model instance.

acceptable_engine_tag: Required. A string version number indicating an acceptable Inference-Engine release. Because nexport generates a JSON file that is intimately tied to Inference-Engine, the JSON keywords may change when Inference-Engine's file format changes. This parameter names an Inference-Engine git tag known to be able to read and write the nexport output. Other Inference-Engine revisions might also be compatible.

file_type: Required. Must be "json_exp" for now.

input_size and output_size: Required. The input and output sizes of your model.

intercept and slope: Optional. The normalization bounds for the input and output data; if the data are not normalized, these can be left unspecified. The formula is x = (slope - intercept) * y + intercept, where y is the normalized value and x is the original value.
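A quick numerical check of that formula (plain Python, not nexport API): with the defaults intercept=0.0 and slope=1.0 the data pass through unchanged, while other bounds rescale a normalized value back to its original range.

# Example bounds: original data lie in [10.0, 50.0].
intercept, slope = 10.0, 50.0
y = 0.25  # a normalized value
x = (slope - intercept) * y + intercept
print(x)  # 20.0 -- the corresponding original value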

include_metadata: Required. If True, the final JSON file will include metadata. Must be True for now.

model_name, model_author, activation_function: Optional, but it is recommended to provide values for them.

using_skip_connections: Should be False in most cases.
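Putting the parameter notes together, a complete call might look like the following sketch. It follows the keyword names used in the example above; the acceptable_engine_tag value is a placeholder for whichever Inference-Engine git tag you target, and the stand-in model is an assumption.

import nexport
import torch.nn as nn

# Stand-in architecture; replace with your own trained model.
model = nn.Sequential(nn.Linear(80, 64), nn.GELU(), nn.Linear(64, 31))

nexport.export(
    model=model,
    filetype="json_exp",            # only "json_exp" is supported for now
    acceptable_engine_tag="0.1.0",  # placeholder: a compatible Inference-Engine git tag
    input_size=80, output_size=31,
    intercept=0.0, slope=1.0,       # defaults: no input/output normalization
    include_metadata=True,          # must be True for now
    model_name="my_model",
    model_author="Your Name",
    activation_function="gelu",
    using_skip_connections=False,
)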

Objectives

  • Export weights and biases to human-readable files
  • Ensure compatibility with all popular neural network development software

History

This package is intended to be used in conjunction with inference-engine. As such, nexport was developed by the inference-engine developers to enable compatibility between the two projects. nexport does this by exporting the weights and biases from networks built in PyTorch, Keras, and TensorFlow into standardized, human-readable files. These files can be read by inference-engine to instantiate the networks in Fortran 2018 for inference.

Credits

nexport was created and is currently maintained by Jordan Welsman. Parts of this project were based on prior work by Tan Nguyen.

License

nexport is developed and distributed under a Berkeley Laboratory modified BSD license.

Note: See LICENSE for more details.

Links

📁 See this project on GitHub

🎁 See this project on PyPI

🐱 Follow me on GitHub

💼 Connect with me on LinkedIn

📧 Send me an email

💭 Based on this project