To edit an ONNX model, a common approach is to visualize the model graph and edit it with the ONNX Python API. This works, but we have to code the edit, then visualize the result to check it, and the two steps may iterate many times, which is time-consuming. 👋

What if we had a tool that let us edit the model and preview the effect in a fully visual fashion?

That is where `onnx-modifier` comes in. With it, we can focus on editing the model graph in the visualization panel. All the editing information is collected and processed by the ONNX Python API automatically at the end. Our time is saved! 🚀
`onnx-modifier` is built on the popular network viewer Netron and the lightweight web application framework Flask.
Currently, the following editing operations are supported:
✅ Delete nodes
✅ Add new nodes
✅ Rename the node inputs and outputs
✅ Rename the model inputs and outputs
✅ Add new model outputs
✅ Add new model inputs
✅ Edit model input shape
✅ Edit attribute of nodes
✅ Edit model initializers
Here is the update log and TODO list, and here is the design overview, which may be helpful for anyone who wants to contribute to this project.
Hope it helps!
There are currently three ways to launch `onnx-modifier`.
Clone the repo and install the required Python packages:

```shell
git clone https://github.com/ZhangGe6/onnx-modifier.git
cd onnx-modifier
pip install -r requirements.txt
```
Then run:

```shell
python app.py
```

Click the URL in the output printed by Flask (http://127.0.0.1:5000/ by default), and `onnx-modifier` will open in the web browser.
- Windows: Download onnx-modifier.exe (28.3MB) from Google Drive / Baidu NetDisk, double-click it and enjoy.
  - The Edge browser is used as the runtime environment by default.

I recorded how I made the executable file in `app_desktop.py`. Executable files for other platforms are left for future work.
Build the Docker image like this:

```shell
git clone git@github.com:ZhangGe6/onnx-modifier.git
cd onnx-modifier
docker build --file Dockerfile . -t onnx-modifier
```
After building the image, run `onnx-modifier` by mapping the Docker port and a local folder `modified_onnx`:

```shell
mkdir -p modified_onnx
docker run -d -t \
    --name onnx-modifier \
    -u $(id -u ${USER}):$(id -g ${USER}) \
    -v $(pwd)/modified_onnx:/modified_onnx \
    -p 5000:5000 \
    onnx-modifier
```
Then `onnx-modifier` is available at http://127.0.0.1:5000. The modified ONNX models will be saved in the local folder `modified_onnx`.
Click `Open Model...` to upload the ONNX model to edit. The model will be parsed and shown on the page.
Graph-level-operation elements are placed on the top-left of the page. Currently there are three buttons: `Reset`, `Download` and `Add node`. They do the following:

- `Reset`: Reset the whole model graph to its initial state.
- `Download`: Save the modified model to disk. Note the two checkboxes on the right:
  - (experimental) select `shape inference` to run shape inference when saving the model.
    - The `shape inference` feature is built on onnx-tool, which is a powerful third-party ONNX tool.
  - (experimental) select `clean up` to remove the unused nodes and tensors (like ONNX GraphSurgeon).
- `Add node`: Add a new node into the model.
Node-level-operation elements are all in the sidebar, which can be invoked by clicking a specific node.
Let's take a closer look.
There are two modes for deleting a node: `Delete With Children` and `Delete Single Node`. `Delete Single Node` only deletes the clicked node, while `Delete With Children` also deletes all the nodes rooted at the clicked node, which is convenient and natural if we want to delete a long chain of nodes.
The implementation of `Delete With Children` is based on a backtracking algorithm.
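As a rough sketch (not the project's actual code), collecting the nodes for `Delete With Children` amounts to a depth-first traversal that gathers the clicked node and everything downstream of it. The graph representation and node names below are made up for illustration.

```python
def collect_with_children(children, start):
    """Collect `start` and all nodes downstream of it.

    children: {node: [downstream nodes consuming its outputs]}
    """
    to_delete, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in to_delete:
            continue  # already visited; avoids revisiting shared children
        to_delete.add(node)
        stack.extend(children.get(node, []))
    return to_delete

children = {"Sub": ["Mul"], "Mul": ["Sub2"], "Sub2": ["Transpose"], "Transpose": []}
print(sorted(collect_with_children(children, "Mul")))
# ['Mul', 'Sub2', 'Transpose']
```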
For previewing, the deleted nodes are shown in grey at first. If a node is deleted by mistake, the `Recover Node` button can bring it back to the graph. Click the `Enter` button to put the deletion into effect, and the updated graph will be shown on the page automatically.
The following figure shows a typical deleting process:
Sometimes we want to add new nodes into an existing model. `onnx-modifier` supports this feature experimentally.

Note the `Add node` button, followed by a selector element, on the top-left of the index page. Adding a node is as easy as 3 steps:
- Choose a node type in the selector and click the `Add node` button. An empty node of the chosen type will appear on the graph.
  - The selector contains all the supported operator types in the domains `ai.onnx` (171), `ai.onnx.preview.training` (4), `ai.onnx.ml` (18) and `com.microsoft` (1).
- Click the new node and edit it in the invoked sidebar. What we need to fill in are the node attributes (`undefined` by default) and its inputs/outputs (which decide where the node will be inserted in the graph).
- We are done.
The following are some notes for this feature:

- By clicking the `?` in the `NODE PROPERTIES -> type` element, or the `+` in each `Attribute` element, we can get some reference to help us fill in the node information.
- It is suggested to fill in all of the attributes rather than leaving them as `undefined`. Default values may not be supported well in the current version.
- For an `Attribute` of type `list`, items are separated with `,` (comma). Note that `[]` is not needed.
- For `Inputs/Outputs` of type `list`, at most 8 elements are allowed in the current version. If the actual number of inputs/outputs is less than 8, leave the unused items with names starting with `list_custom`, and they will be omitted automatically.
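The comma-splitting and `list_custom` conventions above can be sketched in a few lines of plain Python. This is a hypothetical re-implementation for illustration, not the project's code.

```python
def parse_list_attribute(text):
    """Parse a list-typed attribute entered as '1, 2, 3' (no brackets needed)."""
    return [item.strip() for item in text.split(",") if item.strip()]

def drop_unused_io(names):
    """Unused input/output slots keep their 'list_custom' placeholder names
    and are omitted automatically when the node is saved."""
    return [n for n in names if not n.startswith("list_custom")]

print(parse_list_attribute("1, 2, 3"))              # ['1', '2', '3']
print(drop_unused_io(["X", "W", "list_custom_2"]))  # ['X', 'W']
```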
By changing the input/output names of nodes, we can change the model's forward path. This is also helpful if we want to rename the model output(s).

Using `onnx-modifier`, we can achieve this by simply entering a new name for a node's inputs/outputs in the corresponding input placeholder. The graph topology is updated automatically and instantly according to the new names.
For example, suppose we want to remove the preprocessing operators (`Sub->Mul->Sub->Transpose`) shown in the following figure. We can:

- Click the first `Conv` node and rename its input (X) to `serving_default_input:0` (the output of node `data_0`).
- The model graph is updated automatically, and we can see that the input node now links to the first `Conv` directly. In addition, the preprocessing operators have been split from the main routine. Delete them.
- We are done! (Click `Download` to get the modified ONNX model.)
Note: To link node $A$ (`data_0` in the above example) to node $B$ (the first `Conv` in the above example), it is suggested to edit the input of node $B$ to the output of node $A$, rather than editing the output of node $A$ to the input of node $B$, because the input of $B$ can also be another node's output (`Transpose` in the above example) and unexpected results would follow.
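Why editing $B$'s input is the safer direction can be seen in a toy model of the graph, where topology is derived purely from tensor-name matching. The dict representation and tensor names below are illustrative, not the project's internals.

```python
def edges(nodes):
    """Derive edges from tensor names: producer_of(input) -> consumer."""
    producer = {t: name for name, n in nodes.items() for t in n["outputs"]}
    return {(producer[t], name)
            for name, n in nodes.items()
            for t in n["inputs"] if t in producer}

nodes = {
    "data_0":    {"inputs": [],          "outputs": ["input:0"]},
    "Transpose": {"inputs": ["input:0"], "outputs": ["t_pre"]},
    "Conv":      {"inputs": ["t_pre"],   "outputs": ["t_conv"]},
}
# Rename Conv's input to data_0's output: Conv now consumes data_0 directly,
# and the preprocessing branch is split off the main routine. Editing
# Transpose's *output* instead would silently rewire every consumer of it.
nodes["Conv"]["inputs"] = ["input:0"]
print(("data_0", "Conv") in edges(nodes))     # True
print(("Transpose", "Conv") in edges(nodes))  # False
```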
The process is shown in the following figure:
Click the model input/output node, type a new name in the sidebar, then we are done.
Sometimes we want to set the output of a certain node as a model output, for example to extract an intermediate layer's output for fine-grained analysis. In `onnx-modifier`, we can achieve this by simply clicking the `Add Output` button in the sidebar of the corresponding node. We then get a new model output node following that node, with the same name as the node's output.
In the following example, we add two new model outputs: the outputs of the first and second `Conv` nodes, respectively.
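Conceptually, `Add Output` just registers a node's existing output tensor name as an additional graph output. A toy sketch (tensor names are made up):

```python
def add_output(graph_outputs, tensor_name):
    """Expose an intermediate tensor as an extra model output."""
    if tensor_name not in graph_outputs:
        graph_outputs.append(tensor_name)
    return graph_outputs

outputs = add_output(["prob"], "conv1_out")
outputs = add_output(outputs, "conv2_out")
print(outputs)  # ['prob', 'conv1_out', 'conv2_out']
```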
Sometimes we need to add inputs to a model (for example, a sub-model extracted from an original model). In `onnx-modifier`, we can achieve this by:

- Clicking the node that needs the input, then clicking the "Add Input" button in the invoked sidebar.
- In the popped-up dialog, choosing the input name in the selector and entering its shape, then clicking "confirm".
- We are done.
Note: The input shape is expected in the `dtype[dim0, dim1, ...]` format, like `float32[1,3, 224,224]`. Otherwise a warning is shown and the "confirm" button is disabled. In addition, the input shape can sometimes be pre-filled by analyzing the model (we can trust it); if not, we should set it manually.
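The `dtype[dim0, dim1, ...]` format can be validated with a small parser like the one below. This is a hypothetical sketch of the validation; the real logic in the tool may differ.

```python
import re

# dtype name, then a bracketed comma-separated list of positive integers
SHAPE_RE = re.compile(r"^\s*([a-z]+\d*)\s*\[\s*(\d+(?:\s*,\s*\d+)*)\s*\]\s*$")

def parse_input_shape(text):
    """Parse e.g. 'float32[1,3, 224,224]' into ('float32', [1, 3, 224, 224])."""
    match = SHAPE_RE.match(text)
    if match is None:
        raise ValueError(f"expected 'dtype[dim0, dim1, ...]', got {text!r}")
    dtype = match.group(1)
    dims = [int(d) for d in match.group(2).split(",")]
    return dtype, dims

print(parse_input_shape("float32[1,3, 224,224]"))
# ('float32', [1, 3, 224, 224])
```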
Change the original attribute to a new value, then we are done.

By clicking the `+` on the right side of the placeholder, we can get some helpful reference.
`onnx-modifier` supports editing input shapes. Click the target model input, then click the `Change input shape (static)` button. In the popped-up dialog, set a new shape for the input and click "confirm". The downstream tensor shapes will be updated in the downloaded modified model (rather than instantly in the panel, since shape inference runs only after "Download" is clicked).
`onnx-modifier` also supports making an input dynamic. Currently only the batch dimension is supported: just click the `Set dynamic batch size` button, and we get a model that supports dynamic-batch-size inference.
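Making the batch dimension dynamic boils down to replacing the leading dimension's fixed value with a symbolic name (the mechanism ONNX exposes as `dim_param`). A toy sketch over a plain shape list; the symbolic name `"batch"` is an assumption for illustration.

```python
def set_dynamic_batch(shape, batch_name="batch"):
    """Replace the fixed batch dimension with a symbolic one."""
    if not shape:
        raise ValueError("shape must have at least one dimension")
    return [batch_name] + list(shape[1:])

print(set_dynamic_batch([1, 3, 224, 224]))  # ['batch', 3, 224, 224]
```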
Sometimes we want to edit the values stored in model initializers, such as the weight/bias of a convolution layer or the shape parameter of a `Reshape` node. `onnx-modifier` supports this feature. Enter a new value for the initializer in the invoked sidebar and click `Download`, then we are done.
Note: For a newly added node, we should also enter the datatype of the initializer. (If we are not sure what the datatype is, clicking `NODE PROPERTIES -> type -> ?` may give some clues.)
The latest version (after 2023.12.10) supports reading initializer values from a NumPy file! Just click the "Open *.npy" button and select the NumPy file; the values will be parsed and shown in the placeholder above, where they can be edited further.
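Why the datatype matters when editing an initializer: the same text packs into very different raw bytes depending on the declared type. A stdlib-only sketch with a hypothetical helper (not the project's code):

```python
import struct

def pack_initializer(values_text, dtype):
    """Pack comma-separated values into little-endian raw bytes,
    the way an initializer's raw_data stores them."""
    values = [v.strip() for v in values_text.split(",") if v.strip()]
    if dtype == "float32":
        return struct.pack(f"<{len(values)}f", *map(float, values))
    if dtype == "int64":
        return struct.pack(f"<{len(values)}q", *map(int, values))
    raise ValueError(f"unsupported dtype: {dtype}")

print(len(pack_initializer("1, 2, 3", "float32")))  # 12 (3 * 4 bytes)
print(len(pack_initializer("1, 2, 3", "int64")))    # 24 (3 * 8 bytes)
```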
For quick testing, some typical sample models are provided below. Most of them are from the ONNX model zoo:
- squeezeNet Link (4.72MB)
- MobileNet Link (13.3MB)
- ResNet50-int8 Link (24.6MB)
- movenet-lightning Link (9.01MB)
  - Converted from the pretrained TFLite model using tensorflow-onnx;
  - The model contains preprocessing nodes and a big bunch of postprocessing nodes.
`onnx-modifier` is under active development 🛠. Welcome to use it, create issues and send pull requests! 🥰
- Netron
- Flask
- ONNX IR Official doc
- ONNX Python API Official doc, Leimao's Blog
- ONNX IO Stream Leimao's Blog
- onnx-utils
- sweetalert
- flaskwebgui
- onnx-tool 👍
- Ascend/ait