`output_rounding_saturation_mode` pass does not work with convolutional layers
qberthet opened this issue · 2 comments
Prerequisites
- Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
- Check that the issue hasn't already been reported, by checking the currently open issues.
- If there are steps to reproduce the problem, make sure to write them down below.
- If relevant, please include the hls4ml project files, which were created directly before and/or after the bug.
Quick summary
Using the `output_rounding_saturation_mode` pass to change the rounding and saturation mode of convolutional layers does not work.
Details
When trying to use `output_rounding_saturation_mode` to modify a model containing a `Conv1D`, `Conv2D`, `QConv1D`, or `QConv2D` layer, the resulting model does not use the specified rounding and saturation mode for the convolutional layer. Other layers are set correctly.
From a quick test, I confirmed that the pass does match the convolutional layer during the run, and that the precision is indeed modified in the layer configuration by the `output_rounding_saturation_mode` pass, but this modification seems to be lost later during model conversion.
Tested with version ccf17c6, but also reproducible with older versions.
Steps to Reproduce
Running the following code:
```python
from tensorflow.keras import Input, Model
import hls4ml
from qkeras import QConv1D, QActivation
from qkeras.quantizers import quantized_relu

# Build a dummy model
input_layer = Input(shape=(10, 4))
layer = input_layer
layer = QConv1D(
    filters=10,
    kernel_size=1,
    padding='same',
    data_format='channels_last',
    kernel_quantizer="quantized_bits(16,6,alpha=1.)",
    bias_quantizer="quantized_bits(16,6,alpha=1.)",
)(layer)
layer = QActivation(activation=quantized_relu(16, 6))(layer)
output_layer = layer
model = Model(inputs=input_layer, outputs=output_layer)

# Configure the optimizer pass
hls4ml.model.optimizer.get_optimizer("output_rounding_saturation_mode").configure(
    layers=['Input', 'Conv1D', 'Activation'],
    rounding_mode='AP_RND_CONV',
    saturation_mode='AP_SAT',
)

config = hls4ml.utils.config_from_keras_model(model)
hls_model = hls4ml.converters.convert_from_keras_model(
    model=model,
    hls_config=config,
)

# Plot the model to check the resulting rounding and saturation modes
hls4ml.utils.plot_model(
    hls_model,
    show_shapes=True,
    show_precision=True,
    to_file='model.png',
)
```
Expected behavior
Looking at the generated model.png plot, the `QConv1D` layer should have a `fixed<16,6,RND_CONV,SAT,0>` type (also verifiable in `defines.h` if the project is generated).
Actual behavior
Looking at the generated model.png plot, the `QConv1D` layer instead has the default `fixed<16,6,TRN,WRAP,0>` type.
This is a known issue, and it happens due to the optimization of 1x1 Conv1D/2D into PointwiseConv1D/2D. However, even if you anticipate that and change the `layers` parameter to include `PointwiseConv1D` (or the name of the layer, `q_conv1d`), it won't affect the outcome, due to the order in which the optimizers are applied. I hope to get rid of `output_rounding_saturation_mode` in the future, as it was a workaround to increase compatibility with QKeras back when we didn't have enough flexibility in the other optimizers.
The solution I suggest in this case is to change the `hls_config` that you pass to the converter. In your example that could be:

```python
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
config['LayerName']['q_conv1d']['Precision']['result'] = 'fixed<16,6,RND_CONV,SAT>'
```

You can do this for other layers as well and avoid using the optimizer at all. To avoid guessing the name of the layer, you can name your layers explicitly in Keras, which is good practice regardless. The same can be achieved with `granularity='type'` if you want to type less; that would be a more direct equivalent of what you were doing with the optimizer.
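To illustrate the type-granularity pattern without running the full toolchain, here is a minimal sketch that mocks the assumed shape of the config dict (a `LayerType` section keyed by layer class, mirroring the `LayerName` section above); the real dict would come from `config_from_keras_model(model, granularity='type')`:

```python
# Mocked config illustrating the assumed structure returned by
# hls4ml.utils.config_from_keras_model(model, granularity='type').
config = {
    'Model': {'Precision': 'fixed<16,6>', 'ReuseFactor': 1},
    'LayerType': {
        'QConv1D': {'Precision': {'result': 'fixed<16,6>'}},
        'QActivation': {'Precision': {'result': 'fixed<16,6>'}},
    },
}

# Override rounding/saturation for every layer of the given types at once,
# the per-type analogue of the per-name edit shown earlier.
for layer_type in ('QConv1D', 'QActivation'):
    config['LayerType'][layer_type]['Precision']['result'] = 'fixed<16,6,RND_CONV,SAT>'

print(config['LayerType']['QConv1D']['Precision']['result'])
# → fixed<16,6,RND_CONV,SAT>
```

This covers all layers of a type in one line, which is closer to what the optimizer pass was doing with its `layers` list.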
Thanks @vloncar, this is helpful.
I forgot to mention in the issue that I am already explicitly defining the precision for each named layer as a workaround (in my project, not in this example code). Setting the default precision to `fixed<16,6, RND_CONV, SAT>` also works, but is even less flexible than the optimizer pass.
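As a sketch, the model-wide default mentioned above would be set through the top-level `Model` section of the config; the dict is mocked here rather than generated by hls4ml, assuming the `Model`/`Precision` keys that `config_from_keras_model` normally emits:

```python
# Mocked default config, assuming the shape produced by
# hls4ml.utils.config_from_keras_model(model)
config = {'Model': {'Precision': 'fixed<16,6>', 'ReuseFactor': 1}}

# One global default: every layer without a more specific setting
# inherits this precision, rounding mode, and saturation mode.
config['Model']['Precision'] = 'fixed<16,6,RND_CONV,SAT>'
```

The trade-off is exactly the one noted above: this applies to all layers uniformly, so it cannot target only the convolutional layers the way the optimizer pass could.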
I noticed the conversion of 1x1 Conv1D/2D into PointwiseConv1D/2D, but looking at that optimizer's code I don't understand why the initial precision is not transferred to the new layer, which is why this appeared to me as a bug. I agree that it is not that important once known (maybe it could be mentioned in the docs/code/tutorial?), since the optimizer pass is mostly useful during exploration, when the model structure is changing a lot, and less so once the structure is fixed and an explicit per-layer config can be defined.
IMO this issue can be closed if there is a plan for a better solution in the future.