TypeError: 'NoneType' object is not subscriptable
Opened this issue · 4 comments
I have a QKeras model. When I load the model with load_qmodel and then try to compile it, I get this error.
I have installed the latest version of the hls4ml library.
Code:
from qkeras.utils import load_qmodel

model_path = 'weights-chkpt-11-0.657679.h5'
model = load_qmodel(model_path)
model.summary()
HLS Code:
import hls4ml
hls_config = hls4ml.utils.config_from_keras_model(model, granularity='model')
hls_config['Model']['ReuseFactor'] = 16
hls_config['Model']['Strategy'] = 'Resource'
hls_config['LayerName']['output']['exp_table_t'] = 'ap_fixed<16,6>'
hls_config['LayerName']['output']['inv_table_t'] = 'ap_fixed<16,6>'
hls_config['LayerName']['output']['Strategy'] = 'Stable'
cfg = hls4ml.converters.create_config(backend='Vivado')
cfg['IOType'] = 'io_stream'
cfg['HLSConfig'] = hls_config
cfg['KerasModel'] = model
cfg['OutputDir'] = 'CNN_16_6_16_q'
hls_model = hls4ml.converters.convert_from_keras_model(hls_config=cfg, backend='VivadoAccelerator', part='xczu7ev-ffvc1156-2-e')
hls_model.compile()
print('NOW is finished')
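For readers unfamiliar with the config knobs set above, ap_fixed<16,6> and ReuseFactor both have a simple arithmetic interpretation. A minimal sketch in plain Python (illustrative only, not hls4ml's internal allocator; the 1024x96 layer size is just an example):

```python
import math

# ap_fixed<16,6>: 16 bits total, 6 integer bits (including the sign bit),
# which leaves 10 fractional bits.
total_bits, int_bits = 16, 6
frac_bits = total_bits - int_bits
resolution = 2.0 ** -frac_bits                    # smallest representable step
max_val = 2.0 ** (int_bits - 1) - resolution      # largest representable value
min_val = -(2.0 ** (int_bits - 1))                # smallest representable value
print(f"ap_fixed<16,6>: step={resolution}, range=[{min_val}, {max_val}]")

# ReuseFactor=16 with Strategy='Resource' time-shares each multiplier 16x,
# so a layer computing n_in*n_out products needs roughly n_in*n_out/16
# multiplier instances (the exact allocation is up to hls4ml and the HLS tool).
n_in, n_out, reuse = 1024, 96, 16                 # example layer size
multipliers = math.ceil(n_in * n_out / reuse)
print(f"~{multipliers} multipliers for {n_in}x{n_out} at ReuseFactor={reuse}")
```

This is why a large ReuseFactor is usually needed for big layers: at ReuseFactor=1 the same layer would demand one multiplier per product.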
Result:
I would appreciate your response.
I think this is fixed by #997. The model, however, is too large for hls4ml and won't work.
So because the model is too large, I cannot use hls4ml at all?
> I think this is fixed by #997. The model however is too large for hls4ml and won't work.
I used a CNN model with 115,072 parameters with hls4ml, and after synthesis I found that the reported resource usage is very low.
What should I do?
Model:
import keras
from keras.layers import (Input, Conv1D, BatchNormalization, Activation,
                          MaxPooling1D, Flatten, Dense, Dropout)

rf_in = Input(shape=(1024, 2), name='rf_input')
x = Conv1D(32, 5, activation=None, padding='same', use_bias=False)(rf_in)
x = BatchNormalization()(x)
x = Activation('selu')(x)
x = MaxPooling1D(2, strides=2, padding='same')(x)
x = Conv1D(32, 7, activation=None, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('selu')(x)
x = MaxPooling1D(2, strides=2, padding='same')(x)
x = Conv1D(16, 3, activation=None, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('selu')(x)
x = MaxPooling1D(2, strides=2, padding='same')(x)
x = Conv1D(16, 3, activation=None, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('selu')(x)
x = MaxPooling1D(2, strides=2, padding='same')(x)
x = Flatten()(x)
dense_1 = Dense(96, activation='selu', use_bias=False)(x)
dropout_1 = Dropout(0.35)(dense_1)
dense_2 = Dense(64, activation='selu', use_bias=False)(dropout_1)
dropout_2 = Dropout(0.55)(dense_2)
softmax = Dense(7, activation='softmax', use_bias=False)(dropout_2)
model = keras.Model(rf_in, softmax)
opt = keras.optimizers.Adam(learning_rate=0.001)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.summary()
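As a sanity check, the 115,072-parameter figure mentioned above can be reproduced by hand from the layer shapes. A quick sketch (BatchNormalization contributes 4 parameters per channel, counting the non-trainable moving statistics; all Conv1D/Dense layers here use use_bias=False):

```python
# Per-layer parameter counts for the model above (no bias terms anywhere).
def conv1d(kernel, ch_in, ch_out):       # kernel * ch_in * ch_out weights
    return kernel * ch_in * ch_out

def bn(channels):                        # gamma, beta, moving mean, moving var
    return 4 * channels

def dense(n_in, n_out):                  # weight matrix only (use_bias=False)
    return n_in * n_out

params = (
    conv1d(5, 2, 32) + bn(32)            # Conv1D(32, 5) block
    + conv1d(7, 32, 32) + bn(32)         # Conv1D(32, 7) block
    + conv1d(3, 32, 16) + bn(16)         # Conv1D(16, 3) block
    + conv1d(3, 16, 16) + bn(16)         # Conv1D(16, 3) block
    # Four stride-2 poolings: 1024 -> 64 timesteps, x16 channels = 1024 inputs
    + dense(64 * 16, 96)
    + dense(96, 64)
    + dense(64, 7)
)
print(params)  # 115072
```

Note that almost all of the parameters (98,304 of 115,072) sit in the first Dense layer, which is why Strategy='Resource' with a large ReuseFactor matters most there.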
Probably a conversion error somewhere and the HLS model is not properly connected. The synthesis log will have plenty of warnings, I assume.