Doubts regarding some of the functions
sainikitha79 opened this issue · 2 comments
sainikitha79 commented
(1) Can you please explain what exactly arm_nn_requantize() performs?
(2) Also, what is the functionality of arm_nn_activation_s16?
(3) How is the sigmoid lookup table created?
(4) Please explain the working of vmaxq_s32(acc, vdupq_n_s32(NN_Q15_MIN)), which is present in NNSupportFunctions/arm_nn_vec_mat_mul_result_acc_s8.c.
Can you please clarify the above doubts? They would be very useful to me.
felix-johnny commented
Hi @sainikitha79, thanks for the questions.
- arm_nn_requantize() is a requantization step from a 32-bit input down to 8 bits. For example, when you do multiply-accumulates for an int8 convolution operation, the result is in 32 bits. Requantization scales it back to 8 bits, following the TensorFlow Lite specification for int8 quantization.
- Questions 2 and 3 relate to the activation functions for sigmoid and tanh. The lookup tables hold pre-calculated integer sigmoid values.
- vmaxq is an Arm Helium Technology intrinsic: https://developer.arm.com/architectures/instruction-sets/intrinsics/#f:@navigationhierarchiessimdisa=[Helium]&q=vmaxq[_s32] . It is available, for example, on the Arm Cortex-M55 processor with the M-Profile Vector Extension (MVE). It takes the maximum of two values in each corresponding lane of the q (vector) registers.
sainikitha79 commented
Thank you so much for your explanation.