mk-minchul/AdaFace

Demo video code

MyraBaba opened this issue · 11 comments

Hi,
@mk-minchul
Where can we find the code of the demo that compares AdaFace and ArcFace?

For AdaFace, what similarity score threshold should we use to decide that two images show the same face? Above 0.3 or 0.5?
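
(For context, face matching is typically done with cosine similarity between the two embeddings. A minimal sketch is below; the 0.3 threshold is only an illustrative placeholder, not an official AdaFace value, and should be tuned on a validation set.)

import numpy as np

def cosine_similarity(feat_a, feat_b):
    # Cosine similarity between two L2-normalized embedding vectors
    feat_a = feat_a / np.linalg.norm(feat_a)
    feat_b = feat_b / np.linalg.norm(feat_b)
    return float(np.dot(feat_a, feat_b))

# Placeholder threshold; tune on a validation set for your model
SAME_FACE_THRESHOLD = 0.3

def is_same_face(feat_a, feat_b, threshold=SAME_FACE_THRESHOLD):
    return cosine_similarity(feat_a, feat_b) >= threshold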

Hi @MyraBaba, have you tested what the best similarity score is?

@quanqigu
Not yet; I need ONNX versions of the models. Do you know if there are any?

I am using C++ and ONNX Runtime, so I have ArcFace and CosFace ONNX models, but not AdaFace's.
I'm working on a streaming face recognizer (4 cameras and 1 GPU).

I converted one just now but haven't tested it yet. You can try it with this script.

import torch
import onnx
import onnxruntime
from inference import load_pretrained_model

def export_onnx(model, input_shape, output_path):
    # Move the model to GPU if available
    model = model.to('cuda' if torch.cuda.is_available() else 'cpu')

    # Put the model in evaluation mode
    model.eval()

    # Create a dummy input tensor on the same device as the model
    device = next(model.parameters()).device
    dummy_input = torch.randn(*input_shape, device=device)

    # Export the model; mark the batch dimension as dynamic so the
    # ONNX model accepts batch sizes other than the one traced here
    torch.onnx.export(
        model,
        dummy_input,
        output_path,
        verbose=True,
        input_names=['input'],
        output_names=['output'],
        dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}},
    )

model_arch = 'ir_101'
# Load AdaFace model
model = load_pretrained_model(model_arch)

# AdaFace models expect 112x112 face crops
input_shape = (1, 3, 112, 112)

# Export the model to ONNX
onnx_output_path = f'./pretrained/adaface_{model_arch}_model.onnx'
export_onnx(model, input_shape, onnx_output_path)

# Validate the exported ONNX model
onnx_model = onnx.load(onnx_output_path)
onnx.checker.check_model(onnx_model)

# Create an ONNX Runtime session
ort_session = onnxruntime.InferenceSession(onnx_output_path)

# Run a sample input through the exported model
sample_input = torch.randn(input_shape).cpu().numpy()
ort_inputs = {ort_session.get_inputs()[0].name: sample_input}
ort_outputs = ort_session.run(None, ort_inputs)

print("ONNX model conversion and validation successful.")

I'm not good at Qt or C++.

Picking the best face from a live stream is also important: sending the best angle / the clearest face per person.

Do you have any idea how to pick the best angle and the clearest face?

I'm trying pitch/roll/yaw calculations but I'm not 100% satisfied. I also compute the lighting and the blurriness.
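
(By lighting and blurriness I mean something like the sketch below: sharpness via the variance of the Laplacian and lighting via distance of the mean intensity from mid-gray, both standard OpenCV heuristics. The weights and the sharpness scale are made-up placeholders that would need tuning.)

import cv2
import numpy as np

def face_quality_score(face_bgr):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)

    # Sharpness: variance of the Laplacian (low values => blurry)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Lighting: mean intensity, scored by its distance from mid-gray
    brightness = gray.mean()
    lighting = 1.0 - abs(brightness - 128.0) / 128.0

    # Normalize sharpness with a soft cap (placeholder scale of 100)
    sharpness_score = min(sharpness / 100.0, 1.0)

    # Weighted combination; weights are illustrative, not tuned
    return 0.6 * sharpness_score + 0.4 * lighting

# Keep the highest-scoring crop per tracked person across the stream:
# best_crop = max(candidate_crops, key=face_quality_score)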

@quanqigu https://github.com/deepcam-cn/FaceQuality

Can you test the above model and export it to ONNX, so we have a quality metric?

@quanqigu did you see FaceQuality?

@MyraBaba Sorry, I have no time for that model currently.