Does this mean detectron2 ran successfully?
leochengang opened this issue · 1 comment
```
python3 examples/02_detectron2/tools/convert_pt2ait.py
INFO:aitemplate.backend.build_cache_base:Build cache disabled
examples/02_detectron2/tools/convert_pt2ait.py:112: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  fuse_model[param_name] = torch.tensor(arr)
examples/02_detectron2/tools/convert_pt2ait.py:75: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  conv_w = torch.tensor(conv_w)
examples/02_detectron2/tools/convert_pt2ait.py:76: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  bn_rm = torch.tensor(bn_rm)
examples/02_detectron2/tools/convert_pt2ait.py:77: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  bn_rv = torch.tensor(bn_rv)
examples/02_detectron2/tools/convert_pt2ait.py:78: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  bn_w = torch.tensor(bn_w)
examples/02_detectron2/tools/convert_pt2ait.py:79: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  bn_b = torch.tensor(bn_b)
2023-04-25 11:18:51,251 INFO <aitemplate.testing.detect_target> Set target to CUDA
```
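The `UserWarning` lines above are harmless: PyTorch warns when `torch.tensor(existing_tensor)` is used to copy-construct from a tensor, and suggests `clone().detach()` instead. A minimal sketch of the difference (the variable names here are illustrative, not taken from the conversion script):

```python
import torch

arr = torch.ones(3)

# This pattern triggers the UserWarning when `arr` is already a tensor:
#   copied = torch.tensor(arr)

# Recommended, warning-free copy: an independent tensor with no
# autograd history attached.
copied = arr.clone().detach()

assert torch.equal(copied, arr)                 # same values
assert copied.data_ptr() != arr.data_ptr()      # separate storage
```

Either form produces a correct copy of the weights, so the conversion output is unaffected.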
```
CUDA_VISIBLE_DEVICES=0 python3 examples/02_detectron2/demo.py --weight tmp/ait0425model_final.pth --config examples/02_detectron2/configs/faster_rcnn_R_50_FPN.yaml --batch 1 --input "/data/yiwu/base_train/data/0316/img/20230201_1_0_2304_1152_3584_2432.jpg" --confidence-threshold 0.5 --display --cudagraph
INFO:aitemplate.backend.build_cache_base:Build cache disabled
2023-04-25 11:26:44,308 INFO <aitemplate.testing.detect_target> Set target to CUDA
[11:26:45] model_container.cu:67: Device Runtime Version: 11060; Driver Version: 11060
[11:26:45] model_container.cu:81: Hardware accelerator device properties:
  Device:
     ASCII string identifying device: NVIDIA GeForce RTX 3080 Ti
     Major compute capability: 8
     Minor compute capability: 6
     UUID: GPU-50de18d6-4c47-1c54-a2b3-183867b35341
     Unique identifier for a group of devices on the same multi-GPU board: 0
     PCI bus ID of the device: 101
     PCI device ID of the device: 0
     PCI domain ID of the device: 0
  Memory limits:
     Constant memory available on device in bytes: 65536
     Global memory available on device in bytes: 12630884352
     Size of L2 cache in bytes: 6291456
     Shared memory available per block in bytes: 49152
     Shared memory available per multiprocessor in bytes: 102400
[11:26:45] model_container.cu:85: Init AITemplate Runtime with 1 concurrency
run faster_rcnn_R_50_FPN end2end
1 images, run 1 batch
[11:26:45] model_container.cu:870: Benchmark runtime ms/iter: 11.5462
[11:26:45] model_container.cu:870: Benchmark runtime ms/iter: 10.6596
AIT Detection: Batch size: 1, Time per iter: 11.10 ms, FPS: 90.07
```
Yes, from the output it does seem like it's running successfully :)