Using paddleocr is extremely slow after compiling and installing paddle-custom-npu-0.0.0
fallbernana123456 opened this issue · 11 comments
The basic functionality check shows no problems. But when running prediction with paddleocr:
I0531 15:46:17.706218 112503 init.cc:236] ENV [CUSTOM_DEVICE_ROOT]=/usr/local/lib/python3.10/dist-packages/paddle_custom_device
I0531 15:46:17.706283 112503 init.cc:145] Try loading custom device libs from: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I0531 15:46:18.313715 112503 custom_device.cc:1099] Succeed in loading custom runtime in lib: /usr/local/lib/python3.10/dist-packages/paddle_custom_device/libpaddle-custom-npu.so
I0531 15:46:18.319205 112503 custom_kernel.cc:63] Succeed in loading 350 custom kernel(s) from loaded lib(s), will be used like native ones.
I0531 15:46:18.319399 112503 init.cc:157] Finished in LoadCustomDevice with libs_path: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I0531 15:46:18.319448 112503 init.cc:242] CustomDevice: npu, visible devices count: 8
[2024/05/31 15:46:19] ppocr DEBUG: Namespace(help='==SUPPRESS==', use_gpu=False, use_xpu=False, use_npu=True, ir_optim=True, use_tensorrt=False, min_subgraph_size=15, precision='fp32', gpu_mem=500, gpu_id='0,1,2,3', image_dir=None, page_num=0, det_algorithm='DB', det_model_dir='/root/.paddleocr/whl/det/ch/ch_PP-OCRv4_det_infer', det_limit_side_len=680, det_limit_type='max', det_box_type='quad', det_db_thresh=0.3, det_db_box_thresh=0.6, det_db_unclip_ratio=1.5, max_batch_size=10, use_dilation=False, det_db_score_mode='fast', det_east_score_thresh=0.8, det_east_cover_thresh=0.1, det_east_nms_thresh=0.2, det_sast_score_thresh=0.5, det_sast_nms_thresh=0.2, det_pse_thresh=0, det_pse_box_thresh=0.85, det_pse_min_area=16, det_pse_scale=1, scales=[8, 16, 32], alpha=1.0, beta=1.0, fourier_degree=5, rec_algorithm='SVTR_LCNet', rec_model_dir='/root/.paddleocr/whl/rec/ch/ch_PP-OCRv4_rec_infer', rec_image_inverse=True, rec_image_shape='3, 48, 320', rec_batch_num=6, max_text_length=25, rec_char_dict_path='/usr/local/lib/python3.10/dist-packages/paddleocr/ppocr/utils/ppocr_keys_v1.txt', use_space_char=True, vis_font_path='./doc/fonts/simfang.ttf', drop_score=0.5, e2e_algorithm='PGNet', e2e_model_dir=None, e2e_limit_side_len=768, e2e_limit_type='max', e2e_pgnet_score_thresh=0.5, e2e_char_dict_path='./ppocr/utils/ic15_dict.txt', e2e_pgnet_valid_set='totaltext', e2e_pgnet_mode='fast', use_angle_cls=True, cls_model_dir='/root/.paddleocr/whl/cls/ch_ppocr_mobile_v2.0_cls_infer', cls_image_shape='3, 48, 192', label_list=['0', '180'], cls_batch_num=6, cls_thresh=0.9, enable_mkldnn=False, cpu_threads=10, use_pdserving=False, warmup=False, sr_model_dir=None, sr_image_shape='3, 32, 128', sr_batch_num=1, draw_img_save_dir='./inference_results', save_crop_res=False, crop_res_save_dir='./output', use_mp=True, total_process_num=1, process_id=0, benchmark=True, save_log_path='./log_output/', show_log=True, use_onnx=False, output='./output', table_max_len=488, table_algorithm='TableAttn', table_model_dir=None, merge_no_span_structure=True, table_char_dict_path=None, layout_model_dir=None, layout_dict_path=None, layout_score_threshold=0.5, layout_nms_threshold=0.5, kie_algorithm='LayoutXLM', ser_model_dir=None, re_model_dir=None, use_visual_backbone=True, ser_dict_path='../train_data/XFUND/class_list_xfun.txt', ocr_order_method=None, mode='structure', image_orientation=False, layout=True, table=True, ocr=True, recovery=False, use_pdf2docx_api=False, invert=False, binarize=False, alphacolor=(255, 255, 255), lang='ch', det=True, rec=True, type='ocr', ocr_version='PP-OCRv4', structure_version='PP-StructureV2')
load finish
I0531 15:46:21.097870 113010 init.cc:236] ENV [CUSTOM_DEVICE_ROOT]=/usr/local/lib/python3.10/dist-packages/paddle_custom_device
I0531 15:46:21.097935 113010 init.cc:145] Try loading custom device libs from: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I0531 15:46:21.287110 113251 init.cc:236] ENV [CUSTOM_DEVICE_ROOT]=/usr/local/lib/python3.10/dist-packages/paddle_custom_device
I0531 15:46:21.287169 113251 init.cc:145] Try loading custom device libs from: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I0531 15:46:21.682519 113010 custom_device.cc:1099] Succeed in loading custom runtime in lib: /usr/local/lib/python3.10/dist-packages/paddle_custom_device/libpaddle-custom-npu.so
I0531 15:46:21.687526 113010 custom_kernel.cc:63] Succeed in loading 350 custom kernel(s) from loaded lib(s), will be used like native ones.
I0531 15:46:21.687721 113010 init.cc:157] Finished in LoadCustomDevice with libs_path: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I0531 15:46:21.687773 113010 init.cc:242] CustomDevice: npu, visible devices count: 8
I0531 15:46:21.876283 113251 custom_device.cc:1099] Succeed in loading custom runtime in lib: /usr/local/lib/python3.10/dist-packages/paddle_custom_device/libpaddle-custom-npu.so
I0531 15:46:21.879951 113251 custom_kernel.cc:63] Succeed in loading 350 custom kernel(s) from loaded lib(s), will be used like native ones.
I0531 15:46:21.880151 113251 init.cc:157] Finished in LoadCustomDevice with libs_path: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I0531 15:46:21.880192 113251 init.cc:242] CustomDevice: npu, visible devices count: 8
[2024/05/31 15:49:47] ppocr DEBUG: dt_boxes num : 3, elapsed : 32.64580488204956
[2024/05/31 15:49:47] ppocr DEBUG: cls num : 3, elapsed : 0.022808074951171875
Warning: tiling offset out of range, index: 32
Warning: tiling offset out of range, index: 32
Warning: tiling offset out of range, index: 32
[2024/05/31 15:50:49] ppocr DEBUG: rec_res num : 3, elapsed : 62.09084606170654
[8.0, 7.0] [752.0, 7.0] [752.0, 25.0] [8.0, 25.0] 如果想要使用om模型进行推理,就需要使用华为acl的特定的Api,关于atc的api具体可查看他们 0.9951098561286926
[9.0, 35.0] [742.0, 35.0] [742.0, 53.0] [9.0, 53.0] 的官网的api说明,其实如果说学过CUDA的话,就会发现,它的Api和CUDA其实蛮像的,比如 0.9868237972259521
[8.0, 62.0] [331.0, 61.0] [331.0, 80.0] [8.0, 81.0] cudamemorymalloc对应acl.rt.malloc等 0.9853638410568237
###############ocr_test1.png is ok###########
[2024/05/31 15:51:24] ppocr DEBUG: dt_boxes num : 6, elapsed : 35.48471665382385
[2024/05/31 15:51:55] ppocr DEBUG: cls num : 6, elapsed : 30.283159494400024
Warning: tiling offset out of range, index: 32
Warning: tiling offset out of range, index: 32
Warning: tiling offset out of range, index: 32
[2024/05/31 15:53:00] ppocr DEBUG: rec_res num : 6, elapsed : 65.58153176307678
[32.0, 9.0] [379.0, 9.0] [379.0, 24.0] [32.0, 24.0] 1.npu加载.om模型,并将模型转移到NPU上 0.9712381362915039
[31.0, 38.0] [279.0, 38.0] [279.0, 53.0] [31.0, 53.0] 2.NPU上分配模型的输入和输出 0.9975005388259888
[29.0, 64.0] [278.0, 64.0] [278.0, 83.0] [29.0, 83.0] 3.CPU加载图片,从硬盘到内存 0.9993540644645691
[29.0, 91.0] [280.0, 91.0] [280.0, 110.0] [29.0, 110.0] 4.输入数据拷贝,从内存到NPU 0.9980849027633667
[29.0, 120.0] [158.0, 120.0] [158.0, 139.0] [29.0, 139.0] 5.NPU执行推理 0.9916012287139893
[30.0, 149.0] [280.0, 149.0] [280.0, 164.0] [30.0, 164.0] 6.输出数据拷贝,从NPU到内存 0.9904201030731201
###############ocr_test2.png is ok###########
Every prediction takes more than a minute, and the NPU usage can max out at 60G (which is strange).
Hi, could you paste the command you are running, along with your Paddle and PaddleCustomDevice versions?
Have you measured the inference time on GPU? A comparison is recommended.
For how to profile a single prediction, see: https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/profiler/Profiler_cn.html#profiler
The NPU profiler is initialized as follows:
import paddle.profiler as profiler
profiler = profiler.Profiler(targets=[profiler.ProfilerTarget.CUSTOM_DEVICE], custom_device_types=['npu'])
profiler
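For example, a minimal sketch of wrapping a single PaddleOCR prediction with this profiler (the file name test.png and the PaddleOCR arguments are illustrative, reusing the APIs already shown in this thread):

import paddle.profiler as profiler
from paddleocr import PaddleOCR

# Build the profiler for the custom NPU device.
prof = profiler.Profiler(targets=[profiler.ProfilerTarget.CUSTOM_DEVICE],
                         custom_device_types=['npu'])

# Keep model loading outside the profiled region.
ocr = PaddleOCR(use_angle_cls=True, lang="ch", use_npu=True)

prof.start()
result = ocr.ocr("test.png")   # the single prediction being profiled
prof.stop()
prof.summary()                 # print per-op time statistics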
huawei /work pip list|grep paddle
paddle-custom-npu 0.0.0
paddle2onnx 1.0.5
paddlepaddle 3.0.0.dev20240531
paddleocr==2.7.3
from paddleocr import PaddleOCR

def load_pp_ocr():
    ocr_loader = PaddleOCR(
        use_angle_cls=True,
        lang="ch",
        use_npu=True,
        det_limit_side_len=680,
        gpu_id="0,1,2,3",
        benchmark=True,
        show_log=True,
        use_mp=True
    )
    return ocr_loader

def pp_ocr_box(ocr_loader: PaddleOCR, img_path):
    result = ocr_loader.ocr(img_path)
    if result:
        for one_box in result[0]:
            [lb, rb, rt, lt], (text, det_threshold) = one_box
            yield (lb, rb, rt, lt, text, det_threshold)

def main():
    ocr_loader = load_pp_ocr()
    print("load finish\n")
    filePath = "test.png"
    for box in pp_ocr_box(ocr_loader, filePath):
        if box:
            lb, rb, rt, lt, text, det_threshold = box
            print(lb, rb, rt, lt, text, det_threshold)
    print("test.png is ok\n")

if __name__ == "__main__":
    main()
https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/profiler/Profiler_cn.html#profiler
Example 1:
import paddle
import paddle.profiler as profiler

paddle.device.set_device('npu')
with profiler.Profiler(
        targets=[profiler.ProfilerTarget.CUSTOM_DEVICE],
        custom_device_types=['npu'],
        scheduler=(2, 5),
        on_trace_ready=profiler.export_chrome_tracing('./log')) as p:
    for iter in range(10):
        # train()
        p.step()
The log shows:
{
"displayTimeUnit": "ms",
"schemaVersion": "1.0.2",
"span_indx": "0",
"traceEvents": [
{}
],
"ExtraInfo": {
"Process Cpu Utilization": "0.051724",
"System Cpu Utilization": "0.005901"
}
}
Example 2:
import paddle
import paddle.profiler as profiler

paddle.device.set_device('npu')
with profiler.Profiler(
        targets=[profiler.ProfilerTarget.CUSTOM_DEVICE],
        custom_device_types=['npu'],
        scheduler=profiler.make_scheduler(closed=1, ready=1, record=3, repeat=3),
        on_trace_ready=profiler.export_chrome_tracing('./log')) as p:
    for iter in range(10):
        # train()
        p.step()
Result:
{
"displayTimeUnit": "ms",
"schemaVersion": "1.0.2",
"span_indx": "0",
"traceEvents": [
{}
],
"ExtraInfo": {
"Process Cpu Utilization": "0.067797",
"System Cpu Utilization": "0.006307"
}
}
Example 3:
import paddle
import paddle.profiler as profiler

paddle.device.set_device('npu')
p = profiler.Profiler(targets=[profiler.ProfilerTarget.CUSTOM_DEVICE], custom_device_types=['npu'])
p.start()
for iter in range(10):
    # train()
    p.step()
p.stop()
p.summary()
Result:
I0603 10:52:55.700057 4842 init.cc:236] ENV [CUSTOM_DEVICE_ROOT]=/usr/local/lib/python3.10/dist-packages/paddle_custom_device
I0603 10:52:55.700158 4842 init.cc:145] Try loading custom device libs from: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I0603 10:52:56.345526 4842 custom_device.cc:1099] Succeed in loading custom runtime in lib: /usr/local/lib/python3.10/dist-packages/paddle_custom_device/libpaddle-custom-npu.so
I0603 10:52:56.351272 4842 custom_kernel.cc:63] Succeed in loading 350 custom kernel(s) from loaded lib(s), will be used like native ones.
I0603 10:52:56.351514 4842 init.cc:157] Finished in LoadCustomDevice with libs_path: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I0603 10:52:56.351567 4842 init.cc:242] CustomDevice: npu, visible devices count: 8
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0603 10:52:57.224920 4842 runtime.cc:1001] ascend profiling data will be saved in ascend_profiling
I0603 10:52:57.881390 4842 chrometracing_logger.cc:46] writing profiling data to ./profiler_log/host_huaweipid_4842_time_2024_06_03_10_52_57_881216.paddle_trace.json
-------------------Device Summary-------------------
Device Utilization (%)
CPU(Process) 1.69
CPU(System) 0.61
Note:
CPU(Process) Utilization = Current process CPU time over all cpu cores / elapsed time, so max utilization can be reached 100% * number of cpu cores.
CPU(System) Utilization = All processes CPU time over all cpu cores(busy time) / (busy time + idle time).
GPU Utilization = Current process GPU time / elapsed time.
Example 4:
import paddle
import paddle.profiler as profiler

class RandomDataset(paddle.io.Dataset):
    def __init__(self, num_samples):
        self.num_samples = num_samples

    def __getitem__(self, idx):
        image = paddle.rand(shape=[100], dtype='float32')
        label = paddle.randint(0, 10, shape=[1], dtype='int64')
        return image, label

    def __len__(self):
        return self.num_samples

class SimpleNet(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.fc = paddle.nn.Linear(100, 10)

    def forward(self, image, label=None):
        return self.fc(image)

dataset = RandomDataset(20 * 4)
simple_net = SimpleNet()
opt = paddle.optimizer.SGD(learning_rate=1e-3, parameters=simple_net.parameters())
BATCH_SIZE = 4
loader = paddle.io.DataLoader(
    dataset,
    batch_size=BATCH_SIZE)
p = profiler.Profiler(targets=[profiler.ProfilerTarget.CUSTOM_DEVICE], custom_device_types=['npu'], timer_only=True)
p.start()
for i, (image, label) in enumerate(loader()):
    out = simple_net(image)
    loss = paddle.nn.functional.cross_entropy(out, label)
    avg_loss = paddle.mean(loss)
    avg_loss.backward()
    opt.minimize(avg_loss)
    simple_net.clear_gradients()
    p.step(num_samples=BATCH_SIZE)
    if i % 10 == 0:
        step_info = p.step_info(unit='images')
        print("Iter {}: {}".format(i, step_info))
        # The average statistics for 10 steps between the last and this call will be
        # printed when the "step_info" is called at 10 iteration intervals.
        # The values you get may be different from the following.
        # Iter 0: reader_cost: 0.51946 s batch_cost: 0.66077 s ips: 6.054 images/s
        # Iter 10: reader_cost: 0.00014 s batch_cost: 0.00441 s ips: 907.009 images/s
p.stop()
The result is:
I0603 10:44:51.187688 3105 init.cc:236] ENV [CUSTOM_DEVICE_ROOT]=/usr/local/lib/python3.10/dist-packages/paddle_custom_device
I0603 10:44:51.187783 3105 init.cc:145] Try loading custom device libs from: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I0603 10:44:51.957778 3105 custom_device.cc:1099] Succeed in loading custom runtime in lib: /usr/local/lib/python3.10/dist-packages/paddle_custom_device/libpaddle-custom-npu.so
I0603 10:44:51.965678 3105 custom_kernel.cc:63] Succeed in loading 350 custom kernel(s) from loaded lib(s), will be used like native ones.
I0603 10:44:51.965973 3105 init.cc:157] Finished in LoadCustomDevice with libs_path: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I0603 10:44:51.966037 3105 init.cc:242] CustomDevice: npu, visible devices count: 8
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0603 10:44:53.259622 3105 runtime.cc:1001] ascend profiling data will be saved in ascend_profiling
Iter 0: reader_cost: 0.04590 s batch_cost: 26.21655 s ips: 0.153 images/s
After that it just hangs indefinitely.
Hi, the steps to inspect the profiling data are as follows:
- A directory named ascend_profiling will be created under the working directory; it contains the generated profiling data.
- Run the following command to convert the profiling data:
alias msprof='/usr/local/Ascend/ascend-toolkit/latest/tools/profiler/bin/msprof'
msprof --export=on --output=ascend_profiling
- This generates a summary directory and a timeline directory; summary/op_statistic_0_1.csv is the summary of the CANN OP calls.
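For reference, a small sketch for inspecting that CSV with pandas (assuming pandas is installed; the exact column names written by msprof differ across CANN versions, so print them first):

import pandas as pd

# CSV exported by `msprof --export=on --output=ascend_profiling`
df = pd.read_csv("ascend_profiling/summary/op_statistic_0_1.csv")
print(df.columns.tolist())  # check the actual column names first
# Then sort by the total-time column (its name varies by CANN version), e.g.:
# print(df.sort_values("Total Time(us)", ascending=False).head(20))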
Do you have an email address? I can send you the csv.
Hi, there is no summary/op_statistic_0_1.csv in the folder, which suggests no executed CANN ops were captured. Are you sure it is running on the NPU?
You can run with GLOG_v=4 python ***.py > log 2>&1; cat log | grep NpuOpRunner and check whether any NpuOpRunner entries show up.
If they do, please double-check that the profiler code was added correctly.
Another suggestion: run your code once on GPU/CPU and see how long a single inference takes on other devices.
Hi, regarding NPU inference performance, please refer to the reply in this issue: #1118
You can first rule out the two possibilities mentioned there: (1) whether any operators fall back to the CPU; (2) whether enough warmup runs are done before inference. If these two steps do not solve your problem, you can then analyze the NPU profiling data to see where the NPU operator performance bottleneck is.
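For point (2), a minimal warmup sketch (warmup.png and the loop count are illustrative; the idea is just to run a few inferences before timing, since the first runs are typically dominated by one-time NPU kernel compilation and tuning):

import time
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang="ch", use_npu=True)

# Warmup: run a few predictions so one-time kernel compilation does not
# get counted in the measured inference time.
for _ in range(5):
    ocr.ocr("warmup.png")

start = time.time()
result = ocr.ocr("test.png")
print("inference time: %.3fs" % (time.time() - start))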
Hi, the steps to inspect the profiling data are as follows:
- A directory named ascend_profiling will be created under the working directory; it contains the generated profiling data.
- Run the following command to convert the profiling data:
alias msprof='/usr/local/Ascend/ascend-toolkit/latest/tools/profiler/bin/msprof'
msprof --export=on --output=ascend_profiling
- This generates a summary directory and a timeline directory; summary/op_statistic_0_1.csv is the summary of the CANN OP calls.
from paddleocr import PaddleOCR
import paddle.profiler as profiler

profiler = profiler.Profiler(targets=[profiler.ProfilerTarget.CUSTOM_DEVICE], custom_device_types=['npu'])

def load_pp_ocr():
    ocr_loader = PaddleOCR(
        use_angle_cls=True,
        lang="ch",
        use_npu=True,
        det_limit_side_len=680,
        gpu_id="0,1,2,3",
        benchmark=True,
        show_log=True,
        use_mp=True
    )
    return ocr_loader

def pp_ocr_box(ocr_loader: PaddleOCR, img_path):
    result = ocr_loader.ocr(img_path)
    if result:
        for one_box in result[0]:
            [lb, rb, rt, lt], (text, det_threshold) = one_box
            yield (lb, rb, rt, lt, text, det_threshold)

def main():
    ocr_loader = load_pp_ocr()
    print("load finish\n")
    filePath = "test.png"
    for box in pp_ocr_box(ocr_loader, filePath):
        if box:
            lb, rb, rt, lt, text, det_threshold = box
            print(lb, rb, rt, lt, text, det_threshold)
    print("test.png is ok\n")

if __name__ == "__main__":
    profiler.start()
    main()
    profiler.stop()
Hi, regarding NPU inference performance, please refer to the reply in this issue: #1118
You can first rule out the two possibilities mentioned there: (1) whether any operators fall back to the CPU; (2) whether enough warmup runs are done before inference. If these two steps do not solve your problem, you can then analyze the NPU profiling data to see where the NPU operator performance bottleneck is.
paddle-custom-npu 0.0.0
paddleocr 2.7.3
paddlepaddle 3.0.0.dev20240530
version: 0.0.0
commit: 024f62a
custom_op commit: 024f62a
cann: 7.0.0
(1) Temporary kernel_meta_* directories are generated in the working directory. I installed following the instructions at https://github.com/PaddlePaddle/PaddleCustomDevice/blob/develop/backends/npu/README_cn.md, so it should be up to date. When testing the same image, the second run is faster, but a new input still takes a very long time. Since my data comes in all kinds of shapes, I suspect warmup won't help much.
(2) There is no "fallbacking" keyword in the debug log.
Hi, from your screenshots and the information you provided, no operators fall back to CPU computation, and the most time-consuming op is BatchMatmul, which is in line with expectations. I suggest adding warmup; if your input data comes in many different shapes, I would also suggest adding some preprocessing.
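For the preprocessing suggestion, one possible approach (a sketch, not PaddleOCR's built-in preprocessing; it assumes OpenCV/NumPy are available and that ocr() accepts a numpy image) is to resize and pad every input onto one fixed canvas, so the NPU only ever sees a small set of input shapes and does not keep compiling kernels for new shapes:

import cv2
import numpy as np

def pad_to_fixed(img, target=(680, 680)):
    # Resize while keeping aspect ratio, then pad onto a fixed-size canvas.
    h, w = img.shape[:2]
    scale = min(target[0] / h, target[1] / w)
    resized = cv2.resize(img, (int(w * scale), int(h * scale)))
    canvas = np.zeros((target[0], target[1], 3), dtype=img.dtype)
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    return canvas

img = cv2.imread("test.png")
result = ocr_loader.ocr(pad_to_fixed(img))  # ocr_loader from load_pp_ocr() above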
Hi, do you have any other questions?