
Unable to use TensorRT inference in cpp_infer #17887

@shesung

Description


🔎 Search before asking

  • I have searched the PaddleOCR Docs and found no similar bug report.
  • I have searched the PaddleOCR Issues and found no similar bug report.
  • I have searched the PaddleOCR Discussions and found no similar bug report.

🐛 Bug (问题描述)

Built by following the tutorial "General OCR Pipeline C++ Deployment - Linux".
Modified static_infer.cc to enable TensorRT; compilation and linking both succeed.

config.EnableTensorRtEngine(1 << 30, FLAGS_batch_size, min_subgraph_size, precision, use_static, use_calib_mode);

Models: v5-mobile-det, v5-mobile-rec, v5-server-rec.
At runtime there is no sign that TensorRT is ever invoked, and no trt_serialized file is generated. Inference speed is identical to the build with TensorRT disabled.

Question: in the cpp_infer sample, how do I correctly enable TensorRT inference for the PP-OCRv5 series models?
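For context, a minimal sketch of how TensorRT is typically enabled through the Paddle Inference C++ API. One common cause of TensorRT silently not kicking in is that `EnableUseGpu` was never called on the same `Config` before `EnableTensorRtEngine`. The model paths and the concrete parameter values below are assumptions for illustration, not taken from the issue:

```cpp
#include "paddle_inference_api.h"

int main() {
  paddle_infer::Config config;
  // Assumed model file names; replace with the actual PP-OCRv5 model paths.
  config.SetModel("inference.pdmodel", "inference.pdiparams");

  // TensorRT runs as a GPU backend, so GPU inference must be enabled first.
  config.EnableUseGpu(500 /* initial GPU memory pool, MB */, 0 /* device id */);

  // workspace size, max batch, min subgraph size, precision,
  // use_static (serialize engine to disk), use_calib_mode (INT8 calibration).
  config.EnableTensorRtEngine(1 << 30, 1, 3,
                              paddle_infer::PrecisionType::kFloat32,
                              false, false);

  auto predictor = paddle_infer::CreatePredictor(config);
  return 0;
}
```

This is a configuration sketch, not a verified fix for the reported problem; whether the PP-OCRv5 subgraphs actually get lowered to TensorRT also depends on `min_subgraph_size` and on which operators the TensorRT converter supports in the installed Paddle build.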

🏃‍♂️ Environment (运行环境)

Image: ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddle:3.0.0-gpu-cuda11.8-cudnn8.9-trt8.6

Paddle version: 3.3.0

🌰 Minimal Reproducible Example (最小可复现问题的Demo)

See above.

Metadata

Labels: task/deployment (Related to service deployment or serving)
