diff --git a/docs/version3.x/pipeline_usage/PaddleOCR-VL.en.md b/docs/version3.x/pipeline_usage/PaddleOCR-VL.en.md
index 32eab6ae56c..b3561af577c 100644
--- a/docs/version3.x/pipeline_usage/PaddleOCR-VL.en.md
+++ b/docs/version3.x/pipeline_usage/PaddleOCR-VL.en.md
@@ -1643,7 +1643,11 @@ After launching the VLM inference service, the client can call the service throu
 
 #### 3.2.1 CLI Invocation
 
-Specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `--vl_rec_backend` and the service address using `--vl_rec_server_url`, for example:
+Specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `--vl_rec_backend` and the service address using `--vl_rec_server_url`.
+
+> **Note:** The `llama-cpp-server` backend requires installing the latest PaddleOCR from source (`pip install git+https://github.com/PaddlePaddle/PaddleOCR.git`). The PyPI release (3.4.0) does not yet support this backend.
+
+For example:
 
 ```shell
 paddleocr doc_parser --input paddleocr_vl_demo.png --vl_rec_backend vllm-server --vl_rec_server_url http://localhost:8118/v1
@@ -1686,7 +1690,11 @@ paddleocr doc_parser \
 
 #### 3.2.2 Python API Invocation
 
-When creating a `PaddleOCRVL` object, specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `vl_rec_backend` and the service address using `vl_rec_server_url`, for example:
+When creating a `PaddleOCRVL` object, specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `vl_rec_backend` and the service address using `vl_rec_server_url`.
+
+> **Note:** The `llama-cpp-server` backend requires installing the latest PaddleOCR from source (`pip install git+https://github.com/PaddlePaddle/PaddleOCR.git`). The PyPI release (3.4.0) does not yet support this backend.
+
+For example:
 
 ```python
 pipeline = PaddleOCRVL(vl_rec_backend="vllm-server", vl_rec_server_url="http://localhost:8118/v1")
diff --git a/docs/version3.x/pipeline_usage/PaddleOCR-VL.md b/docs/version3.x/pipeline_usage/PaddleOCR-VL.md
index c2fddb9adc4..1f62927cfc1 100644
--- a/docs/version3.x/pipeline_usage/PaddleOCR-VL.md
+++ b/docs/version3.x/pipeline_usage/PaddleOCR-VL.md
@@ -1624,7 +1624,11 @@ paddleocr genai_server --model_name PaddleOCR-VL-1.5-0.9B --backend vllm --port
 
 #### 3.2.1 CLI Invocation
 
-The backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server`, or `llama-cpp-server`) can be specified via `--vl_rec_backend`, and the service address via `--vl_rec_server_url`, for example:
+The backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server`, or `llama-cpp-server`) can be specified via `--vl_rec_backend`, and the service address via `--vl_rec_server_url`.
+
+> **Note:** The `llama-cpp-server` backend requires installing the latest PaddleOCR from source (`pip install git+https://github.com/PaddlePaddle/PaddleOCR.git`); version 3.4.0 on PyPI does not yet support this backend.
+
+For example:
 
 ```shell
 paddleocr doc_parser --input paddleocr_vl_demo.png --vl_rec_backend vllm-server --vl_rec_server_url http://localhost:8118/v1
@@ -1667,7 +1671,11 @@ paddleocr doc_parser \
 
 #### 3.2.2 Python API Invocation
 
-When creating a `PaddleOCRVL` object, pass `vl_rec_backend` to specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server`, or `llama-cpp-server`) and `vl_rec_server_url` to specify the service address, for example:
+When creating a `PaddleOCRVL` object, pass `vl_rec_backend` to specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server`, or `llama-cpp-server`) and `vl_rec_server_url` to specify the service address.
+
+> **Note:** The `llama-cpp-server` backend requires installing the latest PaddleOCR from source (`pip install git+https://github.com/PaddlePaddle/PaddleOCR.git`); version 3.4.0 on PyPI does not yet support this backend.
+
+For example:
 
 ```python
 pipeline = PaddleOCRVL(vl_rec_backend="vllm-server", vl_rec_server_url="http://localhost:8118/v1")
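
Since both docs gate `llama-cpp-server` behind a source install but keep `vllm-server` in their inline examples, a CLI sketch for the new backend may help reviewers. The host, port, and `/v1` path below are illustrative assumptions, not values taken from this patch; only the `--vl_rec_backend` value and the flag names come from the documented interface:

```shell
# Hypothetical invocation using the llama.cpp-served backend enabled by this
# patch. Replace the URL with the address your llama.cpp server actually
# listens on; 8080 and the /v1 suffix are placeholders.
paddleocr doc_parser \
  --input paddleocr_vl_demo.png \
  --vl_rec_backend llama-cpp-server \
  --vl_rec_server_url http://localhost:8080/v1
```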