12 changes: 10 additions & 2 deletions docs/version3.x/pipeline_usage/PaddleOCR-VL.en.md
@@ -1643,7 +1643,11 @@ After launching the VLM inference service, the client can call the service throu

#### 3.2.1 CLI Invocation

Specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `--vl_rec_backend` and the service address using `--vl_rec_server_url`, for example:
Specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `--vl_rec_backend` and the service address using `--vl_rec_server_url`.

> **Note:** The `llama-cpp-server` backend requires installing the latest PaddleOCR from source (`pip install git+https://github.com/PaddlePaddle/PaddleOCR.git`). The PyPI release (3.4.0) does not yet support this backend.

For example:

```shell
paddleocr doc_parser --input paddleocr_vl_demo.png --vl_rec_backend vllm-server --vl_rec_server_url http://localhost:8118/v1
@@ -1686,7 +1690,11 @@ paddleocr doc_parser \

#### 3.2.2 Python API Invocation

When creating a `PaddleOCRVL` object, specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `vl_rec_backend` and the service address using `vl_rec_server_url`, for example:
When creating a `PaddleOCRVL` object, specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `vl_rec_backend` and the service address using `vl_rec_server_url`.

> **Note:** The `llama-cpp-server` backend requires installing the latest PaddleOCR from source (`pip install git+https://github.com/PaddlePaddle/PaddleOCR.git`). The PyPI release (3.4.0) does not yet support this backend.

For example:

```python
pipeline = PaddleOCRVL(vl_rec_backend="vllm-server", vl_rec_server_url="http://localhost:8118/v1")
12 changes: 10 additions & 2 deletions docs/version3.x/pipeline_usage/PaddleOCR-VL.md
@@ -1624,7 +1624,11 @@ paddleocr genai_server --model_name PaddleOCR-VL-1.5-0.9B --backend vllm --port

#### 3.2.1 CLI Invocation

Specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `--vl_rec_backend` and the service address using `--vl_rec_server_url`, for example:
Specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `--vl_rec_backend` and the service address using `--vl_rec_server_url`.

> **Note:** The `llama-cpp-server` backend requires installing the latest PaddleOCR from source (`pip install git+https://github.com/PaddlePaddle/PaddleOCR.git`). The PyPI release (3.4.0) does not yet support this backend.

For example:

```shell
paddleocr doc_parser --input paddleocr_vl_demo.png --vl_rec_backend vllm-server --vl_rec_server_url http://localhost:8118/v1
@@ -1667,7 +1671,11 @@ paddleocr doc_parser \

#### 3.2.2 Python API Invocation

When creating a `PaddleOCRVL` object, specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `vl_rec_backend` and the service address using `vl_rec_server_url`, for example:
When creating a `PaddleOCRVL` object, specify the backend type (`vllm-server`, `sglang-server`, `fastdeploy-server`, `mlx-vlm-server` or `llama-cpp-server`) using `vl_rec_backend` and the service address using `vl_rec_server_url`.

> **Note:** The `llama-cpp-server` backend requires installing the latest PaddleOCR from source (`pip install git+https://github.com/PaddlePaddle/PaddleOCR.git`). The PyPI release (3.4.0) does not yet support this backend.

For example:

```python
pipeline = PaddleOCRVL(vl_rec_backend="vllm-server", vl_rec_server_url="http://localhost:8118/v1")