
vLLM error when loading LoRA adapter on API request #309

@selectorseb

Description

I first train meetkai/functionary-small-v3.2 using deepspeed functionary/train/train_lora.py with the params you provide. Then I run the following command to serve the LoRA adapter at startup:
python server_vllm.py --model meetkai/functionary-small-v3.2 --enable-lora --lora-modules {name}={path} --host 0.0.0.0 --port 8000
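
For reference, the request is a plain OpenAI-style chat completion addressed to the adapter name registered via --lora-modules; roughly like this (the adapter name and prompt are placeholders, not my exact payload):

    import requests

    # "functionary-lora" stands in for the {name} passed to --lora-modules.
    resp = requests.post(
        "http://0.0.0.0:8000/v1/chat/completions",
        json={
            "model": "functionary-lora",
            "messages": [{"role": "user", "content": "What is the weather in Istanbul?"}],
        },
    )
    print(resp.status_code, resp.text)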

The issue comes when I send such a request to the model: vLLM fails with the following error.

INFO:     131.226.33.184:53594 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 105, in _load_adapter
    lora = self._lora_model_cls.from_local_checkpoint(
  File "/opt/conda/lib/python3.10/site-packages/vllm/lora/models.py", line 221, in from_local_checkpoint
    peft_helper = PEFTHelper.from_dict(config)
  File "/opt/conda/lib/python3.10/site-packages/vllm/lora/peft_helper.py", line 80, in from_dict
    return cls(**filtered_dict)
  File "<string>", line 14, in __init__
  File "/opt/conda/lib/python3.10/site-packages/vllm/lora/peft_helper.py", line 45, in __post_init__
    self._validate_features()
  File "/opt/conda/lib/python3.10/site-packages/vllm/lora/peft_helper.py", line 42, in _validate_features
    raise ValueError(f"{', '.join(error_msg)}")
ValueError: vLLM only supports modules_to_save being None.
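
Judging by the last line, vLLM's PEFTHelper rejects the checkpoint because the adapter_config.json written during training contains a non-null modules_to_save entry. A quick way to confirm what ended up in the config (the checkpoint path is a placeholder for whatever {path} was passed to --lora-modules):

    import json
    import os

    # Placeholder: the directory given to --lora-modules {name}={path}.
    adapter_dir = "/path/to/lora/checkpoint"

    with open(os.path.join(adapter_dir, "adapter_config.json")) as f:
        config = json.load(f)

    # vLLM refuses to load the adapter if this is anything other than null/None.
    print("modules_to_save:", config.get("modules_to_save"))

If it lists extra modules (e.g. lm_head or embed_tokens), those weights are presumably fully fine-tuned rather than LoRA-adapted, and would need to be merged into the base model or dropped before vLLM will accept the adapter.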
