
How to serve fine-tuned model with vllm #945

Open
Some-random opened this issue May 7, 2024 · 2 comments

@Some-random

After training, the output folder only contains files like `meta_model_0.pt`. If I try to serve this model with the vLLM server like this: `python -m vllm.entrypoints.openai.api_server --model finetuned_model_path --dtype bfloat16 --port 1235 --max-logprobs 1`, an error shows up saying `finetuned_model_path` does not appear to have a file named `config.json`.
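For context: vLLM loads checkpoints in the Hugging Face directory layout, so it looks for `config.json` and the tokenizer files next to the weights. Below is a minimal sketch of the workaround described in the next comment, copying those files over from the base model directory. Both paths are placeholders, and the file list is the usual set shipped with an HF Llama download; adjust for your setup.

```python
# Sketch of the workaround: copy the HF config/tokenizer files from the
# base model directory into the fine-tuned output directory so vLLM can
# find them. Both paths below are placeholders.
import shutil
from pathlib import Path

base = Path("/path/to/Meta-Llama-3-8B")        # original HF download
finetuned = Path("/path/to/finetuned_model")   # torchtune output folder

for name in [
    "config.json",
    "generation_config.json",
    "tokenizer.json",
    "tokenizer_config.json",
    "special_tokens_map.json",
]:
    src = base / name
    if src.exists():  # not every download ships all of these files
        shutil.copy2(src, finetuned / name)
```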

@Some-random (Author)

I've copied a few JSON files from the original model folder to my fine-tuned model folder, and the above issue was resolved. But I'm facing another issue now:

```
python -m vllm.entrypoints.openai.api_server --model /weka/scratch/djiang21/Meta-Llama-3-8B/gsm8k-quick/ --dtype bfloat16 --port 1235 --max-logprobs 1
INFO 05-07 21:27:02 llm_engine.py:100] Initializing an LLM engine (v0.4.1) with config: model='/weka/scratch/djiang21/Meta-Llama-3-8B/gsm8k-quick/', speculative_config=None, tokenizer='/weka/scratch/djiang21/Meta-Llama-3-8B/gsm8k-quick/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=8192, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 05-07 21:27:03 utils.py:660] Found nccl from library /home/djiang21/.config/vllm/nccl/cu12/libnccl.so.2.18.1
INFO 05-07 21:27:04 selector.py:27] Using FlashAttention-2 backend.
[rank0]: Traceback (most recent call last):
[rank0]:   File "/weka/scratch/djiang21/miniconda/envs/quiet-star/lib/python3.9/runpy.py", line 197, in _run_module_as_main
[rank0]:     return _run_code(code, main_globals, None,
[rank0]:   File "/weka/scratch/djiang21/miniconda/envs/quiet-star/lib/python3.9/runpy.py", line 87, in _run_code
[rank0]:     exec(code, run_globals)
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/entrypoints/openai/api_server.py", line 168, in <module>
[rank0]:     engine = AsyncLLMEngine.from_engine_args(
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/engine/async_llm_engine.py", line 366, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/engine/async_llm_engine.py", line 324, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/engine/async_llm_engine.py", line 442, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/engine/llm_engine.py", line 159, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/executor/executor_base.py", line 41, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/executor/gpu_executor.py", line 23, in _init_executor
[rank0]:     self._init_non_spec_worker()
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/executor/gpu_executor.py", line 69, in _init_non_spec_worker
[rank0]:     self.driver_worker.load_model()
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/worker/worker.py", line 118, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/worker/model_runner.py", line 164, in load_model
[rank0]:     self.model = get_model(
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/model_executor/model_loader/loader.py", line 224, in load_model
[rank0]:     model.load_weights(
[rank0]:   File "/weka/scratch/djiang21/Dongwei_quiet_star/vllm/vllm/model_executor/models/llama.py", line 415, in load_weights
[rank0]:     param = params_dict[name]
[rank0]: KeyError: 'tok_embeddings.weight'
```

It seems we need to do some conversion between the torchtune-generated checkpoint format and the HF model format?
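For anyone hitting the same `KeyError`: the checkpoint saved as `meta_model_0.pt` uses the Meta/Llama parameter names (`tok_embeddings.weight`, `layers.N.attention.wq.weight`, ...), while vLLM expects the HF Llama names (`model.embed_tokens.weight`, `model.layers.N.self_attn.q_proj.weight`, ...). A rename-only sketch of that mapping follows; note that a complete conversion (as in transformers' `convert_llama_weights_to_hf.py`) additionally permutes the q/k projection weights for HF's rotary-embedding layout, so treat this as an illustration of the naming mismatch rather than a drop-in converter.

```python
# Illustrative sketch only: map Meta-format Llama parameter names to the
# HF naming scheme. A real conversion also permutes wq/wk for HF's
# rotary-embedding layout; this rename shows where the KeyError comes from.
import re
import torch

META_TO_HF = {
    "tok_embeddings.weight": "model.embed_tokens.weight",
    "norm.weight": "model.norm.weight",
    "output.weight": "lm_head.weight",
    "layers.{}.attention.wq.weight": "model.layers.{}.self_attn.q_proj.weight",
    "layers.{}.attention.wk.weight": "model.layers.{}.self_attn.k_proj.weight",
    "layers.{}.attention.wv.weight": "model.layers.{}.self_attn.v_proj.weight",
    "layers.{}.attention.wo.weight": "model.layers.{}.self_attn.o_proj.weight",
    "layers.{}.feed_forward.w1.weight": "model.layers.{}.mlp.gate_proj.weight",
    "layers.{}.feed_forward.w2.weight": "model.layers.{}.mlp.down_proj.weight",
    "layers.{}.feed_forward.w3.weight": "model.layers.{}.mlp.up_proj.weight",
    "layers.{}.attention_norm.weight": "model.layers.{}.input_layernorm.weight",
    "layers.{}.ffn_norm.weight": "model.layers.{}.post_attention_layernorm.weight",
}

def meta_key_to_hf(key: str) -> str:
    """Rename one Meta-format key to its HF equivalent."""
    m = re.search(r"layers\.(\d+)\.", key)
    if m is None:
        return META_TO_HF[key]
    # Swap the layer index for "{}" to look up the template, then put it back.
    template = key.replace(f"layers.{m.group(1)}.", "layers.{}.", 1)
    return META_TO_HF[template].format(m.group(1))

# "meta_model_0.pt" is the torchtune output file mentioned in this issue.
state_dict = torch.load("meta_model_0.pt", map_location="cpu")
hf_state_dict = {meta_key_to_hf(k): v for k, v in state_dict.items()}
```

Alternatively, if your torchtune version supports saving checkpoints directly in the HF format (via its HF checkpointer), configuring that before training avoids both the copied config files and the key conversion entirely.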

@MaxwelsDonc

I have the same question.

@joecummings self-assigned this May 13, 2024