
Fix warning messages about config.json when the base model_id is local. #1668

Merged: 11 commits into huggingface:main, May 21, 2024

Conversation

elementary-particle
Contributor

It should be possible for the user to specify a local directory as the base model when using the library.
However, the library currently only checks for the remote presence of config.json, and fails to check the actual config.json when using a local repo.
This PR adds a check for a local model_id and fixes the behavior.
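
Roughly, the intended behavior is the following (a minimal sketch, not the exact implementation; `config_exists` is a hypothetical helper name): check the filesystem when model_id is a local directory, and only query the Hub for remote repositories.

```python
import os

from huggingface_hub import file_exists


def config_exists(model_id: str) -> bool:
    # Hypothetical helper for illustration, not the actual PEFT source.
    if os.path.isdir(model_id):
        # Local base model: look for config.json on the filesystem.
        return os.path.exists(os.path.join(model_id, "config.json"))
    # Remote repository: ask the Hub whether config.json exists.
    return file_exists(model_id, "config.json")
```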

elementary-particle added 2 commits April 22, 2024 08:47
…ile is available on the filesystem.

When tuning a model with peft, sometimes the user might wish to use a local base model. In such cases, `model_id` points to a local directory instead of a remote repository.
This commit adds a check on the local directory to address this issue.
@BenjaminBossan
Member

BenjaminBossan commented Apr 25, 2024

Thanks for the PR, what you describe sounds reasonable. Do you have a small example where this change would apply? Ideally, we can use that to create a unit test.

Also, could you please remove the \ for line breaks?

@elementary-particle
Contributor Author

Sure, a simple example is to create a LoRA adapter for a local base model and save it.
For example, create a PeftModel for a local snapshot of mistralai/Mistral-7B-v0.1; saving the adapter then issues a warning:

```python
warnings.warn(
    f"Could not find a config file in {model_id} - will assume that the vocabulary was not modified."
)
```

The configuration is therefore never actually checked.
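
For context, this is roughly what the config check is for (a hedged sketch of the assumed logic, not the exact PEFT code): compare the base config's vocab_size with the model's current embedding size to decide whether resized embedding weights must be saved along with the adapter.

```python
from transformers import AutoConfig


def vocab_was_modified(model, model_id: str) -> bool:
    # Assumed logic for illustration only, not the exact PEFT implementation:
    # a mismatch between the stored vocab_size and the current embedding
    # matrix means the vocabulary was resized after loading.
    config = AutoConfig.from_pretrained(model_id)
    num_embeddings = model.get_input_embeddings().weight.shape[0]
    return config.vocab_size != num_embeddings
```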

@elementary-particle
Contributor Author

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

local_dir = 'path/to/model'

base_model = AutoModelForCausalLM.from_pretrained(local_dir)
peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"],
)
peft_model = get_peft_model(base_model, peft_config)
peft_model.save_pretrained("test")
```

@elementary-particle
Contributor Author

Is this sufficient? @BenjaminBossan

@BenjaminBossan
Member

BenjaminBossan commented Apr 26, 2024

Nice, thanks for providing the example. I used it to create a test:

```python
from transformers import AutoModelForCausalLM

from peft import LoraConfig, get_peft_model


class TestLocalModel:
    def test_local_model_saving_no_warning(self, recwarn, tmp_path):
        model_id = "facebook/opt-125m"
        model = AutoModelForCausalLM.from_pretrained(model_id)
        local_dir = tmp_path / model_id
        model.save_pretrained(local_dir)
        del model

        base_model = AutoModelForCausalLM.from_pretrained(local_dir)
        peft_config = LoraConfig()
        peft_model = get_peft_model(base_model, peft_config)
        peft_model.save_pretrained(local_dir)

        for warning in recwarn.list:
            assert "Could not find a config file" not in warning.message.args[0]
```

We could, for instance, put it into tests/test_hub_features.py, WDYT? Running it locally on main, it currently fails, but on your branch it should pass.
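
For reference, once the test lives in tests/test_hub_features.py as suggested, it can be run in isolation with something like:

```
pytest tests/test_hub_features.py -k test_local_model_saving_no_warning
```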

…e `PeftModel.save_pretrained` method.

When the base model is loaded from a local directory, we should be able to find the `config.json` there.
@elementary-particle
Contributor Author

Certainly, I followed the syntax in testing_common.py and created a unit test for the issue.
Are any further checks needed? @BenjaminBossan

@BenjaminBossan
Member

> Certainly, I followed the syntax in testing_common.py and created a unit test for the issue.

The way you added the test, it is not executed. You would have to add corresponding methods that call it in test_decoder_models.py, test_encoder_decoder_models.py, etc. But that is overkill; we don't really need to check this with all kinds of different model architectures. Instead, as I suggested earlier, just add the test to tests/test_hub_features.py and it should be good. Let's also add a comment that explains why we need this test.

elementary-particle added 2 commits May 7, 2024 11:53
…using the `PeftModel.save_pretrained` method."

This reverts commit da94f3b.
and no warning is issued when saving a model and checking for vocab changes.
@elementary-particle
Contributor Author

The test case has been fixed as advised and comments have been added to explain the issue.
Please review the changes, thanks.

@BenjaminBossan left a comment (Member)

Thanks, the test will now be run as part of the test suite. However, you forgot some imports for the test.

Comment on lines +54 to +55
```python
peft_config = LoraConfig()
peft_model = get_peft_model(base_model, peft_config)
```
@BenjaminBossan (Member)

`LoraConfig` and `get_peft_model` need to be imported.

@elementary-particle (Contributor Author)

Resolved.

@BenjaminBossan
Member

@elementary-particle Thanks for the update. Could you please run `make style`?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@BenjaminBossan
Member

@elementary-particle This PR is almost good to go; there is just a small merge conflict, could you please check it out?

@elementary-particle
Contributor Author

elementary-particle commented May 18, 2024

Thanks for keeping up with this PR. The merge conflict is resolved.

@BenjaminBossan left a comment (Member)

Thanks a lot for fixing this warning, LGTM.

@BenjaminBossan merged commit bc6a999 into huggingface:main on May 21, 2024
14 checks passed