Device error on TokenClassificationPipeline #13816
Nice catch! Would you like to open a PR with the fix?
Yes, I can do it for only 6 characters
Done, see pull request above: #13819. I let the CI/CD tests run, as there are no new features and I didn't want to run them locally and burn my PC down :) Have a great day
Similar issue later in the file, line 223.
Thanks, I committed new changes. @LysandreJik Do you want me to also add a test (all current tests are passing)?

```python
@require_torch_gpu
@slow
def test_correct_devices(self):
    sentence = "This dummy sentence checks if all the variables can be loaded on gpu and bring back to cpu"
    ner = TokenClassificationPipeline(model="distilbert-base-cased", device=0)
```
I believe this was fixed by #13856, which also implemented tests.
Environment info
`transformers` version: 4.11.0

Who can help
Library:
Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
The tasks I am working on is:
To reproduce
Steps to reproduce the behavior:
```python
pipe = TokenClassificationPipeline(model=DistilBertForTokenClassification.from_pretrained("PATH"))
pipe(["My", "text", "tokens"])
```

```
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```
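For context, the error happens because NumPy can only read host (CPU) memory, so `Tensor.numpy()` raises a `TypeError` on a CUDA tensor. A minimal sketch of the fix pattern (plain `torch`, runs on CPU too; on a GPU machine the tensor would live on `cuda:0`):

```python
import torch

t = torch.ones(3)  # on a GPU machine this could be torch.ones(3, device="cuda:0")
# t.numpy() raises TypeError when t is on a CUDA device;
# detaching and moving to host memory first always works:
arr = t.detach().cpu().numpy()
print(arr)  # → [1. 1. 1.]
```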
Expected behavior
Be able to run the pipeline. The pipeline should move the input data to the model's device (GPU or CPU) before the forward pass and bring the outputs back afterwards.
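That expectation can be sketched with a hypothetical stand-in model (a plain `nn.Linear` standing in for the pipeline's model; names here are illustrative, not the pipeline's internals):

```python
import torch
import torch.nn as nn

# hypothetical tiny model standing in for the pipeline's model
model = nn.Linear(4, 2)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

inputs = torch.randn(1, 4).to(device)  # inputs must live on the model's device
with torch.no_grad():
    logits = model(inputs)
scores = logits.cpu().numpy()  # bring results back to host before any numpy work
```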
The traceback
Placing a `.cpu()` call would solve the problem. Thanks in advance for any help.
Have a wonderful day