QuestionAnsweringPipeline cannot handle impossible answer #14277
Comments
Thanks for the detailed issue, very easy to reproduce, and everything is correct. I am creating a PR to fix this. However, do you have an example where this argument is needed in an obvious way? I would love to add a meaningful test for this option (I already added a unit test for it).
I think you could copy the
The fast tests use random networks, so we don't really have any way to control that. Do you have a specific example with a given model that would display the desired behavior? That would be included in the slow tests.
Unfortunately, I don't have any specific example, no. What I had in mind is something like:
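The snippet itself did not survive this copy of the thread; a minimal sketch of the kind of example meant here, assuming a SQuAD 2.0 model such as `deepset/roberta-base-squad2` (the model name and the question/context are illustrative assumptions, not from the original comment):

```python
from transformers import pipeline

# Illustrative sketch (not the original snippet): a SQuAD 2.0 model is asked a
# question its context cannot answer. The model name is an assumption.
pipe = pipeline(
    "question-answering",
    model="deepset/roberta-base-squad2",
    handle_impossible_answer=True,
)
out = pipe(
    question="Who wrote Hamlet?",
    context="The Eiffel Tower is located in Paris.",
)
# With handle_impossible_answer=True the pipeline is allowed to return an
# empty answer ("") when the null score beats every candidate span.
print(out)
```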
In this old issue there is another example: #5563
None of the examples in the other issue you mentioned yield an error now, so I am unclear what the problem is. TBH, I am not sure all models even possess a CLS token. It seems this was added to prevent indexing errors, but I don't think that's what you expect from this argument, am I correct?
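For background, the usual SQuAD 2.0-style scheme scores the "no answer" option at the CLS position, which is why models without a CLS token at a fixed index are a concern. A sketch of the general technique (illustrative, not the pipeline's exact code):

```python
import numpy as np

# SQuAD 2.0-style "no answer" scoring, sketched: the null score is read at the
# CLS position and compared against the best answer span.
def pick_answer(start_logits: np.ndarray, end_logits: np.ndarray, cls_index: int = 0):
    null_score = start_logits[cls_index] + end_logits[cls_index]
    start = int(np.argmax(start_logits))
    end = start + int(np.argmax(end_logits[start:]))  # end must not precede start
    span_score = start_logits[start] + end_logits[end]
    if null_score >= span_score:
        return None, float(null_score)  # the "impossible" answer wins
    return (start, end), float(span_score)

start = np.array([5.0, 0.1, 0.2, 0.3])
end = np.array([4.0, 0.2, 0.1, 0.5])
print(pick_answer(start, end))  # -> (None, 9.0): the null answer wins
```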
Thank you for your help. Looking forward to the next release. 👍
Closing this, feel free to reopen.
Was that issue fixed? Because I still get the same error.
@CyrilShch are you using master? A release is coming this week which could help.
@Narsil Yes, I'm using master. Thanks! Looking forward!
Can you provide a reproducible script? The one that used to not work:

```python
import os

from transformers import pipeline

pipe = pipeline("question-answering", handle_impossible_answer=True)
out = pipe(question="This", context="that")
print(" - " * 20)
print(out)
print(" - " * 20)
```

seems to be working fine.
@Narsil e.g., if you open a new Google Colab notebook and run the very same example that you provided!
And locally it seems to work for me as well when I fork the repository and try to change it.
Hi, this is not master you're installing but the latest release (which does not contain the fix yet). Can you try
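(The suggested command was cut off in this copy; installing the master branch at the time typically meant something like `pip install git+https://github.com/huggingface/transformers` — an assumption, not the original suggestion.)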
A new release should happen this week which will contain the fix!
@Narsil Oops, my bad. Works fine now! Thanks a lot :) I guess the issue can be closed 👍
It is already closed so it's OK, but thanks for the confirmation. Cheers!
Perfect, just found the bug myself and saw this fix.
Environment info
`transformers` version: latest master. I think the bug was introduced by this PR: Fixing question-answering with long contexts #13873, so it has been part of transformers since the 4.11.3 release, and I can confirm that I didn't see this bug with the 4.11.2 release.

Who can help
Hi @Narsil, I hope you could look again at #13873 and check the changes it makes for the case when `handle_impossible_answer` is `True`. Thanks a lot!

To reproduce
Steps to reproduce the behavior:
1. Take the `run_pipeline_test` test in `test_pipelines_question_answering.py`.
2. Set `handle_impossible_answer` to `True` in the `question_answerer` so that the code is the following:
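The code block itself did not survive this copy of the issue; a hedged sketch of the modified call, modeled on the reproduction script quoted earlier in the thread (the real test builds its pipeline from small random fixture models, which are omitted here so the snippet is self-contained):

```python
from transformers import pipeline

# Hedged sketch of the modified test setup; the default model stands in for
# the test's fixture models.
question_answerer = pipeline(
    "question-answering",
    handle_impossible_answer=True,  # the change described in step 2
)
outputs = question_answerer(
    question="Where was HuggingFace founded?",
    context="HuggingFace was founded in Paris.",
)
print(outputs)
```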
Expected behavior
The test should run through.
Additional Info
I came across this problem when upgrading the transformers dependency of Haystack; I ran our tests with different versions of transformers to find the last working release / first failing release: deepset-ai/haystack#1659