Not limiting the width of float elements (less than double precision) can cause infinities in the tensor, even if allow_infinity
is False.
#20
Comments
Two potential solutions that I've looked at, but don't like, are:

Right now, if I had to push out a patch immediately, I would go with (2). I am convinced there is a better way by inspecting the …
Hypothesis itself actually solves a similar problem when specifying `width`:

```python
from hypothesis import reject
from hypothesis.internal.floats import float_of

if width < 64:

    def downcast(x: float) -> float:
        """Downcast a float to a smaller width.

        This function is used to ensure that only floats that can be
        represented exactly are generated.

        Adapted from `hypothesis.strategies.numbers.floats`.

        Args:
            x: The float to downcast.

        Returns:
            The downcasted float.
        """
        try:
            return float_of(x, width)
        except OverflowError:  # pragma: no cover
            reject()

    elements = elements.map(downcast)
```

This seems to work great for fp16, fp32, and fp64. It fails spectacularly for bfloat16, because hypothesis does not support …
I accidentally pushed the partial solution to … I meant to push to a branch and open a Draft PR. I really need to fix the branch protections in the repo. 😦
I have pushed a fix that I am content with for now for … As referenced in HypothesisWorks/hypothesis#3959 (comment), there will likely be a future version of hypothesis that can natively handle generating …
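As a sanity check of that direction, a sketch assuming plain `hypothesis` (the strategy name `finite_f32` is invented here): pinning `width=32` keeps every generated value exactly representable in, and therefore within the range of, float32.

```python
from hypothesis import given, strategies as st

# With width=32, hypothesis only generates values that are exactly
# representable as IEEE 754 single precision, so a later cast to
# float32 cannot overflow to inf.
finite_f32 = st.floats(width=32, allow_nan=False, allow_infinity=False)


@given(finite_f32)
def check_in_float32_range(x):
    assert abs(x) <= 3.4028234663852886e38  # largest finite float32


check_in_float32_range()
```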
This bug report is courtesy of @ringohoffman.
Describe the bug
Not limiting the width of float elements (less than double precision) can cause infinities in the tensor, even if `allow_infinity` is False.

If the `floats` strategy doesn't have `width` set, it generates floats that are outside of the float32 range (i.e. 1e308, near the max for fp64, is greater than the max value for fp32). When those values are passed to numpy internally, they're coerced to `inf`.

To Reproduce
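The coercion step can be reproduced in a couple of lines (a minimal sketch assuming numpy; the original report goes through the tensor-building path):

```python
import numpy as np

# 1e308 is representable in float64 but exceeds the float32 maximum
# (~3.4028235e38), so casting it down overflows to infinity.
value = 1e308
coerced = np.float32(value)
print(np.isinf(coerced))         # the finite float64 value became inf
print(np.finfo(np.float32).max)  # largest finite float32
```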
Expected behavior
The generated tensors should never have infinities if `allow_infinity` is disabled in the elements strategy.