
moments_weighted_normalized causes warning "invalid value encountered in double_scalars" #7420

Open
aeisenbarth opened this issue May 14, 2024 · 2 comments

@aeisenbarth (Contributor)

Description:

When measuring region properties on image data that may contain negative values (e.g. background subtracted), a RuntimeWarning is emitted.

This is because the exponent sum(powers) / nu.ndim + 1 can be fractional, and in this specific case mu0 is negative, so the expression attempts to compute a fractional root of a negative number.

RuntimeWarning: invalid value encountered in double_scalars
  nu[powers] = (mu[powers] / scale ** sum(powers)) / (mu0 ** (sum(powers) / nu.ndim + 1))

in:

mu0 ** (sum(powers) / nu.ndim + 1)
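
For illustration, the same warning can be triggered with a bare NumPy scalar power, independent of scikit-image (values chosen here to mirror this case):

import numpy as np

mu0 = np.float64(-1.0)    # e.g. moments_weighted_central[0, 0] for an all-negative intensity image
exponent = 1.5            # sum(powers) / nu.ndim + 1 is fractional when sum(powers) is odd in 2D
result = mu0 ** exponent  # emits "RuntimeWarning: invalid value encountered ..."
print(result)             # nan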

Should users handle (or mute) this warning themselves, or is this something scikit-image should handle (or wrap in a skimage warning)? (A sketch of the user-side muting option follows the reproduction below.)

Way to reproduce:

import numpy as np
from skimage.measure import regionprops

label_image = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
intensity_image = np.full((3, 3), fill_value=-1.0)
regionprops(label_image, intensity_image)[0].moments_weighted_normalized
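
For the "mute" option, one possible user-side sketch is to wrap the property access in np.errstate; this is an assumption about one workable approach, not an official recommendation:

import numpy as np
from skimage.measure import regionprops

label_image = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
intensity_image = np.full((3, 3), fill_value=-1.0)

# regionprops computes properties lazily, so the warning fires on attribute
# access; suppressing NumPy's invalid-value handling around that access mutes it.
with np.errstate(invalid="ignore"):
    nu = regionprops(label_image, intensity_image)[0].moments_weighted_normalized
print(nu)  # still contains nan entries, but no RuntimeWarning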

Version information:

3.9.19 (main, Mar 21 2024, 17:11:28) 
[GCC 11.2.0]
Linux-5.15.0-101-generic-x86_64-with-glibc2.35
scikit-image version: 0.22.0
numpy version: 1.26.4
@aeisenbarth changed the title from moments_weighted_normalized causes Numpy warning "invalid value encountered in double_scalars" to moments_weighted_normalized causes warning "invalid value encountered in double_scalars" on May 14, 2024
@lagru (Member) commented May 14, 2024

Thanks for the report. I can reproduce this. I'm not sure right now about the best way to handle this, because I'm not sure where the actual problem starts.

Does moments_central()[0, 0] being -1. make sense from an interpretation standpoint?

@aeisenbarth (Contributor, Author)

I don't know. I don't use moments myself, and this is rather an outlier case.
This happens only with moments_weighted_normalized, which receives mu from moments_weighted_central.
Comparing different labels and intensities:

>>> labels1 = np.array([[0, 0, 0, 0, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 0, 0, 0, 0]])
>>> labels2 = np.array([[0, 0, 0, 0, 0], [0, 1, 1, 1, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]])
>>> intensity_pos = np.full((5, 5), fill_value=1.0)
>>> intensity_0 = np.full((5, 5), fill_value=0.0)
>>> intensity_neg = np.full((5, 5), fill_value=-1.0)
>>> regionprops(labels1, intensity_pos)[0].moments_weighted_normalized
array([[       nan,        nan, 0.07407407, 0.        ],
       [       nan, 0.        , 0.        , 0.        ],
       [0.07407407, 0.        , 0.00548697, 0.        ],
       [0.        , 0.        , 0.        , 0.        ]])
>>> regionprops(labels1, intensity_neg)[0].moments_weighted_normalized
array([[        nan,         nan, -0.07407407,         nan],
       [        nan,  0.        ,         nan, -0.        ],
       [-0.07407407,         nan,  0.00548697,         nan],
       [        nan, -0.        ,         nan,  0.        ]])
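
(Regarding the question above: the zeroth weighted central moment is just the summed intensity over the region, so for labels1 with intensity_neg it should come out as 9 × -1.0; a quick check, assuming I read the definition correctly:)

>>> regionprops(labels1, intensity_neg)[0].moments_weighted_central[0, 0]
-9.0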

If the normalization term uses the absolute value instead (… / np.abs(mu0) ** (sum(powers) / nu.ndim + 1)), it gives identical values for the positive case, and analogous values without NaN for the negative case:

array([[        nan,         nan, -0.07407407,  0.       ],
       [        nan,  0.        ,  0.        , -0.       ],
       [-0.07407407,  0.        ,  0.00548697,  0.       ],
       [ 0.        , -0.        ,  0.        ,  0.       ]])

But this is just a guess; someone would need to look deeper into the math.
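
To make the suggestion concrete, here is a standalone sketch of such a variant (simplified to unit pixel spacing, so the scale factor is dropped; moments_normalized_abs is a hypothetical name, not scikit-image API):

import itertools
import numpy as np

def moments_normalized_abs(mu, order=3):
    # Hypothetical variant of skimage.measure.moments_normalized that divides
    # by |mu[0, 0]| ** (sum(powers) / ndim + 1) instead of mu[0, 0] ** (...),
    # avoiding fractional powers of a negative number.
    nu = np.full(mu.shape, np.nan)
    mu0 = np.abs(mu[(0,) * mu.ndim])
    for powers in itertools.product(range(order + 1), repeat=mu.ndim):
        if sum(powers) < 2:
            continue  # normalized moments are undefined below order 2
        nu[powers] = mu[powers] / mu0 ** (sum(powers) / mu.ndim + 1)
    return nu

Applied to the mu from moments_weighted_central in the negative-intensity example above, this should reproduce the NaN-free array shown.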
