MAINT: 1.6.0 rc2 backports #13279

Merged
13 commits merged on Dec 22, 2020
1 change: 1 addition & 0 deletions .github/workflows/linux.yml
@@ -19,6 +19,7 @@ jobs:
- uses: actions/checkout@v2
- name: Configuring Test Environment
run: |
sudo apt-get update
sudo apt install python3.7-dbg python3.7-dev libatlas-base-dev liblapack-dev gfortran libgmp-dev libmpfr-dev libsuitesparse-dev ccache swig libmpc-dev
free -m
python3.7-dbg --version # just to check
84 changes: 36 additions & 48 deletions .github/workflows/macos.yml
@@ -29,65 +29,53 @@ jobs:
with:
python-version: ${{ matrix.python-version }}

- name: Setup openblas
- name: Setup gfortran
run: |
# this setup is originally taken from the .travis.yml
# virtualenv needed for the multibuild steps
pip install virtualenv
brew install libmpc gcc@6 suitesparse swig
# The openblas binary used here was built using a gfortran older than 7,
# so it needs the older abi libgfortran.
export FC=gfortran-6
export CC=gcc-6
export CXX=g++-6
mkdir gcc_aliases
pushd gcc_aliases
ln -s `which gcc-6` gcc
ln -s `which g++-6` g++
ln -s `which gfortran-6` gfortran
# make gcc aliases in current dir
export PATH=$PWD/gcc_aliases:$PATH
popd
touch config.sh
git clone --depth=1 https://github.com/matthew-brett/multibuild.git
# this is taken verbatim from the numpy azure pipeline setup.
set -xe
# same version of gfortran as the open-libs and numpy-wheel builds
curl -L https://github.com/MacPython/gfortran-install/raw/master/archives/gfortran-4.9.0-Mavericks.dmg -o gfortran.dmg
GFORTRAN_SHA256=$(shasum -a 256 gfortran.dmg)
KNOWN_SHA256="d2d5ca5ba8332d63bbe23a07201c4a0a5d7e09ee56f0298a96775f928c3c4b30 gfortran.dmg"
if [ "$GFORTRAN_SHA256" != "$KNOWN_SHA256" ]; then
echo sha256 mismatch
exit 1
fi
hdiutil attach -mountpoint /Volumes/gfortran gfortran.dmg
sudo installer -pkg /Volumes/gfortran/gfortran.pkg -target /
otool -L /usr/local/gfortran/lib/libgfortran.3.dylib
# Manually symlink gfortran-4.9 to plain gfortran for f2py.
# No longer needed after Feb 13 2020 as gfortran is already present
# and the attempted link errors. Keep this for future reference.
# ln -s /usr/local/bin/gfortran-4.9 /usr/local/bin/gfortran
# designed for travis, but probably work on github actions
source multibuild/common_utils.sh
source multibuild/travis_steps.sh
before_install
export CFLAGS="-arch x86_64"
export CXXFLAGS="-arch x86_64"
printenv
- name: Setup openblas
run: |
# this is taken verbatim from the numpy azure pipeline setup.
set -xe
target=$(python tools/openblas_support.py)
ls -lR $target
# manually link to appropriate system paths
cp $target/lib/lib* /usr/local/lib/
cp $target/include/* /usr/local/include/
# Grab openblas
OPENBLAS_PATH=$(python tools/openblas_support.py)
# Copy it to the working directory
mv $OPENBLAS_PATH ./
# otool -L /usr/local/lib/libopenblas*
# Modify the openblas dylib so it can be used in its current location
# Also make it use the current install location for libgfortran, libquadmath, and libgcc_s.
pushd openblas/lib
install_name_tool -id $PWD/libopenblasp-r*.dylib libopenblas.dylib
install_name_tool -change /usr/local/gfortran/lib/libgfortran.3.dylib /usr/local/Cellar/gcc@6/6.5.0_5/lib/gcc/6/libgfortran.3.dylib libopenblas.dylib
install_name_tool -change /usr/local/gfortran/lib/libquadmath.0.dylib /usr/local/Cellar/gcc@6/6.5.0_5/lib/gcc/6/libquadmath.0.dylib libopenblas.dylib
install_name_tool -change /usr/local/gfortran/lib/libgcc_s.1.dylib /usr/local/Cellar/gcc@6/6.5.0_5/lib/gcc/6/libgcc_s.1.dylib libopenblas.dylib
popd
echo "[openblas]" > site.cfg
echo "libraries = openblas" >> site.cfg
echo "library_dirs = $PWD/openblas/lib" >> site.cfg
echo "include_dirs = $PWD/openblas/include" >> site.cfg
echo "runtime_library_dirs = $PWD/openblas/lib" >> site.cfg
# remove a spurious gcc/gfortran toolchain install
rm -rf /usr/local/Cellar/gcc/9.2.0_2
#
export PATH="$PATH:$PWD/openblas"
echo "library_dirs = /usr/local/lib" >> site.cfg
echo "include_dirs = /usr/local/include" >> site.cfg
echo "runtime_library_dirs = /usr/local/lib" >> site.cfg
- name: A few other packages
run: |
brew install libmpc suitesparse swig
- name: Install packages
run: |
pip install ${{ matrix.numpy-version }}
pip install setuptools wheel cython pytest pytest-xdist pybind11 pytest-xdist mpmath gmpy2
- name: Test SciPy
run: |
export DYLD_LIBRARY_PATH=/usr/local/Cellar/gcc@6/6.5.0_5/lib/gcc/6/:$DYLD_LIBRARY_PATH
python -u runtests.py
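
Note on the gfortran step above: the workflow downloads gfortran-4.9.0-Mavericks.dmg, compares its SHA-256 against a pinned value, and aborts on mismatch before installing. A minimal Python sketch of the same checksum gate; the digest is the one pinned in the workflow, while the helper name and chunk size are illustrative:

```python
# Hypothetical helper mirroring the shell SHA-256 check in the workflow above.
import hashlib
import sys

KNOWN_SHA256 = "d2d5ca5ba8332d63bbe23a07201c4a0a5d7e09ee56f0298a96775f928c3c4b30"

def sha256_of(path):
    """Compute the SHA-256 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    if sha256_of("gfortran.dmg") != KNOWN_SHA256:
        print("sha256 mismatch", file=sys.stderr)
        sys.exit(1)
```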
12 changes: 11 additions & 1 deletion doc/release/1.6.0-notes.rst
@@ -402,6 +402,7 @@ Authors
* Sambit Panda
* Dima Pasechnik
* Tirth Patel +
* Matti Picus
* Paweł Redzyński +
* Vladimir Philipenko +
* Philipp Thölke +
@@ -449,7 +450,7 @@ Authors
* ZhihuiChen0903 +
* Jacob Zhong +

A total of 121 people contributed to this release.
A total of 122 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.

@@ -589,6 +590,8 @@ Issues closed for 1.6.0
* `#13182 <https://github.com/scipy/scipy/issues/13182>`__: Key appears twice in \`test_optimize.test_show_options\`
* `#13191 <https://github.com/scipy/scipy/issues/13191>`__: \`scipy.linalg.lapack.dgesjv\` overwrites original arrays if...
* `#13207 <https://github.com/scipy/scipy/issues/13207>`__: TST: Erratic test failure in test_cossin_separate
* `#13221 <https://github.com/scipy/scipy/issues/13221>`__: BUG: pavement.py glitch
* `#13248 <https://github.com/scipy/scipy/issues/13248>`__: ndimage: improper cval handling for complex-valued inputs

Pull requests for 1.6.0
-----------------------
@@ -927,6 +930,13 @@ Pull requests for 1.6.0
* `#13190 <https://github.com/scipy/scipy/pull/13190>`__: BUG: optimize: fix a duplicate key bug for \`test_show_options\`
* `#13192 <https://github.com/scipy/scipy/pull/13192>`__: BUG:linalg: Add overwrite option to gejsv wrapper
* `#13194 <https://github.com/scipy/scipy/pull/13194>`__: BUG: slsqp should be able to use rel_step
* `#13199 <https://github.com/scipy/scipy/pull/13199>`__: [skip travis] DOC: 1.6.0 release notes
* `#13203 <https://github.com/scipy/scipy/pull/13203>`__: fix typos
* `#13209 <https://github.com/scipy/scipy/pull/13209>`__: TST:linalg: set the seed for cossin test
* `#13212 <https://github.com/scipy/scipy/pull/13212>`__: [DOC] Backtick and directive consistency.
* `#13217 <https://github.com/scipy/scipy/pull/13217>`__: REL: add necessary setuptools and numpy version pins in pyproject.toml...
* `#13226 <https://github.com/scipy/scipy/pull/13226>`__: BUG: pavement.py file handle fixes
* `#13249 <https://github.com/scipy/scipy/pull/13249>`__: Handle cval correctly for ndimage functions with complex-valued...
* `#13253 <https://github.com/scipy/scipy/pull/13253>`__: BUG,MAINT: Ensure all Pool objects are closed
* `#13260 <https://github.com/scipy/scipy/pull/13260>`__: CI: fix macOS testing
* `#13269 <https://github.com/scipy/scipy/pull/13269>`__: CI: github actions: In the linux dbg tests, update apt before...
4 changes: 2 additions & 2 deletions pavement.py
@@ -209,7 +209,7 @@ def compute_md5(idirs):
for fn in sorted(released):
with open(fn, 'rb') as f:
m = md5(f.read())
checksums.append('%s %s' % (m.hexdigest(), os.path.basename(f)))
checksums.append('%s %s' % (m.hexdigest(), os.path.basename(fn)))

return checksums

@@ -221,7 +221,7 @@ def compute_sha256(idirs):
for fn in sorted(released):
with open(fn, 'rb') as f:
m = sha256(f.read())
checksums.append('%s %s' % (m.hexdigest(), os.path.basename(f)))
checksums.append('%s %s' % (m.hexdigest(), os.path.basename(fn)))

return checksums

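The two pavement.py hunks above fix the same bug: `os.path.basename` was given the open file object `f` instead of the path string `fn`, which raises TypeError on Python 3. A simplified sketch of the corrected pattern, taking a list of file paths directly rather than scanning release directories as the real function does:

```python
import os
from hashlib import md5

def compute_md5(paths):
    # Hash each released file and record "<digest> <basename>"; the basename
    # must come from the path string fn, not the open file object f.
    checksums = []
    for fn in sorted(paths):
        with open(fn, 'rb') as f:
            m = md5(f.read())
        checksums.append('%s %s' % (m.hexdigest(), os.path.basename(fn)))
    return checksums
```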
5 changes: 1 addition & 4 deletions scipy/_lib/tests/test__util.py
@@ -124,8 +124,7 @@ def test_mapwrapper_parallel():
assert_(excinfo.type is ValueError)

# can also set a PoolWrapper up with a map-like callable instance
try:
p = Pool(2)
with Pool(2) as p:
q = MapWrapper(p.map)

assert_(q._own_pool is False)
@@ -135,8 +134,6 @@ def test_mapwrapper_parallel():
# because it didn't create it
out = p.map(np.sin, in_arg)
assert_equal(list(out), out_arg)
finally:
p.close()


# get our custom ones and a few from the "import *" cases
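The test change above swaps a try/finally for a `with Pool(2) as p:` block, so the pool is cleaned up even if an assertion fails midway. A minimal standalone sketch of the same pattern; the example data is illustrative, not the arrays used by the actual test:

```python
from multiprocessing import Pool

import numpy as np
from scipy._lib._util import MapWrapper

def main():
    in_arg = np.linspace(0, np.pi, 5)  # illustrative data
    with Pool(2) as p:
        # Wrap an existing pool's map; the wrapper does not own the pool,
        # so it must not be the one to close it.
        q = MapWrapper(p.map)
        assert q._own_pool is False
        out = list(p.map(np.sin, in_arg))
    # The pool is terminated here by the with-block, pass or fail.
    assert np.allclose(out, np.sin(in_arg))

if __name__ == "__main__":
    main()
```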
3 changes: 1 addition & 2 deletions scipy/integrate/_quad_vec.py
@@ -267,7 +267,6 @@ def quad_vec(f, a, b, epsabs=1e-200, epsrel=1e-8, norm='2', cache_size=100e6, li
else:
norm_func = norm_funcs[norm]

mapwrapper = MapWrapper(workers)

parallel_count = 128
min_intervals = 2
@@ -341,7 +340,7 @@ def quad_vec(f, a, b, epsabs=1e-200, epsrel=1e-8, norm='2', cache_size=100e6, li
}

# Process intervals
with mapwrapper:
with MapWrapper(workers) as mapwrapper:
ier = NOT_CONVERGED

while intervals and len(intervals) < limit:
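Same idea in quad_vec above: the MapWrapper is now constructed inside the `with` statement, so any worker pool it spawns is always terminated, even if an exception escapes the subdivision loop. A small illustrative helper using that pattern; the function name is hypothetical:

```python
import numpy as np
from scipy._lib._util import MapWrapper

def parallel_eval(f, points, workers=1):
    # Creating the wrapper in the with-statement guarantees cleanup of any
    # pool it owns, even when f raises for some point.
    with MapWrapper(workers) as mapwrapper:
        return list(mapwrapper(f, points))

if __name__ == "__main__":
    print(parallel_eval(np.sin, np.linspace(0.0, 1.0, 5), workers=2))
```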
39 changes: 24 additions & 15 deletions scipy/ndimage/filters.py
@@ -51,23 +51,32 @@ def _invalid_origin(origin, lenw):
return (origin < -(lenw // 2)) or (origin > (lenw - 1) // 2)


def _complex_via_real_components(func, input, weights, output, **kwargs):
def _complex_via_real_components(func, input, weights, output, cval, **kwargs):
"""Complex convolution via a linear combination of real convolutions."""
complex_input = input.dtype.kind == 'c'
complex_weights = weights.dtype.kind == 'c'
if complex_input and complex_weights:
# real component of the output
func(input.real, weights.real, output=output.real, **kwargs)
output.real -= func(input.imag, weights.imag, output=None, **kwargs)
func(input.real, weights.real, output=output.real,
cval=numpy.real(cval), **kwargs)
output.real -= func(input.imag, weights.imag, output=None,
cval=numpy.imag(cval), **kwargs)
# imaginary component of the output
func(input.real, weights.imag, output=output.imag, **kwargs)
output.imag += func(input.imag, weights.real, output=None, **kwargs)
func(input.real, weights.imag, output=output.imag,
cval=numpy.real(cval), **kwargs)
output.imag += func(input.imag, weights.real, output=None,
cval=numpy.imag(cval), **kwargs)
elif complex_input:
func(input.real, weights, output=output.real, **kwargs)
func(input.imag, weights, output=output.imag, **kwargs)
func(input.real, weights, output=output.real, cval=numpy.real(cval),
**kwargs)
func(input.imag, weights, output=output.imag, cval=numpy.imag(cval),
**kwargs)
else:
func(input, weights.real, output=output.real, **kwargs)
func(input, weights.imag, output=output.imag, **kwargs)
if numpy.iscomplexobj(cval):
raise ValueError("Cannot provide a complex-valued cval when the "
"input is real.")
func(input, weights.real, output=output.real, cval=cval, **kwargs)
func(input, weights.imag, output=output.imag, cval=cval, **kwargs)
return output


@@ -104,10 +113,10 @@ def correlate1d(input, weights, axis=-1, output=None, mode="reflect",
if complex_weights:
weights = weights.conj()
weights = weights.astype(numpy.complex128, copy=False)
kwargs = dict(axis=axis, mode=mode, cval=cval, origin=origin)
kwargs = dict(axis=axis, mode=mode, origin=origin)
output = _ni_support._get_output(output, input, complex_output=True)
return _complex_via_real_components(correlate1d, input, weights,
output, **kwargs)
output, cval, **kwargs)

output = _ni_support._get_output(output, input)
weights = numpy.asarray(weights, dtype=numpy.float64)
@@ -638,12 +647,12 @@ def _correlate_or_convolve(input, weights, output, mode, cval, origin,
# As for numpy.correlate, conjugate weights rather than input.
weights = weights.conj()
kwargs = dict(
mode=mode, cval=cval, origin=origin, convolution=convolution
mode=mode, origin=origin, convolution=convolution
)
output = _ni_support._get_output(output, input, complex_output=True)

return _complex_via_real_components(_correlate_or_convolve, input,
weights, output, **kwargs)
weights, output, cval, **kwargs)

origins = _ni_support._normalize_sequence(origin, input.ndim)
weights = numpy.asarray(weights, dtype=numpy.float64)
@@ -889,9 +898,9 @@ def uniform_filter1d(input, size, axis=-1, output=None,
origin)
else:
_nd_image.uniform_filter1d(input.real, size, axis, output.real, mode,
cval, origin)
numpy.real(cval), origin)
_nd_image.uniform_filter1d(input.imag, size, axis, output.imag, mode,
cval, origin)
numpy.imag(cval), origin)
return output


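What the filters change above amounts to at the API level: cval is no longer baked into the shared kwargs; each real-valued pass receives the matching component (numpy.real(cval) or numpy.imag(cval)), and a complex cval combined with purely real input is rejected with ValueError. A quick usage check, assuming SciPy >= 1.6.0 with this fix applied:

```python
import numpy as np
from scipy import ndimage

x = np.array([1 + 1j, 2 + 2j, 3 + 3j])
w = np.array([1.0, 1.0, 1.0])

# With mode='constant', a complex cval now pads the real part with
# cval.real and the imaginary part with cval.imag.
out = ndimage.correlate1d(x, w, mode='constant', cval=5 + 5j)
# out[0] = cval + x[0] + x[1] = (5+5j) + (1+1j) + (2+2j) = 8+8j
print(out)
```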
40 changes: 25 additions & 15 deletions scipy/ndimage/interpolation.py
@@ -329,12 +329,14 @@ def geometric_transform(input, mapping, output_shape=None,
output = _ni_support._get_output(output, input, shape=output_shape,
complex_output=complex_output)
if complex_output:
kwargs = dict(order=order, mode=mode, cval=cval, prefilter=prefilter,
kwargs = dict(order=order, mode=mode, prefilter=prefilter,
output_shape=output_shape,
extra_arguments=extra_arguments,
extra_keywords=extra_keywords)
geometric_transform(input.real, mapping, output=output.real, **kwargs)
geometric_transform(input.imag, mapping, output=output.imag, **kwargs)
geometric_transform(input.real, mapping, output=output.real,
cval=numpy.real(cval), **kwargs)
geometric_transform(input.imag, mapping, output=output.imag,
cval=numpy.imag(cval), **kwargs)
return output

if prefilter and order > 1:
@@ -437,9 +439,11 @@ def map_coordinates(input, coordinates, output=None, order=3,
output = _ni_support._get_output(output, input, shape=output_shape,
complex_output=complex_output)
if complex_output:
kwargs = dict(order=order, mode=mode, cval=cval, prefilter=prefilter)
map_coordinates(input.real, coordinates, output=output.real, **kwargs)
map_coordinates(input.imag, coordinates, output=output.imag, **kwargs)
kwargs = dict(order=order, mode=mode, prefilter=prefilter)
map_coordinates(input.real, coordinates, output=output.real,
cval=numpy.real(cval), **kwargs)
map_coordinates(input.imag, coordinates, output=output.imag,
cval=numpy.imag(cval), **kwargs)
return output
if prefilter and order > 1:
padded, npad = _prepad_for_spline_filter(input, mode, cval)
@@ -551,9 +555,11 @@ def affine_transform(input, matrix, offset=0.0, output_shape=None,
complex_output=complex_output)
if complex_output:
kwargs = dict(offset=offset, output_shape=output_shape, order=order,
mode=mode, cval=cval, prefilter=prefilter)
affine_transform(input.real, matrix, output=output.real, **kwargs)
affine_transform(input.imag, matrix, output=output.imag, **kwargs)
mode=mode, prefilter=prefilter)
affine_transform(input.real, matrix, output=output.real,
cval=numpy.real(cval), **kwargs)
affine_transform(input.imag, matrix, output=output.imag,
cval=numpy.imag(cval), **kwargs)
return output
if prefilter and order > 1:
padded, npad = _prepad_for_spline_filter(input, mode, cval)
@@ -655,9 +661,11 @@ def shift(input, shift, output=None, order=3, mode='constant', cval=0.0,
# import under different name to avoid confusion with shift parameter
from scipy.ndimage.interpolation import shift as _shift

kwargs = dict(order=order, mode=mode, cval=cval, prefilter=prefilter)
_shift(input.real, shift, output=output.real, **kwargs)
_shift(input.imag, shift, output=output.imag, **kwargs)
kwargs = dict(order=order, mode=mode, prefilter=prefilter)
_shift(input.real, shift, output=output.real, cval=numpy.real(cval),
**kwargs)
_shift(input.imag, shift, output=output.imag, cval=numpy.imag(cval),
**kwargs)
return output
if prefilter and order > 1:
padded, npad = _prepad_for_spline_filter(input, mode, cval)
@@ -763,9 +771,11 @@ def zoom(input, zoom, output=None, order=3, mode='constant', cval=0.0,
# import under different name to avoid confusion with zoom parameter
from scipy.ndimage.interpolation import zoom as _zoom

kwargs = dict(order=order, mode=mode, cval=cval, prefilter=prefilter)
_zoom(input.real, zoom, output=output.real, **kwargs)
_zoom(input.imag, zoom, output=output.imag, **kwargs)
kwargs = dict(order=order, mode=mode, prefilter=prefilter)
_zoom(input.real, zoom, output=output.real, cval=numpy.real(cval),
**kwargs)
_zoom(input.imag, zoom, output=output.imag, cval=numpy.imag(cval),
**kwargs)
return output
if prefilter and order > 1:
padded, npad = _prepad_for_spline_filter(input, mode, cval)
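The interpolation routines get the same treatment: geometric_transform, map_coordinates, affine_transform, shift, and zoom now forward numpy.real(cval) and numpy.imag(cval) to the per-component calls instead of passing the complex cval through unchanged. A short usage sketch with shift, assuming SciPy >= 1.6.0 with this fix:

```python
import numpy as np
from scipy import ndimage

x = np.array([1 + 1j, 2 + 2j, 3 + 3j])

# The vacated position is filled with the full complex cval; before this
# change the constant value was handled improperly for complex input
# (gh-13248).
out = ndimage.shift(x, 1, order=0, mode='constant', cval=9 + 9j)
# out == [9+9j, 1+1j, 2+2j]
print(out)
```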
2 changes: 2 additions & 0 deletions scipy/ndimage/src/ni_support.c
@@ -310,6 +310,7 @@ int NI_ExtendLine(double *buffer, npy_intp line_length,
break;
/* kkkkkkkk|abcd]kkkkkkkk */
case NI_EXTEND_CONSTANT:
case NI_EXTEND_GRID_CONSTANT:
val = extend_value;
dst = buffer;
while (size_before--) {
@@ -670,6 +671,7 @@ int NI_InitFilterOffsets(PyArrayObject *array, npy_bool *footprint,
}
break;
case NI_EXTEND_CONSTANT:
case NI_EXTEND_GRID_CONSTANT:
if (cc < 0 || cc >= len)
cc = *border_flag_value;
break;
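The two C hunks above make the 'grid-constant' boundary mode fall through to the same handling as 'constant', so extended line samples take the user-supplied cval and out-of-bounds filter offsets are flagged the same way. A quick way to observe the padding from Python, assuming the grid modes are accepted by the 1-D filters as described in the 1.6.0 release notes:

```python
import numpy as np
from scipy import ndimage

x = np.array([1.0, 2.0, 3.0])

# For filters, 'grid-constant' pads with cval just like 'constant'.
out = ndimage.uniform_filter1d(x, size=3, mode='grid-constant', cval=10.0)
# out[0] averages [cval, x[0], x[1]] = (10 + 1 + 2) / 3
print(out)
```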