NumPy 1.23.0 Release Notes
The NumPy 1.23.0 release continues the ongoing work to improve the handling and promotion of dtypes, increase the execution speed, clarify the documentation, and expire old deprecations. The highlights are:
- Implementation of `loadtxt` in C, greatly improving its performance.
- Exposing DLPack at the Python level for easy data exchange.
- Changes to the promotion and comparisons of structured dtypes.
- Improvements to f2py.

See below for the details.
A masked array specialization of `ndenumerate` is now available as `numpy.ma.ndenumerate`. It provides an alternative to `numpy.ndenumerate` and skips masked values by default.
(gh-20020)
`numpy.from_dlpack` has been added to allow easy exchange of data using the DLPack protocol. It accepts Python objects that implement the `__dlpack__` and `__dlpack_device__` methods and returns an ndarray object which is generally a view of the data of the input object.
(gh-21145)
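A minimal sketch of the exchange: NumPy arrays implement `__dlpack__` and `__dlpack_device__` themselves, so an ndarray can serve as a simple producer for demonstration purposes (in practice the producer would typically be another array library).

```python
import numpy as np

# An ndarray is itself a valid DLPack producer, so it can stand in for
# an array coming from another library.
a = np.arange(4.0)
b = np.from_dlpack(a)

# The result is generally a view of the producer's data: no copy is made.
print(np.shares_memory(a, b))  # True
```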
Setting `__array_finalize__` to `None` is deprecated. It must now be a method and may wish to call `super().__array_finalize__(obj)` after checking for `None` or if the NumPy version is sufficiently new.
(gh-20766)
Using `axis=32` (`axis=np.MAXDIMS`) in many cases had the same meaning as `axis=None`. This is deprecated and `axis=None` must be used instead.
(gh-20920)
The hook function `PyDataMem_SetEventHook` has been deprecated and the demonstration of its use in tools/allocation_tracking has been removed. The ability to track allocations is now built into Python via `tracemalloc`.
(gh-20394)
`numpy.distutils` has been deprecated, as a result of `distutils` itself being deprecated. It will not be present in NumPy for Python >= 3.12, and will be removed completely 2 years after the release of Python 3.12. For more details, see distutils-status-migration.
(gh-20875)
`numpy.loadtxt` will now give a `DeprecationWarning` when an integer `dtype` is requested but the value is formatted as a floating point number.
(gh-21663)
The `NpzFile.iteritems()` and `NpzFile.iterkeys()` methods have been removed as part of the continued removal of Python 2 compatibility. This concludes the deprecation from 1.15.
(gh-16830)
The `alen` and `asscalar` functions have been removed.
(gh-20414)
The `UPDATEIFCOPY` array flag has been removed together with the enum `NPY_ARRAY_UPDATEIFCOPY`. The associated (and deprecated) `PyArray_XDECREF_ERR` was also removed. These were all deprecated in 1.14. They are replaced by `NPY_ARRAY_WRITEBACKIFCOPY`, which requires calling `PyArray_ResolveWritebackIfCopy` before the array is deallocated.
(gh-20589)
Exceptions will be raised during array-like creation. When an object raised an exception during access of the special attributes `__array__` or `__array_interface__`, this exception was usually ignored. This behaviour was deprecated in 1.21, and the exception will now be raised.
(gh-20835)
Multidimensional indexing with non-tuple values is not allowed. Previously, code such as `arr[ind]` where `ind = [[0, 1], [0, 1]]` produced a `FutureWarning` and was interpreted as a multidimensional index (i.e., `arr[tuple(ind)]`). Now this example is treated like an array index over a single dimension (`arr[array(ind)]`). Multidimensional indexing with anything but a tuple was deprecated in NumPy 1.15.
(gh-21029)
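The difference between the two interpretations can be sketched as follows: a tuple still indexes multiple dimensions, while a nested list is now a fancy index over the first axis only.

```python
import numpy as np

arr = np.arange(9).reshape(3, 3)
ind = [[0, 1], [0, 1]]

# A tuple is still a multidimensional index: selects arr[0, 0] and arr[1, 1].
pairs = arr[tuple(ind)]
print(pairs)        # [0 4]

# A nested list is now a single-axis (fancy) index, equivalent to
# arr[np.array(ind)]: each entry selects a whole row.
fancy = arr[ind]
print(fancy.shape)  # (2, 2, 3)
```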
Changing to a dtype of a different size in F-contiguous arrays is no longer permitted. This had been deprecated since NumPy 1.11.0. See below for an extended explanation of the effects of this change.
(gh-20722)
The `crackfortran` parser now understands operator and assignment definitions in a module. They are added to the `body` list of the module, which contains a new key `implementedby` listing the names of the subroutines or functions implementing the operator or assignment.
(gh-15006)
As a result, one does not need to use `public` or `private` statements to specify derived type access properties.
(gh-15844)
This parameter behaves the same as `ndmin` from `numpy.loadtxt`.
(gh-20500)
`numpy.loadtxt` now supports an additional `quotechar` keyword argument, which is not set by default. Using `quotechar='"'` will read quoted fields as used by the Excel CSV dialect.
Further, it is now possible to pass a single callable rather than a dictionary for the `converters` argument.
(gh-20580)
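A short sketch of the `quotechar` option on a small in-memory input (the two-column data here is made up for illustration):

```python
import io
import numpy as np

# Fields quoted in the Excel CSV dialect are read transparently
# once quotechar is set.
data = io.StringIO('1,"2"\n3,"4"\n')
arr = np.loadtxt(data, delimiter=",", quotechar='"')
print(arr)  # [[1. 2.]
            #  [3. 4.]]
```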
Previously, viewing an array with a dtype of a different item size required that the entire array be C-contiguous. This limitation would unnecessarily force the user to make contiguous copies of non-contiguous arrays before being able to change the dtype.
This change affects not only `ndarray.view`, but also other construction mechanisms, including the discouraged direct assignment to `ndarray.dtype`.
This change expires the deprecation regarding the viewing of F-contiguous arrays, described elsewhere in the release notes.
(gh-20722)
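A minimal sketch of the relaxed requirement: only the last axis needs to be contiguous for a different-itemsize view, so a column-sliced array no longer needs a copy first.

```python
import numpy as np

a = np.arange(12, dtype=np.int16).reshape(3, 4)
sub = a[:, :2]           # not contiguous as a whole, but each row (last axis) is

# Under NumPy >= 1.23 this view succeeds without a copy;
# older versions raised an error for non-C-contiguous input.
v = sub.view(np.int32)   # each pair of int16 reinterpreted as one int32
print(v.shape)           # (3, 1)
```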
For F77 inputs, `f2py` will generate `modname-f2pywrappers.f` unconditionally, though it may be empty. For free-form inputs, `modname-f2pywrappers.f` and `modname-f2pywrappers2.f90` will both be generated unconditionally and may be empty. This allows writing generic output rules in `cmake`, `meson`, and other build systems. The older behavior can be restored by passing `--skip-empty-wrappers` to `f2py`. See f2py-meson for usage details.
(gh-21187)
The parameter `keepdims` was added to the functions `numpy.average` and `numpy.ma.average`. The parameter has the same meaning as it does in reduction functions such as `numpy.sum` or `numpy.mean`.
(gh-21485)
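A quick sketch of the new parameter: with `keepdims=True` the reduced axis is retained with length one, so the result broadcasts directly against the input, just as with `numpy.mean`.

```python
import numpy as np

x = np.arange(6.0).reshape(2, 3)

# The reduced axis is kept with length 1.
avg = np.average(x, axis=1, keepdims=True)
print(avg.shape)  # (2, 1)

# The shape-(2, 1) result broadcasts against the shape-(2, 3) input,
# e.g. to compute per-row deviations.
dev = x - avg
```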
`np.unique` was changed in 1.21 to treat all `NaN` values as equal and return a single `NaN`. Setting `equal_nan=False` will restore the pre-1.21 behavior and treat each `NaN` as unique. The default is `True`.
(gh-21623)
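A small sketch of both behaviors on the same input:

```python
import numpy as np

a = np.array([1.0, np.nan, np.nan, 2.0])

merged = np.unique(a)                   # default: NaNs collapse to one entry
legacy = np.unique(a, equal_nan=False)  # pre-1.21: every NaN is kept
print(merged.size, legacy.size)  # 3 4
```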
Previously, `np.linalg.norm` promoted to `float64` when the `ord` argument was not one of the explicitly listed values, e.g. `ord=3`:

>>> f32 = np.float32([1, 2])
>>> np.linalg.norm(f32, 2).dtype
dtype('float32')
>>> np.linalg.norm(f32, 3).dtype
dtype('float64')  # numpy 1.22
dtype('float32')  # numpy 1.23

This change affects only `float32` and `float16` vectors with `ord` other than `-Inf`, `0`, `1`, `2`, and `Inf`.
(gh-17709)
In general, NumPy now defines correct, but slightly limited, promotion for structured dtypes by promoting the subtypes of each field instead of raising an exception:

>>> np.result_type(np.dtype("i,i"), np.dtype("i,d"))
dtype([('f0', '<i4'), ('f1', '<f8')])

For promotion, matching field names, order, and titles is enforced; padding, however, is ignored. Promotion involving structured dtypes now always ensures native byte order for all fields (which may change the result of `np.concatenate`) and ensures that the result will be "packed", i.e. all fields are ordered contiguously and padding is removed. See structured_dtype_comparison_and_promotion for further details.
The `repr` of aligned structures will now never print the long form including `offsets` and `itemsize` unless the structure includes padding not guaranteed by `align=True`.
In alignment with the above changes to the promotion logic, the casting safety has been updated:

- "equiv" enforces matching names and titles. The itemsize is allowed to differ due to padding.
- "safe" allows mismatching field names and titles.
- The cast safety is limited by the cast safety of each included field.
- The order of fields is used to decide the cast safety of each individual field. Previously, the field names were used, and only unsafe casts were possible when names mismatched.

The most important change here is that name mismatches are now considered "safe" casts.
(gh-19226)
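The name-mismatch rule can be checked directly with `np.can_cast` on two structured dtypes with identical layout but different field names:

```python
import numpy as np

a = np.dtype([("x", "i4")])
b = np.dtype([("y", "i4")])  # identical layout, different field name

# Field-name mismatches are now considered "safe" ...
print(np.can_cast(a, b, casting="safe"))   # True

# ... while "equiv" still enforces matching names and titles.
print(np.can_cast(a, b, casting="equiv"))  # False
```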
NumPy can no longer be compiled with `NPY_RELAXED_STRIDES_CHECKING=0`. Relaxed strides have been the default for many years, and the option was initially introduced to allow a smoother transition.
(gh-20220)
The row counting of `numpy.loadtxt` was fixed. `loadtxt` ignores fully empty lines in the file, but previously counted them towards `max_rows`. When `max_rows` is used and the file contains empty lines, these will now not be counted. Previously, it was possible that the result contained fewer than `max_rows` rows even though more data was available to be read. If the old behaviour is required, `itertools.islice` may be used:

import itertools
lines = itertools.islice(open("file"), 0, max_rows)
result = np.loadtxt(lines, ...)
While generally much faster and improved, `numpy.loadtxt` may now fail to convert certain strings to numbers that were previously read successfully. The most important cases are:

- Parsing floating point values such as 1.0 into integers is now deprecated.
- Parsing hexadecimal floats such as 0x3p3 will fail.
- An _ was previously accepted as a thousands delimiter (100_000). This will now result in an error.

If you experience these limitations, they can all be worked around by passing appropriate `converters=`. NumPy now supports passing a single converter to be used for all columns to make this more convenient. For example, `converters=float.fromhex` can read hexadecimal float numbers and `converters=int` will be able to read 100_000.

Further, the error messages have been generally improved. However, this means that error types may differ. In particular, a `ValueError` is now always raised when parsing of a single entry fails.
(gh-20580)
This means subclasses can now use `super().__array_finalize__(obj)` without worrying whether `ndarray` is their superclass or not. The actual call remains a no-op.
(gh-20766)
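A minimal subclass sketch showing the now-safe `super()` call (the `info` attribute is a hypothetical example of state propagated to views):

```python
import numpy as np

class MyArray(np.ndarray):
    def __array_finalize__(self, obj):
        # Safe regardless of where MyArray sits in the class hierarchy:
        # ndarray's implementation is now an explicit (no-op) method.
        super().__array_finalize__(obj)
        # Propagate a hypothetical attribute from the source array, if any.
        self.info = getattr(obj, "info", None)

a = np.zeros(3).view(MyArray)
a.info = "tagged"
b = a[1:]      # slicing creates a new instance and triggers __array_finalize__
print(b.info)  # tagged
```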
With VSX4/Power10 enablement, the new instructions available in Power ISA 3.1 can be used to accelerate some NumPy operations, e.g., floor_divide, modulo, etc.
(gh-20821)
The `numpy.fromiter` function now supports object and subarray dtypes. Please see the function documentation for examples.
(gh-20993)
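A brief sketch of both newly supported dtype kinds:

```python
import numpy as np

# Object dtype: items from the iterator are stored as-is.
obj = np.fromiter([{"a": 1}, [2, 3]], dtype=object)

# Subarray dtype: each iterator item fills one length-2 subarray.
sub = np.fromiter([(1, 2), (3, 4)], dtype=np.dtype((np.int64, 2)))
print(sub.shape)  # (2, 2)
```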
Compiling is preceded by a detection phase to determine whether the underlying libc supports certain math operations. Previously this code did not respect the proper signatures. Fixing this enables compilation for the `wasm-ld` backend (compilation to WebAssembly) and reduces the number of warnings.
(gh-21154)
`np.kron` now maintains subclass information, such as masked arrays, while computing the Kronecker product of the inputs:

>>> x = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [1, 0]])
>>> np.kron(x, x)
masked_array(
  data=[[1, --, --, --],
        [--, 4, --, --],
        [--, --, 4, --],
        [--, --, --, 16]],
  mask=[[False, True, True, True],
        [ True, False, True, True],
        [ True, True, False, True],
        [ True, True, True, False]],
  fill_value=999999)

Warning: `np.kron` output now follows `ufunc` ordering (`multiply`) to determine the output class type:

>>> class myarr(np.ndarray):
...     __array_priority__ = -1
>>> a = np.ones([2, 2])
>>> ma = myarr(a.shape, a.dtype, a.data)
>>> type(np.kron(a, ma)) == np.ndarray
False  # Before it was True
>>> type(np.kron(a, ma)) == myarr
True
(gh-21262)
`numpy.loadtxt` is now generally much faster than before, as most of it is now implemented in C.
(gh-20580)
Reduction operations like `numpy.sum`, `numpy.prod`, `numpy.add.reduce`, and `numpy.logical_and.reduce` on contiguous integer-based arrays are now much faster.
(gh-21001)
`numpy.where` is now much faster than previously on unpredictable/random input data.
(gh-21130)
Many operations on NumPy scalars are now significantly faster, although rare operations (e.g. with 0-D arrays rather than scalars) may be slower in some cases. However, even with these improvements, users who want the best performance for their scalars may want to convert a known NumPy scalar into a Python one using `scalar.item()`.
(gh-21188)
`numpy.kron` is about 80% faster, as the product is now computed using broadcasting.
(gh-21354)