Releases: explosion/thinc
v9.0.0: better learning rate schedules, integration of thinc-apple-ops
The main new feature of Thinc v9 is support for learning rate schedules that take the training dynamics into account. For example, the new `plateau.v1` schedule scales the learning rate when no progress has been made after a given number of evaluation steps. Another visible change is that `AppleOps` is now part of Thinc, so it is no longer necessary to install `thinc-apple-ops` to use the AMX units on Apple Silicon.
✨ New features and improvements
- Learning rate schedules can now take the training step as well as an arbitrary set of keyword arguments. This makes it possible to pass information such as the parameter name and last evaluation score to determine the learning rate (#804).
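As a plain-Python sketch of the idea (hypothetical names and decay rule, not Thinc's actual API), a schedule becomes a callable that receives the step plus keyword arguments such as the parameter name:

```python
# Sketch (hypothetical, not Thinc's API): a learning-rate schedule that
# receives the training step plus arbitrary keyword arguments, such as the
# parameter name, and uses them to pick a rate.
def per_parameter_lr(step, *, key=None, **extra):
    base = 0.001
    lr = base / (1.0 + 0.01 * step)  # simple inverse-time decay
    if key is not None and key.endswith("bias"):
        lr *= 0.5  # made-up rule: halve the rate for bias parameters
    return lr

assert per_parameter_lr(0, key="layer1-weights") == 0.001
assert per_parameter_lr(0, key="layer1-bias") == 0.0005
```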
- Added the `plateau.v1` schedule (#842). This schedule scales the learning rate if training was found to be stagnant for a given period.
- The functionality of `thinc-apple-ops` is integrated into Thinc (#927). Starting with this version of Thinc, it is no longer necessary to install `thinc-apple-ops`.
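The plateau behaviour can be sketched in plain Python (an illustration of the idea only, not Thinc's `plateau.v1` implementation; all names are made up):

```python
class PlateauSketch:
    """Illustrative plateau schedule: scale the learning rate after the
    evaluation score has failed to improve for `patience` evaluations."""

    def __init__(self, lr, scale=0.5, patience=2):
        self.lr = lr
        self.scale = scale
        self.patience = patience
        self.best = float("-inf")
        self.stagnant = 0

    def __call__(self, step, *, last_score=None, **extra):
        if last_score is not None:
            if last_score > self.best:
                self.best = last_score      # progress: reset the counter
                self.stagnant = 0
            else:
                self.stagnant += 1          # no improvement this evaluation
                if self.stagnant >= self.patience:
                    self.lr *= self.scale   # stagnant: scale the rate down
                    self.stagnant = 0
        return self.lr

sched = PlateauSketch(0.001, scale=0.5, patience=2)
lrs = [sched(i, last_score=s) for i, s in enumerate([0.70, 0.80, 0.80, 0.80])]
```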
🔴 Bug fixes
- Fix the use of thread-local storage (#917).
⚠️ Backwards incompatibilities
- Thinc v9.0.0 only supports Python 3.9 and later.
- Schedules are no longer generators, but implementations of the `Schedule` class (#804).
- `thinc.backends.linalg` has been removed (#742). The same functionality is provided by implementations in BLAS that are better tested and more performant.
- `thinc.extra.search` has been removed (#743). The beam search functionality in this module was strongly coupled to the spaCy transition parser and has therefore moved to spaCy in v4.
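For code that previously consumed schedules as generators, the shape of the change can be sketched like this (plain-Python stand-ins, not Thinc's classes):

```python
# v8-style: schedules were generators, advanced implicitly with next().
def decaying_gen(base, decay):
    step = 0
    while True:
        yield base / (1.0 + decay * step)
        step += 1

# v9-style: schedules are objects called with an explicit step (plus
# optional keyword arguments), so any step can be queried directly.
# Stand-in class, not Thinc's Schedule.
class DecayingSchedule:
    def __init__(self, base, decay):
        self.base = base
        self.decay = decay

    def __call__(self, step, **extra):
        return self.base / (1.0 + self.decay * step)

gen = decaying_gen(0.001, 0.01)
sched = DecayingSchedule(0.001, 0.01)
assert [next(gen) for _ in range(3)] == [sched(s) for s in range(3)]
```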
👥 Contributors
@adrianeboyd, @danieldk, @honnibal, @ines, @kadarakos, @shadeMe, @svlandeg
v8.2.3: Fix CuPy compatibility and fix strings2arrays for sequences of unequal length
v8.2.2: Parametric attention with key transformation
v8.2.1: Support Python 3.12
✨ New features and improvements
Updates and binary wheels for Python 3.12.
👥 Contributors
v8.2.0: Disable automatic MXNet and TensorFlow imports
✨ New features and improvements
To improve loading times and reduce conflicts, MXNet and TensorFlow are no longer imported automatically (#890).
⚠️ Backwards incompatibilities
MXNet and TensorFlow support needs to be enabled explicitly. Previously, MXNet and TensorFlow were imported automatically if they were available in the current environment.
To enable MXNet:

```python
from thinc.api import enable_mxnet

enable_mxnet()
```

To enable TensorFlow:

```python
from thinc.api import enable_tensorflow

enable_tensorflow()
```
With spaCy CLI commands you can provide this custom code using `-c code.py`. For training use `spacy train -c code.py`, and to package your code with your pipeline use `spacy package -c code.py`.
Future deprecation warning: built-in MXNet and TensorFlow support will be removed in Thinc v9. If you need MXNet or TensorFlow support in the future, you can transition to using a custom copy of the current `MXNetWrapper` or `TensorFlowWrapper` in your package or project.
👥 Contributors
v8.1.12: Support zero-length batches and hidden sizes in reductions
v8.1.11: Support Pydantic v2, update package setup
✨ New features and improvements
- Update NumPy build constraints for NumPy v1.25 (#885).
- Switch from `distutils` to `setuptools`/`sysconfig` (#888).
- Allow Pydantic v2 using transitional v1 support (#891).
📖 Documentation and examples
- Fix typo in example code (#879).
👥 Contributors
@adrianeboyd, @Ankush-Chander, @danieldk, @honnibal, @ines, @svlandeg
v8.1.10: Lazy loading for CuPy kernels and additional CuPy and MPS improvements
✨ New features and improvements
- Implement `pad` as a CUDA kernel (#860).
- Avoid h2d - d2h roundtrip when using `unflatten` (#861).
- Improve exception when CuPy/PyTorch MPS is not installed (#863).
- Lazily load custom `cupy` kernels (#870).
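The lazy-loading pattern behind the last item can be sketched in pure Python (`compile_kernel` is a made-up stand-in for the expensive CuPy kernel compilation, not Thinc's actual code):

```python
from functools import lru_cache

# Record which "kernels" have been compiled, to show compilation is deferred.
COMPILED = []

def compile_kernel(name):
    """Stand-in for an expensive compilation step (e.g. building a CuPy kernel)."""
    COMPILED.append(name)
    return f"<kernel {name}>"

@lru_cache(maxsize=None)
def get_kernel(name):
    # Compiled only on first request; subsequent calls return the cached object.
    return compile_kernel(name)

# Nothing is compiled at import time; the first call triggers compilation,
# and later calls reuse the cached kernel.
assert COMPILED == []
k1 = get_kernel("pad")
k2 = get_kernel("pad")
assert k1 is k2
assert COMPILED == ["pad"]
```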
🔴 Bug fixes
- Initially load TorchScript models on CPU for MPS devices (#864).
👥 Contributors
@adrianeboyd, @danieldk, @honnibal, @ines, @shadeMe, @svlandeg
v8.1.9: Type fixes
v8.1.8: New faster mapping layer and bug fixes for resizeable layer
✨ New features and improvements
🔴 Bug fixes
- Make resizable layer work with textcat and transformers (#820).
📖 Documentation
👥 Contributors
@adrianeboyd, @danieldk, @essenmitsosse, @honnibal, @ines, @kadarakos, @patjouk, @polm, @svlandeg