All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog.
- MRR metrics calculation (#886)
- docs for MetricCallbacks (#947)
- `SoftMax`, `CosFace`, `ArcFace` layers to contrib (#939)
- `catalyst-dl tune` config specification - now optuna params are grouped under `study_params` (#947)
- `IRunner._prepare_for_stage` logic moved to `IStageBasedRunner.prepare_for_stage` (#947)
  - now we create components in the following order: datasets/loaders, model, criterion, optimizer, scheduler, callbacks
- `AMPOptimizerCallback` - fix grad clip fn support (#948)
- removed deprecated docs types (#947) (#952)
- docs for a few files (#952)
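For context on the MRR entry above: mean reciprocal rank averages, over queries, the reciprocal of the rank at which the first relevant item appears. A minimal pure-Python sketch (the function name and signature are illustrative only, not Catalyst's actual API):

```python
def mean_reciprocal_rank(ranked_lists, relevant_sets):
    """MRR over queries: ranked_lists[i] is the ranked predictions for
    query i, relevant_sets[i] is the set of relevant items for query i."""
    total = 0.0
    for preds, relevant in zip(ranked_lists, relevant_sets):
        for rank, item in enumerate(preds, start=1):
            if item in relevant:
                total += 1.0 / rank  # reciprocal rank of first hit
                break
        # queries with no relevant item retrieved contribute 0
    return total / len(ranked_lists)
```

For example, if one query finds its relevant item at rank 2 (reciprocal rank 0.5) and another at rank 1 (1.0), the MRR is 0.75.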
- Runner registry support for Config API (#936)
- `catalyst-dl tune` command - Optuna with Config API integration for AutoML hyperparameters optimization (#937)
- `OptunaPruningCallback` alias for `OptunaCallback` (#937)
- AdamP and SGDP to `catalyst.contrib.nn.criterion` (#942)
- Config API components preparation logic moved to `utils.prepare_config_api_components` (#936)
- `force` and `bert-level` keywords to `catalyst-data text2embedding` (#917)
- `OptunaCallback` to `catalyst.contrib` (#915)
- `DynamicQuantizationCallback` and `catalyst-dl quantize` script for fast quantization of your model (#890)
- Multi-scheduler support for multi-optimizer case (#923)
- Native mixed-precision training support (#740)
- `OptimizerCallback` - flag `use_fast_zero_grad` for a faster (and hacky) version of `optimizer.zero_grad()` (#927)
- `IOptimizerCallback`, `ISchedulerCallback`, `ICheckpointCallback`, `ILoggerCallback` as core abstractions for Callbacks (#933)
- flag `USE_AMP` for PyTorch AMP usage (#933)
- autoresume option for Config API (#907)
- a few issues with TF projector (#917)
- batch sampler speed issue (#921)
- add apex key-value optimizer support (#924)
- runtime warning for PyTorch 1.6 (#920)
- Apex syncbn usage (#920)
- Catalyst dependency on system git (#922)
- `CMCScoreCallback` (#880)
- kornia augmentations `BatchTransformCallback` (#862)
- `average_precision` and `mean_average_precision` metrics (#883)
- `MultiLabelAccuracyCallback`, `AveragePrecisionCallback` and `MeanAveragePrecisionCallback` callbacks (#883)
- minimal examples for multi-class and multi-label classification (#883)
- experimental TPU support (#893)
- add `Imagenette`, `Imagewoof`, and `Imagewang` datasets (#902)
- `IMetricCallback`, `IBatchMetricCallback`, `ILoaderMetricCallback`, `BatchMetricCallback`, `LoaderMetricCallback` abstractions (#897)
- `HardClusterSampler` in-batch sampler (#888)
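For the `average_precision` metric mentioned above: average precision is the mean of the precision values computed at each rank where a relevant item occurs. A minimal pure-Python sketch over a binary relevance list in ranked order (illustrative only, not Catalyst's implementation):

```python
def average_precision(targets):
    """targets: 0/1 relevance of items, in ranked (descending score) order."""
    hits = 0
    precisions = []
    for rank, is_relevant in enumerate(targets, start=1):
        if is_relevant:
            hits += 1
            precisions.append(hits / rank)  # precision@rank at each hit
    return sum(precisions) / max(hits, 1)   # 0.0 when nothing is relevant
```

For example, relevant items at ranks 1 and 3 give (1/1 + 2/3) / 2 = 5/6. A `mean_average_precision` then simply averages this value across queries, which is consistent with the merge of the two code paths noted below.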
- all registries merged to one `catalyst.registry` (#883)
- `mean_average_precision` logic merged with `average_precision` (#897)
- all imports moved to absolute (#905)
- `catalyst.contrib.data` merged to `catalyst.data` (#905)
- {breaking} Catalyst transform `ToTensor` was renamed to `ImageToTensor` (#905)
- `TracerCallback` moved to `catalyst.dl` (#905)
- `ControlFlowCallback`, `PeriodicLoaderCallback` moved to `catalyst.core` (#905)
- `log` parameter to `WandbLogger` (#836)
- hparams experiment property (#839)
- add docs build on push to master branch (#844)
- `WrapperCallback` and `ControlFlowCallback` (#842)
- `BatchOverfitCallback` (#869)
- `overfit` flag for Config API (#869)
- `InBatchSamplers`: `AllTripletsSampler` and `HardTripletsSampler` (#825)
- Renaming (#837)
  - `SqueezeAndExcitation` -> `cSE`
  - `ChannelSqueezeAndSpatialExcitation` -> `sSE`
  - `ConcurrentSpatialAndChannelSqueezeAndChannelExcitation` -> `scSE`
  - `_MetricCallback` -> `IMetricCallback`
  - `dl.Experiment.process_loaders` -> `dl.Experiment._get_loaders`
- `LRUpdater` became an abstract class (#837)
- `calculate_confusion_matrix_from_arrays` changed params order (#837)
- `dl.Runner.predict_loader` uses `_prepare_inner_state` and cleans `experiment` (#863)
- `toml` added to the dependencies (#872)
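Regarding the `calculate_confusion_matrix_from_arrays` parameter-order change noted above: building a confusion matrix from flat label arrays can be sketched in pure Python as follows (a hypothetical standalone function, not the Catalyst utility itself; the library's actual argument order should be checked against its docs):

```python
def confusion_matrix_from_arrays(ground_truth, prediction, num_classes):
    """Rows index the ground-truth class, columns the predicted class."""
    matrix = [[0] * num_classes for _ in range(num_classes)]
    for gt, pred in zip(ground_truth, prediction):
        matrix[gt][pred] += 1
    return matrix
```

Swapping the first two arguments transposes the result, which is exactly why a params-order change like the one in #837 is a breaking change for callers relying on positional arguments.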
- removed `crc32c` dependency (#872)
- `workflows/deploy_push.yml` failed to push some refs (#864)
- `.dependabot/config.yml` contained invalid details (#781)
- `LanguageModelingDataset` (#841)
- `global_*` counters in `Runner` (#858)
- `EarlyStoppingCallback` considers first epoch as bad (#854)
- annoying numpy warning (#860)
- `PeriodicLoaderCallback` overwrites best state (#867)
- `OneCycleLRWithWarmup` (#851)
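On the `EarlyStoppingCallback` first-epoch fix above: the usual pitfall is counting the very first score as a "bad" epoch because no best value exists yet to compare against. A pure-Python sketch of the intended behavior (an illustrative class, not Catalyst's callback; "lower is better" is assumed here):

```python
class EarlyStopper:
    """Patience-based early stopping. The first observed score only
    initializes the best value and is never counted as a bad epoch."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = None
        self.num_bad = 0

    def should_stop(self, score):
        if self.best is None or score < self.best - self.min_delta:
            self.best = score   # first score or an improvement
            self.num_bad = 0
            return False
        self.num_bad += 1       # no improvement this epoch
        return self.num_bad >= self.patience
```

With the buggy variant, a run with `patience=1` would stop immediately after epoch one regardless of the score, since the missing best value made the comparison fail.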
- docs structure was updated (#822)
- `utils.process_components` moved from `utils.distributed` to `utils.components` (#822)
- `catalyst.core.state.State` merged to `catalyst.core.runner._Runner` (#823) (backward compatibility included)
  - `catalyst.core.callback.Callback` now works directly with `catalyst.core.runner._Runner`
  - `state_kwargs` renamed to `stage_kwargs`
- Circle loss implementation (#802)
- BatchBalanceSampler for metric learning and classification (#806)
- `CheckpointCallback`: new argument `load_on_stage_start` which accepts `str` and `Dict[str, str]` (#797)
- LanguageModelingDataset to catalyst[nlp] (#808)
- Extra counters for batches, loaders and epochs (#809)
- `TracerCallback` (#789)
- `CheckpointCallback`: additional logic for argument `load_on_stage_end` - accepts `str` and `Dict[str, str]` (#797)
- counters names for batches, loaders and epochs (#809)
- `utils.trace_model`: changed logic - `runner` argument was changed to `predict_fn` (#789)
- redesigned `contrib.data` and `contrib.datasets` (#820)
- `catalyst.utils.meters` moved to `catalyst.tools` (#820)
- `catalyst.contrib.utils.tools.tensorboard` moved to `catalyst.contrib.tools` (#820)
- Added new docs and minimal examples (#747)
- Added experiment to registry (#746)
- Added examples with extra metrics (#750)
- Added VAE example (#752)
- Added gradient tracking (#679)
- Added dependabot (#771)
- Added new test for Config API (#768)
- Added Visdom logger (#769)
- Added new github actions and templates (#777)
- Added `save_n_best=0` support for CheckpointCallback (#784)
- Added new contrib modules for CV (#793)
- Added new github actions CI (#791)
- Changed `Alchemy` dependency (from `alchemy-catalyst` to `alchemy`) (#748)
- Changed warnings logic (#719)
- Github actions CI was updated (#754)
- Changed default `num_epochs` to 1 for `State` (#756)
- Changed `state.batch_in`/`state.batch_out` to `state.input`/`state.output` (#763)
- Moved `torchvision` dependency from `catalyst` to `catalyst[cv]` (#738)
- Fixed docker dependencies (#753)
- Fixed `text2embedding` script (#722)
- Fixed `utils/sys` exception (#762)
- Returned `detach` method (#766)
- Fixed timer division by zero (#749)
- Fixed minimal torch version (#775)
- Fixed segmentation tutorial (#778)
- Fixed Dockerfile dependency (#780)