Releases: NKI-AI/direct
DIRECT v2.1.0 Release Notes
Announcing the release of DIRECT v2.1.0. In a nutshell, this update brings:
- CMRxRecon Challenge scripts: reproduce team DIRECT's results (second-best ranking and fastest top-performing team).
- New Features: SSL & JSSL MRI reconstruction training, new augmentation transforms, kt subsampling functions.
- Python 3.9 and 3.10 support.
Changes Since v2.0.0
New Features
- SSL MRI Reconstruction (SSDU)
  - `direct.ssl` module added, containing basic SSL MRI functionality with mask splitters for creating two disjoint sets.
  - Added supervised and SSL (SSDU) MRI transforms that split the mask and k-space into input and target subsets.
  - Model engines for SSL training (`VSharpNetSSLEngine`, `Unet2dSSLEngine`).
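The SSDU idea above, training on one subset of the measured k-space while computing the loss on a disjoint subset, can be sketched in a few lines. This is an illustrative NumPy helper, not DIRECT's `direct.ssl` mask splitters; the `split_mask` name and its arguments are hypothetical:

```python
import numpy as np

def split_mask(mask: np.ndarray, ratio: float = 0.4, seed: int = 0):
    """Split a binary sampling mask into two disjoint masks.

    `ratio` is the fraction of sampled locations assigned to the
    target (loss) mask; the rest form the input mask.
    Hypothetical helper -- DIRECT's own splitters live in `direct.ssl`.
    """
    rng = np.random.default_rng(seed)
    sampled = np.flatnonzero(mask)                    # indices of acquired lines
    n_target = int(ratio * len(sampled))
    target_idx = rng.choice(sampled, size=n_target, replace=False)
    target = np.zeros_like(mask)
    target.flat[target_idx] = 1
    input_mask = mask - target                        # disjoint complement within the mask
    return input_mask, target
```

During SSDU training, `input_mask * kspace` would feed the network while the loss is computed only on the `target` subset.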
- JSSL MRI Reconstruction
  - Model engines for JSSL training (`VSharpNetJSSLEngine`, `Unet2dJSSLEngine`).
- Augmentation MRI Transforms
  - New `RescaleKspace`, `PadKspace`, and `CompressCoilModule` transforms.
- Non-Static and kt Sampling
  - Introduced new kt sampling mask functions, including `KtGaussian1DMaskFunc`, `KtRadialMaskFunc`, and `KtUniformMaskFunc`.
  - Added support for dynamic and multislice sampling modes dictated by `MaskFuncMode`, which can be `STATIC`, `MULTISLICE`, or `DYNAMIC`.
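The distinction between static and dynamic (kt) sampling can be illustrated with a minimal uniform lattice pattern. This is a sketch of the general idea only; `MaskMode` below is a stand-in enum, not DIRECT's `MaskFuncMode`, and `kt_uniform_mask` is a hypothetical name:

```python
import numpy as np
from enum import Enum

class MaskMode(Enum):  # illustrative stand-in for DIRECT's MaskFuncMode
    STATIC = "static"
    DYNAMIC = "dynamic"

def kt_uniform_mask(num_frames, num_cols, acceleration, mode=MaskMode.DYNAMIC):
    """Uniform (lattice) k-t mask: every `acceleration`-th line is sampled.
    In DYNAMIC mode the sampling offset shifts with the frame index,
    interleaving the acquired lines across time."""
    mask = np.zeros((num_frames, num_cols), dtype=bool)
    for t in range(num_frames):
        offset = t % acceleration if mode is MaskMode.DYNAMIC else 0
        mask[t, offset::acceleration] = True
    return mask
```

With `MaskMode.STATIC` the same mask is reused for every frame; with `MaskMode.DYNAMIC` the union of a few consecutive frames covers all phase-encoding lines.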
Other Additions
- CMRxRecon Challenge 2023 DIRECT participation scripts:
- JSSL Experiments YAML configuration files, data split lists (.lst files), and running instructions for the experiments detailed in arXiv:2311.15856.
Code Quality Improvements
- DIRECT now ensures compatibility with Python 3.9 and 3.10.
- Enhanced typing information.
Acknowledgments
This release was made possible by the hard work and dedication of our team and contributors:
- Code Development & Maintenance: George Yiasemis (@georgeyiasemis).
- Code Review & Project Management: Jonas Teuwen (@jonasteuwen).
Documentation and Changelogs
Access detailed documentation for DIRECT v2.1.0 at our documentation site.
- v2.0.0 to v2.1.0 Changelog: View Changes
DIRECT v2.0.0 Release Notes
We're excited to announce DIRECT v2.0.0, featuring several advancements and updates. Here's a snapshot of what's new:
- New Features: New MRI transforms, datasets, loss functions and models including challenge winning models.
- User Experience Enhancements: Updated commands and additional examples.
- Performance Optimizations: Addressed memory and performance issues.
- Code Quality Enhancements: Significant improvements for a more robust and reliable codebase.
Dive into the details below to see how DIRECT v2.0.0 can enhance your work.
Major Updates Since v1.0.0
- Major New Features:
- Additional MRI transforms (#210, #226, #233, #235)
- Additional Loss functions (#226, #262)
- Additional MRI models, including challenge-winning models (`RecurrentVarNet`, winner at the MC-MRI challenge 2022; `vSHARPNet`, winner at the CMRxRecon challenge 2023) (#156, #180, #228, #271, #273)
- Additional subsampling functions, including variable-density Poisson, equispaced with exploited symmetry, Gaussian 1D, and Gaussian 2D (#216, #230)
- Additional (Shepp Logan phantom) dataset (#202)
- 3D functionality including transforms and vSHARP 3D (#272, #273)
- User Experience Improvements:
- Performance Improvements:
- Code quality changes (#194, #196, #226, #228, #266)
Changes Since v1.0.4
New Features
- New MRI model architectures: `ConjGradNet` for improved imaging, `IterDualNet` (similar to `JointICNet` without sensitivity map optimization), `ResNet` as a new denoiser model, `VarSplitNet` for variable-splitting optimization with deep learning (#228), and `VSharpNet`, as presented in "vSHARP: variable Splitting Half-quadratic ADMM algorithm for Reconstruction of inverse-Problems", along with its 3D variant `VSharpNet3D` (#270, #273).
- New MRI transforms, including the `EspiritCalibration` transform via a power-method algorithm, `CropKspace`, `RandomFlip`, `RandomRotation`, `ComputePadding`, `ApplyPadding`, `ComputeImage`, `RenameKeys`, and `ComputeScalingFactor`.
- New functionals and loss functions: `NMSE`, `NRMSE`, `NMAE`, `SobelGradL1Loss`, `SobelGradL2Loss`, `hfen_l1`, `hfen_l2`, `HFENLoss`, `HFENL1Loss`, `HFENL2Loss`, `snr`, `SNRLoss`, and `SSIM3DLoss` (#226, #262).
- New masking functions:
  - Gaussian in 1D (rectilinear sampling) and in 2D (point sampling): `Gaussian1DMaskFunc` and `Gaussian2DMaskFunc`, respectively. Implemented using Cython (#230).
- 3D MRI reconstruction functionality.
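Gaussian 1D (rectilinear) sampling draws phase-encoding lines with density concentrated at the k-space center. A plain-NumPy sketch of the idea; DIRECT's `Gaussian1DMaskFunc` is Cython-accelerated and handles autocalibration regions, so the function and parameters below are illustrative only:

```python
import numpy as np

def gaussian_1d_mask(num_cols, acceleration, std_frac=0.15, seed=0):
    """Rectilinear Gaussian mask: draw num_cols // acceleration distinct
    line indices from a Gaussian centred on the middle column."""
    rng = np.random.default_rng(seed)
    n_lines = num_cols // acceleration
    center = num_cols / 2
    chosen = set()
    while len(chosen) < n_lines:
        idx = int(round(rng.normal(center, std_frac * num_cols)))
        if 0 <= idx < num_cols:     # reject draws outside k-space
            chosen.add(idx)
    mask = np.zeros(num_cols, dtype=bool)
    mask[list(chosen)] = True
    return mask
```

The resulting mask samples the low-frequency center densely while leaving the periphery sparse, which is the property the release's Gaussian mask functions exploit.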
Improvements
- Refactored MRI model engines to only implement `forward_method` instead of `_do_iteration` (#226).
- Transforms configuration for training and inference is now implemented by flattening the input `DictConfig` from `omegaconf` using `dict_flatten` (#235, #250).
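Flattening turns a nested configuration tree into single-level, dot-separated keys. A minimal plain-dict sketch of the idea; DIRECT's `dict_flatten` operates on `omegaconf` `DictConfig` objects, so this stand-alone version is an assumption-laden simplification:

```python
def dict_flatten(d, sep=".", _prefix=""):
    """Flatten a nested dict into a single level with dot-separated keys.
    Illustrative sketch only -- DIRECT's version handles omegaconf configs."""
    flat = {}
    for key, value in d.items():
        full_key = f"{_prefix}{sep}{key}" if _prefix else str(key)
        if isinstance(value, dict):
            flat.update(dict_flatten(value, sep=sep, _prefix=full_key))
        else:
            flat[full_key] = value
    return flat
```

For example, a nested transforms config becomes flat keys such as `transforms.cropping.crop`, which makes it easy to pass the whole configuration as keyword-style options.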
Code Quality Changes
- Minor quality improvements (#226).
- Introduction of `DirectEnum` as a base class for clean typing of module options, such as transforms (#228, #266).
Other Changes
- Reformatting with a new version of `black` (#241).
- Updates for new versions of tooling packages (#263).
- Updated documentation (#226, #242 - #272)
Acknowledgments
This release was made possible by the hard work and dedication of our team and contributors:
- Code Development & Maintenance: George Yiasemis (@georgeyiasemis).
- Code Review & Project Management: Jonas Teuwen (@jonasteuwen).
Documentation and Changelogs
Access detailed documentation for DIRECT v2.0.0 at our documentation site.
- v1.0.0 to v2.0.0 Changelog: View Changes
- v1.0.4 to v2.0.0 Changelog: View Changes
New MRI transform features, new loss functions, code quality improvements, and an LR scheduler fix
New features
- New training losses implemented (`NMSE`, `NRMSE`, `NMAE`, `SobelGradL1Loss`, `SobelGradL2Loss`), as well as k-space losses (#226).
- New MRI transforms:
  - `ComputeZeroPadding`: computes padding in the k-space input (#226)
  - `ApplyZeroPadding`: applies padding (#226)
  - `ComputeImage`: computes the image from the k-space input (#226)
  - `RenameKeys`: renames keys in the input (#226)
  - `CropKSpace`: transforms k-space to the image domain, crops it, and backprojects it (#210)
  - `ApplyMask`: applies the sampling mask (#210)
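The normalized losses above share one idea: scale the error by the target's energy so the loss is insensitive to image intensity scale. A NumPy sketch of the definitions (DIRECT's versions are `torch.nn` loss modules; these free functions are illustrative):

```python
import numpy as np

def nmse(pred, target):
    """Normalised mean squared error: ||target - pred||^2 / ||target||^2."""
    return np.sum((target - pred) ** 2) / np.sum(target ** 2)

def nrmse(pred, target):
    """Normalised root mean squared error: the square root of NMSE."""
    return np.sqrt(nmse(pred, target))
```

A zero prediction yields NMSE of exactly 1, which gives these metrics a natural reference point regardless of how the data were scaled.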
- New sub-sampling patterns (#216):
- Variable-density Poisson masking function (Cython implementation)
- FastMRI's Magic masking function
- New model added: CRIM (#156)
Code quality
- `mri_models` performs the `_do_iteration` method; child engines perform `forward_function`, which returns `output_image` and/or `output_kspace` (#226).
Bugfixes
- `torch.where` output needed to be made contiguous before being input to the FFT, due to a new torch version (#216).
- The `HasStateDict` type was changed to include `torch.optim.lr_scheduler._LRScheduler`, which was previously missing, causing the checkpointer not to save/load the `state_dict` of LR schedulers (#218).
Contributors
Full Changelog: v1.0.3...v1.0.4
Fixed bugs affecting RIM performance
What's Changed
- RIM performance fixed (#208)
- `modulus_if_complex` reinstated, but now requires setting the complex axis.
- MRI model metrics in `MRIEngine` check whether the prediction is complex (complex axis last) and apply the modulus if it is (#208).
Contributors
Full Changelog: v1.0.2...v1.0.3
CVPR experiments, SheppLogan datasets, New training command
New features
- Normalised ConvGRU model (`NormConv2dGRU`) following the implementation of `NormUnet2d` (#176).
- Shepp-Logan datasets based on "2D & 3D Shepp-Logan phantom standards for MRI", 19th International Conference on Systems Engineering, IEEE, 2008 (#202):
  - `SheppLoganProtonDataset`
  - `SheppLoganT1Dataset`
  - `SheppLoganT2Dataset`
- Sensitivity map simulator producing Gaussian distributions with the number of centers equal to the number of desired coils (#202)
- Documentation updates (#180, #183, #196)
- Experiments for our CVPR 2022 paper "Recurrent Variational Network: A Deep Learning Inverse Problem Solver applied to the task of Accelerated MRI Reconstruction" as shown in the paper (#180)
- Tutorials/examples for Calgary Campinas Dataset and Google Colab added (#199)
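The Gaussian sensitivity map simulator mentioned above can be sketched as follows: place one Gaussian lobe per coil around the image and normalize so the coil-combined (root-sum-of-squares) sensitivity is one everywhere. The function name, geometry, and normalization below are assumptions for illustration, not DIRECT's implementation:

```python
import numpy as np

def simulate_sensitivity_maps(shape, num_coils, width=0.5):
    """Simulate coil sensitivity maps as Gaussian lobes placed evenly
    around the image border, normalised so sum_c |S_c|^2 == 1 per pixel.
    Illustrative sketch only."""
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    angles = 2 * np.pi * np.arange(num_coils) / num_coils
    cy = ny / 2 + (ny / 2) * np.cos(angles)   # one Gaussian center per coil
    cx = nx / 2 + (nx / 2) * np.sin(angles)
    sigma = width * max(ny, nx)
    maps = np.stack([
        np.exp(-((yy - cy[c]) ** 2 + (xx - cx[c]) ** 2) / (2 * sigma ** 2))
        for c in range(num_coils)
    ])
    norm = np.sqrt((maps ** 2).sum(axis=0))   # root-sum-of-squares over coils
    return maps / norm
```

Normalizing by the root-sum-of-squares is a common convention so that coil-combining a uniform phantom returns the phantom unchanged.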
Code quality
- Removed ambiguous complex assertions (#194).
- `modulus_if_complex` function removed; `modulus` needs an axis to be specified (#194).
- Added tests/end-to-end tests; coverage up to 81% (#196).
- Improved typing (#196).
- `mypy` and `pylint` fixes (#196).
- Docker image updated (#204).
- Refactored `direct train`, `direct predict`, and `python3 projects/predict_val.py` to not necessarily require a path to data, as some datasets (e.g. the Shepp-Logan datasets) don't require it. `build_dataset_from_input` now relies on `**kwargs`. Configs and docs refactored to comply with the above (#202).
- Train command example:

      direct train <experiment_directory> --num-gpus <number_of_gpus> --cfg <path_or_url_to_yaml_file> \
          [--training-root <training_data_root> --validation-root <validation_data_root>] [--other-flags]
Bugfixes
Contributors
Full Changelog: v1.0.1...v1.0.2
Bug fix release reducing memory consumption and improving code quality fixes
In v1.0.1 we mainly provide bug fixes (including a memory leak) and code quality improvements.
New features
- Added the `direct upload-to-bucket` tool by @jonasteuwen in #167.
Code Quality Changes
- Fixed high memory consumption and code quality improvements by @georgeyiasemis in #174:
  - `h5py 3.6.0` to `h5py 3.3.0`
  - `FastMRIDataset` header reader function refactored
- MRI model engines now only perform the `do_iteration` method; `MRIModelEngine` now includes all methods for MRI models.
- The `evaluate` method of MRI models has been rewritten.
Full Changelog: v1.0.0...v1.0.1
First stable release including baselines
In the first stable release, we provide the implementation of baselines, including trained models on the publicly available datasets.
New features
- Command-line interface: `direct train` and `direct predict` replace the corresponding Python scripts (#151, #139).
- Added baseline models (#122, #121, #123).
- Updated documentation
- Checkpoints and configs can now be loaded from remote URL (#133, #127, #135), and training configuration now supports the ability to initialize from a URL (#141)
- New sampling masks (radial and spiral for Cartesian data) (#140)
- Implement recon models by (#123)
- Added `RecurrentVarNet` implementation (#131)
- Added the `direct train` CLI interface (closes #109, #139).
Code quality
- Removed the experimental named tensor feature, enabling the update to PyTorch 1.9 (PR #103)
- Removed large files from the repository; these are now stored in an S3 bucket and downloaded when used.
- Code coverage checks and badges are added (#153)
- Added several tests; code coverage is now 73% (#144)
- Tests are now in a separate folder (#142)
- Outdated checkpoints are removed (#146)
- New models are added, requiring that `MRIReconstruction` be merged with `RIM` (#113).
- Allow reading checkpoints from S3 storage (#133, closes #135).
- Allow for remote config files (#133, closes #135).
Documentation
Internal changes
- Experimental named tensors are removed (PR #103)
- PyTorch 1.9 and Python 3.8 are now required.
Bugfixes
- Evaluation function had a problem where the last volume sometimes was dropped (#111)
- The checkpointer tried to load `state_dict` if the key is of the format `__<>__` (#144, closes #143).
- Fixed a crash when the validation set is empty (#125).
New Contributors
Full Changelog: v0.2...v1.0.0
v0.2
Major release with many bug fixes, and baseline models
Many new features have been added, of which most will likely have introduced breaking changes. Several performance
issues have been addressed.
An improved version to the winning solution for the Calgary-Campinas challenge is also added to v0.2, including model weights.
New features
- Baseline model for the Calgary-Campinas challenge (see `model_zoo.md`).
- Added the FastMRI 2020 dataset.
- Challenge metrics for FastMRI and the Calgary-Campinas challenge.
- Allow initialization from zero-filled or external input.
- Allow initialization from something other than the zero-filled image in `train_rim.py` by passing a directory.
- Refactored the environment class, allowing the use of models other than RIM.
- Added an inference key to the configuration which sets the proper transforms to be used during training. This became necessary when we introduced the possibility of multiple training and validation sets; an inference script honoring these changes was created.
- Separate validation and testing scripts for the Calgary-Campinas challenge.
Technical changes in functions
- `direct.utils.io.write_json` serializes real-valued numpy and torch objects.
- `direct.utils.str_to_class` now supports partial argument parsing, e.g. `fft2(centered=False)` will be properly parsed in the config.
- Added support for regularizers.
- The engine is now aware of the backward and forward operators; these are no longer passed in the environment but are properties of the engine.
- PyTorch 1.6 and Python 3.8 are now required.
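The partial argument parsing mentioned for `direct.utils.str_to_class` can be illustrated with the standard library: parse a string such as `fft2(centered=False)` and return a callable with the keyword pre-bound via `functools.partial`. This is a simplified sketch of the idea; DIRECT's real function resolves names from module paths, whereas the `namespace` dict here is an assumption for a self-contained example:

```python
import ast
import functools

def str_to_callable(spec, namespace):
    """Parse 'name(kw=value, ...)' and return functools.partial(name, **kwargs).

    Simplified sketch of the idea behind `direct.utils.str_to_class`;
    `namespace` maps names to callables (an assumption of this sketch).
    """
    node = ast.parse(spec, mode="eval").body
    if isinstance(node, ast.Call):
        func = namespace[node.func.id]
        # literal_eval safely evaluates constants like False, 3, "abc"
        kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
        return functools.partial(func, **kwargs)
    return namespace[node.id]  # plain name, no arguments
```

Using `ast` instead of `eval` keeps the config parsing safe: only literal keyword values are accepted, never arbitrary expressions.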
Work in progress
- Added a preliminary 3D RIM version. This includes changing several transforms to dimension-independent versions and also intends to support 3D + time data.
Bugfixes
- Fixed progressive slowdown during validation by refactoring the engine and turning lists of dataloaders into a generator; memory pinning was also disabled to alleviate this problem.
- Fixed a bug where, when initializing from a previous checkpoint, additional models were not loaded.
- Fixed normalization of the sensitivity map in `rim_engine.py`.
- `direct.data.samplers.BatchVolumeSampler` returned the wrong length, which caused dropping of volumes during validation.
Necessary bug fixes and logging improvements
In this version we provide necessary bugfixes for v0.1.1 and several improvements:
Big changes
- Bugfixes in FastMRI dataset class.
- Improvements in logging.
- Improvements by adding more exceptions for unexpected occurrences.
- Allow the reading of subsets of the dataset by providing lists.
- Add an augmentation to pad coils (zero-padding), allowing the batching of images with a different number of coils.
- Add the ability to add additional models to the engine class by configuration (WIP).
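The coil-padding augmentation above solves a batching problem: multicoil samples have different numbers of coils, so their coil axes must be zero-padded to a common size before stacking. A hypothetical NumPy sketch of the idea, not DIRECT's transform:

```python
import numpy as np

def pad_coils(kspace, num_coils):
    """Zero-pad the coil axis (axis 0) of `kspace` up to `num_coils`
    so samples with different coil counts can be batched together.
    Illustrative helper only."""
    coils = kspace.shape[0]
    if coils >= num_coils:
        return kspace
    pad = np.zeros((num_coils - coils, *kspace.shape[1:]), dtype=kspace.dtype)
    return np.concatenate([kspace, pad], axis=0)
```

Zero coils contribute nothing to a root-sum-of-squares combination, which is why zero-padding is a safe way to equalize coil counts within a batch.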
Stylistic changes
Black is now used as the code formatter. This made certain parts hard to read, so they were refactored to improve readability after applying Black.
Breaking changes
As you can expect from a pre-release, while we intend to keep it to a minimum, it is possible that things break, especially in the configuration files. In case you encounter one, please open an issue.
Pytorch 1.6, mixed precision update
In v0.1.1:
- The PyTorch version has been updated to 1.6 (also in the Dockerfile); PyTorch 1.6 is now required.
- Mixed precision support with the `--mixed-precision` flag. With supporting hardware, this can speed up training by more than 33% and reduce memory usage by more than 40%.