We proudly present our newest product, a well-defined extension for TensorFlow-Keras users!
Some features are not available yet and will be implemented in the future.
Current progress on this work in progress:
- optimizers:
    - Manually switched optimizers (`Adam2SGD` and `NAdam2NSGD`).
    - Automatically switched optimizer (`SWATS`).
    - Advanced adaptive optimizers (`Adabound`, `Nadabound` and `MNadam`, supporting `amsgrad`).
    - Wrapped default optimizers.
- layers:
    - Ghost layer (used to construct a trainable input layer).
    - Tied dense layer for the symmetric autoencoder.
    - Extended dropout and noise layers.
    - Extended activation layers.
    - Extended normalization layers.
    - Group convolutional layers.
    - Modern convolutional layers (supporting group convolution).
    - Modern transposed convolutional layers (supporting group convolution).
    - Tied (trivial) transposed convolutional layers for the symmetric autoencoder.
    - Residual layers (or blocks) and their transposed versions.
    - ResNeXt layers (or blocks) and their transposed versions.
    - Inception-v4 layers (or blocks) and their transposed versions.
    - InceptionRes-v2 layers (or blocks) and their transposed versions.
    - InceptionPlus layers (or blocks) and their transposed versions.
    - External interface for using generic Python functions.
    - Dropout method options for all available modern layers.
- data:
    - Basic h5py (HDF5) IO handles.
    - Basic SQLite IO handles.
    - Basic Bcolz IO handles.
    - Basic CSV IO handles.
    - Basic JSON IO handles.
    - Data parsing utilities.
- estimators:
    - VGG16
    - U-Net
    - ResNet
- functions:
    - (loss): Lovasz loss for IoU.
    - (loss): Linearly interpolated loss for IoU.
    - (metrics): signal-to-noise ratio (SNR and PSNR).
    - (metrics): Pearson correlation coefficient.
    - (metrics): IoU / Jaccard index.
- utilities:
    - Revised save and load model functions.
    - Beholder plug-in callback.
    - Revised ModelCheckpoint callback.
    - LossWeightsScheduler callback (for changing the loss weights during training).
    - OptimizerSwitcher callback (for using manually switched optimizers).
    - ModelWeightsReducer callback (parameter decay strategies, including L1 decay and L2 decay).
    - Extended data visualization tools.
    - TensorBoard log file parser.
Check the branch `demos` to learn more details.
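As an illustration of the signal-to-noise metrics listed above, here is a minimal plain-Python sketch. The function names and signatures are illustrative only; the package's actual implementations in `.functions` operate on tensors.

```python
import math

def snr(signal, estimate):
    """Signal-to-noise ratio in dB: 10 * log10(P_signal / P_noise)."""
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum((s - e) ** 2 for s, e in zip(signal, estimate)) / len(signal)
    return 10.0 * math.log10(p_signal / p_noise)

def psnr(signal, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB for a given peak value."""
    mse = sum((s - e) ** 2 for s, e in zip(signal, estimate)) / len(signal)
    return 10.0 * math.log10(peak * peak / mse)
```

For example, `psnr([0.0, 1.0], [0.0, 0.5])` has MSE 0.125, so the result is `10 * log10(8)`, roughly 9.03 dB.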
- Finish `H5Converter` in `.data`.
- Fix some bugs and add features in `.utilities.draw`.
- Add `webfiles.zip` for `.utilities.tboard`.
- Fix a small bug in `.utilities`.
- Enhance `save_model`/`load_model` to support storing/recovering customized loss/metric classes.
- Finish the submodule `.utilities.draw`, providing extended visualizations.
- Finish the submodule `.utilities.tboard`, providing extended TensorBoard interfaces.
- Fix some bugs.
- Let `.save_model` support compression.
- Revise the optional arguments for `RestrictSub` in `.layers`.
- Fix a bug for `H5GCombiner` in `.data` when adding more parsers.
- Finish `H5VGParser` in `.data`; this parser is used for splitting a validation set from a dataset.
- Finish `ExpandDims` in `.layers`; it is a layer version of `tf.expand_dims`.
- Enable `ModelCheckpoint` in `.utilities.callbacks` to support the option for not saving the optimizer.
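The core idea behind splitting a validation set from a dataset, as `H5VGParser` does, can be sketched in plain Python. The function below is illustrative only and does not reflect the package's actual API:

```python
import random

def split_validation(num_samples, val_ratio=0.2, seed=None):
    """Shuffle sample indices, then split off a validation subset.

    Returns (train_indices, validation_indices), which are disjoint
    and together cover all samples.
    """
    indices = list(range(num_samples))
    random.Random(seed).shuffle(indices)
    n_val = int(num_samples * val_ratio)
    return indices[n_val:], indices[:n_val]
```

A parser built on such a split can then serve the two index lists as separate training and validation sequences.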
- Fix a bug for serializing `Ghost` in `.layers`.
- Finish activation layers in `.layers`, including `Slice`, `Restrict` and `RestrictSub`.
- Let `.save_model`/`.load_model` support storing/recovering variable loss weights.
- Finish `LossWeightsScheduler` in `.utilities.callbacks`.
- Enable `H5SupSaver` in `.data` to add more data to an existing file.
- Enable `H5SupSaver` in `.data` to expand a dataset if data is dumped in series.
- Finish `MNadam`, `Adabound` and `Nadabound` in `.optimizers`.
- Slightly change `.optimizers.mixture`.
- Change the quick interface in `.optimizers`.
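Adabound-style optimizers clip the adaptive step size between two bounds that tighten toward a final SGD-like learning rate as training proceeds. The sketch below shows that clipping schedule only (the bound formulas follow the Adabound paper; constants and names are illustrative, not the package's code):

```python
def adabound_clip(adaptive_lr, step, final_lr=0.1, gamma=1e-3):
    """Clip an adaptive per-parameter learning rate into the Adabound
    trust region. Early in training the region is very wide (Adam-like
    behavior); as `step` grows both bounds converge to final_lr, so the
    optimizer smoothly turns into SGD with that rate."""
    lower = final_lr * (1.0 - 1.0 / (gamma * step + 1.0))
    upper = final_lr * (1.0 + 1.0 / (gamma * step))
    return max(lower, min(adaptive_lr, upper))
```

At step 1 the bounds span several orders of magnitude, so almost any adaptive rate passes through unchanged; at step 10^6 the clipped rate is pinned near `final_lr`.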
- Finish the demo version of `SWATS` in `.optimizers`. It needs further tests.
- Fix a small bug for `.load_model`.
- Change the warning backend to the TensorFlow version.
- Finish `ModelWeightsReducer` in `.utilities.callbacks`.
- Finish `Ghost` in `.layers`.
- Fix small bugs.
- Fix the bugs of manually switched optimizers in `.optimizers`. Now they need to be used together with a callback, or the phase has to be switched by `switch()`.
- Add a plain momentum SGD optimizer to the fast interface in `.optimizers`.
- Finish `OptimizerSwitcher` in `.utilities.callbacks`. It is used to control the phase of the manually switched optimizers.
- Improve the efficiency of `Adam2SGD` and `NAdam2NSGD` in `.optimizers`.
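Conceptually, a manually switched optimizer such as `Adam2SGD` keeps two update rules and exposes a `switch()` that flips the active phase, which is what the `OptimizerSwitcher` callback triggers during training. A toy one-dimensional sketch of that idea (this is an illustration, not the package's implementation):

```python
class ManualSwitchOptimizer:
    """Toy 1-D optimizer: an Adam-like phase, then plain SGD after switch()."""

    def __init__(self, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.m = self.v = 0.0   # Adam first/second moment estimates
        self.t = 0              # Adam step counter (for bias correction)
        self.phase = "adam"

    def switch(self):
        """Manually switch from the adaptive phase to the SGD phase."""
        self.phase = "sgd"

    def step(self, param, grad):
        if self.phase == "adam":
            self.t += 1
            self.m = self.b1 * self.m + (1 - self.b1) * grad
            self.v = self.b2 * self.v + (1 - self.b2) * grad * grad
            m_hat = self.m / (1 - self.b1 ** self.t)
            v_hat = self.v / (1 - self.b2 ** self.t)
            return param - self.lr * m_hat / (v_hat ** 0.5 + self.eps)
        return param - self.lr * grad
```

Minimizing f(x) = x^2 (gradient 2x), one would run some Adam-phase steps, call `switch()`, and continue with SGD steps; both phases shrink x toward 0.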
- Finish the manually switched optimizers in `.optimizers`: `Adam2SGD` and `NAdam2NSGD`. Both of them support the `amsgrad` mode.
- Adjust the fast interface `.optimizers.optimizer`. Now it supports two more TensorFlow-based optimizers, and the default momentum of the Nesterov SGD optimizer is changed to 0.9.
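For reference, one common formulation of the Nesterov momentum update that the fast interface's SGD optimizer defaults to (momentum 0.9) can be sketched as follows; this is an illustrative scalar version, not the package's code:

```python
def nesterov_step(param, grad, velocity, lr=0.01, momentum=0.9):
    """One Nesterov SGD step (the lookahead-style formulation:
    the parameter moves by the momentum-extrapolated velocity)."""
    velocity = momentum * velocity - lr * grad
    param = param + momentum * velocity - lr * grad
    return param, velocity
```

On a simple quadratic f(x) = x^2, repeatedly applying this step drives x toward the minimum at 0.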
- Fix some bugs in `.layers.conv` and `.layers.unit`.
- Remove the normalization layer from all projection branches in `.layers.residual` and `.layers.inception`.
- Support the totally new `save_model` and `load_model` APIs in `.utilities`.
- Finish `ModelCheckpoint` in `.utilities.callbacks`.
- Finish `losses.linear_jaccard_index`, `losses.lovasz_jaccard_loss`, `metrics.signal_to_noise`, `metrics.correlation` and `metrics.jaccard_index` in `.functions` (may require tests in the future).
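The hard Jaccard index and a linearly interpolated (soft) relaxation of it can be sketched in plain Python. The relaxation below is one common choice; it may differ in detail from the package's `losses.linear_jaccard_index`, and the names are illustrative:

```python
def jaccard_index(y_true, y_pred):
    """Hard IoU over binary (0/1) label lists: |A ∩ B| / |A ∪ B|."""
    inter = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    union = sum(1 for t, p in zip(y_true, y_pred) if t == 1 or p == 1)
    return inter / union if union else 1.0

def soft_jaccard(y_true, y_pred, eps=1e-7):
    """Linear relaxation: accepts probabilities in [0, 1], so it is
    differentiable and usable as a training loss (as 1 - soft_jaccard)."""
    inter = sum(t * p for t, p in zip(y_true, y_pred))
    union = sum(t + p - t * p for t, p in zip(y_true, y_pred))
    return (inter + eps) / (union + eps)
```

On hard 0/1 inputs the relaxation coincides with the exact index, which makes it a drop-in surrogate during training.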
- Add dropout options to all advanced blocks (including residual, ResNeXt, inception, incept-res and incept-plus).
- Strengthen the compatibility.
- Fix minor bugs for spatial dropout in `0.50-b`.
- Thanks to GOD! `.layers` has been finished, although it may require modification in the future.
- Fix a bug in implementing the `channel_first` mode for `AConv` in `.layers`.
- Finish `InstanceGaussianNoise` in `.layers`.
- Prepare the test for adding dropout to residual layers in `.layers`.
- Finish `Conv1DTied`, `Conv2DTied` and `Conv3DTied` in `.layers`.
- Switch back to the 0.48 version of the `.layers.DenseTied` APIs, because tests show that the modification in `0.48-b` will cause bugs.
- A test on replacing the `.layers.DenseTied` APIs with something like `tf.keras.layers.Wrapper`.
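A tied dense layer reuses the kernel of a reference dense layer transposed, so an autoencoder's decoder mirrors its encoder without allocating a second weight matrix. A minimal plain-Python sketch of the idea (class names and the nested-list weight layout are illustrative, not the package's `DenseTied` API):

```python
class Dense:
    """Minimal dense layer: y = x . kernel + bias."""

    def __init__(self, kernel, bias):
        self.kernel = kernel  # nested list, shape [in_dim][out_dim]
        self.bias = bias

    def __call__(self, x):
        out_dim = len(self.kernel[0])
        return [sum(x[i] * self.kernel[i][j] for i in range(len(x)))
                + self.bias[j] for j in range(out_dim)]

class DenseTied:
    """Decoder layer sharing the encoder's kernel, applied transposed.
    Only the bias is a new trainable weight."""

    def __init__(self, tied_to, bias):
        self.tied_to = tied_to
        self.bias = bias

    def __call__(self, x):
        k = self.tied_to.kernel  # [in_dim][out_dim], used as its transpose
        in_dim = len(k)
        return [sum(x[j] * k[i][j] for j in range(len(x)))
                + self.bias[i] for i in range(in_dim)]
```

With an encoder mapping 2 features to 1, the tied decoder maps 1 back to 2 using the very same kernel entries.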
- Finish `Inceptplus1D`, `Inceptplus2D`, `Inceptplus3D`, `Inceptplus1DTranspose`, `Inceptplus2DTranspose` and `Inceptplus3DTranspose` in `.layers`.
- Minor changes for docstrings and default settings in `.layers.inception`.
- Enable `ResNeXt` to estimate the latent group and local filter numbers.
- Make a failed try on implementing quick group convolution; testing results show that using `tf.nn.depthwise_conv2d` to replace multiple `convND` ops would make the computation even slower.
- Enable modern convolutional layers to work with group convolution.
- Reduce the memory consumption of network construction when using ResNeXt layers, to avoid out-of-memory (OOM) problems.
- Fix a minor bug for group convolution.
- Finish `GroupConv1D`, `GroupConv2D` and `GroupConv3D` in `.layers`.
- Fix the bugs in channel detection for residual and inception layers.
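Group convolution splits the channels into `g` independent groups, which divides the kernel parameter count by `g`. A quick parameter-count sketch (the helper name is illustrative):

```python
def conv2d_params(k, c_in, c_out, groups=1, bias=True):
    """Kernel + bias parameter count of a (grouped) k x k 2-D convolution.
    Each of the `groups` groups convolves c_in/groups input channels
    into c_out/groups output channels."""
    assert c_in % groups == 0 and c_out % groups == 0
    weights = k * k * (c_in // groups) * (c_out // groups) * groups
    return weights + (c_out if bias else 0)
```

For a 3x3 convolution with 64 input and 64 output channels, the standard layer needs 36,928 parameters, while 32 groups bring it down to 1,216.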
- Finish `Resnext1D`, `Resnext2D`, `Resnext3D`, `Resnext1DTranspose`, `Resnext2DTranspose` and `Resnext3DTranspose` in `.layers`.
- Fix the repeated-biases problem in inception-residual layers.
- Finish `Inceptres1D`, `Inceptres2D`, `Inceptres3D`, `Inceptres1DTranspose`, `Inceptres2DTranspose` and `Inceptres3DTranspose` in `.layers`.
- Fix some bugs and revise docstrings for `.layers.residual` and `.layers.inception`.
- Finish `Inception1D`, `Inception2D`, `Inception3D`, `Inception1DTranspose`, `Inception2DTranspose` and `Inception3DTranspose` in `.layers`.
- Finish `Residual1D`, `Residual2D`, `Residual3D`, `Residual1DTranspose`, `Residual2DTranspose` and `Residual3DTranspose` in `.layers`.
- Fix the bug about padding for transposed dilated convolutional layers.
- Add a new option `output_mshape` to help transposed convolutional layers control the desired output shape.
- Finish `PyExternal` in `.layers`.
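An external interface like `PyExternal` exposes a generic Python function so it can sit in a layer stack, much like a Lambda layer. A minimal sketch of the concept (class and function names are illustrative, not the package's signature):

```python
class PyFunctionLayer:
    """Wrap an arbitrary Python function as a callable layer-like object."""

    def __init__(self, func):
        self.func = func

    def __call__(self, inputs):
        # Delegate the forward pass to the wrapped function.
        return self.func(inputs)

def forward(layers, x):
    """Run an input through a simple pipeline of layer-like callables."""
    for layer in layers:
        x = layer(x)
    return x
```

Any plain function, e.g. one that doubles every element, can then be composed with other layers in the pipeline.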
- Finish `H5GCombiner` in `.data`.
- Use `keras.Sequence()` to redefine `H5GParser` and `H5HGParser`.
- Add a compatibility check.
- Adjust the `.data.h5py` module to make it more generalized.
- Finish `H5HGParser`, `H5SupSaver` and `H5GParser` in `.data`.
- Finish `DenseTied`, `InstanceNormalization`, `GroupNormalization`, `AConv1D`, `AConv2D`, `AConv3D`, `AConv1DTranspose`, `AConv2DTranspose` and `AConv3DTranspose` in `.layers`.
- Create this project.