ConvNeXt Architecture
Our implementation of the paper "A ConvNet for the 2020s", using TensorFlow 2.
This library is part of our project: Building an AI library with ProtonX.
Authors:
- Github: thinguyenkhtn
- Email: thinguyenkhtn@gmail.com
Advisors:
- Github: https://github.com/bangoc123
- Email: protonxai@gmail.com
Reviewers:
- @Khoi: https://github.com/NKNK-vn
- @Quynh: https://github.com/quynhtl
- Step 1: Make sure you have installed Miniconda. If not, see the setup document here.
- Step 2: Clone this repository:
git clone https://github.com/protonx-tf-04-projects/ConvNext-2020s
- Download the data:
  - Download the dataset here
  - Extract the file and put the train and validation folders into ./data
  - The train folder is used for the training process
  - The validation folder is used for validating training results after each epoch
This library uses the ImageDataGenerator API from TensorFlow 2 to load images. Make sure you have some understanding of how it works via its documentation.
Structure of these folders in ./data
train/
...cats/
......cat.0.jpg
......cat.1.jpg
...dogs/
......dog.0.jpg
......dog.1.jpg
validation/
...cats/
......cat.2000.jpg
......cat.2001.jpg
...dogs/
......dog.2000.jpg
......dog.2001.jpg
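Before training, it can help to sanity-check that ./data matches the layout above. A minimal sketch (the check_layout helper and the default class names are illustrative, not part of this library):

```python
from pathlib import Path

def check_layout(data_dir: str, classes=("cats", "dogs")) -> bool:
    """Return True if data_dir contains train/ and validation/
    folders, each with one subfolder per class."""
    root = Path(data_dir)
    for split in ("train", "validation"):
        for cls in classes:
            if not (root / split / cls).is_dir():
                return False
    return True
```

ImageDataGenerator's flow_from_directory infers class labels from exactly these subfolder names, so a missing folder fails loudly here instead of mid-training.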
Review training on Colab:
Training script:
!python train.py --train-folder ${train_folder} --valid-folder ${valid_folder} --num-classes ${num_classes} --image-size ${image_size} --lr ${lr} --batch-size ${batch_size} --model ${model} --epochs ${epochs}
Example:
!python train.py --train-folder $train_folder --valid-folder $valid_folder --num-classes 2 --image-size 224 --lr 0.0001 --batch-size 32 --model tiny --epochs ${epochs}
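A sketch of how these flags might be wired up with argparse (the flag names match the command above; the defaults are assumptions, not necessarily the script's actual values):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Flags mirror the training command shown above.
    p = argparse.ArgumentParser(description="Train a ConvNeXt model")
    p.add_argument("--train-folder", required=True, help="folder of training data")
    p.add_argument("--valid-folder", required=True, help="folder of validation data")
    p.add_argument("--num-classes", type=int, default=2)
    p.add_argument("--image-size", type=int, default=224)
    p.add_argument("--lr", type=float, default=1e-4)
    p.add_argument("--batch-size", type=int, default=32)
    p.add_argument("--model", default="tiny",
                   choices=["tiny", "small", "base", "large", "xlarge"])
    p.add_argument("--epochs", type=int, default=200)
    return p
```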
There are some important arguments for the script you should consider when running it:
- train-folder: The folder containing the training data
- valid-folder: The folder containing the validation data
- model-folder: Where the trained model is saved
- num-classes: The number of classes in your problem
- batch-size: The batch size of the dataset
- image-size: The image size of the dataset
- lr: The learning rate
- model: The type of ConvNeXt model; valid options: tiny, small, base, large, xlarge
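For reference, these model options correspond to the ConvNeXt variants defined in the paper. Their per-stage block counts and channel widths, shown here only as an illustrative lookup table (not this library's internal representation), are:

```python
# Stage depths and channel widths for each ConvNeXt variant,
# as published in "A ConvNet for the 2020s".
CONVNEXT_VARIANTS = {
    "tiny":   {"depths": (3, 3, 9, 3),  "dims": (96, 192, 384, 768)},
    "small":  {"depths": (3, 3, 27, 3), "dims": (96, 192, 384, 768)},
    "base":   {"depths": (3, 3, 27, 3), "dims": (128, 256, 512, 1024)},
    "large":  {"depths": (3, 3, 27, 3), "dims": (192, 384, 768, 1536)},
    "xlarge": {"depths": (3, 3, 27, 3), "dims": (256, 512, 1024, 2048)},
}
```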
Predict script:
python predict.py --test-data ${link_to_test_data}
Our training results:
Epoch 195: val_accuracy did not improve from 0.81000
63/63 [==============================] - 74s 1s/step - loss: 0.1756 - accuracy: 0.9300 - val_loss: 0.5760 - val_accuracy: 0.7930
Epoch 196/200
63/63 [==============================] - ETA: 0s - loss: 0.1788 - accuracy: 0.9270
Epoch 196: val_accuracy did not improve from 0.81000
63/63 [==============================] - 74s 1s/step - loss: 0.1788 - accuracy: 0.9270 - val_loss: 0.5847 - val_accuracy: 0.7890
Epoch 197/200
63/63 [==============================] - ETA: 0s - loss: 0.1796 - accuracy: 0.9290
Epoch 197: val_accuracy did not improve from 0.81000
63/63 [==============================] - 74s 1s/step - loss: 0.1796 - accuracy: 0.9290 - val_loss: 0.5185 - val_accuracy: 0.7840
Epoch 198/200
63/63 [==============================] - ETA: 0s - loss: 0.1768 - accuracy: 0.9290
Epoch 198: val_accuracy did not improve from 0.81000
63/63 [==============================] - 74s 1s/step - loss: 0.1768 - accuracy: 0.9290 - val_loss: 0.5624 - val_accuracy: 0.7870
Epoch 199/200
63/63 [==============================] - ETA: 0s - loss: 0.1744 - accuracy: 0.9340
Epoch 199: val_accuracy did not improve from 0.81000
63/63 [==============================] - 74s 1s/step - loss: 0.1744 - accuracy: 0.9340 - val_loss: 0.5416 - val_accuracy: 0.7790
Epoch 200/200
63/63 [==============================] - ETA: 0s - loss: 0.1995 - accuracy: 0.9230
Epoch 200: val_accuracy did not improve from 0.81000
63/63 [==============================] - 74s 1s/step - loss: 0.1995 - accuracy: 0.9230 - val_loss: 0.4909 - val_accuracy: 0.7930
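The 63 steps per epoch in the log are consistent with roughly 2,000 training images at batch size 32. A quick check (the image count is inferred from the log, not read from the dataset):

```python
import math

batch_size = 32
train_images = 2000  # inferred from the log: 63 steps/epoch at batch size 32
steps_per_epoch = math.ceil(train_images / batch_size)
print(steps_per_epoch)  # 63
```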
If you meet any issues while using this library, please let us know via the Issues tab.