Contents

ResNet Description

Description

ResNet (residual neural network) was proposed by Kaiming He and three other researchers at Microsoft Research. By stacking residual units, they successfully trained a 152-layer network and won the ILSVRC 2015 classification challenge with a top-5 error rate of 3.57%, using fewer parameters than VGGNet. Traditional convolutional or fully connected networks inevitably lose some information as it passes through the layers, and they also suffer from vanishing or exploding gradients, which makes very deep networks difficult to train. ResNet alleviates these problems to a certain extent: by passing the input directly to the output through shortcut connections, the integrity of the information is preserved, and each block only needs to learn the residual between its input and output, which simplifies the learning objective. This structure greatly speeds up the training of deep networks and also improves model accuracy. As a result, ResNet has become very popular, and residual connections are now reused in many other network architectures.

This repository provides examples of training ResNet50/ResNet101/SE-ResNet50 with the CIFAR-10 or ImageNet2012 dataset in MindSpore. ResNet50 and ResNet101 follow paper 1 below, while SE-ResNet50 is a variant of ResNet50 based on paper 2 and paper 3 below. Training SE-ResNet50 for just 24 epochs on 8 Ascend 910 devices reaches a top-1 accuracy of 75.9%. (Training ResNet101 or SE-ResNet50 with the CIFAR-10 dataset is not supported yet.)

Paper

1. Paper: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Deep Residual Learning for Image Recognition"

2. Paper: Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu. "Squeeze-and-Excitation Networks"

3. Paper: Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. "Bag of Tricks for Image Classification with Convolutional Neural Networks"

Model Architecture

The overall network architecture of ResNet is shown below: Link
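
For intuition, below is a minimal, illustrative sketch of a basic residual unit in MindSpore. The actual backbone lives in src/resnet.py and uses bottleneck blocks with down-sampling shortcuts; the class and argument names here are assumptions made only for illustration.

```python
import mindspore.nn as nn

class SimpleResidualBlock(nn.Cell):
    """Illustrative 3x3-3x3 residual unit: output = ReLU(F(x) + x)."""

    def __init__(self, channels):
        super(SimpleResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, stride=1,
                               pad_mode='same', has_bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, stride=1,
                               pad_mode='same', has_bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def construct(self, x):
        identity = x                          # shortcut keeps the input intact
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                  # the block only has to learn the residual F(x)
        return self.relu(out)
```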

Dataset

Dataset used: CIFAR-10

  • Dataset size: 60,000 32*32 colorful images in 10 classes
    • Train: 50,000 images
    • Test: 10,000 images
  • Data format: binary files
    • Note: Data will be processed in dataset.py
  • Download the dataset; the directory structure is as follows:
├─cifar-10-batches-bin
│
└─cifar-10-verify-bin

Dataset used: ImageNet2012

  • Dataset size: 224*224 colorful images in 1000 classes
    • Train: 1,281,167 images
    • Test: 50,000 images
  • Data format: JPEG
    • Note: Data will be processed in dataset.py
  • Download the dataset; the directory structure is as follows:
└─dataset
   ├─ilsvrc                # train dataset
   └─validation_preprocess # evaluate dataset
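
As noted above, data loading and augmentation are handled in src/dataset.py. The sketch below shows what a CIFAR-10 pipeline of this kind typically looks like in MindSpore; exact module paths and transform lists vary across MindSpore versions, so treat this as an assumption rather than the script's actual code.

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C
import mindspore.dataset.transforms.c_transforms as C2
import mindspore.common.dtype as mstype

def create_cifar10_dataset(dataset_path, batch_size=32, training=True):
    """Illustrative CIFAR-10 pipeline: read binary files -> augment -> normalize -> batch."""
    data_set = ds.Cifar10Dataset(dataset_path, shuffle=training)

    trans = []
    if training:
        trans += [C.RandomCrop((32, 32), (4, 4, 4, 4)),   # pad by 4 then crop back to 32x32
                  C.RandomHorizontalFlip()]
    trans += [C.Resize((224, 224)),                       # ResNet50 expects 224x224 inputs
              C.Rescale(1.0 / 255.0, 0.0),
              C.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010]),
              C.HWC2CHW()]

    data_set = data_set.map(operations=trans, input_columns="image")
    data_set = data_set.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
    return data_set.batch(batch_size, drop_remainder=True)
```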

Features

Mixed Precision

The mixed precision training method accelerates the deep learning neural network training process by using both single-precision and half-precision data types, while maintaining the network accuracy achieved with single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables larger models or batch sizes to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the MindSpore backend automatically handles it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log level and searching for `reduce precision`.
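
In MindSpore, mixed precision is typically enabled when the Model is assembled, via the amp_level argument. The sketch below illustrates the idea under the assumption that `net` is an already constructed ResNet backbone; the training scripts in this repository may wire it up slightly differently.

```python
from mindspore import nn
from mindspore.train.model import Model
from mindspore.train.loss_scale_manager import FixedLossScaleManager

# `net` is assumed to be an already constructed backbone, e.g. resnet50 from src/resnet.py.
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9, weight_decay=1e-4)

# amp_level="O2" runs most operators in FP16; the fixed loss scale mirrors the
# "loss_scale" entry in config.py and guards against FP16 underflow.
loss_scale = FixedLossScaleManager(1024, drop_overflow_update=False)
model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'},
              amp_level="O2", loss_scale_manager=loss_scale)
```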

Environment Requirements

Quick Start

After installing MindSpore via the official website, you can start training and evaluation as follows:

  • Running on Ascend
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH]
[PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
  • Running on GPU
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012]  [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]

Script Description

Script and Sample Code

.
└──resnet
  ├── README.md
  ├── scripts
    ├── run_distribute_train.sh            # launch ascend distributed training(8 pcs)
    ├── run_parameter_server_train.sh      # launch ascend parameter server training(8 pcs)
    ├── run_eval.sh                        # launch ascend evaluation
    ├── run_standalone_train.sh            # launch ascend standalone training(1 pcs)
    ├── run_distribute_train_gpu.sh        # launch gpu distributed training(8 pcs)
    ├── run_parameter_server_train_gpu.sh  # launch gpu parameter server training(8 pcs)
    ├── run_eval_gpu.sh                    # launch gpu evaluation
    └── run_standalone_train_gpu.sh        # launch gpu standalone training(1 pcs)
  ├── src
    ├── config.py                          # parameter configuration
    ├── dataset.py                         # data preprocessing
    ├── CrossEntropySmooth.py              # loss definition for ImageNet2012 dataset
    ├── lr_generator.py                    # generate learning rate for each step
    └── resnet.py                          # resnet backbone, including resnet50 and resnet101 and se-resnet50
  ├── export.py                            # export model for inference
  ├── mindspore_hub_conf.py                # mindspore hub interface
  ├── eval.py                              # eval net
  └── train.py                             # train net
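
The export.py script listed above converts a trained checkpoint into an inference format such as the .air files mentioned in the performance tables. A minimal sketch of such an export in MindSpore is shown below; the checkpoint path, class number, and input shape are illustrative assumptions, not the script's exact arguments.

```python
import numpy as np
from mindspore import Tensor, context
from mindspore.train.serialization import load_checkpoint, load_param_into_net, export
from src.resnet import resnet50   # backbone factory from this repository

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

net = resnet50(class_num=1001)
param_dict = load_checkpoint("resnet-90_5004.ckpt")     # illustrative checkpoint path
load_param_into_net(net, param_dict)

# A dummy input with the expected NCHW shape drives graph tracing during export.
dummy_input = Tensor(np.zeros([1, 3, 224, 224], dtype=np.float32))
export(net, dummy_input, file_name="resnet50", file_format="AIR")
```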

Script Parameters

Parameters for both training and evaluation can be set in config.py.

  • Config for ResNet50, CIFAR-10 dataset
"class_num": 10,                  # dataset class num
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum
"weight_decay": 1e-4,             # weight decay 
"epoch_size": 90,                 # only valid for taining, which is always 1 for inference 
"pretrain_epoch_size": 0,         # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether save checkpoint or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last step
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoint
"save_checkpoint_path": "./",     # path to save checkpoint
"warmup_epochs": 5,               # number of warmup epoch
"lr_decay_mode": "poly"           # decay mode can be selected in steps, ploy and default
"lr_init": 0.01,                  # initial learning rate
"lr_end": 0.00001,                # final learning rate
"lr_max": 0.1,                    # maximum learning rate
  • Config for ResNet50, ImageNet2012 dataset
"class_num": 1001,                # dataset class number
"batch_size": 256,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum optimizer
"weight_decay": 1e-4,             # weight decay 
"epoch_size": 90,                 # only valid for taining, which is always 1 for inference 
"pretrain_epoch_size": 0,         # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether save checkpoint or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoint
"save_checkpoint_path": "./",     # path to save checkpoint relative to the executed path
"warmup_epochs": 0,               # number of warmup epoch
"lr_decay_mode": "Linear",        # decay mode for generating learning rate
"use_label_smooth": True,         # label smooth
"label_smooth_factor": 0.1,       # label smooth factor
"lr_init": 0,                     # initial learning rate
"lr_max": 0.8,                    # maximum learning rate
"lr_end": 0.0,                    # minimum learning rate
  • Config for ResNet101, ImageNet2012 dataset
"class_num": 1001,                # dataset class number
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 120,                # epoch size for training
"pretrain_epoch_size": 0,         # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether save checkpoint or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoint
"save_checkpoint_path": "./",     # path to save checkpoint relative to the executed path
"warmup_epochs": 0,               # number of warmup epoch
"lr_decay_mode": "cosine"         # decay mode for generating learning rate
"use_label_smooth": True,         # label_smooth
"label_smooth_factor": 0.1,       # label_smooth_factor
"lr": 0.1                         # base learning rate
  • Config for SE-ResNet50, ImageNet2012 dataset
"class_num": 1001,                # dataset class number
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 28 ,                # epoch size for creating learning rate
"train_epoch_size": 24            # actual train epoch size
"pretrain_epoch_size": 0,         # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether save checkpoint or not
"save_checkpoint_epochs": 4,      # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoint
"save_checkpoint_path": "./",     # path to save checkpoint relative to the executed path
"warmup_epochs": 3,               # number of warmup epoch
"lr_decay_mode": "cosine"         # decay mode for generating learning rate
"use_label_smooth": True,         # label_smooth
"label_smooth_factor": 0.1,       # label_smooth_factor
"lr_init": 0.0,                   # initial learning rate
"lr_max": 0.3,                    # maximum learning rate
"lr_end": 0.0001,                 # end learning rate

Training Process

Usage

Running on Ascend

# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH]
[PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]

For distributed training, an HCCL configuration file in JSON format needs to be created in advance.

Please follow the instructions in the link hccn_tools.

Training results will be stored in the example path, in folders whose names begin with "train" or "train_parallel". Under these folders you can find checkpoint files together with results like the following in the log.

Running on GPU

# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012]  [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]

Running parameter server mode training

  • Parameter server training Ascend example
sh run_parameter_server_train.sh [resnet50|resnet101] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
  • Parameter server training GPU example
sh run_parameter_server_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
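
Under the hood, parameter-server training requires the training script to enable the parameter-server context and mark the network's parameters before building the model. The sketch below is an assumption about the relevant calls; the launch scripts above also take care of the scheduler/server/worker roles via environment variables such as MS_ROLE.

```python
from mindspore import context

# `net` is assumed to be the ResNet backbone; enable parameter-server mode
# before wrapping it in a Model and starting training.
context.set_ps_context(enable_ps=True)
net.set_param_ps()   # place the trainable parameters on the parameter servers
```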

Result

  • Training ResNet50 with CIFAR-10 dataset
# distribute training result(8 pcs)
epoch: 1 step: 195, loss is 1.9601055
epoch: 2 step: 195, loss is 1.8555021
epoch: 3 step: 195, loss is 1.6707983
epoch: 4 step: 195, loss is 1.8162166
epoch: 5 step: 195, loss is 1.393667
...
  • Training ResNet50 with ImageNet2012 dataset
# distribute training result(8 pcs)
epoch: 1 step: 5004, loss is 4.8995576
epoch: 2 step: 5004, loss is 3.9235563
epoch: 3 step: 5004, loss is 3.833077
epoch: 4 step: 5004, loss is 3.2795618
epoch: 5 step: 5004, loss is 3.1978393
...
  • Training ResNet101 with ImageNet2012 dataset
# distribute training result(8 pcs)
epoch: 1 step: 5004, loss is 4.805483
epoch: 2 step: 5004, loss is 3.2121816
epoch: 3 step: 5004, loss is 3.429647
epoch: 4 step: 5004, loss is 3.3667371
epoch: 5 step: 5004, loss is 3.1718972
...
  • Training SE-ResNet50 with ImageNet2012 dataset
# distribute training result(8 pcs)
epoch: 1 step: 5004, loss is 5.1779146
epoch: 2 step: 5004, loss is 4.139395
epoch: 3 step: 5004, loss is 3.9240637
epoch: 4 step: 5004, loss is 3.5011306
epoch: 5 step: 5004, loss is 3.3501816
...

Evaluation Process

Usage

Running on Ascend

# evaluation
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
# evaluation example
sh run_eval.sh resnet50 cifar10 ~/cifar10-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt

Checkpoints can be produced during the training process.
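
A trained checkpoint can also be restored and evaluated directly from Python, as sketched below; the network construction, dataset helper, and paths are assumptions that mirror src/resnet.py and src/dataset.py rather than the exact contents of eval.py.

```python
from mindspore import nn
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from src.resnet import resnet50
from src.dataset import create_dataset   # assumed dataset helper, see src/dataset.py

net = resnet50(class_num=10)
load_param_into_net(net, load_checkpoint("resnet-90_195.ckpt"))   # illustrative checkpoint path
net.set_train(False)

loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
model = Model(net, loss_fn=loss, metrics={'top_1_accuracy', 'top_5_accuracy'})
eval_dataset = create_dataset("/path/to/cifar-10-verify-bin", do_train=False, batch_size=32)
print(model.eval(eval_dataset))
```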

Running on GPU

sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]

Result

Evaluation results will be stored in the example path, in a folder named "eval". Under this folder you can find results like the following in the log.

  • Evaluating ResNet50 with CIFAR-10 dataset
result: {'acc': 0.91446314102564111} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
  • Evaluating ResNet50 with ImageNet2012 dataset
result: {'acc': 0.7671054737516005} ckpt=train_parallel0/resnet-90_5004.ckpt
  • Evaluating ResNet101 with ImageNet2012 dataset
result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199744} ckpt=train_parallel0/resnet-120_5004.ckpt
  • Evaluating SE-ResNet50 with ImageNet2012 dataset
result: {'top_5_accuracy': 0.9342589628681178, 'top_1_accuracy': 0.768065781049936} ckpt=train_parallel0/resnet-24_5004.ckpt

Model Description

Performance

Evaluation Performance

ResNet50 on CIFAR-10

| Parameters | Ascend 910 | GPU |
| --- | --- | --- |
| Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G | GPU (Tesla V100 SXM2); CPU 2.1GHz, 24 cores; Memory 128G |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | CIFAR-10 | CIFAR-10 |
| Training Parameters | epoch=90, steps per epoch=195, batch_size=32 | epoch=90, steps per epoch=195, batch_size=32 |
| Optimizer | Momentum | Momentum |
| Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
| Outputs | probability | probability |
| Loss | 0.000356 | 0.000716 |
| Speed | 18.4 ms/step (8 pcs) | 69 ms/step (8 pcs) |
| Total time | 6 mins | 20.2 mins |
| Parameters (M) | 25.5 | 25.5 |
| Checkpoint for Fine tuning | 179.7M (.ckpt file) | 179.7M (.ckpt file) |
| Scripts | Link | Link |

ResNet50 on ImageNet2012

| Parameters | Ascend 910 | GPU |
| --- | --- | --- |
| Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G | GPU (Tesla V100 SXM2); CPU 2.1GHz, 24 cores; Memory 128G |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| Training Parameters | epoch=90, steps per epoch=626, batch_size=256 | epoch=90, steps per epoch=5004, batch_size=32 |
| Optimizer | Momentum | Momentum |
| Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
| Outputs | probability | probability |
| Loss | 1.8464266 | 1.9023 |
| Speed | 118 ms/step (8 pcs) | 67.1 ms/step (8 pcs) |
| Total time | 114 mins | 500 mins |
| Parameters (M) | 25.5 | 25.5 |
| Checkpoint for Fine tuning | 197M (.ckpt file) | 197M (.ckpt file) |
| Scripts | Link | Link |

ResNet101 on ImageNet2012

| Parameters | Ascend 910 | GPU |
| --- | --- | --- |
| Model Version | ResNet101 | ResNet101 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G | GPU (Tesla V100 SXM2); CPU 2.1GHz, 24 cores; Memory 128G |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| Training Parameters | epoch=120, steps per epoch=5004, batch_size=32 | epoch=120, steps per epoch=5004, batch_size=32 |
| Optimizer | Momentum | Momentum |
| Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
| Outputs | probability | probability |
| Loss | 1.6453942 | 1.7023412 |
| Speed | 30.3 ms/step (8 pcs) | 108.6 ms/step (8 pcs) |
| Total time | 301 mins | 1100 mins |
| Parameters (M) | 44.6 | 44.6 |
| Checkpoint for Fine tuning | 343M (.ckpt file) | 343M (.ckpt file) |
| Scripts | Link | Link |

SE-ResNet50 on ImageNet2012

| Parameters | Ascend 910 |
| --- | --- |
| Model Version | SE-ResNet50 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G |
| Uploaded Date | 08/16/2020 (month/day/year) |
| MindSpore Version | 0.7.0-alpha |
| Dataset | ImageNet2012 |
| Training Parameters | epoch=24, steps per epoch=5004, batch_size=32 |
| Optimizer | Momentum |
| Loss Function | Softmax Cross Entropy |
| Outputs | probability |
| Loss | 1.754404 |
| Speed | 24.6 ms/step (8 pcs) |
| Total time | 49.3 mins |
| Parameters (M) | 25.5 |
| Checkpoint for Fine tuning | 215.9M (.ckpt file) |
| Scripts | Link |

Inference Performance

ResNet50 on CIFAR-10

| Parameters | Ascend | GPU |
| --- | --- | --- |
| Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
| Resource | Ascend 910 | GPU |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | CIFAR-10 | CIFAR-10 |
| batch_size | 32 | 32 |
| Outputs | probability | probability |
| Accuracy | 91.44% | 91.37% |
| Model for inference | 91M (.air file) | |

ResNet50 on ImageNet2012

| Parameters | Ascend | GPU |
| --- | --- | --- |
| Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
| Resource | Ascend 910 | GPU |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| batch_size | 256 | 32 |
| Outputs | probability | probability |
| Accuracy | 76.70% | 76.74% |
| Model for inference | 98M (.air file) | |

ResNet101 on ImageNet2012

| Parameters | Ascend | GPU |
| --- | --- | --- |
| Model Version | ResNet101 | ResNet101 |
| Resource | Ascend 910 | GPU |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| batch_size | 32 | 32 |
| Outputs | probability | probability |
| Accuracy | 78.53% | 78.64% |
| Model for inference | 171M (.air file) | |

SE-ResNet50 on ImageNet2012

| Parameters | Ascend |
| --- | --- |
| Model Version | SE-ResNet50 |
| Resource | Ascend 910 |
| Uploaded Date | 08/16/2020 (month/day/year) |
| MindSpore Version | 0.7.0-alpha |
| Dataset | ImageNet2012 |
| batch_size | 32 |
| Outputs | probability |
| Accuracy | 76.80% |
| Model for inference | 109M (.air file) |

Description of Random Situation

In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
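
A hedged sketch of the kind of seeding involved (the exact seed values and call sites in this repository may differ):

```python
from mindspore.common import set_seed
import mindspore.dataset as ds

set_seed(1)            # fixes MindSpore's global seed (weight initialization, etc.)
ds.config.set_seed(1)  # fixes the shuffle/augmentation seed used by the data pipeline
```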

ModelZoo Homepage

Please check the official homepage.