# Contents

- [EfficientNet-B0 Description](#efficientnet-b0-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
- [Script and Sample Code](#script-and-sample-code)
- [Script Parameters](#script-parameters)
- [Training Process](#training-process)
- [Evaluation Process](#evaluation-process)
- [Model Description](#model-description)
- [Performance](#performance)
- [Training Performance](#training-performance)
- [Inference Performance](#inference-performance)
- [ModelZoo Homepage](#modelzoo-homepage)

# [EfficientNet-B0 Description](#contents)

This is an example of training EfficientNet-B0 in MindSpore.

[Paper](https://arxiv.org/abs/1905.11946): Mingxing Tan, Quoc V. Le. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. 2019.

# [Model Architecture](#contents)

The overall network architecture of EfficientNet-B0 is shown below:

[Link](https://arxiv.org/abs/1905.11946)

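The architecture is not reproduced in this README; for orientation, the sketch below summarizes the MBConv stage layout of EfficientNet-B0 as reported in Table 1 of the paper. It is only an illustrative reference, not the network definition used by this example.

```python
# Illustrative summary of the EfficientNet-B0 backbone (Table 1 of the paper).
# Each entry: (operator, kernel_size, output_channels, num_layers, stride).
# Reference only; the actual network is built by the training scripts in this repo.
EFFICIENTNET_B0_STAGES = [
    ("Conv3x3", 3,   32, 1, 2),  # stem, 224x224 -> 112x112
    ("MBConv1", 3,   16, 1, 1),
    ("MBConv6", 3,   24, 2, 2),
    ("MBConv6", 5,   40, 2, 2),
    ("MBConv6", 3,   80, 3, 2),
    ("MBConv6", 5,  112, 3, 1),
    ("MBConv6", 5,  192, 4, 2),
    ("MBConv6", 3,  320, 1, 1),
    ("Conv1x1", 1, 1280, 1, 1),  # head: 1x1 conv + pooling + FC(1000)
]
```
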
# [Dataset](#contents)

Dataset used: [ImageNet](http://www.image-net.org/)

- Dataset size: ~125G, about 1.28 million color images in 1000 classes
    - Train: 120G, about 1.28 million images
    - Test: 5G, 50,000 images
- Data format: RGB images
- Note: Data will be processed in src/dataset.py

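The launch commands later in this README pass the train and eval splits as separate directories (for example `/dataset/train` and `/dataset/eval`). An ImageFolder-style layout with one sub-directory per class is assumed below; the class folder names are only placeholders.

```
dataset/
├─train/          # passed to the training scripts
│ ├─n01440764/    # one folder per class
│ │ └─*.JPEG
│ └─...
└─eval/           # passed to run_eval_for_gpu.sh
  ├─n01440764/
  │ └─*.JPEG
  └─...
```
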
# [Environment Requirements](#contents)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- Hardware GPU
|
|
|
|
|
|
|
|
- Prepare hardware environment with GPU processor.
|
|
|
|
|
|
|
|
- Framework
|
|
|
|
|
|
|
|
- [MindSpore](https://www.mindspore.cn/install/en)
|
|
|
|
|
|
|
|
- For more information, please check the resources below:
|
|
|
|
|
|
|
|
- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
|
|
|
|
|
|
|
|
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
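Once MindSpore is installed, a quick sanity check along these lines (a minimal sketch, not part of the repository) confirms the package is importable and the GPU backend can be selected:

```python
# Minimal environment sanity check (illustrative only).
import mindspore
from mindspore import context

print("MindSpore version:", mindspore.__version__)
# Select the GPU backend used by the training and evaluation scripts.
context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
print("GPU context configured")
```
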
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```python
.
└─efficientnet
  ├─README.md
  ├─scripts
  │ ├─run_distribute_train_for_gpu.sh  # launch distributed training with gpu platform(8p)
  │ ├─run_eval_for_gpu.sh              # launch evaluation with gpu platform
  │ └─run_standalone_train_for_gpu.sh  # launch standalone training with gpu platform(1p)
  ├─src
  │ ├─config.py                        # parameter configuration
  │ ├─dataset.py                       # data preprocessing
  │ ├─loss.py                          # customized loss function
  │ ├─transform_utils.py               # random augment utils
  │ └─transform.py                     # random augment class
  ├─eval.py                            # eval net
  └─train.py                           # train net
```

## [Script Parameters](#contents)

Parameters for both training and evaluation can be set in src/config.py.

```
'random_seed': 1,          # fix random seed
...
'resume_start_epoch': 0,   # resume start epoch
```

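The block above is only an excerpt of the available options. As a rough sketch of how such settings are typically consumed (hypothetical; the real src/config.py may be organized differently), the entries can be grouped into a single dictionary-like object that train.py and eval.py read from:

```python
# Hypothetical sketch of a config object holding the entries shown above.
# Only 'random_seed' and 'resume_start_epoch' are taken from this README;
# other keys in the real src/config.py are not reproduced here.
from easydict import EasyDict as edict

config = edict({
    'random_seed': 1,          # fix random seed
    'resume_start_epoch': 0,   # resume start epoch
})

print(config.random_seed, config.resume_start_epoch)
```
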
## [Training Process](#contents)

### Usage

```
# distributed training example (8p)
sh run_distribute_train_for_gpu.sh RANK_SIZE DEVICE_ID_LIST DATA_DIR

# standalone training
sh run_standalone_train_for_gpu.sh DEVICE_ID DATA_DIR
```

### Launch

```bash
# distributed training example (8p) for GPU
cd scripts
sh run_distribute_train_for_gpu.sh 8 0,1,2,3,4,5,6,7 /dataset/train

# standalone training example for GPU
cd scripts
sh run_standalone_train_for_gpu.sh 0 /dataset/train
```

### Result

You can find the checkpoint files together with the training results in the log.

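If you want to inspect a checkpoint produced by training before fine-tuning or evaluation, a minimal sketch using MindSpore's checkpoint loader is shown below; the file name is only an example taken from the evaluation section.

```python
# Minimal sketch: list the parameters stored in a training checkpoint.
# The path is an example; use the checkpoint actually produced by your run.
from mindspore.train.serialization import load_checkpoint

param_dict = load_checkpoint("./checkpoint/efficientnet_b0-600_1251.ckpt")
print("number of parameters:", len(param_dict))
for name in list(param_dict)[:5]:
    print(name)  # first few parameter names
```
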
## [Evaluation Process](#contents)

### Usage

```
# Evaluation
sh run_eval_for_gpu.sh DATA_DIR PATH_CHECKPOINT
```

### Launch

```bash
# Evaluation with checkpoint
cd scripts
sh run_eval_for_gpu.sh /dataset/eval ./checkpoint/efficientnet_b0-600_1251.ckpt
```

> The checkpoint can be produced during the training process.

### Result

The evaluation result will be stored in the scripts path, where you can find results like the following in the log.

```
acc=76.96%(TOP1)
```

# [Model Description](#contents)

## [Performance](#contents)

### Training Performance

| Parameters                 | efficientnet_b0            |
| -------------------------- | -------------------------- |
| Resource                   | NV SMX2 V100-32G           |
| Uploaded Date              | 10/26/2020                 |
| MindSpore Version          | 1.0.0                      |
| Dataset                    | ImageNet                   |
| Training Parameters        | src/config.py              |
| Optimizer                  | RMSProp                    |
| Loss Function              | LabelSmoothingCrossEntropy |
| Loss                       | 1.8886                     |
| Accuracy                   | 76.96% (Top-1)             |
| Total time                 | 132 h (8p)                 |
| Checkpoint for Fine tuning | 64 M (.ckpt file)          |

### Inference Performance

| Parameters        | efficientnet_b0         |
| ----------------- | ----------------------- |
| Resource          | NV SMX2 V100-32G        |
| Uploaded Date     | 10/26/2020              |
| MindSpore Version | 1.0.0                   |
| Dataset           | ImageNet, 50,000 images |
| batch_size        | 128                     |
| outputs           | probability             |
| Accuracy          | 76.96% (Top-1)          |

# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).