!8004 update readme

Merge pull request !8004 from panfengfeng/update_readme
pull/8004/MERGE
mindspore-ci-bot 4 years ago committed by Gitee
commit 3c059eefa2

@@ -1,39 +1,81 @@
# Contents

- [NASNet Description](#nasnet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
- [Script and Sample Code](#script-and-sample-code)
- [Script Parameters](#script-parameters)
- [Training Process](#training-process)
- [Evaluation Process](#evaluation-process)
- [Model Description](#model-description)
- [Performance](#performance)
    - [Training Performance](#training-performance)
    - [Inference Performance](#inference-performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# [NASNet Description](#contents)

This is an example of training NASNet-A-Mobile in MindSpore.

[Paper](https://arxiv.org/abs/1707.07012): Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le. Learning Transferable Architectures for Scalable Image Recognition. 2017.

# [Model Architecture](#contents)

The overall network architecture of NASNet is shown below:

[Link](https://arxiv.org/abs/1707.07012)
# [Dataset](#contents)
Dataset used: [ImageNet](http://www.image-net.org/)

- Dataset size: ~125G, ~1.2 million colorful images in 1000 classes
    - Train: 120G, ~1.2 million images
    - Test: 5G, 50000 images
- Data format: RGB images
    - Note: Data will be processed in src/dataset.py
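The note above says the data is processed in src/dataset.py. As a hedged illustration only (the repository's actual loader is MindSpore-based, and the function name here is an assumption), an ImageNet-style directory of per-class folders can be mapped to integer labels like this:

```python
from pathlib import Path

def build_class_index(root):
    """Map each class subdirectory name (e.g. 'n01440764') to an integer label.

    Illustrative sketch only; the repository's src/dataset.py builds its
    pipeline with MindSpore dataset ops instead.
    """
    classes = sorted(p.name for p in Path(root).iterdir() if p.is_dir())
    return {name: idx for idx, name in enumerate(classes)}
```

Sorting the directory names first makes the label assignment deterministic across runs.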
# [Environment Requirements](#contents)
- Hardware (GPU)
    - Prepare hardware environment with GPU processor.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Script description](#contents)
## [Script and sample code](#contents)
```python
.
└─nasnet
  ├─README.md
  ├─scripts
    ├─run_standalone_train_for_gpu.sh # launch standalone training with gpu platform(1p)
    ├─run_distribute_train_for_gpu.sh # launch distributed training with gpu platform(8p)
    └─run_eval_for_gpu.sh             # launch evaluating with gpu platform
  ├─src
    ├─config.py                       # parameter configuration
    ├─dataset.py                      # data preprocessing
    ├─loss.py                         # customized CrossEntropy loss function
    ├─lr_generator.py                 # learning rate generator
    └─nasnet_a_mobile.py              # network definition
  ├─eval.py                           # eval net
  ├─export.py                         # convert checkpoint
  └─train.py                          # train net
```
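src/lr_generator.py in the tree above produces the per-step learning rate schedule. As a minimal sketch (the function name and the warmup-plus-cosine shape are assumptions, not the repository's exact implementation):

```python
import math

def generate_lr(lr_max, warmup_steps, total_steps):
    """Return one learning rate per training step: linear warmup up to
    lr_max, then cosine decay toward zero.

    Illustrative only; see src/lr_generator.py for the real schedule.
    """
    lrs = []
    for step in range(total_steps):
        if step < warmup_steps:
            # ramp linearly from lr_max/warmup_steps to lr_max
            lr = lr_max * (step + 1) / warmup_steps
        else:
            progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
            lr = lr_max * 0.5 * (1.0 + math.cos(math.pi * progress))
        lrs.append(lr)
    return lrs
```

Precomputing the whole list matches the common MindSpore pattern of passing a per-step learning-rate array to the optimizer.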
## [Script Parameters](#contents)

Parameters for both training and evaluating can be set in config.py.
```
'random_seed': 1, # fix random seed
@@ -56,22 +98,18 @@ Parameters for both training and evaluating can be set in config.py
'opt_eps': 1.0, # epsilon
'rmsprop_decay': 0.9, # rmsprop decay
'loss_scale': 1, # loss scale
```
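The fragment above comes from the config.py dictionary. A hedged sketch of how such a flat parameter set can be exposed as an attribute-style object (only the keys shown in the README are taken from the source; the wrapper itself is illustrative, not the repository's code):

```python
from types import SimpleNamespace

# Illustrative config object mirroring the README fragment; the real
# config.py contains many more keys than the four listed here.
config = SimpleNamespace(
    random_seed=1,      # fix random seed
    opt_eps=1.0,        # epsilon
    rmsprop_decay=0.9,  # rmsprop decay
    loss_scale=1,       # loss scale
)
```

Attribute access (`config.loss_scale`) keeps call sites terse while the values stay in one place.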
## [Training Process](#contents)

#### Usage

```
GPU:
# distribute training example(8p)
sh run_distribute_train_for_gpu.sh DATA_DIR
# standalone training
sh run_standalone_train_for_gpu.sh DEVICE_ID DATA_DIR
```
#### Launch
@@ -83,13 +121,11 @@ sh scripts/run_distribute_train_for_gpu.sh /dataset/train
sh scripts/run_standalone_train_for_gpu.sh 0 /dataset/train
```
#### Result
You can find checkpoint files together with results in the log.
## [Evaluation Process](#contents)

### Usage
``` ```
# Evaluation
@@ -103,9 +139,47 @@ sh run_eval_for_gpu.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
sh scripts/run_eval_for_gpu.sh 0 /dataset/val ./checkpoint/nasnet-a-mobile-rank0-248_10009.ckpt
```
> Checkpoints can be produced during the training process.
#### Result
Evaluation results will be stored in the scripts path. You can find results like the following in the log.
```
acc=73.5%(TOP1)
```
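The TOP1 figure above is simply the fraction of validation images whose highest-scoring class matches the true label. A stand-alone sketch of that metric (not the code in eval.py, which uses MindSpore's built-in accuracy metric):

```python
def top1_accuracy(logits, labels):
    """logits: list of per-class score lists; labels: list of true class ids.

    Returns the fraction of samples whose argmax equals the label.
    Illustrative only; eval.py computes this through MindSpore metrics.
    """
    correct = sum(
        1 for scores, label in zip(logits, labels)
        if max(range(len(scores)), key=scores.__getitem__) == label
    )
    return correct / len(labels)
```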
# [Model description](#contents)
## [Performance](#contents)
### Training Performance
| Parameters | NASNet |
| -------------------------- | ------------------------- |
| Resource | NV SMX2 V100-32G |
| uploaded Date | 09/24/2020 |
| MindSpore Version | 1.0.0 |
| Dataset | ImageNet |
| Training Parameters | src/config.py |
| Optimizer | Momentum |
| Loss Function | SoftmaxCrossEntropyWithLogits |
| Loss | 1.8965 |
| Accuracy | 73.5%(TOP1) |
| Total time                 | 144 h (8 pcs)             |
| Checkpoint for Fine tuning | 89 M (.ckpt file)         |
### Inference Performance
| Parameters | |
| -------------------------- | ------------------------- |
| Resource | NV SMX2 V100-32G |
| uploaded Date | 09/24/2020 |
| MindSpore Version | 1.0.0 |
| Dataset | ImageNet, 1.2W |
| batch_size | 32 |
| outputs | probability |
| Accuracy | acc=73.5%(TOP1) |
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).

@@ -14,6 +14,8 @@
    - [Training Performance](#training-performance)
    - [Inference Performance](#inference-performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# [ShuffleNetV2 Description](#contents)

ShuffleNetV2 is a much faster and more accurate network than previous networks on different platforms such as Ascend or GPU.
@@ -117,3 +119,38 @@ You can start evaluation using python or shell scripts. The usage of shell scrip
### Result
Inference results will be stored in the example path; you can find results in `eval.log`.
# [Model description](#contents)
## [Performance](#contents)
### Training Performance
| Parameters | ShuffleNetV2 |
| -------------------------- | ------------------------- |
| Resource | NV SMX2 V100-32G |
| uploaded Date | 09/24/2020 |
| MindSpore Version | 1.0.0 |
| Dataset | ImageNet |
| Training Parameters | src/config.py |
| Optimizer | Momentum |
| Loss Function | CrossEntropySmooth |
| Accuracy | 69.4%(TOP1) |
| Total time                 | 49 h (8 pcs)              |
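The CrossEntropySmooth loss in the table above is cross entropy with label smoothing. A hedged pure-Python sketch of how the smoothed target distribution is formed (the actual implementation lives in the model's src/ directory and uses MindSpore ops; the function name and default factor here are assumptions):

```python
def smooth_labels(label, num_classes, factor=0.1):
    """Return the smoothed one-hot target: probability mass `factor` is
    spread evenly over all classes, the remainder goes to the true class.

    Illustrative only; equivalent to (1 - factor) * one_hot + factor / K.
    """
    off = factor / num_classes
    on = 1.0 - factor + off
    return [on if i == label else off for i in range(num_classes)]
```

Smoothing keeps the target distribution from being exactly 0/1, which discourages over-confident logits and typically improves top-1 accuracy slightly on ImageNet-scale training.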
### Inference Performance
| Parameters | |
| -------------------------- | ------------------------- |
| Resource | NV SMX2 V100-32G |
| uploaded Date | 09/24/2020 |
| MindSpore Version | 1.0.0 |
| Dataset | ImageNet, 1.2W |
| batch_size | 128 |
| outputs | probability |
| Accuracy | acc=69.4%(TOP1) |
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).