Parameters for both training and evaluation can be set in config.py.

- Config for ResNet50, ImageNet2012 dataset

```
"class_num": 1001, # dataset class number
"class_num": 1001, # dataset class number
"batch_size": 256, # batch size of input tensor
"batch_size": 256, # batch size of input tensor
"loss_scale": 1024, # loss scale
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum optimizer
"momentum": 0.9, # momentum optimizer
"weight_decay": 1e-4, # weight decay
"weight_decay": 1e-4, # weight decay
...
"save_checkpoint_path": "./", # path to save checkpoint relative to the executed path
"save_checkpoint_path": "./", # path to save checkpoint relative to the executed path
"warmup_epochs": 0, # number of warmup epoch
"warmup_epochs": 0, # number of warmup epoch
"lr_decay_mode": "Linear", # decay mode for generating learning rate
"lr_decay_mode": "Linear", # decay mode for generating learning rate
"use_label_smooth": True, # label smooth
"use_label_smooth": True, # label smooth
"label_smooth_factor": 0.1, # label smooth factor
"label_smooth_factor": 0.1, # label smooth factor
"lr_init": 0, # initial learning rate
"lr_init": 0, # initial learning rate
"lr_max": 0.8, # maximum learning rate
"lr_max": 0.8, # maximum learning rate
...
```
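The warmup and decay settings above are expanded into a per-step learning-rate list before training starts. The following is a minimal sketch of that idea, assuming a hypothetical helper named `generate_lr`; the repository's actual generator may differ in naming and edge-case handling:

```
import math

def generate_lr(lr_init, lr_max, lr_end, warmup_epochs, total_epochs,
                steps_per_epoch, lr_decay_mode="Linear"):
    """Sketch of a per-step LR schedule: linear warmup, then decay."""
    total_steps = total_epochs * steps_per_epoch
    warmup_steps = warmup_epochs * steps_per_epoch
    lr_each_step = []
    for step in range(total_steps):
        if step < warmup_steps:
            # ramp linearly from lr_init up to lr_max during warmup
            lr = lr_init + (lr_max - lr_init) * (step + 1) / warmup_steps
        else:
            frac = (step - warmup_steps) / (total_steps - warmup_steps)
            if lr_decay_mode == "Linear":
                # decay linearly from lr_max down to lr_end
                lr = lr_max - (lr_max - lr_end) * frac
            else:  # "cosine"
                lr = lr_end + 0.5 * (lr_max - lr_end) * (1 + math.cos(math.pi * frac))
        lr_each_step.append(lr)
    return lr_each_step
```

With lr_init=0 and warmup_epochs=0, as in the ResNet50 config above, the schedule reduces to a pure decay from lr_max down to the configured end value.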
"save_checkpoint_path": "./", # path to save checkpoint relative to the executed path
"save_checkpoint_path": "./", # path to save checkpoint relative to the executed path
"warmup_epochs": 0, # number of warmup epoch
"warmup_epochs": 0, # number of warmup epoch
"lr_decay_mode": "cosine" # decay mode for generating learning rate
"lr_decay_mode": "cosine" # decay mode for generating learning rate
"use_label_smooth": True, # label_smooth
"use_label_smooth": True, # label_smooth
"label_smooth_factor": 0.1, # label_smooth_factor
"label_smooth_factor": 0.1, # label_smooth_factor
"lr": 0.1 # base learning rate
"lr": 0.1 # base learning rate
```
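Both of the configs above enable label smoothing with a factor of 0.1: the target keeps probability 1 - factor on the true class and spreads the remainder evenly over the other classes. A small NumPy sketch of the idea (illustrative only; the training script applies the same smoothing inside its cross-entropy loss):

```
import numpy as np

def smooth_one_hot(labels, class_num, smooth_factor=0.1):
    """Convert integer labels into smoothed one-hot targets (sketch)."""
    on_value = 1.0 - smooth_factor                 # probability of the true class
    off_value = smooth_factor / (class_num - 1)    # shared by all other classes
    targets = np.full((len(labels), class_num), off_value, dtype=np.float32)
    targets[np.arange(len(labels)), labels] = on_value
    return targets

# e.g. smooth_one_hot(np.array([3]), class_num=1001)[0, 3] == 0.9
```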

- Config for SE-ResNet50, ImageNet2012 dataset

```
...
"save_checkpoint_path": "./", # path to save checkpoint relative to the executed path
"save_checkpoint_path": "./", # path to save checkpoint relative to the executed path
"warmup_epochs": 3, # number of warmup epoch
"warmup_epochs": 3, # number of warmup epoch
"lr_decay_mode": "cosine" # decay mode for generating learning rate
"lr_decay_mode": "cosine" # decay mode for generating learning rate
"use_label_smooth": True, # label_smooth
"use_label_smooth": True, # label_smooth
"label_smooth_factor": 0.1, # label_smooth_factor
"label_smooth_factor": 0.1, # label_smooth_factor
"lr_init": 0.0, # initial learning rate
"lr_init": 0.0, # initial learning rate
"lr_max": 0.3, # maximum learning rate
"lr_max": 0.3, # maximum learning rate
...
```

```
...
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

For distributed training, an HCCL configuration file in JSON format needs to be created in advance.
Please follow the instructions in [hccn_tools](https://gitee.com/mindspore/mindspore/tree/r1.0/model_zoo/utils/hccl_tools).
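For orientation, a generated single-server rank table looks roughly like the sketch below. The IP addresses are placeholders and field names can differ between MindSpore versions, so generate the real file with the hccl_tools script rather than writing it by hand.

```
{
    "version": "1.0",
    "server_count": "1",
    "server_list": [{
        "server_id": "10.0.0.1",
        "device": [
            {"device_id": "0", "device_ip": "192.168.100.101", "rank_id": "0"},
            {"device_id": "1", "device_ip": "192.168.100.102", "rank_id": "1"}
        ],
        "host_nic_ip": "reserve"
    }],
    "status": "completed"
}
```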
Training results will be stored in the example path, in a folder whose name begins with "train" or "train_parallel". There you can find checkpoint files together with results like the following in the log.
```
...
epoch: 5 step: 5004, loss is 3.1978393
```
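To sanity-check one of those checkpoint files outside the provided scripts, the standard MindSpore loading pattern can be used. A minimal sketch follows; the src.resnet import path and the checkpoint file name are assumptions for illustration, not fixed by this README:

```
from mindspore import context
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from src.resnet import resnet50  # assumed repo-local network definition

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

net = resnet50(class_num=1001)                         # matches "class_num" above
param_dict = load_checkpoint("./resnet-90_5004.ckpt")  # hypothetical file name
load_param_into_net(net, param_dict)
# net can now be wrapped in a mindspore.train.Model for evaluation.
```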