!10181 support CPU resnet50

From: @zhao_ting_v
Reviewed-by: @wuxuejian, @guoqi1024
Signed-off-by: @wuxuejian
pull/10181/MERGE
Committed by mindspore-ci-bot via Gitee, 5 years ago
commit 539689ce75

@@ -81,8 +81,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
-- Hardware (Ascend/GPU)
-    - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+- Hardware (Ascend/GPU/CPU)
+    - Prepare hardware environment with Ascend, GPU or CPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
@@ -123,6 +123,16 @@ sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [C
 sh run_gpu_resnet_benchmark.sh [DATASET_PATH] [BATCH_SIZE](optional) [DTYPE](optional) [DEVICE_NUM](optional) [SAVE_CKPT](optional) [SAVE_PATH](optional)
 ```
+- Running on CPU
+```bash
+# standalone training example
+python train.py --net=[resnet50|resnet101] --dataset=[cifar10|imagenet2012] --device_target=CPU --dataset_path=[DATASET_PATH] --pre_trained=[CHECKPOINT_PATH](optional)
+# infer example
+python eval.py --net=[resnet50|resnet101] --dataset=[cifar10|imagenet2012] --dataset_path=[DATASET_PATH] --checkpoint_path=[CHECKPOINT_PATH] --device_target=CPU
+```
 # [Script Description](#contents)
 ## [Script and Sample Code](#contents)

@@ -28,7 +28,8 @@ parser.add_argument('--dataset', type=str, default=None, help='Dataset, either c
 parser.add_argument('--checkpoint_path', type=str, default=None, help='Checkpoint file path')
 parser.add_argument('--dataset_path', type=str, default=None, help='Dataset path')
-parser.add_argument('--device_target', type=str, default='Ascend', help='Device target')
+parser.add_argument('--device_target', type=str, default='Ascend', choices=("Ascend", "GPU", "CPU"),
+                    help="Device target, support Ascend, GPU and CPU.")
 args_opt = parser.parse_args()
 set_seed(1)
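
The functional change in `eval.py` is the `choices` constraint on `--device_target`, so an unsupported backend is rejected at argument-parsing time instead of failing later inside MindSpore. A minimal standalone sketch of that behaviour (plain `argparse`, not code from this repository); the same constraint is applied to `train.py` in the next hunk:

```python
# Standalone illustration of the `choices` constraint added above (not part of the PR).
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--device_target', type=str, default='Ascend', choices=("Ascend", "GPU", "CPU"),
                    help="Device target, support Ascend, GPU and CPU.")

print(parser.parse_args(['--device_target', 'CPU']).device_target)  # -> CPU
# parser.parse_args(['--device_target', 'TPU']) exits with:
#   error: argument --device_target: invalid choice: 'TPU' (choose from 'Ascend', 'GPU', 'CPU')
```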

@@ -39,7 +39,8 @@ parser.add_argument('--run_distribute', type=ast.literal_eval, default=False, he
 parser.add_argument('--device_num', type=int, default=1, help='Device num.')
 parser.add_argument('--dataset_path', type=str, default=None, help='Dataset path')
-parser.add_argument('--device_target', type=str, default='Ascend', help='Device target')
+parser.add_argument('--device_target', type=str, default='Ascend', choices=("Ascend", "GPU", "CPU"),
+                    help="Device target, support Ascend, GPU and CPU.")
 parser.add_argument('--pre_trained', type=str, default=None, help='Pretrained checkpoint path')
 parser.add_argument('--parameter_server', type=ast.literal_eval, default=False, help='Run parameter server train')
 args_opt = parser.parse_args()
@@ -66,6 +67,9 @@ else:
 if __name__ == '__main__':
     target = args_opt.device_target
+    if target == "CPU":
+        args_opt.run_distribute = False
     ckpt_save_dir = config.save_checkpoint_path
     # init context
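
Because the CPU backend has no multi-device launch, this branch forces single-device training before the context is initialized. A minimal sketch of the intended setup, assuming the MindSpore 1.x `context` API (the surrounding `train.py` code is not shown in this hunk):

```python
# Hedged sketch of single-device CPU setup; names follow MindSpore 1.x and the
# actual train.py may differ in detail.
from mindspore import context

target = "CPU"
run_distribute = False  # forced off on CPU, as in the hunk above
context.set_context(mode=context.GRAPH_MODE, device_target=target, save_graphs=False)
# The distributed Ascend/GPU path would additionally call init() and
# context.set_auto_parallel_context(...); neither applies to the CPU backend.
```
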
@@ -153,7 +157,7 @@ if __name__ == '__main__':
         model = Model(net, loss_fn=loss, optimizer=opt, loss_scale_manager=loss_scale, metrics={'acc'},
                       amp_level="O2", keep_batchnorm_fp32=False)
     else:
-        # GPU target
+        # GPU and CPU target
         if args_opt.dataset == "imagenet2012":
             if not config.use_label_smooth:
                 config.label_smooth_factor = 0.0
@@ -162,7 +166,8 @@ if __name__ == '__main__':
         else:
             loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
-        if (args_opt.net == "resnet101" or args_opt.net == "resnet50") and not args_opt.parameter_server:
+        if (args_opt.net == "resnet101" or args_opt.net == "resnet50") and \
+                not args_opt.parameter_server and target != "CPU":
             opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), lr, config.momentum, config.weight_decay,
                            config.loss_scale)
             loss_scale = FixedLossScaleManager(config.loss_scale, drop_overflow_update=False)
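
Adding `target != "CPU"` keeps the CPU backend out of the fixed-loss-scale branch: loss scaling only exists to stop small FP16 gradients from underflowing in the O2 mixed-precision path used on Ascend/GPU, while CPU training stays in FP32. A toy numeric illustration (plain NumPy, not MindSpore code):

```python
# Toy illustration of why a fixed loss scale matters for FP16 but not for FP32 training.
import numpy as np

grad = np.float32(1e-8)                 # a small gradient value
print(np.float16(grad))                 # 0.0 -- underflows when cast to FP16
loss_scale = np.float32(32768.0)
scaled = np.float16(grad * loss_scale)  # ~3.277e-04, representable in FP16
print(scaled / loss_scale)              # ~1e-08 recovered after unscaling
print(grad)                             # FP32 keeps 1e-08 directly, no scaling needed
```
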
@@ -187,5 +192,6 @@ if __name__ == '__main__':
     # train model
     if args_opt.net == "se-resnet50":
         config.epoch_size = config.train_epoch_size
+    dataset_sink_mode = (not args_opt.parameter_server) and target != "CPU"
     model.train(config.epoch_size - config.pretrain_epoch_size, dataset, callbacks=cb,
-                sink_size=dataset.get_dataset_size(), dataset_sink_mode=(not args_opt.parameter_server))
+                sink_size=dataset.get_dataset_size(), dataset_sink_mode=dataset_sink_mode)
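
Dataset sink mode streams batches to a device-side queue, which neither parameter-server training nor the CPU backend uses, so the flag is now computed once and passed to `Model.train`. A minimal sketch of just that decision (not code from the PR):

```python
# Minimal sketch of the sink-mode decision shown above.
def choose_sink_mode(parameter_server: bool, target: str) -> bool:
    """Sink data to the device only when not in parameter-server mode and not on CPU."""
    return (not parameter_server) and target != "CPU"

assert choose_sink_mode(False, "Ascend") is True
assert choose_sink_mode(False, "CPU") is False   # CPU: feed batches step by step
assert choose_sink_mode(True, "GPU") is False    # parameter server: no sinking
```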
