Parameters for both training and evaluation can be set in config.py.
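The layout of config.py varies between models; the sketch below only illustrates the EasyDict-style pattern commonly used in MindSpore model scripts, and every field name and value in it is a hypothetical placeholder rather than this repository's actual configuration.
```
# config.py -- illustrative sketch only; the real file defines this model's own fields and values.
from easydict import EasyDict as ed

config = ed({
    "batch_size": 32,           # assumed per-device batch size
    "epoch_size": 90,           # assumed number of training epochs
    "lr": 0.1,                  # assumed initial learning rate
    "momentum": 0.9,            # assumed SGD momentum
    "weight_decay": 1e-4,       # assumed weight decay
    "keep_checkpoint_max": 10,  # assumed maximum number of checkpoints to keep
})
```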
#### Usage
```
# distributed training on Ascend
Usage: sh run_distribute_train.sh [MINDSPORE_HCCL_CONFIG_PATH] [DATASET_PATH]
# distributed training on GPU
Usage: sh run_distribute_train_for_gpu.sh [RANK_SIZE] [DATASET_PATH]
# standalone training
Usage: sh run_standalone_train.sh [DATASET_PATH] [PLATFORM]
```
#### Launch
```
# distributed training example on Ascend
sh run_distribute_train.sh rank_table.json ../data/train
# distributed training example on GPU
sh run_distribute_train_for_gpu.sh 8 ../data/train
# standalone training example on Ascend
sh run_standalone_train.sh ../data/train Ascend
# standalone training example on GPU
sh run_standalone_train.sh ../data/train GPU
```
> For details about rank_table.json, refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html).
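As a rough orientation, an HCCL rank table for a single Ascend server is usually a JSON file along the lines of the sketch below (truncated to two devices for brevity). The exact schema depends on your MindSpore and driver versions, and the server ID and device IPs shown are placeholders, so treat the linked tutorial as the authoritative source for the format your environment expects.
```
{
  "version": "1.0",
  "server_count": "1",
  "server_list": [
    {
      "server_id": "10.0.0.1",
      "device": [
        {"device_id": "0", "device_ip": "192.168.100.101", "rank_id": "0"},
        {"device_id": "1", "device_ip": "192.168.100.102", "rank_id": "1"}
      ],
      "host_nic_ip": "reserve"
    }
  ],
  "status": "completed"
}
```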