The python command above will run in the background; the log and model checkpoint will be generated in `output/202x-xx-xx_time_xx_xx_xx/`, and the loss values are recorded in that log.
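Since the process is detached from the terminal, the quickest way to follow it is to tail the generated log. This is only a sketch: the name of the log file inside the timestamped output directory is not given here, so the `.log` suffix below is an assumption and may need adjusting.

```
# Sketch: follow the log of the most recent run (the .log suffix is an assumption).
latest_run=$(ls -dt output/*/ | head -n 1)
tail -f "${latest_run}"*.log
```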
- running on Ascend
```
# distributed training without a pretrained checkpoint
sh scripts/run_distribute_train.sh 8 rank_table.json /PATH/TO/DATASET
# distributed training, resuming from a pretrained checkpoint
sh scripts/run_distribute_train.sh 8 rank_table.json /PATH/TO/DATASET /PATH/TO/PRETRAINED_CKPT
```
The above shell script will run distributed training in the background. You can view the log and model checkpoint of each rank under `train[X]/output/202x-xx-xx_time_xx_xx_xx/`; the loss values are recorded in those logs.
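With eight ranks running, each `train[X]` directory gets its own output. A quick way to confirm that every rank is making progress is to scan all of them at once; this is only a sketch, and both the `.log` suffix and the assumption that loss lines contain the word "loss" may differ in your setup.

```
# Sketch: print the last reported loss line from every rank's log (file suffix assumed).
for d in train*/output/*/; do
    echo "== ${d}"
    grep "loss" "${d}"*.log | tail -n 1
done

# Checkpoints written so far by each rank.
ls -lh train*/output/*/*.ckpt
```

If nothing matches, check `scripts/run_distribute_train.sh` to see which file names the script actually redirects its output to.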