GPU: cd scripts && sh run_distribute_train_for_gpu.sh 8 0,1,2,3,4,5,6,7 ~/imagenet/train/
```
### Result
Training results will be stored in the example path. Checkpoints are stored in `./checkpoint` by default, and the training log is redirected to `./train/train.log`.
## [Eval process](#contents)
You can start evaluation using Python or shell scripts. The usage of the shell script is as follows:
- GPU: sh run_eval_for_gpu.sh [DATASET_PATH] [CHECKPOINT_PATH]
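A common failure mode with these launch scripts is passing a dataset directory or checkpoint file that does not exist. The sketch below is a hypothetical pre-flight wrapper (not part of this repo) that validates both paths before printing the eval command; `run_eval_for_gpu.sh` is the script named above, and the demo paths are placeholders.

```shell
# Hypothetical helper: check that the dataset directory and checkpoint
# file exist, then print the command that would be launched. Replace
# `echo` with the real invocation once the paths are confirmed.
launch_eval() {
    dataset_path="$1"
    checkpoint_path="$2"
    if [ ! -d "$dataset_path" ]; then
        echo "dataset path not found: $dataset_path" >&2
        return 1
    fi
    if [ ! -f "$checkpoint_path" ]; then
        echo "checkpoint not found: $checkpoint_path" >&2
        return 1
    fi
    echo sh run_eval_for_gpu.sh "$dataset_path" "$checkpoint_path"
}

# Demo: use the current directory and a temporary file as stand-ins
# for the real dataset and checkpoint paths.
tmp_ckpt=$(mktemp)
launch_eval . "$tmp_ckpt"
rm -f "$tmp_ckpt"
```

The same pattern applies to the training script, whose arguments are device count, visible device list, and dataset path.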