@@ -180,7 +180,7 @@ We need three parameters for this script.
- `DATASET_PATH`: the path of the training dataset.
- `DEVICE_NUM`: the number of devices used for distributed training.
-Training result will be stored in the current path, whose folder name begins with "train_parallel". Under this, you can find checkpoint file together with result like the followings in log.
+Training results are stored in the current path, in a folder whose name begins with "train_parallel". There you can find checkpoint files together with results like the following in the log.
```shell
...
@@ -202,7 +202,7 @@ epoch: 42 step: 5004, loss is 1.6453942
sh run_distribute_train_gpu.sh [DATASET_PATH] [DEVICE_NUM]
```
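A usage sketch for the command above, assuming a hypothetical dataset location and an 8-GPU host; both values are placeholders, not taken from the source:

```shell
# Hypothetical invocation: train on 8 GPUs with a dataset stored under
# /data/imagenet/train. Substitute your own dataset path and device count.
sh run_distribute_train_gpu.sh /data/imagenet/train 8
```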
-Training result will be stored in the current path, whose folder name begins with "train_parallel". Under this, you can find checkpoint file together with result like the followings in log.
+Training results are stored in the current path, in a folder whose name begins with "train_parallel". There you can find checkpoint files together with results like the following in the log.
```shell
...
@@ -233,7 +233,7 @@ We need two parameters for this script.
> The checkpoint is produced during the training process.
-Inference result will be stored in the example path, whose folder name is "eval". Under this, you can find result like the followings in log.
+Inference results are stored in the example path, in a folder named "eval". There you can find results like the following in the log.
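To inspect the outputs described above, a minimal sketch, assuming the training and inference logs are written to files named "log" under the generated folders (these filenames are an assumption, not confirmed by this diff):

```shell
# Hypothetical: check the first parallel worker's training log and the
# evaluation log. Adjust the paths to whatever your run actually produces.
tail -n 20 train_parallel0/log
cat eval/log
```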