@@ -64,13 +64,13 @@ After dataset preparation, you can start training and evaluation as follows:
 ```bash
 # run training example
 cd ./scripts
-sh run_standalone_train.sh [TRAIN_DATASET]
+sh run_standalone_train.sh [TRAIN_DATASET] [DEVICEID]
 # run distributed training example
 sh run_distribute_train.sh [TRAIN_DATASET] [RANK_TABLE_PATH]
 # run evaluation example
-sh run_eval.sh [EVAL_DATASET_PATH] [DATASET_NAME] [MODEL_CKPT]
+sh run_eval.sh [EVAL_DATASET_PATH] [DATASET_NAME] [MODEL_CKPT] [DEVICEID]
 ```
 # [Script Description](#content)
@@ -116,6 +116,7 @@ Parameters for both training and evaluation can be set in config.py. All the dat
 ```text
 vocab_size # vocabulary size.
 buckets # bucket sequence length.
+test_buckets # test dataset bucket sequence length.
 batch_size # batch size of input dataset.
 embedding_dims # The size of each embedding vector.
 num_class # number of labels.
@@ -134,7 +135,7 @@ Parameters for both training and evaluation can be set in config.py. All the dat
 ```bash
 cd ./scripts
-sh run_standalone_train.sh [DATASET_PATH]
+sh run_standalone_train.sh [DATASET_PATH] [DEVICEID]
 ```
 - Running scripts for distributed training of FastText. Training runs on multiple devices; run the following command in bash to be executed in `scripts/`:
@@ -150,7 +151,7 @@ Parameters for both training and evaluation can be set in config.py. All the dat
 ``` bash
 cd ./scripts
-sh run_eval.sh [DATASET_PATH] [DATASET_NAME] [MODEL_CKPT]
+sh run_eval.sh [DATASET_PATH] [DATASET_NAME] [MODEL_CKPT] [DEVICEID]
 ```
 Note: `DATASET_PATH` is the path to the MindRecord files, e.g. /dataset_path/*.mindrecord
@@ -167,13 +168,13 @@ Parameters for both training and evaluation can be set in config.py. All the dat