epoch: 0.0, current epoch percent: 0.002, step: 200, outputs are (Tensor(shape=[1
Before running the command below, please check that the path for loading the pretrained checkpoint has been set. Please set the checkpoint path to an absolute full path, e.g. "/username/pretrain/checkpoint_100_300.ckpt".
```
bash scripts/run_classifier.sh
```
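A minimal pre-flight sketch: before launching, it can help to confirm the script actually points at an absolute checkpoint path. The grep pattern below assumes the relevant option inside `scripts/run_classifier.sh` contains the word "checkpoint"; the exact variable name is an assumption, not taken from this document.
```
# Confirm the script points at an absolute checkpoint path before launching;
# the pattern "checkpoint" is an assumption about how the option is named.
grep -n "checkpoint" scripts/run_classifier.sh
```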
The command above will run in the background; you can view the training log in classfier_log.txt.
If you choose accuracy as the assessment method, the result will be as follows:
```
acc_num XXX, total_num XXX, accuracy 0.588986
```
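To monitor the background run, standard shell tools are enough; the log file name below is taken from the section above.
```
# Follow the training log as it is written (Ctrl+C to stop following).
tail -f classfier_log.txt
# After evaluation has finished, pull the last reported accuracy line.
grep "accuracy" classfier_log.txt | tail -n 1
```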
#### evaluation on cluener dataset when running on Ascend
```
bash scripts/ner.sh
```
The command above will run in the background; you can view the training log in ner_log.txt.
If you choose F1 as the assessment method, the result will be as follows:
```
Precision 0.920507
Recall 0.948683
F1 0.920507
```
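Similarly, once the background run has finished, the final metric block can be pulled out of the NER log (file name from the section above).
```
# Print the last reported Precision/Recall/F1 lines from the NER log.
grep -E "Precision|Recall|F1" ner_log.txt | tail -n 3
```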
#### evaluation on squad v1.1 dataset when running on Ascend
```
bash scripts/squad.sh
```
The command above will run in the background; you can view the training log in squad_log.txt.
The number of Ascend accelerators is allocated automatically based on the device_num set in the hccl config file; you do not need to specify it.
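Since the accelerator count is derived from the hccl config rather than from a command-line flag, a quick sanity check is to count the device entries declared in that file. The path below is a placeholder, not taken from this document.
```
# Count how many devices the hccl config declares; this is the device_num
# the scripts pick up automatically. The path is a placeholder.
grep -o '"device_id"' /path/to/hccl.json | wc -l
```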
## how to use
For example, if we want to generate the launch command for distributed training of the Bert model on Ascend accelerators, we can run the following command in the `/bert/` dir:
Please note that the Ascend accelerators used must be contiguous. For example, [0,4) means using the four accelerators 0,1,2,3, and [0,1) means using accelerator 0. The first four accelerators form one group and the last four form another group; apart from [0,8), which uses all eight accelerators, cross-group ranges such as [3,6) are not allowed.
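The grouping rule above can be made concrete with a small illustrative helper (hypothetical, not part of the repository scripts): a [start,end) device range is accepted only if it stays inside one four-accelerator group or spans exactly [0,8).
```
# Illustrative check of the device-range rule described above; this helper is
# hypothetical and not part of the repository scripts.
valid_range() {
  local start=$1 end=$2
  if [ "$start" -eq 0 ] && [ "$end" -eq 8 ]; then
    echo "[$start,$end) valid: uses all eight accelerators"
  elif [ $((start / 4)) -eq $(( (end - 1) / 4 )) ]; then
    echo "[$start,$end) valid: stays inside one group"
  else
    echo "[$start,$end) invalid: crosses the group boundary"
  fi
}

valid_range 0 4   # first group: accelerators 0,1,2,3
valid_range 0 1   # single accelerator 0
valid_range 3 6   # rejected: spans both groups
```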