- [evaluation on cola dataset when running on Ascend](#evaluation-on-cola-dataset-when-running-on-ascend)
- [evaluation on cluener dataset when running on Ascend](#evaluation-on-cluener-dataset-when-running-on-ascend)
- [evaluation on msra dataset when running on Ascend](#evaluation-on-msra-dataset-when-running-on-ascend)
- [evaluation on squad v1.1 dataset when running on Ascend](#evaluation-on-squad-v11-dataset-when-running-on-ascend)
- [Model Description](#model-description)
- [Performance](#performance)
@@ -215,7 +216,7 @@ For example, the schema file of cn-wiki-128 dataset for pretraining shows as follows
├─bert_for_finetune.py # network definition for fine-tuning
├─bert_for_pre_training.py # network definition for pre-training
├─bert_model.py # BERT backbone network definition
├─clue_classification_dataset_precess.py # data preprocessing for CLUE classification datasets
├─finetune_data_preprocess.py # data preprocessing for fine-tuning datasets
├─cluener_evaluation.py # evaluation for cluener
├─config.py # parameter configuration for pretraining
├─CRF.py # CRF layer, assessment method for the CLUE dataset
@@ -301,6 +302,7 @@ options:
--load_finetune_checkpoint_path    path to the fine-tuned checkpoint; required when only evaluation is performed
--train_data_file_path             NER TFRecord file for training, e.g., train.tfrecord
--eval_data_file_path              NER TFRecord file for prediction if F1 is used as the evaluation metric; NER JSON file for prediction if clue_benchmark is used
--dataset_format                   dataset format, either mindrecord or tfrecord
The command above will run in the background; you can view training logs in ner_log.txt.
If you choose SpanF1 as the assessment method and set the use_crf option to "true", evaluation after fine-tuning for 10 epochs produces results like the following:
```text
Precision 0.953826
Recall 0.957749
F1 0.955784
```
#### evaluation on squad v1.1 dataset when running on Ascend