This example implements pre-training, fine-tuning, and evaluation of [BERT-base](https://github.com/google-research/bert) (the base version of the BERT model) and [BERT-NEZHA](https://github.com/huawei-noah/Pretrained-Language-Model) (a Chinese pre-trained language model developed by Huawei, which introduces an improved Functional Relative Positional Encoding as an effective positional encoding scheme).
- Download the zhwiki dataset for pre-training. Extract and clean the text in the dataset with [WikiExtractor](https://github.com/attardi/wikiextractor), convert the result to TFRecord format, and move the files to a specified path; see the sketch after this list.
- Set options in `config.py`, including the loss scale, optimizer, and network settings. Click [here](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html#tfrecord) for more information about the dataset and the JSON schema file.
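A minimal sketch of the data preparation step is shown below, assuming a dump from dumps.wikimedia.org and a BERT-style `create_pretraining_data.py` conversion script; the exact download URL, directory layout, and conversion tooling are assumptions and should be adapted to your environment.

```bash
# Download a zhwiki dump (which dump file to use is up to you).
wget https://dumps.wikimedia.org/zhwiki/latest/zhwiki-latest-pages-articles.xml.bz2

# Extract and clean plain text with WikiExtractor (newer releases are invoked
# as `python -m wikiextractor.WikiExtractor` instead).
python WikiExtractor.py zhwiki-latest-pages-articles.xml.bz2 -o extracted/

# Convert the cleaned text to TFRecord format and place it at the path the
# training configuration expects. The conversion script, its flags, and the
# paths below are placeholders; the extracted text usually needs further
# cleaning and sentence splitting before this step.
python create_pretraining_data.py \
  --input_file=extracted/AA/wiki_00 \
  --output_file=/path/to/tfrecord/zhwiki.tfrecord \
  --vocab_file=/path/to/vocab.txt
```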
Parameters of the BERT model and options for pre-training, fine-tuning, and evaluation are set in the files `config.py`, `finetune_config.py`, and `evaluation_config.py`, respectively. The options accepted by the fine-tuning and evaluation scripts are listed below.
`scripts/run_ner.sh`:

- `device_target`: targeted device to run the task: Ascend | GPU
- `do_train`: whether to run training on the training set: true | false
- `do_eval`: whether to run evaluation on the dev set: true | false
- `assessment_method`: assessment method used for evaluation: f1 | clue_benchmark
- `use_crf`: whether to use a CRF layer to calculate the loss: true | false
- `device_id`: device ID on which to run the task
- `epoch_num`: total number of training epochs to perform
- `num_class`: number of classes for labeling
- `vocab_file_path`: the vocabulary file that the BERT model was trained on
- `label2id_file_path`: label-to-id JSON file
- `save_finetune_checkpoint_path`: path to save the generated fine-tuning checkpoint
- `load_pretrain_checkpoint_path`: initial checkpoint (usually from a pre-trained BERT model)
- `load_finetune_checkpoint_path`: path to a fine-tuning checkpoint, required when only evaluation is run
- `train_data_file_path`: NER TFRecord file for training, e.g., train.tfrecord
- `eval_data_file_path`: NER TFRecord file for predictions when f1 is the assessment method, or NER JSON file when clue_benchmark is the assessment method
- `schema_file_path`: path to the dataset schema file
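The options above map one-to-one onto the values a run needs; a hypothetical fine-tuning plus evaluation invocation is sketched below, assuming the script forwards them as command-line flags. Some versions instead expect the values to be edited inside the script, so treat every flag value and path here as a placeholder.

```bash
# Hypothetical NER fine-tuning + evaluation run; all values are placeholders.
bash scripts/run_ner.sh \
  --device_target="Ascend" \
  --do_train="true" \
  --do_eval="true" \
  --assessment_method="f1" \
  --use_crf="false" \
  --device_id=0 \
  --epoch_num=5 \
  --num_class=41 \
  --vocab_file_path="/path/to/vocab.txt" \
  --label2id_file_path="/path/to/label2id.json" \
  --save_finetune_checkpoint_path="/path/to/save/checkpoint/" \
  --load_pretrain_checkpoint_path="/path/to/pretrain.ckpt" \
  --train_data_file_path="/path/to/train.tfrecord" \
  --eval_data_file_path="/path/to/dev.tfrecord" \
  --schema_file_path="/path/to/schema.json"
```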
`scripts/run_squad.sh`:

- `device_target`: targeted device to run the task: Ascend | GPU
- `do_train`: whether to run training on the training set: true | false
- `do_eval`: whether to run evaluation on the dev set: true | false
- `device_id`: device ID on which to run the task
- `epoch_num`: total number of training epochs to perform
- `num_class`: number of classes to classify, usually 2 for the SQuAD task
- `vocab_file_path`: the vocabulary file that the BERT model was trained on
- `eval_json_path`: path to the SQuAD dev JSON file
- `save_finetune_checkpoint_path`: path to save the generated fine-tuning checkpoint
- `load_pretrain_checkpoint_path`: initial checkpoint (usually from a pre-trained BERT model)
- `load_finetune_checkpoint_path`: path to a fine-tuning checkpoint, required when only evaluation is run
- `train_data_file_path`: SQuAD TFRecord file for training, e.g., train1.1.tfrecord
- `eval_data_file_path`: SQuAD TFRecord file for predictions, e.g., dev1.1.tfrecord
- `schema_file_path`: path to the dataset schema file
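Since `load_finetune_checkpoint_path` is only needed when training is skipped, an evaluation-only run might look like the following hypothetical sketch, again assuming the options are passed as command-line flags; every value and path is a placeholder.

```bash
# Hypothetical SQuAD evaluation-only run; all values are placeholders.
bash scripts/run_squad.sh \
  --device_target="Ascend" \
  --do_train="false" \
  --do_eval="true" \
  --device_id=0 \
  --num_class=2 \
  --vocab_file_path="/path/to/vocab.txt" \
  --eval_json_path="/path/to/dev-v1.1.json" \
  --load_finetune_checkpoint_path="/path/to/finetuned_squad.ckpt" \
  --eval_data_file_path="/path/to/dev1.1.tfrecord" \
  --schema_file_path="/path/to/schema.json"
```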
`scripts/run_classifier.sh`:

- `device_target`: targeted device to run the task: Ascend | GPU
- `do_train`: whether to run training on the training set: true | false
- `do_eval`: whether to run evaluation on the dev set: true | false
- `assessment_method`: assessment method used for evaluation: accuracy | f1 | mcc | spearman_correlation
- `device_id`: device ID on which to run the task
- `epoch_num`: total number of training epochs to perform
- `num_class`: number of classes for labeling
- `save_finetune_checkpoint_path`: path to save the generated fine-tuning checkpoint
- `load_pretrain_checkpoint_path`: initial checkpoint (usually from a pre-trained BERT model)
- `load_finetune_checkpoint_path`: path to a fine-tuning checkpoint, required when only evaluation is run
- `train_data_file_path`: TFRecord file for training, e.g., train.tfrecord
- `eval_data_file_path`: TFRecord file for predictions, e.g., dev.tfrecord
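As with the other tasks, a classification run could be launched along these lines, picking one of the listed assessment methods; this is a hypothetical sketch under the same assumption that the script accepts these options as flags, and every value is a placeholder.

```bash
# Hypothetical classification fine-tuning + evaluation run; placeholder values.
bash scripts/run_classifier.sh \
  --device_target="Ascend" \
  --do_train="true" \
  --do_eval="true" \
  --assessment_method="accuracy" \
  --device_id=0 \
  --epoch_num=3 \
  --num_class=2 \
  --save_finetune_checkpoint_path="/path/to/save/checkpoint/" \
  --load_pretrain_checkpoint_path="/path/to/pretrain.ckpt" \
  --train_data_file_path="/path/to/train.tfrecord" \
  --eval_data_file_path="/path/to/dev.tfrecord"
```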