# Contents
- [Thinking Path Re-Ranker](#thinking-path-re-ranker)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
- [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
- [Script and Sample Code](#script-and-sample-code)
- [Script Parameters](#script-parameters)
- [Evaluation Process](#evaluation-process)
- [Evaluation](#evaluation)
- [Model Description](#model-description)
- [Performance](#performance)
- [Description of random situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [Thinking Path Re-Ranker](#contents)
Thinking Path Re-Ranker (TPRR) was proposed in 2021 by Huawei Poisson Lab & Parallel Distributed Computing Lab. By combining retriever, re-ranker and reader modules, TPRR achieves excellent performance on open-domain multi-hop question answering, and it holds first place on the current official HotpotQA leaderboard. This is an example of evaluating TPRR on the HotpotQA dataset with MindSpore. More importantly, this is the first open-source version of TPRR.
# [Model Architecture](#contents)
Specifically, TPRR contains three main modules. The first is the retriever, which iteratively generates the document sequence for each hop. The second is the re-ranker, which selects the best path from the candidate paths generated by the retriever. The last is the reader, which extracts answer spans.
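At a high level, the three modules are chained as sketched below. This is only a rough illustration with hypothetical function names (`retrieve_paths`, `rerank_score`, `read_answer`), not the actual interfaces of the scripts under `src/`.
```python
# Rough sketch of the TPRR pipeline; every function name here is a
# hypothetical placeholder, not a real interface from this repository.
def answer_question(question, wikipedia):
    # Retriever: iteratively extend candidate document paths, one hop at a time.
    paths = retrieve_paths(question, wikipedia, num_hops=2)
    # Re-ranker: score each candidate path and keep the best one.
    best_path = max(paths, key=lambda p: rerank_score(question, p))
    # Reader: extract the answer span and supporting sentences from the best path.
    return read_answer(question, best_path)
```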
# [Dataset](#contents)
The retriever dataset consists of three parts:
- Wikipedia data: the 2017 English Wikipedia dump with bidirectional hyperlinks.
- dev data: HotpotQA full-wiki setting dev data with 7398 question-answer pairs.
- dev tf-idf data: the candidate paragraphs for each dev question, taken from the top-500 of the 5M Wikipedia paragraphs retrieved by TF-IDF.
The re-ranker dataset consists of two parts:
- Wikipedia data: the 2017 English Wikipedia dump.
- dev data: HotpotQA full-wiki setting dev data with 7398 question-answer pairs.
# [Features](#contents)
## [Mixed Precision](#contents)
To utilize the strong computation power of the Ascend chip and to accelerate the evaluation process, a mixed-precision evaluation method is used. MindSpore can cope with FP32 inputs and FP16 operators. In the TPRR example, the model is set to FP16 mode for the matmul calculation part.
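As a rough illustration of this idea (not the exact casting scheme used in this repository), a MindSpore cell can cast FP32 inputs to FP16 just for the matmul and cast the result back afterwards:
```python
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor, dtype as mstype

class MatMulFP16(nn.Cell):
    """Illustrative cell: FP32 in, FP16 matmul, FP32 out."""
    def __init__(self):
        super().__init__()
        self.cast = ops.Cast()
        self.matmul = ops.MatMul()

    def construct(self, x, w):
        # Cast the FP32 inputs to FP16 for the compute-heavy matmul ...
        y = self.matmul(self.cast(x, mstype.float16), self.cast(w, mstype.float16))
        # ... then cast the result back to FP32 for the rest of the network.
        return self.cast(y, mstype.float32)

net = MatMulFP16()
out = net(Tensor(np.ones((2, 8)), mstype.float32), Tensor(np.ones((8, 4)), mstype.float32))
print(out.dtype)  # Float32
```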
# [Environment Requirements](#contents)
- Hardware (Ascend)
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)
After installing MindSpore via the official website and generating the dataset correctly, you can start evaluation as follows.
- running on Ascend
```shell
# run evaluation example with HotpotQA dev dataset
sh run_eval_ascend.sh
sh run_eval_ascend_reranker_reader.sh
```
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```shell
.
└─tprr
├─README.md
├─scripts
| ├─run_eval_ascend.sh # Launch retriever evaluation in ascend
| └─run_eval_ascend_reranker_reader.sh # Launch re-ranker and reader evaluation in ascend
|
├─src
| ├─build_reranker_data.py # build data for re-ranker from result of retriever
| ├─config.py # Evaluation configurations for retriever
| ├─hotpot_evaluate_v1.py # Hotpotqa evaluation script
| ├─onehop.py # Onehop model of retriever
| ├─onehop_bert.py # Onehop bert model of retriever
| ├─process_data.py # Data preprocessing for retriever
| ├─reader.py # Reader model
| ├─reader_albert_xxlarge.py # Albert-xxlarge module of reader model
| ├─reader_downstream.py # Downstream module of reader model
| ├─reader_eval.py # Reader evaluation script
| ├─rerank_albert_xxlarge.py # Albert-xxlarge module of re-ranker model
| ├─rerank_and_reader_data_generator.py # Data generator for re-ranker and reader
| ├─rerank_and_reader_utils.py # Utils for re-ranker and reader
| ├─rerank_downstream.py # Downstream module of re-ranker model
| ├─reranker.py # Re-ranker model
| ├─reranker_eval.py # Re-ranker evaluation script
| ├─twohop.py # Twohop model of retriever
| ├─twohop_bert.py # Twohop bert model of retriever
| └─utils.py # Utils for retriever
|
├─retriever_eval.py # Evaluation net for retriever
└─reranker_and_reader_eval.py # Evaluation net for re-ranker and reader
```
## [Script Parameters](#contents)
Parameters for retriever evaluation can be set in config.py.
- config for TPRR retriever
```python
"q_len": 64, # Max query length
"d_len": 192, # Max doc length
"s_len": 448, # Max sequence length
"in_len": 768, # Input dim
"out_len": 1, # Output dim
"num_docs": 500, # Num of docs
"topk": 8, # Top k
"onehop_num": 8 # Num of onehop doc as twohop neighbor
```
See config.py for more configuration.
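For illustration only, the options above could be grouped with argparse-style defaults roughly as follows; the helper name `retriever_config` is hypothetical, and the real config.py may be organized differently.
```python
import argparse

def retriever_config():  # hypothetical helper, for illustration only
    parser = argparse.ArgumentParser(description="TPRR retriever evaluation")
    parser.add_argument("--q_len", type=int, default=64, help="max query length")
    parser.add_argument("--d_len", type=int, default=192, help="max doc length")
    parser.add_argument("--s_len", type=int, default=448, help="max sequence length")
    parser.add_argument("--in_len", type=int, default=768, help="input dim")
    parser.add_argument("--out_len", type=int, default=1, help="output dim")
    parser.add_argument("--num_docs", type=int, default=500, help="number of candidate docs")
    parser.add_argument("--topk", type=int, default=8, help="top k")
    parser.add_argument("--onehop_num", type=int, default=8, help="one-hop docs used as two-hop neighbors")
    return parser.parse_args()
```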
Parameters for re-ranker and reader evaluation can be passed directly at execution time.
- parameters for TPRR re-ranker and reader
```python
"seq_len": 512, # sequence length
"rerank_batch_size": 32, # batch size for re-ranker evaluation
"reader_batch_size": 448, # batch size for reader evaluation
"sp_threshold": 8 # threshold for picking supporting sentence
```
See config.py for more configuration.
## [Evaluation Process](#contents)
### Evaluation
- Retriever evaluation on Ascend
```shell
sh run_eval_ascend.sh
```
The evaluation result will be stored in the scripts path, in a folder whose name begins with "eval_tr". You can find results like the following in the log.
```text
###step###: 0
val: 0
count: 1
true count: 0
PEM: 0.0
...
###step###: 7396
val:6796
count:7397
true count: 6924
PEM: 0.9187508449371367
true top8 PEM: 0.9815135759676488
evaluation time (h): 20.155506462653477
```
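The final numbers in the log are consistent with the printed counters. Assuming PEM is computed as val / count and the "true" variant as val / true count (the exact definitions are in retriever_eval.py), the values above can be reproduced:
```python
# Relationship inferred from the log above; see retriever_eval.py for the
# actual metric definitions.
val, count, true_count = 6796, 7397, 6924
print(val / count)       # 0.9187508449371367 -> PEM
print(val / true_count)  # 0.9815135759676488 -> true top8 PEM
```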
- Re-ranker and reader evaluation on Ascend
Use the output of the retriever as the input of the re-ranker.
```shell
sh run_eval_ascend_reranker_reader.sh
```
The evaluation result will be stored in the scripts path, in a folder whose name begins with "eval". You can find results like the following in the log.
```text
total top1 pem: 0.8803511141120864
...
em: 0.67440918298447
f1: 0.8025625656569652
prec: 0.8292800393689271
recall: 0.8136908451841731
sp_em: 0.6009453072248481
sp_f1: 0.844555664157302
sp_prec: 0.8640844345841021
sp_recall: 0.8446123918845106
joint_em: 0.4537474679270763
joint_f1: 0.715119580346802
joint_prec: 0.7540052057184267
joint_recall: 0.7250240424067661
```
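The answer metrics (em, f1, prec, recall) follow the standard HotpotQA evaluation implemented in hotpot_evaluate_v1.py, averaged over the dev set; the sp_* metrics score supporting-fact prediction and the joint_* metrics combine both per example. Below is a simplified sketch of the per-example EM/F1 computation (standard token-overlap scoring, not a copy of the repository's script):
```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def em_f1(prediction, gold):
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    em = float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return em, 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return em, 2 * precision * recall / (precision + recall)

print(em_f1("the Apollo program", "Apollo program"))  # (1.0, 1.0) after normalization
```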
# [Model Description](#contents)
## [Performance](#contents)
### Inference Performance
| Parameter | TPRR (Ascend) |
| ------------------------------ | ---------------------------- |
| Model Version | TPRR |
| Resource | Ascend 910 |
| Uploaded Date | 03/12/2021 (month/day/year) |
| MindSpore Version | 1.2.0 |
| Dataset | HotpotQA |
| Batch_size | 1 |
| Output | inference path |
| PEM | 0.9188 |
| total top1 pem | 0.88 |
| joint_f1 | 0.7151 |
# [Description of random situation](#contents)
No randomness is involved in evaluation.
# [ModelZoo Homepage](#contents)
Please check the official [homepage](http://gitee.com/mindspore/mindspore/tree/master/model_zoo).