# Contents
- Thinking Path Re-Ranker
- Model Architecture
- Dataset
- Features
- Environment Requirements
- Quick Start
- Script Description
- Model Description
- Description of Random Situation
- ModelZoo Homepage
# Thinking Path Re-Ranker
Thinking Path Re-Ranker (TPRR) was proposed in 2021 by Huawei Poisson Lab & Parallel Distributed Computing Lab. By incorporating retriever, reranker, and reader modules, TPRR shows excellent performance on open-domain multi-hop question answering, and it won first place on the official HotpotQA leaderboard. This is an example of evaluating TPRR on the HotpotQA dataset in MindSpore. More importantly, this is the first open-source version of TPRR.
# Model Architecture
Specifically, TPRR contains three main modules. The first is the retriever, which iteratively generates document sequences for each hop. The second is the reranker, which selects the best path from the candidate paths generated by the retriever. The last is the reader, which extracts answer spans. A sketch of how the three modules compose is shown below.
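The following is a minimal sketch of that three-stage flow under assumed interfaces: `Retriever.extend_paths`, `Reranker.score`, and `Reader.extract_span` are hypothetical names for illustration, not the actual APIs defined in `src/`.

```python
# Minimal sketch of the three-stage TPRR pipeline (hypothetical interfaces;
# the real module implementations live in src/).
def answer_question(question, retriever, reranker, reader, num_hops=2):
    # Retriever: iteratively extend candidate document paths, one hop at a time.
    paths = [[]]
    for _ in range(num_hops):
        paths = retriever.extend_paths(question, paths)
    # Reranker: pick the best path among the candidates.
    best_path = max(paths, key=lambda path: reranker.score(question, path))
    # Reader: extract the answer span from the selected path.
    return reader.extract_span(question, best_path)
```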
# Dataset
The retriever dataset consists of three parts:

- Wikipedia data: the 2017 English Wikipedia dump with bidirectional hyperlinks.
- dev data: HotpotQA full-wiki setting dev data with 7398 question-answer pairs.
- dev tf-idf data: the candidates for each question in the dev data, taken from the top 500 paragraphs retrieved by TF-IDF from the 5M paragraphs of Wikipedia.
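For a quick sanity check of the dev data, the public HotpotQA full-wiki dev file can be inspected as below. The file name and field names follow the public HotpotQA release and are assumptions here; the preprocessed inputs actually consumed by `retriever_eval.py` may be laid out differently (see `src/process_data.py`).

```python
import json

# Peek at the HotpotQA full-wiki dev file (assumed public file name and
# field layout; the preprocessed inputs for this example may differ).
with open("hotpot_dev_fullwiki_v1.json") as f:
    dev = json.load(f)

print(len(dev))             # number of question-answer pairs
print(dev[0]["question"])   # multi-hop question text
print(dev[0]["answer"])     # gold answer string
```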
# Features

## Mixed Precision
To utilize the strong computing power of the Ascend chip and accelerate evaluation, a mixed-precision evaluation method is used. MindSpore can cope with FP32 inputs and FP16 operators. In this TPRR example, the model is set to FP16 mode for the matmul calculation part.
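As a minimal sketch, this kind of FP16 casting can be applied in MindSpore with `Cell.to_float`. The `ToyNet` network below is a hypothetical placeholder, not the actual TPRR retriever.

```python
import mindspore.nn as nn
from mindspore import dtype as mstype

# Hypothetical placeholder network; the real TPRR models live in src/.
class ToyNet(nn.Cell):
    def __init__(self):
        super(ToyNet, self).__init__()
        self.dense = nn.Dense(768, 1)  # matmul-based layer

    def construct(self, x):
        return self.dense(x)

net = ToyNet()
# Cast the Cell so its matmul computation runs in FP16; FP32 inputs are
# cast at the Cell boundary.
net.to_float(mstype.float16)
```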
# Environment Requirements
- Hardware (Ascend)
- Framework: MindSpore
- For more information, please check the resources on the MindSpore official website.
# Quick Start
After installing MindSpore via the official website and generating the dataset correctly, you can start evaluation as follows.
- running on Ascend

  ```shell
  # run evaluation example with HotPotQA dev dataset
  sh run_eval_ascend.sh
  ```
# Script Description

## Script and Sample Code
```text
.
└─tprr
  ├─README.md
  ├─scripts
  | ├─run_eval_ascend.sh    # Launch evaluation in Ascend
  |
  ├─src
  | ├─config.py             # Evaluation configurations
  | ├─onehop.py             # One-hop model
  | ├─onehop_bert.py        # One-hop BERT model
  | ├─process_data.py       # Data preprocessing
  | ├─twohop.py             # Two-hop model
  | ├─twohop_bert.py        # Two-hop BERT model
  | └─utils.py              # Utils for evaluation
  |
  └─retriever_eval.py       # Evaluation net for retriever
```
## Script Parameters
Parameters for evaluation can be set in config.py.
- config for TPRR retriever dataset

  ```python
  "q_len": 64,        # Max query length
  "d_len": 192,       # Max doc length
  "s_len": 448,       # Max sequence length
  "in_len": 768,      # Input dim
  "out_len": 1,       # Output dim
  "num_docs": 500,    # Num of candidate docs
  "topk": 8,          # Top k
  "onehop_num": 8     # Num of one-hop docs used as two-hop neighbors
  ```
See config.py for more configuration details.
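As an illustration only, such settings are often exposed through an argparse-style helper; the sketch below assumes that shape, and the real `config.py` may be structured differently.

```python
import argparse

# Hypothetical sketch of how config.py might expose the retriever settings.
def retriever_config():
    parser = argparse.ArgumentParser(description="TPRR retriever evaluation")
    parser.add_argument("--q_len", type=int, default=64, help="max query length")
    parser.add_argument("--d_len", type=int, default=192, help="max doc length")
    parser.add_argument("--s_len", type=int, default=448, help="max sequence length")
    parser.add_argument("--in_len", type=int, default=768, help="input dim")
    parser.add_argument("--out_len", type=int, default=1, help="output dim")
    parser.add_argument("--num_docs", type=int, default=500, help="number of candidate docs")
    parser.add_argument("--topk", type=int, default=8, help="top k paths kept")
    parser.add_argument("--onehop_num", type=int, default=8,
                        help="one-hop docs used as two-hop neighbors")
    return parser.parse_args()
```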
## Evaluation Process

### Evaluation
- Evaluation on Ascend

  ```shell
  sh run_eval_ascend.sh
  ```
Evaluation results will be stored in the scripts path, in a folder whose name begins with "eval". You can find results like the following in the log.
```text
###step###: 0 val: 0 count: 1 true count: 0 PEM: 0.0
...
###step###: 7396 val: 6796 count: 7397 true count: 6924 PEM: 0.9187508449371367
true top8 PEM: 0.9815135759676488
evaluation time (h): 20.155506462653477
```
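The final metrics follow directly from the counters in the last log line, assuming PEM is the fraction of exactly matched paths over all questions (`val / count`) and the top-8 PEM normalizes by the questions whose gold path appears among the top-8 candidates (`true count`). That interpretation is an assumption, but the arithmetic reproduces the log values:

```python
# Counters taken from the final log line above.
val, count, true_count = 6796, 7397, 6924

pem = val / count             # 0.9187508449371367, as in the log
top8_pem = val / true_count   # 0.9815135759676488, as in the log
print(f"PEM: {pem}, true top8 PEM: {top8_pem}")
```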
# Model Description

## Performance

### Inference Performance
| Parameter | TPRR (Ascend) |
| --- | --- |
| Model Version | TPRR |
| Resource | Ascend 910 |
| Uploaded Date | 03/12/2021 (month/day/year) |
| MindSpore Version | 1.2.0 |
| Dataset | HotpotQA |
| Batch size | 1 |
| Output | inference path |
| PEM | 0.9188 |
# Description of Random Situation

There is no randomness in the evaluation process.
# ModelZoo Homepage
Please check the official homepage.