@ -1,82 +0,0 @@
# Guideline to Convert CLUENER2020 Training Data to MindRecord for BERT Fine-Tuning

<!-- TOC -->

- [What does the example do](#what-does-the-example-do)
- [How to use the example to process CLUENER2020](#how-to-use-the-example-to-process-cluener2020)
    - [Download CLUENER2020 and unzip](#download-cluener2020-and-unzip)
    - [Generate MindRecord](#generate-mindrecord)
    - [Create MindDataset By MindRecord](#create-minddataset-by-mindrecord)

<!-- /TOC -->
## What does the example do

This example converts the [CLUENER2020](https://www.cluebenchmarks.com/introduce.html) training data into MindRecord files, which are then used for BERT fine-tuning.

1. run.sh: the entry script for generating MindRecord files.
    - data_processor_seq.py: the script from [CLUEbenchmark/CLUENER2020](https://github.com/CLUEbenchmark/CLUENER2020/tree/master/tf_version); only the part that generated TFRecord files was changed to generate MindRecord instead (see the sketch after this list).
    - label2id.json: the file from [CLUEbenchmark/CLUENER2020](https://github.com/CLUEbenchmark/CLUENER2020/tree/master/tf_version).
    - tokenization.py: the script from [CLUEbenchmark/CLUENER2020](https://github.com/CLUEbenchmark/CLUENER2020/tree/master/tf_version).
    - vocab.txt: the file from [CLUEbenchmark/CLUENER2020](https://github.com/CLUEbenchmark/CLUENER2020/tree/master/tf_version).
2. run_read.sh: the entry script for creating a MindDataset from the MindRecord files.
    - create_dataset.py: uses MindDataset to read the MindRecord files and build the dataset.
3. data: the output directory for the MindRecord files.
4. cluener_public: the directory that holds the CLUENER2020 training data.
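
Under the hood, data_processor_seq.py writes each tokenized sample through `mindspore.mindrecord.FileWriter`. The following is a minimal sketch of that writing step, not the full script: the field names and the single-shard output match the logs shown later, while the dummy row and the sequence length of 64 are assumptions for illustration.

```python
# Minimal sketch of the MindRecord writing step (assumed simplification of
# data_processor_seq.py; field names match the run_read.sh output below).
import numpy as np
from mindspore.mindrecord import FileWriter

SEQ_LEN = 64  # assumed max sequence length, matching the printed examples

# shard_num=1 produces exactly data/train.mindrecord plus its .db index.
writer = FileWriter(file_name="data/train.mindrecord", shard_num=1)

# One schema entry per feature produced by the BERT preprocessing.
schema = {
    "input_ids": {"type": "int64", "shape": [-1]},
    "input_mask": {"type": "int64", "shape": [-1]},
    "segment_ids": {"type": "int64", "shape": [-1]},
    "label_ids": {"type": "int64", "shape": [-1]},
}
writer.add_schema(schema, "CLUENER2020 NER schema")

# Each row is a dict of numpy arrays; real rows come from tokenized sentences.
row = {
    "input_ids": np.zeros(SEQ_LEN, dtype=np.int64),
    "input_mask": np.zeros(SEQ_LEN, dtype=np.int64),
    "segment_ids": np.zeros(SEQ_LEN, dtype=np.int64),
    "label_ids": np.zeros(SEQ_LEN, dtype=np.int64),
}
writer.write_raw_data([row])
writer.commit()  # flushes rows and writes the .mindrecord.db index
```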

## How to use the example to process CLUENER2020

Download CLUENER2020, convert it to MindRecord, and then use MindDataset to read the MindRecord files back.

### Download CLUENER2020 and unzip

1. Download the training data zip.

> [CLUENER2020 dataset download address](https://www.cluebenchmarks.com/introduce.html) **-> Task Introduction (任务介绍) -> CLUENER fine-grained named entity recognition (CLUENER 细粒度命名实体识别) -> cluener download link (cluener下载链接)**

2. Unzip the training data into the directory example/nlp_to_mindrecord/CLUERNER2020/cluener_public.

```bash
unzip -d {your-mindspore}/example/nlp_to_mindrecord/CLUERNER2020/cluener_public cluener_public.zip
```

### Generate MindRecord

1. Run the run.sh script.

```bash
bash run.sh
```

2. The output looks like this:
```
...
[INFO] ME(17603:139620983514944,MainProcess):2020-04-28-16:56:12.498.235 [mindspore/mindrecord/filewriter.py:313] The list of mindrecord files created are: ['data/train.mindrecord'], and the list of index files are: ['data/train.mindrecord.db']
...
[INFO] ME(17603,python):2020-04-28-16:56:13.400.175 [mindspore/ccsrc/mindrecord/io/shard_writer.cc:667] WriteRawData] Write 1 records successfully.
[INFO] ME(17603,python):2020-04-28-16:56:13.400.863 [mindspore/ccsrc/mindrecord/io/shard_writer.cc:667] WriteRawData] Write 1 records successfully.
[INFO] ME(17603,python):2020-04-28-16:56:13.401.534 [mindspore/ccsrc/mindrecord/io/shard_writer.cc:667] WriteRawData] Write 1 records successfully.
[INFO] ME(17603,python):2020-04-28-16:56:13.402.179 [mindspore/ccsrc/mindrecord/io/shard_writer.cc:667] WriteRawData] Write 1 records successfully.
[INFO] ME(17603,python):2020-04-28-16:56:13.402.702 [mindspore/ccsrc/mindrecord/io/shard_writer.cc:667] WriteRawData] Write 1 records successfully.
...
[INFO] ME(17603:139620983514944,MainProcess):2020-04-28-16:56:13.431.208 [mindspore/mindrecord/filewriter.py:313] The list of mindrecord files created are: ['data/dev.mindrecord'], and the list of index files are: ['data/dev.mindrecord.db']
```

### Create MindDataset By MindRecord

1. Run the run_read.sh script.

```bash
bash run_read.sh
```

2. The output looks like this:
```
...
example 1340: input_ids: [ 101 3173 1290 4852 7676 3949 122 3299 123 126 3189 4510 8020 6381 5442 7357 2590 3636 8021 7676 3949 4294 1166 6121 3124 1277 6121 3124 7270 2135 3295 5789 3326 123 126 3189 1355 6134 1093 1325 3173 2399 6590 6791 8024 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
example 1340: input_mask: [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
example 1340: segment_ids: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
example 1340: label_ids: [ 0 18 19 20 2 4 0 0 0 0 0 0 0 34 36 26 27 28 0 34 35 35 35 35 35 35 35 35 35 36 26 27 28 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
example 1341: input_ids: [ 101 1728 711 4293 3868 1168 2190 2150 3791 934 3633 3428 4638 6237 7025 8024 3297 1400 5310 3362 6206 5023 5401 1744 3297 7770 3791 7368 976 1139 1104 2137 511 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
example 1341: input_mask: [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
example 1341: segment_ids: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
example 1341: label_ids: [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 18 19 19 19 19 20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
...
```
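
The label_ids in this output can be decoded back into BMES entity tags with label2id.json (included alongside the scripts). A minimal decoding sketch, assuming it runs from the example directory:

```python
# Decode numeric label_ids back into BMES tag names via label2id.json.
import json

with open("label2id.json", "r", encoding="utf-8") as f:
    label2id = json.load(f)
id2label = {v: k for k, v in label2id.items()}

# The first few label_ids of example 1340 above: a government entity span
# (18/19/20) followed by an address span (2/4).
label_ids = [0, 18, 19, 20, 2, 4, 0]
print([id2label[i] for i in label_ids])
# ['O', 'B_government', 'M_government', 'E_government', 'B_address', 'E_address', 'O']
```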
@ -1,36 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""create MindDataset by MindRecord"""
|
|
||||||
import mindspore.dataset as ds
|
|
||||||
|
|
||||||
def create_dataset(data_file):
|
|
||||||
"""create MindDataset"""
|
|
||||||
num_readers = 4
|
|
||||||
data_set = ds.MindDataset(dataset_file=data_file, num_parallel_workers=num_readers, shuffle=True)
|
|
||||||
index = 0
|
|
||||||
for item in data_set.create_dict_iterator():
|
|
||||||
# print("example {}: {}".format(index, item))
|
|
||||||
print("example {}: input_ids: {}".format(index, item['input_ids']))
|
|
||||||
print("example {}: input_mask: {}".format(index, item['input_mask']))
|
|
||||||
print("example {}: segment_ids: {}".format(index, item['segment_ids']))
|
|
||||||
print("example {}: label_ids: {}".format(index, item['label_ids']))
|
|
||||||
index += 1
|
|
||||||
if index % 1000 == 0:
|
|
||||||
print("read rows: {}".format(index))
|
|
||||||
print("total rows: {}".format(index))
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
create_dataset('data/train.mindrecord')
|
|
||||||
create_dataset('data/dev.mindrecord')
@ -1 +0,0 @@
## output dir
@ -1,43 +0,0 @@
{
    "O": 0,
    "S_address": 1,
    "B_address": 2,
    "M_address": 3,
    "E_address": 4,
    "S_book": 5,
    "B_book": 6,
    "M_book": 7,
    "E_book": 8,
    "S_company": 9,
    "B_company": 10,
    "M_company": 11,
    "E_company": 12,
    "S_game": 13,
    "B_game": 14,
    "M_game": 15,
    "E_game": 16,
    "S_government": 17,
    "B_government": 18,
    "M_government": 19,
    "E_government": 20,
    "S_movie": 21,
    "B_movie": 22,
    "M_movie": 23,
    "E_movie": 24,
    "S_name": 25,
    "B_name": 26,
    "M_name": 27,
    "E_name": 28,
    "S_organization": 29,
    "B_organization": 30,
    "M_organization": 31,
    "E_organization": 32,
    "S_position": 33,
    "B_position": 34,
    "M_position": 35,
    "E_position": 36,
    "S_scene": 37,
    "B_scene": 38,
    "M_scene": 39,
    "E_scene": 40
}
@ -1,20 +0,0 @@
#!/bin/bash
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

# Remove any MindRecord files from a previous run (-f avoids errors if none exist).
rm -f data/train.mindrecord*
rm -f data/dev.mindrecord*

python data_processor_seq.py
@ -1,17 +0,0 @@
#!/bin/bash
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

python create_dataset.py
@ -0,0 +1 @@
## NLP dataset to MindRecord
@ -1,107 +0,0 @@
# Guideline to Convert zhwiki Training Data to MindRecord for BERT Pre-Training

<!-- TOC -->

- [What does the example do](#what-does-the-example-do)
- [Run simple test](#run-simple-test)
- [How to use the example to process zhwiki](#how-to-use-the-example-to-process-zhwiki)
    - [Download zhwiki training data](#download-zhwiki-training-data)
    - [Extract the zhwiki](#extract-the-zhwiki)
    - [Generate MindRecord](#generate-mindrecord)
    - [Create MindDataset By MindRecord](#create-minddataset-by-mindrecord)

<!-- /TOC -->

## What does the example do

This example converts the [zhwiki](https://dumps.wikimedia.org/zhwiki) training data into MindRecord files, which are then used to pre-train the BERT network.

1. run.sh: the entry script for generating MindRecord files.
    - create_pretraining_data.py: the script from [google-research/bert](https://github.com/google-research/bert); only the part that generated TFRecord files was changed to generate MindRecord instead (see the schema sketch after this list).
    - tokenization.py: the script from [google-research/bert](https://github.com/google-research/bert).
    - vocab.txt: the file from [huawei-noah/Pretrained-Language-Model](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-TensorFlow/nezha).
    - sample_text.txt: the file from [google-research/bert](https://github.com/google-research/bert).
2. run_read.sh: the entry script for creating a MindDataset from the MindRecord files.
    - create_dataset.py: uses MindDataset to read the MindRecord files and build the dataset.
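
For orientation, create_pretraining_data.py writes one row per pre-training instance through `mindspore.mindrecord.FileWriter`. The sketch below shows the shape of that step and is an assumed simplification, not the real script: the seven field names match the run_read.sh output later in this guide, and the lengths match the run.sh flags (--max_seq_length=128, --max_predictions_per_seq=20, --partition_number=4), while the exact types and shapes are assumptions.

```python
# Assumed sketch of the pre-training record writing step.
import numpy as np
from mindspore.mindrecord import FileWriter

# run.sh uses --partition_number=4, so four shards: zhwiki.mindrecord0..3.
writer = FileWriter(file_name="zhwiki.mindrecord", shard_num=4)
schema = {
    "input_ids": {"type": "int64", "shape": [-1]},
    "input_mask": {"type": "int64", "shape": [-1]},
    "segment_ids": {"type": "int64", "shape": [-1]},
    "masked_lm_positions": {"type": "int64", "shape": [-1]},
    "masked_lm_ids": {"type": "int64", "shape": [-1]},
    "masked_lm_weights": {"type": "float32", "shape": [-1]},
    "next_sentence_labels": {"type": "int64", "shape": [-1]},
}
writer.add_schema(schema, "zhwiki pretraining schema")

# A dummy row with the right dtypes; real rows come from BERT's masking logic.
row = {
    "input_ids": np.zeros(128, dtype=np.int64),
    "input_mask": np.zeros(128, dtype=np.int64),
    "segment_ids": np.zeros(128, dtype=np.int64),
    "masked_lm_positions": np.zeros(20, dtype=np.int64),
    "masked_lm_ids": np.zeros(20, dtype=np.int64),
    "masked_lm_weights": np.zeros(20, dtype=np.float32),
    "next_sentence_labels": np.array([0], dtype=np.int64),
}
writer.write_raw_data([row])
writer.commit()
```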

## Run simple test

Follow these steps:

```bash
bash run.sh       # generate zhwiki.mindrecord* from sample_text.txt
bash run_read.sh  # use MindDataset to read zhwiki.mindrecord*
```

## How to use the example to process zhwiki

Download the zhwiki data, extract it, convert it to MindRecord, and then use MindDataset to read the MindRecord files back.

### Download zhwiki training data

> [zhwiki dataset download address](https://dumps.wikimedia.org/zhwiki) **-> 20200401 -> zhwiki-20200401-pages-articles-multistream.xml.bz2**

### Extract the zhwiki

1. Download the [wikiextractor](https://github.com/attardi/wikiextractor) script.

2. Extract the zhwiki.

```bash
python WikiExtractor.py -o {output_path}/extract {input_path}/zhwiki-20200401-pages-articles-multistream.xml.bz2
```

3. The extracted output looks like this:

```
$ ls {output_path}/extract
AA AB AC AD AE AF AG AH AI AJ AK AL AM AN
```

### Generate MindRecord

1. Modify the parameters in run.sh: --input_file, --output_file, --partition_number.

```
--input_file: the input raw text file (or a comma-separated list of files).
--output_file: the output MindRecord file.
--partition_number: the number of partitions to split the MindRecord output into.
```

2. Run the run.sh script.

```bash
bash run.sh
```

> Caution: this process is slow, so please be patient. Running it on a server is recommended.

3. The output looks like this:
```
...
[INFO] ME(23485,python):2020-04-28-17:16:40.670.744 [mindspore/ccsrc/mindrecord/io/shard_writer.cc:667] WriteRawData] Write 1 records successfully.
[INFO] ME(23485,python):2020-04-28-17:16:40.671.227 [mindspore/ccsrc/mindrecord/io/shard_writer.cc:667] WriteRawData] Write 1 records successfully.
[INFO] ME(23485,python):2020-04-28-17:16:40.671.660 [mindspore/ccsrc/mindrecord/io/shard_writer.cc:667] WriteRawData] Write 1 records successfully.
[INFO] ME(23485,python):2020-04-28-17:16:40.672.037 [mindspore/ccsrc/mindrecord/io/shard_writer.cc:667] WriteRawData] Write 1 records successfully.
[INFO] ME(23485,python):2020-04-28-17:16:40.672.453 [mindspore/ccsrc/mindrecord/io/shard_writer.cc:667] WriteRawData] Write 1 records successfully.
[INFO] ME(23485,python):2020-04-28-17:16:40.672.833 [mindspore/ccsrc/mindrecord/io/shard_writer.cc:667] WriteRawData] Write 1 records successfully.
...
[INFO] ME(23485:140354285963072,MainProcess):2020-04-28-17:16:40.718.039 [mindspore/mindrecord/filewriter.py:313] The list of mindrecord files created are: ['zhwiki.mindrecord0', 'zhwiki.mindrecord1', 'zhwiki.mindrecord2', 'zhwiki.mindrecord3'], and the list of index files are: ['zhwiki.mindrecord0.db', 'zhwiki.mindrecord1.db', 'zhwiki.mindrecord2.db', 'zhwiki.mindrecord3.db']
...
```

### Create MindDataset By MindRecord

1. Run the run_read.sh script.

```bash
bash run_read.sh
```

2. The output looks like this:
```
...
example 74: input_ids: [ 101 8168 118 12847 8783 9977 15908 117 8256 9245 11643 8168 8847 8588 11575 8154 8228 143 8384 8376 9197 10241 103 10564 11421 8199 12268 112 161 8228 11541 9586 8436 8174 8363 9864 9702 103 103 119 103 9947 10564 103 8436 8806 11479 103 8912 119 103 103 103 12209 8303 103 8757 8824 117 8256 103 8619 8168 11541 102 11684 8196 103 8228 8847 11523 117 9059 9064 12410 8358 8181 10764 117 11167 11706 9920 148 8332 11390 8936 8205 10951 11997 103 8154 117 103 8670 10467 112 161 10951 13139 12413 117 10288 143 10425 8205 152 10795 8472 8196 103 161 12126 9172 13129 12106 8217 8174 12244 8205 143 103 8461 8277 10628 160 8221 119 102]
example 74: input_mask: [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]
example 74: segment_ids: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]
example 74: masked_lm_positions: [ 6 22 37 38 40 43 47 50 51 52 55 60 67 76 89 92 98 109 120 0]
example 74: masked_lm_ids: [ 8118 8165 8329 8890 8554 8458 119 8850 8565 10392 8174 11467 10291 8181 8549 12718 13139 112 158 0]
example 74: masked_lm_weights: [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0.]
example 74: next_sentence_labels: [0]
...
```
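
A note on how the masked-LM fields above fit together: masked_lm_positions indexes into input_ids, masked_lm_ids holds the original token ids at those positions, and masked_lm_weights marks real entries (1.0) versus padding (0.0). A minimal sketch of restoring the unmasked token ids for one example:

```python
# Restore the original token ids at the masked positions of one example.
import numpy as np

def unmask(input_ids, masked_lm_positions, masked_lm_ids, masked_lm_weights):
    """Return a copy of input_ids with each masked position restored."""
    restored = np.array(input_ids).copy()
    for pos, tok, weight in zip(masked_lm_positions, masked_lm_ids, masked_lm_weights):
        if weight > 0:  # weight 0.0 marks a padding slot
            restored[pos] = tok
    return restored
```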
@ -1,43 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""create MindDataset by MindRecord"""
|
|
||||||
import argparse
|
|
||||||
import mindspore.dataset as ds
|
|
||||||
|
|
||||||
def create_dataset(data_file):
|
|
||||||
"""create MindDataset"""
|
|
||||||
num_readers = 4
|
|
||||||
data_set = ds.MindDataset(dataset_file=data_file, num_parallel_workers=num_readers, shuffle=True)
|
|
||||||
index = 0
|
|
||||||
for item in data_set.create_dict_iterator():
|
|
||||||
# print("example {}: {}".format(index, item))
|
|
||||||
print("example {}: input_ids: {}".format(index, item['input_ids']))
|
|
||||||
print("example {}: input_mask: {}".format(index, item['input_mask']))
|
|
||||||
print("example {}: segment_ids: {}".format(index, item['segment_ids']))
|
|
||||||
print("example {}: masked_lm_positions: {}".format(index, item['masked_lm_positions']))
|
|
||||||
print("example {}: masked_lm_ids: {}".format(index, item['masked_lm_ids']))
|
|
||||||
print("example {}: masked_lm_weights: {}".format(index, item['masked_lm_weights']))
|
|
||||||
print("example {}: next_sentence_labels: {}".format(index, item['next_sentence_labels']))
|
|
||||||
index += 1
|
|
||||||
if index % 1000 == 0:
|
|
||||||
print("read rows: {}".format(index))
|
|
||||||
print("total rows: {}".format(index))
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
parser = argparse.ArgumentParser()
|
|
||||||
parser.add_argument("--input_file", type=str, required=True, help='Input mindreord file')
|
|
||||||
args = parser.parse_args()
|
|
||||||
|
|
||||||
create_dataset(args.input_file)
|
|
@ -1,29 +0,0 @@
#!/bin/bash
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

# Remove any MindRecord shards from a previous run (-f avoids errors if none exist).
rm -f zhwiki.mindrecord*

python create_pretraining_data.py \
    --input_file=./sample_text.txt \
    --output_file=zhwiki.mindrecord \
    --partition_number=4 \
    --vocab_file=./vocab.txt \
    --do_lower_case=True \
    --max_seq_length=128 \
    --max_predictions_per_seq=20 \
    --masked_lm_prob=0.15 \
    --random_seed=12345 \
    --dupe_factor=5
@ -1,17 +0,0 @@
#!/bin/bash
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

python create_dataset.py --input_file=zhwiki.mindrecord0
@ -1,33 +0,0 @@
This text is included to make sure Unicode is handled properly: 力加勝北区ᴵᴺᵀᵃছজটডণত
Text should be one-sentence-per-line, with empty lines between documents.
This sample text is public domain and was randomly selected from Project Guttenberg.

The rain had only ceased with the gray streaks of morning at Blazing Star, and the settlement awoke to a moral sense of cleanliness, and the finding of forgotten knives, tin cups, and smaller camp utensils, where the heavy showers had washed away the debris and dust heaps before the cabin doors.
Indeed, it was recorded in Blazing Star that a fortunate early riser had once picked up on the highway a solid chunk of gold quartz which the rain had freed from its incumbering soil, and washed into immediate and glittering popularity.
Possibly this may have been the reason why early risers in that locality, during the rainy season, adopted a thoughtful habit of body, and seldom lifted their eyes to the rifted or india-ink washed skies above them.
"Cass" Beard had risen early that morning, but not with a view to discovery.
A leak in his cabin roof,--quite consistent with his careless, improvident habits,--had roused him at 4 A. M., with a flooded "bunk" and wet blankets.
The chips from his wood pile refused to kindle a fire to dry his bed-clothes, and he had recourse to a more provident neighbor's to supply the deficiency.
This was nearly opposite.
Mr. Cassius crossed the highway, and stopped suddenly.
Something glittered in the nearest red pool before him.
Gold, surely!
But, wonderful to relate, not an irregular, shapeless fragment of crude ore, fresh from Nature's crucible, but a bit of jeweler's handicraft in the form of a plain gold ring.
Looking at it more attentively, he saw that it bore the inscription, "May to Cass."
Like most of his fellow gold-seekers, Cass was superstitious.

The fountain of classic wisdom, Hypatia herself.
As the ancient sage--the name is unimportant to a monk--pumped water nightly that he might study by day, so I, the guardian of cloaks and parasols, at the sacred doors of her lecture-room, imbibe celestial knowledge.
From my youth I felt in me a soul above the matter-entangled herd.
She revealed to me the glorious fact, that I am a spark of Divinity itself.
A fallen star, I am, sir!' continued he, pensively, stroking his lean stomach--'a fallen star!--fallen, if the dignity of philosophy will allow of the simile, among the hogs of the lower world--indeed, even into the hog-bucket itself. Well, after all, I will show you the way to the Archbishop's.
There is a philosophic pleasure in opening one's treasures to the modest young.
Perhaps you will assist me by carrying this basket of fruit?' And the little man jumped up, put his basket on Philammon's head, and trotted off up a neighbouring street.
Philammon followed, half contemptuous, half wondering at what this philosophy might be, which could feed the self-conceit of anything so abject as his ragged little apish guide;
but the novel roar and whirl of the street, the perpetual stream of busy faces, the line of curricles, palanquins, laden asses, camels, elephants, which met and passed him, and squeezed him up steps and into doorways, as they threaded their way through the great Moon-gate into the ample street beyond, drove everything from his mind but wondering curiosity, and a vague, helpless dread of that great living wilderness, more terrible than any dead wilderness of sand which he had left behind.
Already he longed for the repose, the silence of the Laura--for faces which knew him and smiled upon him; but it was too late to turn back now.
His guide held on for more than a mile up the great main street, crossed in the centre of the city, at right angles, by one equally magnificent, at each end of which, miles away, appeared, dim and distant over the heads of the living stream of passengers, the yellow sand-hills of the desert;
while at the end of the vista in front of them gleamed the blue harbour, through a network of countless masts.
At last they reached the quay at the opposite end of the street;
and there burst on Philammon's astonished eyes a vast semicircle of blue sea, ringed with palaces and towers.
He stopped involuntarily; and his little guide stopped also, and looked askance at the young monk, to watch the effect which that grand panorama should produce on him.