!9709 fix README errors.

From: @linqingke
Reviewed-by: @oacjiewen,@wuxuejian
Signed-off-by: @wuxuejian
pull/9709/MERGE
Committed by mindspore-ci-bot via Gitee
commit 654ed6f1d2

@ -111,7 +111,8 @@ export PYTHONPATH=${dirname_path}:$PYTHONPATH
export RANK_TABLE_FILE=$rank_table
export RANK_SIZE=8
cpus=`cat /proc/cpuinfo | grep "processor" | wc -l`
task_set_core=`expr $cpus \/ $RANK_SIZE` # for taskset, task_set_core=total cpu number/RANK_SIZE
echo 'start training'
for((i=0;i<=$RANK_SIZE-1;i++));
do
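  # each rank i is typically pinned to its own range of task_set_core CPU cores
  # with `taskset` before launching training (the loop body is elided in this hunk)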

@ -4,7 +4,7 @@
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
- [Script and Sample Code](#script-and-sample-code)
- [Training Process](#training-process)
@ -16,11 +16,11 @@
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [Inference Performance](#inference-performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# FasterRcnn Description
Before FasterRcnn, object detection networks relied on region proposal algorithms to hypothesize object locations, as in SPPnet and Fast R-CNN. Advances reduced the running time of these detection networks, but also exposed region proposal computation as a bottleneck.
FasterRcnn showed that the convolutional feature maps used by region-based detectors (such as Fast R-CNN) can also be used to generate region proposals. On top of these convolutional features, a Region Proposal Network (RPN) is constructed by adding a few additional convolutional layers that share the convolutional features of the entire image with the detection network, making region proposals nearly cost-free. The RPN outputs both region bounds and an objectness score for each location. It is therefore a fully convolutional network (FCN) that can be trained end to end to generate high-quality region proposals, which are then fed into Fast R-CNN for detection.
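To make the per-location proposal idea concrete, below is a minimal NumPy sketch of the anchor grid an RPN scores at every feature-map location. It is illustrative only, not this repository's implementation; the function name and default scales/ratios are assumptions.

```python
import numpy as np

def anchor_grid(feat_h, feat_w, stride=16, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Return [feat_h * feat_w * 9, 4] anchors in (x1, y1, x2, y2) image coordinates."""
    base = []
    for s in scales:
        for r in ratios:
            h, w = s * np.sqrt(r), s / np.sqrt(r)  # area s*s, aspect ratio h/w = r
            base.append([-w / 2, -h / 2, w / 2, h / 2])
    base = np.array(base)  # [9, 4] anchors centered at the origin
    xs, ys = np.meshgrid(np.arange(feat_w) * stride, np.arange(feat_h) * stride)
    shifts = np.stack([xs, ys, xs, ys], axis=-1).reshape(-1, 1, 4)  # one shift per cell
    return (shifts + base).reshape(-1, 4)

anchors = anchor_grid(38, 50)  # e.g. a 600x800 input at stride 16
print(anchors.shape)           # (17100, 4): the RPN predicts an objectness score
                               # and a box refinement for each of these anchors
```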
@ -35,14 +35,14 @@ FasterRcnn is a two-stage target detection network,This network uses a region pr
Note that you can run the scripts with the dataset mentioned in the original paper or a dataset widely used in the relevant domain/network architecture. The following sections introduce how to run the scripts using the dataset below.
Dataset used: [COCO2017](<https://cocodataset.org/>)
- Dataset size: 19G
    - Train: 18G, 118,000 images
    - Val: 1G, 5,000 images
    - Annotations: 241M; instances, captions, person_keypoints, etc.
- Data format: image and json files
    - Note: Data will be processed in dataset.py
# Environment Requirements
@ -55,17 +55,17 @@ Dataset used: [COCO2017](<https://cocodataset.org/>)
1. If the COCO dataset is used, **set the dataset to coco when running the script.**
Install Cython and pycocotools; you can also install mmcv to process data.
```shell
pip install Cython
pip install pycocotools
pip install mmcv==0.2.14
```
Then change COCO_ROOT and any other settings you need in `config.py`. The directory structure is as follows:
```path
.
└─cocodataset
├─annotations
@ -73,27 +73,27 @@ Dataset used: [COCO2017](<https://cocodataset.org/>)
└─instance_val2017.json
├─val2017
└─train2017
```
2. If your own dataset is used, **set the dataset to other when running the script.**
Organize the dataset information into a TXT file, with each row as follows:
```log
train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2
```
Each row is an image annotation separated by spaces. The first column is the relative path of the image; the rest are boxes and class information in the format [xmin,ymin,xmax,ymax,class]. Images are read from the path obtained by joining `IMAGE_DIR` (the dataset directory) with the relative path in `ANNO_PATH` (the TXT file path); `IMAGE_DIR` and `ANNO_PATH` are set in `config.py`.
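For illustration, here is a small Python sketch of parsing one such row; `parse_annotation_line` is a hypothetical helper, not a function from this repository.

```python
import os

def parse_annotation_line(line, image_dir):
    """Split '<relative/path> xmin,ymin,xmax,ymax,class ...' into a path and box list."""
    fields = line.strip().split(" ")
    image_path = os.path.join(image_dir, fields[0])  # IMAGE_DIR joined with the relative path
    boxes = [tuple(int(v) for v in box.split(",")) for box in fields[1:]]
    return image_path, boxes

path, boxes = parse_annotation_line(
    "train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2",
    "/data/mydataset")  # placeholder for IMAGE_DIR
print(path)   # /data/mydataset/train2017/0000001.jpg
print(boxes)  # [(0, 259, 401, 459, 7), (35, 28, 324, 201, 2), (0, 30, 59, 80, 2)]
```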
# Quick Start
After installing MindSpore via the official website, you can start training and evaluation as follows:
Note:
1. The first run generates MindRecord files, which takes a long time.
2. The pretrained model is a ResNet-50 checkpoint trained on ImageNet 2012.
3. VALIDATION_JSON_FILE is the label file. CHECKPOINT_PATH is a checkpoint file produced by training.
```shell
# standalone training
sh run_standalone_train_ascend.sh [PRETRAINED_MODEL]
@ -110,7 +110,7 @@ sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
```shell
.
└─faster_rcnn
├─README.md // description of FasterRcnn
├─scripts
├─run_standalone_train_ascend.sh // shell script for standalone training on Ascend
@ -139,27 +139,26 @@ sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
```
## Training Process
### Usage
```shell
# standalone training on Ascend
sh run_standalone_train_ascend.sh [PRETRAINED_MODEL]
# distributed training on Ascend
sh run_distribute_train_ascend.sh [RANK_TABLE_FILE] [PRETRAINED_MODEL]
```
> The rank_table.json file specified by RANK_TABLE_FILE is needed when you run a distributed task. You can generate it with [hccl_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).
> PRETRAINED_MODEL should be a ResNet-50 checkpoint trained on ImageNet 2012. Ready-made pretrained models are not available yet. Stay tuned.
> The original dataset path needs to be set in config.py; you can select "coco_root" or "image_dir".
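As a rough sketch only, the settings referenced above might look like the following in `config.py`; the field names below are assumptions based on this README, not the actual file contents.

```python
# Hypothetical excerpt of config.py -- verify against the real file.
config = dict(
    coco_root="/data/cocodataset",         # COCO_ROOT: root of the COCO directory tree
    image_dir="/data/mydataset/images",    # IMAGE_DIR: image root when dataset is "other"
    anno_path="/data/mydataset/anno.txt",  # ANNO_PATH: TXT annotation file for "other"
)
```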
### Result
Training results are stored in the example path, in folders whose names begin with "train" or "train_parallel". You can find checkpoint files together with results like the following in loss_rankid.log.
```log
# distributed training result (8p)
epoch: 1 step: 7393, rpn_loss: 0.12054, rcnn_loss: 0.40601, rpn_cls_loss: 0.04025, rpn_reg_loss: 0.08032, rcnn_cls_loss: 0.25854, rcnn_reg_loss: 0.14746, total_loss: 0.52655
epoch: 2 step: 7393, rpn_loss: 0.06561, rcnn_loss: 0.50293, rpn_cls_loss: 0.02587, rpn_reg_loss: 0.03967, rcnn_cls_loss: 0.35669, rcnn_reg_loss: 0.14624, total_loss: 0.56854
@ -173,19 +172,19 @@ epoch: 12 step: 7393, rpn_loss: 0.00691, rcnn_loss: 0.10168, rpn_cls_loss: 0.005
## Evaluation Process
### Usage
```shell
# eval on Ascend
sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
```
> The checkpoint can be produced during training.
### Result
Evaluation results are stored in the example path, in a folder named "eval". There you can find results like the following in the log.
```log
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.360
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.586
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.385
@ -200,13 +199,13 @@ Eval result will be stored in the example path, whose folder name is "eval". Und
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.631
```
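These numbers follow the standard COCO evaluation protocol. A minimal pycocotools sketch that produces such a summary is shown below; the two file paths are placeholders.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val2017.json")       # ground truth (VALIDATION_JSON_FILE)
coco_dt = coco_gt.loadRes("predictions.json")  # detections exported by the trained model
evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints the AP/AR table in the format shown above
```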
# Model Description
## Performance
### Evaluation Performance
| Parameters | Ascend |
| -------------------------- | ----------------------------------------------------------- |
| Model Version | V1 |
| Model Version | V1 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; memory 755G |
@ -219,12 +218,11 @@ Eval result will be stored in the example path, whose folder name is "eval". Und
| Speed | 1pc: 190 ms/step; 8pcs: 200 ms/step |
| Total time | 1pc: 37.17 hours; 8pcs: 4.89 hours |
| Parameters (M) | 250 |
| Scripts | [fasterrcnn script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/faster_rcnn) |
### Inference Performance
| Parameters | Ascend |
| ------------------- | --------------------------- |
| Model Version | V1 |
| Resource | Ascend 910 |
@ -237,5 +235,5 @@ Eval result will be stored in the example path, whose folder name is "eval". Und
| Model for inference | 250M (.ckpt file) |
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).

@ -1,5 +1,4 @@
# Contents
<!-- TOC -->
- [Contents](#目录)
- [Faster R-CNN Description](#faster-r-cnn描述)
@ -40,11 +39,11 @@ Faster R-CNN is a two-stage object detection network that uses an RPN and can
Dataset used: [COCO 2017](<https://cocodataset.org/>)
- Dataset size: 19G
    - Train: 18G, 118,000 images
    - Val: 1G, 5,000 images
    - Annotations: 241M; instances, captions, person_keypoints, etc.
- Data format: images and json files
    - Note: Data is processed in dataset.py.
# Environment Requirements
@ -57,17 +56,17 @@ Faster R-CNN is a two-stage object detection network that uses an RPN and can
1. If using the COCO dataset, **set the dataset to coco when running the script.**
Install Cython and pycocotools; you can also install mmcv to process data.
```shell
pip install Cython
pip install pycocotools
pip install mmcv==0.2.14
```
Change COCO_ROOT and any other settings you need in `config.py`. The directory structure is as follows:
```path
.
└─cocodataset
├─annotations
@ -75,13 +74,13 @@ Faster R-CNN is a two-stage object detection network that uses an RPN and can
└─instance_val2017.json
├─val2017
└─train2017
```
2. If using your own dataset, **set the dataset to other when running the script.**
Organize the dataset information into a TXT file, with each row as follows:
```txt
train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2
```
@ -89,13 +88,15 @@ Faster R-CNN is a two-stage object detection network that uses an RPN and can
# Quick Start
After installing MindSpore from the official website, you can start training and evaluation as follows:
Note:
1. The first run generates MindRecord files, which takes a long time.
2. The pretrained model is a ResNet-50 checkpoint trained on ImageNet 2012.
3. VALIDATION_JSON_FILE is the label file. CHECKPOINT_PATH is the checkpoint file produced by training.
```shell
# standalone training
sh run_standalone_train_ascend.sh [PRETRAINED_MODEL]
@ -112,7 +113,7 @@ sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
```shell
.
└─faster_rcnn
├─README.md // description of Faster R-CNN
├─scripts
├─run_standalone_train_ascend.sh // shell script for standalone training on Ascend
@ -144,14 +145,14 @@ sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
### Usage
```shell
# standalone training on Ascend
sh run_standalone_train_ascend.sh [PRETRAINED_MODEL]
# distributed training on Ascend
sh run_distribute_train_ascend.sh [RANK_TABLE_FILE] [PRETRAINED_MODEL]
```
> The rank_table.json file specified by RANK_TABLE_FILE is required when you run a distributed task. You can generate it with [hccl_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).
> PRETRAINED_MODEL should be a ResNet-50 checkpoint trained on ImageNet 2012. Ready-made pretrained models are not available yet. Stay tuned.
> config.py contains the original dataset path; you can select "coco_root" or "image_dir".
@ -160,8 +161,7 @@ sh run_distribute_train_ascend.sh [RANK_TABLE_FILE] [PRETRAINED_MODEL]
Training results are saved in the example path, in folders whose names start with "train" or "train_parallel". You can find the checkpoint files and results like the following in loss_rankid.log.
```log
# distributed training result (8P)
epoch: 1 step: 7393, rpn_loss: 0.12054, rcnn_loss: 0.40601, rpn_cls_loss: 0.04025, rpn_reg_loss: 0.08032, rcnn_cls_loss: 0.25854, rcnn_reg_loss: 0.14746, total_loss: 0.52655
epoch: 2 step: 7393, rpn_loss: 0.06561, rcnn_loss: 0.50293, rpn_cls_loss: 0.02587, rpn_reg_loss: 0.03967, rcnn_cls_loss: 0.35669, rcnn_reg_loss: 0.14624, total_loss: 0.56854
@ -176,7 +176,7 @@ epoch: 12 step: 7393, rpn_loss: 0.00691, rcnn_loss: 0.10168, rpn_cls_loss: 0.005
### Usage
```shell
# evaluation on Ascend
sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
```
@ -187,7 +187,7 @@ sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
Evaluation results are saved in the example path, in a folder named "eval". In this folder, you can find results like the following in the log.
```log
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.360
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.586
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.385
@ -208,7 +208,7 @@ sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
### Training Performance
| Parameters | Ascend |
| -------------------------- | ----------------------------------------------------------- |
| Model Version | V1 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; memory 755G |
@ -221,11 +221,11 @@ sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
| Speed | 1pc: 190 ms/step; 8pcs: 200 ms/step |
| Total time | 1pc: 37.17 hours; 8pcs: 4.89 hours |
| Parameters (M) | 250 |
| Scripts | [Faster R-CNN script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/faster_rcnn) |
### Evaluation Performance
| Parameters | Ascend |
| ------------------- | --------------------------- |
| Model Version | V1 |
| Resource | Ascend 910 |
@ -238,4 +238,5 @@ sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
| Model for inference | 250M (.ckpt file) |
# ModelZoo Homepage
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).

@ -52,14 +52,14 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
└─validation_preprocess # evaluate dataset
```
## Features
### Mixed Precision (Ascend)
The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both single-precision and half-precision data formats while maintaining the network accuracy achieved with single-precision training. Mixed precision training accelerates computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the MindSpore backend automatically handles it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and searching for `reduce precision`.
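As a sketch of how this is typically enabled in MindSpore via the `amp_level` argument of `Model` (the network, optimizer, and dataset below are placeholders, not this repository's code):

```python
from mindspore import Model, nn

# net and train_dataset are placeholders assumed to be defined elsewhere
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)

# amp_level="O2" casts the network to FP16 while keeping BatchNorm (and the loss) in FP32
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2", metrics={"acc"})
model.train(90, train_dataset)
```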
## Environment Requirements
- Hardware (Ascend)
- Prepare hardware environment with Ascend. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
@ -69,9 +69,9 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
## Script description
### Script and sample code
```python
├── MobileNetV1
@ -153,7 +153,7 @@ Inference result will be stored in the example path, you can find result like th
result: {'top_5_accuracy': 0.9010016025641026, 'top_1_accuracy': 0.7128004807692307} ckpt=./train_parallel0/ckpt_0/mobilenetv1-90_1251.ckpt
```
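For reference, top-1 and top-5 accuracy are computed as below; this is a generic NumPy sketch, not the script's actual metric code.

```python
import numpy as np

def top_k_accuracy(logits, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    topk = np.argsort(logits, axis=1)[:, -k:]  # indices of the k largest logits per row
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

logits = np.array([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]])
labels = np.array([2, 0])
print(top_k_accuracy(logits, labels, 1))  # 0.5: only the second sample is a top-1 hit
print(top_k_accuracy(logits, labels, 2))  # 1.0: both labels fall within the top 2
```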
## Model description
### [Performance](#contents)

@ -148,7 +148,6 @@ sh scripts/run_eval_for_gpu.sh 0 /dataset/val ./checkpoint/nasnet-a-mobile-rank0
### Result
Evaluation results are saved in the script path. In the log under that path, you can find the following result:
acc=73.5% (TOP1)
# Model Description

@ -1,5 +1,4 @@
# Contents
<!-- TOC -->
- [Contents](#目录)
- [PSENet Overview](#psenet概述)
@ -9,56 +8,60 @@
- [Environment Requirements](#环境要求)
- [Quick Start](#快速入门)
- [Script Description](#脚本说明)
    - [Script and Sample Code](#脚本和样例代码)
    - [Script Parameters](#脚本参数)
    - [Training Process](#训练过程)
        - [Distributed Training](#分布式训练)
    - [Evaluation Process](#评估过程)
        - [Running the Test Code](#运行测试代码)
        - [ICDAR2015 Evaluation Script](#icdar2015评估脚本)
            - [Usage](#用法)
            - [Result](#结果)
- [Model Description](#模型描述)
    - [Performance](#性能)
        - [Evaluation Performance](#评估性能)
        - [Inference Performance](#推理性能)
    - [Usage](#使用方法)
        - [Inference](#推理)
<!-- /TOC -->
# PSENet Overview
With the development of convolutional neural networks, scene text detection has advanced rapidly, but two major problems hinder its application. First, most existing algorithms require quadrilateral bounding boxes to precisely locate arbitrarily shaped text. Second, two adjacent text instances may be merged by a false detection. Traditionally, semantic segmentation can solve the first problem but not the second. PSENet can accurately detect text instances of arbitrary shape while solving both problems. Specifically, PSENet generates different expansion kernels for each text instance and gradually expands the minimal kernel into a text instance with its complete shape. Because the geometric margins between minimal kernels are large, PSENet can effectively separate adjacent text instances and more easily detect arbitrarily shaped text. Its effectiveness has been verified by extensive experiments on CTW1500, Total-Text, ICDAR 2015, and ICDAR 2017 MLT.
[Paper](https://openaccess.thecvf.com/content_CVPR_2019/html/Wang_Shape_Robust_Text_Detection_With_Progressive_Scale_Expansion_Network_CVPR_2019_paper.html): Wenhai Wang, Enze Xie, Xiang Li, Wenbo Hou, Tong Lu, Gang Yu, Shuai Shao; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9336-9345
# PSENet Example
## Overview
The Progressive Scale Expansion Network (PSENet) is a text detector that can detect arbitrarily shaped text in natural scenes.
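The following is a toy NumPy sketch of the progressive scale expansion step described above, growing labels outward from the smallest kernel with breadth-first search. It is illustrative only; the repository implements this step in C++ under `src/ETSNET/pse/`.

```python
from collections import deque

import numpy as np

def progressive_scale_expansion(kernels):
    """Toy PSE: kernels[0] is the smallest kernel mask, kernels[-1] the largest (binary HxW arrays)."""
    h, w = kernels[0].shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 1
    # 1. Label connected components on the smallest kernel.
    for y in range(h):
        for x in range(w):
            if kernels[0][y, x] and labels[y, x] == 0:
                q = deque([(y, x)])
                labels[y, x] = next_label
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and kernels[0][ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                next_label += 1
    # 2. Grow each label through the successively larger kernels; a pixel claimed by
    #    one text instance is never overwritten, which keeps adjacent instances apart.
    for kernel in kernels[1:]:
        q = deque(zip(*np.nonzero(labels)))
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if 0 <= ny < h and 0 <= nx < w and kernel[ny, nx] and labels[ny, nx] == 0:
                    labels[ny, nx] = labels[cy, cx]
                    q.append((ny, nx))
    return labels
```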
# Dataset
Dataset used: [ICDAR2015](https://rrc.cvc.uab.es/?ch=4&com=tasks#TextLocalization)
Training set: 1,000 images containing about 4,500 readable words.
Test set: about 2,000 readable words.
# Environment Requirements
- Hardware (Ascend)
    - Prepare the hardware environment with Ascend processors. To apply for trial access to Ascend processors, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Framework
    - [MindSpore](https://www.mindspore.cn/install)
- For more information, see the following resources:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
- Install MindSpore
- Install [pybind11](https://github.com/pybind/pybind11)
- Install [OpenCV 3.4](https://docs.opencv.org/3.4.9/d7/d9f/tutorial_linux_install.html)
# Quick Start
After installing MindSpore from the official website, you can start training and evaluation as follows:
```shell
# distributed training example
sh scripts/run_distribute_train.sh pretrained_model.ckpt
@ -86,32 +89,33 @@ sh scripts/run_eval_ascend.sh
## Script Description
## Script and Sample Code
```path
└── PSENet
  ├── README.md                       // PSENet description
  ├── scripts
    ├── run_distribute_train.sh       // shell script for distributed training
    └── run_eval_ascend.sh            // shell script for evaluation
  ├── src
    ├── __init__.py
    ├── generate_hccn_file.py         // creates the rank.json file
    ├── ETSNET
      ├── __init__.py
      ├── base.py                     // convolution and BN operators
      ├── dice_loss.py                // computes the PSENet loss
      ├── etsnet.py                   // PSENet subnet
      ├── fpn.py                      // PSENet subnet
      ├── resnet50.py                 // PSENet subnet
      ├── pse                         // PSENet subnet
├── __init__.py
├── adaptor.cpp
├── adaptor.h
├── Makefile
  ├── config.py                       // parameter configuration
  ├── dataset.py                      // dataset creation
  ├── network_define.py               // PSENet architecture
  ├── test.py                         // test script
  ├── train.py                        // training script
```
@ -120,24 +124,24 @@ sh scripts/run_eval_ascend.sh
```python
The main parameters in train.py and config.py are as follows:
--pre_trained: whether to train from scratch or from a pretrained model. Choices: True, False.
--device_id: device ID used to train or evaluate the dataset. Ignored when train.sh is used for distributed training.
--device_num: number of devices used for distributed training with train.sh.
```
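A hedged sketch of how such flags are commonly wired up with argparse; the actual train.py may differ.

```python
import argparse
import ast

parser = argparse.ArgumentParser(description="PSENet training")
parser.add_argument("--pre_trained", type=ast.literal_eval, default=False,
                    help="train from a pretrained model (True) or from scratch (False)")
parser.add_argument("--device_id", type=int, default=0,
                    help="device ID; ignored for distributed training via train.sh")
parser.add_argument("--device_num", type=int, default=1,
                    help="number of devices for distributed training via train.sh")
args = parser.parse_args()
```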
## Training Process
### Distributed Training
```shell
sh scripts/run_distribute_train.sh pretrained_model.ckpt
```
The above shell script runs distributed training in the background. You can view the results in the `device[X]/test_*.log` files.
The following loss values are reached:
```log
# grep "epoch" device_*/loss.log
device_0/log:epoch 1, step: 20, loss is 0.80383
device_0/log:epoch 2, step: 40, loss is 0.77951
@ -148,25 +152,32 @@ device_1/log:epoch 2, step: 40, loss is 0.76629
```
## Evaluation Process
### Running the Test Code
python test.py --ckpt=./device*/ckpt*/ETSNet-*.ckpt
### ICDAR2015 Evaluation Script
#### Usage
Step 1: Click [here](https://rrc.cvc.uab.es/?ch=4&com=tasks#TextLocalization) to download the evaluation method.
Step 2: Click "My Methods" and download the evaluation script.
Step 3: It is recommended to symlink the evaluation method root to $MINDSPORE/model_zoo/psenet/eval_ic15/. If your folder structure is different, you may need to change the corresponding paths in the evaluation script files.
```shell
sh ./script/run_eval_ascend.sh
```
#### Result
Calculated! {"precision": 0.8147966668299853, "recall": 0.8006740491092923, "hmean": 0.8076736279747451, "AP": 0}
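Here hmean is the harmonic mean (F1 score) of precision and recall, which the reported numbers confirm:

```python
p, r = 0.8147966668299853, 0.8006740491092923
print(2 * p * r / (p + r))  # ≈ 0.80767..., matching the reported hmean
```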
# Model Description
## Performance
### Evaluation Performance
| Parameters | PSENet |
| -------------------------- | ----------------------------------------------------------- |
@ -184,8 +195,7 @@ Calculated! {"precision": 0.8147966668299853, "recall": 0.8006740491092923, "hmean": 0.8076736279747451, "AP": 0}
| Total time | 1pc: 75.48 hours; 4pcs: 18.87 hours |
| Parameters (M) | 27.36 |
| Checkpoint for fine-tuning | 109.44M (.ckpt file) |
| Scripts | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/psenet> |
### Inference Performance
@ -205,11 +215,11 @@ Calculated! {"precision": 0.8147966668299853, "recall": 0.8006740491092923, "hmean": 0.8076736279747451, "AP": 0}
To run inference with the trained model on multiple hardware platforms such as GPU, Ascend 910, or Ascend 310, refer to [here](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html). A simple example follows:
```python
# Load an unseen dataset for inference
dataset = dataset.create_dataset(cfg.data_path, 1, False)
# Define the model
config.INFERENCE = False
net = ETSNet(config)
net = net.set_train()
@ -245,4 +255,3 @@ net.set_train(False)
acc = model.eval(dataset)
print("accuracy: ", acc)
```
