fix error links for master

pull/11368/head
lvmingfu 4 years ago
parent f679fcf075
commit bbdfb6ca79

File diff suppressed because it is too large

@@ -20,7 +20,7 @@
# [DeepText Description](#contents)
-DeepText is a convolutional neural network architecture for text detection in non-specific scenarios. The DeepText system is based on the elegant framwork of Faster R-CNN. This idea was proposed in the paper "DeepText: A new approach for text proposal generation and text detection in natural images.", published in 2017.
+DeepText is a convolutional neural network architecture for text detection in non-specific scenarios. The DeepText system is based on the elegant framework of Faster R-CNN. This idea was proposed in the paper "DeepText: A new approach for text proposal generation and text detection in natural images.", published in 2017.
[Paper](https://arxiv.org/pdf/1605.07314v1.pdf) Zhuoyao Zhong, Lianwen Jin, Shuangping Huang, South China University of Technology (SCUT), Published in ICASSP 2017.
@@ -74,7 +74,7 @@ Here we used 4 datasets for training, and 1 datasets for Evaluation.
├─anchor_genrator.py # anchor generator
├─bbox_assign_sample.py # proposal layer for stage 1
├─bbox_assign_sample_stage2.py # proposal layer for stage 2
-├─deeptext_vgg16.py # main network defination
+├─deeptext_vgg16.py # main network definition
├─proposal_generator.py # proposal generator
├─rcnn.py # rcnn
├─roi_align.py # roi_align cell wrapper
@@ -83,7 +83,7 @@ Here we used 4 datasets for training, and 1 datasets for Evaluation.
├─config.py # training configuration
├─dataset.py # data proprocessing
├─lr_schedule.py # learning rate scheduler
-├─network_define.py # network defination
+├─network_define.py # network definition
└─utils.py # some functions which is commonly used
├─eval.py # eval net
├─export.py # export checkpoint, surpport .onnx, .air, .mindir convert
@@ -187,7 +187,7 @@ class 1 precision is 88.01%, recall is 82.77%
| Loss Function | SoftmaxCrossEntropyWithLogits for classification, SmoothL2Loss for bbox regression|
| Loss | ~0.008 |
| Total time (8p) | 4h |
-| Scripts | [deeptext script](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/official/cv/deeptext) |
+| Scripts | [deeptext script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/deeptext) |
#### Inference Performance
@@ -219,4 +219,4 @@ We set seed to 1 in train.py.
# [ModelZoo Homepage](#contents)
-Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
+Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).

@@ -197,7 +197,7 @@ Calculated!{"precision": 0.814796668299853, "recall": 0.8006740491092923, "hmean
| Total time | 1pc: 75.48 h; 8pcs: 10.01 h |
| Parameters (M) | 27.36 |
| Checkpoint for Fine tuning | 109.44M (.ckpt file) |
-| Scripts | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/psenet> |
+| Scripts | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/psenet> |
### Inference Performance

@@ -195,7 +195,7 @@ Calculated!{"precision": 0.8147966668299853, "recall": 0.8006740491092923, "h
| Total time | 1pc: 75.48 h; 4pcs: 18.87 h |
| Parameters (M) | 27.36 |
| Checkpoint for Fine tuning | 109.44M (.ckpt file) |
-| Scripts | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/psenet> |
+| Scripts | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/psenet> |
### Inference Performance

@@ -159,7 +159,7 @@ In this example, the download.gradle File configuration auto download `deeplabv
Note: if the automatic download fails, please manually download the relevant library files and put them in the corresponding location.
-deeplabv3.ms [deeplabv3.ms]( https://download.mindspore.cn/model_zoo/official/lite/deeplabv3_openimage_lite/deeplabv3.ms)
+deeplabv3.ms [deeplabv3.ms](https://download.mindspore.cn/model_zoo/official/lite/deeplabv3_lite/deeplabv3.ms)
### Compiling On-Device Inference Code
@@ -208,7 +208,7 @@ The inference code process is as follows. For details about the complete code, s
model.freeBuffer();
return;
}
-// Note: when use model.freeBuffer(), the model can not be complile graph again.
+// Note: when use model.freeBuffer(), the model can not be compile graph again.
model.freeBuffer();
```
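The `freeBuffer()` calls above close out the graph-compilation step: the serialized model buffer is only needed until the session has compiled the graph, so it is released both on the failure path and after a successful compile. Here is a minimal sketch of that ordering; the `LiteSession`/`Model` class names and the boolean `compileGraph(model)` call follow the MindSpore Lite Java API as used in this demo, but treat the exact package and signatures as assumptions.

```Java
import com.mindspore.lite.LiteSession;
import com.mindspore.lite.Model;

// Sketch only: illustrates the ordering constraint around Model.freeBuffer().
public final class CompileThenFree {
    // Compile the graph first, then release the serialized model buffer.
    static boolean compileAndRelease(LiteSession session, Model model) {
        boolean compiled = session.compileGraph(model); // needs the model buffer in memory
        if (!compiled) {
            model.freeBuffer(); // release the buffer on the failure path as well
            return false;
        }
        // After freeBuffer(), this Model instance cannot be compiled into a graph again.
        model.freeBuffer();     // safe now: the session holds the compiled graph
        return true;
    }
}
```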
@@ -266,7 +266,7 @@ The inference code process is as follows. For details about the complete code, s
dstBitmap = scaleBitmapAndKeepRatio(dstBitmap, (int) resource_height, (int) resource_weight);
```
-4. The process of image and output data can refer to methods showing bellow.
+4. The process of image and output data can refer to methods showing below.
```Java
Bitmap scaleBitmapAndKeepRatio(Bitmap targetBmp, int reqHeightInPixels, int reqWidthInPixels) {
@@ -323,7 +323,7 @@ The inference code process is as follows. For details about the complete code, s
float value = inputBuffer.getFloat((y * imageWidth * NUM_CLASSES + x * NUM_CLASSES + i) * 4);
if (i == 0 || value > maxVal) {
maxVal = value;
-// Check wether a pixel belongs to a person whose label is 15.
+// Check whether a pixel belongs to a person whose label is 15.
if (i == 15) {
mSegmentBits[x][y] = i;
} else {
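
The truncated snippet above (and its duplicate in the Chinese page below) is the core of the demo's output post-processing: the model emits a float32 buffer laid out as [height, width, NUM_CLASSES], and each pixel is assigned the class with the highest score, with label 15 meaning "person". Below is a self-contained sketch of that per-pixel argmax in plain Java; the constants `NUM_CLASSES = 21` and `PERSON_LABEL = 15` and the helper name are illustrative assumptions, and only the indexing expression is taken from the snippet.

```Java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch only: per-pixel argmax over a float32 output buffer shaped [height, width, NUM_CLASSES].
public final class SegmentationPostProcess {
    static final int NUM_CLASSES = 21;  // assumption: Pascal VOC-style label set
    static final int PERSON_LABEL = 15; // label checked in the snippet above

    // Returns a mask where mask[x][y] holds the winning class label for each pixel.
    static int[][] argmaxMask(ByteBuffer inputBuffer, int imageWidth, int imageHeight) {
        inputBuffer.order(ByteOrder.nativeOrder()); // model output assumed to be native-order floats
        int[][] mask = new int[imageWidth][imageHeight];
        for (int y = 0; y < imageHeight; y++) {
            for (int x = 0; x < imageWidth; x++) {
                float maxVal = 0f;
                for (int i = 0; i < NUM_CLASSES; i++) {
                    // Same indexing as the demo: row-major [y][x][class], 4 bytes per float.
                    float value = inputBuffer.getFloat((y * imageWidth * NUM_CLASSES + x * NUM_CLASSES + i) * 4);
                    if (i == 0 || value > maxVal) {
                        maxVal = value;
                        mask[x][y] = i; // keep the highest-scoring class seen so far
                    }
                }
            }
        }
        return mask;
    }
}
```

A pixel is then treated as part of a person exactly when `mask[x][y] == PERSON_LABEL`.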

@@ -170,7 +170,7 @@ target_link_libraries(
Download the model file from the MindSpore Model Hub. The on-device image segmentation model used in this sample is `deeplabv3.ms`; it is likewise downloaded automatically by the download.gradle script when the APP is built and placed in the `app/src/main/assets` project directory.
-> If the download fails, please manually download the model file deeplabv3.ms from the [download link](https://download.mindspore.cn/model_zoo/official/lite/deeplabv3_openimage_lite/deeplabv3.ms).
+> If the download fails, please manually download the model file deeplabv3.ms from the [download link](https://download.mindspore.cn/model_zoo/official/lite/deeplabv3_lite/deeplabv3.ms).
### Compiling On-Device Inference Code
@@ -219,7 +219,7 @@ target_link_libraries(
model.freeBuffer();
return;
}
-// Note: when use model.freeBuffer(), the model can not be complile graph again.
+// Note: when use model.freeBuffer(), the model can not be compile graph again.
model.freeBuffer();
```
@@ -334,7 +334,7 @@ target_link_libraries(
float value = inputBuffer.getFloat((y * imageWidth * NUM_CLASSES + x * NUM_CLASSES + i) * 4);
if (i == 0 || value > maxVal) {
maxVal = value;
-// Check wether a pixel belongs to a person whose label is 15.
+// Check whether a pixel belongs to a person whose label is 15.
if (i == 15) {
mSegmentBits[x][y] = i;
} else {

@@ -655,4 +655,4 @@ The model has been validated on Ascend environment, not validated on CPU and GPU.
# ModelZoo Homepage
-[Link](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)
+[Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)

@@ -57,7 +57,7 @@ Dataset used: [LibriSpeech](<http://www.openslr.org/12>)
- Hardware (GPU)
- Prepare hardware environment with GPU processor.
- Framework
-- [MindSpore](https://cmc-szv.clouddragon.huawei.com/cmcversion/index/search?searchKey=Do-MindSpore%20V100R001C00B622)
+- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
@@ -241,7 +241,7 @@ python export.py --pre_trained_model_path='ckpt_path'
| Speed | 2p 2.139s/step |
| Total time: training | 2p: around 1 week; |
| Checkpoint | 991M (.ckpt file) |
-| Scripts | [DeepSpeech script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/audio/deepspeech) |
+| Scripts | [DeepSpeech script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/audio/deepspeech2) |
### Inference Performance

@@ -192,7 +192,7 @@ Parameters for both training and evaluation can be set in config.py
| Speed | 1pc: 160 samples/sec; |
| Total time | 1pc: 20 mins; |
| Checkpoint for Fine tuning | 198.73M(.ckpt file) |
-| Scripts | [music_auto_tagging script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/audio/fcn-4) |
+| Scripts | [music_auto_tagging script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/audio/fcn-4) |
## [ModelZoo Homepage](#contents)
