modify code formats for master

pull/11196/head
lvmingfu 4 years ago
parent bc276774e3
commit 27848587a3

@@ -120,7 +120,7 @@ The following optimizers add the target interface: Adam, FTRL, LazyAdam, Proxim
</tr>
</table>
-###### `export` Modify the input parameters and export's file name ([!7385](https://gitee.com/mind_spore/dashboard/projects/mindspore/mindspore/pulls/7385?tab=diffs) [!9057](https://gitee.com/mindspore/mindspore/pulls/9057/files))
+###### `export` Modify the input parameters and export's file name ([!7385](https://gitee.com/mindspore/mindspore/pulls/7385) [!9057](https://gitee.com/mindspore/mindspore/pulls/9057/files))
Export the MindSpore prediction model to a file in the specified format.
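As a hedged illustration of the revised interface (the network, shapes, and file name below are placeholders, assuming a MindSpore 1.x environment), exporting a trivial network might look like:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, export, nn

net = nn.Dense(10, 3)                         # stand-in for a trained network
dummy = Tensor(np.ones([1, 10]), ms.float32)  # dummy input that fixes the graph's input shape
# The output file name is derived from file_name together with file_format.
export(net, dummy, file_name="dense_net", file_format="MINDIR")
```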
@@ -227,7 +227,7 @@ However, from a user's perspective, tensor.size and tensor.ndim (methods -> prop
</tr>
</table>
-###### `EmbeddingLookup` add a config in the interface: sparse ([!8202](https://gitee.com/mind_spore/dashboard/projects/mindspore/mindspore/pulls/8202?tab=diffs))
+###### `EmbeddingLookup` add a config in the interface: sparse ([!8202](https://gitee.com/mindspore/mindspore/pulls/8202))
sparse (bool): Using sparse mode. When 'target' is set to 'CPU', 'sparse' has to be true. Default: True.
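A minimal sketch of the new flag, assuming the MindSpore 1.x `nn.EmbeddingLookup` signature; the vocabulary and embedding sizes are placeholders:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, nn

# sparse=True is required when target is 'CPU', as the note above states.
embed = nn.EmbeddingLookup(vocab_size=2000, embedding_size=16,
                           target='CPU', sparse=True)
ids = Tensor(np.array([[1, 3], [4, 5]]), ms.int32)
print(embed(ids).shape)  # (2, 2, 16): one 16-dim vector per input id
```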
@@ -878,7 +878,7 @@ Contributions of any kind are welcome!
- Fix bug of list cannot be used as input in pynative mode([!1765](https://gitee.com/mindspore/mindspore/pulls/1765))
- Fix bug of kernel select ([!2103](https://gitee.com/mindspore/mindspore/pulls/2103))
- Fix bug of pattern matching for batchnorm fusion in the case of auto mix precision.([!1851](https://gitee.com/mindspore/mindspore/pulls/1851))
-- Fix bug of generate hccl's kernel info.([!2393](https://gitee.com/mindspore/mindspore/mindspore/pulls/2393))
+- Fix bug of generate hccl's kernel info.([!2393](https://gitee.com/mindspore/mindspore/pulls/2393))
- GPU platform
- Fix bug of summary feature invalid([!2173](https://gitee.com/mindspore/mindspore/pulls/2173))
- Data processing


@@ -2,11 +2,15 @@
<!-- TOC -->
- [Contents](#目录)
- [Overview](#概述)
- [Dataset](#数据集)
- [Environment Requirements](#环境要求)
- [Quick Start](#快速入门)
- [Script Detailed Description](#脚本详述)
- [Model Preparation](#模型准备)
- [Model Training](#模型训练)
- [Directory Structure](#工程目录)
<!-- /TOC -->
@@ -14,7 +18,7 @@
This document explains how to train a LeNet model on a device. The model is first converted on a server or a personal laptop, and then trained on an Android device. LeNet consists of 2 convolutional layers and 3 fully connected layers; thanks to this simple structure, it can be trained quickly on the device.
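As a hedged sketch of the network described above (2 convolutional plus 3 fully connected layers; the layer sizes follow the classic LeNet-5 and the 1x32x32 input is assumed rather than taken from this repository):

```python
import mindspore.nn as nn

class LeNet5(nn.Cell):
    """2-conv / 3-dense LeNet sketch; assumes a 1x32x32 input."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5, pad_mode='valid')   # 1x32x32 -> 6x28x28
        self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')  # 6x14x14 -> 16x10x10
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.flatten = nn.Flatten()
        self.fc1 = nn.Dense(16 * 5 * 5, 120)
        self.fc2 = nn.Dense(120, 84)
        self.fc3 = nn.Dense(84, num_classes)

    def construct(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = self.pool(self.relu(self.conv2(x)))
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        return self.fc3(x)
```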
-# Dataset
+# 数据集
This example uses the [MNIST handwritten digit dataset](http://yann.lecun.com/exdb/mnist/)
@@ -40,8 +44,9 @@ mnist/
# Environment Requirements
- A server or a personal laptop
-- [MindSpore Framework](https://www.mindspore.cn/install/en): Docker-based installation recommended
-- [MindSpore ToD Framework](https://www.mindspore.cn/tutorial/tod/en/use/prparation.html)
+- [MindSpore Framework](https://www.mindspore.cn/install): Docker-based installation recommended
+- [MindSpore ToD Download](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/downloads.html)
+- [MindSpore ToD Build](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/build.html)
- [Android NDK r20b](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip)
- [Android SDK](https://developer.android.com/studio?hl=zh-cn#cmdline-tools)
- An Android mobile device
@@ -116,4 +121,4 @@ train_lenet/
│   ├── model
│   │   └── lenet_tod.ms # model to train
│   └── train.sh # on-device script that loads the initial model and trains it
```
```

@@ -2,10 +2,14 @@
<!-- TOC -->
- [Contents](#目录)
- [Overview](#概述)
-- [Dataset](#环境要求)
+- [Dataset](#数据集)
- [Environment Requirements](#环境要求)
- [Quick Start](#快速入门)
- [Script Detailed Description](#脚本详述)
- [Model Preparation](#模型准备)
- [Model Training](#模型训练)
- [Directory Structure](#工程目录)
<!-- /TOC -->
@@ -22,6 +26,7 @@
- Data format: JPEG
> Note:
>
> - In the current release, data is loaded through the custom `DataSet` class in dataset.cc. We use the [ImageMagick convert tool](https://imagemagick.org/) for preprocessing, including cropping the images and converting them to BMP format (see the sketch after this note).
> - This example uses 10 classes instead of the full 365.
> - The train/validation/test split is 3:1:1.
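For illustration only, such a preprocessing step driven from Python might look like the following; the file names and the 224x224 crop size are assumptions, not values taken from dataset.cc:

```python
import subprocess

# Center-crop a JPEG and convert it to BMP with ImageMagick's `convert`.
subprocess.run(
    ["convert", "places_input.jpeg",
     "-gravity", "center", "-crop", "224x224+0+0", "+repage",  # center crop
     "BMP3:places_output.bmp"],                                # BMP output
    check=True,
)
```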
@@ -42,7 +47,8 @@ places
- Server side
- [MindSpore Framework](https://www.mindspore.cn/install/en) - Docker-based installation recommended
-- [MindSpore ToD Framework](https://www.mindspore.cn/tutorial/tod/en/use/prparation.html)
+- [MindSpore ToD Download](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/downloads.html)
+- [MindSpore ToD Build](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/build.html)
- [Android NDK r20b](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip)
- [Android SDK](https://developer.android.com/studio?hl=zh-cn#cmdline-tools)
- [ImageMagick convert tool](https://imagemagick.org/)

@@ -1,6 +1,9 @@
# Contents
-- [CenterFace Description](#CenterFace-description)
+<!-- TOC -->
+- [Contents](#contents)
+- [CenterFace Description](#centerface-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
@@ -11,7 +14,7 @@
- [Training Process](#training-process)
- [Training](#training)
- [Testing Process](#testing-process)
-- [Evaluation](#testing)
+- [Testing](#testing)
- [Evaluation Process](#evaluation-process)
- [Evaluation](#evaluation)
- [Convert Process](#convert-process)
@@ -20,8 +23,11 @@
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
<!-- /TOC -->
# [CenterFace Description](#contents)
CenterFace is a practical anchor-free face detection and alignment method for edge devices; we support training and evaluation on Ascend 910.
@@ -80,8 +86,8 @@ other datasets need to use the same format as WiderFace.
- Framework
- [MindSpore](https://cmc-szv.clouddragon.huawei.com/cmcversion/index/search?searchKey=Do-MindSpore%20V100R001C00B622)
- For more information, please check the resources below
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)
@@ -226,7 +232,7 @@ sh eval_all.sh
the command is: python train.py [train parameters]
Major parameters of train.py are as follows:
-```python
+```text
--lr: learning rate
--per_batch_size: batch size on each device
--is_distributed: multi-device or not


@@ -11,18 +11,18 @@
- [Environment Requirements](#环境要求)
- [Quick Start](#快速入门)
- [Script Description](#脚本说明)
-- [Script and Sample Code](#脚本及样例代码)
-- [Script Parameters](#脚本参数)
-- [Training Process](#训练过程)
+- [Script and Sample Code](#脚本及样例代码)
+- [Script Parameters](#脚本参数)
+- [Training Process](#训练过程)
- [Training](#训练)
- [Training Results](#训练结果)
-- [Evaluation Process](#评估过程)
+- [Evaluation Process](#评估过程)
- [Evaluation](#评估)
- [Model Description](#模型描述)
-- [Performance](#性能)
+- [Performance](#性能)
- [Training Performance](#训练性能)
- [Evaluation Performance](#评估性能)
-- [Usage](#用法)
+- [Usage](#用法)
- [Inference](#推理)
- [Continue Training on the Pre-trained Model](#在预训练模型上继续训练)
- [ModelZoo Homepage](#modelzoo主页)
@@ -101,12 +101,12 @@ python src/preprocess_dataset.py
- Framework
-- [MindSpore](https://www.mindspore.cn/install)
+- [MindSpore](https://www.mindspore.cn/install)
- For details, see the following resources:
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
# Quick Start

@@ -67,7 +67,7 @@ All the models in this repository are trained and validated on ImageNet-1K. The
## [Mixed Precision](#contents)
-The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
+The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
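A minimal sketch of enabling this through the high-level `Model` wrapper (the network, loss, and optimizer below are placeholders; `amp_level` semantics follow the tutorial linked above):

```python
import mindspore.nn as nn
from mindspore import Model

net = nn.Dense(16, 10)  # stand-in network
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
# "O2" keeps BatchNorm and the loss in FP32 while casting the rest to FP16;
# "O3" casts the whole network.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
```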
# [Environment Requirements](#contents)
@@ -81,8 +81,8 @@ To run the python scripts in the repository, you need to prepare the environment
- Easydict
- MXNet 1.6.0 if running the script `param_convert.py`
- For more information, please check the resources below
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)

@@ -50,7 +50,7 @@ MaskRCNN is a two-stage object detection network that extends FasterRCNN
- Annotations: 241M, including instances, captions, person keypoints, etc.
- Data format: images and JSON files
-- Note: Data is processed in [dataset.py](http://dataset.py/).
+- Note: Data is processed in `dataset.py`.
# Environment Requirements
@@ -583,7 +583,7 @@ Accumulating evaluation results...
# Description of Random Situation
-The seed inside the "create_dataset" function is set in [dataset.py](http://dataset.py/), and the random seed in [train.py](http://train.py/) is also used for weight initialization.
+The seed inside the "create_dataset" function is set in `dataset.py`, and the random seed in `train.py` is also used for weight initialization.
# ModelZoo Homepage

@@ -58,7 +58,7 @@ The overall network architecture of MobileNetV2 is as follows:
- Hardware (Ascend/GPU/CPU)
- Set up the hardware environment with Ascend, GPU, or CPU processors. To try an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; resources are granted once the application is approved.
- Framework
-- [MindSpore](https://www.mindspore.cn/install/en)
+- [MindSpore](https://www.mindspore.cn/install)
- For details, see the following resources:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
@@ -222,7 +222,7 @@ python export.py --platform [PLATFORM] --ckpt_file [CKPT_PATH] --file_format [EX
# Description of Random Situation
-<!-- The seed inside the "create_dataset" function is set in [dataset.py](http://dataset.py/), and the random seed in train.py is also used. -->
+<!-- The seed inside the "create_dataset" function is set in `dataset.py`, and the random seed in train.py is also used. -->
train.py sets the seeds used by numpy.random, mindspore.common.Initializer, mindspore.ops.composite.random_ops, and mindspore.nn.probability.distribution.
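A hedged sketch of what that seeding amounts to, assuming the top-level `set_seed` covers the MindSpore modules listed above:

```python
import numpy as np
import mindspore as ms
import mindspore.dataset as ds

np.random.seed(1)      # numpy.random
ms.set_seed(1)         # mindspore initializers, random ops, distributions
ds.config.set_seed(1)  # dataset shuffling, e.g. inside create_dataset
```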
# ModelZoo Homepage

@@ -1,4 +1,5 @@
# Contents
<!-- TOC -->
- [Contents](#目录)
@@ -30,7 +31,6 @@
# MobileNetV2 Description
MobileNetV2 combines hardware-aware neural architecture search (NAS) with the NetAdapt algorithm and can already be ported to run on mobile-phone CPUs, with further optimization as new architectures arrive. (November 20, 2019)
[Paper](https://arxiv.org/pdf/1905.02244): Howard, Andrew, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang et al. "Searching for MobileNetV2." In Proceedings of the IEEE International Conference on Computer Vision, pp. 1314-1324. 2019.
@@ -47,12 +47,13 @@ The overall network architecture of MobileNetV2 is as follows:
Dataset used: [imagenet](http://www.image-net.org/)
--Dataset size: 125G, 1000 classes, 1.2 million color images
-- Training set: 120G, 1.2 million images
-- Test set: 5G, 50,000 images
-- Data format: RGB
-- Note: Data is processed in src/dataset.py.
+- Dataset size: 125G, 1000 classes, 1.2 million color images
+- Training set: 120G, 1.2 million images
+- Test set: 5G, 50,000 images
+- Data format: RGB
+- Note: Data is processed in src/dataset.py.
# Features
@@ -64,13 +65,12 @@ The overall network architecture of MobileNetV2 is as follows:
# Environment Requirements
- Hardware (Ascend processor)
-- Set up the hardware environment with Ascend processors. To try an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; resources are granted once the application is approved.
+- Set up the hardware environment with Ascend processors. To try an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; resources are granted once the application is approved.
- Framework
-- [MindSpore](https://www.mindspore.cn/install/en)
+- [MindSpore](https://www.mindspore.cn/install)
- For details, see the following resources:
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
# Script Description
@@ -94,7 +94,6 @@ The overall network architecture of MobileNetV2 is as follows:
├── export.py # export the checkpoint file to air/onnx
```
## Script Parameters
Both training and evaluation parameters can be configured in config.py.
@@ -123,13 +122,11 @@ The overall network architecture of MobileNetV2 is as follows:
### Usage
Start training with Python or shell scripts. The usage of the shell scripts is as follows:
- bash run_train.sh [Ascend] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH]\(optional)
- bash run_train.sh [GPU] [DEVICE_ID_LIST] [DATASET_PATH] [PRETRAINED_CKPT_PATH]\(optional)
### Launch
``` bash
@@ -143,7 +140,7 @@ The overall network architecture of MobileNetV2 is as follows:
Training results are saved in the example path. Checkpoints trained on `Ascend` are saved by default in `./train/device$i/checkpoint`, with training logs redirected to `./train/device$i/train.log`. Checkpoints trained on `GPU` are saved by default in `./train/checkpointckpt_$i`, with training logs redirected to `./train/train.log`.
The contents of `train.log` are as follows:
-```
+```text
epoch:[ 0/200], step:[ 624/ 625], loss:[5.258/5.258], time:[140412.236], lr:[0.100]
epoch time:140522.500, per step time:224.836, avg loss:5.258
epoch:[ 1/200], step:[ 624/ 625], loss:[3.917/3.917], time:[138221.250], lr:[0.200]
@@ -160,7 +157,7 @@ epoch time:138331.250, per step time:221.330, avg loss:3.917
### Launch
-```
+```bash
# inference example
shell:
Ascend: sh run_infer_quant.sh Ascend ~/imagenet/val/ ~/train/mobilenet-60_1601.ckpt
@@ -172,7 +169,7 @@ epoch time:138331.250, per step time:221.330, avg loss:3.917
Inference results are saved in the example path; results like the following can be found in `./val/infer.log`:
-```
+```text
result:{'acc':0.71976314102564111}
```
@@ -218,7 +215,7 @@ result:{'acc':0.71976314102564111}
# Description of Random Situation
-The seed inside the "create_dataset" function is set in [dataset.py](http://dataset.py/), and the random seed in train.py is also used.
+The seed inside the "create_dataset" function is set in `dataset.py`, and the random seed in train.py is also used.
# ModelZoo Homepage

@@ -1,4 +1,5 @@
# Contents
<!-- TOC -->
- [Contents](#目录)
@@ -27,7 +28,6 @@
# MobileNetV3 Description
MobileNetV3 combines hardware-aware neural architecture search (NAS) with the NetAdapt algorithm and can already be ported to run on mobile-phone CPUs, with further optimization as new architectures arrive. (November 20, 2019)
[Paper](https://arxiv.org/pdf/1905.02244): Howard, Andrew, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang et al. "Searching for mobilenetv3." In Proceedings of the IEEE International Conference on Computer Vision, pp. 1314-1324. 2019.
@@ -43,38 +43,36 @@ The overall network architecture of MobileNetV3 is as follows:
Dataset used: [imagenet](http://www.image-net.org/)
- Dataset size: 125G, 1000 classes, 1.2 million color images
-- Training set: 120G, 1.2 million images
-- Test set: 5G, 50,000 images
+- Training set: 120G, 1.2 million images
+- Test set: 5G, 50,000 images
- Data format: RGB
-- Note: Data is processed in src/dataset.py.
+- Note: Data is processed in src/dataset.py.
# Environment Requirements
- Hardware (GPU)
-- Prepare a GPU processor to set up the hardware environment.
+- Prepare a GPU processor to set up the hardware environment.
- Framework
-- [MindSpore](https://www.mindspore.cn/install/en)
+- [MindSpore](https://www.mindspore.cn/install)
- For details, see the following resources:
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
# Script Description
## Script and Sample Code
```python
-├── MobileNetV3
-├── Readme.md # MobileNetV3 description
-├── scripts
-│ ├──run_train.sh # shell script for training
-│ ├──run_eval.sh # shell script for evaluation
-├── src
-│ ├──config.py # parameter configuration
+├── MobileNetV3
+├── Readme.md # MobileNetV3 description
+├── scripts
+│ ├──run_train.sh # shell script for training
+│ ├──run_eval.sh # shell script for evaluation
+├── src
+│ ├──config.py # parameter configuration
│ ├──dataset.py # create the dataset
│ ├──launch.py # launcher python script
-│ ├──lr_generator.py # learning rate configuration
+│ ├──lr_generator.py # learning rate configuration
│ ├──mobilenetV3.py # MobileNetV3 architecture
├── train.py # training script
├── eval.py # evaluation script
@@ -91,7 +89,7 @@ The overall network architecture of MobileNetV3 is as follows:
### Launch
-```
+```bash
# training example
python:
GPU: python train.py --dataset_path ~/imagenet/train/ --device_targe GPU
@@ -101,9 +99,9 @@ The overall network architecture of MobileNetV3 is as follows:
### Results
-Training results are saved in the example path. Checkpoints are saved in `./checkpoint` by default, and training logs are redirected to `./train/train.log`, as shown below:
+Training results are saved in the example path. Checkpoints are saved in `./checkpoint` by default, and training logs are redirected to `./train/train.log`, as shown below:
-```
+```text
epoch:[ 0/200], step:[ 624/ 625], loss:[5.258/5.258], time:[140412.236], lr:[0.100]
epoch time:140522.500, per step time:224.836, avg loss:5.258
epoch:[ 1/200], step:[ 624/ 625], loss:[3.917/3.917], time:[138221.250], lr:[0.200]
@@ -120,7 +118,7 @@ epoch time:138331.250, per step time:221.330, avg loss:3.917
### Launch
-```
+```bash
# inference example
python:
GPU: python eval.py --dataset_path ~/imagenet/val/ --checkpoint_path mobilenet_199.ckpt --device_targe GPU
@@ -129,13 +127,13 @@ epoch time:138331.250, per step time:221.330, avg loss:3.917
GPU: sh run_infer.sh GPU ~/imagenet/val/ ~/train/mobilenet-200_625.ckpt
```
-> Checkpoints can be generated during training.
+> Checkpoints can be generated during training.
### Results
-Inference results are saved in the example path; results like the following can be found in `val.log`:
+Inference results are saved in the example path; results like the following can be found in `val.log`:
-```
+```text
result:{'acc':0.71976314102564111} ckpt=/path/to/checkpoint/mobilenet-200_625.ckpt
```
@@ -143,7 +141,7 @@ result:{'acc':0.71976314102564111} ckpt=/path/to/checkpoint/mobilenet-200_625.ck
Modify `export_mode` and `export_file` in `src/config.py`, then run `export.py`.
-```
+```bash
python export.py --device_target [PLATFORM] --checkpoint_path [CKPT_PATH]
```
@@ -173,8 +171,8 @@ python export.py --device_target [PLATFORM] --checkpoint_path [CKPT_PATH]
# Description of Random Situation
-The seed inside the "create_dataset" function is set in [dataset.py](http://dataset.py/), and the random seed in train.py is also used.
+The seed inside the "create_dataset" function is set in `dataset.py`, and the random seed in train.py is also used.
# ModelZoo Homepage
-Please visit the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
+Please visit the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).

@@ -52,7 +52,7 @@
- Framework
- [MindSpore](https://www.mindspore.cn/install)
- For details, see the following resources:
-- [MindSpore tutorials](https://www.mindspore.cn/tutory/training/en/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
- Install MindSpore
- Install [pyblind11](https://github.com/pybind/pybind11)

@@ -491,7 +491,7 @@ result:{'top_5_accuracy':0.9342589628681178, 'top_1_accuracy':0.768065781049936}
# Description of Random Situation
-The seed inside the "create_dataset" function is set in [dataset.py](http://dataset.py/), and the random seed in train.py is also used.
+The seed inside the "create_dataset" function is set in `dataset.py`, and the random seed in train.py is also used.
# ModelZoo Homepage


@@ -67,8 +67,8 @@ RetinaFace uses a ResNet50 backbone to extract image features for detection. Obtained from ModelZoo
- Framework
- [MindSpore](https://www.mindspore.cn/install)
- For details, see the following resources:
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
# Quick Start

@@ -53,7 +53,7 @@ Dataset used: COCO2017
## [Mixed Precision](#contents)
-The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
+The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
# [Environment Requirements](#contents)
@@ -68,8 +68,8 @@ To run the python scripts in the repository, you need to prepare the environment
- opencv-python 4.3.0.36
- pycocotools 2.0
- For more information, please check the resources below
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)


@@ -52,7 +52,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
- Install [MindSpore](https://www.mindspore.cn/install/en).
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-- [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
## Software

@@ -550,8 +550,8 @@ The comparisons between MASS and other baseline methods in terms of PPL on Corne
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
## Requirements
@@ -562,7 +562,7 @@ subword-nmt
rouge
```
-<https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html>
+<https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html>
# Get started
@@ -624,7 +624,7 @@ Get the log and output files under the path `./train_mass_*/`, and the model fil
## Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html).
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html).
For inference, config the options in `config.json` firstly:
- Assign the `test_dataset` under `dataset_config` node to the dataset path.


@@ -32,7 +32,7 @@ FCN-4 is a convolutional neural network architecture, its name FCN-4 comes from
### Mixed Precision
-The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
## [Environment Requirements](#contents)
@@ -42,8 +42,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
## [Quick Start](#contents)

@@ -90,8 +90,8 @@ We use about 91K face images as training dataset and 11K as evaluating dataset i
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Script Description](#contents)

@@ -74,8 +74,8 @@ We use about 13K images as training dataset and 3K as evaluating dataset in this
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Script Description](#contents)

