modify code formats for master

pull/11196/head
lvmingfu 4 years ago
parent bc276774e3
commit 27848587a3

@ -120,7 +120,7 @@ The following optimizers add the target interface: Adam, FTRL, LazyAdam, Proxim
</tr>
</table>
###### `export` Modify the input parameters and export's file name ([!7385](https://gitee.com/mind_spore/dashboard/projects/mindspore/mindspore/pulls/7385?tab=diffs) [!9057](https://gitee.com/mindspore/mindspore/pulls/9057/files))
###### `export` Modify the input parameters and export's file name ([!7385](https://gitee.com/mindspore/mindspore/pulls/7385) [!9057](https://gitee.com/mindspore/mindspore/pulls/9057/files))
Export the MindSpore prediction model to a file in the specified format.
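For illustration only, a minimal sketch of the revised call, assuming a MindSpore 1.x environment where `mindspore.train.serialization.export` accepts `file_name` and `file_format`; the `nn.Dense` network and input shape below are placeholders, not the release-note example.

```python
import numpy as np
from mindspore import Tensor, nn
from mindspore.train.serialization import export

net = nn.Dense(32, 10)  # placeholder for a trained network such as LeNet5
dummy_input = Tensor(np.ones([1, 32]).astype(np.float32))
# file_name and file_format are the input parameters touched by this change.
export(net, dummy_input, file_name="dense_net", file_format="MINDIR")
```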
@ -227,7 +227,7 @@ However, from a user's perspective, tensor.size and tensor.ndim (methods -> prop
</tr>
</table>
###### `EmbeddingLookup` add a config in the interface: sparse ([!8202](https://gitee.com/mind_spore/dashboard/projects/mindspore/mindspore/pulls/8202?tab=diffs))
###### `EmbeddingLookup` add a config in the interface: sparse ([!8202](https://gitee.com/mindspore/mindspore/pulls/8202))
sparse (bool): Whether to use sparse mode. When 'target' is set to 'CPU', 'sparse' must be True. Default: True.
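As an illustration, a small sketch of the new flag, assuming the public `mindspore.nn.EmbeddingLookup` interface; the vocabulary size and index values are made up.

```python
import numpy as np
from mindspore import Tensor, nn

# sparse must stay True when target='CPU' (the default noted in this change).
embedding = nn.EmbeddingLookup(vocab_size=2000, embedding_size=16,
                               target='CPU', sparse=True)
ids = Tensor(np.array([[1, 3, 5], [2, 4, 6]], dtype=np.int32))
out = embedding(ids)  # expected output shape: (2, 3, 16)
```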
@ -878,7 +878,7 @@ Contributions of any kind are welcome!
- Fix bug where a list cannot be used as input in PyNative mode([!1765](https://gitee.com/mindspore/mindspore/pulls/1765))
- Fix bug of kernel selection ([!2103](https://gitee.com/mindspore/mindspore/pulls/2103))
- Fix bug of pattern matching for batchnorm fusion in the case of auto mixed precision.([!1851](https://gitee.com/mindspore/mindspore/pulls/1851))
- Fix bug of generating HCCL kernel info.([!2393](https://gitee.com/mindspore/mindspore/mindspore/pulls/2393))
- Fix bug of generating HCCL kernel info.([!2393](https://gitee.com/mindspore/mindspore/pulls/2393))
- GPU platform
- Fix bug where the summary feature is invalid([!2173](https://gitee.com/mindspore/mindspore/pulls/2173))
- Data processing

File diff suppressed because it is too large

@ -2,11 +2,15 @@
<!-- TOC -->
- [Contents](#目录)
- [Overview](#概述)
- [Dataset](#数据集)
- [Environment Requirements](#环境要求)
- [Quick Start](#快速入门)
- [Script Description](#脚本详述)
- [Model Preparation](#模型准备)
- [Model Training](#模型训练)
- [Project Structure](#工程目录)
<!-- /TOC -->
@ -14,7 +18,7 @@
This document explains how to train a LeNet model on the device side. The model is first converted on a server or personal laptop, and then trained on an Android device. LeNet consists of 2 convolutional layers and 3 fully connected layers; the model structure is simple, so it can be trained quickly on the device.
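For reference, a minimal MindSpore sketch of the 2-conv / 3-dense LeNet structure described above, assuming 1x32x32 inputs (the usual MNIST-to-32x32 preprocessing); it is illustrative and not the exact network shipped with this example.

```python
import mindspore.nn as nn

class LeNet5(nn.Cell):
    """Sketch of the 2-convolution / 3-dense LeNet structure mentioned above."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5, pad_mode='valid')
        self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.flatten = nn.Flatten()
        self.fc1 = nn.Dense(16 * 5 * 5, 120)
        self.fc2 = nn.Dense(120, 84)
        self.fc3 = nn.Dense(84, num_classes)

    def construct(self, x):
        # assumes x has shape (N, 1, 32, 32)
        x = self.pool(self.relu(self.conv1(x)))
        x = self.pool(self.relu(self.conv2(x)))
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        return self.fc3(x)
```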
# Dataset
# 数据集
This example uses the [MNIST handwritten digit dataset](http://yann.lecun.com/exdb/mnist/).
@ -40,8 +44,9 @@ mnist/
# Environment Requirements
- A server or personal laptop
- [MindSpore Framework](https://www.mindspore.cn/install/en): installation via Docker is recommended
- [MindSpore ToD Framework](https://www.mindspore.cn/tutorial/tod/en/use/prparation.html)
- [MindSpore Framework](https://www.mindspore.cn/install): installation via Docker is recommended
- [MindSpore ToD Download](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/downloads.html)
- [MindSpore ToD Build](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/build.html)
- [Android NDK r20b](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip)
- [Android SDK](https://developer.android.com/studio?hl=zh-cn#cmdline-tools)
- An Android mobile device

@ -2,10 +2,14 @@
<!-- TOC -->
- [Contents](#目录)
- [Overview](#概述)
- [Dataset](#环境要求)
- [Dataset](#数据集)
- [Environment Requirements](#环境要求)
- [Quick Start](#快速入门)
- [Script Description](#脚本详述)
- [Model Preparation](#模型准备)
- [Model Training](#模型训练)
- [Project Structure](#工程目录)
<!-- /TOC -->
@ -22,6 +26,7 @@
- Data format: JPEG
> Note:
>
> - In the current release, data is loaded through the custom `DataSet` class in dataset.cc. We use the [ImageMagick convert tool](https://imagemagick.org/) for data preprocessing, including image cropping and conversion to BMP format.
> - This example uses 10 classes instead of the full 365 classes.
> - The ratio of the training, validation, and test datasets is 3:1:1.
@ -42,7 +47,8 @@ places
- Server side
- [MindSpore Framework](https://www.mindspore.cn/install/en) - installation in a Docker environment is recommended
- [MindSpore ToD Framework](https://www.mindspore.cn/tutorial/tod/en/use/prparation.html)
- [MindSpore ToD Download](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/downloads.html)
- [MindSpore ToD Build](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/build.html)
- [Android NDK r20b](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip)
- [Android SDK](https://developer.android.com/studio?hl=zh-cn#cmdline-tools)
- [ImageMagick convert tool](https://imagemagick.org/)

@ -1,6 +1,9 @@
# Contents
- [CenterFace Description](#CenterFace-description)
<!-- TOC -->
- [Contents](#contents)
- [CenterFace Description](#centerface-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
@ -11,7 +14,7 @@
- [Training Process](#training-process)
- [Training](#training)
- [Testing Process](#testing-process)
- [Evaluation](#testing)
- [Testing](#testing)
- [Evaluation Process](#evaluation-process)
- [Evaluation](#evaluation)
- [Convert Process](#convert-process)
@ -20,8 +23,11 @@
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
<!-- /TOC -->
# [CenterFace Description](#contents)
CenterFace is a practical anchor-free face detection and alignment method for edge devices; we support training and evaluation on Ascend 910.
@ -80,8 +86,8 @@ other datasets need to use the same format as WiderFace.
- Framework
- [MindSpore](https://cmc-szv.clouddragon.huawei.com/cmcversion/index/search?searchKey=Do-MindSpore%20V100R001C00B622)
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)
@ -226,7 +232,7 @@ sh eval_all.sh
The command is: python train.py [train parameters]
Major parameters of train.py are as follows:
```python
```text
--lr: learning rate
--per_batch_size: batch size on each device
--is_distributed: multi-device or not

File diff suppressed because it is too large

@ -104,7 +104,7 @@ python src/preprocess_dataset.py
- [MindSpore](https://www.mindspore.cn/install)
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)

@ -67,7 +67,7 @@ All the models in this repository are trained and validated on ImageNet-1K. The
## [Mixed Precision](#contents)
The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
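For illustration, a minimal sketch of enabling this at the `Model` level, assuming the `amp_level` switch described in the linked tutorial; the network, loss, and optimizer below are placeholders rather than any specific model in this repository.

```python
import mindspore.nn as nn
from mindspore.train.model import Model

net = nn.Dense(32, 10)  # stand-in for the FP32 network being trained
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)

# "O2" casts the network to FP16 while keeping BatchNorm and the loss in FP32;
# reduced-precision operators can then be checked in the INFO log.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
```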
# [Environment Requirements](#contents)
@ -81,8 +81,8 @@ To run the python scripts in the repository, you need to prepare the environment
- Easydict
- MXNet 1.6.0 if running the script `param_convert.py`
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)

@ -50,7 +50,7 @@ MaskRCNN是一个两级目标检测网络作为FasterRCNN的扩展模型
- Annotations: 241M, including instances, captions, person keypoints, etc.
- Data format: images and JSON files
- Note: Data is processed in [dataset.py](http://dataset.py/).
- Note: Data is processed in `dataset.py`.
# Environment Requirements
@ -583,7 +583,7 @@ Accumulating evaluation results...
# Description of Random Situation
The seed inside the "create_dataset" function is set in [dataset.py](http://dataset.py/), and the random seed in [train.py](http://train.py/) is also used for weight initialization.
The seed inside the "create_dataset" function is set in `dataset.py`, and the random seed in `train.py` is also used for weight initialization.
# ModelZoo Homepage

@ -58,7 +58,7 @@ MobileNetV2总体网络架构如下
- Hardware: Ascend/GPU/CPU
- Set up the hardware environment with Ascend, GPU, or CPU processors. To apply for an Ascend processor trial, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- [MindSpore](https://www.mindspore.cn/install)
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
@ -222,7 +222,7 @@ python export.py --platform [PLATFORM] --ckpt_file [CKPT_PATH] --file_format [EX
# Description of Random Situation
<!-- The seed inside the "create_dataset" function is set in [dataset.py](http://dataset.py/), and the random seed in train.py is also used. -->
<!-- The seed inside the "create_dataset" function is set in `dataset.py`, and the random seed in train.py is also used. -->
The seeds used by numpy.random, mindspore.common.Initializer, mindspore.ops.composite.random_ops, and mindspore.nn.probability.distribution are set in train.py.
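A hedged illustration of fixing those seeds; the actual seed value used by train.py is not shown here, and `mindspore.set_seed` is assumed to be available in the installed version.

```python
import numpy as np
from mindspore import set_seed

np.random.seed(1)  # numpy-level randomness
set_seed(1)        # covers mindspore.common.Initializer and MindSpore random ops
```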
# ModelZoo Homepage

@ -1,4 +1,5 @@
# Contents
<!-- TOC -->
- [Contents](#目录)
@ -30,7 +31,6 @@
# MobileNetV2 Description
Combining hardware-aware neural architecture search (NAS) with the NetAdapt algorithm, MobileNetV2 can already be ported to run on mobile phone CPUs, and it is further optimized and improved as new architectures appear. (November 20, 2019)
[Paper](https://arxiv.org/pdf/1905.02244): Howard, Andrew, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang et al. "Searching for MobileNetV2." In Proceedings of the IEEE International Conference on Computer Vision, pp. 1314-1324. 2019.
@ -47,13 +47,14 @@ MobileNetV2总体网络架构如下
Dataset used: [imagenet](http://www.image-net.org/)
-Dataset size: 125G, 1000 classes, 1.2 million color images
- Dataset size: 125G, 1000 classes, 1.2 million color images
- Training set: 120G, 1.2 million images
- Test set: 5G, 50,000 images
- Data format: RGB
- Note: Data is processed in src/dataset.py.
# Features
## Mixed Precision
@ -66,12 +67,11 @@ MobileNetV2总体网络架构如下
- Hardware: Ascend processor
- Set up the hardware environment with an Ascend processor. To apply for an Ascend processor trial, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- [MindSpore](https://www.mindspore.cn/install)
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
# Script Description
## Script and Sample Code
@ -94,7 +94,6 @@ MobileNetV2总体网络架构如下
├── export.py # Export the checkpoint file to AIR/ONNX
```
## Script Parameters
Both training and evaluation parameters can be configured in config.py.
@ -123,13 +122,11 @@ MobileNetV2总体网络架构如下
### Usage
Use Python or shell scripts to start training. The usage of the shell scripts is as follows:
- bash run_train.sh [Ascend] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH]\(optional)
- bash run_train.sh [GPU] [DEVICE_ID_LIST] [DATASET_PATH] [PRETRAINED_CKPT_PATH]\(optional)
### Launch
``` bash
@ -143,7 +140,7 @@ MobileNetV2总体网络架构如下
Training results are saved in the example path. Checkpoints trained on the `Ascend` processor are saved in `./train/device$i/checkpoint` by default, and training logs are redirected to `./train/device$i/train.log`. Checkpoints trained on the `GPU` processor are saved in `./train/checkpointckpt_$i` by default, and training logs are redirected to `./train/train.log`.
The content of `train.log` is as follows:
```
```text
epoch:[ 0/200], step:[ 624/ 625], loss:[5.258/5.258], time:[140412.236], lr:[0.100]
epoch time:140522.500, per step time:224.836, avg loss:5.258
epoch:[ 1/200], step:[ 624/ 625], loss:[3.917/3.917], time:[138221.250], lr:[0.200]
@ -160,7 +157,7 @@ epoch time:138331.250, per step time:221.330, avg loss:3.917
### Launch
```
```bash
# Inference example
shell:
Ascend: sh run_infer_quant.sh Ascend ~/imagenet/val/ ~/train/mobilenet-60_1601.ckpt
@ -172,7 +169,7 @@ epoch time:138331.250, per step time:221.330, avg loss:3.917
Inference results are saved in the example path; the following results can be found in `./val/infer.log`:
```
```text
result:{'acc':0.71976314102564111}
```
@ -218,7 +215,7 @@ result:{'acc':0.71976314102564111}
# Description of Random Situation
The seed inside the "create_dataset" function is set in [dataset.py](http://dataset.py/), and the random seed in train.py is also used.
The seed inside the "create_dataset" function is set in `dataset.py`, and the random seed in train.py is also used.
# ModelZoo Homepage

@ -1,4 +1,5 @@
# Contents
<!-- TOC -->
- [Contents](#目录)
@ -27,7 +28,6 @@
# MobileNetV3 Description
Combining hardware-aware neural architecture search (NAS) with the NetAdapt algorithm, MobileNetV3 can already be ported to run on mobile phone CPUs, and it is further optimized and improved as new architectures appear. (November 20, 2019)
[Paper](https://arxiv.org/pdf/1905.02244): Howard, Andrew, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang et al. "Searching for mobilenetv3." In Proceedings of the IEEE International Conference on Computer Vision, pp. 1314-1324. 2019.
@ -48,18 +48,16 @@ MobileNetV3总体网络架构如下
- Data format: RGB
- Note: Data is processed in src/dataset.py.
# Environment Requirements
- Hardware: GPU
- Prepare a GPU processor to set up the hardware environment.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- [MindSpore](https://www.mindspore.cn/install)
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
# Script Description
## Script and Sample Code
@ -91,7 +89,7 @@ MobileNetV3总体网络架构如下
### Launch
```
```bash
# Training example
python:
GPU: python train.py --dataset_path ~/imagenet/train/ --device_targe GPU
@ -103,7 +101,7 @@ MobileNetV3总体网络架构如下
Training results are saved in the example path. Checkpoints are saved in `./checkpoint` by default, and training logs are redirected to `./train/train.log`, as shown below:
```
```text
epoch:[ 0/200], step:[ 624/ 625], loss:[5.258/5.258], time:[140412.236], lr:[0.100]
epoch time:140522.500, per step time:224.836, avg loss:5.258
epoch:[ 1/200], step:[ 624/ 625], loss:[3.917/3.917], time:[138221.250], lr:[0.200]
@ -120,7 +118,7 @@ epoch time:138331.250, per step time:221.330, avg loss:3.917
### Launch
```
```bash
# Inference example
python:
GPU: python eval.py --dataset_path ~/imagenet/val/ --checkpoint_path mobilenet_199.ckpt --device_targe GPU
@ -135,7 +133,7 @@ epoch time:138331.250, per step time:221.330, avg loss:3.917
Inference results are saved in the example path; the following results can be found in `val.log`:
```
```text
result:{'acc':0.71976314102564111} ckpt=/path/to/checkpoint/mobilenet-200_625.ckpt
```
@ -143,7 +141,7 @@ result:{'acc':0.71976314102564111} ckpt=/path/to/checkpoint/mobilenet-200_625.ck
Modify `export_mode` and `export_file` in the `src/config.py` file, then run `export.py`.
```
```bash
python export.py --device_target [PLATFORM] --checkpoint_path [CKPT_PATH]
```
@ -173,7 +171,7 @@ python export.py --device_target [PLATFORM] --checkpoint_path [CKPT_PATH]
# Description of Random Situation
The seed inside the "create_dataset" function is set in [dataset.py](http://dataset.py/), and the random seed in train.py is also used.
The seed inside the "create_dataset" function is set in `dataset.py`, and the random seed in train.py is also used.
# ModelZoo Homepage

@ -52,7 +52,7 @@
- Framework
- [MindSpore](https://www.mindspore.cn/install)
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutory/training/en/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
- Install MindSpore
- Install [pybind11](https://github.com/pybind/pybind11)

@ -491,7 +491,7 @@ result:{'top_5_accuracy':0.9342589628681178, 'top_1_accuracy':0.768065781049936}
# Description of Random Situation
The seed inside the "create_dataset" function is set in [dataset.py](http://dataset.py/), and the random seed in train.py is also used.
The seed inside the "create_dataset" function is set in `dataset.py`, and the random seed in train.py is also used.
# ModelZoo Homepage

@ -22,7 +22,6 @@
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [RetinaFace Description](#contents)
RetinaFace is a face detection model proposed in 2019 that achieved the best results on the WiderFace dataset at that time. The full title of the paper is "RetinaFace: Single-stage Dense Face Localization in the Wild". Compared with S3FD and MTCNN, it brings a significant improvement and a higher recall rate for small faces, while multi-scale face detection remains difficult. To address these problems, RetinaFace uses a feature pyramid structure for feature fusion between different scales and adds an SSH module.
@ -33,6 +32,7 @@ Retinaface is a face detection model, which was proposed in 2019 and achieved th
RetinaFace needs a ResNet50 backbone to extract image features for detection. You can get the ResNet50 training script from our ModelZoo and modify the pad structure of ResNet50 according to the ResNet in ./src/network.py. Finally, train it on ImageNet2012 to get the ResNet50 pretrained model.
Steps:
1. Get the ResNet50 training script from our ModelZoo.
2. Modify the ResNet50 architecture according to the ResNet in ```./src/network.py```. (You can also leave the structure unchanged, but the accuracy will be 2-3 percentage points lower.)
3. Train ResNet50 on ImageNet2012.
@ -41,21 +41,20 @@ Steps:
Specifically, the RetinaFace network is based on RetinaNet. The network uses the feature pyramid structure of RetinaNet and adds an SSH structure. Besides the traditional detection branch, a keypoint prediction branch and a self-supervision branch are added to the network. The paper indicates that these two branches can improve the performance of the model. Here we do not implement the self-supervision branch.
# [Dataset](#contents)
Dataset used: [WIDERFACE](<http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/WiderFace_Results.html>)
Dataset used: [WIDERFACE](http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/WiderFace_Results.html)
Dataset acquisition:
1. Get the dataset and annotations from [here](<https://github.com/peteryuX/retinaface-tf2>).
2. Get the eval ground truth label from [here](<https://github.com/peteryuX/retinaface-tf2/tree/master/widerface_evaluate/ground_truth>).
1. Get the dataset and annotations from [here](https://github.com/peteryuX/retinaface-tf2).
2. Get the eval ground truth label from [here](https://github.com/peteryuX/retinaface-tf2/tree/master/widerface_evaluate/ground_truth).
- Dataset size: 3.42G, 32,203 colorful images
- Train: 1.36G, 12,800 images
- Val: 345.95M, 3,226 images
- Test: 1.72G, 16,177 images
# [Environment Requirements](#contents)
- Hardware: GPU
@ -63,10 +62,8 @@ Dataset acquisition:
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)
@ -89,13 +86,11 @@ After installing MindSpore via the official website and download the dataset, yo
bash run_standalone_gpu_eval.sh 0
```
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```
```text
├── model_zoo
├── README.md // descriptions about all the models
├── retinaface
@ -167,14 +162,13 @@ Parameters for both training and evaluation can be set in config.py
'val_gt_dir': './data/ground_truth/', # Path of val set ground_truth
```
## [Training Process](#contents)
### Training
- running on GPU
```
```bash
export CUDA_VISIBLE_DEVICES=0
python train.py > train.log 2>&1 &
```
@ -183,12 +177,11 @@ Parameters for both training and evaluation can be set in config.py
After training, you'll get some checkpoint files under the folder `./checkpoint/` by default.
### Distributed Training
- running on GPU
```
```bash
bash scripts/run_distribute_gpu_train.sh 4 0,1,2,3
```
@ -196,7 +189,6 @@ Parameters for both training and evaluation can be set in config.py
After training, you'll get some checkpoint files under the folder `./checkpoint/ckpt_0/` by default.
## [Evaluation Process](#contents)
### Evaluation
@ -205,14 +197,14 @@ Parameters for both training and evaluation can be set in config.py
Before running the command below, please check the checkpoint path used for evaluation. Please set the checkpoint path to be the absolute full path in src/config.py, e.g., "username/retinaface/checkpoint/ckpt_0/RetinaFace-100_402.ckpt".
```
```bash
export CUDA_VISIBLE_DEVICES=0
python eval.py > eval.log 2>&1 &
```
The above python command will run in the background. You can view the results through the file "eval.log". The result of the test dataset will be as follows:
```
```text
# grep "Val AP" eval.log
Easy Val AP : 0.9422
Medium Val AP : 0.9325
@ -221,23 +213,21 @@ Parameters for both training and evaluation can be set in config.py
OR,
```
```bash
bash run_standalone_gpu_eval.sh 0
```
The above python command will run in the background. You can view the results through the file "eval/eval.log". The result of the test dataset will be as follows:
```
```text
# grep "Val AP" eval.log
Easy Val AP : 0.9422
Medium Val AP : 0.9325
Hard Val AP : 0.8900
```
# [Model Description](#contents)
## [Performance](#contents)
### Evaluation Performance
@ -260,14 +250,13 @@ Parameters for both training and evaluation can be set in config.py
| Checkpoint for Fine tuning | 336.3M (.ckpt file) |
| Scripts | [retinaface script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/retinaface) |
## [How to use](#contents)
### Continue Training on the Pretrained Model
- running on GPU
```
```python
# Load dataset
ds_train = create_dataset(training_dataset, cfg, batch_size, multiprocessing=True, num_worker=cfg['num_workers'])
@ -305,6 +294,6 @@ Parameters for both training and evaluation can be set in config.py
In train.py, we set the seed with the setup_seed function.
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).

@ -67,8 +67,8 @@ RetinaFace使用ResNet50骨干提取图像特征进行检测。从ModelZoo获取
- Framework
- [MindSpore](https://www.mindspore.cn/install)
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/api/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
# Quick Start

@ -53,7 +53,7 @@ Dataset used: COCO2017
## [Mixed Precision](#contents)
The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
# [Environment Requirements](#contents)
@ -68,8 +68,8 @@ To run the python scripts in the repository, you need to prepare the environment
- opencv-python 4.3.0.36
- pycocotools 2.0
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)

@ -20,8 +20,8 @@
- [Inference Performance](#inference-performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# [YOLOv4 Description](#contents)
YOLOv4 is a state-of-the-art detector which is faster (FPS) and more accurate (MS COCO AP50...95 and AP50) than all available alternative detectors.
YOLOv4 has verified a large number of features and selected for use those that improve the accuracy of both the classifier and the detector.
These features can be used as best-practice for future studies and developments.
@ -39,7 +39,8 @@ Dataset support: [MS COCO] or datasetd with the same format as MS COCO
Annotation support: [MS COCO] or annotation as the same format as MS COCO
- The directory structure is as follows; the names of directories and files are user-defined:
```
```text
├── dataset
├── YOLOv4
├── annotations
@ -55,6 +56,7 @@ Annotation support: [MS COCO] or annotation as the same format as MS COCO
└─picturen.jpg
```
We suggest that users use the MS COCO dataset to experience our model;
other datasets need to use the same format as MS COCO.
@ -63,15 +65,16 @@ other datasets need to use the same format as MS COCO.
- Hardware: Ascend
- Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Framework
- [MindSpore](https://www.mindspore.cn/)
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)
After installing MindSpore via the official website, you can start training and evaluation as follows:
```
```text
# The cspdarknet53_backbone.ckpt in the following script is obtained by training cspdarknet53 as described in the paper.
# The training_shape parameter defines the image shape for the network; the default is
[416, 416],
@ -88,7 +91,7 @@ After installing MindSpore via the official website, you can start training and
# It means 11 kinds of shapes are used as the input shape, or it can be set to one specific shape.
```
```
```bash
#run training example(1p) by python command
python train.py \
--data_dir=./dataset/xxx \
@ -102,17 +105,17 @@ python train.py \
--lr_scheduler=cosine_annealing > log.txt 2>&1 &
```
```
```bash
# standalone training example(1p) by shell script
sh run_standalone_train.sh dataset/xxx cspdarknet53_backbone.ckpt
```
```
```bash
# For Ascend device, distributed training example(8p) by shell script
sh run_distribute_train.sh dataset/xxx cspdarknet53_backbone.ckpt rank_table_8p.json
```
```
```bash
# run evaluation by python command
python eval.py \
--data_dir=./dataset/xxx \
@ -120,7 +123,7 @@ python eval.py \
--testing_shape=416 > log.txt 2>&1 &
```
```
```bash
# run evaluation by shell script
sh run_eval.sh dataset/xxx checkpoint/xxx.ckpt
```
@ -128,7 +131,8 @@ sh run_eval.sh dataset/xxx checkpoint/xxx.ckpt
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```
```text
└─yolov4
├─README.md
├─mindspore_hub_conf.py # config for mindspore hub
@ -158,8 +162,10 @@ sh run_eval.sh dataset/xxx checkpoint/xxx.ckpt
```
## [Script Parameters](#contents)
Major parameters of train.py are as follows:
```
```text
optional arguments:
-h, --help show this help message and exit
--device_target device where the code will be implemented: "Ascend", default is "Ascend"
@ -219,16 +225,21 @@ optional arguments:
```
## [Training Process](#contents)
YOLOv4 can be trained from scratch or with the backbone named cspdarknet53.
Cspdarknet53 is a classifier that can be trained on a dataset such as ImageNet (ILSVRC2012).
It is easy for users to train cspdarknet53: just replace the backbone of the ResNet50 classifier with cspdarknet53.
ResNet50 is easy to get from the MindSpore ModelZoo.
### Training
For Ascend device, standalone training example(1p) by shell script
```
```bash
sh run_standalone_train.sh dataset/coco2017 cspdarknet53_backbone.ckpt
```
```
```text
python train.py \
--data_dir=/dataset/xxx \
--pretrained_backbone=cspdarknet53_backbone.ckpt \
@ -240,10 +251,12 @@ python train.py \
--training_shape=416 \
--lr_scheduler=cosine_annealing > log.txt 2>&1 &
```
The Python command above will run in the background; you can view the results through the file log.txt.
After training, you'll get some checkpoint files under the outputs folder by default. The loss value will be achieved as follows:
```
```text
# grep "loss:" train/log.txt
2020-10-16 15:00:37,483:INFO:epoch[0], iter[0], loss:8248.610352, 0.03 imgs/sec, lr:2.0466639227834094e-07
@ -259,13 +272,16 @@ After training, you'll get some checkpoint files under the outputs folder by def
```
### Distributed Training
For Ascend device, distributed training example(8p) by shell script
```
```bash
sh run_distribute_train.sh dataset/coco2017 cspdarknet53_backbone.ckpt rank_table_8p.json
```
The above shell script will run distributed training in the background. You can view the results through the file train_parallel[X]/log.txt. The loss values will be as follows:
```
```text
# distribute training result(8p, shape=416)
...
2020-10-16 14:58:25,142:INFO:epoch[0], iter[1000], loss:242.509259, 388.73 imgs/sec, lr:0.00032783843926154077
@ -286,7 +302,7 @@ The above shell script will run distribute training in the background. You can v
```
```
```text
# distribute training result(8p, dynamic shape)
...
2020-10-16 20:40:17,148:INFO:epoch[0], iter[800], loss:283.765033, 248.93 imgs/sec, lr:0.00026233625249005854
@ -305,12 +321,11 @@ The above shell script will run distribute training in the background. You can v
...
```
## [Evaluation Process](#contents)
### Valid
```
```bash
python eval.py \
--data_dir=./dataset/coco2017 \
--pretrained=yolov4.ckpt \
@ -320,7 +335,8 @@ sh run_eval.sh dataset/coco2017 checkpoint/yolov4.ckpt
```
The above python command will run in the background. You can view the results through the file "log.txt". The mAP of the test dataset will be as follows:
```
```text
# log.txt
=============coco eval reulst=========
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.442
@ -336,8 +352,10 @@ The above python command will run in the background. You can view the results th
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.638
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.717
```
### Test-dev
```
```bash
python test.py \
--data_dir=./dataset/coco2017 \
--pretrained=yolov4.ckpt \
@ -345,11 +363,13 @@ python test.py \
OR
sh run_test.sh dataset/coco2017 checkpoint/yolov4.ckpt
```
The predict_xxx.json will be found in test/outputs/%Y-%m-%d_time_%H_%M_%S/.
Rename the file predict_xxx.json to detections_test-dev2017_yolov4_results.json and compress it to detections_test-dev2017_yolov4_results.zip
Submit file detections_test-dev2017_yolov4_results.zip to the MS COCO evaluation server for the test-dev2019 (bbox) https://competitions.codalab.org/competitions/20794#participate
Submit file detections_test-dev2017_yolov4_results.zip to the MS COCO evaluation server for the test-dev2019 (bbox) <https://competitions.codalab.org/competitions/20794#participate>
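A small helper, provided only as a convenience sketch for the rename-and-compress step above; the run directory and the `xxx` suffix are placeholders that depend on your actual test.py output.

```python
import os
import zipfile

out_dir = "test/outputs/2020-10-16_time_15_00_00"           # example run folder
src = os.path.join(out_dir, "predict_xxx.json")              # produced by test.py
dst = os.path.join(out_dir, "detections_test-dev2017_yolov4_results.json")
os.rename(src, dst)

# Compress the renamed file for submission to the MS COCO evaluation server.
with zipfile.ZipFile(dst.replace(".json", ".zip"), "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(dst, arcname="detections_test-dev2017_yolov4_results.json")
```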
You will get results like the following at the end of the "View scoring output log" file.
```
```text
overall performance
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.447
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.642
@ -364,9 +384,11 @@ overall performance
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.627
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.711
```
## [Convert Process](#contents)
### Convert
If you want to infer the network on Ascend 310, you should convert the model to AIR:
```python
@ -378,6 +400,7 @@ python src/export.py --pretrained=[PRETRAINED_BACKBONE] --batch_size=[BATCH_SIZE
## [Performance](#contents)
### Evaluation Performance
YOLOv4 on 118K images(The annotation and data format must be the same as coco2017)
| Parameters | YOLOv4 |
@ -394,9 +417,10 @@ YOLOv4 on 118K images(The annotation and data format must be the same as coco201
| Speed | 1p 53FPS 8p 390FPS(shape=416) 220FPS(dynamic shape) |
| Total time | 48h(dynamic shape) |
| Checkpoint for Fine tuning | about 500M (.ckpt file) |
| Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/ |
| Scripts | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/> |
### Inference Performance
YOLOv4 on 20K images(The annotation and data format must be the same as coco test2017 )
| Parameters | YOLOv4 |
@ -416,4 +440,5 @@ In dataset.py, we set the seed inside ```create_dataset``` function.
In var_init.py, we set the seed for weight initialization.
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).

@ -52,7 +52,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
- Install [MindSpore](https://www.mindspore.cn/install/en).
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
## Software

@ -550,8 +550,8 @@ The comparisons between MASS and other baseline methods in terms of PPL on Corne
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
## Requirements
@ -562,7 +562,7 @@ subword-nmt
rouge
```
<https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html>
<https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html>
# Get started
@ -624,7 +624,7 @@ Get the log and output files under the path `./train_mass_*/`, and the model fil
## Inference
If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html).
If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html).
For inference, first configure the options in `config.json`:
- Assign the `test_dataset` under `dataset_config` node to the dataset path.

File diff suppressed because it is too large

@ -26,40 +26,42 @@
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [NCF Description](#contents)
NCF is a general framework for collaborative filtering of recommendations in which a neural network architecture is used to model user-item interactions. Unlike traditional models, NCF does not resort to Matrix Factorization (MF) with an inner product on latent features of users and items. It replaces the inner product with a multi-layer perceptron that can learn an arbitrary function from data.
[Paper](https://arxiv.org/abs/1708.05031): He X, Liao L, Zhang H, et al. Neural collaborative filtering[C]//Proceedings of the 26th international conference on world wide web. 2017: 173-182.
# [Model Architecture](#contents)
Two instantiations of NCF are Generalized Matrix Factorization (GMF) and Multi-Layer Perceptron (MLP). GMF applies a linear kernel to model the latent feature interactions, and MLP uses a nonlinear kernel to learn the interaction function from data. NeuMF is a fused model of GMF and MLP to better model the complex user-item interactions, and unifies the strengths of linearity of MF and non-linearity of MLP for modeling the user-item latent structures. NeuMF allows GMF and MLP to learn separate embeddings, and combines the two models by concatenating their last hidden layer. [neumf_model.py](neumf_model.py) defines the architecture details.
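As a schematic only (not the repository's neumf_model.py), the fusion described above can be pictured like this, assuming GMF and MLP sub-networks that each return their last hidden vector:

```python
from mindspore import nn, ops

class NeuMF(nn.Cell):
    """Illustrative fusion of GMF and MLP by concatenating their last hidden layers."""
    def __init__(self, gmf, mlp, gmf_dim, mlp_dim):
        super().__init__()
        self.gmf = gmf                       # sub-network producing a GMF latent vector
        self.mlp = mlp                       # sub-network producing an MLP hidden vector
        self.concat = ops.Concat(axis=1)
        self.predict = nn.Dense(gmf_dim + mlp_dim, 1)

    def construct(self, user, item):
        # GMF and MLP learn separate embeddings inside their own sub-networks;
        # their last hidden layers are concatenated before the final score.
        fused = self.concat((self.gmf(user, item), self.mlp(user, item)))
        return self.predict(fused)
```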
# [Dataset](#contents)
The [MovieLens datasets](http://files.grouplens.org/datasets/movielens/) are used for model training and evaluation. Specifically, we use two datasets: **ml-1m** (short for MovieLens 1 million) and **ml-20m** (short for MovieLens 20 million).
### ml-1m
## ml-1m
ml-1m dataset contains 1,000,209 anonymous ratings of approximately 3,706 movies made by 6,040 users who joined MovieLens in 2000. All ratings are contained in the file "ratings.dat" without header row, and are in the following format:
```
```cpp
UserID::MovieID::Rating::Timestamp
```
- UserIDs range between 1 and 6040.
- MovieIDs range between 1 and 3952.
- Ratings are made on a 5-star scale (whole-star ratings only).
### ml-20m
- UserIDs range between 1 and 6040.
- MovieIDs range between 1 and 3952.
- Ratings are made on a 5-star scale (whole-star ratings only).
## ml-20m
ml-20m dataset contains 20,000,263 ratings of 26,744 movies by 138493 users. All ratings are contained in the file "ratings.csv". Each line of this file after the header row represents one rating of one movie by one user, and has the following format:
```
```text
userId,movieId,rating,timestamp
```
- The lines within this file are ordered first by userId, then, within user, by movieId.
- Ratings are made on a 5-star scale, with half-star increments (0.5 stars - 5.0 stars).
- The lines within this file are ordered first by userId, then, within user, by movieId.
- Ratings are made on a 5-star scale, with half-star increments (0.5 stars - 5.0 stars).
In both datasets, the timestamp is represented in seconds since midnight Coordinated Universal Time (UTC) of January 1, 1970. Each user has at least 20 ratings.
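For illustration, a small standard-library sketch of reading the two rating formats described above; the file paths are placeholders.

```python
import csv

def read_ml_1m(path="ratings.dat"):
    # ml-1m: UserID::MovieID::Rating::Timestamp, no header row
    with open(path, encoding="utf-8") as f:
        for line in f:
            user, movie, rating, ts = line.strip().split("::")
            yield int(user), int(movie), float(rating), int(ts)

def read_ml_20m(path="ratings.csv"):
    # ml-20m: userId,movieId,rating,timestamp, with a header row
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield (int(row["userId"]), int(row["movieId"]),
                   float(row["rating"]), int(row["timestamp"]))
```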
@ -67,11 +69,9 @@ In both datasets, the timestamp is represented in seconds since midnight Coordin
## Mixed Precision
The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
# [Environment Requirements](#contents)
- Hardware: Ascend/GPU
@ -79,10 +79,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)
@ -102,14 +100,11 @@ sh scripts/run_train.sh rank_table.json
sh run_eval.sh
```
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```
```text
├── ModelZoo_NCF_ME
├── README.md // descriptions about NCF
├── scripts
@ -192,9 +187,8 @@ Parameters for both training and evaluation can be set in config.py.
HR:0.6846,NDCG:0.410
```
# [Model Description](#contents)
## [Performance](#contents)
### Evaluation Performance
@ -213,7 +207,6 @@ Parameters for both training and evaluation can be set in config.py.
| Speed | 1pc: 0.575 ms/step |
| Total time | 1pc: 5 mins |
### Inference Performance
| Parameters | Ascend |
@ -228,14 +221,14 @@ Parameters for both training and evaluation can be set in config.py.
| Accuracy | HR:0.6846,NDCG:0.410 |
## [How to use](#contents)
### Inference
If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html). The following steps give a simple example:
### Inference
https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html
If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html). The following steps give a simple example:
<https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference.html>
```
```python
# Load unseen dataset for inference
dataset = dataset.create_dataset(cfg.data_path, 1, False)
@ -256,10 +249,9 @@ https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html
print("accuracy: ", acc)
```
### Continue Training on the Pretrained Model
```
```python
# Load dataset
dataset = create_dataset(cfg.data_path, cfg.epoch_size)
batch_num = dataset.get_dataset_size()
@ -291,12 +283,10 @@ https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html
print("train success")
```
# [Description of Random Situation](#contents)
In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).

@ -32,7 +32,7 @@ FCN-4 is a convolutional neural network architecture, its name FCN-4 comes from
### Mixed Precision
The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching reduce precision.
## [Environment Requirements](#contents)
@ -42,8 +42,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
## [Quick Start](#contents)

@ -90,8 +90,8 @@ We use about 91K face images as training dataset and 11K as evaluating dataset i
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Script Description](#contents)

@ -74,8 +74,8 @@ We use about 13K images as training dataset and 3K as evaluating dataset in this
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Script Description](#contents)

Some files were not shown because too many files have changed in this diff
