To run the Python scripts in the repository, you need to prepare the environment as follows:
- Prepare the hardware environment with an Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to [ascend@huawei.com](mailto:ascend@huawei.com). Once approved, you can get access to the resources.
- Python and dependencies
    - Python 3.7
    - MindSpore 1.1.0
    - EasyDict
    - MXNet 1.6.0, only if running the script `param_convert.py`
- For more information, please check the resources below:
```text
  └─ README.md                 // descriptions about this repository
```
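Before running any of the scripts, it can help to verify that the dependencies listed above are importable. Below is a minimal sanity-check sketch; the file name `check_env.py` and the version expectations are assumptions based on the requirements list, not part of the repository:

```python
# check_env.py -- sanity-check the dependencies listed above (illustrative).
import sys

def check_environment():
    # Python 3.7 is the version listed in the requirements above.
    if sys.version_info[:2] != (3, 7):
        print(f"Warning: Python {sys.version_info[0]}.{sys.version_info[1]} "
              "detected; this repository lists Python 3.7")

    import mindspore
    print("MindSpore version:", mindspore.__version__)

    from easydict import EasyDict
    cfg = EasyDict({"batch_size": 32})  # configs in MindSpore model zoos use EasyDict
    print("EasyDict OK, batch_size =", cfg.batch_size)

    # MXNet is only required for param_convert.py, so treat it as optional.
    try:
        import mxnet
        print("MXNet version:", mxnet.__version__)
    except ImportError:
        print("MXNet not installed (only needed for param_convert.py)")

if __name__ == "__main__":
    check_environment()
```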
The script will run training in the background; you can view the results through the training log file as follows:
```text
epoch: 1 step: 40036, loss is 3.6232593
epoch time: 10048893.336 ms, per step time: 250.996 ms
epoch: 2 step: 40036, loss is 3.200775
epoch time: 9306154.456 ms, per step time: 232.445 ms
...
```
or as follows (eval_each_epoch = 1):
```text
epoch: 1 step: 40036, loss is 3.6232593
epoch time: 10048893.336 ms, per step time: 250.996 ms
Save the maximum accuracy checkpoint,the accuracy is 0.2629158669225848
...
```
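The `Save the maximum accuracy checkpoint` line suggests that, with `eval_each_epoch = 1`, the training script evaluates after every epoch and keeps the best-performing checkpoint. Below is a sketch of how such a callback could look in MindSpore; the class name, metric key, and checkpoint path are illustrative, not the repository's actual code:

```python
# Illustrative sketch of an "evaluate each epoch, keep the best checkpoint"
# callback; the repository's real implementation may differ.
from mindspore.train.callback import Callback
from mindspore.train.serialization import save_checkpoint

class SaveBestAccCallback(Callback):
    def __init__(self, model, eval_dataset, ckpt_path="best_acc.ckpt"):
        super().__init__()
        self.model = model              # mindspore.train.Model wrapping the net
        self.eval_dataset = eval_dataset
        self.ckpt_path = ckpt_path
        self.best_acc = 0.0

    def epoch_end(self, run_context):
        cb_params = run_context.original_args()
        # "top_1_accuracy" assumes the Model was built with that metric.
        acc = self.model.eval(self.eval_dataset)["top_1_accuracy"]
        if acc > self.best_acc:
            self.best_acc = acc
            save_checkpoint(cb_params.train_network, self.ckpt_path)
            print("Save the maximum accuracy checkpoint,the accuracy is", acc)
```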
Distributed training is launched through `scripts/train_distributed.sh`, for example:

```text
sh scripts/train_distributed.sh /home/rank_table.json /data/dataset/imagenet/ ..
```
The above shell script will run distributed training in the background. You can view the results through the file `train_parallel[X]/log.txt` as follows:
```text
train_parallel0/log:
epoch: 1 step 5004, loss is 4.5680037
epoch time: 2312519.441 ms, per step time: 462.134 ms
epoch: 2 step 5004, loss is 2.964888
epoch time: 1350398.913 ms, per step time: 269.864 ms
...
train_parallel1/log:
...
```
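Under the hood, `train_distributed.sh` starts one training process per device, and each process initializes MindSpore's data-parallel mode from the rank table. Below is a minimal sketch of that per-process setup, assuming Ascend devices and the usual `DEVICE_ID`/`RANK_SIZE` environment variables exported by the launch script; the function name is illustrative:

```python
# Illustrative sketch of the per-process distributed setup behind
# train_distributed.sh; variable and function names are assumptions.
import os
from mindspore import context
from mindspore.context import ParallelMode
from mindspore.communication.management import init, get_rank

def setup_distributed():
    device_id = int(os.getenv("DEVICE_ID", "0"))
    device_num = int(os.getenv("RANK_SIZE", "1"))
    context.set_context(mode=context.GRAPH_MODE,
                        device_target="Ascend",
                        device_id=device_id)
    if device_num > 1:
        init()  # reads RANK_TABLE_FILE, which the launch script exports
        context.set_auto_parallel_context(device_num=device_num,
                                          parallel_mode=ParallelMode.DATA_PARALLEL,
                                          gradients_mean=True)
        return get_rank()
    return 0
```

Each process then writes its own log under `train_parallel[X]/`, which is why the loss values differ slightly between logs: every device sees a different shard of the data.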
The usage of the evaluation script is as follows:

```text
sh scripts/eval.sh [device_id] [dataset_dir] [pretrained_ckpt]
```
For example, you can run the shell command below to launch the validation procedure.
```text
sh scripts/eval.sh 0 /data/dataset/imagenet/ pretrain/dpn-180_5004.ckpt
```
The above shell script will run evaluation in the background. You can view the results through the file `eval_log.txt`.
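Before launching evaluation, you may want to confirm that the checkpoint file loads correctly. Below is a small sketch; `inspect_checkpoint` is illustrative, and the default path simply mirrors the example command above:

```python
# Illustrative helper to sanity-check a checkpoint before evaluation.
from mindspore.train.serialization import load_checkpoint

def inspect_checkpoint(path="pretrain/dpn-180_5004.ckpt"):
    params = load_checkpoint(path)  # OrderedDict of name -> Parameter
    total = sum(p.data.asnumpy().size for p in params.values())
    print(f"{len(params)} tensors, {total / 1e6:.1f}M parameters in {path}")

if __name__ == "__main__":
    inspect_checkpoint()
```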
All results are validated at an image size of 224x224. The dataset preprocessing and training configurations are as described above.

### [Accuracy](#contents)

The `Pretrain` tag in the table above means that the model's weights were converted directly from MXNet without further training. By contrast, the `Fine tune` tag means that the model was fine-tuned after being converted from MXNet.
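The conversion itself is handled by `param_convert.py`, which is why MXNet appears in the requirements. Below is a rough sketch of the general idea, reading MXNet arrays and re-saving them as a MindSpore checkpoint; the name mapping here is a placeholder, since the real script must rename each weight to match the MindSpore DPN definition:

```python
# Illustrative sketch of MXNet -> MindSpore weight conversion; the real
# param_convert.py maps every parameter name onto the MindSpore DPN layout.
import mxnet as mx
from mindspore import Tensor
from mindspore.train.serialization import save_checkpoint

def convert(mx_params_file, out_ckpt="dpn_converted.ckpt"):
    mx_params = mx.nd.load(mx_params_file)  # dict of name -> NDArray
    ms_params = []
    for name, array in mx_params.items():
        # MXNet checkpoints prefix names with "arg:" or "aux:"; drop the prefix.
        ms_name = name.split(":", 1)[-1]
        ms_params.append({"name": ms_name, "data": Tensor(array.asnumpy())})
    save_checkpoint(ms_params, out_ckpt)  # accepts a list of {"name", "data"}
```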