Fix some documentation.

ISSUE=4611579

git-svn-id: https://svn.baidu.com/idl/trunk/paddle@1473 1ad973e4-5ce8-4261-8a94-b56d1f490c56
avx_docs
dangqingqing 9 years ago
parent 80790017e2
commit 7f1d6c5a75

@@ -45,5 +45,5 @@ sphinx_add_target(paddle_docs
 ${SPHINX_HTML_DIR})
 add_dependencies(paddle_docs
-gen_proto_py
-paddle_doxygen_docs)
+gen_proto_py)
+#paddle_doxygen_docs)

@@ -173,7 +173,7 @@ python -m paddle.utils.plotcurve -i $log > plot.png
 - The script `plotcurve.py` requires the python module of `matplotlib`, so if it fails, maybe you need to install `matplotlib`.
-After training finishes, the training and testing error curve will be saved to `plot.png` using `plotcurve.py` script. An example of the plot is shown below:
+After training finishes, the training and testing error curves will be saved to `plot.png` using `plotcurve.py` script. An example of the plot is shown below:
 <center>![Training and testing curves.](./plot.png)</center>

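For context on the `plotcurve.py` step touched by the hunk above, here is a minimal, illustrative sketch of plotting training and testing error curves with `matplotlib`. The log format matched by the regex is an assumption made for this example only; it is not the actual output format of PaddlePaddle or of `paddle.utils.plotcurve`.

```python
# Illustrative sketch only -- not the actual paddle.utils.plotcurve module.
# Assumes a hypothetical log format "pass=N train_error=X test_error=Y".
import re
import sys

import matplotlib
matplotlib.use('Agg')          # render without a display, as on a training server
import matplotlib.pyplot as plt

LINE = re.compile(r'pass=(\d+)\s+train_error=([\d.]+)\s+test_error=([\d.]+)')


def plot_curves(log_path, out_path='plot.png'):
    passes, train_err, test_err = [], [], []
    with open(log_path) as f:
        for line in f:
            m = LINE.search(line)
            if m:
                passes.append(int(m.group(1)))
                train_err.append(float(m.group(2)))
                test_err.append(float(m.group(3)))
    plt.plot(passes, train_err, label='train error')
    plt.plot(passes, test_err, label='test error')
    plt.xlabel('pass')
    plt.ylabel('classification error')
    plt.legend()
    plt.savefig(out_path)


if __name__ == '__main__':
    plot_curves(sys.argv[1])
```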
@@ -1,6 +1,6 @@
 # Model Zoo - ImageNet #
-[ImageNet](http://www.image-net.org/) is a popular dataset for generic object classification. This tutorial provided convolutional neural network(CNN) models for ImageNet.
+[ImageNet](http://www.image-net.org/) is a popular dataset for generic object classification. This tutorial provides convolutional neural network(CNN) models for ImageNet.
 ## ResNet Introduction
@@ -48,11 +48,11 @@ We present three ResNet models, which are converted from the models provided by
 ## ResNet Model
-See ```demo/model_zoo/resnet/resnet.py```. This confgiure contains network of 50, 101 and 152 layers. You can specify layer number by adding argument like this ```--config_args=layer_num=50``` in command line arguments.
+See ```demo/model_zoo/resnet/resnet.py```. This config contains network of 50, 101 and 152 layers. You can specify layer number by adding argument like ```--config_args=layer_num=50``` in command line arguments.
 ### Network Visualization
-You can get a diagram of ResNet network by running the following command. The script generates dot file and then converts dot file to PNG file, which uses installed draw_dot tool in our server. If you can not access the server, just install graphviz to convert dot file.
+You can get a diagram of ResNet network by running the following commands. The script generates dot file and then converts dot file to PNG file, which uses installed draw_dot tool in our server. If you can not access the server, just install graphviz to convert dot file.
 ```
 cd demo/model_zoo/resnet
@@ -190,8 +190,7 @@ Second, specify layers to extract features in `Outputs()` of `resnet.py`. For ex
 Outputs("res5_3_branch2c_conv", "res5_3_branch2c_bn")
 ```
-Third, specify model path and output directory in `extract_fea_c++.sh
-`, and then run following commands
+Third, specify model path and output directory in `extract_fea_c++.sh`, and then run the following commands.
 ```
 cd demo/model_zoo/resnet

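As a side note on the `--config_args=layer_num=50` usage mentioned in the ResNet hunks above, the snippet below is a hypothetical illustration of the `key=value[,key=value]` convention. Inside a real PaddlePaddle config file this parsing is handled for you and `resnet.py` simply reads the resulting `layer_num`; the helper here exists only to make the convention concrete.

```python
# Hypothetical illustration of the --config_args=key=value,key=value convention;
# a real PaddlePaddle config does not need this helper.
def parse_config_args(config_args):
    """Turn 'layer_num=50,batch_size=64' into {'layer_num': '50', 'batch_size': '64'}."""
    args = {}
    if not config_args:
        return args
    for item in config_args.split(','):
        key, _, value = item.partition('=')
        args[key.strip()] = value.strip()
    return args


args = parse_config_args('layer_num=50')
layer_num = int(args.get('layer_num', 50))   # pick the 50-layer ResNet by default
assert layer_num in (50, 101, 152), 'resnet.py defines 50-, 101- and 152-layer networks'
```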
@@ -10,7 +10,7 @@ customized, with sacrificing the efficiency only a little. This is extremly
 useful when you have to dynamically generate certain kinds of data according to,
 for example, the training performance.
-Besides, users also can also customize a C++ :code:`DataProvider` for a more
+Besides, users also can customize a C++ :code:`DataProvider` for a more
 complex usage, or for a higher efficiency.
 The following parameters are required to define in the PaddlePaddle network

@@ -17,10 +17,10 @@ how to write a simple PyDataProvider.
 MNIST is a handwriting classification data set. It contains 70,000 digital
 grayscale images. Labels of the training sample range from 0 to 9. All the
-images have been size-normalized and centered into images with a same size
+images have been size-normalized and centered into images with the same size
 of 28 x 28 pixels.
-A small part of the original data as an example can be found in the path below:
+A small part of the original data as an example is shown as below:
 .. literalinclude:: ../../../doc_cn/ui/data_provider/mnist_train.txt
@@ -31,10 +31,9 @@ Just write path of the above data into train.list. It looks like this:
 .. literalinclude:: ../../../doc_cn/ui/data_provider/train.list
-The corresponding dataprovider can be found in the path below:
+The corresponding dataprovider is shown as below:
 .. literalinclude:: ../../../doc_cn/ui/data_provider/mnist_provider.py
-  :linenos:
 The first line imports PyDataProvider2 package.
 The main function is the process function, that has two parameters.
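Since `mnist_provider.py` is only pulled in via `literalinclude` here, a minimal sketch of such a provider follows, assuming each line of the data file is `<label>;<space-separated pixel values>` as in the MNIST sample above. The parsing details are illustrative, not a copy of the shipped provider.

```python
# A minimal PyDataProvider2 sketch, not the exact mnist_provider.py from the docs.
# Assumes each line of the text file looks like "<label>;<784 space-separated pixels>".
from paddle.trainer.PyDataProvider2 import provider, dense_vector, integer_value


@provider(input_types=[dense_vector(28 * 28), integer_value(10)])
def process(settings, filename):
    # 'settings' carries provider state; 'filename' is one entry from train.list.
    with open(filename) as f:
        for line in f:
            label, pixels = line.split(';')
            features = [float(x) for x in pixels.split()]
            # Yield one sample per line, in the same order as input_types.
            yield features, int(label)
```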
@@ -45,8 +44,8 @@ This parameter is passed to the process function by PaddlePaddle.
 :code:`@provider` is a Python
 `Decorator <http://www.learnpython.org/en/Decorators>`_ .
 It sets some properties to DataProvider, and constructs a real PaddlePaddle
-DataProvider from a very sample user implemented python function. It does not
-matter if you are not familiar with `Decorator`_. You can keep it sample by
+DataProvider from a very simple user implemented python function. It does not
+matter if you are not familiar with `Decorator`_. You can keep it simple by
 just taking :code:`@provider` as a fixed mark above the provider function you
 implemented.
@@ -59,9 +58,9 @@ document of `input_types`_ for more details.
 The process method is the core part to construct a real DataProvider in
 PaddlePaddle. It implements how to open the text file, how to read one sample
-from the original text file, converted them into `input_types`_, and give them
+from the original text file, convert them into `input_types`_, and give them
 back to PaddlePaddle process at line 23.
-Note that data yields by the process function must follow a same order that
+Note that data yielded by the process function must follow the same order that
 `input_types`_ are defined.
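To make the ordering rule in the hunk above concrete, the sketch below contrasts yielding positionally (which must follow the `input_types` order) with yielding a dict keyed by field name. The field names and the in-memory samples are made up for the example, and the dict form is an assumption about the PyDataProvider2 API rather than something stated in this hunk.

```python
# Sketch of the ordering rule; the two samples below are made-up stand-ins.
from paddle.trainer.PyDataProvider2 import provider, dense_vector, integer_value

SAMPLES = [([0.0] * 784, 7), ([1.0] * 784, 3)]   # (pixel vector, label) pairs


# Positional form: every yield must follow the input_types order exactly.
@provider(input_types=[dense_vector(784), integer_value(10)])
def process_positional(settings, filename):
    for pixels, label in SAMPLES:
        yield pixels, label            # dense_vector first, then the integer label


# Dict form (assumed API): naming each field removes the dependence on ordering.
@provider(input_types={'pixel': dense_vector(784), 'label': integer_value(10)})
def process_named(settings, filename):
    for pixels, label in SAMPLES:
        yield {'label': label, 'pixel': pixels}   # order no longer matters
```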
@@ -111,7 +110,7 @@ The corresponding data provider can be found in the path below:
 .. literalinclude:: ../../../doc_cn/ui/data_provider/sentimental_provider.py
-This data provider for sequential model is a little bit complex than that
+This data provider for sequential model is a little more complex than that
 for MINST dataset.
 A new initialization method is introduced here.
 The method :code:`on_init` is configured to DataProvider by :code:`@provider`'s
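For readers who cannot open `sentimental_provider.py` from this diff, here is a hedged sketch of the `on_init`/`init_hook` pattern the surrounding text describes. The word-dictionary handling, the input types, and the tab-separated file layout are assumptions made for illustration, not the shipped provider.

```python
# Hedged sketch of an init_hook-style provider for sequence data; details are assumed.
from paddle.trainer.PyDataProvider2 import (provider, integer_value,
                                            integer_value_sequence)


def on_init(settings, dictionary, **kwargs):
    # Runs once before data loading; anything stored on `settings`
    # is visible later inside process().
    settings.word_dict = dictionary
    settings.input_types = [
        integer_value_sequence(len(dictionary)),  # word ids of one sentence
        integer_value(2),                         # binary sentiment label
    ]


@provider(init_hook=on_init)
def process(settings, filename):
    # Assumed file layout: "<label>\t<space-separated words>" per line.
    with open(filename) as f:
        for line in f:
            label, sentence = line.rstrip('\n').split('\t')
            word_ids = [settings.word_dict.get(w, 0) for w in sentence.split()]
            yield word_ids, int(label)
```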
@@ -243,7 +242,7 @@ parameters which your init_hook does not use.
 cache
 +++++
 DataProvider provides two simple cache strategy. They are
-* CacheType.NO_CACHE means do not cache any data, then data is read runtime by
+* CacheType.NO_CACHE means do not cache any data, then data is read at runtime by
 the user implemented python module every pass.
 * CacheType.CACHE_PASS_IN_MEM means the first pass reads data by the user
 implemented python module, and the rest passes will directly read data from
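As a usage note on the two cache strategies listed in the hunk above, a minimal sketch of selecting one through the decorator follows, reusing the assumed `<label>;<pixels>` text layout from the MNIST example earlier; the file parsing is illustrative only.

```python
# Selecting a cache strategy on the decorator; a minimal sketch, assuming the
# same "<label>;<pixels>" text layout used in the MNIST example above.
from paddle.trainer.PyDataProvider2 import (provider, dense_vector, integer_value,
                                            CacheType)


# First pass reads from the text file; later passes replay the in-memory cache.
@provider(input_types=[dense_vector(28 * 28), integer_value(10)],
          cache=CacheType.CACHE_PASS_IN_MEM)
def process(settings, filename):
    with open(filename) as f:
        for line in f:
            label, pixels = line.split(';')
            yield [float(x) for x in pixels.split()], int(label)
```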

File diff suppressed because it is too large