From 0cc7d6eaf6be674ba1da4f197eca422ea2394891 Mon Sep 17 00:00:00 2001 From: tianbingsz Date: Thu, 17 Nov 2016 15:18:03 -0800 Subject: [PATCH 01/37] Update install_deps.rst --- doc_cn/build_and_install/cmake/install_deps.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc_cn/build_and_install/cmake/install_deps.rst b/doc_cn/build_and_install/cmake/install_deps.rst index 7fa4665a95..6d8727e329 100644 --- a/doc_cn/build_and_install/cmake/install_deps.rst +++ b/doc_cn/build_and_install/cmake/install_deps.rst @@ -1,4 +1,4 @@ 安装编译PaddlePaddle需要的依赖 ============================== -参见 `安装编译依赖 <../../../doc/build/build_from_source.html#install-dependencies>`_ +参见 `安装编译依赖 `_ From d980c2642fa8283563be012ab15df6b32f049981 Mon Sep 17 00:00:00 2001 From: tianbingsz Date: Thu, 17 Nov 2016 15:51:03 -0800 Subject: [PATCH 02/37] Update make_and_install.rst --- doc_cn/build_and_install/cmake/make_and_install.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc_cn/build_and_install/cmake/make_and_install.rst b/doc_cn/build_and_install/cmake/make_and_install.rst index 212b9c9352..8a390ef581 100644 --- a/doc_cn/build_and_install/cmake/make_and_install.rst +++ b/doc_cn/build_and_install/cmake/make_and_install.rst @@ -1,4 +1,4 @@ make和make install ================== -参见 `make和make install <../../../doc/build/build_from_source.html#build-and-install>`_ +参见 `make和make install `_ From af335da056e931085037ff2631df4bad7a34bd03 Mon Sep 17 00:00:00 2001 From: tianbingsz Date: Thu, 17 Nov 2016 16:34:19 -0800 Subject: [PATCH 03/37] Update index.md --- doc_cn/introduction/index.md | 36 ++++++++++++++++++------------------ 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/doc_cn/introduction/index.md b/doc_cn/introduction/index.md index 164cb7d494..d6efab0cf1 100644 --- a/doc_cn/introduction/index.md +++ b/doc_cn/introduction/index.md @@ -1,17 +1,17 @@ # 简介 -PaddlePaddle 是起源于百度的开源深度学习平台。它是简单易用的:你可以通过简单的十数行配置搭建经典的神经网络模型;它也是高效强大的:PaddlePaddle可以支撑复杂集群环境下超大模型的训练,令你受益于深度学习的前沿成果。在百度内部,已经有大量产品线使用了基于PaddlePaddle的深度学习技术。 +PaddlePaddle 源于百度的开源深度学习平台,有如下几个特点。首先,简单易用的:用户可以通过简单的十几行配置脚本搭建经典的神经网络模型。其次,高效强大的:PaddlePaddle可以支撑复杂集群环境下超大模型的训练,令你受益于深度学习的前沿成果。最后,在百度内部,已经有大量产品线使用了基于PaddlePaddle的深度学习技术。 -这份简短的介绍将像你展示如何利用PaddlePaddle解决一个经典的学习问题。 +这份简短的介绍将像你展示如何利用PaddlePaddle来解决一个经典的机器学习问题。 ## 1. 一个经典的任务 -让我们从一个基础问题开始:单变量的线性回归。问题假定观测到了一批二维空间上的点`(x, y) `,并且已知 `x` 和 `y` 之间存在着某种线性关系,我们的目标是通过观测数据还原这个线性关系。作为一个简单基础的模型,线性回归却有着广泛的应用场景。比如可以想象一个资产定价的简化场景,其中 `x` 对应于房屋的大小,`y` 对应于房屋价格。我们可以通过观察市场上房屋的情况获得二者之间的关系,从而为新房屋的定价提供参考。 +让我们从一个基础问题开始:单变量的线性回归。问题假定观测到了一批二维空间上的点`(x, y) `,并且已知 `x` 和 `y` 之间存在着某种线性关系,我们的目标是通过观测数据来学习这个线性关系。作为一个简单基础的模型,线性回归有着广泛的应用场景。以一个资产定价的问题为例,`x` 对应于房屋的大小,`y` 对应于房屋价格。我们可以通过观察市场上房屋销售的情况拟合 `x` 和 `y` 之间的关系,从而为新房屋的定价提供预测和参考。 ## 2. 准备数据 -假设变量 `X` 和 `Y` 的真实关系为: `Y = 2X + 0.3`,这里展示如何使用观测数据还原这一线性关系。如下Python代码将随机产生2000个观测点,它们将被用作PaddlePaddle的输入。产生PaddlePaddle的输入数据和写一段普通的Python脚本几乎一样,你唯一需要增加的就是定义输入数据的类型。 +假设变量 `X` 和 `Y` 的真实关系为: `Y = 2X + 0.3`,这里展示如何使用观测数据来拟合这一线性关系。首先,Python代码将随机产生2000个观测点,作为PaddlePaddle的输入。产生PaddlePaddle的输入数据和写一段普通的Python脚本几乎一样,你唯一需要增加的就是定义输入数据的类型。 ```python # -*- coding:utf-8 -*- @@ -29,7 +29,7 @@ def process(settings, input_file): ## 3. 
训练模型 -为了还原 `Y = 2X + 0.3`,我们先从一条随机的直线 `Y' = wX + b` 开始,然后利用观测数据调整 `w` 和 `b` 使得 `Y'` 和 `Y` 的差距不断减小,最终趋于相同。这个过程就是模型的训练过程,而 `w` 和 `b` 就是模型的参数,即我们的训练目标。 +为了还原 `Y = 2X + 0.3`,我们先从一条随机的直线 `Y' = wX + b` 开始,然后利用观测数据调整 `w` 和 `b` 使得 `Y'` 和 `Y` 的差距不断减小,最终趋于接近。这个过程就是模型的训练过程,而 `w` 和 `b` 就是模型的参数,即我们的训练目标。 在PaddlePaddle里,该模型的网络配置如下。 @@ -50,33 +50,33 @@ settings(batch_size=12, learning_rate=1e-3, learning_method=MomentumOptimizer()) # 3. 神经网络配置 x = data_layer(name='x', size=1) y = data_layer(name='y', size=1) -# 线性计算单元: y_predict = wx + b +# 线性计算网络层: y_predict = wx + b y_predict = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b')) -# 损失计算,度量 y_predict 和真实 y 之间的差距 +# 计算误差函数,即 y_predict 和真实 y 之间的距离 cost = regression_cost(input=y_predict, label=y) outputs(cost) ``` 这段简短的配置展示了PaddlePaddle的基本用法: -- 首先,第一部分定义了数据输入。一般情况下,PaddlePaddle先从一个文件列表里获得数据文件地址,然后交给用户自定义的函数(例如上面的`process`函数)进行读入和预处理从而得到真实输入。本文中由于输入数据是随机生成的不需要读输入文件,所以放一个空列表(`empty.list`)即可。 +- 第一部分定义了数据输入。一般情况下,PaddlePaddle先从一个文件列表里获得数据文件地址,然后交给用户自定义的函数(例如上面的`process`函数)进行读入和预处理从而得到真实输入。本文中由于输入数据是随机生成的不需要读输入文件,所以放一个空列表(`empty.list`)即可。 -- 第二部分主要是选择学习算法,它定义了模型参数如何改变。PaddlePaddle提供了很多优秀的学习算法,但这里使用一个简单的基于momentum的算法就足够了,它每次读取12个数据进行计算和模型更新。 +- 第二部分主要是选择学习算法,它定义了模型参数改变的规则。PaddlePaddle提供了很多优秀的学习算法,这里使用一个基于momentum的随机梯度下降(SGD)算法,该算法每批量(batch)读取12个采样数据进行随机梯度计算来更新更新。 -- 最后一部分是神经网络的配置。由于PaddlePaddle已经实现了丰富的网络单元(Layer),所以很多时候你需要做的只是声明正确的网络单元并把它们拼接起来。这里使用了三种网络单元: - - **数据层**:数据层 `data_layer` 是神经网络的入口,它读入数据并将它们传输到下游的其它单元。这里数据层有两个,分别对应于变量 `X` 和 `Y`。 - - **全连接层**:全连接层 `fc_layer` 是基础的计算单元,这里利用它建模变量之间的线性关系。计算单元是神经网络的核心,PaddlePaddle支持大量的计算单元和任意深度的网络连接,从而可以挖掘复杂的数据关系。 - - **回归损失层**:回归损失层 `regression_cost`是众多损失函数层的一种,它们在训练过程作为网络的出口,用来计算模型的表现,并指导模型参数的改变。 +- 最后一部分是神经网络的配置。由于PaddlePaddle已经实现了丰富的网络层,所以很多时候你需要做的只是定义正确的网络层并把它们连接起来。这里使用了三种网络单元: + - **数据层**:数据层 `data_layer` 是神经网络的入口,它读入数据并将它们传输到接下来的网络层。这里数据层有两个,分别对应于变量 `X` 和 `Y`。 + - **全连接层**:全连接层 `fc_layer` 是基础的计算单元,这里利用它建模变量之间的线性关系。计算单元是神经网络的核心,PaddlePaddle支持大量的计算单元和任意深度的网络连接,从而可以拟合任意的函数来学习复杂的数据关系。 + - **回归误差代价层**:回归误差代价层 `regression_cost`是众多误差代价函数层的一种,它们在训练过程作为网络的出口,用来计算模型的误差,是模型参数优化的目标函数。 -这样定义了网络结构并保存为`trainer_config.py`之后,运行训练命令即可: +定义了网络结构并保存为`trainer_config.py`之后,运行以下训练命令: ``` paddle train --config=trainer_config.py --save_dir=./output --num_passes=30 ``` -PaddlePaddle将在观测数据集上迭代训练30轮,并将每轮的模型结果存放在 `./output` 路径下。从输出日志可以看到,随着轮数增加损失函数的输出在不断的减小,这意味着模型在不断的改进,直到逼近真实解:` Y = 2X + 0.3 ` +PaddlePaddle将在观测数据集上迭代训练30轮,并将每轮的模型结果存放在 `./output` 路径下。从输出日志可以看到,随着轮数增加误差代价函数的输出在不断的减小,这意味着模型在训练数据上不断的改进,直到逼近真实解:` Y = 2X + 0.3 ` ## 4. 模型检验 -训练完成后,我们希望能够检验模型的好坏。一种常用的做法是用模型对另外一组数据进行预测,然后评价预测的效果。但在这个例子中,由于已经知道了真实答案,我们可以直接观察模型的参数是否符合预期来进行检验。 +训练完成后,我们希望能够检验模型的好坏。一种常用的做法是用学习的模型对另外一组测试数据进行预测,评价预测的效果。在这个例子中,由于已经知道了真实答案,我们可以直接观察模型的参数是否符合预期来进行检验。 PaddlePaddle将每个模型参数作为一个numpy数组单独存为一个文件,所以可以利用如下方法读取模型的参数。 @@ -94,9 +94,9 @@ print 'w=%.6f, b=%.6f' % (load('output/pass-00029/w'), load('output/pass-00029/b ```
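读出参数之后,也可以直接用它们对新的输入做预测。下面是一段示意代码(假设沿用上文定义的 `load` 函数,且第30轮模型保存在 `output/pass-00029` 目录下):

```python
# 用读出的 w 和 b 对新的 x 做预测(示意,沿用上文的 load 函数)
w = load('output/pass-00029/w')[0]
b = load('output/pass-00029/b')[0]
for x in [0.1, 0.5, 0.9]:
    print 'x=%.1f, y_predict=%.4f' % (x, w * x + b)  # 应接近真实关系 y = 2x + 0.3
```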
![训练过程中参数 w 和 b 逐渐逼近真实值的收敛曲线](./parameters.png)
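上图这样的收敛曲线可以用类似下面的脚本画出(仅为示意:假设已安装 matplotlib,训练共30轮且每轮参数都保存在 `output/pass-000xx` 目录下,并沿用上文定义的 `load` 函数):

```python
import matplotlib.pyplot as plt

# 读取每一轮保存的 w 和 b(示意)
ws = [load('output/pass-%05d/w' % i)[0] for i in xrange(30)]
bs = [load('output/pass-%05d/b' % i)[0] for i in xrange(30)]
plt.plot(ws, label='w')  # 真实值为 2
plt.plot(bs, label='b')  # 真实值为 0.3
plt.xlabel('pass')
plt.legend()
plt.savefig('parameters.png')
```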
-从图中可以看到,虽然 `w` 和 `b` 都使用随机值初始化,但在起初的几轮训练中它们都在快速逼近真实值,并且后续仍在不断改进,使得最终得到的模型几乎与真实模型重合。 +从图中可以看到,虽然 `w` 和 `b` 都使用随机值初始化,但在起初的几轮训练中它们都在快速逼近真实值,并且后续仍在不断改进,使得最终得到的模型几乎与真实模型一致。 -这样,我们就完成了对单变量线性回归问题的解决:将数据输入PaddlePaddle,训练模型,最后验证结果。 +这样,我们用PaddlePaddle解决了单变量线性回归问题, 包括数据输入,模型训练和最后的结果验证。 ## 5. 推荐后续阅读 From 07dcf7bfc69166fcf889a82533cd9eca15596246 Mon Sep 17 00:00:00 2001 From: tianbingsz Date: Fri, 18 Nov 2016 13:32:56 -0800 Subject: [PATCH 04/37] Update docker_install.rst --- doc_cn/build_and_install/install/docker_install.rst | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/doc_cn/build_and_install/install/docker_install.rst b/doc_cn/build_and_install/install/docker_install.rst index a5f5fb117e..21a2d8bb94 100644 --- a/doc_cn/build_and_install/install/docker_install.rst +++ b/doc_cn/build_and_install/install/docker_install.rst @@ -1,8 +1,7 @@ 安装PaddlePaddle的Docker镜像 ============================ -PaddlePaddle提供了Docker的使用镜像。PaddlePaddle推荐使用Docker进行PaddlePaddle的部署和 -运行。Docker是一个基于容器的轻量级虚拟环境。具有和宿主机相近的运行效率,并提供 +PaddlePaddle提供了Docker的使用镜像。PaddlePaddle推荐使用Docker进行PaddlePaddle的部署和运行。Docker是一个基于容器的轻量级虚拟环境。具有和宿主机相近的运行效率,并提供 了非常方便的二进制分发手段。 下述内容将分为如下几个类别描述。 @@ -41,7 +40,7 @@ PaddlePaddle提供的Docker镜像版本 * CPU WITHOUT AVX: CPU版本,不支持AVX指令集的CPU也可以运行 * GPU WITHOUT AVX: GPU版本,不需要AVX指令集的CPU也可以运行。 -用户可以选择对应版本的docker image。使用如下脚本可以确定本机的CPU知否支持 :code:`AVX` 指令集\: +用户可以选择对应版本的docker image。使用如下脚本可以确定本机的CPU是否支持 :code:`AVX` 指令集\: .. code-block:: bash From faa32c34182ab2c2383166bcd6bb5dee1b830a10 Mon Sep 17 00:00:00 2001 From: tianbingsz Date: Fri, 18 Nov 2016 17:51:50 -0800 Subject: [PATCH 05/37] Update docker_install.rst --- doc_cn/build_and_install/install/docker_install.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc_cn/build_and_install/install/docker_install.rst b/doc_cn/build_and_install/install/docker_install.rst index 21a2d8bb94..0872fd0b7a 100644 --- a/doc_cn/build_and_install/install/docker_install.rst +++ b/doc_cn/build_and_install/install/docker_install.rst @@ -66,7 +66,7 @@ mac osx或者是windows机器,请参考 .. code-block:: bash - $ docker run -it paddledev/paddlepaddle:cpu-latest + $ docker run -it paddledev/paddle:cpu-latest 即可启动和进入PaddlePaddle的container。如果运行GPU版本的PaddlePaddle,则需要先将 cuda相关的Driver和设备映射进container中,脚本类似于 @@ -75,7 +75,7 @@ cuda相关的Driver和设备映射进container中,脚本类似于 $ export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')" $ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}') - $ docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddlepaddle:latest-gpu + $ docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest 进入Docker container后,运行 :code:`paddle version` 即可打印出PaddlePaddle的版本和构建 信息。安装完成的PaddlePaddle主体包括三个部分, :code:`paddle` 脚本, python的 From b34ab48b5c397245972335925e2095dd60cd330d Mon Sep 17 00:00:00 2001 From: tianbingsz Date: Mon, 21 Nov 2016 14:20:25 -0800 Subject: [PATCH 06/37] Update docker_install.rst address @wangkuiyi comments. 
--- doc_cn/build_and_install/install/docker_install.rst | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/doc_cn/build_and_install/install/docker_install.rst b/doc_cn/build_and_install/install/docker_install.rst index 0872fd0b7a..40339659be 100644 --- a/doc_cn/build_and_install/install/docker_install.rst +++ b/doc_cn/build_and_install/install/docker_install.rst @@ -1,8 +1,7 @@ 安装PaddlePaddle的Docker镜像 ============================ -PaddlePaddle提供了Docker的使用镜像。PaddlePaddle推荐使用Docker进行PaddlePaddle的部署和运行。Docker是一个基于容器的轻量级虚拟环境。具有和宿主机相近的运行效率,并提供 -了非常方便的二进制分发手段。 +PaddlePaddle项目提供官方 `Docker `_ 镜像。Docker镜像是我们目前唯一官方支持的部署和运行方式。 下述内容将分为如下几个类别描述。 From 926f10b8d1beb24c3b8ce2f2890e0c95cc13dff8 Mon Sep 17 00:00:00 2001 From: tianbingsz Date: Mon, 21 Nov 2016 15:12:19 -0800 Subject: [PATCH 07/37] Update index.md Thanks for @wangkuiyi comments. --- doc_cn/introduction/index.md | 25 +++++++++++-------------- 1 file changed, 11 insertions(+), 14 deletions(-) diff --git a/doc_cn/introduction/index.md b/doc_cn/introduction/index.md index d6efab0cf1..f6eb5456c0 100644 --- a/doc_cn/introduction/index.md +++ b/doc_cn/introduction/index.md @@ -1,20 +1,18 @@ # 简介 -PaddlePaddle 源于百度的开源深度学习平台,有如下几个特点。首先,简单易用的:用户可以通过简单的十几行配置脚本搭建经典的神经网络模型。其次,高效强大的:PaddlePaddle可以支撑复杂集群环境下超大模型的训练,令你受益于深度学习的前沿成果。最后,在百度内部,已经有大量产品线使用了基于PaddlePaddle的深度学习技术。 - -这份简短的介绍将像你展示如何利用PaddlePaddle来解决一个经典的机器学习问题。 +PaddlePaddle是源于百度的一个深度学习平台。这份简短的介绍将向你展示如何利用PaddlePaddle来解决一个经典的线性回归问题。 ## 1. 一个经典的任务 -让我们从一个基础问题开始:单变量的线性回归。问题假定观测到了一批二维空间上的点`(x, y) `,并且已知 `x` 和 `y` 之间存在着某种线性关系,我们的目标是通过观测数据来学习这个线性关系。作为一个简单基础的模型,线性回归有着广泛的应用场景。以一个资产定价的问题为例,`x` 对应于房屋的大小,`y` 对应于房屋价格。我们可以通过观察市场上房屋销售的情况拟合 `x` 和 `y` 之间的关系,从而为新房屋的定价提供预测和参考。 +我们展示如何用PaddlePaddle解决单变量的线性回归问题。线性回归的输入是一批点`(x, y) `,其中 `y = wx + b + ε`, 而 ε 是一个符合高斯分布的随机变量。线性回归的输出是从这批点估计出来的参数 w 和 b。 +一个例子是房产估值。我们假设房产的价格(y)是其大小(x)的一个线性函数,那么我们可以通过收集市场上房子的大小和价格,用来估计线性函数的参数w 和 b。 ## 2. 准备数据 -假设变量 `X` 和 `Y` 的真实关系为: `Y = 2X + 0.3`,这里展示如何使用观测数据来拟合这一线性关系。首先,Python代码将随机产生2000个观测点,作为PaddlePaddle的输入。产生PaddlePaddle的输入数据和写一段普通的Python脚本几乎一样,你唯一需要增加的就是定义输入数据的类型。 +假设变量 `x` 和 `y` 的真实关系为: `y = 2x + 0.3 + ε`,这里展示如何使用观测数据来拟合这一线性关系。首先,Python代码将随机产生2000个观测点,作为线性回归的输入。下面脚本符合PaddlePaddle期待的读取数据的Python程序的模式。 ```python -# -*- coding:utf-8 -*- # dataprovider.py from paddle.trainer.PyDataProvider2 import * import random @@ -29,12 +27,11 @@ def process(settings, input_file): ## 3. 训练模型 -为了还原 `Y = 2X + 0.3`,我们先从一条随机的直线 `Y' = wX + b` 开始,然后利用观测数据调整 `w` 和 `b` 使得 `Y'` 和 `Y` 的差距不断减小,最终趋于接近。这个过程就是模型的训练过程,而 `w` 和 `b` 就是模型的参数,即我们的训练目标。 +为了还原 `y = 2x + 0.3`,我们先从一条随机的直线 `y' = wx + b` 开始,然后利用观测数据调整 `w` 和 `b` 使得 `y'` 和 `y` 的差距不断减小,最终趋于接近。这个过程就是模型的训练过程,而 `w` 和 `b` 就是模型的参数,即我们的训练目标。 在PaddlePaddle里,该模型的网络配置如下。 ```python -# -*- coding:utf-8 -*- # trainer_config.py from paddle.trainer_config_helpers import * @@ -50,10 +47,10 @@ settings(batch_size=12, learning_rate=1e-3, learning_method=MomentumOptimizer()) # 3. 
神经网络配置 x = data_layer(name='x', size=1) y = data_layer(name='y', size=1) -# 线性计算网络层: y_predict = wx + b -y_predict = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b')) -# 计算误差函数,即 y_predict 和真实 y 之间的距离 -cost = regression_cost(input=y_predict, label=y) +# 线性计算网络层: ȳ = wx + b +ȳ = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b')) +# 计算误差函数,即 ȳ 和真实 y 之间的距离 +cost = regression_cost(input= ȳ, label=y) outputs(cost) ``` 这段简短的配置展示了PaddlePaddle的基本用法: @@ -63,7 +60,7 @@ outputs(cost) - 第二部分主要是选择学习算法,它定义了模型参数改变的规则。PaddlePaddle提供了很多优秀的学习算法,这里使用一个基于momentum的随机梯度下降(SGD)算法,该算法每批量(batch)读取12个采样数据进行随机梯度计算来更新更新。 - 最后一部分是神经网络的配置。由于PaddlePaddle已经实现了丰富的网络层,所以很多时候你需要做的只是定义正确的网络层并把它们连接起来。这里使用了三种网络单元: - - **数据层**:数据层 `data_layer` 是神经网络的入口,它读入数据并将它们传输到接下来的网络层。这里数据层有两个,分别对应于变量 `X` 和 `Y`。 + - **数据层**:数据层 `data_layer` 是神经网络的入口,它读入数据并将它们传输到接下来的网络层。这里数据层有两个,分别对应于变量 `x` 和 `y`。 - **全连接层**:全连接层 `fc_layer` 是基础的计算单元,这里利用它建模变量之间的线性关系。计算单元是神经网络的核心,PaddlePaddle支持大量的计算单元和任意深度的网络连接,从而可以拟合任意的函数来学习复杂的数据关系。 - **回归误差代价层**:回归误差代价层 `regression_cost`是众多误差代价函数层的一种,它们在训练过程作为网络的出口,用来计算模型的误差,是模型参数优化的目标函数。 @@ -72,7 +69,7 @@ outputs(cost) paddle train --config=trainer_config.py --save_dir=./output --num_passes=30 ``` -PaddlePaddle将在观测数据集上迭代训练30轮,并将每轮的模型结果存放在 `./output` 路径下。从输出日志可以看到,随着轮数增加误差代价函数的输出在不断的减小,这意味着模型在训练数据上不断的改进,直到逼近真实解:` Y = 2X + 0.3 ` +PaddlePaddle将在观测数据集上迭代训练30轮,并将每轮的模型结果存放在 `./output` 路径下。从输出日志可以看到,随着轮数增加误差代价函数的输出在不断的减小,这意味着模型在训练数据上不断的改进,直到逼近真实解:` y = 2x + 0.3 ` ## 4. 模型检验 From 102dfc217e6f64896abb4e1d80496799448e07cf Mon Sep 17 00:00:00 2001 From: tianbingsz Date: Mon, 21 Nov 2016 19:13:45 -0800 Subject: [PATCH 08/37] Update make_and_install.rst Per @luotao1's comments. Thanks. --- doc_cn/build_and_install/cmake/make_and_install.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc_cn/build_and_install/cmake/make_and_install.rst b/doc_cn/build_and_install/cmake/make_and_install.rst index 8a390ef581..212b9c9352 100644 --- a/doc_cn/build_and_install/cmake/make_and_install.rst +++ b/doc_cn/build_and_install/cmake/make_and_install.rst @@ -1,4 +1,4 @@ make和make install ================== -参见 `make和make install `_ +参见 `make和make install <../../../doc/build/build_from_source.html#build-and-install>`_ From 2eb79c197d4893632ee0db5d28dc213b13d259ef Mon Sep 17 00:00:00 2001 From: tianbingsz Date: Mon, 21 Nov 2016 19:15:39 -0800 Subject: [PATCH 09/37] Update install_deps.rst Thanks for @luotao1's comments. --- doc_cn/build_and_install/cmake/install_deps.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc_cn/build_and_install/cmake/install_deps.rst b/doc_cn/build_and_install/cmake/install_deps.rst index 6d8727e329..7fa4665a95 100644 --- a/doc_cn/build_and_install/cmake/install_deps.rst +++ b/doc_cn/build_and_install/cmake/install_deps.rst @@ -1,4 +1,4 @@ 安装编译PaddlePaddle需要的依赖 ============================== -参见 `安装编译依赖 `_ +参见 `安装编译依赖 <../../../doc/build/build_from_source.html#install-dependencies>`_ From 0de93a437ae0220fe9548302fbf4a41990119806 Mon Sep 17 00:00:00 2001 From: tianbingsz Date: Mon, 21 Nov 2016 19:21:14 -0800 Subject: [PATCH 10/37] Rename index.md to index.rst Thanks for @luotao's comments. 
--- doc_cn/introduction/{index.md => index.rst} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename doc_cn/introduction/{index.md => index.rst} (100%) diff --git a/doc_cn/introduction/index.md b/doc_cn/introduction/index.rst similarity index 100% rename from doc_cn/introduction/index.md rename to doc_cn/introduction/index.rst From ebd725265a0af43a563742eb58901e622a237daa Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Wed, 23 Nov 2016 16:14:27 +0800 Subject: [PATCH 11/37] Update the data of quick start. --- demo/quick_start/data/README.md | 9 ++++++ demo/quick_start/data/get_data.sh | 15 ++++----- demo/quick_start/data/pred.list | 1 - demo/quick_start/data/pred.txt | 2 -- .../proc_from_raw_data/get_data.sh} | 32 +++++++++++++------ .../proc_from_raw_data}/preprocess.py | 10 ++++-- doc/demo/quick_start/index_en.md | 3 +- doc_cn/demo/quick_start/index.md | 4 +-- 8 files changed, 47 insertions(+), 29 deletions(-) create mode 100644 demo/quick_start/data/README.md delete mode 100644 demo/quick_start/data/pred.list delete mode 100644 demo/quick_start/data/pred.txt rename demo/quick_start/{preprocess.sh => data/proc_from_raw_data/get_data.sh} (68%) rename demo/quick_start/{ => data/proc_from_raw_data}/preprocess.py (95%) diff --git a/demo/quick_start/data/README.md b/demo/quick_start/data/README.md new file mode 100644 index 0000000000..63abcf7ebf --- /dev/null +++ b/demo/quick_start/data/README.md @@ -0,0 +1,9 @@ +This dataset consists of electronics product reviews associated with +binary labels (positive/negative) for sentiment classification. + +The preprocessed data can be downloaded by script `get_data.sh`. +The data was derived from reviews_Electronics_5.json.gz at + +http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz + +If you want to process the raw data, you can use the script `proc_from_raw_data/get_data.sh`. diff --git a/demo/quick_start/data/get_data.sh b/demo/quick_start/data/get_data.sh index f355d63225..952de3f3c8 100755 --- a/demo/quick_start/data/get_data.sh +++ b/demo/quick_start/data/get_data.sh @@ -17,14 +17,11 @@ set -e DIR="$( cd "$(dirname "$0")" ; pwd -P )" cd $DIR -echo "Downloading Amazon Electronics reviews data..." -# http://jmcauley.ucsd.edu/data/amazon/ -wget http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz +# Download the preprocessed data +wget http://paddlepaddle.bj.bcebos.com/demo/quick_start_preprocessed_data/preprocessed_data.tar.gz -echo "Downloading mosesdecoder..." -#https://github.com/moses-smt/mosesdecoder -wget https://github.com/moses-smt/mosesdecoder/archive/master.zip +# Extract package +tar zxvf preprocessed_data.tar.gz -unzip master.zip -rm master.zip -echo "Done." +# Remove compressed package +rm preprocessed_data.tar.gz diff --git a/demo/quick_start/data/pred.list b/demo/quick_start/data/pred.list deleted file mode 100644 index d88b2b6385..0000000000 --- a/demo/quick_start/data/pred.list +++ /dev/null @@ -1 +0,0 @@ -./data/pred.txt diff --git a/demo/quick_start/data/pred.txt b/demo/quick_start/data/pred.txt deleted file mode 100644 index 6ed5f738dd..0000000000 --- a/demo/quick_start/data/pred.txt +++ /dev/null @@ -1,2 +0,0 @@ -the device is cute , but that 's just about all that 's good. the specs are what you 'd expect : it 's a wifi mic , with some noise filter options. the app has the option to upload your baby 's name and photo , which is a cutesy touch. 
but the app is otherwise unstable and useless unless you upgrade for $ 60 / year.set up involves downloading the app , turning on the mic , switching your phone to the wifi network of the mic , telling the app your wifi settings , switching your wifi back to your home router. the app is then directly connected to your mic.the app is adware ! the main screen says " cry notifications on / off : upgrade to evoz premium and receive a text message of email when your baby is crying " .but the adware points out an important limitation , this monitor is only intended to be used from your home network. if you want to access it remotely , get a webcam. this app would make a lot more sense of the premium features were included with the hardware . -don 't be fooled by my one star rating. if there was a zero , i would have selected it. this product was a waste of my money.it has never worked like the company said it supposed to. i only have one device , an iphone 4gs. after charging the the iphone mid way , the i.sound portable power max 16,000 mah is completely drained. the led light no longer lit up. when plugging the isound portable power max into a wall outlet to charge , it would charge for about 20-30 minutes and then all four battery led indicator lit up showing a full charge. i would leave it on to charge for the full 8 hours or more but each time with the same result upon using. don 't buy this thing. put your money to good use elsewhere . diff --git a/demo/quick_start/preprocess.sh b/demo/quick_start/data/proc_from_raw_data/get_data.sh similarity index 68% rename from demo/quick_start/preprocess.sh rename to demo/quick_start/data/proc_from_raw_data/get_data.sh index c9190e2dd2..9c3e9db248 100755 --- a/demo/quick_start/preprocess.sh +++ b/demo/quick_start/data/proc_from_raw_data/get_data.sh @@ -16,10 +16,23 @@ # 1. size of pos : neg = 1:1. # 2. size of testing set = min(25k, len(all_data) * 0.1), others is traning set. # 3. distinct train set and test set. -# 4. build dict set -e +DIR="$( cd "$(dirname "$0")" ; pwd -P )" +cd $DIR + +# Download data +echo "Downloading Amazon Electronics reviews data..." +# http://jmcauley.ucsd.edu/data/amazon/ +#wget http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz +echo "Downloading mosesdecoder..." +#https://github.com/moses-smt/mosesdecoder +#wget https://github.com/moses-smt/mosesdecoder/archive/master.zip +#unzip master.zip +#rm master.zip +echo "Done." + export LC_ALL=C UNAME_STR=`uname` @@ -29,10 +42,11 @@ else SHUF_PROG='gshuf' fi -mkdir -p data/tmp -python preprocess.py -i data/reviews_Electronics_5.json.gz +# Start preprocess +mkdir -p tmp +python preprocess.py -i reviews_Electronics_5.json.gz # uniq and shuffle -cd data/tmp +cd tmp echo 'uniq and shuffle...' 
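# Note (illustrative): `sort | uniq` removes duplicate samples, and ${SHUF_PROG}
# (shuf on Linux, gshuf elsewhere, as chosen above) randomizes their order so that
# the train/test split below draws from a well-mixed pool.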
cat pos_*|sort|uniq|${SHUF_PROG}> pos.shuffed cat neg_*|sort|uniq|${SHUF_PROG}> neg.shuffed @@ -53,11 +67,11 @@ cat train.pos train.neg | ${SHUF_PROG} >../train.txt cat test.pos test.neg | ${SHUF_PROG} >../test.txt cd - -echo 'data/train.txt' > data/train.list -echo 'data/test.txt' > data/test.list +echo 'train.txt' > train.list +echo 'test.txt' > test.list # use 30k dict -rm -rf data/tmp -mv data/dict.txt data/dict_all.txt -cat data/dict_all.txt | head -n 30001 > data/dict.txt +rm -rf tmp +mv dict.txt dict_all.txt +cat dict_all.txt | head -n 30001 > dict.txt echo 'preprocess finished' diff --git a/demo/quick_start/preprocess.py b/demo/quick_start/data/proc_from_raw_data/preprocess.py similarity index 95% rename from demo/quick_start/preprocess.py rename to demo/quick_start/data/proc_from_raw_data/preprocess.py index d87fad632a..56c2c5f16c 100755 --- a/demo/quick_start/preprocess.py +++ b/demo/quick_start/data/proc_from_raw_data/preprocess.py @@ -14,7 +14,7 @@ # See the License for the specific language governing permissions and # limitations under the License. """ -1. (remove HTML before or not)tokensizing +1. Tokenize the words and punctuation 2. pos sample : rating score 5; neg sample: rating score 1-2. Usage: @@ -76,7 +76,11 @@ def tokenize(sentences): sentences : a list of input sentences. return: a list of processed text. """ - dir = './data/mosesdecoder-master/scripts/tokenizer/tokenizer.perl' + dir = './mosesdecoder-master/scripts/tokenizer/tokenizer.perl' + if not os.path.exists(dir): + sys.exit( + "The ./mosesdecoder-master/scripts/tokenizer/tokenizer.perl does not exists." + ) tokenizer_cmd = [dir, '-l', 'en', '-q', '-'] assert isinstance(sentences, list) text = "\n".join(sentences) @@ -104,7 +108,7 @@ def tokenize_batch(id): num_batch, instance, pre_fix = parse_queue.get() if num_batch == -1: ### parse_queue finished tokenize_queue.put((-1, None, None)) - sys.stderr.write("tokenize theread %s finish\n" % (id)) + sys.stderr.write("Thread %s finish\n" % (id)) break tokenize_instance = tokenize(instance) tokenize_queue.put((num_batch, tokenize_instance, pre_fix)) diff --git a/doc/demo/quick_start/index_en.md b/doc/demo/quick_start/index_en.md index 80d816a768..01f6f8ef54 100644 --- a/doc/demo/quick_start/index_en.md +++ b/doc/demo/quick_start/index_en.md @@ -59,12 +59,11 @@ To build your text classification system, your code will need to perform five st ## Preprocess data into standardized format In this example, you are going to use [Amazon electronic product review dataset](http://jmcauley.ucsd.edu/data/amazon/) to build a bunch of deep neural network models for text classification. Each text in this dataset is a product review. This dataset has two categories: “positive” and “negative”. Positive means the reviewer likes the product, while negative means the reviewer does not like the product. -`demo/quick_start` in the [source code](https://github.com/baidu/Paddle) provides scripts for downloading data and preprocessing data as shown below. The data process takes several minutes (about 3 minutes in our machine). +`demo/quick_start` in the [source code](https://github.com/baidu/Paddle) provides script for downloading the preprocessed data as shown below. (If you want to process the raw data, you can use the script `demo/quick_start/data/proc_from_raw_data/get_data.sh`). 
```bash cd demo/quick_start ./data/get_data.sh -./preprocess.sh ``` ## Transfer Data to Model diff --git a/doc_cn/demo/quick_start/index.md b/doc_cn/demo/quick_start/index.md index 4d9b24ba85..514b45a487 100644 --- a/doc_cn/demo/quick_start/index.md +++ b/doc_cn/demo/quick_start/index.md @@ -32,13 +32,11 @@ ## 数据格式准备(Data Preparation) 在本问题中,我们使用[Amazon电子产品评论数据](http://jmcauley.ucsd.edu/data/amazon/), -将评论分为好评(正样本)和差评(负样本)两类。[源码](https://github.com/baidu/Paddle)的`demo/quick_start`里提供了数据下载脚本 -和预处理脚本。 +将评论分为好评(正样本)和差评(负样本)两类。[源码](https://github.com/baidu/Paddle)的`demo/quick_start`里提供了下载已经预处理数据的脚本(如果想从最原始的数据处理,可以使用脚本 `./demo/quick_start/data/proc_from_raw_data/get_data.sh`)。 ```bash cd demo/quick_start ./data/get_data.sh -./preprocess.sh ``` ## 数据向模型传送(Transfer Data to Model) From 0cac5391a62cbb20a97067b1563df6213a38e1de Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Wed, 23 Nov 2016 17:10:27 +0800 Subject: [PATCH 12/37] fix swig_py_paddle.rst --- doc_cn/ui/predict/swig_py_paddle.rst | 87 ++++++++++++++++------------ 1 file changed, 51 insertions(+), 36 deletions(-) diff --git a/doc_cn/ui/predict/swig_py_paddle.rst b/doc_cn/ui/predict/swig_py_paddle.rst index 012ac4ff6e..f9750d80c9 100644 --- a/doc_cn/ui/predict/swig_py_paddle.rst +++ b/doc_cn/ui/predict/swig_py_paddle.rst @@ -1,38 +1,53 @@ -PaddlePaddle的Python预测接口 -================================== - -PaddlePaddle目前使用Swig对其常用的预测接口进行了封装,使在Python环境下的预测接口更加简单。 -在Python环境下预测结果,主要分为以下几个步骤。 - -* 读入解析训练配置 -* 构造GradientMachine -* 准备数据 -* 预测 - -典型的预测代码如下,使用mnist手写识别作为样例, 完整代码见 -:code:`src_root/doc/ui/predict/predict_sample.py` 。 - -.. literalinclude:: ../../../doc/ui/predict/predict_sample.py - :language: python - :lines: 15-18,90-100,101-104 - -主要的软件包为py_paddle.swig_paddle,这个软件包文档相对完善。可以使用python的 -:code:`help()` 函数查询文档。主要步骤为: - -* 在程序开始阶段,使用 :code:`swig_paddle.initPaddle()` 传入命令行参数初始化 - PaddlePaddle。详细的命令行参数请参考 - `命令行参数 <../cmd_argument/detail_introduction.html>`_ 。 -* 接下来使用 :code:`parse_config()` 解析训练时的配置文件。这里要注意预测数据通常 - 不包含label, 而且预测网络通常直接输出最后一层的结果而不是像训练时一样以cost - layer作为输出,所以用于预测的配置文件要做相应的修改。 -* 使用 :code:`swig_paddle.GradientMachine.createFromConfigproto()` 根据上一步解 - 析好的配置创建神经网络。 -* 创建一个 :code:`DataProviderConverter` 对象converter。 - - swig_paddle接受的原始数据是C++的Matrix,也就是直接写内存的float数组。 - 这个接口并不用户友好。所以,我们提供了一个工具类DataProviderConverter。 - 这个工具类接收和PyDataProvider2一样的输入数据,详情请参考 - `PyDataProvider2文档 <../../../doc/ui/data_provider/pydataprovider2.html>`_ 。 -* 最后使用 :code:`forwardTest()` 直接提取出神经网络Output层的输出结果。典型的输出结果为\: +基于Python的预测 +================ + +Python预测接口 +-------------- + +PaddlePaddle使用swig对常用的预测接口进行了封装,通过编译会生成py_paddle软件包,安装该软件包就可以在python环境下实现模型预测。可以使用python的 ``help()`` 函数查询软件包相关API说明。 + +基于Python的模型预测,主要包括以下五个步骤。 + +1. 初始化PaddlePaddle环境 + 在程序开始阶段,通过调用 ``swig_paddle.initPaddle()`` 并传入相应的命令行参数初始化PaddlePaddle。 +2. 解析模型配置文件 + 初始化之后,可以通过调用 ``parse_config()`` 解析训练模型时用的配置文件。注意预测数据通常不包含label, 同时预测网络通常直接输出最后一层的结果而不是像训练网络一样再接一层cost layer,所以一般需要对训练用的模型配置文件稍作相应修改才能在预测时使用。 +3. 构造paddle.GradientMachine + 通过调用 ``swig_paddle.GradientMachine.createFromConfigproto()`` 传入上一步解析出来的模型配置就可以创建一个 ``GradientMachine``。 +4. 准备预测数据 + swig_paddle中的预测接口的参数是自定义的C++数据类型,py_paddle里面提供了一个工具类 ``DataProviderConverter`` 可以用于接收和PyDataProvider2一样的输入数据并转换成预测接口所需的数据类型。 +5. 模型预测 + 通过调用 ``forwardTest()`` 传入预测数据,直接返回计算结果。 + + +基于Python的预测Demo +-------------------- + +如下是一段使用mnist model来实现手写识别的预测代码。完整的代码见 ``src_root/doc/ui/predict/predict_sample.py`` 。mnist model可以通过 ``src_root\demo\mnist`` 目录下的demo训练出来。 + +.. 
code-block:: python + + from py_paddle import swig_paddle, DataProviderConverter + from paddle.trainer.PyDataProvider2 import dense_vector + from paddle.trainer.config_parser import parse_config + + TEST_DATA = [...] + + def main(): + conf = parse_config("./mnist_model/trainer_config.py", "") + network = swig_paddle.GradientMachine.createFromConfigProto(conf.model_config) + assert isinstance(network, swig_paddle.GradientMachine) # For code hint. + network.loadParameters("./mnist_model/") + converter = DataProviderConverter([dense_vector(784)]) + inArg = converter(TEST_DATA) + print network.forwardTest(inArg) + + if __name__ == '__main__': + swig_paddle.initPaddle("--use_gpu=0") + main() + + +Demo预测输出如下,其中value即为softmax层的输出。由于TEST_DATA包含两条预测数据,所以输出的value包含两个向量 。 .. code-block:: text @@ -45,4 +60,4 @@ PaddlePaddle目前使用Swig对其常用的预测接口进行了封装,使在P 2.70634608e-08, 3.48565123e-08, 5.25639710e-09, 4.48684503e-08]], dtype=float32)}] -其中,value即为softmax层的输出。由于数据是两条,所以输出的value包含两个向量 。 + From 0bcacea5815b710f3d798307d8906cc49347465b Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Wed, 23 Nov 2016 17:35:35 +0800 Subject: [PATCH 13/37] refine ubuntu installation and FAQ doc --- .../install/ubuntu_install.rst | 81 ++++++++----------- doc_cn/faq/index.rst | 58 ++++++------- 2 files changed, 55 insertions(+), 84 deletions(-) diff --git a/doc_cn/build_and_install/install/ubuntu_install.rst b/doc_cn/build_and_install/install/ubuntu_install.rst index 0fb59e25f6..08d55f98d9 100644 --- a/doc_cn/build_and_install/install/ubuntu_install.rst +++ b/doc_cn/build_and_install/install/ubuntu_install.rst @@ -1,83 +1,66 @@ -使用deb包在Ubuntu上安装PaddlePaddle +Ubuntu部署PaddlePaddle =================================== -PaddlePaddle目前支持使用deb包安装。Paddle的 :code:`deb` 安装包在ubuntu 14.04中正确,但理论上支持其他的 debian 发行版。 +PaddlePaddle提供了deb安装包,并在ubuntu 14.04做了完备测试,理论上也支持其他的debian发行版。 +安装 +------ -PaddlePaddle的ubuntu安装包分为四个版本,他们是 cpu、gpu、cpu-noavx、gpu-noavx 四个版本。其中 noavx 用于不支持AVX指令集的cpu。安装包的下载地址是\: https://github.com/baidu/Paddle/releases/ +安装包的下载地址是\: https://github.com/PaddlePaddle/Paddle/releases +它包含四个版本\: -用户需要先将PaddlePaddle安装包下载到本地,然后执行如下 :code:`gdebi` 命令即可完成安装。 +* cpu版本: 支持主流intel x86处理器平台, 支持avx指令集。 -.. code-block:: shell +* cpu-noavx版本:支持主流intel x86处理器平台,不支持avx指令集。 + +* gpu版本:支持主流intel x86处理器平台,支持nvidia cuda平台,支持avx指令集。 + +* gpu-noavx版本:支持主流intel x86处理器平台,支持nvidia cuda平台,不支持avx指令级。 - gdebi paddle-*-cpu*.deb +下载完相关安装包后,执行: -如果 :code:`gdebi` 没有安装,则需要使用 :code:`sudo apt-get install gdebi`, 来安装 :code:`gdebi` 。 +.. code-block:: shell + sudo apt-get install gdebi + gdebi paddle-*-cpu.deb -或者使用下面一条命令安装. +或者: .. code-block:: shell - dpkg -i paddle-*-cpu*.deb + dpkg -i paddle-*-cpu.deb apt-get install -f + 在 :code:`dpkg -i` 的时候如果报一些依赖未找到的错误是正常的, 在 :code:`apt-get install -f` 里会继续安装 PaddlePaddle。 -需要注意的是,如果使用GPU版本的PaddlePaddle,请安装CUDA 7.5 和CUDNN 5到本地环境中, -并设置好对应的环境变量(LD_LIBRARY_PATH等等)。 - -安装完成后,可以使用命令 :code:`paddle version` 查看安装后的paddle 版本。可能的输出为 +安装完成后,可以使用命令 :code:`paddle version` 查看安装后的paddle 版本: .. literalinclude:: paddle_version.txt 可能遇到的问题 -------------- -libcudart.so/libcudnn.so找不到 -++++++++++++++++++++++++++++++ - -安装完成PaddlePaddle后,运行 :code:`paddle train` 报错\: - -.. code-block:: shell - - 0831 12:36:04.151525 1085 hl_dso_loader.cc:70] Check failed: nullptr != *dso_handle For Gpu version of PaddlePaddle, it couldn't find CUDA library: libcudart.so Please make sure you already specify its path.Note: for training data on Cpu using Gpu version of PaddlePaddle,you must specify libcudart.so via LD_LIBRARY_PATH. 
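出现这类错误时,可以先确认运行时能否加载到CUDA相关的动态库。下面是一段简单的检查脚本(仅为示意,`libcudnn.so` 等库名以本机实际安装为准):

.. code-block:: python

    # 示意:用 ctypes 检查动态链接器能否找到 CUDA 相关动态库
    import ctypes

    for so in ['libcudart.so', 'libcudnn.so']:
        try:
            ctypes.CDLL(so)
            print '%s: OK' % so
        except OSError as e:
            print '%s: NOT FOUND (%s)' % (so, e)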
- -PaddlePaddle使用运行时动态连接CUDA的so,如果在 LD_LIBRARY_PATH里面找不到这些动态 -库的话,会报寻找不到这些动态库。 - -解决方法很简单,就是将这些动态库加到环境变量里面。比较可能的命令如下。 - -.. code-block:: text - - export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH - -CUDA Driver找不到 -+++++++++++++++++ +如何设置gpu版本运行时cuda环境运行GPU版本 +++++++++++++++++++++++++++++++++++++++++ -运行 :code:`paddle train` 报错\: +如果使用GPU版本的PaddlePaddle,请安装CUDA 7.5 和CUDNN 5到本地环境中,并设置: -.. code-block:: text +.. code-block:: shell + export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/lib:$LD_LIBRARY_PATH + export PATH=/usr/local/cuda/bin:$PATH - F0831 12:39:16.699000 1090 hl_cuda_device.cc:530] Check failed: cudaSuccess == cudaStat (0 vs. 35) Cuda Error: CUDA driver version is insufficient for CUDA runtime version -PaddlePaddle运行时如果没有寻找到cuda的driver,变会报这个错误。解决办法是将cuda -driver添加到LD_LIBRARY_PATH中。比较可能的命令如下。 - -.. code-block:: text - - export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH +libcudart.so/libcudnn.so找不到 +++++++++++++++++++++++++++++++ -config文件找不到 -++++++++++++++++ +安装完成后,运行 :code:`paddle train` 报错\: -运行 :code:`paddle train` 得到结果\: +.. code-block:: shell -.. code-block:: text + 0831 12:36:04.151525 1085 hl_dso_loader.cc:70] Check failed: nullptr != *dso_handle For Gpu version of PaddlePaddle, it couldn't find CUDA library: libcudart.so Please make sure you already specify its path.Note: for training data on Cpu using Gpu version of PaddlePaddle,you must specify libcudart.so via LD_LIBRARY_PATH. - F0831 20:53:07.525789 1302 TrainerMain.cpp:94] Check failed: config != nullptr no valid config +原因是未设置cuda运行时环境变量,请参考** 设置gpu版本运行时cuda环境** 解决方案。 -PaddlePaddle在运行时找不到对应的config文件,说明命令行参数 :code:`config` 没有设置。 -而这个一般说明PaddlePaddle已经安装完毕了。 \ No newline at end of file diff --git a/doc_cn/faq/index.rst b/doc_cn/faq/index.rst index 3eb0e10ae2..6e1102e552 100644 --- a/doc_cn/faq/index.rst +++ b/doc_cn/faq/index.rst @@ -4,22 +4,18 @@ PaddlePaddle常见问题 .. contents:: -1. 如何减少PaddlePaddle的内存占用 +1. 如何减少内存占用 --------------------------------- -神经网络的训练本身是一个非常消耗内存和显存的工作。经常会消耗数十G的内存和数G的显存。 +神经网络的训练本身是一个非常消耗内存和显存的工作,经常会消耗数十G的内存和数G的显存。 PaddlePaddle的内存占用主要分为如下几个方面\: -* DataProvider缓冲池内存 (只针对内存) -* 神经元激活内存 (针对内存和显存) -* 参数内存 (针对内存和显存) +* DataProvider缓冲池内存(只针对内存) +* 神经元激活内存(针对内存和显存) +* 参数内存 (针对内存和显存) * 其他内存杂项 -这其中,其他内存杂项是指PaddlePaddle本身所用的一些内存,包括字符串分配,临时变量等等, -这些内存就不考虑如何缩减了。 - -其他的内存的减少方法依次为 - +其中,其他内存杂项是指PaddlePaddle本身所用的一些内存,包括字符串分配,临时变量等等,暂不考虑在内。 减少DataProvider缓冲池内存 ++++++++++++++++++++++++++ @@ -39,28 +35,28 @@ PyDataProvider使用的是异步加载,同时在内存里直接随即选取数 .. literalinclude:: reduce_min_pool_size.py -这样做可以极大的减少内存占用,并且可能会加速训练过程。 详细文档参考 `这里 +这样做可以极大的减少内存占用,并且可能会加速训练过程,详细文档参考 `这里 <../ui/data_provider/pydataprovider2.html#provider>`_ 。 神经元激活内存 ++++++++++++++ -神经网络在训练的时候,会对每一个激活暂存一些数据,包括激活,參差等等。 +神经网络在训练的时候,会对每一个激活暂存一些数据,如神经元激活值等。 在反向传递的时候,这些数据会被用来更新参数。这些数据使用的内存主要和两个参数有关系, 一是batch size,另一个是每条序列(Sequence)长度。所以,其实也是和每个mini-batch中包含 的时间步信息成正比。 -所以,做法可以有两种。他们是 +所以做法可以有两种: * 减小batch size。 即在网络配置中 :code:`settings(batch_size=1000)` 设置成一个小一些的值。但是batch size本身是神经网络的超参数,减小batch size可能会对训练结果产生影响。 * 减小序列的长度,或者直接扔掉非常长的序列。比如,一个数据集大部分序列长度是100-200, - 但是突然有一个10000长的序列,就很容易导致内存超限。特别是在LSTM等RNN中。 + 但是突然有一个10000长的序列,就很容易导致内存超限,特别是在LSTM等RNN中。 参数内存 ++++++++ PaddlePaddle支持非常多的优化算法(Optimizer),不同的优化算法需要使用不同大小的内存。 -例如如果使用 :code:`adadelta` 算法,则需要使用参数规模大约5倍的内存。 如果参数保存下来的 +例如使用 :code:`adadelta` 算法,则需要使用等于权重参数规模大约5倍的内存。举例,如果参数保存下来的模型目录 文件为 :code:`100M`, 那么该优化算法至少需要 :code:`500M` 的内存。 可以考虑使用一些优化算法,例如 :code:`momentum`。 @@ -68,11 +64,11 @@ PaddlePaddle支持非常多的优化算法(Optimizer),不同的优化算法需 2. 
如何加速PaddlePaddle的训练速度 --------------------------------- -PaddlePaddle是神经网络训练平台,加速PaddlePaddle训练有如下几个方面\: +加速PaddlePaddle训练可以考虑从以下几个方面\: * 减少数据载入的耗时 * 加速训练速度 -* 利用更多的计算资源 +* 利用分布式训练驾驭更多的计算资源 减少数据载入的耗时 ++++++++++++++++++ @@ -108,25 +104,20 @@ PaddlePaddle支持Sparse的训练,sparse训练需要训练特征是 :code:`spa 利用更多的计算资源可以分为一下几个方式来进行\: * 单机CPU训练 - * 使用多线程训练。设置命令行参数 :code:`trainer_count`,即可以设置参与训练的线程数量。使用方法为 :code:`paddle train --trainer_count=4` + * 使用多线程训练。设置命令行参数 :code:`trainer_count`。 + * 单机GPU训练 - * 使用显卡训练。设置命令行参数 :code:`use_gpu`。 使用方法为 :code:`paddle train --use_gpu=true` - * 使用多块显卡训练。设置命令行参数 :code:`use_gpu` 和 :code:`trainer_count`。使用 :code:`--use_gpu=True` 开启GPU训练,使用 :code:`trainer_count` 指定显卡数量。使用方法为 :code:`paddle train --use_gpu=true --trainer_count=4` + * 使用显卡训练。设置命令行参数 :code:`use_gpu`。 + * 使用多块显卡训练。设置命令行参数 :code:`use_gpu` 和 :code:`trainer_count` 。 + * 多机训练 - * 使用多机训练的方法也比较简单,需要先在每个节点启动 :code:`paddle pserver`,在使用 :code:`paddle train --pservers=192.168.100.1,192.168.100.2` 来指定每个pserver的ip地址 - * 具体的多机训练方法参考 `多机训练 `_ 文档。 + * 具体的多机训练方法参考 `多机训练文档 <../ui/data_provider/pydataprovider2.html#provider>`_ 。 3. 遇到“非法指令”或者是“illegal instruction” -------------------------------------------- -paddle在进行计算的时候为了提升计算性能,使用了avx指令。部分老的cpu型号无法支持这样的指令。通常来说执行下grep avx /proc/cpuinfo看看是否有输出即可知道是否支持。(另:用此方法部分虚拟机可能检测到支持avx指令但是实际运行会挂掉,请当成是不支持,看下面的解决方案) - -解决办法是\: - -* 使用 NO_AVX的 `安装包 <../build_and_install/index.html>`_ 或者 `Docker image <../build_and_install/install/docker_install.html>`_ -* 或者,使用 :code:`-DWITH_AVX=OFF` 重新编译PaddlePaddle。 - +PaddlePaddle使用avx SIMD指令提高cpu执行效率,因此错误的使用二进制发行版可能会导致这种错误,请选择正确的版本。 4. 如何选择SGD算法的学习率 -------------------------- @@ -158,7 +149,7 @@ paddle在进行计算的时候为了提升计算性能,使用了avx指令。 6. 如何共享参数 --------------- -PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字的参数,会共享参数。设置参数的名字,可以使用 :code:`ParamAttr(name="YOUR_PARAM_NAME")` 来设置。更方便的设置方式,是想要共享的参数使用同样的 :code:`ParamAttr` 对象。 +PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字的参数,会共享参数。设置参数的名字,可以使用 :code:`ParamAttr(name="YOUR_PARAM_NAME")` 来设置。更方便的设置方式,是使得要共享的参数使用同样的 :code:`ParamAttr` 对象。 简单的全连接网络,参数共享的配置示例为\: @@ -208,9 +199,6 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 paddle package is already in your PYTHONPATH. But unittest need a clean environment. Please uninstall paddle package before start unittest. Try to 'pip uninstall paddle'. 
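动手卸载之前,可以先确认当前Python环境里 `import paddle` 实际加载的是哪个包(示意):

.. code-block:: python

    # 示意:查看 paddle 包的实际来源
    import paddle
    print paddle.__file__  # 若指向 site-packages 而非源码目录,说明单测会用到已安装的旧包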
-解决办法是:卸载paddle包 :code:`pip uninstall paddle`。 - -原因是:单元测试使用了一个旧版本的python包,而没有测试到代码中实际修改的python包。即单元测试需要一个干净的环境: +解决办法是: -* 如果paddle包已经在python的site-packages里面了,那么单元测试时使用的paddle包,就是site-packages里面的python包,而不是源码目录里 :code:`/python` 目录下的python包。 -* 即便设置了 :code:`PYTHONPATH` 到 :code:`/python` 也没用,因为python的搜索路径是优先已经安装的python包。 \ No newline at end of file +* 卸载PaddlePaddle包 :code:`pip uninstall paddle`, 清理掉老旧的PaddlePaddle安装包,使得单元测试有一个干净的环境。如果PaddlePaddle包已经在python的site-packages里面,单元测试会引用site-packages里面的python包,而不是源码目录里 :code:`/python` 目录下的python包。同时,即便设置 :code:`PYTHONPATH` 到 :code:`/python` 也没用,因为python的搜索路径是优先已经安装的python包。 From ec0214392152dc88f04b7440b11edbfe0e022d67 Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Wed, 23 Nov 2016 20:47:04 +0800 Subject: [PATCH 14/37] use literalinclude --- doc_cn/ui/predict/swig_py_paddle.rst | 23 +++-------------------- 1 file changed, 3 insertions(+), 20 deletions(-) diff --git a/doc_cn/ui/predict/swig_py_paddle.rst b/doc_cn/ui/predict/swig_py_paddle.rst index f9750d80c9..4c0a0de820 100644 --- a/doc_cn/ui/predict/swig_py_paddle.rst +++ b/doc_cn/ui/predict/swig_py_paddle.rst @@ -25,26 +25,9 @@ PaddlePaddle使用swig对常用的预测接口进行了封装,通过编译会 如下是一段使用mnist model来实现手写识别的预测代码。完整的代码见 ``src_root/doc/ui/predict/predict_sample.py`` 。mnist model可以通过 ``src_root\demo\mnist`` 目录下的demo训练出来。 -.. code-block:: python - - from py_paddle import swig_paddle, DataProviderConverter - from paddle.trainer.PyDataProvider2 import dense_vector - from paddle.trainer.config_parser import parse_config - - TEST_DATA = [...] - - def main(): - conf = parse_config("./mnist_model/trainer_config.py", "") - network = swig_paddle.GradientMachine.createFromConfigProto(conf.model_config) - assert isinstance(network, swig_paddle.GradientMachine) # For code hint. - network.loadParameters("./mnist_model/") - converter = DataProviderConverter([dense_vector(784)]) - inArg = converter(TEST_DATA) - print network.forwardTest(inArg) - - if __name__ == '__main__': - swig_paddle.initPaddle("--use_gpu=0") - main() +.. literalinclude:: ../../../doc/ui/predict/predict_sample.py + :language: python + :lines: 15-18,121-136 Demo预测输出如下,其中value即为softmax层的输出。由于TEST_DATA包含两条预测数据,所以输出的value包含两个向量 。 From 0561dd01b874c94525b42bd596bafb8020b547bc Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Thu, 24 Nov 2016 10:13:16 +0800 Subject: [PATCH 15/37] Update doc and proc_from_raw_data/get_data.sh --- .../data/proc_from_raw_data/get_data.sh | 20 ++++++++++--------- doc/demo/quick_start/index_en.md | 2 +- doc_cn/demo/quick_start/index.md | 4 ++-- 3 files changed, 14 insertions(+), 12 deletions(-) diff --git a/demo/quick_start/data/proc_from_raw_data/get_data.sh b/demo/quick_start/data/proc_from_raw_data/get_data.sh index 9c3e9db248..cd85e26842 100755 --- a/demo/quick_start/data/proc_from_raw_data/get_data.sh +++ b/demo/quick_start/data/proc_from_raw_data/get_data.sh @@ -25,14 +25,17 @@ cd $DIR # Download data echo "Downloading Amazon Electronics reviews data..." # http://jmcauley.ucsd.edu/data/amazon/ -#wget http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz +wget http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz echo "Downloading mosesdecoder..." -#https://github.com/moses-smt/mosesdecoder -#wget https://github.com/moses-smt/mosesdecoder/archive/master.zip -#unzip master.zip -#rm master.zip -echo "Done." 
+# https://github.com/moses-smt/mosesdecoder +wget https://github.com/moses-smt/mosesdecoder/archive/master.zip +unzip master.zip +rm master.zip + +################## +# Preprocess data +echo "Preprocess data..." export LC_ALL=C UNAME_STR=`uname` @@ -42,12 +45,11 @@ else SHUF_PROG='gshuf' fi -# Start preprocess mkdir -p tmp python preprocess.py -i reviews_Electronics_5.json.gz # uniq and shuffle cd tmp -echo 'uniq and shuffle...' +echo 'Uniq and shuffle...' cat pos_*|sort|uniq|${SHUF_PROG}> pos.shuffed cat neg_*|sort|uniq|${SHUF_PROG}> neg.shuffed @@ -74,4 +76,4 @@ echo 'test.txt' > test.list rm -rf tmp mv dict.txt dict_all.txt cat dict_all.txt | head -n 30001 > dict.txt -echo 'preprocess finished' +echo 'Done.' diff --git a/doc/demo/quick_start/index_en.md b/doc/demo/quick_start/index_en.md index 01f6f8ef54..11d568b8f9 100644 --- a/doc/demo/quick_start/index_en.md +++ b/doc/demo/quick_start/index_en.md @@ -59,7 +59,7 @@ To build your text classification system, your code will need to perform five st ## Preprocess data into standardized format In this example, you are going to use [Amazon electronic product review dataset](http://jmcauley.ucsd.edu/data/amazon/) to build a bunch of deep neural network models for text classification. Each text in this dataset is a product review. This dataset has two categories: “positive” and “negative”. Positive means the reviewer likes the product, while negative means the reviewer does not like the product. -`demo/quick_start` in the [source code](https://github.com/baidu/Paddle) provides script for downloading the preprocessed data as shown below. (If you want to process the raw data, you can use the script `demo/quick_start/data/proc_from_raw_data/get_data.sh`). +`demo/quick_start` in the [source code](https://github.com/PaddlePaddle/Paddle) provides script for downloading the preprocessed data as shown below. (If you want to process the raw data, you can use the script `demo/quick_start/data/proc_from_raw_data/get_data.sh`). 
```bash cd demo/quick_start diff --git a/doc_cn/demo/quick_start/index.md b/doc_cn/demo/quick_start/index.md index 514b45a487..4a6e07ee1f 100644 --- a/doc_cn/demo/quick_start/index.md +++ b/doc_cn/demo/quick_start/index.md @@ -32,7 +32,7 @@ ## 数据格式准备(Data Preparation) 在本问题中,我们使用[Amazon电子产品评论数据](http://jmcauley.ucsd.edu/data/amazon/), -将评论分为好评(正样本)和差评(负样本)两类。[源码](https://github.com/baidu/Paddle)的`demo/quick_start`里提供了下载已经预处理数据的脚本(如果想从最原始的数据处理,可以使用脚本 `./demo/quick_start/data/proc_from_raw_data/get_data.sh`)。 +将评论分为好评(正样本)和差评(负样本)两类。[源码](https://github.com/PaddlePaddle/Paddle)的`demo/quick_start`里提供了下载已经预处理数据的脚本(如果想从最原始的数据处理,可以使用脚本 `./demo/quick_start/data/proc_from_raw_data/get_data.sh`)。 ```bash cd demo/quick_start @@ -141,7 +141,7 @@ PyDataProvider2。 我们将以基本的逻辑回归网络作为起点,并逐渐展示更加深入的功能。更详细的网络配置 连接请参考Layer文档。 -所有配置在[源码](https://github.com/baidu/Paddle)`demo/quick_start`目录,首先列举逻辑回归网络。 +所有配置在[源码](https://github.com/PaddlePaddle/Paddle)`demo/quick_start`目录,首先列举逻辑回归网络。 ### 逻辑回归模型(Logistic Regression) From 341688b583a01d90609e2da362f4ecabdf0ff489 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Fri, 25 Nov 2016 16:56:18 +0800 Subject: [PATCH 16/37] Bumping up version number --- CMakeLists.txt | 2 +- paddle/scripts/docker/Dockerfile.m4 | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/CMakeLists.txt b/CMakeLists.txt index 090ac9e188..0e64279976 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -3,7 +3,7 @@ cmake_minimum_required(VERSION 2.8) project(paddle CXX C) set(PADDLE_MAJOR_VERSION 0) set(PADDLE_MINOR_VERSION 9) -set(PADDLE_PATCH_VERSION 0a0) +set(PADDLE_PATCH_VERSION 0) set(PADDLE_VERSION ${PADDLE_MAJOR_VERSION}.${PADDLE_MINOR_VERSION}.${PADDLE_PATCH_VERSION}) set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake") diff --git a/paddle/scripts/docker/Dockerfile.m4 b/paddle/scripts/docker/Dockerfile.m4 index e14493ed9e..761aa975d6 100644 --- a/paddle/scripts/docker/Dockerfile.m4 +++ b/paddle/scripts/docker/Dockerfile.m4 @@ -1,7 +1,7 @@ FROM PADDLE_BASE_IMAGE MAINTAINER PaddlePaddle Dev Team COPY build.sh /root/ -ENV GIT_CHECKOUT=v0.9.0a0 +ENV GIT_CHECKOUT=v0.9.0 ENV WITH_GPU=PADDLE_WITH_GPU ENV IS_DEVEL=PADDLE_IS_DEVEL ENV WITH_DEMO=PADDLE_WITH_DEMO From 64f4e547ae7ee332aae6496edb4eafc051979597 Mon Sep 17 00:00:00 2001 From: liaogang Date: Fri, 25 Nov 2016 17:12:18 +0800 Subject: [PATCH 17/37] Refine eng docs structure --- doc/about/index.rst | 10 + doc/algorithm/index.rst | 7 - doc/algorithm/rnn/bi_lstm.jpg | 1 - .../rnn/encoder-decoder-attention-model.png | 1 - doc/{ui => api}/data_provider/index.rst | 0 .../data_provider/pydataprovider2.rst | 0 doc/api/index.md | 14 + doc/{ui => api}/predict/predict_sample.py | 0 doc/{ui => api}/predict/swig_py_paddle_en.rst | 0 .../trainer_config_helpers/activations.rst | 0 .../api/trainer_config_helpers/attrs.rst | 0 .../trainer_config_helpers/data_sources.rst | 0 .../api/trainer_config_helpers/evaluators.rst | 0 .../api/trainer_config_helpers/index.rst | 0 .../api/trainer_config_helpers/layers.rst | 0 .../api/trainer_config_helpers/networks.rst | 0 .../api/trainer_config_helpers/optimizers.rst | 0 .../api/trainer_config_helpers/poolings.rst | 0 doc/cluster/index.rst | 8 - doc/dev/layer.md | 4 - doc/howto/algorithm/index.rst | 7 + doc/{ => howto}/algorithm/rnn/rnn.rst | 0 .../cluster}/cluster_train.md | 2 +- .../cmd_argument/argument_outline.md | 0 .../cmd_argument/detail_introduction.md | 0 doc/howto/cmd_argument/index.md | 5 + doc/{ui => howto}/cmd_argument/use_case.md | 0 doc/{build => howto}/contribute_to_paddle.md | 0 
doc/{ => howto}/dev/index.rst | 4 +- doc/howto/dev/layer.md | 5 + .../dev/new_layer/FullyConnected.jpg | Bin doc/{ => howto}/dev/new_layer/new_layer.rst | 0 doc/{ => howto/dev}/source/api.rst | 0 doc/{ => howto/dev}/source/cuda/index.rst | 0 doc/{ => howto/dev}/source/cuda/matrix.rst | 0 doc/{ => howto/dev}/source/cuda/nn.rst | 0 doc/{ => howto/dev}/source/cuda/utils.rst | 0 .../dev}/source/gserver/activations.rst | 0 .../dev}/source/gserver/dataproviders.rst | 0 .../dev}/source/gserver/evaluators.rst | 0 .../dev}/source/gserver/gradientmachines.rst | 0 doc/{ => howto/dev}/source/gserver/index.rst | 0 doc/{ => howto/dev}/source/gserver/layers.rst | 0 .../dev}/source/gserver/neworks.rst | 0 doc/{ => howto/dev}/source/index.rst | 0 doc/{ => howto/dev}/source/math/functions.rst | 0 doc/{ => howto/dev}/source/math/index.rst | 0 doc/{ => howto/dev}/source/math/matrix.rst | 0 doc/{ => howto/dev}/source/math/utils.rst | 0 doc/{ => howto/dev}/source/math/vector.rst | 0 .../dev}/source/parameter/index.rst | 0 .../dev}/source/parameter/optimizer.rst | 0 .../dev}/source/parameter/parameter.rst | 0 .../dev}/source/parameter/updater.rst | 0 doc/{ => howto/dev}/source/pserver/client.rst | 0 doc/{ => howto/dev}/source/pserver/index.rst | 0 .../dev}/source/pserver/network.rst | 0 doc/{ => howto/dev}/source/pserver/server.rst | 0 doc/{ => howto/dev}/source/trainer.rst | 0 .../dev}/source/utils/customStackTrace.rst | 0 doc/{ => howto/dev}/source/utils/enum.rst | 0 doc/{ => howto/dev}/source/utils/index.rst | 0 doc/{ => howto/dev}/source/utils/lock.rst | 0 doc/{ => howto/dev}/source/utils/queue.rst | 0 doc/{ => howto/dev}/source/utils/thread.rst | 0 doc/howto/index.rst | 11 + doc/index.rst | 10 +- doc/introduction/basic_usage/basic_usage.rst | 109 +++++ doc/introduction/basic_usage/parameters.png | Bin 0 -> 44469 bytes .../build_and_install}/build_from_source.md | 0 .../build_and_install}/cmake.png | Bin .../build_and_install}/docker_install.rst | 0 .../build_and_install}/index.rst | 5 +- .../build_and_install}/ubuntu_install.rst | 0 doc/introduction/index.md | 100 ----- doc/introduction/index.rst | 8 + doc/introduction/parameters.png | 1 - .../embedding_model/index.md | 0 .../embedding_model/neural-n-gram-model.png | Bin .../image_classification/cifar.png | Bin .../image_classification.md | 0 .../image_classification.png | Bin .../image_classification/index.rst | 0 .../image_classification/lenet.png | Bin .../image_classification/plot.png | Bin .../imagenet_model/resnet_block.jpg | Bin .../imagenet_model/resnet_model.md | 0 doc/{demo => tutorials}/index.md | 2 +- .../quick_start/NetContinuous_en.png | Bin .../quick_start/NetConv_en.png | Bin .../quick_start/NetLR_en.png | Bin .../quick_start/NetRNN_en.png | Bin .../quick_start/PipelineNetwork_en.jpg | Bin .../quick_start/PipelineTest_en.png | Bin .../quick_start/PipelineTrain_en.png | Bin .../quick_start/Pipeline_en.jpg | Bin .../quick_start/index_en.md | 0 doc/{demo => tutorials}/rec/ml_dataset.md | 0 doc/{demo => tutorials}/rec/ml_regression.rst | 0 .../rec/rec_regression_network.png | Bin .../semantic_role_labeling/curve.jpg | Bin .../semantic_role_labeling/feature.jpg | Bin .../semantic_role_labeling/index.rst | 0 .../semantic_role_labeling/network_arch.png | Bin .../semantic_role_labeling.md | 400 +++++++++--------- .../sentiment_analysis/bi_lstm.jpg | Bin .../sentiment_analysis/index.rst | 0 .../sentiment_analysis/lstm.png | Bin .../sentiment_analysis/sentiment_analysis.md | 0 .../sentiment_analysis/stacked_lstm.jpg | Bin 
.../encoder-decoder-attention-model.png | Bin .../text_generation/index.rst | 0 .../text_generation/text_generation.md | 0 doc/ui/index.md | 20 - doc/user_guide.rst | 13 - 115 files changed, 380 insertions(+), 367 deletions(-) create mode 100644 doc/about/index.rst delete mode 100644 doc/algorithm/index.rst delete mode 120000 doc/algorithm/rnn/bi_lstm.jpg delete mode 120000 doc/algorithm/rnn/encoder-decoder-attention-model.png rename doc/{ui => api}/data_provider/index.rst (100%) rename doc/{ui => api}/data_provider/pydataprovider2.rst (100%) create mode 100644 doc/api/index.md rename doc/{ui => api}/predict/predict_sample.py (100%) rename doc/{ui => api}/predict/swig_py_paddle_en.rst (100%) rename doc/{ui => }/api/trainer_config_helpers/activations.rst (100%) rename doc/{ui => }/api/trainer_config_helpers/attrs.rst (100%) rename doc/{ui => }/api/trainer_config_helpers/data_sources.rst (100%) rename doc/{ui => }/api/trainer_config_helpers/evaluators.rst (100%) rename doc/{ui => }/api/trainer_config_helpers/index.rst (100%) rename doc/{ui => }/api/trainer_config_helpers/layers.rst (100%) rename doc/{ui => }/api/trainer_config_helpers/networks.rst (100%) rename doc/{ui => }/api/trainer_config_helpers/optimizers.rst (100%) rename doc/{ui => }/api/trainer_config_helpers/poolings.rst (100%) delete mode 100644 doc/cluster/index.rst delete mode 100644 doc/dev/layer.md create mode 100644 doc/howto/algorithm/index.rst rename doc/{ => howto}/algorithm/rnn/rnn.rst (100%) rename doc/{cluster/opensource => howto/cluster}/cluster_train.md (99%) rename doc/{ui => howto}/cmd_argument/argument_outline.md (100%) rename doc/{ui => howto}/cmd_argument/detail_introduction.md (100%) create mode 100644 doc/howto/cmd_argument/index.md rename doc/{ui => howto}/cmd_argument/use_case.md (100%) rename doc/{build => howto}/contribute_to_paddle.md (100%) rename doc/{ => howto}/dev/index.rst (71%) create mode 100644 doc/howto/dev/layer.md rename doc/{ => howto}/dev/new_layer/FullyConnected.jpg (100%) rename doc/{ => howto}/dev/new_layer/new_layer.rst (100%) rename doc/{ => howto/dev}/source/api.rst (100%) rename doc/{ => howto/dev}/source/cuda/index.rst (100%) rename doc/{ => howto/dev}/source/cuda/matrix.rst (100%) rename doc/{ => howto/dev}/source/cuda/nn.rst (100%) rename doc/{ => howto/dev}/source/cuda/utils.rst (100%) rename doc/{ => howto/dev}/source/gserver/activations.rst (100%) rename doc/{ => howto/dev}/source/gserver/dataproviders.rst (100%) rename doc/{ => howto/dev}/source/gserver/evaluators.rst (100%) rename doc/{ => howto/dev}/source/gserver/gradientmachines.rst (100%) rename doc/{ => howto/dev}/source/gserver/index.rst (100%) rename doc/{ => howto/dev}/source/gserver/layers.rst (100%) rename doc/{ => howto/dev}/source/gserver/neworks.rst (100%) rename doc/{ => howto/dev}/source/index.rst (100%) rename doc/{ => howto/dev}/source/math/functions.rst (100%) rename doc/{ => howto/dev}/source/math/index.rst (100%) rename doc/{ => howto/dev}/source/math/matrix.rst (100%) rename doc/{ => howto/dev}/source/math/utils.rst (100%) rename doc/{ => howto/dev}/source/math/vector.rst (100%) rename doc/{ => howto/dev}/source/parameter/index.rst (100%) rename doc/{ => howto/dev}/source/parameter/optimizer.rst (100%) rename doc/{ => howto/dev}/source/parameter/parameter.rst (100%) rename doc/{ => howto/dev}/source/parameter/updater.rst (100%) rename doc/{ => howto/dev}/source/pserver/client.rst (100%) rename doc/{ => howto/dev}/source/pserver/index.rst (100%) rename doc/{ => howto/dev}/source/pserver/network.rst (100%) 
rename doc/{ => howto/dev}/source/pserver/server.rst (100%) rename doc/{ => howto/dev}/source/trainer.rst (100%) rename doc/{ => howto/dev}/source/utils/customStackTrace.rst (100%) rename doc/{ => howto/dev}/source/utils/enum.rst (100%) rename doc/{ => howto/dev}/source/utils/index.rst (100%) rename doc/{ => howto/dev}/source/utils/lock.rst (100%) rename doc/{ => howto/dev}/source/utils/queue.rst (100%) rename doc/{ => howto/dev}/source/utils/thread.rst (100%) create mode 100644 doc/howto/index.rst create mode 100644 doc/introduction/basic_usage/basic_usage.rst create mode 100644 doc/introduction/basic_usage/parameters.png rename doc/{build => introduction/build_and_install}/build_from_source.md (100%) rename doc/{build => introduction/build_and_install}/cmake.png (100%) rename doc/{build => introduction/build_and_install}/docker_install.rst (100%) rename doc/{build => introduction/build_and_install}/index.rst (80%) rename doc/{build => introduction/build_and_install}/ubuntu_install.rst (100%) delete mode 100644 doc/introduction/index.md create mode 100644 doc/introduction/index.rst delete mode 120000 doc/introduction/parameters.png rename doc/{demo => tutorials}/embedding_model/index.md (100%) rename doc/{demo => tutorials}/embedding_model/neural-n-gram-model.png (100%) rename doc/{demo => tutorials}/image_classification/cifar.png (100%) rename doc/{demo => tutorials}/image_classification/image_classification.md (100%) rename doc/{demo => tutorials}/image_classification/image_classification.png (100%) rename doc/{demo => tutorials}/image_classification/index.rst (100%) rename doc/{demo => tutorials}/image_classification/lenet.png (100%) rename doc/{demo => tutorials}/image_classification/plot.png (100%) rename doc/{demo => tutorials}/imagenet_model/resnet_block.jpg (100%) rename doc/{demo => tutorials}/imagenet_model/resnet_model.md (100%) rename doc/{demo => tutorials}/index.md (96%) rename doc/{demo => tutorials}/quick_start/NetContinuous_en.png (100%) rename doc/{demo => tutorials}/quick_start/NetConv_en.png (100%) rename doc/{demo => tutorials}/quick_start/NetLR_en.png (100%) rename doc/{demo => tutorials}/quick_start/NetRNN_en.png (100%) rename doc/{demo => tutorials}/quick_start/PipelineNetwork_en.jpg (100%) rename doc/{demo => tutorials}/quick_start/PipelineTest_en.png (100%) rename doc/{demo => tutorials}/quick_start/PipelineTrain_en.png (100%) rename doc/{demo => tutorials}/quick_start/Pipeline_en.jpg (100%) rename doc/{demo => tutorials}/quick_start/index_en.md (100%) rename doc/{demo => tutorials}/rec/ml_dataset.md (100%) rename doc/{demo => tutorials}/rec/ml_regression.rst (100%) rename doc/{demo => tutorials}/rec/rec_regression_network.png (100%) rename doc/{demo => tutorials}/semantic_role_labeling/curve.jpg (100%) rename doc/{demo => tutorials}/semantic_role_labeling/feature.jpg (100%) rename doc/{demo => tutorials}/semantic_role_labeling/index.rst (100%) rename doc/{demo => tutorials}/semantic_role_labeling/network_arch.png (100%) rename doc/{demo => tutorials}/semantic_role_labeling/semantic_role_labeling.md (97%) rename doc/{demo => tutorials}/sentiment_analysis/bi_lstm.jpg (100%) rename doc/{demo => tutorials}/sentiment_analysis/index.rst (100%) rename doc/{demo => tutorials}/sentiment_analysis/lstm.png (100%) rename doc/{demo => tutorials}/sentiment_analysis/sentiment_analysis.md (100%) rename doc/{demo => tutorials}/sentiment_analysis/stacked_lstm.jpg (100%) rename doc/{demo => tutorials}/text_generation/encoder-decoder-attention-model.png (100%) rename doc/{demo => 
tutorials}/text_generation/index.rst (100%) rename doc/{demo => tutorials}/text_generation/text_generation.md (100%) delete mode 100644 doc/ui/index.md delete mode 100644 doc/user_guide.rst
diff --git a/doc/about/index.rst b/doc/about/index.rst new file mode 100644 index 0000000000..c70940ca85 --- /dev/null +++ b/doc/about/index.rst @@ -0,0 +1,10 @@
+Credits
+========
+
+PaddlePaddle is an easy-to-use, efficient, flexible and scalable deep learning platform,
+which was originally developed by Baidu scientists and engineers for the purpose of applying deep learning to many products at Baidu.
+
+PaddlePaddle is now open source but far from complete; it is intended to be built upon, improved, scaled, and extended.
+We hope you will help us build an active open source community, both by providing feedback and by actively contributing to the source code.
+
+We owe many thanks to `all contributors and developers `_ of PaddlePaddle!
diff --git a/doc/algorithm/index.rst b/doc/algorithm/index.rst deleted file mode 100644 index 6073add3c0..0000000000 --- a/doc/algorithm/index.rst +++ /dev/null @@ -1,7 +0,0 @@
-Algorithm Tutorial
-==================
-
-.. toctree::
-   :maxdepth: 1
-
-   rnn/rnn.rst
diff --git a/doc/algorithm/rnn/bi_lstm.jpg b/doc/algorithm/rnn/bi_lstm.jpg deleted file mode 120000 index a53296cf80..0000000000 --- a/doc/algorithm/rnn/bi_lstm.jpg +++ /dev/null @@ -1 +0,0 @@
-../../demo/sentiment_analysis/bi_lstm.jpg
\ No newline at end of file
diff --git a/doc/algorithm/rnn/encoder-decoder-attention-model.png b/doc/algorithm/rnn/encoder-decoder-attention-model.png deleted file mode 120000 index db71321a43..0000000000 --- a/doc/algorithm/rnn/encoder-decoder-attention-model.png +++ /dev/null @@ -1 +0,0 @@
-../../demo/text_generation/encoder-decoder-attention-model.png
\ No newline at end of file
diff --git a/doc/ui/data_provider/index.rst b/doc/api/data_provider/index.rst similarity index 100% rename from doc/ui/data_provider/index.rst rename to doc/api/data_provider/index.rst
diff --git a/doc/ui/data_provider/pydataprovider2.rst b/doc/api/data_provider/pydataprovider2.rst similarity index 100% rename from doc/ui/data_provider/pydataprovider2.rst rename to doc/api/data_provider/pydataprovider2.rst
diff --git a/doc/api/index.md b/doc/api/index.md new file mode 100644 index 0000000000..8c4a65e0d5 --- /dev/null +++ b/doc/api/index.md @@ -0,0 +1,14 @@
+# API
+
+## Data Provider
+
+* [Introduction](data_provider/index.rst)
+* [PyDataProvider2](data_provider/pydataprovider2.rst)
+
+## Trainer Configuration
+
+* [Model Config Interface](trainer_config_helpers/index.rst)
+
+## Predict
+
+* [Python Prediction API](predict/swig_py_paddle_en.rst)
diff --git a/doc/ui/predict/predict_sample.py b/doc/api/predict/predict_sample.py similarity index 100% rename from doc/ui/predict/predict_sample.py rename to doc/api/predict/predict_sample.py
diff --git a/doc/ui/predict/swig_py_paddle_en.rst b/doc/api/predict/swig_py_paddle_en.rst similarity index 100% rename from doc/ui/predict/swig_py_paddle_en.rst rename to doc/api/predict/swig_py_paddle_en.rst
diff --git a/doc/ui/api/trainer_config_helpers/activations.rst b/doc/api/trainer_config_helpers/activations.rst similarity index 100% rename from doc/ui/api/trainer_config_helpers/activations.rst rename to doc/api/trainer_config_helpers/activations.rst
diff --git a/doc/ui/api/trainer_config_helpers/attrs.rst b/doc/api/trainer_config_helpers/attrs.rst similarity index 100% rename from doc/ui/api/trainer_config_helpers/attrs.rst rename to doc/api/trainer_config_helpers/attrs.rst
diff --git a/doc/ui/api/trainer_config_helpers/data_sources.rst b/doc/api/trainer_config_helpers/data_sources.rst similarity index 100% rename from doc/ui/api/trainer_config_helpers/data_sources.rst rename to doc/api/trainer_config_helpers/data_sources.rst diff --git a/doc/ui/api/trainer_config_helpers/evaluators.rst b/doc/api/trainer_config_helpers/evaluators.rst similarity index 100% rename from doc/ui/api/trainer_config_helpers/evaluators.rst rename to doc/api/trainer_config_helpers/evaluators.rst diff --git a/doc/ui/api/trainer_config_helpers/index.rst b/doc/api/trainer_config_helpers/index.rst similarity index 100% rename from doc/ui/api/trainer_config_helpers/index.rst rename to doc/api/trainer_config_helpers/index.rst diff --git a/doc/ui/api/trainer_config_helpers/layers.rst b/doc/api/trainer_config_helpers/layers.rst similarity index 100% rename from doc/ui/api/trainer_config_helpers/layers.rst rename to doc/api/trainer_config_helpers/layers.rst diff --git a/doc/ui/api/trainer_config_helpers/networks.rst b/doc/api/trainer_config_helpers/networks.rst similarity index 100% rename from doc/ui/api/trainer_config_helpers/networks.rst rename to doc/api/trainer_config_helpers/networks.rst diff --git a/doc/ui/api/trainer_config_helpers/optimizers.rst b/doc/api/trainer_config_helpers/optimizers.rst similarity index 100% rename from doc/ui/api/trainer_config_helpers/optimizers.rst rename to doc/api/trainer_config_helpers/optimizers.rst diff --git a/doc/ui/api/trainer_config_helpers/poolings.rst b/doc/api/trainer_config_helpers/poolings.rst similarity index 100% rename from doc/ui/api/trainer_config_helpers/poolings.rst rename to doc/api/trainer_config_helpers/poolings.rst diff --git a/doc/cluster/index.rst b/doc/cluster/index.rst deleted file mode 100644 index 9062f85f98..0000000000 --- a/doc/cluster/index.rst +++ /dev/null @@ -1,8 +0,0 @@ -Cluster Train -==================== - -.. toctree:: - :glob: - - opensource/cluster_train.md - internal/index.md diff --git a/doc/dev/layer.md b/doc/dev/layer.md deleted file mode 100644 index 930fb0de1a..0000000000 --- a/doc/dev/layer.md +++ /dev/null @@ -1,4 +0,0 @@ -# Layer Documents - -* [Layer Source Code Document](../source/gserver/layers/index.rst) -* [Layer Python API Document](../ui/api/trainer_config_helpers/index.rst) diff --git a/doc/howto/algorithm/index.rst b/doc/howto/algorithm/index.rst new file mode 100644 index 0000000000..b4ecbc4847 --- /dev/null +++ b/doc/howto/algorithm/index.rst @@ -0,0 +1,7 @@ +Algorithm Configuration +======================= + +.. toctree:: + :maxdepth: 1 + + rnn/rnn.rst diff --git a/doc/algorithm/rnn/rnn.rst b/doc/howto/algorithm/rnn/rnn.rst similarity index 100% rename from doc/algorithm/rnn/rnn.rst rename to doc/howto/algorithm/rnn/rnn.rst diff --git a/doc/cluster/opensource/cluster_train.md b/doc/howto/cluster/cluster_train.md similarity index 99% rename from doc/cluster/opensource/cluster_train.md rename to doc/howto/cluster/cluster_train.md index cb493a88f0..6b68596dc1 100644 --- a/doc/cluster/opensource/cluster_train.md +++ b/doc/howto/cluster/cluster_train.md @@ -9,7 +9,7 @@ In this article, we explain how to run distributed Paddle training jobs on clust 1. Aforementioned scripts use a Python library [fabric](http://www.fabfile.org/) to run SSH commands. We can use `pip` to install fabric: ```bash -pip install fabric + pip install fabric ``` 1. We need to install PaddlePaddle on all nodes in the cluster. 
To enable GPUs, we need to install CUDA in `/usr/local/cuda`; otherwise Paddle would report errors at runtime. diff --git a/doc/ui/cmd_argument/argument_outline.md b/doc/howto/cmd_argument/argument_outline.md similarity index 100% rename from doc/ui/cmd_argument/argument_outline.md rename to doc/howto/cmd_argument/argument_outline.md diff --git a/doc/ui/cmd_argument/detail_introduction.md b/doc/howto/cmd_argument/detail_introduction.md similarity index 100% rename from doc/ui/cmd_argument/detail_introduction.md rename to doc/howto/cmd_argument/detail_introduction.md diff --git a/doc/howto/cmd_argument/index.md b/doc/howto/cmd_argument/index.md new file mode 100644 index 0000000000..90472c44cb --- /dev/null +++ b/doc/howto/cmd_argument/index.md @@ -0,0 +1,5 @@ +# Command Line Argument + +* [Use Case](use_case.md) +* [Argument Outline](argument_outline.md) +* [Detailed Descriptions](detail_introduction.md) diff --git a/doc/ui/cmd_argument/use_case.md b/doc/howto/cmd_argument/use_case.md similarity index 100% rename from doc/ui/cmd_argument/use_case.md rename to doc/howto/cmd_argument/use_case.md diff --git a/doc/build/contribute_to_paddle.md b/doc/howto/contribute_to_paddle.md similarity index 100% rename from doc/build/contribute_to_paddle.md rename to doc/howto/contribute_to_paddle.md diff --git a/doc/dev/index.rst b/doc/howto/dev/index.rst similarity index 71% rename from doc/dev/index.rst rename to doc/howto/dev/index.rst index 0468dd492b..876c42e9db 100644 --- a/doc/dev/index.rst +++ b/doc/howto/dev/index.rst @@ -2,8 +2,8 @@ Development Guide ================= .. toctree:: - :maxdepth: 1 + :maxdepth: 2 layer.md new_layer/new_layer.rst - ../source/index.md + source/index.rst diff --git a/doc/howto/dev/layer.md b/doc/howto/dev/layer.md new file mode 100644 index 0000000000..1ce0cc5829 --- /dev/null +++ b/doc/howto/dev/layer.md @@ -0,0 +1,5 @@ +# Layer Documents + +* [Layer Python API](../../api/trainer_config_helpers/index.rst) +* [Layer Source Code](source/gserver/layers.rst) +* [Writing New Layers](new_layer/new_layer.rst) diff --git a/doc/dev/new_layer/FullyConnected.jpg b/doc/howto/dev/new_layer/FullyConnected.jpg similarity index 100% rename from doc/dev/new_layer/FullyConnected.jpg rename to doc/howto/dev/new_layer/FullyConnected.jpg diff --git a/doc/dev/new_layer/new_layer.rst b/doc/howto/dev/new_layer/new_layer.rst similarity index 100% rename from doc/dev/new_layer/new_layer.rst rename to doc/howto/dev/new_layer/new_layer.rst diff --git a/doc/source/api.rst b/doc/howto/dev/source/api.rst similarity index 100% rename from doc/source/api.rst rename to doc/howto/dev/source/api.rst diff --git a/doc/source/cuda/index.rst b/doc/howto/dev/source/cuda/index.rst similarity index 100% rename from doc/source/cuda/index.rst rename to doc/howto/dev/source/cuda/index.rst diff --git a/doc/source/cuda/matrix.rst b/doc/howto/dev/source/cuda/matrix.rst similarity index 100% rename from doc/source/cuda/matrix.rst rename to doc/howto/dev/source/cuda/matrix.rst diff --git a/doc/source/cuda/nn.rst b/doc/howto/dev/source/cuda/nn.rst similarity index 100% rename from doc/source/cuda/nn.rst rename to doc/howto/dev/source/cuda/nn.rst diff --git a/doc/source/cuda/utils.rst b/doc/howto/dev/source/cuda/utils.rst similarity index 100% rename from doc/source/cuda/utils.rst rename to doc/howto/dev/source/cuda/utils.rst diff --git a/doc/source/gserver/activations.rst b/doc/howto/dev/source/gserver/activations.rst similarity index 100% rename from doc/source/gserver/activations.rst rename to 
doc/howto/dev/source/gserver/activations.rst diff --git a/doc/source/gserver/dataproviders.rst b/doc/howto/dev/source/gserver/dataproviders.rst similarity index 100% rename from doc/source/gserver/dataproviders.rst rename to doc/howto/dev/source/gserver/dataproviders.rst diff --git a/doc/source/gserver/evaluators.rst b/doc/howto/dev/source/gserver/evaluators.rst similarity index 100% rename from doc/source/gserver/evaluators.rst rename to doc/howto/dev/source/gserver/evaluators.rst diff --git a/doc/source/gserver/gradientmachines.rst b/doc/howto/dev/source/gserver/gradientmachines.rst similarity index 100% rename from doc/source/gserver/gradientmachines.rst rename to doc/howto/dev/source/gserver/gradientmachines.rst diff --git a/doc/source/gserver/index.rst b/doc/howto/dev/source/gserver/index.rst similarity index 100% rename from doc/source/gserver/index.rst rename to doc/howto/dev/source/gserver/index.rst diff --git a/doc/source/gserver/layers.rst b/doc/howto/dev/source/gserver/layers.rst similarity index 100% rename from doc/source/gserver/layers.rst rename to doc/howto/dev/source/gserver/layers.rst diff --git a/doc/source/gserver/neworks.rst b/doc/howto/dev/source/gserver/neworks.rst similarity index 100% rename from doc/source/gserver/neworks.rst rename to doc/howto/dev/source/gserver/neworks.rst diff --git a/doc/source/index.rst b/doc/howto/dev/source/index.rst similarity index 100% rename from doc/source/index.rst rename to doc/howto/dev/source/index.rst diff --git a/doc/source/math/functions.rst b/doc/howto/dev/source/math/functions.rst similarity index 100% rename from doc/source/math/functions.rst rename to doc/howto/dev/source/math/functions.rst diff --git a/doc/source/math/index.rst b/doc/howto/dev/source/math/index.rst similarity index 100% rename from doc/source/math/index.rst rename to doc/howto/dev/source/math/index.rst diff --git a/doc/source/math/matrix.rst b/doc/howto/dev/source/math/matrix.rst similarity index 100% rename from doc/source/math/matrix.rst rename to doc/howto/dev/source/math/matrix.rst diff --git a/doc/source/math/utils.rst b/doc/howto/dev/source/math/utils.rst similarity index 100% rename from doc/source/math/utils.rst rename to doc/howto/dev/source/math/utils.rst diff --git a/doc/source/math/vector.rst b/doc/howto/dev/source/math/vector.rst similarity index 100% rename from doc/source/math/vector.rst rename to doc/howto/dev/source/math/vector.rst diff --git a/doc/source/parameter/index.rst b/doc/howto/dev/source/parameter/index.rst similarity index 100% rename from doc/source/parameter/index.rst rename to doc/howto/dev/source/parameter/index.rst diff --git a/doc/source/parameter/optimizer.rst b/doc/howto/dev/source/parameter/optimizer.rst similarity index 100% rename from doc/source/parameter/optimizer.rst rename to doc/howto/dev/source/parameter/optimizer.rst diff --git a/doc/source/parameter/parameter.rst b/doc/howto/dev/source/parameter/parameter.rst similarity index 100% rename from doc/source/parameter/parameter.rst rename to doc/howto/dev/source/parameter/parameter.rst diff --git a/doc/source/parameter/updater.rst b/doc/howto/dev/source/parameter/updater.rst similarity index 100% rename from doc/source/parameter/updater.rst rename to doc/howto/dev/source/parameter/updater.rst diff --git a/doc/source/pserver/client.rst b/doc/howto/dev/source/pserver/client.rst similarity index 100% rename from doc/source/pserver/client.rst rename to doc/howto/dev/source/pserver/client.rst diff --git a/doc/source/pserver/index.rst 
b/doc/howto/dev/source/pserver/index.rst similarity index 100% rename from doc/source/pserver/index.rst rename to doc/howto/dev/source/pserver/index.rst
diff --git a/doc/source/pserver/network.rst b/doc/howto/dev/source/pserver/network.rst similarity index 100% rename from doc/source/pserver/network.rst rename to doc/howto/dev/source/pserver/network.rst
diff --git a/doc/source/pserver/server.rst b/doc/howto/dev/source/pserver/server.rst similarity index 100% rename from doc/source/pserver/server.rst rename to doc/howto/dev/source/pserver/server.rst
diff --git a/doc/source/trainer.rst b/doc/howto/dev/source/trainer.rst similarity index 100% rename from doc/source/trainer.rst rename to doc/howto/dev/source/trainer.rst
diff --git a/doc/source/utils/customStackTrace.rst b/doc/howto/dev/source/utils/customStackTrace.rst similarity index 100% rename from doc/source/utils/customStackTrace.rst rename to doc/howto/dev/source/utils/customStackTrace.rst
diff --git a/doc/source/utils/enum.rst b/doc/howto/dev/source/utils/enum.rst similarity index 100% rename from doc/source/utils/enum.rst rename to doc/howto/dev/source/utils/enum.rst
diff --git a/doc/source/utils/index.rst b/doc/howto/dev/source/utils/index.rst similarity index 100% rename from doc/source/utils/index.rst rename to doc/howto/dev/source/utils/index.rst
diff --git a/doc/source/utils/lock.rst b/doc/howto/dev/source/utils/lock.rst similarity index 100% rename from doc/source/utils/lock.rst rename to doc/howto/dev/source/utils/lock.rst
diff --git a/doc/source/utils/queue.rst b/doc/howto/dev/source/utils/queue.rst similarity index 100% rename from doc/source/utils/queue.rst rename to doc/howto/dev/source/utils/queue.rst
diff --git a/doc/source/utils/thread.rst b/doc/howto/dev/source/utils/thread.rst similarity index 100% rename from doc/source/utils/thread.rst rename to doc/howto/dev/source/utils/thread.rst
diff --git a/doc/howto/index.rst b/doc/howto/index.rst new file mode 100644 index 0000000000..e2d688e186 --- /dev/null +++ b/doc/howto/index.rst @@ -0,0 +1,11 @@
+How to
+=======
+
+.. toctree::
+   :maxdepth: 1
+
+   cmd_argument/index.md
+   cluster/cluster_train.md
+   algorithm/index.rst
+   dev/index.rst
+   contribute_to_paddle.md
\ No newline at end of file
diff --git a/doc/index.rst b/doc/index.rst index 668ad75a90..a7feed5239 100644 --- a/doc/index.rst +++ b/doc/index.rst @@ -4,7 +4,9 @@ PaddlePaddle Documentation
 .. toctree::
    :maxdepth: 1
-   introduction/index.md
-   user_guide.rst
-   dev/index.rst
-   algorithm/index.rst
+   introduction/index.rst
+   tutorials/index.md
+   howto/index.rst
+   api/index.rst
+   about/index.rst
+
diff --git a/doc/introduction/basic_usage/basic_usage.rst b/doc/introduction/basic_usage/basic_usage.rst new file mode 100644 index 0000000000..dca7a6b1f4 --- /dev/null +++ b/doc/introduction/basic_usage/basic_usage.rst @@ -0,0 +1,109 @@
+Basic Usage
+=============
+
+PaddlePaddle is a deep learning platform open-sourced by Baidu. With PaddlePaddle, you can easily train a classic neural network within a couple of lines of configuration, or you can build sophisticated models that deliver state-of-the-art performance on difficult learning tasks such as sentiment analysis, machine translation, and image captioning.
+
+1. A Classic Problem
+---------------------
+
+Now, to give you a hint of what using PaddlePaddle looks like, let's start with a fundamental learning problem - `simple linear regression `_: you have observed a set of two-dimensional data points of ``X`` and ``Y``, where ``X`` is an explanatory variable and ``Y`` is the corresponding dependent variable, and you want to recover the underlying correlation between ``X`` and ``Y``. Linear regression can be used in many practical scenarios. For example, ``X`` can be a variable describing house size, and ``Y`` a variable describing house price. You can build a model that captures the relationship between them by observing real estate markets.
+
+2. Prepare the Data
+--------------------
+
+Suppose the true relationship can be characterized as ``Y = 2X + 0.3``; let's see how to recover this pattern from observed data alone. Here is a piece of Python code that feeds synthetic data to PaddlePaddle. The code is pretty self-explanatory; the only extra thing you need to add for PaddlePaddle is a definition of the input data types.
+
+ .. code-block:: python
+
+     # dataprovider.py
+     from paddle.trainer.PyDataProvider2 import *
+     import random
+
+     # define data types of input: 2 real numbers
+     @provider(input_types=[dense_vector(1), dense_vector(1)], use_seq=False)
+     def process(settings, input_file):
+         for i in xrange(2000):
+             x = random.random()
+             yield [x], [2*x+0.3]
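For reference, the generator logic above can also be exercised outside PaddlePaddle to see the exact sample format the provider yields. The snippet below is only an illustrative sketch, not part of the patch; the `synthetic_samples` name is hypothetical.

```python
# Sketch: the same generator without the @provider decorator, so the
# [features], [label] pairs can be printed and inspected directly.
import random

def synthetic_samples(n):
    for i in xrange(n):
        x = random.random()
        yield [x], [2*x + 0.3]

for features, label in synthetic_samples(3):
    print features, label  # e.g. [0.54...] [1.38...]
```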
+3. Train a Neural Network
+--------------------------
+
+To recover this relationship between ``X`` and ``Y``, we use a neural network with one layer of linear activation units and a square error cost layer. Don't worry if you are not familiar with these terms; it just means that we are starting from a random line ``Y' = wX + b``, then we gradually adapt ``w`` and ``b`` to minimize the difference between ``Y'`` and ``Y``. Here is what it looks like in PaddlePaddle:
+
+ .. code-block:: python
+
+     # trainer_config.py
+     from paddle.trainer_config_helpers import *
+
+     # 1. read data. Suppose you saved the above Python code as dataprovider.py
+     data_file = 'empty.list'
+     with open(data_file, 'w') as f: f.writelines(' ')
+     define_py_data_sources2(train_list=data_file, test_list=None,
+             module='dataprovider', obj='process', args={})
+
+     # 2. learning algorithm
+     settings(batch_size=12, learning_rate=1e-3, learning_method=MomentumOptimizer())
+
+     # 3. Network configuration
+     x = data_layer(name='x', size=1)
+     y = data_layer(name='y', size=1)
+     y_predict = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b'))
+     cost = regression_cost(input=y_predict, label=y)
+     outputs(cost)
+
+Some of the most fundamental usages of PaddlePaddle are demonstrated here:
+
+- The first part shows how to feed data into PaddlePaddle. In general cases, PaddlePaddle reads raw data from a list of files, and then does some user-defined processing to get the real input. In this case, we only need to create a placeholder file since we are generating synthetic data on the fly.
+
+- The second part describes the learning algorithm. It defines in what ways adjustments are made to model parameters. PaddlePaddle provides a rich set of optimizers, but a simple momentum-based optimizer will suffice here; it processes 12 data points at a time (a plain numpy sketch of this update follows the list below).
+
+- Finally, the network configuration. It usually is as simple as "stacking" layers. Three kinds of layers are used in this configuration:
+  - **Data Layer**: a network always starts with one or more data layers. They provide input data to the rest of the network. In this problem, two data layers are used, for ``X`` and ``Y`` respectively.
+  - **FC Layer**: FC layer is short for Fully Connected Layer, which connects all the input units to the current layer and does the actual computation specified by its activation function. Computation layers like this are the fundamental building blocks of a deeper model.
+  - **Cost Layer**: in the training phase, cost layers are usually the last layers of the network. They measure the performance of the current model, and provide guidance for adjusting parameters.
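To make the optimizer's role in the second part concrete, here is the update it performs written out in plain numpy. This is an illustrative sketch of a single momentum-SGD step on one batch, not the PaddlePaddle implementation; the momentum coefficient of 0.9 is an assumed value, since the `settings` call above does not specify one.

```python
# Sketch: one momentum-SGD update on a batch of 12 points, mirroring the
# batch_size=12 and learning_rate=1e-3 used in trainer_config.py.
import numpy as np

learning_rate, momentum = 1e-3, 0.9  # momentum value is an assumption
w, b = 0.0, 0.0                      # parameters, starting from a guess
vw, vb = 0.0, 0.0                    # their velocities

x = np.random.rand(12)               # one batch of 12 observations
y = 2 * x + 0.3
err = (w * x + b) - y                # prediction error on the batch
gw = 2 * np.mean(err * x)            # gradient of mean squared error w.r.t. w
gb = 2 * np.mean(err)                # gradient w.r.t. b
vw = momentum * vw - learning_rate * gw
vb = momentum * vb - learning_rate * gb
w, b = w + vw, b + vb                # repeated over many batches, w -> 2, b -> 0.3
print 'after one step: w=%.6f, b=%.6f' % (w, b)
```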
+
+Now that everything is ready, you can train the network with a simple command line call:
+
+ .. code-block:: bash
+
+     paddle train --config=trainer_config.py --save_dir=./output --num_passes=30
+
+
+This means that PaddlePaddle will train this network on the synthetic dataset for 30 passes, and save all the models under the path ``./output``. You will see from the messages printed out during the training phase that the model cost is decreasing as time goes by, which indicates that our guess is getting closer to the true relationship.
+
+
+4. Evaluate the Model
+-----------------------
+
+Usually, a separate dataset that was left out during the training phase should be used to evaluate the model. However, since we know the real answer here (``w=2, b=0.3``), a simpler option is to inspect the model parameters directly.
+
+In PaddlePaddle, training simply produces a collection of model parameters, which are ``w`` and ``b`` in this case. Each parameter is saved in an individual file in the popular ``numpy`` array format. Here is the code that reads the parameters from the last pass.
+
+ .. code-block:: python
+
+     import numpy as np
+     import os
+
+     def load(file_name):
+         with open(file_name, 'rb') as f:
+             f.read(16) # skip header for float type.
+             return np.fromfile(f, dtype=np.float32)
+
+     print 'w=%.6f, b=%.6f' % (load('output/pass-00029/w'), load('output/pass-00029/b'))
+     # w=1.999743, b=0.300137
+
+ .. image:: parameters.png
+    :align: center
+
+Although training starts from a random guess, you can see that the value of ``w`` quickly moves towards 2 and ``b`` towards 0.3. In the end, the predicted line is almost identical to the real answer.
+
+With that, you have recovered the underlying pattern between ``X`` and ``Y`` from observed data alone.
+
+
+5. Where to Go from Here
+-------------------------
+
+- `Install and Build <../build_and_install/index.html>`_
+- `Tutorials <../demo/quick_start/index_en.html>`_
+- `Example and Demo <../demo/index.html>`_
diff --git a/doc/introduction/basic_usage/parameters.png b/doc/introduction/basic_usage/parameters.png new file mode 100644 index 0000000000000000000000000000000000000000..2ec67480951e21f0400bce1c34b3108dcd65c18c
GIT binary patch literal 44469 [base85-encoded PNG payload omitted]
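The parameters.png added above plots the trajectories of ``w`` and ``b`` over the training passes. A similar figure can be reproduced from the saved output with a short script; this is a minimal sketch, assuming the ``./output`` layout produced by the training command and that matplotlib is available (neither the script nor matplotlib is part of the patch).

```python
# Sketch: trace w and b across the 30 saved passes to visualize convergence.
import numpy as np
import matplotlib.pyplot as plt

def load(file_name):
    with open(file_name, 'rb') as f:
        f.read(16)  # skip the 16-byte header used for float parameters
        return np.fromfile(f, dtype=np.float32)

ws = [load('output/pass-%05d/w' % i)[0] for i in xrange(30)]
bs = [load('output/pass-%05d/b' % i)[0] for i in xrange(30)]
plt.plot(ws, label='w (true value: 2)')
plt.plot(bs, label='b (true value: 0.3)')
plt.xlabel('pass')
plt.legend()
plt.savefig('parameters.png')
```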
zlQ#;936%aFTq+O^(Sl@NEcw7c4MA+FLl9Qtmi!wY@f-IF*aT*P3IDH#AmRh6k1<{O j{~Ea`=**simple linear regression** : you have observed a set of two-dimensional data points of `X` and `Y`, where `X` is an explanatory variable and `Y` is corresponding dependent variable, and you want to recover the underlying correlation between `X` and `Y`. Linear regression can be used in many practical scenarios. For example, `X` can be a variable about house size, and `Y` a variable about house price. You can build a model that captures relationship between them by observing real estate markets. - -## 2. Prepare the Data - -Suppose the true relationship can be characterized as `Y = 2X + 0.3`, let's see how to recover this pattern only from observed data. Here is a piece of python code that feeds synthetic data to PaddlePaddle. The code is pretty self-explanatory, the only extra thing you need to add for PaddlePaddle is a definition of input data types. - -```python -# dataprovider.py -from paddle.trainer.PyDataProvider2 import * -import random - -# define data types of input: 2 real numbers -@provider(input_types=[dense_vector(1), dense_vector(1)],use_seq=False) -def process(settings, input_file): - for i in xrange(2000): - x = random.random() - yield [x], [2*x+0.3] -``` - -## 3. Train a NeuralNetwork in PaddlePaddle - -To recover this relationship between `X` and `Y`, we use a neural network with one layer of linear activation units and a square error cost layer. Don't worry if you are not familiar with these terminologies, it's just saying that we are starting from a random line `Y' = wX + b` , then we gradually adapt `w` and `b` to minimize the difference between `Y'` and `Y`. Here is what it looks like in PaddlePaddle: - -```python -# trainer_config.py -from paddle.trainer_config_helpers import * - -# 1. read data. Suppose you saved above python code as dataprovider.py -data_file = 'empty.list' -with open(data_file, 'w') as f: f.writelines(' ') -define_py_data_sources2(train_list=data_file, test_list=None, - module='dataprovider', obj='process',args={}) - -# 2. learning algorithm -settings(batch_size=12, learning_rate=1e-3, learning_method=MomentumOptimizer()) - -# 3. Network configuration -x = data_layer(name='x', size=1) -y = data_layer(name='y', size=1) -y_predict = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b')) -cost = regression_cost(input=y_predict, label=y) -outputs(cost) -``` - -Some of the most fundamental usages of PaddlePaddle are demonstrated: - -- The first part shows how to feed data into PaddlePaddle. In general cases, PaddlePaddle reads raw data from a list of files, and then do some user-defined process to get real input. In this case, we only need to create a placeholder file since we are generating synthetic data on the fly. - -- The second part describes learning algorithm. It defines in what ways adjustments are made to model parameters. PaddlePaddle provides a rich set of optimizers, but a simple momentum based optimizer will suffice here, and it processes 12 data points each time. - -- Finally, the network configuration. It usually is as simple as "stacking" layers. Three kinds of layers are used in this configuration: - - **Data Layer**: a network always starts with one or more data layers. They provide input data to the rest of the network. In this problem, two data layers are used respectively for `X` and `Y`. 
-  - **FC Layer**: FC layer is short for Fully Connected Layer, which connects all the input units to the current layer and performs the actual computation specified by its activation function. Computation layers like this are the fundamental building blocks of a deeper model.
-  - **Cost Layer**: in the training phase, cost layers are usually the last layers of the network. They measure the performance of the current model and provide guidance for adjusting its parameters.
-
-Now that everything is ready, you can train the network with a simple command line call:
-    ```
-    paddle train --config=trainer_config.py --save_dir=./output --num_passes=30
-    ```
-
-This means that PaddlePaddle will train this network on the synthetic dataset for 30 passes, and save all the models under the path `./output`. You will see from the messages printed out during the training phase that the model cost is decreasing as time goes by, which indicates that we are getting a closer guess.
-
-
-## 4. Evaluate the Model
-
-Usually, a different dataset that was left out during the training phase should be used to evaluate the models. However, we are lucky enough to know the real answer: `w=2, b=0.3`, thus a better option is to check the model parameters directly.
-
-In PaddlePaddle, training just produces a collection of model parameters, which are `w` and `b` in this case. Each parameter is saved in an individual file in the popular `numpy` array format. Here is the code that reads the parameters from the last pass.
-
-```python
-import numpy as np
-import os
-
-def load(file_name):
-    with open(file_name, 'rb') as f:
-        f.read(16) # skip header for float type.
-        return np.fromfile(f, dtype=np.float32)
-
-print 'w=%.6f, b=%.6f' % (load('output/pass-00029/w'), load('output/pass-00029/b'))
-# w=1.999743, b=0.300137
-```
-
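Beyond eyeballing the two printed numbers, you could reuse the `load` helper above to check how far the learned line strays from the ground truth on freshly generated points. The snippet below is an editor's illustration rather than part of the original demo; it assumes the `output/pass-00029/w` and `output/pass-00029/b` files produced by the training command above.

```python
import random
import numpy as np

w = load('output/pass-00029/w')[0]  # learned slope, expected to approach 2
b = load('output/pass-00029/b')[0]  # learned intercept, expected to approach 0.3

# mean squared error of the learned line against Y = 2X + 0.3 on fresh points
xs = [random.random() for _ in xrange(1000)]
errors = [(w * x + b) - (2 * x + 0.3) for x in xs]
print 'mse=%.8f' % np.mean(np.square(errors))
```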
-![](./parameters.png)
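The per-pass model directories make it easy to reproduce a curve like the one above. This is a hypothetical sketch, not demo code; it assumes all 30 passes were saved under `./output` and reuses the same 16-byte-header file format as `load()`.

```python
import numpy as np

def load_param(file_name):
    # same format as load() above: skip the 16-byte header, then float32 data
    with open(file_name, 'rb') as f:
        f.read(16)
        return np.fromfile(f, dtype=np.float32)[0]

for i in xrange(30):  # one directory per pass: output/pass-00000 ... output/pass-00029
    prefix = 'output/pass-%05d' % i
    print 'pass %02d: w=%.4f, b=%.4f' % (i, load_param(prefix + '/w'), load_param(prefix + '/b'))
```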
-
-Although it starts from a random guess, you can see that the value of `w` changes quickly towards 2 and `b` changes quickly towards 0.3. In the end, the predicted line is almost identical to the real answer.
-
-There, you have recovered the underlying pattern between `X` and `Y` only from observed data.
-
-
-## 5. Where to Go from Here
-
-- Build and Installation
-- Quick Start
-- Example and Demo
diff --git a/doc/introduction/index.rst b/doc/introduction/index.rst
new file mode 100644
index 0000000000..ff22f05a1b
--- /dev/null
+++ b/doc/introduction/index.rst
@@ -0,0 +1,8 @@
+Introduction
+============
+
+.. toctree::
+   :maxdepth: 2
+
+   build_and_install/index.rst
+   basic_usage/basic_usage.rst
diff --git a/doc/introduction/parameters.png b/doc/introduction/parameters.png
deleted file mode 120000
index f47e74c94f..0000000000
--- a/doc/introduction/parameters.png
+++ /dev/null
@@ -1 +0,0 @@
-../../doc_cn/introduction/parameters.png
\ No newline at end of file
diff --git a/doc/demo/embedding_model/index.md b/doc/tutorials/embedding_model/index.md
similarity index 100%
rename from doc/demo/embedding_model/index.md
rename to doc/tutorials/embedding_model/index.md
diff --git a/doc/demo/embedding_model/neural-n-gram-model.png b/doc/tutorials/embedding_model/neural-n-gram-model.png
similarity index 100%
rename from doc/demo/embedding_model/neural-n-gram-model.png
rename to doc/tutorials/embedding_model/neural-n-gram-model.png
diff --git a/doc/demo/image_classification/cifar.png b/doc/tutorials/image_classification/cifar.png
similarity index 100%
rename from doc/demo/image_classification/cifar.png
rename to doc/tutorials/image_classification/cifar.png
diff --git a/doc/demo/image_classification/image_classification.md b/doc/tutorials/image_classification/image_classification.md
similarity index 100%
rename from doc/demo/image_classification/image_classification.md
rename to doc/tutorials/image_classification/image_classification.md
diff --git a/doc/demo/image_classification/image_classification.png b/doc/tutorials/image_classification/image_classification.png
similarity index 100%
rename from doc/demo/image_classification/image_classification.png
rename to doc/tutorials/image_classification/image_classification.png
diff --git a/doc/demo/image_classification/index.rst b/doc/tutorials/image_classification/index.rst
similarity index 100%
rename from doc/demo/image_classification/index.rst
rename to doc/tutorials/image_classification/index.rst
diff --git a/doc/demo/image_classification/lenet.png b/doc/tutorials/image_classification/lenet.png
similarity index 100%
rename from doc/demo/image_classification/lenet.png
rename to doc/tutorials/image_classification/lenet.png
diff --git a/doc/demo/image_classification/plot.png b/doc/tutorials/image_classification/plot.png
similarity index 100%
rename from doc/demo/image_classification/plot.png
rename to doc/tutorials/image_classification/plot.png
diff --git a/doc/demo/imagenet_model/resnet_block.jpg b/doc/tutorials/imagenet_model/resnet_block.jpg
similarity index 100%
rename from doc/demo/imagenet_model/resnet_block.jpg
rename to doc/tutorials/imagenet_model/resnet_block.jpg
diff --git a/doc/demo/imagenet_model/resnet_model.md b/doc/tutorials/imagenet_model/resnet_model.md
similarity index 100%
rename from doc/demo/imagenet_model/resnet_model.md
rename to doc/tutorials/imagenet_model/resnet_model.md
diff --git a/doc/demo/index.md b/doc/tutorials/index.md
similarity index 96%
rename from doc/demo/index.md
rename to doc/tutorials/index.md
index 289199d496..c845ca229c 100644
--- a/doc/demo/index.md
+++ b/doc/tutorials/index.md
@@ -1,4 +1,4 @@
-# Examples and demos
+# Tutorials
 There are several examples and demos here.
 ## Image
diff --git a/doc/demo/quick_start/NetContinuous_en.png b/doc/tutorials/quick_start/NetContinuous_en.png
similarity index 100%
rename from doc/demo/quick_start/NetContinuous_en.png
rename to doc/tutorials/quick_start/NetContinuous_en.png
diff --git a/doc/demo/quick_start/NetConv_en.png b/doc/tutorials/quick_start/NetConv_en.png
similarity index 100%
rename from doc/demo/quick_start/NetConv_en.png
rename to doc/tutorials/quick_start/NetConv_en.png
diff --git a/doc/demo/quick_start/NetLR_en.png b/doc/tutorials/quick_start/NetLR_en.png
similarity index 100%
rename from doc/demo/quick_start/NetLR_en.png
rename to doc/tutorials/quick_start/NetLR_en.png
diff --git a/doc/demo/quick_start/NetRNN_en.png b/doc/tutorials/quick_start/NetRNN_en.png
similarity index 100%
rename from doc/demo/quick_start/NetRNN_en.png
rename to doc/tutorials/quick_start/NetRNN_en.png
diff --git a/doc/demo/quick_start/PipelineNetwork_en.jpg b/doc/tutorials/quick_start/PipelineNetwork_en.jpg
similarity index 100%
rename from doc/demo/quick_start/PipelineNetwork_en.jpg
rename to doc/tutorials/quick_start/PipelineNetwork_en.jpg
diff --git a/doc/demo/quick_start/PipelineTest_en.png b/doc/tutorials/quick_start/PipelineTest_en.png
similarity index 100%
rename from doc/demo/quick_start/PipelineTest_en.png
rename to doc/tutorials/quick_start/PipelineTest_en.png
diff --git a/doc/demo/quick_start/PipelineTrain_en.png b/doc/tutorials/quick_start/PipelineTrain_en.png
similarity index 100%
rename from doc/demo/quick_start/PipelineTrain_en.png
rename to doc/tutorials/quick_start/PipelineTrain_en.png
diff --git a/doc/demo/quick_start/Pipeline_en.jpg b/doc/tutorials/quick_start/Pipeline_en.jpg
similarity index 100%
rename from doc/demo/quick_start/Pipeline_en.jpg
rename to doc/tutorials/quick_start/Pipeline_en.jpg
diff --git a/doc/demo/quick_start/index_en.md b/doc/tutorials/quick_start/index_en.md
similarity index 100%
rename from doc/demo/quick_start/index_en.md
rename to doc/tutorials/quick_start/index_en.md
diff --git a/doc/demo/rec/ml_dataset.md b/doc/tutorials/rec/ml_dataset.md
similarity index 100%
rename from doc/demo/rec/ml_dataset.md
rename to doc/tutorials/rec/ml_dataset.md
diff --git a/doc/demo/rec/ml_regression.rst b/doc/tutorials/rec/ml_regression.rst
similarity index 100%
rename from doc/demo/rec/ml_regression.rst
rename to doc/tutorials/rec/ml_regression.rst
diff --git a/doc/demo/rec/rec_regression_network.png b/doc/tutorials/rec/rec_regression_network.png
similarity index 100%
rename from doc/demo/rec/rec_regression_network.png
rename to doc/tutorials/rec/rec_regression_network.png
diff --git a/doc/demo/semantic_role_labeling/curve.jpg b/doc/tutorials/semantic_role_labeling/curve.jpg
similarity index 100%
rename from doc/demo/semantic_role_labeling/curve.jpg
rename to doc/tutorials/semantic_role_labeling/curve.jpg
diff --git a/doc/demo/semantic_role_labeling/feature.jpg b/doc/tutorials/semantic_role_labeling/feature.jpg
similarity index 100%
rename from doc/demo/semantic_role_labeling/feature.jpg
rename to doc/tutorials/semantic_role_labeling/feature.jpg
diff --git a/doc/demo/semantic_role_labeling/index.rst b/doc/tutorials/semantic_role_labeling/index.rst
similarity index 100%
rename from doc/demo/semantic_role_labeling/index.rst
rename to doc/tutorials/semantic_role_labeling/index.rst
diff --git a/doc/demo/semantic_role_labeling/network_arch.png
b/doc/tutorials/semantic_role_labeling/network_arch.png similarity index 100% rename from doc/demo/semantic_role_labeling/network_arch.png rename to doc/tutorials/semantic_role_labeling/network_arch.png diff --git a/doc/demo/semantic_role_labeling/semantic_role_labeling.md b/doc/tutorials/semantic_role_labeling/semantic_role_labeling.md similarity index 97% rename from doc/demo/semantic_role_labeling/semantic_role_labeling.md rename to doc/tutorials/semantic_role_labeling/semantic_role_labeling.md index e2793b2b34..f5bdf64487 100644 --- a/doc/demo/semantic_role_labeling/semantic_role_labeling.md +++ b/doc/tutorials/semantic_role_labeling/semantic_role_labeling.md @@ -1,200 +1,200 @@ -# Semantic Role labeling Tutorial # - -Semantic role labeling (SRL) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence. SRL is useful as an intermediate step in a wide range of natural language processing tasks, such as information extraction. automatic document categorization and question answering. An instance is as following [1]: - - [ A0 He ] [ AM-MOD would ][ AM-NEG n’t ] [ V accept] [ A1 anything of value ] from [A2 those he was writing about ]. - -- V: verb -- A0: acceptor -- A1: thing accepted -- A2: accepted-from -- A3: Attribute -- AM-MOD: modal -- AM-NEG: negation - -Given the verb "accept", the chunks in sentence would play certain semantic roles. Here, the label scheme is from Penn Proposition Bank. - -To this date, most of the successful SRL systems are built on top of some form of parsing results where pre-defined feature templates over the syntactic structure are used. This tutorial will present an end-to-end system using deep bidirectional long short-term memory (DB-LSTM)[2] for solving the SRL task, which largely outperforms the previous state-of-the-art systems. The system regards SRL task as the sequence labelling problem. - -## Data Description -The relevant paper[2] takes the data set in CoNLL-2005&2012 Shared Task for training and testing. Accordingto data license, the demo adopts the test data set of CoNLL-2005, which can be reached on website. - -To download and process the original data, user just need to execute the following command: - -```bash -cd data -./get_data.sh -``` -Several new files appear in the `data `directory as follows. -```bash -conll05st-release:the test data set of CoNll-2005 shared task -test.wsj.words:the Wall Street Journal data sentences -test.wsj.props: the propositional arguments -feature: the extracted features from data set -``` - -## Training -### DB-LSTM -Please refer to the Sentiment Analysis demo to learn more about the long short-term memory unit. - -Unlike Bidirectional-LSTM that used in Sentiment Analysis demo, the DB-LSTM adopts another way to stack LSTM layer. First a standard LSTM processes the sequence in forward direction. The input and output of this LSTM layer are taken by the next LSTM layer as input, processed in reversed direction. These two standard LSTM layers compose a pair of LSTM. Then we stack LSTM layers pair after pair to obtain the deep LSTM model. - -The following figure shows a temporal expanded 2-layer DB-LSTM network. -
-![pic](./network_arch.png) -
-
-### Features
-Two input features play an essential role in this pipeline: predicate (pred) and argument (argu). Two other features, predicate context (ctx-p) and region mark (mr), are also adopted, because a single predicate word cannot exactly describe the predicate information, especially when the same word appears more than once in a sentence. With the predicate context, the ambiguity can be largely eliminated. Similarly, we use region mark mr = 1 to denote an argument position if it is located in the predicate context region, or mr = 0 if it is not. These four simple features are all we need for our SRL system. The features of one sample with context size set to 1 are shown as follows [2]:
-
-![pic](./feature.jpg) -
- -In this sample, the coresponding labelled sentence is: - -[ A1 A record date ] has [ AM-NEG n't ] been [ V set ] . - -In the demo, we adopt the feature template as above, consists of : `argument`, `predicate`, `ctx-p (p=-1,0,1)`, `mark` and use `B/I/O` scheme to label each argument. These features and labels are stored in `feature` file, and separated by `\t`. - -### Data Provider - -`dataprovider.py` is the python file to wrap data. `hook()` function is to define the data slots for network. The Six features and label are all IndexSlots. -``` -def hook(settings, word_dict, label_dict, **kwargs): - settings.word_dict = word_dict - settings.label_dict = label_dict - #all inputs are integral and sequential type - settings.slots = [ - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(predicate_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(2), - integer_value_sequence(len(label_dict))] -``` -The corresponding data iterator is as following: -``` -@provider(init_hook=hook, should_shuffle=True, calc_batch_size=get_batch_size, - can_over_batch_size=False, cache=CacheType.CACHE_PASS_IN_MEM) -def process(settings, file_name): - with open(file_name, 'r') as fdata: - for line in fdata: - sentence, predicate, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2, mark, label = \ - line.strip().split('\t') - - words = sentence.split() - sen_len = len(words) - word_slot = [settings.word_dict.get(w, UNK_IDX) for w in words] - - predicate_slot = [settings.predicate_dict.get(predicate)] * sen_len - ctx_n2_slot = [settings.word_dict.get(ctx_n2, UNK_IDX)] * sen_len - ctx_n1_slot = [settings.word_dict.get(ctx_n1, UNK_IDX)] * sen_len - ctx_0_slot = [settings.word_dict.get(ctx_0, UNK_IDX)] * sen_len - ctx_p1_slot = [settings.word_dict.get(ctx_p1, UNK_IDX)] * sen_len - ctx_p2_slot = [settings.word_dict.get(ctx_p2, UNK_IDX)] * sen_len - - marks = mark.split() - mark_slot = [int(w) for w in marks] - - label_list = label.split() - label_slot = [settings.label_dict.get(w) for w in label_list] - yield word_slot, predicate_slot, ctx_n2_slot, ctx_n1_slot, \ - ctx_0_slot, ctx_p1_slot, ctx_p2_slot, mark_slot, label_slot -``` -The `process`function yield 9 lists which are 8 features and label. - -### Neural Network Config -`db_lstm.py` is the neural network config file to load the dictionaries and define the data provider module and network architecture during the training procedure. - -Nine `data_layer` load instances from data provider. Eight features are transformed into embedddings respectively, and mixed by `mixed_layer` . Deep bidirectional LSTM layers extract features for the softmax layer. The objective function is cross entropy of labels. - -### Run Training -The script for training is `train.sh`, user just need to execute: -```bash - ./train.sh -``` -The content in `train.sh`: -``` -paddle train \ - --config=./db_lstm.py \ - --use_gpu=0 \ - --log_period=5000 \ - --trainer_count=1 \ - --show_parameter_stats_period=5000 \ - --save_dir=./output \ - --num_passes=10000 \ - --average_test_period=10000000 \ - --init_model_path=./data \ - --load_missing_parameter_strategy=rand \ - --test_all_data_in_one_period=1 \ -2>&1 | tee 'train.log' -``` - -- \--config=./db_lstm.py : network config file. 
-- \--use_gpu=false: train on CPU; set it to true if you have installed the GPU version of PaddlePaddle and want to train on GPU (note that crf_layer does not support GPU yet).
-- \--log_period=5000: print a log every 5000 batches.
-- \--trainer_count=1: set the thread number (or GPU count).
-- \--show_parameter_stats_period=5000: show parameter statistics every 5000 batches.
-- \--save_dir=./output: output path for saving models.
-- \--num_passes=10000: set the number of passes; one pass in PaddlePaddle means training on all samples in the dataset once.
-- \--average_test_period=10000000: test on the averaged parameters every average_test_period batches.
-- \--init_model_path=./data: parameter initialization path.
-- \--load_missing_parameter_strategy=rand: randomly initialize parameters that do not exist.
-- \--test_all_data_in_one_period=1: test all data in one period.
-
-
-After training, the models will be saved in the directory `output`. Our training curve is as follows:
-
-![pic](./curve.jpg) -
- -### Run testing -The script for testing is `test.sh`, user just need to execute: -```bash - ./test.sh -``` -The main part in `tesh.sh` -``` -paddle train \ - --config=./db_lstm.py \ - --model_list=$model_list \ - --job=test \ - --config_args=is_test=1 \ -``` - - - \--config=./db_lstm.py: network config file - - \--model_list=$model_list.list: model list file - - \--job=test: indicate the test job - - \--config_args=is_test=1: flag to indicate test - - \--test_all_data_in_one_period=1: test all data in 1 period - - -### Run prediction -The script for prediction is `predict.sh`, user just need to execute: -```bash - ./predict.sh - -``` -In `predict.sh`, user should offer the network config file, model path, label file, word dictionary file, feature file -``` -python predict.py - -c $config_file \ - -w $best_model_path \ - -l $label_file \ - -p $predicate_dict_file \ - -d $dict_file \ - -i $input_file \ - -o $output_file -``` - -`predict.py` is the main executable python script, which includes functions: load model, load data, data prediction. The network model will output the probability distribution of labels. In the demo, we take the label with maximum probability as result. User can also implement the beam search or viterbi decoding upon the probability distribution matrix. - -After prediction, the result is saved in `predict.res`. - -## Reference -[1] Martha Palmer, Dan Gildea, and Paul Kingsbury. The Proposition Bank: An Annotated Corpus of Semantic Roles , Computational Linguistics, 31(1), 2005. - -[2] Zhou, Jie, and Wei Xu. "End-to-end learning of semantic role labeling using recurrent neural networks." Proceedings of the Annual Meeting of the Association for Computational Linguistics. 2015. +# Semantic Role labeling Tutorial # + +Semantic role labeling (SRL) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence. SRL is useful as an intermediate step in a wide range of natural language processing tasks, such as information extraction. automatic document categorization and question answering. An instance is as following [1]: + + [ A0 He ] [ AM-MOD would ][ AM-NEG n’t ] [ V accept] [ A1 anything of value ] from [A2 those he was writing about ]. + +- V: verb +- A0: acceptor +- A1: thing accepted +- A2: accepted-from +- A3: Attribute +- AM-MOD: modal +- AM-NEG: negation + +Given the verb "accept", the chunks in sentence would play certain semantic roles. Here, the label scheme is from Penn Proposition Bank. + +To this date, most of the successful SRL systems are built on top of some form of parsing results where pre-defined feature templates over the syntactic structure are used. This tutorial will present an end-to-end system using deep bidirectional long short-term memory (DB-LSTM)[2] for solving the SRL task, which largely outperforms the previous state-of-the-art systems. The system regards SRL task as the sequence labelling problem. + +## Data Description +The relevant paper[2] takes the data set in CoNLL-2005&2012 Shared Task for training and testing. Accordingto data license, the demo adopts the test data set of CoNLL-2005, which can be reached on website. + +To download and process the original data, user just need to execute the following command: + +```bash +cd data +./get_data.sh +``` +Several new files appear in the `data `directory as follows. 
+```bash
+conll05st-release: the test data set of the CoNLL-2005 shared task
+test.wsj.words: the Wall Street Journal data sentences
+test.wsj.props: the propositional arguments
+feature: the features extracted from the data set
+```
+
+## Training
+### DB-LSTM
+Please refer to the Sentiment Analysis demo to learn more about the long short-term memory unit.
+
+Unlike the bidirectional LSTM used in the Sentiment Analysis demo, the DB-LSTM adopts another way to stack LSTM layers. First, a standard LSTM processes the sequence in the forward direction. The input and output of this LSTM layer are taken by the next LSTM layer as input, and processed in the reversed direction. These two standard LSTM layers compose a pair of LSTMs. We then stack LSTM layers pair after pair to obtain the deep LSTM model.
+
+The following figure shows a temporally expanded 2-layer DB-LSTM network.
+
+![pic](./network_arch.png) +
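If it helps to see the pairing pattern as configuration code, the sketch below expresses the direction-alternating stack with `trainer_config_helpers` primitives. It is a simplified editor's illustration, not the demo's actual `db_lstm.py`: the layer sizes are made up, and the real config mixes inputs with `mixed_layer` projections rather than a plain `fc_layer`.

```python
from paddle.trainer_config_helpers import *

def stacked_db_lstm(input_layer, depth=4, hidden_size=512):
    # first layer of the first pair runs in the forward direction
    feature = fc_layer(input=input_layer, size=hidden_size)
    lstm = lstmemory(input=feature, reverse=False)
    for i in range(1, depth):
        # the next LSTM takes both the input and the output of the previous one
        feature = fc_layer(input=[feature, lstm], size=hidden_size)
        # odd layers process the sequence in the reversed direction
        lstm = lstmemory(input=feature, reverse=(i % 2 == 1))
    return feature, lstm
```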
+
+### Features
+Two input features play an essential role in this pipeline: predicate (pred) and argument (argu). Two other features, predicate context (ctx-p) and region mark (mr), are also adopted, because a single predicate word cannot exactly describe the predicate information, especially when the same word appears more than once in a sentence. With the predicate context, the ambiguity can be largely eliminated. Similarly, we use region mark mr = 1 to denote an argument position if it is located in the predicate context region, or mr = 0 if it is not. These four simple features are all we need for our SRL system. The features of one sample with context size set to 1 are shown as follows [2]:
+
+![pic](./feature.jpg) +
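To make the feature template concrete, here is a small self-contained Python sketch (an editor's illustration, not part of the demo code) that derives the predicate-context and region-mark features for a toy sentence, the same one discussed just below, with the context size fixed to 1 as in the figure:

```python
def extract_features(words, predicate_index, ctx_len=1):
    # predicate context: the window of words centered on the predicate
    lo = max(0, predicate_index - ctx_len)
    hi = min(len(words), predicate_index + ctx_len + 1)
    predicate = words[predicate_index]
    context = words[lo:hi]
    # region mark: 1 for positions inside the context window, 0 otherwise
    marks = [1 if lo <= i < hi else 0 for i in xrange(len(words))]
    return predicate, context, marks

words = "A record date has n't been set .".split()
print extract_features(words, predicate_index=6)
# ('set', ['been', 'set', '.'], [0, 0, 0, 0, 0, 1, 1, 1])
```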
+ +In this sample, the coresponding labelled sentence is: + +[ A1 A record date ] has [ AM-NEG n't ] been [ V set ] . + +In the demo, we adopt the feature template as above, consists of : `argument`, `predicate`, `ctx-p (p=-1,0,1)`, `mark` and use `B/I/O` scheme to label each argument. These features and labels are stored in `feature` file, and separated by `\t`. + +### Data Provider + +`dataprovider.py` is the python file to wrap data. `hook()` function is to define the data slots for network. The Six features and label are all IndexSlots. +``` +def hook(settings, word_dict, label_dict, **kwargs): + settings.word_dict = word_dict + settings.label_dict = label_dict + #all inputs are integral and sequential type + settings.slots = [ + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(predicate_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(2), + integer_value_sequence(len(label_dict))] +``` +The corresponding data iterator is as following: +``` +@provider(init_hook=hook, should_shuffle=True, calc_batch_size=get_batch_size, + can_over_batch_size=False, cache=CacheType.CACHE_PASS_IN_MEM) +def process(settings, file_name): + with open(file_name, 'r') as fdata: + for line in fdata: + sentence, predicate, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2, mark, label = \ + line.strip().split('\t') + + words = sentence.split() + sen_len = len(words) + word_slot = [settings.word_dict.get(w, UNK_IDX) for w in words] + + predicate_slot = [settings.predicate_dict.get(predicate)] * sen_len + ctx_n2_slot = [settings.word_dict.get(ctx_n2, UNK_IDX)] * sen_len + ctx_n1_slot = [settings.word_dict.get(ctx_n1, UNK_IDX)] * sen_len + ctx_0_slot = [settings.word_dict.get(ctx_0, UNK_IDX)] * sen_len + ctx_p1_slot = [settings.word_dict.get(ctx_p1, UNK_IDX)] * sen_len + ctx_p2_slot = [settings.word_dict.get(ctx_p2, UNK_IDX)] * sen_len + + marks = mark.split() + mark_slot = [int(w) for w in marks] + + label_list = label.split() + label_slot = [settings.label_dict.get(w) for w in label_list] + yield word_slot, predicate_slot, ctx_n2_slot, ctx_n1_slot, \ + ctx_0_slot, ctx_p1_slot, ctx_p2_slot, mark_slot, label_slot +``` +The `process`function yield 9 lists which are 8 features and label. + +### Neural Network Config +`db_lstm.py` is the neural network config file to load the dictionaries and define the data provider module and network architecture during the training procedure. + +Nine `data_layer` load instances from data provider. Eight features are transformed into embedddings respectively, and mixed by `mixed_layer` . Deep bidirectional LSTM layers extract features for the softmax layer. The objective function is cross entropy of labels. + +### Run Training +The script for training is `train.sh`, user just need to execute: +```bash + ./train.sh +``` +The content in `train.sh`: +``` +paddle train \ + --config=./db_lstm.py \ + --use_gpu=0 \ + --log_period=5000 \ + --trainer_count=1 \ + --show_parameter_stats_period=5000 \ + --save_dir=./output \ + --num_passes=10000 \ + --average_test_period=10000000 \ + --init_model_path=./data \ + --load_missing_parameter_strategy=rand \ + --test_all_data_in_one_period=1 \ +2>&1 | tee 'train.log' +``` + +- \--config=./db_lstm.py : network config file. 
+- \--use_gpu=false: train on CPU; set it to true if you have installed the GPU version of PaddlePaddle and want to train on GPU (note that crf_layer does not support GPU yet).
+- \--log_period=5000: print a log every 5000 batches.
+- \--trainer_count=1: set the thread number (or GPU count).
+- \--show_parameter_stats_period=5000: show parameter statistics every 5000 batches.
+- \--save_dir=./output: output path for saving models.
+- \--num_passes=10000: set the number of passes; one pass in PaddlePaddle means training on all samples in the dataset once.
+- \--average_test_period=10000000: test on the averaged parameters every average_test_period batches.
+- \--init_model_path=./data: parameter initialization path.
+- \--load_missing_parameter_strategy=rand: randomly initialize parameters that do not exist.
+- \--test_all_data_in_one_period=1: test all data in one period.
+
+
+After training, the models will be saved in the directory `output`. Our training curve is as follows:
+
+![pic](./curve.jpg) +
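A curve like the one above can be regenerated from the training log. The snippet below is a hypothetical sketch, not part of the demo: it assumes cost values appear in `train.log` in a `Cost=<number>` form, which may need adapting to the exact log format of your PaddlePaddle build.

```python
import re

costs = []
with open('train.log') as f:
    for line in f:
        m = re.search(r'Cost=([0-9.eE+-]+)', line)
        if m:
            costs.append(float(m.group(1)))

# print a coarse view of how the cost evolves during training
step = max(1, len(costs) // 10)
for i in xrange(0, len(costs), step):
    print 'log point %d: cost=%.4f' % (i, costs[i])
```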
+ +### Run testing +The script for testing is `test.sh`, user just need to execute: +```bash + ./test.sh +``` +The main part in `tesh.sh` +``` +paddle train \ + --config=./db_lstm.py \ + --model_list=$model_list \ + --job=test \ + --config_args=is_test=1 \ +``` + + - \--config=./db_lstm.py: network config file + - \--model_list=$model_list.list: model list file + - \--job=test: indicate the test job + - \--config_args=is_test=1: flag to indicate test + - \--test_all_data_in_one_period=1: test all data in 1 period + + +### Run prediction +The script for prediction is `predict.sh`, user just need to execute: +```bash + ./predict.sh + +``` +In `predict.sh`, user should offer the network config file, model path, label file, word dictionary file, feature file +``` +python predict.py + -c $config_file \ + -w $best_model_path \ + -l $label_file \ + -p $predicate_dict_file \ + -d $dict_file \ + -i $input_file \ + -o $output_file +``` + +`predict.py` is the main executable python script, which includes functions: load model, load data, data prediction. The network model will output the probability distribution of labels. In the demo, we take the label with maximum probability as result. User can also implement the beam search or viterbi decoding upon the probability distribution matrix. + +After prediction, the result is saved in `predict.res`. + +## Reference +[1] Martha Palmer, Dan Gildea, and Paul Kingsbury. The Proposition Bank: An Annotated Corpus of Semantic Roles , Computational Linguistics, 31(1), 2005. + +[2] Zhou, Jie, and Wei Xu. "End-to-end learning of semantic role labeling using recurrent neural networks." Proceedings of the Annual Meeting of the Association for Computational Linguistics. 2015. diff --git a/doc/demo/sentiment_analysis/bi_lstm.jpg b/doc/tutorials/sentiment_analysis/bi_lstm.jpg similarity index 100% rename from doc/demo/sentiment_analysis/bi_lstm.jpg rename to doc/tutorials/sentiment_analysis/bi_lstm.jpg diff --git a/doc/demo/sentiment_analysis/index.rst b/doc/tutorials/sentiment_analysis/index.rst similarity index 100% rename from doc/demo/sentiment_analysis/index.rst rename to doc/tutorials/sentiment_analysis/index.rst diff --git a/doc/demo/sentiment_analysis/lstm.png b/doc/tutorials/sentiment_analysis/lstm.png similarity index 100% rename from doc/demo/sentiment_analysis/lstm.png rename to doc/tutorials/sentiment_analysis/lstm.png diff --git a/doc/demo/sentiment_analysis/sentiment_analysis.md b/doc/tutorials/sentiment_analysis/sentiment_analysis.md similarity index 100% rename from doc/demo/sentiment_analysis/sentiment_analysis.md rename to doc/tutorials/sentiment_analysis/sentiment_analysis.md diff --git a/doc/demo/sentiment_analysis/stacked_lstm.jpg b/doc/tutorials/sentiment_analysis/stacked_lstm.jpg similarity index 100% rename from doc/demo/sentiment_analysis/stacked_lstm.jpg rename to doc/tutorials/sentiment_analysis/stacked_lstm.jpg diff --git a/doc/demo/text_generation/encoder-decoder-attention-model.png b/doc/tutorials/text_generation/encoder-decoder-attention-model.png similarity index 100% rename from doc/demo/text_generation/encoder-decoder-attention-model.png rename to doc/tutorials/text_generation/encoder-decoder-attention-model.png diff --git a/doc/demo/text_generation/index.rst b/doc/tutorials/text_generation/index.rst similarity index 100% rename from doc/demo/text_generation/index.rst rename to doc/tutorials/text_generation/index.rst diff --git a/doc/demo/text_generation/text_generation.md b/doc/tutorials/text_generation/text_generation.md similarity 
index 100% rename from doc/demo/text_generation/text_generation.md rename to doc/tutorials/text_generation/text_generation.md diff --git a/doc/ui/index.md b/doc/ui/index.md deleted file mode 100644 index 9c1ba27bdc..0000000000 --- a/doc/ui/index.md +++ /dev/null @@ -1,20 +0,0 @@ -# User Interface - -## Data Provider - -* [Introduction](data_provider/index.rst) -* [PyDataProvider2](data_provider/pydataprovider2.rst) - -## API Reference - -* [Model Config Interface](api/trainer_config_helpers/index.md) - -## Command Line Argument - -* [Use Case](cmd_argument/use_case.md) -* [Argument Outline](cmd_argument/argument_outline.md) -* [Detailed Descriptions](cmd_argument/detail_introduction.md) - -## Predict - -* [Python Prediction API](predict/swig_py_paddle_en.rst) diff --git a/doc/user_guide.rst b/doc/user_guide.rst deleted file mode 100644 index d4deb3ca5a..0000000000 --- a/doc/user_guide.rst +++ /dev/null @@ -1,13 +0,0 @@ -User Guide -========== - -.. toctree:: - :maxdepth: 1 - - demo/quick_start/index_en.md - build/index.rst - build/contribute_to_paddle.md - ui/index.md - ui/api/trainer_config_helpers/index.rst - demo/index.md - cluster/index.md From a48f19cf453fae85f192598e813c1ad80b41bc86 Mon Sep 17 00:00:00 2001 From: liaogang Date: Fri, 25 Nov 2016 17:20:00 +0800 Subject: [PATCH 18/37] Add symbol link --- doc/howto/algorithm/rnn/bi_lstm.jpg | 1 + doc/howto/algorithm/rnn/encoder-decoder-attention-model.png | 1 + 2 files changed, 2 insertions(+) create mode 120000 doc/howto/algorithm/rnn/bi_lstm.jpg create mode 120000 doc/howto/algorithm/rnn/encoder-decoder-attention-model.png diff --git a/doc/howto/algorithm/rnn/bi_lstm.jpg b/doc/howto/algorithm/rnn/bi_lstm.jpg new file mode 120000 index 0000000000..f8f3b17691 --- /dev/null +++ b/doc/howto/algorithm/rnn/bi_lstm.jpg @@ -0,0 +1 @@ +../../../tutorials/sentiment_analysis/bi_lstm.jpg \ No newline at end of file diff --git a/doc/howto/algorithm/rnn/encoder-decoder-attention-model.png b/doc/howto/algorithm/rnn/encoder-decoder-attention-model.png new file mode 120000 index 0000000000..88a1d3e5ac --- /dev/null +++ b/doc/howto/algorithm/rnn/encoder-decoder-attention-model.png @@ -0,0 +1 @@ +../../../tutorials/text_generation/encoder-decoder-attention-model.png \ No newline at end of file From 80fb9116d1141951d5a9b60801a3f0378a073e96 Mon Sep 17 00:00:00 2001 From: liaogang Date: Fri, 25 Nov 2016 17:24:20 +0800 Subject: [PATCH 19/37] Change gpu profiling docs position --- doc/howto/index.rst | 1 + doc/{ => howto}/optimization/gpu_profiling.rst | 0 doc/{ => howto}/optimization/index.rst | 0 doc/{ => howto}/optimization/nvvp1.png | Bin doc/{ => howto}/optimization/nvvp2.png | Bin doc/{ => howto}/optimization/nvvp3.png | Bin doc/{ => howto}/optimization/nvvp4.png | Bin 7 files changed, 1 insertion(+) rename doc/{ => howto}/optimization/gpu_profiling.rst (100%) rename doc/{ => howto}/optimization/index.rst (100%) rename doc/{ => howto}/optimization/nvvp1.png (100%) rename doc/{ => howto}/optimization/nvvp2.png (100%) rename doc/{ => howto}/optimization/nvvp3.png (100%) rename doc/{ => howto}/optimization/nvvp4.png (100%) diff --git a/doc/howto/index.rst b/doc/howto/index.rst index e2d688e186..ed8294b3c1 100644 --- a/doc/howto/index.rst +++ b/doc/howto/index.rst @@ -7,5 +7,6 @@ How to cmd_argument/index.md cluster/cluster_train.md algorithm/index.rst + optimization/index.rst dev/index.rst contribute_to_paddle.md \ No newline at end of file diff --git a/doc/optimization/gpu_profiling.rst b/doc/howto/optimization/gpu_profiling.rst similarity index 100% 
rename from doc/optimization/gpu_profiling.rst rename to doc/howto/optimization/gpu_profiling.rst diff --git a/doc/optimization/index.rst b/doc/howto/optimization/index.rst similarity index 100% rename from doc/optimization/index.rst rename to doc/howto/optimization/index.rst diff --git a/doc/optimization/nvvp1.png b/doc/howto/optimization/nvvp1.png similarity index 100% rename from doc/optimization/nvvp1.png rename to doc/howto/optimization/nvvp1.png diff --git a/doc/optimization/nvvp2.png b/doc/howto/optimization/nvvp2.png similarity index 100% rename from doc/optimization/nvvp2.png rename to doc/howto/optimization/nvvp2.png diff --git a/doc/optimization/nvvp3.png b/doc/howto/optimization/nvvp3.png similarity index 100% rename from doc/optimization/nvvp3.png rename to doc/howto/optimization/nvvp3.png diff --git a/doc/optimization/nvvp4.png b/doc/howto/optimization/nvvp4.png similarity index 100% rename from doc/optimization/nvvp4.png rename to doc/howto/optimization/nvvp4.png From 5e7e5ccb68205ef0a5ea97389598a0c3ae9e479c Mon Sep 17 00:00:00 2001 From: liaogang Date: Fri, 25 Nov 2016 17:30:09 +0800 Subject: [PATCH 20/37] Change head to About --- doc/about/index.rst | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/doc/about/index.rst b/doc/about/index.rst index c70940ca85..511c154641 100644 --- a/doc/about/index.rst +++ b/doc/about/index.rst @@ -1,5 +1,9 @@ +About +======= + + Credits -======== +-------- PaddlPaddle is an easy-to-use, efficient, flexible and scalable deep learning platform, which is originally developed by Baidu scientists and engineers for the purpose of applying deep learning to many products at Baidu. From aa6c6dc07d65de97e65e050224e4b07d4f5b0f98 Mon Sep 17 00:00:00 2001 From: liaogang Date: Fri, 25 Nov 2016 22:00:20 +0800 Subject: [PATCH 21/37] Fix pythonlib can not be found --- CMakeLists.txt | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/CMakeLists.txt b/CMakeLists.txt index af193c27ae..38daa35483 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -12,8 +12,12 @@ include(package) find_package(SWIG 2.0) find_package(CUDA QUIET) find_package(Protobuf REQUIRED) -find_package(PythonLibs 2.7 REQUIRED) + +# Set up the versions we know about, in the order we will search. +# Always add the user supplied additional versions to the front. +set(Python_ADDITIONAL_VERSIONS 2.7) find_package(PythonInterp 2.7 REQUIRED) +find_package(PythonLibs 2.7 REQUIRED) find_package(ZLIB REQUIRED) find_package(NumPy REQUIRED) find_package(Threads REQUIRED) From 74c48d14d937df5a457746a6f71eae85d5b792c8 Mon Sep 17 00:00:00 2001 From: liaogang Date: Fri, 25 Nov 2016 22:54:23 +0800 Subject: [PATCH 22/37] Revert cmake modification in this pr. --- CMakeLists.txt | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/CMakeLists.txt b/CMakeLists.txt index 38daa35483..af193c27ae 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -12,12 +12,8 @@ include(package) find_package(SWIG 2.0) find_package(CUDA QUIET) find_package(Protobuf REQUIRED) - -# Set up the versions we know about, in the order we will search. -# Always add the user supplied additional versions to the front. 
-set(Python_ADDITIONAL_VERSIONS 2.7) -find_package(PythonInterp 2.7 REQUIRED) find_package(PythonLibs 2.7 REQUIRED) +find_package(PythonInterp 2.7 REQUIRED) find_package(ZLIB REQUIRED) find_package(NumPy REQUIRED) find_package(Threads REQUIRED) From a1143460c602fcde55357f7d65b48f7c31fe1be1 Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Mon, 28 Nov 2016 12:53:35 +0800 Subject: [PATCH 23/37] follow comments: refine ubuntu install doc --- .../build_and_install/install/ubuntu_install.rst | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/doc_cn/build_and_install/install/ubuntu_install.rst b/doc_cn/build_and_install/install/ubuntu_install.rst index 08d55f98d9..e48e4932ac 100644 --- a/doc_cn/build_and_install/install/ubuntu_install.rst +++ b/doc_cn/build_and_install/install/ubuntu_install.rst @@ -1,7 +1,7 @@ Ubuntu部署PaddlePaddle =================================== -PaddlePaddle提供了deb安装包,并在ubuntu 14.04做了完备测试,理论上也支持其他的debian发行版。 +PaddlePaddle提供了ubuntu 14.04 deb安装包。 安装 ------ @@ -10,13 +10,13 @@ PaddlePaddle提供了deb安装包,并在ubuntu 14.04做了完备测试,理 它包含四个版本\: -* cpu版本: 支持主流intel x86处理器平台, 支持avx指令集。 +* cpu版本: 支持主流x86处理器平台, 使用了avx指令集。 -* cpu-noavx版本:支持主流intel x86处理器平台,不支持avx指令集。 +* cpu-noavx版本:支持主流x86处理器平台,没有使用avx指令集。 -* gpu版本:支持主流intel x86处理器平台,支持nvidia cuda平台,支持avx指令集。 +* gpu版本:支持主流x86处理器平台,支持nvidia cuda平台,使用了avx指令集。 -* gpu-noavx版本:支持主流intel x86处理器平台,支持nvidia cuda平台,不支持avx指令级。 +* gpu-noavx版本:支持主流x86处理器平台,支持nvidia cuda平台,没有使用avx指令集。 下载完相关安装包后,执行: @@ -43,7 +43,7 @@ PaddlePaddle提供了deb安装包,并在ubuntu 14.04做了完备测试,理 可能遇到的问题 -------------- -如何设置gpu版本运行时cuda环境运行GPU版本 +如何设置CUDA环境运行GPU版本 ++++++++++++++++++++++++++++++++++++++++ 如果使用GPU版本的PaddlePaddle,请安装CUDA 7.5 和CUDNN 5到本地环境中,并设置: @@ -62,5 +62,5 @@ libcudart.so/libcudnn.so找不到 0831 12:36:04.151525 1085 hl_dso_loader.cc:70] Check failed: nullptr != *dso_handle For Gpu version of PaddlePaddle, it couldn't find CUDA library: libcudart.so Please make sure you already specify its path.Note: for training data on Cpu using Gpu version of PaddlePaddle,you must specify libcudart.so via LD_LIBRARY_PATH. -原因是未设置cuda运行时环境变量,请参考** 设置gpu版本运行时cuda环境** 解决方案。 +原因是未设置cuda运行时环境变量,请参考**如何设置CUDA环境运行GPU版本** 。 From bb97b47ee7057e4dfbb0dbc8ffa90a6704f28202 Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Mon, 28 Nov 2016 12:56:30 +0800 Subject: [PATCH 24/37] add blank char to fix tiny rst doc --- doc_cn/build_and_install/install/ubuntu_install.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc_cn/build_and_install/install/ubuntu_install.rst b/doc_cn/build_and_install/install/ubuntu_install.rst index e48e4932ac..372ad6a944 100644 --- a/doc_cn/build_and_install/install/ubuntu_install.rst +++ b/doc_cn/build_and_install/install/ubuntu_install.rst @@ -62,5 +62,5 @@ libcudart.so/libcudnn.so找不到 0831 12:36:04.151525 1085 hl_dso_loader.cc:70] Check failed: nullptr != *dso_handle For Gpu version of PaddlePaddle, it couldn't find CUDA library: libcudart.so Please make sure you already specify its path.Note: for training data on Cpu using Gpu version of PaddlePaddle,you must specify libcudart.so via LD_LIBRARY_PATH. 
-原因是未设置cuda运行时环境变量,请参考**如何设置CUDA环境运行GPU版本** 。 +原因是未设置cuda运行时环境变量,请参考 **如何设置CUDA环境运行GPU版本** 。 From f63bdc80a1f781abafd9b4d922050b3a2412dcf1 Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Mon, 28 Nov 2016 13:18:42 +0800 Subject: [PATCH 25/37] follow comments: more clean doc --- .../install/ubuntu_install.rst | 21 +++++++------------ doc_cn/faq/index.rst | 2 +- 2 files changed, 9 insertions(+), 14 deletions(-) diff --git a/doc_cn/build_and_install/install/ubuntu_install.rst b/doc_cn/build_and_install/install/ubuntu_install.rst index 372ad6a944..4500d6e0b0 100644 --- a/doc_cn/build_and_install/install/ubuntu_install.rst +++ b/doc_cn/build_and_install/install/ubuntu_install.rst @@ -43,24 +43,19 @@ PaddlePaddle提供了ubuntu 14.04 deb安装包。 可能遇到的问题 -------------- -如何设置CUDA环境运行GPU版本 -++++++++++++++++++++++++++++++++++++++++ - -如果使用GPU版本的PaddlePaddle,请安装CUDA 7.5 和CUDNN 5到本地环境中,并设置: - -.. code-block:: shell - export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/lib:$LD_LIBRARY_PATH - export PATH=/usr/local/cuda/bin:$PATH - - libcudart.so/libcudnn.so找不到 ++++++++++++++++++++++++++++++ 安装完成后,运行 :code:`paddle train` 报错\: -.. code-block:: shell +.. code-block:: shell - 0831 12:36:04.151525 1085 hl_dso_loader.cc:70] Check failed: nullptr != *dso_handle For Gpu version of PaddlePaddle, it couldn't find CUDA library: libcudart.so Please make sure you already specify its path.Note: for training data on Cpu using Gpu version of PaddlePaddle,you must specify libcudart.so via LD_LIBRARY_PATH. + 0831 12:36:04.151525 1085 hl_dso_loader.cc:70] Check failed: nullptr != *dso_handle For Gpu version of PaddlePaddle, it couldn't find CUDA library: libcudart.so Please make sure you already specify its path.Note: for training data on Cpu using Gpu version of PaddlePaddle,you must specify libcudart.so via LD_LIBRARY_PATH. + +原因是未设置cuda运行时环境变量。 如果使用GPU版本的PaddlePaddle,请安装CUDA 7.5 和CUDNN 5到本地环境中,并设置: + +.. code-block:: shell -原因是未设置cuda运行时环境变量,请参考 **如何设置CUDA环境运行GPU版本** 。 + export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/lib:$LD_LIBRARY_PATH + export PATH=/usr/local/cuda/bin:$PATH diff --git a/doc_cn/faq/index.rst b/doc_cn/faq/index.rst index 6e1102e552..551430eb41 100644 --- a/doc_cn/faq/index.rst +++ b/doc_cn/faq/index.rst @@ -7,7 +7,7 @@ PaddlePaddle常见问题 1. 如何减少内存占用 --------------------------------- -神经网络的训练本身是一个非常消耗内存和显存的工作,经常会消耗数十G的内存和数G的显存。 +神经网络的训练本身是一个非常消耗内存和显存的工作,经常会消耗数10GB的内存和数GB的显存。 PaddlePaddle的内存占用主要分为如下几个方面\: * DataProvider缓冲池内存(只针对内存) From 9935d7d61cb729877493bdfb2d1323168fa120c5 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Mon, 28 Nov 2016 14:26:30 +0800 Subject: [PATCH 26/37] Add Release notes --- RELEASE.md | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 70 insertions(+) create mode 100644 RELEASE.md diff --git a/RELEASE.md b/RELEASE.md new file mode 100644 index 0000000000..b2684f5f5d --- /dev/null +++ b/RELEASE.md @@ -0,0 +1,70 @@ +# Release v0.9.0 + +## New Features: + +* Add some layers to Paddle + * bilinear interpolation layer. + * spatial pyramid-pool layer. + * de-convolution layer. + * maxout layer. +* Support rectangle padding, stride, window and input for Pooling Operation. +* Add —job=time in trainer, which can be used to print time info without compiler option -WITH_TIMER=ON. +* Support Mac OS X Sierra by source code. +* Expose cost_weight/nce_layer in `trainer_config_helpers` +* Add FAQ, concepts, h-rnn docs. +* Add Bidi-LSTM and DB-LSTM to quick start demo @alvations +* Add usage track scripts. + +## Improvements + +* Add travis-ci for macos. 
Enable swig unittest in travis. Skip travis-ci when only docs are changed. +* Add code coverage tools. +* Refine convolution layer to speedup and reduce GPU memory. +* Speed up PyDataProvider2 +* Add ubuntu deb package build scripts. +* Make Paddle use git-flow branching model. +* PServer support no parameter blocks. + +## Bug Fixes + +* add zlib link to py_paddle +* add input sparse data check for sparse layer at runtime +* Bug fix for sparse matrix multiplication +* Fix floating-point overflow problem of tanh +* Fix some nvcc compile options +* Fix a bug in yield dictionary in DataProvider +* Fix SRL hang when exit. + +# Release v0.8.0beta.1 +New features: + +* Mac OSX is supported by source code. #138 + * Both GPU and CPU versions of PaddlePaddle are supported. + +* Support CUDA 8.0 + +* Enhance `PyDataProvider2` + * Add dictionary yield format. `PyDataProvider2` can yield a dictionary with key is data_layer's name, value is features. + * Add `min_pool_size` to control memory pool in provider. + +* Add `deb` install package & docker image for no_avx machines. + * Especially for cloud computing and virtual machines + +* Automatically disable `avx` instructions in cmake when machine's CPU don't support `avx` instructions. + +* Add Parallel NN api in trainer_config_helpers. + +* Add `travis ci` for Github + +Bug fixes: + +* Several bugs in trainer_config_helpers. Also complete the unittest for trainer_config_helpers +* Check if PaddlePaddle is installed when unittest. +* Fix bugs in GTX series GPU +* Fix bug in MultinomialSampler + +Also more documentation was written since last release. + +# Release v0.8.0beta.0 + +PaddlePaddle v0.8.0beta.0 release. The install package is not stable yet and it's a pre-release version. From ddb948a1131466348dc9d30deea42853ebbaae48 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Mon, 28 Nov 2016 16:26:42 +0800 Subject: [PATCH 27/37] refine doc_cn/ui/index.rst --- doc_cn/ui/cmd/dump_config.rst | 0 doc_cn/ui/cmd/index.rst | 25 ++++++++----------------- doc_cn/ui/cmd/make_diagram.rst | 0 doc_cn/ui/cmd/merge_model.rst | 0 doc_cn/ui/cmd/paddle_pserver.rst | 0 doc_cn/ui/cmd/paddle_train.rst | 0 doc_cn/ui/cmd/paddle_version.rst | 7 ------- doc_cn/ui/index.rst | 8 +++++--- doc_cn/ui/predict/swig_py_paddle.rst | 8 ++++---- 9 files changed, 17 insertions(+), 31 deletions(-) delete mode 100644 doc_cn/ui/cmd/dump_config.rst delete mode 100644 doc_cn/ui/cmd/make_diagram.rst delete mode 100644 doc_cn/ui/cmd/merge_model.rst delete mode 100644 doc_cn/ui/cmd/paddle_pserver.rst delete mode 100644 doc_cn/ui/cmd/paddle_train.rst delete mode 100644 doc_cn/ui/cmd/paddle_version.rst diff --git a/doc_cn/ui/cmd/dump_config.rst b/doc_cn/ui/cmd/dump_config.rst deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/doc_cn/ui/cmd/index.rst b/doc_cn/ui/cmd/index.rst index f975d432c0..31a8b8a79f 100644 --- a/doc_cn/ui/cmd/index.rst +++ b/doc_cn/ui/cmd/index.rst @@ -1,29 +1,20 @@ -PaddlePaddle的命令行参数 -======================== +命令 +==== -安装好PaddlePaddle后,在命令行直接敲击 ``paddle`` 或 ``paddle --help`` 会显示如下一些命令行参数。 +安装好PaddlePaddle后,在命令行直接敲击 ``paddle`` 或 ``paddle --help`` 会显示如下一些命令。 * ``train`` Start a paddle_trainer 启动一个PaddlePaddle训练进程。 ``paddle train`` 可以通过命令行参数 ``-local=true`` 启动一个单机的训练进程;也可以和 ``paddle pserver`` 一起使用启动多机的分布式训练进程。 * ``pserver`` Start a paddle_pserver_main 在多机分布式训练下启动PaddlePaddle的parameter server进程。 * ``version`` Print paddle version - 用于打印当前PaddlePaddle的版本和编译选项相关信息。 + 用于打印当前PaddlePaddle的版本和编译选项相关信息。常见的输出格式如下:1)第一行说明了PaddlePaddle的版本信息;2)第二行开始说明了一些主要的编译选项,具体意义可以参考 
`编译参数选项文件 <../../build_and_install/cmake/compile_options.html>`_ 。 + + .. literalinclude:: paddle_version.txt + * ``merge_model`` Start a paddle_merge_model 用于将PaddlePaddle的模型参数文件和模型配置文件打包成一个文件,方便做部署分发。 * ``dump_config`` Dump the trainer config as proto string 用于将PaddlePaddle的模型配置文件以proto string的格式打印出来。 * ``make_diagram`` - 使用graphviz对PaddlePaddle的模型配置文件进行绘制。 - -更详细的介绍请参考各命令行参数文档。 - -.. toctree:: - :glob: - - paddle_train.rst - paddle_pserver.rst - paddle_version.rst - merge_model.rst - dump_config.rst - make_diagram.rst + 使用graphviz对PaddlePaddle的模型配置文件进行绘制。 \ No newline at end of file diff --git a/doc_cn/ui/cmd/make_diagram.rst b/doc_cn/ui/cmd/make_diagram.rst deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/doc_cn/ui/cmd/merge_model.rst b/doc_cn/ui/cmd/merge_model.rst deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/doc_cn/ui/cmd/paddle_pserver.rst b/doc_cn/ui/cmd/paddle_pserver.rst deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/doc_cn/ui/cmd/paddle_train.rst b/doc_cn/ui/cmd/paddle_train.rst deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/doc_cn/ui/cmd/paddle_version.rst b/doc_cn/ui/cmd/paddle_version.rst deleted file mode 100644 index 537c23df75..0000000000 --- a/doc_cn/ui/cmd/paddle_version.rst +++ /dev/null @@ -1,7 +0,0 @@ -paddle version的命令行参数 -========================== - -paddle version用于打印当前的版本信息和相关编译选项。常见的输出格式如下。第一行说明了PaddlePaddle的版本信息,后面跟着一些主要的编译选项。编译选项的具体意义可以参考 -`编译参数选项文件 <../../build_and_install/cmake/compile_options.html>`_ - -.. literalinclude:: paddle_version.txt diff --git a/doc_cn/ui/index.rst b/doc_cn/ui/index.rst index 8079bd9180..d871ad805f 100644 --- a/doc_cn/ui/index.rst +++ b/doc_cn/ui/index.rst @@ -11,21 +11,23 @@ data_provider/index.rst -命令行参数 -========== +命令及命令行参数 +================ .. toctree:: + :maxdepth: 1 cmd/index.rst +* `参数用例 <../../doc/ui/cmd_argument/use_case.html>`_ * `参数分类 <../../doc/ui/cmd_argument/argument_outline.html>`_ * `参数描述 <../../doc/ui/cmd_argument/detail_introduction.html>`_ -* `参数用例 <../../doc/ui/cmd_argument/use_case.html>`_ 预测 ==== .. 
toctree:: + :maxdepth: 1 predict/swig_py_paddle.rst diff --git a/doc_cn/ui/predict/swig_py_paddle.rst b/doc_cn/ui/predict/swig_py_paddle.rst index 4c0a0de820..89031dd72f 100644 --- a/doc_cn/ui/predict/swig_py_paddle.rst +++ b/doc_cn/ui/predict/swig_py_paddle.rst @@ -1,8 +1,8 @@ 基于Python的预测 ================ -Python预测接口 --------------- +预测流程 +-------- PaddlePaddle使用swig对常用的预测接口进行了封装,通过编译会生成py_paddle软件包,安装该软件包就可以在python环境下实现模型预测。可以使用python的 ``help()`` 函数查询软件包相关API说明。 @@ -20,8 +20,8 @@ PaddlePaddle使用swig对常用的预测接口进行了封装,通过编译会 通过调用 ``forwardTest()`` 传入预测数据,直接返回计算结果。 -基于Python的预测Demo --------------------- +预测Demo +-------- 如下是一段使用mnist model来实现手写识别的预测代码。完整的代码见 ``src_root/doc/ui/predict/predict_sample.py`` 。mnist model可以通过 ``src_root\demo\mnist`` 目录下的demo训练出来。 From 5b4bec43a623e96e725414c8c3166601361b9174 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Mon, 28 Nov 2016 17:36:35 +0800 Subject: [PATCH 28/37] Add syntax='proto2' when using protobuf 3 --- proto/CMakeLists.txt | 15 ++++++++++++++- proto/DataConfig.proto.m4 | 1 + proto/DataFormat.proto.m4 | 1 + proto/ModelConfig.proto.m4 | 1 + proto/ParameterConfig.proto.m4 | 1 + proto/ParameterService.proto.m4 | 2 +- proto/TrainerConfig.proto.m4 | 1 + 7 files changed, 20 insertions(+), 2 deletions(-) diff --git a/proto/CMakeLists.txt b/proto/CMakeLists.txt index 461c73f14c..ec68b53d44 100644 --- a/proto/CMakeLists.txt +++ b/proto/CMakeLists.txt @@ -1,3 +1,12 @@ +execute_process(COMMAND ${PROTOBUF_PROTOC_EXECUTABLE} --version + OUTPUT_VARIABLE PROTOBUF_VERSION) +string(REPLACE "libprotoc " "" PROTOBUF_VERSION ${PROTOBUF_VERSION}) + +set(PROTOBUF_3 OFF) +if (${PROTOBUF_VERSION} VERSION_GREATER "3.0.0" OR ${PROTOBUF_VERSION} VERSION_EQUAL "3.0.0") + set(PROTOBUF_3 ON) +endif() + set(proto_filenames DataConfig.proto DataFormat.proto @@ -11,8 +20,12 @@ set(real_proto_files) # TODO(yuyang18): Some internal proto will also be depended on. # Find a way to automatically calculate all depends. foreach(filename ${proto_filenames}) + set(PROTOBUF_3_FLAGS "") + if (PROTOBUF_3) + set(PROTOBUF_3_FLAGS "-Dproto3") + endif() add_custom_command(OUTPUT ${filename} - COMMAND ${M4_EXECUTABLE} -Dreal=${ACCURACY} -I '${INTERNAL_PROTO_PATH}' + COMMAND ${M4_EXECUTABLE} -Dreal=${ACCURACY} ${PROTOBUF_3_FLAGS} -I '${INTERNAL_PROTO_PATH}' ${PROJ_ROOT}/proto/${filename}.m4 > ${filename} DEPENDS ${PROJ_ROOT}/proto/${filename}.m4 COMMENT "Generate ${filename}") diff --git a/proto/DataConfig.proto.m4 b/proto/DataConfig.proto.m4 index 9862e4e7ef..01d451ff7d 100644 --- a/proto/DataConfig.proto.m4 +++ b/proto/DataConfig.proto.m4 @@ -11,6 +11,7 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +ifdef(`proto3', `syntax = "proto2";') package paddle; diff --git a/proto/DataFormat.proto.m4 b/proto/DataFormat.proto.m4 index 556eace5e1..8a4a0be1b3 100644 --- a/proto/DataFormat.proto.m4 +++ b/proto/DataFormat.proto.m4 @@ -11,6 +11,7 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ +ifdef(`proto3', `syntax = "proto2";') package paddle; diff --git a/proto/ModelConfig.proto.m4 b/proto/ModelConfig.proto.m4 index c835cfd522..68a5eb9dd2 100644 --- a/proto/ModelConfig.proto.m4 +++ b/proto/ModelConfig.proto.m4 @@ -11,6 +11,7 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +ifdef(`proto3', `syntax = "proto2";') import "ParameterConfig.proto"; diff --git a/proto/ParameterConfig.proto.m4 b/proto/ParameterConfig.proto.m4 index e8d512445e..26e7c3ef77 100644 --- a/proto/ParameterConfig.proto.m4 +++ b/proto/ParameterConfig.proto.m4 @@ -11,6 +11,7 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +ifdef(`proto3', `syntax = "proto2";') package paddle; diff --git a/proto/ParameterService.proto.m4 b/proto/ParameterService.proto.m4 index 189dc1c970..0b3f14a2ee 100644 --- a/proto/ParameterService.proto.m4 +++ b/proto/ParameterService.proto.m4 @@ -11,6 +11,7 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +ifdef(`proto3', `syntax = "proto2";') import "ParameterConfig.proto"; import "TrainerConfig.proto"; @@ -20,7 +21,6 @@ package paddle; /** * Various structs for communicating with parameter server */ - enum ParameterUpdateMode { // Set parameter PSERVER_UPDATE_MODE_SET_PARAM = 0;//use local param diff --git a/proto/TrainerConfig.proto.m4 b/proto/TrainerConfig.proto.m4 index 3b0e24f90b..965c9cd393 100644 --- a/proto/TrainerConfig.proto.m4 +++ b/proto/TrainerConfig.proto.m4 @@ -11,6 +11,7 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +ifdef(`proto3', `syntax = "proto2";') import "DataConfig.proto"; import "ModelConfig.proto"; From 141e2e9856a12e5194ad409798ef0fbddb911dbf Mon Sep 17 00:00:00 2001 From: zhangjinchao01 Date: Mon, 28 Nov 2016 17:44:43 +0800 Subject: [PATCH 29/37] revise data download path of srl demo --- demo/semantic_role_labeling/data/get_data.sh | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/demo/semantic_role_labeling/data/get_data.sh b/demo/semantic_role_labeling/data/get_data.sh index 55e33f4685..84aecdd9a9 100644 --- a/demo/semantic_role_labeling/data/get_data.sh +++ b/demo/semantic_role_labeling/data/get_data.sh @@ -13,11 +13,11 @@ # See the License for the specific language governing permissions and # limitations under the License. 
set -e -wget http://www.cs.upc.edu/~srlconll/conll05st-tests.tar.gz -wget https://www.googledrive.com/host/0B7Q8d52jqeI9ejh6Q1RpMTFQT1k/semantic_role_labeling/verbDict.txt --no-check-certificate -wget https://www.googledrive.com/host/0B7Q8d52jqeI9ejh6Q1RpMTFQT1k/semantic_role_labeling/targetDict.txt --no-check-certificate -wget https://www.googledrive.com/host/0B7Q8d52jqeI9ejh6Q1RpMTFQT1k/semantic_role_labeling/wordDict.txt --no-check-certificate -wget https://www.googledrive.com/host/0B7Q8d52jqeI9ejh6Q1RpMTFQT1k/semantic_role_labeling/emb --no-check-certificate +#wget http://www.cs.upc.edu/~srlconll/conll05st-tests.tar.gz +wget http://paddlepaddle.bj.bcebos.com/demo/srl_dict_and_embedding/verbDict.txt +wget http://paddlepaddle.bj.bcebos.com/demo/srl_dict_and_embedding/targetDict.txt +wget http://paddlepaddle.bj.bcebos.com/demo/srl_dict_and_embedding/wordDict.txt +wget http://paddlepaddle.bj.bcebos.com/demo/srl_dict_and_embedding/emb tar -xzvf conll05st-tests.tar.gz rm conll05st-tests.tar.gz cp ./conll05st-release/test.wsj/words/test.wsj.words.gz . From 2f8947f751d38118be61941588e5141fb5173ec6 Mon Sep 17 00:00:00 2001 From: zhangjinchao01 Date: Mon, 28 Nov 2016 17:47:18 +0800 Subject: [PATCH 30/37] del comments --- demo/semantic_role_labeling/data/get_data.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/demo/semantic_role_labeling/data/get_data.sh b/demo/semantic_role_labeling/data/get_data.sh index 84aecdd9a9..99487e0d9a 100644 --- a/demo/semantic_role_labeling/data/get_data.sh +++ b/demo/semantic_role_labeling/data/get_data.sh @@ -13,7 +13,7 @@ # See the License for the specific language governing permissions and # limitations under the License. set -e -#wget http://www.cs.upc.edu/~srlconll/conll05st-tests.tar.gz +wget http://www.cs.upc.edu/~srlconll/conll05st-tests.tar.gz wget http://paddlepaddle.bj.bcebos.com/demo/srl_dict_and_embedding/verbDict.txt wget http://paddlepaddle.bj.bcebos.com/demo/srl_dict_and_embedding/targetDict.txt wget http://paddlepaddle.bj.bcebos.com/demo/srl_dict_and_embedding/wordDict.txt From cf205d0d43a82ef46cd9374218555be339dab5eb Mon Sep 17 00:00:00 2001 From: xutianbing Date: Tue, 22 Nov 2016 14:24:30 -0800 Subject: [PATCH 31/37] deepSwap --- paddle/math/BaseMatrix.cu | 6 ++++++ paddle/math/BaseMatrix.h | 7 +++++++ 2 files changed, 13 insertions(+) diff --git a/paddle/math/BaseMatrix.cu b/paddle/math/BaseMatrix.cu index 2f32b3fdd1..a723ef7bc8 100644 --- a/paddle/math/BaseMatrix.cu +++ b/paddle/math/BaseMatrix.cu @@ -1240,6 +1240,12 @@ void BaseMatrixT<T>::assignAtOffset(BaseMatrixT& b, int64_t columnOffset) { } } +DEFINE_MATRIX_BINARY_OP(DeepSwap, T tmp = a; a = b; b = tmp); +template<class T> +void BaseMatrixT<T>::deepSwap(BaseMatrixT& b) { + applyBinary(binary::DeepSwap<T>(), b); +} + template<> void BaseMatrixT<real>::rowDotMul(size_t destCol, BaseMatrixT& b, diff --git a/paddle/math/BaseMatrix.h b/paddle/math/BaseMatrix.h index d41dcee682..dbc217c30f 100644 --- a/paddle/math/BaseMatrix.h +++ b/paddle/math/BaseMatrix.h @@ -455,6 +455,13 @@ public: */ void assign(T p); + /** + * @code + * swap(this, b) + * @endcode + */ + void deepSwap(BaseMatrixT& b); + /** * @code * this = this + p From c7f96de12e9ce3f32cf8059a583eb9e5ff2b39e3 Mon Sep 17 00:00:00 2001 From: xutianbing Date: Mon, 28 Nov 2016 15:47:16 -0800 Subject: [PATCH 32/37] add unit test for deepSwap --- paddle/math/tests/test_matrixCompare.cpp | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index 
ae5bc5a86a..de540dad4c 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -448,6 +448,24 @@ void testMatrixZeroAtOffset(int height, int width) { MatrixCheckEqual(*cpuA, *cpuTest); } +void testMatrixDeepSwap(int height, int width) { + MatrixPtr cpuA = std::make_shared<CpuMatrix>(height, width); + MatrixPtr cpuB = std::make_shared<CpuMatrix>(height, width); + MatrixPtr cpuCopyA = std::make_shared<CpuMatrix>(height, width); + MatrixPtr cpuCopyB = std::make_shared<CpuMatrix>(height, width); + + cpuA->randomizeUniform(); + cpuB->randomizeUniform(); + cpuCopyA->copyFrom(*cpuA); + cpuCopyB->copyFrom(*cpuB); + + // swap matrix cpuA and cpuB + cpuA->deepSwap(*cpuB); + + MatrixCheckEqual(*cpuA, *cpuCopyB); + MatrixCheckEqual(*cpuB, *cpuCopyA); +} + void testMatrixBinaryAdd(int height, int width) { MatrixPtr cpuA = std::make_shared<CpuMatrix>(height, width); MatrixPtr cpuB = std::make_shared<CpuMatrix>(height, width); @@ -480,6 +498,7 @@ void testMatrixAssign(int height, int width) { MatrixCheckEqual(*cpuA, *outputCheck); } + void testMatrixAdd(int height, int width) { MatrixPtr cpuA = std::make_shared<CpuMatrix>(height, width); MatrixPtr gpuA = std::make_shared<GpuMatrix>(height, width); @@ -798,6 +817,7 @@ TEST(Matrix, unary) { testMatrixBinaryAdd(height, width); testMatrixTanh(height, width); testMatrixTanhDerivative(height, width); + testMatrixDeepSwap(height, width); // applyTernary testMatrixTernarySub(height, width); From 5de5453d15eddc4a3eb2e57f1aaf83e571ab55e0 Mon Sep 17 00:00:00 2001 From: xutianbing Date: Mon, 28 Nov 2016 16:25:56 -0800 Subject: [PATCH 33/37] add code comments for deepSwap --- paddle/math/BaseMatrix.h | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/paddle/math/BaseMatrix.h b/paddle/math/BaseMatrix.h index dbc217c30f..ea58c861a3 100644 --- a/paddle/math/BaseMatrix.h +++ b/paddle/math/BaseMatrix.h @@ -458,6 +458,10 @@ public: /** * @code * swap(this, b) + * example: swap two Matrices + * MatrixPtr cpuA = std::make_shared<CpuMatrix>(height, width); + * MatrixPtr cpuB = std::make_shared<CpuMatrix>(height, width); + * cpuA->deepSwap(*cpuB); * @endcode */ void deepSwap(BaseMatrixT& b); From 1d8d9573225f44eb39a6af187b590b06d45dfb95 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Tue, 29 Nov 2016 11:32:43 +0800 Subject: [PATCH 34/37] Add `set -e` for paddle boot up script. * error when paddle has a wrong version number. --- paddle/scripts/submit_local.sh.in | 2 ++ 1 file changed, 2 insertions(+) diff --git a/paddle/scripts/submit_local.sh.in b/paddle/scripts/submit_local.sh.in index 20ea2fedc4..ace2c0dee9 100644 --- a/paddle/scripts/submit_local.sh.in +++ b/paddle/scripts/submit_local.sh.in @@ -29,6 +29,7 @@ function version(){ } function ver2num() { + set -e # convert version to number. if [ -z "$1" ]; then # empty argument printf "%03d%03d%03d%03d%03d" 0 @@ -41,6 +42,7 @@ function ver2num() { printf "%03d%03d%03d%03d%03d" $VERN fi fi + set +e } PADDLE_CONF_HOME="$HOME/.config/paddle" From 7c25d9b8a1f422cdaadf26ac2c23f4b9fac4bc23 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Tue, 29 Nov 2016 11:46:49 +0800 Subject: [PATCH 35/37] fix style problem in doc_cn/introduction.rst --- doc_cn/build_and_install/index.rst | 5 - doc_cn/introduction/index.rst | 134 +++++++++++++++------------ doc_cn/ui/predict/swig_py_paddle.rst | 22 +++-- 3 files changed, 89 insertions(+), 72 deletions(-) diff --git a/doc_cn/build_and_install/index.rst b/doc_cn/build_and_install/index.rst index 2205e28224..48163fb36e 100644 --- a/doc_cn/build_and_install/index.rst +++ b/doc_cn/build_and_install/index.rst @@ -8,9 +8,7 @@ PaddlePaddle提供数个预编译的二进制来进行安装,包括Docker镜 .. 
toctree:: :maxdepth: 1 - :glob: - 使用Jumbo安装(对内) <../build/internal/install_from_jumbo.rst> install/docker_install.rst install/ubuntu_install.rst @@ -25,8 +23,5 @@ PaddlePaddle提供数个预编译的二进制来进行安装,包括Docker镜 .. toctree:: :maxdepth: 1 - :glob: - 源码下载(对内) <../build/internal/download_paddle_source_zh_cn.rst> - 从源码编译安装(对内) <../build/internal/build_from_source_zh_cn.rst> cmake/index.rst diff --git a/doc_cn/introduction/index.rst b/doc_cn/introduction/index.rst index f6eb5456c0..c996f5f4ac 100644 --- a/doc_cn/introduction/index.rst +++ b/doc_cn/introduction/index.rst @@ -1,102 +1,114 @@ -# 简介 +简介 +==== PaddlePaddle是源于百度的一个深度学习平台。这份简短的介绍将向你展示如何利用PaddlePaddle来解决一个经典的线性回归问题。 -## 1. 一个经典的任务 +1. 一个经典的任务 +----------------- -我们展示如何用PaddlePaddle解决单变量的线性回归问题。线性回归的输入是一批点`(x, y) `,其中 `y = wx + b + ε`, 而 ε 是一个符合高斯分布的随机变量。线性回归的输出是从这批点估计出来的参数 `w` 和 `b` 。 +我们展示如何用PaddlePaddle解决 `单变量的线性回归 `_ 问题。线性回归的输入是一批点 `(x, y)` ,其中 `y = wx + b + ε`, 而 ε 是一个符合高斯分布的随机变量。线性回归的输出是从这批点估计出来的参数 `w` 和 `b` 。 一个例子是房产估值。我们假设房产的价格(y)是其大小(x)的一个线性函数,那么我们可以通过收集市场上房子的大小和价格,来估计线性函数的参数 w 和 b。 -## 2. 准备数据 +2. 准备数据 +----------- 假设变量 `x` 和 `y` 的真实关系为: `y = 2x + 0.3 + ε`,这里展示如何使用观测数据来拟合这一线性关系。首先,Python代码将随机产生2000个观测点,作为线性回归的输入。下面脚本符合PaddlePaddle期待的读取数据的Python程序的模式。 -```python -# dataprovider.py -from paddle.trainer.PyDataProvider2 import * -import random +.. code-block:: python -# 定义输入数据的类型: 2个浮点数 -@provider(input_types=[dense_vector(1), dense_vector(1)],use_seq=False) -def process(settings, input_file): - for i in xrange(2000): - x = random.random() - yield [x], [2*x+0.3] -``` + # dataprovider.py + from paddle.trainer.PyDataProvider2 import * + import random -## 3. 训练模型 + # 定义输入数据的类型: 2个浮点数 + @provider(input_types=[dense_vector(1), dense_vector(1)],use_seq=False) + def process(settings, input_file): + for i in xrange(2000): + x = random.random() + yield [x], [2*x+0.3] + +3. 训练模型 +----------- 为了还原 `y = 2x + 0.3`,我们先从一条随机的直线 `y' = wx + b` 开始,然后利用观测数据调整 `w` 和 `b` 使得 `y'` 和 `y` 的差距不断减小,最终趋于接近。这个过程就是模型的训练过程,而 `w` 和 `b` 就是模型的参数,即我们的训练目标。 在PaddlePaddle里,该模型的网络配置如下。 -```python -# trainer_config.py -from paddle.trainer_config_helpers import * - -# 1. 定义数据来源,调用上面的process函数获得观测数据 -data_file = 'empty.list' -with open(data_file, 'w') as f: f.writelines(' ') -define_py_data_sources2(train_list=data_file, test_list=None, - module='dataprovider', obj='process',args={}) - -# 2. 学习算法。控制如何改变模型参数 w 和 b -settings(batch_size=12, learning_rate=1e-3, learning_method=MomentumOptimizer()) - -# 3. 神经网络配置 -x = data_layer(name='x', size=1) -y = data_layer(name='y', size=1) -# 线性计算网络层: ȳ = wx + b -ȳ = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b')) -# 计算误差函数,即 ȳ 和真实 y 之间的距离 -cost = regression_cost(input= ȳ, label=y) -outputs(cost) -``` +.. code-block:: python + + # trainer_config.py + from paddle.trainer_config_helpers import * + + # 1. 定义数据来源,调用上面的process函数获得观测数据 + data_file = 'empty.list' + with open(data_file, 'w') as f: f.writelines(' ') + define_py_data_sources2(train_list=data_file, test_list=None, + module='dataprovider', obj='process',args={}) + + # 2. 学习算法。控制如何改变模型参数 w 和 b + settings(batch_size=12, learning_rate=1e-3, learning_method=MomentumOptimizer()) + + # 3. 
神经网络配置 + x = data_layer(name='x', size=1) + y = data_layer(name='y', size=1) + # 线性计算网络层: ȳ = wx + b + ȳ = fc_layer(input=x, param_attr=ParamAttr(name='w'), size=1, act=LinearActivation(), bias_attr=ParamAttr(name='b')) + # 计算误差函数,即 ȳ 和真实 y 之间的距离 + cost = regression_cost(input= ȳ, label=y) + outputs(cost) + 这段简短的配置展示了PaddlePaddle的基本用法: -- 第一部分定义了数据输入。一般情况下,PaddlePaddle先从一个文件列表里获得数据文件地址,然后交给用户自定义的函数(例如上面的`process`函数)进行读入和预处理从而得到真实输入。本文中由于输入数据是随机生成的不需要读输入文件,所以放一个空列表(`empty.list`)即可。 +- 第一部分定义了数据输入。一般情况下,PaddlePaddle先从一个文件列表里获得数据文件地址,然后交给用户自定义的函数(例如上面的 `process`函数)进行读入和预处理从而得到真实输入。本文中由于输入数据是随机生成的不需要读输入文件,所以放一个空列表(`empty.list`)即可。 - 第二部分主要是选择学习算法,它定义了模型参数改变的规则。PaddlePaddle提供了很多优秀的学习算法,这里使用一个基于momentum的随机梯度下降(SGD)算法,该算法每批量(batch)读取12个采样数据进行随机梯度计算来更新模型参数。 - 最后一部分是神经网络的配置。由于PaddlePaddle已经实现了丰富的网络层,所以很多时候你需要做的只是定义正确的网络层并把它们连接起来。这里使用了三种网络单元: + - **数据层**:数据层 `data_layer` 是神经网络的入口,它读入数据并将它们传输到接下来的网络层。这里数据层有两个,分别对应于变量 `x` 和 `y`。 - **全连接层**:全连接层 `fc_layer` 是基础的计算单元,这里利用它建模变量之间的线性关系。计算单元是神经网络的核心,PaddlePaddle支持大量的计算单元和任意深度的网络连接,从而可以拟合任意的函数来学习复杂的数据关系。 - - **回归误差代价层**:回归误差代价层 `regression_cost`是众多误差代价函数层的一种,它们在训练过程作为网络的出口,用来计算模型的误差,是模型参数优化的目标函数。 + - **回归误差代价层**:回归误差代价层 `regression_cost` 是众多误差代价函数层的一种,它们在训练过程作为网络的出口,用来计算模型的误差,是模型参数优化的目标函数。 + +定义了网络结构并保存为 `trainer_config.py` 之后,运行以下训练命令: + +.. code-block:: bash -定义了网络结构并保存为`trainer_config.py`之后,运行以下训练命令: - ``` - paddle train --config=trainer_config.py --save_dir=./output --num_passes=30 - ``` + paddle train --config=trainer_config.py --save_dir=./output --num_passes=30 PaddlePaddle将在观测数据集上迭代训练30轮,并将每轮的模型结果存放在 `./output` 路径下。从输出日志可以看到,随着轮数增加误差代价函数的输出在不断的减小,这意味着模型在训练数据上不断的改进,直到逼近真实解:` y = 2x + 0.3 ` -## 4. 模型检验 +4. 模型检验 +----------- 训练完成后,我们希望能够检验模型的好坏。一种常用的做法是用学习的模型对另外一组测试数据进行预测,评价预测的效果。在这个例子中,由于已经知道了真实答案,我们可以直接观察模型的参数是否符合预期来进行检验。 PaddlePaddle将每个模型参数作为一个numpy数组单独存为一个文件,所以可以利用如下方法读取模型的参数。 -```python -import numpy as np -import os +.. code-block:: python -def load(file_name): - with open(file_name, 'rb') as f: - f.read(16) # skip header for float type. - return np.fromfile(f, dtype=np.float32) + import numpy as np + import os + + def load(file_name): + with open(file_name, 'rb') as f: + f.read(16) # skip header for float type. + return np.fromfile(f, dtype=np.float32) -print 'w=%.6f, b=%.6f' % (load('output/pass-00029/w'), load('output/pass-00029/b')) -# w=1.999743, b=0.300137 -```
-![](./parameters.png)
+ print 'w=%.6f, b=%.6f' % (load('output/pass-00029/w'), load('output/pass-00029/b')) + # w=1.999743, b=0.300137 + +.. image:: ./parameters.png + :align: center + :scale: 80 % 从图中可以看到,虽然 `w` 和 `b` 都使用随机值初始化,但在起初的几轮训练中它们都在快速逼近真实值,并且后续仍在不断改进,使得最终得到的模型几乎与真实模型一致。 -这样,我们用PaddlePaddle解决了单变量线性回归问题, 包括数据输入,模型训练和最后的结果验证。 +这样,我们用PaddlePaddle解决了单变量线性回归问题, 包括数据输入、模型训练和最后的结果验证。 -## 5. 推荐后续阅读 +5. 推荐后续阅读 +--------------- -- 安装/编译:PaddlePaddle的安装与编译文档。 -- 快速入门 :使用商品评论分类任务,系统性的介绍如何一步步改进,最终得到产品级的深度模型。 -- 示例:各种实用案例,涵盖图像、文本、推荐等多个领域。 +- `安装/编译 <../build_and_install/index.html>`_ :PaddlePaddle的安装与编译文档。 +- `快速入门 <../demo/quick_start/index.html>`_ :使用商品评论分类任务,系统性的介绍如何一步步改进,最终得到产品级的深度模型。 +- `示例 <../demo/index.html>`_ :各种实用案例,涵盖图像、文本、推荐等多个领域。 \ No newline at end of file diff --git a/doc_cn/ui/predict/swig_py_paddle.rst b/doc_cn/ui/predict/swig_py_paddle.rst index 89031dd72f..05f25345c5 100644 --- a/doc_cn/ui/predict/swig_py_paddle.rst +++ b/doc_cn/ui/predict/swig_py_paddle.rst @@ -9,15 +9,24 @@ PaddlePaddle使用swig对常用的预测接口进行了封装,通过编译会 基于Python的模型预测,主要包括以下五个步骤。 1. 初始化PaddlePaddle环境 - 在程序开始阶段,通过调用 ``swig_paddle.initPaddle()`` 并传入相应的命令行参数初始化PaddlePaddle。 + + 在程序开始阶段,通过调用 ``swig_paddle.initPaddle()`` 并传入相应的命令行参数初始化PaddlePaddle。 + 2. 解析模型配置文件 - 初始化之后,可以通过调用 ``parse_config()`` 解析训练模型时用的配置文件。注意预测数据通常不包含label, 同时预测网络通常直接输出最后一层的结果而不是像训练网络一样再接一层cost layer,所以一般需要对训练用的模型配置文件稍作相应修改才能在预测时使用。 + + 初始化之后,可以通过调用 ``parse_config()`` 解析训练模型时用的配置文件。注意预测数据通常不包含label, 同时预测网络通常直接输出最后一层的结果而不是像训练网络一样再接一层cost layer,所以一般需要对训练用的模型配置文件稍作相应修改才能在预测时使用。 + 3. 构造paddle.GradientMachine - 通过调用 ``swig_paddle.GradientMachine.createFromConfigproto()`` 传入上一步解析出来的模型配置就可以创建一个 ``GradientMachine``。 + + 通过调用 ``swig_paddle.GradientMachine.createFromConfigproto()`` 传入上一步解析出来的模型配置就可以创建一个 ``GradientMachine``。 + 4. 准备预测数据 - swig_paddle中的预测接口的参数是自定义的C++数据类型,py_paddle里面提供了一个工具类 ``DataProviderConverter`` 可以用于接收和PyDataProvider2一样的输入数据并转换成预测接口所需的数据类型。 + + swig_paddle中的预测接口的参数是自定义的C++数据类型,py_paddle里面提供了一个工具类 ``DataProviderConverter`` 可以用于接收和PyDataProvider2一样的输入数据并转换成预测接口所需的数据类型。 + 5. 模型预测 - 通过调用 ``forwardTest()`` 传入预测数据,直接返回计算结果。 + + 通过调用 ``forwardTest()`` 传入预测数据,直接返回计算结果。 预测Demo @@ -34,7 +43,8 @@ Demo预测输出如下,其中value即为softmax层的输出。由于TEST_DATA .. 
code-block:: text - [{'id': None, 'value': array([[ 5.53018653e-09, 1.12194102e-05, 1.96644767e-09, + [{'id': None, 'value': array( + [[ 5.53018653e-09, 1.12194102e-05, 1.96644767e-09, 1.43630644e-02, 1.51111044e-13, 9.85625684e-01, 2.08823112e-10, 2.32777140e-08, 2.00186201e-09, 1.15501715e-08], From 5c551f90558955fd8d05a3bee55af6c302ddc0aa Mon Sep 17 00:00:00 2001 From: liaogang Date: Tue, 29 Nov 2016 15:34:52 +0800 Subject: [PATCH 36/37] refine doc structure and revise some words --- doc/about/index.rst | 10 ++--- doc/api/data_provider/index.rst | 4 +- doc/api/data_provider/pydataprovider2.rst | 4 +- doc/api/index.md | 14 ------- doc/api/index.rst | 36 ++++++++++++++++++ doc/api/predict/swig_py_paddle_en.rst | 4 +- doc/api/trainer_config_helpers/attrs.rst | 4 +- doc/api/trainer_config_helpers/index.rst | 14 ------- .../basic_usage/basic_usage.rst | 0 .../basic_usage/parameters.png | Bin .../build_and_install/build_from_source.md | 0 .../build_and_install/cmake.png | Bin .../build_and_install/docker_install.rst | 0 .../build_and_install/index.rst | 0 .../build_and_install/ubuntu_install.rst | 0 doc/{introduction => getstarted}/index.rst | 2 +- doc/howto/algorithm/index.rst | 7 ---- doc/howto/algorithm/rnn/bi_lstm.jpg | 1 - .../rnn/encoder-decoder-attention-model.png | 1 - doc/howto/cluster/cluster_train.md | 2 +- .../arguments.md} | 0 .../detail_introduction.md | 0 .../{cmd_argument => cmd_parameter}/index.md | 4 +- .../use_case.md | 0 doc/howto/contribute_to_paddle.md | 2 +- doc/howto/deep_model/index.rst | 7 ++++ .../{algorithm => deep_model}/rnn/rnn.rst | 4 +- doc/howto/dev/index.rst | 9 ----- doc/howto/dev/layer.md | 5 --- doc/howto/index.rst | 27 ++++++++++--- .../{dev => }/new_layer/FullyConnected.jpg | Bin .../new_layer.rst => new_layer/index.rst} | 6 +-- doc/howto/optimization/index.rst | 4 +- doc/howto/{dev => }/source/api.rst | 0 doc/howto/{dev => }/source/cuda/index.rst | 0 doc/howto/{dev => }/source/cuda/matrix.rst | 0 doc/howto/{dev => }/source/cuda/nn.rst | 0 doc/howto/{dev => }/source/cuda/utils.rst | 0 .../{dev => }/source/gserver/activations.rst | 0 .../source/gserver/dataproviders.rst | 0 .../{dev => }/source/gserver/evaluators.rst | 0 .../source/gserver/gradientmachines.rst | 0 doc/howto/{dev => }/source/gserver/index.rst | 0 doc/howto/{dev => }/source/gserver/layers.rst | 0 .../{dev => }/source/gserver/neworks.rst | 0 doc/howto/{dev => }/source/index.rst | 0 doc/howto/{dev => }/source/math/functions.rst | 0 doc/howto/{dev => }/source/math/index.rst | 0 doc/howto/{dev => }/source/math/matrix.rst | 0 doc/howto/{dev => }/source/math/utils.rst | 0 doc/howto/{dev => }/source/math/vector.rst | 0 .../{dev => }/source/parameter/index.rst | 0 .../{dev => }/source/parameter/optimizer.rst | 0 .../{dev => }/source/parameter/parameter.rst | 0 .../{dev => }/source/parameter/updater.rst | 0 doc/howto/{dev => }/source/pserver/client.rst | 0 doc/howto/{dev => }/source/pserver/index.rst | 0 .../{dev => }/source/pserver/network.rst | 0 doc/howto/{dev => }/source/pserver/server.rst | 0 doc/howto/{dev => }/source/trainer.rst | 0 .../source/utils/customStackTrace.rst | 0 doc/howto/{dev => }/source/utils/enum.rst | 0 doc/howto/{dev => }/source/utils/index.rst | 0 doc/howto/{dev => }/source/utils/lock.rst | 0 doc/howto/{dev => }/source/utils/queue.rst | 0 doc/howto/{dev => }/source/utils/thread.rst | 0 doc/index.rst | 2 +- doc/tutorials/index.md | 2 +- 68 files changed, 92 insertions(+), 83 deletions(-) delete mode 100644 doc/api/index.md create mode 100644 doc/api/index.rst delete mode 
100644 doc/api/trainer_config_helpers/index.rst rename doc/{introduction => getstarted}/basic_usage/basic_usage.rst (100%) rename doc/{introduction => getstarted}/basic_usage/parameters.png (100%) rename doc/{introduction => getstarted}/build_and_install/build_from_source.md (100%) rename doc/{introduction => getstarted}/build_and_install/cmake.png (100%) rename doc/{introduction => getstarted}/build_and_install/docker_install.rst (100%) rename doc/{introduction => getstarted}/build_and_install/index.rst (100%) rename doc/{introduction => getstarted}/build_and_install/ubuntu_install.rst (100%) rename doc/{introduction => getstarted}/index.rst (88%) delete mode 100644 doc/howto/algorithm/index.rst delete mode 120000 doc/howto/algorithm/rnn/bi_lstm.jpg delete mode 120000 doc/howto/algorithm/rnn/encoder-decoder-attention-model.png rename doc/howto/{cmd_argument/argument_outline.md => cmd_parameter/arguments.md} (100%) rename doc/howto/{cmd_argument => cmd_parameter}/detail_introduction.md (100%) rename doc/howto/{cmd_argument => cmd_parameter}/index.md (53%) rename doc/howto/{cmd_argument => cmd_parameter}/use_case.md (100%) create mode 100644 doc/howto/deep_model/index.rst rename doc/howto/{algorithm => deep_model}/rnn/rnn.rst (99%) delete mode 100644 doc/howto/dev/index.rst delete mode 100644 doc/howto/dev/layer.md rename doc/howto/{dev => }/new_layer/FullyConnected.jpg (100%) rename doc/howto/{dev/new_layer/new_layer.rst => new_layer/index.rst} (99%) rename doc/howto/{dev => }/source/api.rst (100%) rename doc/howto/{dev => }/source/cuda/index.rst (100%) rename doc/howto/{dev => }/source/cuda/matrix.rst (100%) rename doc/howto/{dev => }/source/cuda/nn.rst (100%) rename doc/howto/{dev => }/source/cuda/utils.rst (100%) rename doc/howto/{dev => }/source/gserver/activations.rst (100%) rename doc/howto/{dev => }/source/gserver/dataproviders.rst (100%) rename doc/howto/{dev => }/source/gserver/evaluators.rst (100%) rename doc/howto/{dev => }/source/gserver/gradientmachines.rst (100%) rename doc/howto/{dev => }/source/gserver/index.rst (100%) rename doc/howto/{dev => }/source/gserver/layers.rst (100%) rename doc/howto/{dev => }/source/gserver/neworks.rst (100%) rename doc/howto/{dev => }/source/index.rst (100%) rename doc/howto/{dev => }/source/math/functions.rst (100%) rename doc/howto/{dev => }/source/math/index.rst (100%) rename doc/howto/{dev => }/source/math/matrix.rst (100%) rename doc/howto/{dev => }/source/math/utils.rst (100%) rename doc/howto/{dev => }/source/math/vector.rst (100%) rename doc/howto/{dev => }/source/parameter/index.rst (100%) rename doc/howto/{dev => }/source/parameter/optimizer.rst (100%) rename doc/howto/{dev => }/source/parameter/parameter.rst (100%) rename doc/howto/{dev => }/source/parameter/updater.rst (100%) rename doc/howto/{dev => }/source/pserver/client.rst (100%) rename doc/howto/{dev => }/source/pserver/index.rst (100%) rename doc/howto/{dev => }/source/pserver/network.rst (100%) rename doc/howto/{dev => }/source/pserver/server.rst (100%) rename doc/howto/{dev => }/source/trainer.rst (100%) rename doc/howto/{dev => }/source/utils/customStackTrace.rst (100%) rename doc/howto/{dev => }/source/utils/enum.rst (100%) rename doc/howto/{dev => }/source/utils/index.rst (100%) rename doc/howto/{dev => }/source/utils/lock.rst (100%) rename doc/howto/{dev => }/source/utils/queue.rst (100%) rename doc/howto/{dev => }/source/utils/thread.rst (100%) diff --git a/doc/about/index.rst b/doc/about/index.rst index 511c154641..8a372d2bc2 100644 --- a/doc/about/index.rst +++ 
b/doc/about/index.rst @@ -1,14 +1,14 @@ -About +ABOUT ======= - -Credits -------- - PaddlePaddle is an easy-to-use, efficient, flexible and scalable deep learning platform, which was originally developed by Baidu scientists and engineers for the purpose of applying deep learning to many products at Baidu. PaddlePaddle is now open source but far from complete, which is intended to be built upon, improved, scaled, and extended. We hope to build an active open source community both by providing feedback and by actively contributing to the source code. + +Credits +-------- + We owe many thanks to `all contributors and developers `_ of PaddlePaddle! diff --git a/doc/api/data_provider/index.rst b/doc/api/data_provider/index.rst index 3db5b57376..5e7a49d632 100644 --- a/doc/api/data_provider/index.rst +++ b/doc/api/data_provider/index.rst @@ -1,5 +1,5 @@ -DataProvider Introduction -========================= +Introduction +============== DataProvider is a module that loads training or testing data into cpu or gpu memory for the following training or testing process. diff --git a/doc/api/data_provider/pydataprovider2.rst b/doc/api/data_provider/pydataprovider2.rst index e105d3be30..b42cbca576 100644 --- a/doc/api/data_provider/pydataprovider2.rst +++ b/doc/api/data_provider/pydataprovider2.rst @@ -1,5 +1,5 @@ -How to use PyDataProvider2 -========================== +PyDataProvider2 +================= We highly recommend users to use PyDataProvider2 to provide training or testing data to PaddlePaddle. The user only needs to focus on how to read a single diff --git a/doc/api/index.md b/doc/api/index.md deleted file mode 100644 index 8c4a65e0d5..0000000000 --- a/doc/api/index.md +++ /dev/null @@ -1,14 +0,0 @@ -# API - -## Data Provider - -* [Introduction](data_provider/index.rst) -* [PyDataProvider2](data_provider/pydataprovider2.rst) - -## Trainer Configuration - -* [Model Config Interface](trainer_config_helpers/index.rst) - -## Predict - -* [Python Prediction API](predict/swig_py_paddle_en.rst) diff --git a/doc/api/index.rst b/doc/api/index.rst new file mode 100644 index 0000000000..ccee7a0f1f --- /dev/null +++ b/doc/api/index.rst @@ -0,0 +1,36 @@ +API +==== + +DataProvider API +---------------- + +.. toctree:: + :maxdepth: 1 + + data_provider/index.rst + data_provider/pydataprovider2.rst + +Model Config API +---------------- + +.. toctree:: + :maxdepth: 1 + + trainer_config_helpers/index.rst + trainer_config_helpers/optimizers.rst + trainer_config_helpers/data_sources.rst + trainer_config_helpers/layers.rst + trainer_config_helpers/activations.rst + trainer_config_helpers/poolings.rst + trainer_config_helpers/networks.rst + trainer_config_helpers/evaluators.rst + trainer_config_helpers/attrs.rst + + +Applications API +---------------- + +.. toctree:: + :maxdepth: 1 + + predict/swig_py_paddle_en.rst \ No newline at end of file diff --git a/doc/api/predict/swig_py_paddle_en.rst b/doc/api/predict/swig_py_paddle_en.rst index b743fc4569..9845cd1607 100644 --- a/doc/api/predict/swig_py_paddle_en.rst +++ b/doc/api/predict/swig_py_paddle_en.rst @@ -1,5 +1,5 @@ -Python Prediction API -===================== +Python Prediction +================== PaddlePaddle offers a set of clean prediction interfaces for python with the help of SWIG. 
The main steps of predicting values in Python are: diff --git a/doc/api/trainer_config_helpers/attrs.rst b/doc/api/trainer_config_helpers/attrs.rst index 44919aba90..ac63127bf7 100644 --- a/doc/api/trainer_config_helpers/attrs.rst +++ b/doc/api/trainer_config_helpers/attrs.rst @@ -1,5 +1,5 @@ -Parameter and Extra Layer Attribute -=================================== +Parameter Attributes +======================= .. automodule:: paddle.trainer_config_helpers.attrs :members: diff --git a/doc/api/trainer_config_helpers/index.rst b/doc/api/trainer_config_helpers/index.rst deleted file mode 100644 index 8395eb7571..0000000000 --- a/doc/api/trainer_config_helpers/index.rst +++ /dev/null @@ -1,14 +0,0 @@ -Model Config Interface -====================== - -.. toctree:: - :maxdepth: 1 - - optimizers.rst - data_sources.rst - layers.rst - activations.rst - poolings.rst - networks.rst - evaluators.rst - attrs.rst diff --git a/doc/introduction/basic_usage/basic_usage.rst b/doc/getstarted/basic_usage/basic_usage.rst similarity index 100% rename from doc/introduction/basic_usage/basic_usage.rst rename to doc/getstarted/basic_usage/basic_usage.rst diff --git a/doc/introduction/basic_usage/parameters.png b/doc/getstarted/basic_usage/parameters.png similarity index 100% rename from doc/introduction/basic_usage/parameters.png rename to doc/getstarted/basic_usage/parameters.png diff --git a/doc/introduction/build_and_install/build_from_source.md b/doc/getstarted/build_and_install/build_from_source.md similarity index 100% rename from doc/introduction/build_and_install/build_from_source.md rename to doc/getstarted/build_and_install/build_from_source.md diff --git a/doc/introduction/build_and_install/cmake.png b/doc/getstarted/build_and_install/cmake.png similarity index 100% rename from doc/introduction/build_and_install/cmake.png rename to doc/getstarted/build_and_install/cmake.png diff --git a/doc/introduction/build_and_install/docker_install.rst b/doc/getstarted/build_and_install/docker_install.rst similarity index 100% rename from doc/introduction/build_and_install/docker_install.rst rename to doc/getstarted/build_and_install/docker_install.rst diff --git a/doc/introduction/build_and_install/index.rst b/doc/getstarted/build_and_install/index.rst similarity index 100% rename from doc/introduction/build_and_install/index.rst rename to doc/getstarted/build_and_install/index.rst diff --git a/doc/introduction/build_and_install/ubuntu_install.rst b/doc/getstarted/build_and_install/ubuntu_install.rst similarity index 100% rename from doc/introduction/build_and_install/ubuntu_install.rst rename to doc/getstarted/build_and_install/ubuntu_install.rst diff --git a/doc/introduction/index.rst b/doc/getstarted/index.rst similarity index 88% rename from doc/introduction/index.rst rename to doc/getstarted/index.rst index ff22f05a1b..5f2787066e 100644 --- a/doc/introduction/index.rst +++ b/doc/getstarted/index.rst @@ -1,4 +1,4 @@ -Introduction +GET STARTED ============ .. toctree:: diff --git a/doc/howto/algorithm/index.rst b/doc/howto/algorithm/index.rst deleted file mode 100644 index b4ecbc4847..0000000000 --- a/doc/howto/algorithm/index.rst +++ /dev/null @@ -1,7 +0,0 @@ -Algorithm Configuration -======================= - -.. 
toctree:: - :maxdepth: 1 - - rnn/rnn.rst diff --git a/doc/howto/algorithm/rnn/bi_lstm.jpg b/doc/howto/algorithm/rnn/bi_lstm.jpg deleted file mode 120000 index f8f3b17691..0000000000 --- a/doc/howto/algorithm/rnn/bi_lstm.jpg +++ /dev/null @@ -1 +0,0 @@ -../../../tutorials/sentiment_analysis/bi_lstm.jpg \ No newline at end of file diff --git a/doc/howto/algorithm/rnn/encoder-decoder-attention-model.png b/doc/howto/algorithm/rnn/encoder-decoder-attention-model.png deleted file mode 120000 index 88a1d3e5ac..0000000000 --- a/doc/howto/algorithm/rnn/encoder-decoder-attention-model.png +++ /dev/null @@ -1 +0,0 @@ -../../../tutorials/text_generation/encoder-decoder-attention-model.png \ No newline at end of file diff --git a/doc/howto/cluster/cluster_train.md b/doc/howto/cluster/cluster_train.md index 6b68596dc1..1de34a6a99 100644 --- a/doc/howto/cluster/cluster_train.md +++ b/doc/howto/cluster/cluster_train.md @@ -1,4 +1,4 @@ -# Distributed Training +# How to Run Distributed Training In this article, we explain how to run distributed Paddle training jobs on clusters. We will create the distributed version of the single-process training example, [recommendation](https://github.com/baidu/Paddle/tree/develop/demo/recommendation). diff --git a/doc/howto/cmd_argument/argument_outline.md b/doc/howto/cmd_parameter/arguments.md similarity index 100% rename from doc/howto/cmd_argument/argument_outline.md rename to doc/howto/cmd_parameter/arguments.md diff --git a/doc/howto/cmd_argument/detail_introduction.md b/doc/howto/cmd_parameter/detail_introduction.md similarity index 100% rename from doc/howto/cmd_argument/detail_introduction.md rename to doc/howto/cmd_parameter/detail_introduction.md diff --git a/doc/howto/cmd_argument/index.md b/doc/howto/cmd_parameter/index.md similarity index 53% rename from doc/howto/cmd_argument/index.md rename to doc/howto/cmd_parameter/index.md index 90472c44cb..48cf835de1 100644 --- a/doc/howto/cmd_argument/index.md +++ b/doc/howto/cmd_parameter/index.md @@ -1,5 +1,5 @@ -# Command Line Argument +# How to Set Command-line Parameters * [Use Case](use_case.md) -* [Argument Outline](argument_outline.md) +* [Arguments](arguments.md) * [Detailed Descriptions](detail_introduction.md) diff --git a/doc/howto/cmd_argument/use_case.md b/doc/howto/cmd_parameter/use_case.md similarity index 100% rename from doc/howto/cmd_argument/use_case.md rename to doc/howto/cmd_parameter/use_case.md diff --git a/doc/howto/contribute_to_paddle.md b/doc/howto/contribute_to_paddle.md index 1d03eb7362..d1f12c6ab2 100644 --- a/doc/howto/contribute_to_paddle.md +++ b/doc/howto/contribute_to_paddle.md @@ -1,4 +1,4 @@ -# Contribute Code +# How to Contribute Code We sincerely appreciate your contributions. You can use fork and pull request workflow to merge your code. diff --git a/doc/howto/deep_model/index.rst b/doc/howto/deep_model/index.rst new file mode 100644 index 0000000000..06ef443f62 --- /dev/null +++ b/doc/howto/deep_model/index.rst @@ -0,0 +1,7 @@ +How to Configure Deep Models +============================ + +.. toctree:: + :maxdepth: 1 + + rnn/rnn.rst diff --git a/doc/howto/algorithm/rnn/rnn.rst b/doc/howto/deep_model/rnn/rnn.rst similarity index 99% rename from doc/howto/algorithm/rnn/rnn.rst rename to doc/howto/deep_model/rnn/rnn.rst index 01d2caefb5..da29b8efad 100644 --- a/doc/howto/algorithm/rnn/rnn.rst +++ b/doc/howto/deep_model/rnn/rnn.rst @@ -42,7 +42,7 @@ Simple Gated Recurrent Neural Network Recurrent neural networks process a sequence at each time step sequentially. 
An example of the architecture of LSTM is listed below. -.. image:: ./bi_lstm.jpg +.. image:: ../../../tutorials/sentiment_analysis/bi_lstm.jpg :align: center Generally speaking, a recurrent network performs the following operations from :math:`t=1` to :math:`t=T`, or reversely from :math:`t=T` to :math:`t=1`. @@ -101,7 +101,7 @@ Sequence to Sequence Model with Attention ----------------------------------------- We will use the sequence to sequence model with attention as an example to demonstrate how you can configure complex recurrent neural network models. An illustration of the sequence to sequence model with attention is shown in the following figure. -.. image:: ./encoder-decoder-attention-model.png +.. image:: ../../../tutorials/text_generation/encoder-decoder-attention-model.png :align: center In this model, the source sequence :math:`S = \{s_1, \dots, s_T\}` is encoded with a bidirectional gated recurrent neural network. The hidden states of the bidirectional gated recurrent neural network :math:`H_S = \{H_1, \dots, H_T\}` are called the *encoder vector*. The decoder is a gated recurrent neural network. When decoding each token :math:`y_t`, the gated recurrent neural network generates a set of weights :math:`W_S^t = \{W_1^t, \dots, W_T^t\}`, which are used to compute a weighted sum of the encoder vector. The weighted sum of the encoder vector is utilized to condition the generation of the token :math:`y_t`. diff --git a/doc/howto/dev/index.rst b/doc/howto/dev/index.rst deleted file mode 100644 index 876c42e9db..0000000000 --- a/doc/howto/dev/index.rst +++ /dev/null @@ -1,9 +0,0 @@ -Development Guide -================= - -.. toctree:: - :maxdepth: 2 - - layer.md - new_layer/new_layer.rst - source/index.rst diff --git a/doc/howto/dev/layer.md b/doc/howto/dev/layer.md deleted file mode 100644 index 1ce0cc5829..0000000000 --- a/doc/howto/dev/layer.md +++ /dev/null @@ -1,5 +0,0 @@ -# Layer Documents - -* [Layer Python API](../../api/trainer_config_helpers/index.rst) -* [Layer Source Code](source/gserver/layers.rst) -* [Writing New Layers](new_layer/new_layer.rst) diff --git a/doc/howto/index.rst b/doc/howto/index.rst index ed8294b3c1..41877a64a5 100644 --- a/doc/howto/index.rst +++ b/doc/howto/index.rst @@ -1,12 +1,29 @@ -How to +HOW TO ======= +Usage +------- + .. toctree:: :maxdepth: 1 - cmd_argument/index.md + cmd_parameter/index.md + deep_model/index.rst cluster/cluster_train.md - algorithm/index.rst + +Development +------------ + +.. toctree:: + :maxdepth: 1 + + new_layer/index.rst + contribute_to_paddle.md + +Optimization +------------- + +.. toctree:: + :maxdepth: 1 + optimization/index.rst - dev/index.rst - contribute_to_paddle.md \ No newline at end of file diff --git a/doc/howto/dev/new_layer/FullyConnected.jpg b/doc/howto/new_layer/FullyConnected.jpg similarity index 100% rename from doc/howto/dev/new_layer/FullyConnected.jpg rename to doc/howto/new_layer/FullyConnected.jpg diff --git a/doc/howto/dev/new_layer/new_layer.rst b/doc/howto/new_layer/index.rst similarity index 99% rename from doc/howto/dev/new_layer/new_layer.rst rename to doc/howto/new_layer/index.rst index af8b76a307..922bda5b0d 100644 --- a/doc/howto/dev/new_layer/new_layer.rst +++ b/doc/howto/new_layer/index.rst @@ -1,6 +1,6 @@ -================== -Writing New Layers -================== +======================= +How to Write New Layers +======================= This tutorial will guide you to write customized layers in PaddlePaddle. 
We will utilize the fully connected layer as an example to guide you through the following steps for writing a new layer. diff --git a/doc/howto/optimization/index.rst b/doc/howto/optimization/index.rst index c9e87e0778..e2822a0098 100644 --- a/doc/howto/optimization/index.rst +++ b/doc/howto/optimization/index.rst @@ -1,5 +1,5 @@ -Performance Tuning -================== +How to Tune GPU Performance +=========================== .. toctree:: :maxdepth: 3 diff --git a/doc/howto/dev/source/api.rst b/doc/howto/source/api.rst similarity index 100% rename from doc/howto/dev/source/api.rst rename to doc/howto/source/api.rst diff --git a/doc/howto/dev/source/cuda/index.rst b/doc/howto/source/cuda/index.rst similarity index 100% rename from doc/howto/dev/source/cuda/index.rst rename to doc/howto/source/cuda/index.rst diff --git a/doc/howto/dev/source/cuda/matrix.rst b/doc/howto/source/cuda/matrix.rst similarity index 100% rename from doc/howto/dev/source/cuda/matrix.rst rename to doc/howto/source/cuda/matrix.rst diff --git a/doc/howto/dev/source/cuda/nn.rst b/doc/howto/source/cuda/nn.rst similarity index 100% rename from doc/howto/dev/source/cuda/nn.rst rename to doc/howto/source/cuda/nn.rst diff --git a/doc/howto/dev/source/cuda/utils.rst b/doc/howto/source/cuda/utils.rst similarity index 100% rename from doc/howto/dev/source/cuda/utils.rst rename to doc/howto/source/cuda/utils.rst diff --git a/doc/howto/dev/source/gserver/activations.rst b/doc/howto/source/gserver/activations.rst similarity index 100% rename from doc/howto/dev/source/gserver/activations.rst rename to doc/howto/source/gserver/activations.rst diff --git a/doc/howto/dev/source/gserver/dataproviders.rst b/doc/howto/source/gserver/dataproviders.rst similarity index 100% rename from doc/howto/dev/source/gserver/dataproviders.rst rename to doc/howto/source/gserver/dataproviders.rst diff --git a/doc/howto/dev/source/gserver/evaluators.rst b/doc/howto/source/gserver/evaluators.rst similarity index 100% rename from doc/howto/dev/source/gserver/evaluators.rst rename to doc/howto/source/gserver/evaluators.rst diff --git a/doc/howto/dev/source/gserver/gradientmachines.rst b/doc/howto/source/gserver/gradientmachines.rst similarity index 100% rename from doc/howto/dev/source/gserver/gradientmachines.rst rename to doc/howto/source/gserver/gradientmachines.rst diff --git a/doc/howto/dev/source/gserver/index.rst b/doc/howto/source/gserver/index.rst similarity index 100% rename from doc/howto/dev/source/gserver/index.rst rename to doc/howto/source/gserver/index.rst diff --git a/doc/howto/dev/source/gserver/layers.rst b/doc/howto/source/gserver/layers.rst similarity index 100% rename from doc/howto/dev/source/gserver/layers.rst rename to doc/howto/source/gserver/layers.rst diff --git a/doc/howto/dev/source/gserver/neworks.rst b/doc/howto/source/gserver/neworks.rst similarity index 100% rename from doc/howto/dev/source/gserver/neworks.rst rename to doc/howto/source/gserver/neworks.rst diff --git a/doc/howto/dev/source/index.rst b/doc/howto/source/index.rst similarity index 100% rename from doc/howto/dev/source/index.rst rename to doc/howto/source/index.rst diff --git a/doc/howto/dev/source/math/functions.rst b/doc/howto/source/math/functions.rst similarity index 100% rename from doc/howto/dev/source/math/functions.rst rename to doc/howto/source/math/functions.rst diff --git a/doc/howto/dev/source/math/index.rst b/doc/howto/source/math/index.rst similarity index 100% rename from doc/howto/dev/source/math/index.rst rename to doc/howto/source/math/index.rst 
diff --git a/doc/howto/dev/source/math/matrix.rst b/doc/howto/source/math/matrix.rst similarity index 100% rename from doc/howto/dev/source/math/matrix.rst rename to doc/howto/source/math/matrix.rst diff --git a/doc/howto/dev/source/math/utils.rst b/doc/howto/source/math/utils.rst similarity index 100% rename from doc/howto/dev/source/math/utils.rst rename to doc/howto/source/math/utils.rst diff --git a/doc/howto/dev/source/math/vector.rst b/doc/howto/source/math/vector.rst similarity index 100% rename from doc/howto/dev/source/math/vector.rst rename to doc/howto/source/math/vector.rst diff --git a/doc/howto/dev/source/parameter/index.rst b/doc/howto/source/parameter/index.rst similarity index 100% rename from doc/howto/dev/source/parameter/index.rst rename to doc/howto/source/parameter/index.rst diff --git a/doc/howto/dev/source/parameter/optimizer.rst b/doc/howto/source/parameter/optimizer.rst similarity index 100% rename from doc/howto/dev/source/parameter/optimizer.rst rename to doc/howto/source/parameter/optimizer.rst diff --git a/doc/howto/dev/source/parameter/parameter.rst b/doc/howto/source/parameter/parameter.rst similarity index 100% rename from doc/howto/dev/source/parameter/parameter.rst rename to doc/howto/source/parameter/parameter.rst diff --git a/doc/howto/dev/source/parameter/updater.rst b/doc/howto/source/parameter/updater.rst similarity index 100% rename from doc/howto/dev/source/parameter/updater.rst rename to doc/howto/source/parameter/updater.rst diff --git a/doc/howto/dev/source/pserver/client.rst b/doc/howto/source/pserver/client.rst similarity index 100% rename from doc/howto/dev/source/pserver/client.rst rename to doc/howto/source/pserver/client.rst diff --git a/doc/howto/dev/source/pserver/index.rst b/doc/howto/source/pserver/index.rst similarity index 100% rename from doc/howto/dev/source/pserver/index.rst rename to doc/howto/source/pserver/index.rst diff --git a/doc/howto/dev/source/pserver/network.rst b/doc/howto/source/pserver/network.rst similarity index 100% rename from doc/howto/dev/source/pserver/network.rst rename to doc/howto/source/pserver/network.rst diff --git a/doc/howto/dev/source/pserver/server.rst b/doc/howto/source/pserver/server.rst similarity index 100% rename from doc/howto/dev/source/pserver/server.rst rename to doc/howto/source/pserver/server.rst diff --git a/doc/howto/dev/source/trainer.rst b/doc/howto/source/trainer.rst similarity index 100% rename from doc/howto/dev/source/trainer.rst rename to doc/howto/source/trainer.rst diff --git a/doc/howto/dev/source/utils/customStackTrace.rst b/doc/howto/source/utils/customStackTrace.rst similarity index 100% rename from doc/howto/dev/source/utils/customStackTrace.rst rename to doc/howto/source/utils/customStackTrace.rst diff --git a/doc/howto/dev/source/utils/enum.rst b/doc/howto/source/utils/enum.rst similarity index 100% rename from doc/howto/dev/source/utils/enum.rst rename to doc/howto/source/utils/enum.rst diff --git a/doc/howto/dev/source/utils/index.rst b/doc/howto/source/utils/index.rst similarity index 100% rename from doc/howto/dev/source/utils/index.rst rename to doc/howto/source/utils/index.rst diff --git a/doc/howto/dev/source/utils/lock.rst b/doc/howto/source/utils/lock.rst similarity index 100% rename from doc/howto/dev/source/utils/lock.rst rename to doc/howto/source/utils/lock.rst diff --git a/doc/howto/dev/source/utils/queue.rst b/doc/howto/source/utils/queue.rst similarity index 100% rename from doc/howto/dev/source/utils/queue.rst rename to doc/howto/source/utils/queue.rst 
diff --git a/doc/howto/dev/source/utils/thread.rst b/doc/howto/source/utils/thread.rst similarity index 100% rename from doc/howto/dev/source/utils/thread.rst rename to doc/howto/source/utils/thread.rst diff --git a/doc/index.rst b/doc/index.rst index 36a410881a..3555da1dfc 100644 --- a/doc/index.rst +++ b/doc/index.rst @@ -4,7 +4,7 @@ PaddlePaddle Documentation .. toctree:: :maxdepth: 1 - introduction/index.rst + getstarted/index.rst tutorials/index.md howto/index.rst api/index.rst diff --git a/doc/tutorials/index.md b/doc/tutorials/index.md index c845ca229c..ebf5397391 100644 --- a/doc/tutorials/index.md +++ b/doc/tutorials/index.md @@ -1,4 +1,4 @@ -# Tutorials +# TUTORIALS There are several examples and demos here. ## Image From d118609582c1bef6b90a973ee9a0e1b94c2801cf Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Tue, 29 Nov 2016 16:26:54 +0800 Subject: [PATCH 37/37] Refine documentation in RELEASE.md --- RELEASE.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/RELEASE.md b/RELEASE.md index b2684f5f5d..a8a245ab44 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -2,14 +2,13 @@ ## New Features: -* Add some layers to Paddle +* New Layers * bilinear interpolation layer. * spatial pyramid-pool layer. * de-convolution layer. * maxout layer. * Support rectangle padding, stride, window and input for Pooling Operation. * Add --job=time in trainer, which can be used to print time info without compiler option -WITH_TIMER=ON. -* Support Mac OS X Sierra by source code. * Expose cost_weight/nce_layer in `trainer_config_helpers` * Add FAQ, concepts, h-rnn docs. * Add Bidi-LSTM and DB-LSTM to quick start demo @alvations ## Improvements -* Add travis-ci for macos. Enable swig unittest in travis. Skip travis-ci when only docs are changed. +* Add Travis-CI for Mac OS X. Enable swig unittest in Travis-CI. Skip Travis-CI when only docs are changed. * Add code coverage tools. * Refine convolution layer to speedup and reduce GPU memory. * Speed up PyDataProvider2
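A note on the `deepSwap` operation added in PATCH 31-33: `DEFINE_MATRIX_BINARY_OP(DeepSwap, T tmp = a; a = b; b = tmp)` generates an element-wise functor that `applyBinary` then runs on every pair of corresponding elements of the two matrices, so the swap is "deep" (the stored values move) rather than a shallow swap of matrix handles. The following is a minimal, self-contained sketch of that mechanism only; the functor layout is what the macro roughly expands to, and `elementwiseApply` is a hypothetical stand-in for `BaseMatrixT<T>::applyBinary`, not PaddlePaddle's actual API.

```cpp
#include <cassert>
#include <cstddef>

namespace binary {
// Roughly what DEFINE_MATRIX_BINARY_OP(DeepSwap, T tmp = a; a = b; b = tmp)
// expands to: a functor that runs the macro body on one element pair.
template <class T>
class DeepSwap {
public:
  void operator()(T& a, T& b) const {
    T tmp = a;  // the macro body: T tmp = a; a = b; b = tmp;
    a = b;
    b = tmp;
  }
};
}  // namespace binary

// Hypothetical stand-in for BaseMatrixT<T>::applyBinary: apply the functor
// to every pair of corresponding elements in two equally sized buffers.
template <class T, class Op>
void elementwiseApply(Op op, T* a, T* b, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i) {
    op(a[i], b[i]);
  }
}

int main() {
  float a[3] = {1.f, 2.f, 3.f};
  float b[3] = {4.f, 5.f, 6.f};
  // Analogous in spirit to cpuA->deepSwap(*cpuB) in the unit test above.
  elementwiseApply(binary::DeepSwap<float>(), a, b, 3);
  assert(a[0] == 4.f && a[1] == 5.f && a[2] == 6.f);
  assert(b[0] == 1.f && b[1] == 2.f && b[2] == 3.f);
  return 0;
}
```

This is also why the unit test in PATCH 32 checks `cpuA` against `cpuCopyB` and `cpuB` against `cpuCopyA` after the call: the element storage of the two matrices has been exchanged.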
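Similarly, the `ver2num` shell function touched in PATCH 34 makes version strings comparable by zero-padding each dot-separated component with `printf "%03d..."` into a fixed-width value. A hedged C++ sketch of the same padding idea follows; the `versionKey` function, its five-field default, and the buffer sizes are illustrative assumptions, not part of the PaddlePaddle script.

```cpp
#include <cstdio>
#include <cstdlib>
#include <sstream>
#include <string>

// Pad each dot-separated numeric component to three digits so version
// strings can be compared field by field, mirroring ver2num's
// printf "%03d%03d%03d%03d%03d" encoding.
std::string versionKey(const std::string& version, int fields = 5) {
  std::istringstream in(version);
  std::string part, key;
  int used = 0;
  while (used < fields && std::getline(in, part, '.')) {
    char buf[16];
    std::snprintf(buf, sizeof(buf), "%03d", std::atoi(part.c_str()));
    key += buf;
    ++used;
  }
  for (; used < fields; ++used) key += "000";  // pad missing components
  return key;
}

int main() {
  // "003000000..." compares greater than "002006001...", so a plain
  // lexicographic comparison of the padded keys orders versions correctly.
  std::printf("%d\n", versionKey("3.0.0") >= versionKey("2.6.1"));  // prints 1
  return 0;
}
```

The same fixed-width comparison is what the `VERSION_GREATER`/`VERSION_EQUAL` check added to `proto/CMakeLists.txt` in PATCH 28 gets for free from CMake when deciding whether to pass `-Dproto3` to m4.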