move trainer

Branch: revert-12383-port_py3_syntax
Author: Xin Pan (7 years ago)
parent da158c2279
commit aa2f76fd9b

@@ -52,7 +52,7 @@ In `trainer_internal.cpp:L93 trainOneBatch`:
During the actual network forward and backward passes, at the beginning of each batch the trainer tries to download one row of data from the pserver.
-In `trainer/RemoteParameterUpdater.cpp`: `parameterUpdater_->getParametersRemote();`:
+In `legacy/trainer/RemoteParameterUpdater.cpp`: `parameterUpdater_->getParametersRemote();`:
```c++
if (fullSize) {
```

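The hunk above truncates the body of `getParametersRemote`, so here is a minimal, hedged sketch of the per-batch fetch the surrounding text describes. `PServerClient`, `getRow`, and `RemoteUpdaterSketch` are hypothetical names, not the real PaddlePaddle API; the actual logic lives in `legacy/trainer/RemoteParameterUpdater.cpp`.

```c++
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the RPC client that talks to the pserver.
struct PServerClient {
  // Pretend remote call: overwrite `dst` with the current values of one row.
  void getRow(int rowId, std::vector<float>* dst) {
    (void)rowId;
    std::fill(dst->begin(), dst->end(), 0.0f);  // stand-in for a network read
  }
};

// Sketch of what parameterUpdater_->getParametersRemote() conceptually does
// at the start of each batch: refresh local parameter values from the pserver.
class RemoteUpdaterSketch {
 public:
  RemoteUpdaterSketch(std::size_t numRows, std::size_t rowLen)
      : rows_(numRows, std::vector<float>(rowLen)) {}

  void getParametersRemote(bool fullSize, int rowOfThisBatch) {
    if (fullSize) {
      // Download every row when the caller needs the full parameter.
      for (std::size_t i = 0; i < rows_.size(); ++i) {
        client_.getRow(static_cast<int>(i), &rows_[i]);
      }
    } else {
      // Otherwise fetch only the one row this batch touches.
      client_.getRow(rowOfThisBatch, &rows_[rowOfThisBatch]);
    }
  }

 private:
  PServerClient client_;
  std::vector<std::vector<float>> rows_;
};
```

The real updater also deals with sparse updates and synchronization flags; this sketch keeps only the `fullSize` branch that the quoted snippet opens.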
@@ -18,20 +18,20 @@ Figure 1. PaddlePaddle on IA
The detailed completion status can be found [here](https://github.com/PaddlePaddle/Paddle/projects/21).
## Contents
- [Overview](#overview)
- [Actions](#actions)
- [CMake](#cmake)
- [Matrix](#matrix)
- [Layers](#layers)
- [Activations](#activations)
- [Parameters](#parameters)
- [Gradients](#gradients)
- [Unit Tests](#unit-tests)
- [Python API](#python-api)
- [Benchmarking](#benchmarking)
- [Others](#others)
- [Design Concerns](#design-concerns)
## Overview
@@ -218,20 +218,20 @@ if use_mkldnn
We have summarized a few points that need special attention:
1. Use **deviceId_**. To add as few variables or functions as possible to the parent class Layer, we decided to reuse the existing `deviceId_` variable to distinguish a layer's properties, defining `-2` as the device ID specific to `MKLDNNLayer`.
2. Override the parent class Layer's **init** function and set `deviceId_` to `-2`, indicating that the layer runs in the MKL-DNN environment (see the sketch after this list).
3. Create `MKLDNNBase` to define classes and functions beyond those related to layers and memory, including `MKLDNNStream` and `CPUEngine` used by MKL-DNN, and possibly `FPGAEngine` and others in the future.
4. If an MKL-DNN layer is followed by a CPU device, `output_.value` shares memory with `extOutVal_` and the data format is `NCHW`, so the next CPU device receives correct data. When ordinary CPU layers are present, the formats of `extOutVal_` and `extOutGrad_` are always `NCHW` or `NC`.
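To make points 1 and 2 concrete, here is a rough sketch of the device-ID trick. The `Layer` base class below is a simplified stand-in, not the real PaddlePaddle interface; only the reuse of `deviceId_` with the `-2` sentinel follows the design notes.

```c++
#include <iostream>

// Simplified stand-in for the real Layer hierarchy; only deviceId_ and
// init() matter for this sketch.
class Layer {
 public:
  virtual ~Layer() = default;
  virtual bool init() {
    deviceId_ = 0;  // ordinary CPU device by default
    return true;
  }
  int deviceId() const { return deviceId_; }

 protected:
  int deviceId_ = 0;
};

// Point 1: MKL-DNN layers reuse the existing deviceId_ member with the
// sentinel value -2 instead of adding new fields to Layer.
class MKLDNNLayer : public Layer {
 public:
  static constexpr int kMKLDNNDeviceId = -2;

  // Point 2: the init override marks this layer as running on MKL-DNN.
  bool init() override {
    if (!Layer::init()) return false;
    deviceId_ = kMKLDNNDeviceId;
    return true;
  }
};

int main() {
  MKLDNNLayer layer;
  layer.init();
  std::cout << "deviceId = " << layer.deviceId() << '\n';  // prints -2
}
```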
## References
1. The [MKL small library](https://github.com/01org/mkl-dnn#linking-your-application) is a subset of [Intel MKL](https://software.intel.com/en-us/mkl). It mainly contains the mathematical primitives and operations related to deep learning, and is generally updated together with each new MKL-DNN [release](https://github.com/01org/mkl-dnn/releases).
2. [MKL-DNN System Requirements](https://github.com/01org/mkl-dnn#system-requirements). Currently, PaddlePaddle uses MKL-DNN only on machines that support the AVX2 instruction set or above.
3. The [original proposal](https://github.com/PaddlePaddle/Paddle/pull/3096) would have introduced information about the **nextLayer**. However, in PaddlePaddle, neither the layers before the refactoring nor the ops after it are meant to know anything about the next layer/op.
4. MKL-DNN's high-performance formats differ from PaddlePaddle's original `NCHW` (the cuDNN part of PaddlePaddle also uses `NCHW`, so this problem does not arise there). A conversion method therefore has to be introduced, and the format should be converted only when necessary, so that MKL-DNN's performance can be fully exploited; a sketch of this lazy conversion follows below.
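As a schematic illustration of point 4's "convert only when necessary", the sketch below passes a buffer through untouched when it is already `NCHW` and reorders it otherwise. `Format`, `TensorBuf`, `reorderToNCHW`, and `ensureNCHW` are illustrative inventions; a real implementation would invoke an MKL-DNN reorder primitive rather than copying data by hand.

```c++
#include <vector>

// Illustrative layouts: PaddlePaddle's NCHW vs. an opaque MKL-DNN format.
enum class Format { kNCHW, kMKLDNNInternal };

struct TensorBuf {
  Format format;
  std::vector<float> data;
};

// Stand-in for an MKL-DNN reorder primitive; here it just relabels a copy.
TensorBuf reorderToNCHW(const TensorBuf& src) {
  return TensorBuf{Format::kNCHW, src.data};
}

// Convert only when the buffer is still in the MKL-DNN internal layout;
// when it is already NCHW the caller keeps sharing the same memory, which
// is how output_.value and extOutVal_ can alias each other.
const TensorBuf& ensureNCHW(const TensorBuf& src, TensorBuf* scratch) {
  if (src.format == Format::kNCHW) return src;  // no conversion, no copy
  *scratch = reorderToNCHW(src);
  return *scratch;
}
```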

@@ -339,7 +339,7 @@ If you are creating a new file for the test, such as :code:`paddle/legacy/gserve
Implement Python Wrapper
========================
-Implementing a Python wrapper allows us to use the added layer in configuration files. All the Python wrappers are in the file :code:`python/paddle/trainer/config_parser.py`. An example of the Python wrapper for the fully connected layer is listed below. It has the following steps:
+Implementing a Python wrapper allows us to use the added layer in configuration files. All the Python wrappers are in the file :code:`python/paddle/legacy/trainer/config_parser.py`. An example of the Python wrapper for the fully connected layer is listed below. It has the following steps:
- Use :code:`@config_layer('fc')` as the decorator for all the Python wrapper classes. :code:`fc` is the identifier of the layer.
- Implement the :code:`__init__` constructor function.

@@ -10,7 +10,7 @@ if(NOT WITH_FLUID_ONLY)
add_subdirectory(legacy/capi)
else()
add_subdirectory(legacy/pserver)
-add_subdirectory(trainer)
+add_subdirectory(legacy/trainer)
add_subdirectory(scripts)
if(WITH_C_API)

@@ -14,7 +14,7 @@ limitations under the License. */
#include "PaddleAPI.h"
#include "PaddleAPIPrivate.h"
#include "paddle/trainer/Trainer.h"
#include "paddle/legacy/trainer/Trainer.h"
struct ParameterConfigPrivate {
paddle::ParameterPtr parameter;

@@ -17,7 +17,7 @@ limitations under the License. */
#include "paddle/legacy/gserver/evaluators/Evaluator.h"
#include "paddle/legacy/gserver/gradientmachines/GradientMachine.h"
#include "paddle/legacy/parameter/ParameterUpdaterBase.h"
#include "paddle/trainer/TrainerConfigHelper.h"
#include "paddle/legacy/trainer/TrainerConfigHelper.h"
struct GradientMachinePrivate {
std::shared_ptr<paddle::GradientMachine> machine;

@@ -16,10 +16,10 @@ limitations under the License. */
#include "PaddleAPIPrivate.h"
#ifndef PADDLE_WITHOUT_GOLANG
#include "paddle/trainer/NewRemoteParameterUpdater.h"
#include "paddle/legacy/trainer/NewRemoteParameterUpdater.h"
#endif
#include "paddle/trainer/RemoteParameterUpdater.h"
#include "paddle/trainer/ThreadParameterUpdater.h"
#include "paddle/legacy/trainer/RemoteParameterUpdater.h"
#include "paddle/legacy/trainer/ThreadParameterUpdater.h"
ParameterUpdater::ParameterUpdater() : m(new ParameterUpdaterPrivate()) {}

@@ -20,9 +20,9 @@ limitations under the License. */
#include <memory>
#include "paddle/legacy/gserver/gradientmachines/NeuralNetwork.h"
#include "paddle/trainer/ParamUtil.h"
#include "paddle/trainer/Trainer.h"
#include "paddle/trainer/TrainerInternal.h"
#include "paddle/legacy/trainer/ParamUtil.h"
#include "paddle/legacy/trainer/Trainer.h"
#include "paddle/legacy/trainer/TrainerInternal.h"
#include "paddle/utils/Flags.h"
using paddle::real;

@@ -18,7 +18,7 @@ limitations under the License. */
#include <vector>
#include "capi_private.h"
#include "main.h"
#include "paddle/trainer/TrainerConfigHelper.h"
#include "paddle/legacy/trainer/TrainerConfigHelper.h"
#include "paddle/utils/Excepts.h"
#include "paddle/utils/PythonUtil.h"

@@ -14,7 +14,7 @@ limitations under the License. */
#include <gtest/gtest.h>
#include <paddle/legacy/gserver/gradientmachines/GradientMachine.h>
-#include <paddle/trainer/TrainerConfigHelper.h>
+#include <paddle/legacy/trainer/TrainerConfigHelper.h>
#include <stdlib.h>
#include <string.h>
#include <type_traits>

@@ -15,7 +15,7 @@ limitations under the License. */
#include "MKLDNNTester.h"
#include "paddle/legacy/gserver/layers/MKLDNNBase.h"
#include "paddle/legacy/gserver/layers/MKLDNNLayer.h"
#include "paddle/trainer/Trainer.h"
#include "paddle/legacy/trainer/Trainer.h"
namespace paddle {

@@ -14,7 +14,7 @@ limitations under the License. */
#include <paddle/utils/PythonUtil.h>
#include "paddle/trainer/Trainer.h"
#include "paddle/legacy/trainer/Trainer.h"
#include <gtest/gtest.h>
#include <paddle/legacy/pserver/ParameterServer2.h>

@@ -17,7 +17,7 @@ limitations under the License. */
#include <algorithm>
#include <cstdlib>
#include "paddle/trainer/Trainer.h"
#include "paddle/legacy/trainer/Trainer.h"
using namespace paddle; // NOLINT
using namespace std; // NOLINT

@@ -15,8 +15,8 @@ limitations under the License. */
#include <gtest/gtest.h>
#include <vector>
#include "ModelConfig.pb.h"
#include "paddle/legacy/trainer/Trainer.h"
#include "paddle/testing/TestUtil.h"
#include "paddle/trainer/Trainer.h"
using namespace paddle; // NOLINT
using namespace std; // NOLINT

@@ -18,8 +18,8 @@ limitations under the License. */
#include <algorithm>
#include <cstdlib>
#include "paddle/legacy/trainer/Trainer.h"
#include "paddle/testing/TestUtil.h"
#include "paddle/trainer/Trainer.h"
#include "paddle/utils/Stat.h"
using namespace paddle; // NOLINT

@@ -15,8 +15,8 @@ limitations under the License. */
#include <gtest/gtest.h>
#include <paddle/legacy/gserver/gradientmachines/GradientMachine.h>
#include <paddle/legacy/parameter/ParameterUpdateFunctions.h>
-#include <paddle/trainer/Trainer.h>
-#include <paddle/trainer/TrainerInternal.h>
+#include <paddle/legacy/trainer/Trainer.h>
+#include <paddle/legacy/trainer/TrainerInternal.h>
#include <paddle/utils/PythonUtil.h>
#include <paddle/utils/Util.h>
#include <paddle/utils/Version.h>

Some files were not shown because too many files have changed in this diff.
