commit
3db3a1066b
@ -0,0 +1,216 @@
# Design Doc: Python API

Due to the refactoring of the PaddlePaddle core, we need Python classes to construct corresponding protobuf messages that describe a DL program.

| Python classes | Protobuf messages |
| --- | --- |
| Program | ProgramDesc |
| Block | BlockDesc |
| Operator | OpDesc |
| Variable | VarDesc |

Please be aware that these Python classes need to maintain some construction-time information, which is not part of the protobuf messages.

## Core Concepts

### Program

A `ProgramDesc` describes a [DL program](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/program.md), which is composed of an array of `BlockDesc`s. The `BlockDesc`s in a `ProgramDesc` can have a tree-like hierarchical structure. However, the `ProgramDesc` only stores a flattened array of `BlockDesc`s; a `BlockDesc` refers to its parent block by its index in the array. For example, operators in the step block of an RNN operator need to be able to access variables in its ancestor blocks.

Whenever we create a block, we need to set its parent block to the current block, hence the Python class `Program` needs to maintain a data member `current_block_idx` that records the index of the current block.
```python
class Program(object):
    def __init__(self):
        self.proto = core.NewProgram()   # a C++ ProgramDesc pointer.
        self.blocks = [Block(self, -1)]  # the global block
        self.current_block_idx = 0       # initialized to the global block

    def global_block(self):
        return self.blocks[0]

    def current_block(self):
        return self.blocks[self.current_block_idx]

    def rollback(self):
        self.current_block_idx = self.current_block().parent_idx

    def create_block(self):
        new_block_idx = len(self.blocks)
        self.blocks.append(Block(self, self.current_block_idx))
        self.current_block_idx = new_block_idx
        return self.current_block()
```
`Program` is an accessor to the protobuf message `ProgramDesc`, which is created in C++ space because the `InferShape` function is implemented in C++. `InferShape` manipulates `VarDesc` messages, which are members of `BlockDesc`, which in turn is a member of `ProgramDesc`.

`Program` creates the first block as the global block in its constructor. All parameters and their initializer operators are in the global block.
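
For illustration, here is a minimal usage sketch (not part of the proposed API itself) showing how `create_block` and `rollback` maintain the block hierarchy:

```python
# Hypothetical sketch: build a nested (step) block, then return to its parent.
prog = Program()

parent = prog.current_block()     # the global block, index 0
step_block = prog.create_block()  # its parent_idx points to the global block

# ... append the RNN step operators into step_block here ...

prog.rollback()                   # the current block is the global block again
assert prog.current_block() is parent
```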

### Block

A [Block](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/block.md) includes

1. a map from variable names to an instance of the Python `Variable` class, and
1. a list of `Operator` instances.
```python
class Block(object):
    def __init__(self, program, parent_idx):
        self.proto = core.NewBlock(program.proto)
        self.program = program
        self.vars = {}   # map from variable name to Variable
        self.ops = []    # list of Operator instances
        self.parent_idx = parent_idx

    def create_var(self, ...):
        return Variable(self, ...)

    def _create_global_var(self, ...):
        return self.program.global_block().create_var(...)

    def create_parameter(self, name, ...):
        # Parameter is a subclass of variable. See Parameter section for details.
        self.vars[name] = Parameter(self._create_global_var(...), ...)
        return self.vars[name]

    def append_operator(self, ...):
        op = Operator(self, ...)
        self.ops.append(op)
        return op

    def prepend_operator(self, ...):  # Parameter's ctor prepends initialize operators.
        op = Operator(self, ...)
        self.ops.insert(0, op)
        return op
```
`create_parameter` is necessary because parameters are global variables, defined in the global block, but can be created in some sub-blocks, for example by an FC layer in the step block of an RNN operator.

`prepend_operator` is necessary because the constructor of `Parameter` needs to create the initialize (or load) operator of the parameter, and would like to put it in the *preamble* of the global block.
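
The following hypothetical sketch illustrates the effect (the parameter name "fc.w" is only for illustration):

```python
# Hypothetical sketch: a parameter created from inside a sub-block still lives
# in the global block, and its initialize operator is prepended there.
prog = Program()
step_block = prog.create_block()              # e.g. the step block of an RNN op
w = step_block.create_parameter("fc.w", ...)  # the VarDesc is created in prog.global_block()
```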
### Operator

The `Operator` class fills in the `OpDesc` message and calls the C++ function `InferShape` to infer the output shapes from the input shapes.

```python
class Operator(object):
    def __init__(self,
                 block,    # Block
                 type,     # string
                 inputs,   # dict<string, Variable>
                 outputs,  # dict<string, Variable>
                 attrs     # dict<string, Any>
                 ):
        self.proto = core.NewOpDesc(block.proto, type, inputs, outputs, attrs)
        core.infer_shape(self.proto, inputs, outputs)

    def type(self):
        return self.proto.type()
```

`Operator` creates the `OpDesc` message in C++ space, so that it can call the `InferShape` function, which is in C++.
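
A hypothetical construction of a single operator might look as follows (the operator type "mul" and the variables `x`, `y`, and `out` are assumptions made for this sketch):

```python
# Hypothetical sketch: fill an OpDesc for a "mul" operator; the C++ InferShape
# call derives the shape of `out` from the shapes of `x` and `y`.
block = prog.current_block()
x = block.create_var(shape=(32, 784))
y = block.create_var(shape=(784, 100))
out = block.create_var(shape=None)  # shape to be inferred
op = Operator(block, "mul", inputs={"X": x, "Y": y}, outputs={"Out": out}, attrs={})
```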

### Variable

Operators take Variables as their inputs and outputs.
```python
class Variable(object):
    def __init__(self,
                 block,            # Block
                 shape,            # tuple
                 name=None,        # string
                 dtype="float32",  # string
                 lod_level=None    # int
                 ):
        if name is None:
            name = unique_name_generator()
        self.name = name
        self.block = block
        self.proto = core.NewVarDesc(block.proto, name, shape, lod_level)
        self.writer = None
```

Please be aware of `self.writer`, which tracks the operator that creates the variable. Although more than one operator may write a variable, in Python space each write to a variable is represented by a new `Variable` instance. This is guaranteed by the fact that **`core.NewVarDesc` must NOT create a new `VarDesc` message if its name already exists in the specified block**.

### Parameter

A parameter is a global variable with an initializer (or load) operator.

```python
class Parameter(Variable):
    def __init__(self,
                 block,            # Block
                 shape,            # tuple
                 name=None,        # string
                 dtype="float32",  # string
                 lod_level=None,   # int
                 trainable=True,   # bool
                 initialize_op_attrs=None,
                 optimize_op_attrs=None):
        super(Parameter, self).__init__(block, shape, name, dtype, lod_level)
        self.trainable = trainable
        self.optimize_op_attrs = optimize_op_attrs
        block.prepend_operator(initialize_op_attrs['type'],  # operator type, string
                               None,                         # no inputs
                               self,                         # output is the parameter
                               initialize_op_attrs)
```

When users create a parameter, they can call

```python
program.create_parameter(
    ...,
    init_attr={
        "type": "uniform_random",
        "min": -1.0,
        "max": 1.0,
    })
```

In the above example, `init_attr.type` names an initialize operator. It can also name the load operator:

```python
init_attr={
    "type": "load",
    "filename": "something.numpy",
}
```

`optimize_op_attrs` is not in the `VarDesc` message, but is kept in the Python instance, as it will be used in the Python space when creating the optimize operator's `OpDesc`, and will be in the `OpDesc` message.
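
For illustration, a hypothetical optimizer routine might later consume these attributes when it emits the optimize operator (the names below are assumptions made for this sketch):

```python
# Hypothetical sketch: use the attributes stored on the Parameter instance when
# appending its optimize operator to the global block.
def append_optimize_op(param, grad):
    attrs = dict(param.optimize_op_attrs)  # e.g. {"type": "sgd", "learning_rate": 0.01}
    block = param.block.program.global_block()
    block.append_operator(attrs.pop("type"),
                          inputs={"Param": param, "Grad": grad},
                          outputs={"ParamOut": param},
                          attrs=attrs)
```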
## Layer Functions

A layer is a Python function that creates some operators and variables. Layers simplify the work of application programmers.

### Data Layer

```python
def data_layer(name, type, column_name):
    block = the_current_program.global_block()
    var = block.create_var(
        name=name,
        shape=[None] + type.dims(),
        dtype=type.dtype)
    block.prepend_operator(type="Feed",
                           inputs=None,
                           outputs=[var],
                           attrs={"column_name": column_name})
    return var
```

The input to the feed operator is a special variable in the global scope, which is the output of [Python readers](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/reader/README.md).

### FC Layer

```python
def fc_layer(input, size, ...):
    block = program.current_block()
    w = block.create_parameter(...)
    b = block.create_parameter(...)
    out = block.create_var()
    op = block.append_operator("FC", X=input, W=w, b=b, out=out)
    out.writer = op
    return out
```
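
As a hypothetical end-to-end sketch, layer functions compose naturally (the helper `dense_vector` and the column name are assumptions made for illustration):

```python
# Hypothetical sketch: describe a small network by composing layer functions.
image = data_layer(name="image", type=dense_vector(784), column_name="pixels")
hidden = fc_layer(image, size=256)
predict = fc_layer(hidden, size=10)
```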
@ -0,0 +1,180 @@
# Design Doc: Session

## Abstract

The *session* object encapsulates the environment in which the computation graph is executed.

We will have the *local* session and the *remote* session; they offer the same [interface](#interface). The local session encapsulates the local runtime environment and the remote session encapsulates the cluster runtime environment.

The local runtime environment contains:

1. computation device (e.g., CPU, GPU) handles, and
1. the [scope](../scope.md) which holds all variables.

The remote runtime environment contains:

1. computation devices (e.g., CPUs and GPUs on nodes 0, 1) in a cluster, and
1. the distributed [scope](../scope.md) in a cluster which holds all variables.

The user can create a remote session on Paddle Cloud and evaluate the computation graph with it. In this way, the user can control the remote computation resources in a cluster from his local computer.

## Background

The current design has an implicit global session in which `paddle.eval()` is executed. The pain point is:

Since the user is not able to explicitly switch between runtime environments, the user cannot run a topology in two independent environments.

For example, in reinforcement learning, the user may want to have a stale model for inference and a fresh model for training, and only replace the stale model with the fresh model periodically.

Furthermore, we have no concept that encapsulates a remote environment that executes a computation graph.

We need the session object to address the above issues.
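
As a sketch of how explicit sessions would solve the reinforcement-learning example (assuming the `paddle.session()` and `eval()` interface described below; `policy`, `optimizer`, `state`, and `sync_parameters` are hypothetical names):

```python
# Hypothetical sketch: two independent runtime environments for the same topology.
infer_sess = paddle.session()   # holds the stale model used for inference
train_sess = paddle.session()   # holds the fresh model being trained

for step in range(10000):
    train_sess.eval(optimizer)                   # update the fresh parameters
    if step % 100 == 0:
        sync_parameters(train_sess, infer_sess)  # hypothetical helper: copy fresh -> stale
    action = infer_sess.eval(policy, feed_dict={"state": state})
```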
## Session

A session is an object that owns the runtime environment. All computations are executed through `session.eval()`.

### Interface

```python
eval(
    targets,
    feed_dict=None,
)
```

Evaluates the target Operations or Variables in `targets`.

- *targets*: the evaluation targets. Can be a single Operation or Variable, or a list with Operations or Variables as elements. The value returned by `eval()` has the same shape as the `targets` argument.

  The PaddlePaddle program is represented by the [ProgramDesc](../design/program.md); `eval()` will infer the ProgramDesc from the given targets and run the PaddlePaddle program. Please see [this graph](./distributed_architecture.md#local-training-architecture) for a detailed illustration of the local session and [this graph](./distributed_architecture.md#distributed-training-architecture) for a detailed illustration of the remote session.

- *feed_dict*: a dictionary that contains the tensors which override the edges of the computation graph.

  Not only can `feed_dict` provide the input data, it can also override any operator's input:

  ```python
  a = pd.constant(2.0, name="a")
  b = pd.variable(name="b")
  c = pd.mul(a, b)
  sess.eval(targets=c, feed_dict={"b": 3.0})  # returns 6.0
  ```

```python
close()
```

Closes the session and releases the scope that the session owns.

### Create a Local Session

```python
session(
    devices=None
)
```

Creates a new session. One session owns one global scope, so creating multiple sessions will create different scopes.

- *devices*: a single `string` or a list of `string` of device names; the corresponding devices will be the computation devices for `eval()`. If not specified, all available devices (e.g., all GPUs) will be used. The user doesn't need to specify the CPU device since it will always be used. Multiple sessions can use the same device.

#### Example

```Python
a = paddle.constant(1.0)
b = paddle.constant(2.0)
c = a + b
sess = paddle.session(devices=["gpu:0", "gpu:1", "fpga:0"])
sess.eval(c)
sess.close()
```

### Create a Remote Session

```python
create_cloud_job(
    name,
    num_trainer,
    mem_per_trainer,
    gpu_per_trainer,
    cpu_per_trainer,
    num_ps,
    mem_per_ps,
    cpu_per_ps,
)
```

Creates a Paddle Cloud job. Fails if the job name exists.

```python
get_cloud_job(
    name
)
```

Gets a Paddle Cloud job.

```python
remote_session(
    job
)
```

- *job*: the Paddle Cloud job.

#### Example

```Python
reader = paddle.reader.recordio("/pfs/home/peter/mnist-train-*") # data stored on Paddle Cloud
image = reader.column(0)
label = reader.column(1)
fc1 = paddle.op.fc(image, size=256, act="sigmoid")
fc2 = paddle.op.fc(fc1, size=10, act="softmax")
cost = paddle.op.cross_entropy(fc2, label)
opt = paddle.optimizer.sgd(cost)

job = paddle.create_cloud_job("test", 3, "1G", 1, 1, 2, "1G", 1)
sess = paddle.remote_session(job)
for i in range(1000):
    sess.eval(opt)
sess.close()
```
@ -0,0 +1,90 @@
# Design Doc: Gradient Operators Registration

## The Problem Posed

In our current operator registration mechanism, for each operator, the programmer should register a *gradient operator creator* function, which takes a C++ operator instance and returns the corresponding gradient operator instance.

However, as we decided to separate the *compilation* and *execution* of DL models, we need to reshape the creator to take a protobuf `OpDesc` message and return the corresponding gradient `OpDesc` message(s).

More than that, the new registration mechanism needs to support the fact that an operator's gradient computation might be a composition of operators.

## Current Implementation

`OpInfo`s are stored in an associative map whose key is the operator type. The `grad_op_type` field indicates the associated gradient operator type. An operator can create its gradient operator by looking up the `OpInfo::creator_` of the gradient operator type. The pseudo code is:

```cpp
struct OpInfo {
  std::function<OperatorBase*(...)> creator_;
  std::string grad_op_type_;
  ...
};

map<string, OpInfo> OpInfoMap;

OperatorBase* CreateGradientOperator(const OperatorBase& op) {
  return OpInfoMap.at(op.Type()).creator_(...);
}
```

## Proposed Solution

The mapping relationship between an operator and its gradient operators is a function. The interface of that function is:

```cpp
// (OpDesc) --> vector<OpDesc>
std::function<std::vector<OpDescBind>(const OpDescBind&)>;
```

The function takes an `OpDescBind` of the forward operator and returns one or many gradient operator descriptions. `OpDescBind` is a C++ wrapper for the protobuf message `OpDesc`, used to manipulate `OpDesc` quickly.

The `GradOpDescMaker` will be registered in `OpInfo` to replace the `grad_op_type_` field. The `OpInfo` should be

```cpp
struct OpInfo {
  std::function<std::vector<std::unique_ptr<OpDescBind>>(const OpDescBind&)> grad_op_maker_;
  ...
};
```

The `grad_op_maker_` is `nullptr` if the operator does not have any associated gradient operators.

We propose a base class called `GradOpDescMakerBase` to let operator developers generate `Gradient Operators` easily. The public interface of that class is

```cpp
class GradOpDescMakerBase {
 public:
  GradOpDescMakerBase(const OpDescBind&);
  virtual std::vector<std::unique_ptr<OpDescBind>> operator()() const = 0;
};
```

We can convert `GradOpDescMakerBase` to `std::function<std::vector<std::unique_ptr<OpDescBind>>(const OpDescBind&)>` by

```cpp
using GradOpMaker = ...;
std::function<std::vector<std::unique_ptr<OpDescBind>>(const OpDescBind&)> func;
func = [](const OpDescBind& fwd_op) {
  GradOpMaker maker(fwd_op);
  return maker();
};
```

Since `GradOpDescMakerBase` is a class, we can write many helper functions for it. The basic helper functions get the variables of `Input`, `Output`, `InputGradient` and `OutputGradient` in the forward operator.
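
For example, a maker for a hypothetical unary operator could be written roughly as follows (a sketch only; the helper names `Input`, `OutputGrad`, `InputGrad` and the `OpDescBind` setters are assumed here, not the final API):

```cpp
// Sketch of a concrete maker built on the assumed helper functions.
class ReluGradOpDescMaker : public GradOpDescMakerBase {
 public:
  using GradOpDescMakerBase::GradOpDescMakerBase;

  std::vector<std::unique_ptr<OpDescBind>> operator()() const override {
    std::vector<std::unique_ptr<OpDescBind>> retv;
    auto grad_op = std::unique_ptr<OpDescBind>(new OpDescBind());
    grad_op->SetType("relu_grad");
    grad_op->SetInput("X", Input("X"));                // forward input
    grad_op->SetInput("Out@GRAD", OutputGrad("Out"));  // gradient of the forward output
    grad_op->SetOutput("X@GRAD", InputGrad("X"));      // gradient this maker produces
    retv.push_back(std::move(grad_op));
    return retv;
  }
};
```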

We should change the register macros at the same time. In the current solution, there is no difference between forward operators and backward operators, so `REGISTER_OP` just registers one operator. If `REGISTER_OPERATOR` contains `OpProtoAndCheckerMaker` and `GradOpDescMaker`, we just list them in the same macro. It can be done by a macro containing `__VA_ARGS__`.

The user interface should be

```cpp
vector<OpDesc> MinusOpGradMaker(OpDesc) {...}
REGISTER_OPERATOR(minus, MinusOp, MinusOpProtoAndCheckerMaker, MinusOpGradMaker);
// Developers can still manually implement gradient operators.
REGISTER_OPERATOR(minus_grad, MinusGradOp);
```

The interface of the current `REGISTER_OP` macro need not change. `REGISTER_OP` will invoke `REGISTER_OPERATOR` twice and generate the `GradOpDescMaker` internally.

```cpp
REGISTER_OP(minus, MinusOp, MinusOpProtoAndCheckerMaker, minus_grad, MinusGradOp);
```
@ -0,0 +1,146 @@
## How to use Eigen in Paddle

Essentially, a neural network is a compute graph. The data needed for the computation is stored in `Tensor`s and its computation procedure is described by `Operator`s. An `Operator` calls the `Compute` interface in its corresponding `OpKernel` and operates on the `Tensor`.

### Eigen Tensor Module

The Eigen Tensor module supports powerful element-wise computation. In addition, a piece of code written using it can be run on both the CPU and the GPU.

Note that Eigen Tensor is still being actively developed, so its tests are not completely covered and its documentation may be sparse.

For details on the Eigen Tensor module, please see [doc 1](https://github.com/RLovelett/eigen/blob/master/unsupported/Eigen/CXX11/src/Tensor/README.md) and [doc 2](https://bitbucket.org/eigen/eigen/src/default/unsupported/Eigen/CXX11/src/Tensor/README.md).

### paddle::framework::Tensor

Paddle's `Tensor` is defined in the framework directory with the following interface:

```cpp
class Tensor {
 public:
  /*! Return a pointer to mutable memory block. */
  template <typename T>
  inline T* data();

  /**
   * @brief   Return a pointer to mutable memory block.
   * @note    If not exist, then allocation.
   */
  template <typename T>
  inline T* mutable_data(platform::Place place);

  /**
   * @brief     Return a pointer to mutable memory block.
   *
   * @param[in] dims    The dimensions of the memory block.
   * @param[in] place   The place of the memory block.
   *
   * @note      If not exist, then allocation.
   */
  template <typename T>
  inline T* mutable_data(DDim dims, platform::Place place);

  /*! Resize the dimensions of the memory block. */
  inline Tensor& Resize(const DDim& dims);

  /*! Return the dimensions of the memory block. */
  inline const DDim& dims() const;

 private:
  /*! holds the memory block if allocated. */
  std::shared_ptr<Placeholder> holder_;

  /*! points to dimensions of memory block. */
  DDim dim_;
};
```

`Placeholder` is used to delay memory allocation; that is, we can first define a tensor, use `Resize` to configure its shape, and then call `mutable_data` to allocate the actual memory.

```cpp
paddle::framework::Tensor t;
paddle::platform::CPUPlace place;
// set size first
t.Resize({2, 3});
// allocate memory on CPU later
t.mutable_data(place);
```

### paddle::framework::Tensor Usage

`AddOp` demonstrates Tensor's usage.

- InferShape

  When running a neural network's compute graph, first call every `Operator`'s `InferShape` method, and use `Resize` to configure the size of the output tensor.

  ```cpp
  void InferShape(const framework::InferShapeContext &ctx) const override {
    PADDLE_ENFORCE_EQ(ctx.Input<Tensor>("X")->dims(),
                      ctx.Input<Tensor>("Y")->dims(),
                      "Two input of Add Op's dimension must be same.");
    ctx.Output<Tensor>("Out")->Resize(ctx.Input<Tensor>("X")->dims());
  }
  ```

- Run

  ```cpp
  void Compute(const framework::ExecutionContext& context) const override {
    auto* input0 = context.Input<Tensor>("X");
    auto* input1 = context.Input<Tensor>("Y");
    auto* output = context.Output<Tensor>("Out");

    output->mutable_data<T>(context.GetPlace());

    auto x = EigenVector<T>::Flatten(*input0);
    auto y = EigenVector<T>::Flatten(*input1);
    auto z = EigenVector<T>::Flatten(*output);

    auto place = context.GetEigenDevice<Place>();

    z.device(place) = x + y;
  }
  ```

### Converting paddle::framework::Tensor to EigenTensor

As shown above, in actual computation, we need to transform the input and output `Tensor`s into formats Eigen supports. We show some functions in [eigen.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/eigen.h) that implement the transformation from `paddle::framework::Tensor` to `EigenTensor/EigenMatrix/EigenVector/EigenScalar`.

Using EigenTensor as an example:

```cpp
Tensor t;
float* p = t.mutable_data<float>(make_ddim({1, 2, 3}), platform::CPUPlace());
for (int i = 0; i < 1 * 2 * 3; i++) {
  p[i] = static_cast<float>(i);
}

EigenTensor<float, 3>::Type et = EigenTensor<float, 3>::From(t);
```

`From` is an interfacing method provided by the EigenTensor template, which implements the transformation from a `paddle::framework::Tensor` object to an EigenTensor. Since `rank` is a template parameter, it needs to be explicitly specified at the time of the transformation.

In Eigen, tensors with different ranks are different types, with `Vector` being a rank-1 instance. Note that `EigenVector<T>::From` transforms a 1-dimensional Paddle tensor into a 1-dimensional Eigen tensor, while `EigenVector<T>::Flatten` reshapes a Paddle tensor of any rank and flattens it into a 1-dimensional Eigen tensor. Both resulting tensors are still typed EigenVector.
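
A short sketch of the difference, reusing the 1x2x3 tensor `t` from above (the 1-D tensor `v` is added just for this illustration):

```cpp
Tensor v;
v.mutable_data<float>(make_ddim({6}), platform::CPUPlace());

auto ev = EigenVector<float>::From(v);       // 1-D Paddle tensor -> 1-D Eigen tensor
auto flat = EigenVector<float>::Flatten(t);  // 1x2x3 tensor viewed as 6 contiguous elements
```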

For more transformations, see the [unit tests](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/eigen_test.cc) in the `eigen_test.cc` file.

### Implementing Computation

While computing, the device interface is needed on the EigenTensors on the left-hand side of the assignments. Note that the computation between EigenTensors only changes the data originally in the Tensor and does not change the shape information associated with the Tensor.

```cpp
auto x = EigenVector<T>::Flatten(*input0);
auto y = EigenVector<T>::Flatten(*input1);
auto z = EigenVector<T>::Flatten(*output);
auto place = context.GetEigenDevice<Place>();
z.device(place) = x + y;
```

In this code segment, input0/input1/output can be Tensors of arbitrary dimension. We are calling `Flatten` from `EigenVector`, transforming a tensor of any dimension into a 1-dimensional EigenVector. After completing the computation, input0/input1/output still retain the same shape information, and they can be resized using the `Resize` interface.

Because the Eigen Tensor module is under-documented, please refer to the `OpKernel` computation code in TensorFlow's [kernel module documentation](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/kernels).
@ -0,0 +1,93 @@
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/framework/block_desc.h"
#include "paddle/framework/program_desc.h"

namespace paddle {
namespace framework {

VarDescBind *BlockDescBind::NewVar(const std::string &name) {
  need_update_ = true;
  auto it = vars_.find(name);
  PADDLE_ENFORCE(it == vars_.end(), "Duplicated variable %s", name);
  auto var = new VarDescBind(name);
  vars_[name].reset(var);
  return var;
}

VarDescBind *BlockDescBind::Var(const std::string &name) const {
  auto it = vars_.find(name);
  PADDLE_ENFORCE(it != vars_.end(),
                 "Can not find variable %s in current block.", name);
  return it->second.get();
}

bool BlockDescBind::HasVar(const std::string &name) const {
  return vars_.find(name) != vars_.end();
}

std::vector<VarDescBind *> BlockDescBind::AllVars() const {
  std::vector<VarDescBind *> res;
  for (const auto &p : vars_) {
    res.push_back(p.second.get());
  }
  return res;
}

OpDescBind *BlockDescBind::AppendOp() {
  need_update_ = true;
  ops_.emplace_back(new OpDescBind());
  return ops_.back().get();
}

OpDescBind *BlockDescBind::PrependOp() {
  need_update_ = true;
  ops_.emplace_front(new OpDescBind());
  return ops_.front().get();
}

std::vector<OpDescBind *> BlockDescBind::AllOps() const {
  std::vector<OpDescBind *> res;
  for (const auto &op : ops_) {
    res.push_back(op.get());
  }
  return res;
}

void BlockDescBind::Sync() {
  if (need_update_) {
    auto &op_field = *this->desc_->mutable_ops();
    op_field.Clear();
    op_field.Reserve(static_cast<int>(ops_.size()));
    for (auto &op_desc : ops_) {
      op_field.AddAllocated(op_desc->Proto());
    }
    need_update_ = false;
  }
}

BlockDescBind *BlockDescBind::ParentBlock() const {
  if (this->desc_->parent_idx() == -1) {
    return nullptr;
  }
  return prog_->Block(static_cast<size_t>(this->desc_->parent_idx()));
}

void OpDescBind::SetBlockAttr(const std::string &name, BlockDescBind &block) {
  BlockDesc *desc = block.RawPtr();
  this->attrs_[name] = desc;
}
}  // namespace framework
}  // namespace paddle
@ -0,0 +1,81 @@
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#pragma once

#include <deque>
#include <memory>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>
#include "paddle/framework/op_desc.h"
#include "paddle/framework/var_desc.h"
#include "paddle/platform/macros.h"

namespace paddle {
namespace framework {

class ProgramDescBind;

// For each protobuf message, we provide a XXXBind class to optimize read/write
// speed. Only when we want the protobuf message are the local changes
// synchronized (by the `Sync` method).

class BlockDescBind {
 public:
  friend std::vector<std::unique_ptr<OpDescBind>> MakeBlockBackward(
      ProgramDescBind &program_desc, int block_idx,
      std::unordered_set<std::string> &no_grad_vars);

  friend void AppendBackward(
      ProgramDescBind &program_desc,
      const std::unordered_set<std::string> &no_grad_vars);

  BlockDescBind(ProgramDescBind *prog, BlockDesc *desc)
      : prog_(prog), desc_(desc), need_update_(false) {}

  int32_t ID() const { return desc_->idx(); }

  int32_t Parent() const { return desc_->parent_idx(); }

  VarDescBind *NewVar(const std::string &name_bytes);

  VarDescBind *Var(const std::string &name_bytes) const;

  bool HasVar(const std::string &var_name) const;

  std::vector<VarDescBind *> AllVars() const;

  BlockDescBind *ParentBlock() const;

  OpDescBind *AppendOp();

  OpDescBind *PrependOp();

  std::vector<OpDescBind *> AllOps() const;

  void Sync();

  BlockDesc *RawPtr() { return desc_; }

 private:
  ProgramDescBind *prog_;  // not_own
  BlockDesc *desc_;        // not_own
  bool need_update_;

  std::deque<std::unique_ptr<OpDescBind>> ops_;
  std::unordered_map<std::string, std::unique_ptr<VarDescBind>> vars_;

  DISABLE_COPY_AND_ASSIGN(BlockDescBind);
};
}  // namespace framework
}  // namespace paddle
@ -0,0 +1,36 @@
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#pragma once
#include <typeindex>
#include "paddle/framework/framework.pb.h"

namespace paddle {
namespace framework {

inline DataType ToDataType(std::type_index type) {
  if (typeid(float).hash_code() == type.hash_code()) {
    return DataType::FP32;
  } else if (typeid(double).hash_code() == type.hash_code()) {
    return DataType::FP64;
  } else if (typeid(int).hash_code() == type.hash_code()) {
    return DataType::INT32;
  } else {
    PADDLE_THROW("Not supported");
    return static_cast<DataType>(-1);
  }
}

}  // namespace framework
}  // namespace paddle