revert-4814-Add_sequence_project_op
commit a4d410aec8

@ -0,0 +1,180 @@
# Design Doc: Session

## Abstract

The *session* object encapsulates the environment in which the computation graph is executed.

We will have the *local* session and the *remote* session; they offer the same [interface](#interface). The local session encapsulates the local runtime environment and the remote session encapsulates the cluster runtime environment.

The local runtime environment contains:

1. handles to the computation devices (e.g., CPU, GPU), and
1. the [scope](../scope.md) which holds all variables.

The remote runtime environment contains:

1. handles to the computation devices (e.g., the CPUs and GPUs on nodes 0 and 1) in a cluster, and
1. the distributed [scope](../scope.md) in a cluster which holds all variables.

The user can create a remote session on Paddle Cloud and evaluate the computation graph with it. In this way, the user can control the remote computation resources in a cluster from their local computer.

## Background

The current design has an implicit global session in which `paddle.eval()` is executed. The pain point is:

Since the user is not able to explicitly switch between runtime environments, the user cannot run a topology in two independent environments.

For example, in reinforcement learning, the user may want to have a stale model for inference and a fresh model for training, and only replace the stale model with the fresh model periodically.

Furthermore, we have no concept that encapsulates a remote environment that executes a computation graph.

We need the session object to address the above issues.
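With explicit sessions, the reinforcement-learning scenario above could look roughly like the sketch below. This is only an illustration that assumes the `session` interface described in the next section; `train_op`, `policy_op`, and the `copy_parameters_to` helper are hypothetical names.

```python
# Hypothetical sketch: two independent runtime environments for one topology.
train_sess = paddle.session()   # fresh model, updated every step
infer_sess = paddle.session()   # stale model, refreshed only periodically

for step in range(10000):
    train_sess.eval(train_op)            # update the fresh model
    action = infer_sess.eval(policy_op)  # act with the stale model
    if step % 100 == 0:
        # copy_parameters_to is a hypothetical helper that syncs the fresh
        # parameters into the inference session's scope.
        train_sess.copy_parameters_to(infer_sess)
```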
## Session

A session is an object that owns the runtime environment. All computations are executed through `session.eval()`.

### Interface

```python
eval(
    targets,
    feed_dict=None,
)
```

Evaluates the target Operations or Variables in `targets`.

- *targets*: the evaluation targets. Can be a single Operation or Variable, or a list of Operations or Variables. The value returned by `eval()` has the same shape as the `targets` argument (see the sketch after this list).

  The PaddlePaddle program is represented by the [ProgramDesc](../design/program.md); `eval()` will infer the ProgramDesc from the given targets and run the PaddlePaddle program. Please see [this graph](./distributed_architecture.md#local-training-architecture) for a detailed illustration of the local session and [this graph](./distributed_architecture.md#distributed-training-architecture) for a detailed illustration of the remote session.

- *feed_dict*: a dictionary that contains the tensors which override the edges of the computation graph.

  `feed_dict` not only provides the input data, it can also override any OP's input:

  ```python
  a = pd.constant(2.0, name="a")
  b = pd.variable(name="b")
  c = pd.mul(a, b)
  sess.eval(targets=c, feed_dict={"b": 3.0})  # returns 6.0
  ```
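To illustrate how the return value mirrors the shape of `targets`, here is a minimal sketch that reuses the hypothetical `pd` and `sess` names from the snippet above:

```python
a = pd.constant(1.0, name="a")
b = pd.constant(2.0, name="b")

v = sess.eval(targets=a)            # a single target returns a single value: 1.0
va, vb = sess.eval(targets=[a, b])  # a list of targets returns a list: [1.0, 2.0]
```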
```python
close()
```

Closes the session and releases the scope that the session owns.

### Create a Local Session

```python
session(
    devices=None
)
```

Creates a new session. One session owns one global scope, so creating multiple sessions will create different scopes.

- *devices*: a single `string` or a list of `string`s of device names; the corresponding devices will be the computation devices for `eval()`. If not specified, all available devices (e.g., all GPUs) will be used. The user does not need to specify the CPU device since it will always be used. Multiple sessions can use the same device.

#### Example

```python
a = paddle.constant(1.0)
b = paddle.constant(2.0)
c = a + b
sess = paddle.session(devices=["gpu:0", "gpu:1", "fpga:0"])
sess.eval(c)
sess.close()
```

### Create a Remote Session

```python
create_cloud_job(
    name,
    num_trainer,
    mem_per_trainer,
    gpu_per_trainer,
    cpu_per_trainer,
    num_ps,
    mem_per_ps,
    cpu_per_ps,
)
```

Creates a Paddle Cloud job. Fails if the job name exists.

```python
get_cloud_job(
    name
)
```

Gets a Paddle Cloud job.

```python
remote_session(
    job
)
```

Creates a remote session backed by the given Paddle Cloud job.

- *job*: the Paddle Cloud job.

#### Example

```python
reader = paddle.reader.recordio("/pfs/home/peter/mnist-train-*")  # data stored on Paddle Cloud
image = reader.column(0)
label = reader.column(1)
fc1 = paddle.op.fc(image, size=256, act="sigmoid")
fc2 = paddle.op.fc(fc1, size=10, act="softmax")
cost = paddle.op.cross_entropy(fc2, label)
opt = paddle.optimizer.sgd(cost)

job = paddle.create_cloud_job("test", 3, "1G", 1, 1, 2, "1G", 1)
sess = paddle.remote_session(job)
for i in range(1000):
    sess.eval(opt)
sess.close()
```
@ -0,0 +1,90 @@
# Design Doc: Gradient Operators Registration

## The Problem Posed

In our current operator registration mechanism, for each operator, the programmer should register a *gradient operator creator* function, which takes a C++ operator instance and returns the corresponding gradient operator instance.

However, as we decided to separate the *compilation* and *execution* of DL models, we need to reshape the creator to take a protobuf `OpDesc` message and return the corresponding gradient `OpDesc` message(s).

Moreover, the new registration mechanism needs to support the fact that an operator's gradient computation might be a composition of operators.

## Current Implementation

`OpInfo` objects are stored in an association map whose key is the operator type. The `grad_op_type_` field indicates the associated gradient operator type, and an operator can create its gradient operator through the `OpInfo::creator_` of that gradient type. The pseudo code is:

```cpp
struct OpInfo {
  std::function<OperatorBase*(...)> creator_;
  std::string grad_op_type_;
  ...
};

map<string, OpInfo> OpInfoMap;

OperatorBase* CreateGradientOperator(const OperatorBase& op) {
  return OpInfoMap.at(op.Type()).creator_(...);
}
```

## Proposed Solution

The mapping relationship between an operator and its gradient operators is a function. The interface of that function is:

```cpp
// (OpDesc) --> vector<OpDesc>
std::function<std::vector<OpDescBind>(const OpDescBind&)>;
```

The function takes an `OpDescBind` of the forward operator and returns one or more gradient operator descriptions. `OpDescBind` is a C++ wrapper of the protobuf message `OpDesc` that allows fast manipulation of `OpDesc`.

The `GradOpDescMaker` will be registered in `OpInfo`, replacing the `grad_op_type_` field. The `OpInfo` should be:

```cpp
struct OpInfo {
  std::function<std::vector<std::unique_ptr<OpDescBind>>(const OpDescBind&)> grad_op_maker_;
  ...
};
```

The `grad_op_maker_` is `nullptr` if the operator does not have associated gradient operators.

We propose a base class called `GradOpDescMakerBase` to let operator developers generate gradient operators easily. The public interface of that class is:

```cpp
class GradOpDescMakerBase {
 public:
  explicit GradOpDescMakerBase(const OpDescBind& fwd_op);
  virtual std::vector<std::unique_ptr<OpDescBind>> operator()() const = 0;
};
```

We can convert `GradOpDescMakerBase` to `std::function<std::vector<std::unique_ptr<OpDescBind>>(const OpDescBind&)>` by

```cpp
using GradOpMaker = ...;
std::function<std::vector<std::unique_ptr<OpDescBind>>(const OpDescBind&)> func;
func = [](const OpDescBind& fwd_op) {
  GradOpMaker maker(fwd_op);
  return maker();
};
```

Since `GradOpDescMakerBase` is a class, we can write many helper functions in it. The basic helper functions retrieve the variables of `Input`, `Output`, `InputGradient` and `OutputGradient` of the forward operator.
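As an illustration, a maker for a hypothetical `scale` operator might look like the sketch below. Helper names such as `OutputGrad`, `InputGrad`, and `Attrs`, as well as the `OpDescBind` setters, are assumed here only for illustration and may differ from the final API.

```cpp
class ScaleGradOpDescMaker : public GradOpDescMakerBase {
 public:
  using GradOpDescMakerBase::GradOpDescMakerBase;

  std::vector<std::unique_ptr<OpDescBind>> operator()() const override {
    std::vector<std::unique_ptr<OpDescBind>> grad_ops;
    grad_ops.emplace_back(new OpDescBind());
    OpDescBind* grad = grad_ops.back().get();
    grad->SetType("scale");                  // the gradient of scale is another scale
    grad->SetInput("X", OutputGrad("Out"));  // feed the forward output's gradient in
    grad->SetOutput("Out", InputGrad("X"));  // write the forward input's gradient out
    grad->SetAttrMap(Attrs());               // reuse the forward attributes
    return grad_ops;
  }
};
```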
We should change the registration macros at the same time. In the current solution, there is no difference between forward operators and backward operators, so `REGISTER_OP` just registers one operator. If `REGISTER_OPERATOR` also needs to accept an `OpProtoAndCheckerMaker` and a `GradOpDescMaker`, we can simply list them in the same macro; this can be done with a macro that uses `__VA_ARGS__`, as sketched below.
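The following is only a rough sketch of such a variadic macro; `OperatorRegistrar` and its argument handling are assumptions used to illustrate the `__VA_ARGS__` idea, not the final implementation.

```cpp
// Each extra class listed after the operator class (proto-and-checker maker,
// grad-op maker, ...) becomes one more template argument of the registrar,
// which fills in the corresponding OpInfo fields.
template <typename... ARGS>
struct OperatorRegistrar {
  explicit OperatorRegistrar(const char* op_type) {
    // populate the OpInfoMap entry for op_type using each type in ARGS
  }
};

#define REGISTER_OPERATOR(op_type, op_class, ...)        \
  static OperatorRegistrar<op_class, ##__VA_ARGS__>      \
      __op_registrar_##op_type##__(#op_type)
```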
The user interface should be:

```cpp
vector<OpDesc> MinusOpGradMaker(OpDesc) {...}
REGISTER_OPERATOR(minus, MinusOp, MinusOpProtoAndCheckerMaker, MinusOpGradMaker);
// Developers can still manually implement the gradient operator.
REGISTER_OPERATOR(minus_grad, MinusGradOp);
```

The interface of the current `REGISTER_OP` macro stays unchanged. Internally, `REGISTER_OP` will invoke `REGISTER_OPERATOR` twice and generate a `GradOpDescMaker` on the fly.

```cpp
REGISTER_OP(minus, MinusOp, MinusOpProtoAndCheckerMaker, minus_grad, MinusGradOp);
```
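Conceptually, that single `REGISTER_OP` line could expand into something like the following two registrations; the generated maker's name and behavior are hypothetical here.

```cpp
// A grad-op maker generated by REGISTER_OP itself: it emits one "minus_grad"
// OpDesc that reuses the forward operator's inputs, outputs and attributes.
class MinusDefaultGradOpDescMaker : public GradOpDescMakerBase { /* generated */ };

REGISTER_OPERATOR(minus, MinusOp, MinusOpProtoAndCheckerMaker,
                  MinusDefaultGradOpDescMaker);
REGISTER_OPERATOR(minus_grad, MinusGradOp);
```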
@ -1,27 +1,32 @@
add_subdirectory(cuda)
add_subdirectory(function)
add_subdirectory(utils)
add_subdirectory(testing)
add_subdirectory(math)
add_subdirectory(parameter)
add_subdirectory(gserver)
add_subdirectory(pserver)
add_subdirectory(trainer)
add_subdirectory(scripts)
add_subdirectory(string)

if(Boost_FOUND)
add_subdirectory(memory)
add_subdirectory(platform)
add_subdirectory(framework)
add_subdirectory(operators)
add_subdirectory(pybind)
endif()
add_subdirectory(parameter)
add_subdirectory(testing)

if(WITH_C_API)
if(MOBILE_INFERENCE)
add_subdirectory(capi)
endif()
else()
add_subdirectory(pserver)
add_subdirectory(trainer)
add_subdirectory(string)
add_subdirectory(scripts)

if(WITH_C_API)
add_subdirectory(capi)
endif()

if(Boost_FOUND)
add_subdirectory(memory)
add_subdirectory(platform)
add_subdirectory(framework)
add_subdirectory(operators)
add_subdirectory(pybind)
endif()

if(WITH_SWIG_PY)
add_subdirectory(api)
if(WITH_SWIG_PY)
add_subdirectory(api)
endif()
endif()