@@ -0,0 +1,114 @@
# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse

import matplotlib.pyplot as plt


def parse_args():
    parser = argparse.ArgumentParser('Parse Log')
    parser.add_argument(
        '--file_path', '-f', type=str, help='the path of the log file')
    parser.add_argument(
        '--sample_rate',
        '-s',
        type=float,
        default=1.0,
        help='the rate to take samples from the log')
    parser.add_argument(
        '--log_period', '-p', type=int, default=1, help='the period of the log')

    args = parser.parse_args()
    return args


def parse_file(file_name):
    """Parse the training log and return the loss and accuracy curves."""
    loss = []
    error = []
    with open(file_name) as f:
        for line in f:
            line = line.strip()
            # Only lines that start with 'pass' and contain exactly five
            # space-separated fields are parsed; the loss and error are taken
            # from the 'key=value,' tokens in fields 3 and 4.
            if not line.startswith('pass'):
                continue
            line_split = line.split(' ')
            if len(line_split) != 5:
                continue

            loss_str = line_split[2][:-1]
            cur_loss = float(loss_str.split('=')[-1])
            loss.append(cur_loss)

            err_str = line_split[3][:-1]
            cur_err = float(err_str.split('=')[-1])
            error.append(cur_err)

    accuracy = [1.0 - err for err in error]

    return loss, accuracy


def sample(metric, sample_rate):
    """Take evenly spaced samples from a metric sequence."""
    interval = int(1.0 / sample_rate)
    if interval > len(metric):
        return metric[:1]

    # Use integer division so that the result is a valid range() argument.
    num = len(metric) // interval
    idx = [interval * i for i in range(num)]
    metric_sample = [metric[i] for i in idx]
    return metric_sample


def plot_metric(metric,
                batch_id,
                graph_title,
                line_style='b-',
                line_label='y',
                line_num=1):
    plt.figure()
    plt.title(graph_title)
    if line_num == 1:
        plt.plot(batch_id, metric, line_style, label=line_label)
    else:
        for i in range(line_num):
            plt.plot(batch_id, metric[i], line_style[i], label=line_label[i])
    plt.xlabel('batch')
    plt.ylabel(graph_title)
    plt.legend()
    plt.savefig(graph_title + '.jpg')
    plt.close()


def main():
    args = parse_args()
    assert 0. < args.sample_rate <= 1.0, "The sample rate should be in the range (0, 1]."

    loss, accuracy = parse_file(args.file_path)
    batch = [args.log_period * i for i in range(len(loss))]

    batch_sample = sample(batch, args.sample_rate)
    loss_sample = sample(loss, args.sample_rate)
    accuracy_sample = sample(accuracy, args.sample_rate)

    plot_metric(loss_sample, batch_sample, 'loss', line_label='loss')
    plot_metric(
        accuracy_sample,
        batch_sample,
        'accuracy',
        line_style='g-',
        line_label='accuracy')


if __name__ == '__main__':
    main()
@@ -0,0 +1,158 @@
# Backward Building

## Motivation

In neural networks, most models are currently solved by the backpropagation algorithm (known as **BP**). Technically, BP calculates the gradient of the loss function, then propagates it back through the network following the chain rule. However, when configuring the model structure, users do not need to define the backward part. So the framework needs a mechanism that can complete the model's backward part automatically according to the given forward part.

When implementing a specific `op`, the developer is also asked to implement its backward version, called `grad_op`. A `grad_op` takes the gradients of its corresponding `op`'s outputs and calculates the gradients of the `op`'s inputs. During the building of a model's backward part, the framework creates each forward `op`'s `grad_op` and strings them together in the reverse order of the forward part. In this way, gradients propagate from the end to the beginning of the model, in other words, from the loss to the parameters.
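
As a concrete illustration of this pairing, the following NumPy sketch shows a `mul` op and its `grad_op`: the `grad_op` consumes the gradient of the output and produces the gradients of both inputs. The function names are only illustrative, not Fluid's real kernels.

```python
# A framework-free sketch of the op/grad_op relationship for a matrix
# multiplication op; `mul_op` and `mul_grad_op` are illustrative names.
import numpy as np

def mul_op(X, W):                  # forward: Y = X * W
    return X.dot(W)

def mul_grad_op(X, W, dY):         # backward: takes dL/dY, returns dL/dX and dL/dW
    dX = dY.dot(W.T)               # gradient w.r.t. the first input
    dW = X.T.dot(dY)               # gradient w.r.t. the second input
    return dX, dW

X, W = np.random.rand(2, 3), np.random.rand(3, 4)
Y = mul_op(X, W)
dY = np.ones_like(Y)               # gradient flowing back from the loss
dX, dW = mul_grad_op(X, W, dY)
```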

## Challenges

The motivation of backward building is apparent. However, implementing it correctly is not so easy. In the **Fluid** design, a deep learning model is described by `Program`, `Block`, `Op` and `Variable`. The `Block` itself can be nested, which means that `op`s and `variable`s are scattered across different blocks rather than gathered in a single graph. Our backward building algorithm shall visit blocks recursively and be able to insert `grad_op`s and newly created `variable`s into the right places.

## Usage

Although the whole algorithm is comprised of many functions, only one is exposed as an API:

```python
def append_backward(loss, parameter_list=None, no_grad_set=None):
    """
    Append the backward part to main_program.

    Args:
        loss(Variable): The variable generated by the cost function.
        parameter_list(list): Parameters that need to be updated by optimizers.
            If None, it means all parameters need to be updated.

        no_grad_set(set): Variables that have no gradients in Block 0.
            If None, the set will be generated inside the function and
            contains all variables with `stop_gradient=True` from all blocks.

    Return:
        (list[Variable]): list of (parameter, gradient) pairs.
    """
```

By invoking this API, the framework appends the backward part to the program in which the `loss` resides. The API takes three arguments. `loss` is the final loss value. It must be a scalar and is usually the output of the loss layer. It is also where the gradient is generated and backpropagation starts. `parameter_list` marks all parameters that need updating; if it is `None`, all parameters will be updated by optimizers. `no_grad_set` marks variables without gradients; if all outputs of some `grad_op` are in `no_grad_set`, that `grad_op` will not be run.

This API is invoked automatically before optimizer building.
As a result, in most cases, users do not need to invoke the API themselves to append the backward part.
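
A minimal usage sketch, assuming the historical `paddle.fluid` Python API (the network definition is only illustrative):

```python
# Calling append_backward() by hand; optimizer.minimize(loss) would normally
# do this for you before appending the optimization ops.
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_pred = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.mean(fluid.layers.square_error_cost(input=y_pred, label=y))

# Appends the grad_ops and gradient variables to the default main program and
# returns the (parameter, gradient) pairs.
param_grads = fluid.backward.append_backward(loss)
```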

## Implementation

The implementation of the backward building algorithm is in the `backward.py` file. The whole algorithm can be divided into two independent parts: creating `grad_op`s and creating new variables.

### Creating `grad_op`s

The creation of `grad_op`s is implemented by:

```python
def _append_backward_ops_(target,
                          block,
                          target_block,
                          no_grad_dict,
                          grad_to_var):
    """
    Create all grad ops, and insert them into the given block.

    Args:
        target(Variable): the target variable of the forward pass
        block(Block): the block where the forward ops are
        target_block(Block): the block which is going to hold the newly generated grad ops
        no_grad_dict(dict):
            key(int): block index
            val(set): a set of variable names; these variables have no gradient
        grad_to_var(dict)(output argument):
            key(str): grad variable name
            val(str): corresponding forward variable name
    """
```

Given a `block`, the function traverses all `op`s in this block in reverse order, gets the corresponding `grad_op` from the C++ core via `core.get_grad_op_desc()`, and then appends it to `target_block`.

However, some specific `op`s (e.g. `while_op`, `if_else_op`) can hold their own sub-blocks. Since these sub-blocks contain `op`s as well, the `grad_op` creation should be recursive.

During the reverse traversal, we check whether each `op` has an attribute named `sub_block`. If so, it means there is a sub-block and we need to deal with it first. After creating a new block whose parent is the one in the `op`'s attribute, we invoke `_append_backward_ops_()` recursively, assigning the new block to the parameter `target_block` and the one in the `op`'s attribute to `block`. The *pseudo-code* shows this process:

```
******* pseudo-code ********
for op in reversed(block.ops):
    if op has an attribute named 'sub_block':
        Get the sub-block (`s_block`) from op's attribute.
        Create a new block (`grad_s_block`), whose parent is `s_block`.
        Invoke _append_backward_ops_(), with `block=s_block` and `target_block=grad_s_block`.

    Invoke `core.get_grad_op_desc()` to get op's grad_op.
    Record in grad_to_var the correspondence between the grad_op's gradient variables and their forward variables.
    Assign grad_s_block to grad_op as its 'sub_block' attribute.
    Append grad_op to the current target_block.
```

The first invocation of `_append_backward_ops_()` is initiated by `append_backward()`, in which the parameters `block` and `target_block` are both assigned the root block (the block with index 0).
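
The following self-contained toy reproduces the recursion scheme above with plain Python classes. It only models the traversal order and the sub-block handling, not the real `backward.py` code; the `Op`/`Block` classes and `make_grad_op()` are stand-ins.

```python
# Toy model of the reverse traversal with recursion into sub-blocks.
class Op:
    def __init__(self, type_, sub_block=None):
        self.type = type_
        self.sub_block = sub_block      # a Block held in the 'sub_block' attribute, or None

class Block:
    def __init__(self, parent=None):
        self.parent = parent
        self.ops = []

def make_grad_op(op):
    # Stand-in for core.get_grad_op_desc(): one grad_op per forward op.
    return Op(op.type + '_grad')

def append_backward_ops(block, target_block):
    for op in reversed(block.ops):
        grad_op = make_grad_op(op)
        if op.sub_block is not None:
            # The op owns a sub-block: build its grad block first, recursively.
            grad_s_block = Block(parent=op.sub_block)
            append_backward_ops(op.sub_block, grad_s_block)
            grad_op.sub_block = grad_s_block
        target_block.ops.append(grad_op)

# Forward program: the root block contains a while_op whose body is a sub-block.
body = Block()
body.ops = [Op('mul'), Op('relu')]
root = Block()
root.ops = [Op('read'), Op('while', sub_block=body), Op('mean')]

grad_root = Block()
append_backward_ops(root, grad_root)
print([op.type for op in grad_root.ops])  # ['mean_grad', 'while_grad', 'read_grad']
```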

### Corner Cases of `grad_op` Creating

The previous section shows the regular process of `grad_op` creation. However, in some corner cases, the conventional algorithm is not enough to get the correct result, and additional handling is required. These additional processes run after the algorithm mentioned above and make some special adjustments to its output `grad_op`s.

#### Shared Variables

If a variable is read by more than one `op` in the forward pass, its gradient is likely to be written by more than one `grad_op` in the following backward pass. To make the gradient result the sum of all `grad_op`s' outputs instead of only the last one written, we assign each output a temporary variable and then add a `sum_op` to add them up.

For debugging convenience, if the final gradient name is `w@GRAD`, its corresponding temporary variables will be named `w@GRAD@RENAME@0`, `w@GRAD@RENAME@1`, and so on.

See the function `_addup_repetitive_outputs_` in `backward.py` for implementation details.
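
A toy version of the renaming-and-summing idea, operating on grad_ops represented as plain dicts. The data layout is invented for illustration; the real pass rewrites `OpDesc` messages.

```python
from collections import Counter

def addup_repetitive_outputs(grad_ops):
    # Count how many grad_ops write each gradient name.
    counts = Counter(out for op in grad_ops for out in op['outputs'])
    next_id = Counter()
    renamed = {}                                    # original name -> temp names
    for op in grad_ops:
        for i, out in enumerate(op['outputs']):
            if counts[out] > 1:                     # written by more than one grad_op
                tmp = '%s@RENAME@%d' % (out, next_id[out])
                next_id[out] += 1
                op['outputs'][i] = tmp
                renamed.setdefault(out, []).append(tmp)
    # One sum_op per shared gradient, adding the temporaries back together.
    sum_ops = [{'type': 'sum', 'inputs': tmps, 'outputs': [orig]}
               for orig, tmps in renamed.items()]
    return grad_ops + sum_ops

ops = [{'type': 'mul_grad', 'outputs': ['w@GRAD']},
       {'type': 'fc_grad',  'outputs': ['w@GRAD']}]
print(addup_repetitive_outputs(ops)[-1])
# {'type': 'sum', 'inputs': ['w@GRAD@RENAME@0', 'w@GRAD@RENAME@1'], 'outputs': ['w@GRAD']}
```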

#### No Gradient Variables

In our framework, variables can be marked as *no_gradient*, which means that the gradient of the variable is unnecessary and can be considered as zero during model training. Apparently, when all the outputs of some `grad_op` are marked as *no_gradient*, the `grad_op` itself can be skipped in the backward pass.

Another situation is when all the gradient inputs of some `grad_op` are marked as *no_gradient*, which means all of them can be considered as zeros. Since `grad_op`s are in essence the propagation of gradients, all the outputs are definitely zeros when all the gradient inputs are zeros. Therefore this `grad_op` can also be skipped.

It should be noted that all these zero gradients still need to be created and initialized by something, otherwise the following `grad_op`s that take these gradients as inputs risk using uninitialized memory. In our code, we employ `fill_zeros_like_op` to initialize them as all zeros.

This feature is implemented in the function `_remove_no_grad_branch_`. It checks the newly created `grad_op`s one by one, removes those that can be skipped, and inserts `fill_zeros_like_op` where necessary. We can get the `no_grad_set` from the `_append_backward_ops_` argument `no_grad_dict`, or generate it on the fly by scanning all variables' `no_gradient` attribute (True or False). A toy sketch of the pruning rule is shown below.
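
The sketch again uses dict-based grad_ops; it is illustrative only, not the real `_remove_no_grad_branch_` code.

```python
# Drop grad_ops whose outputs are all in no_grad_set, and zero-fill any
# gradient that a surviving grad_op still needs but whose producer was skipped.
def remove_no_grad_branch(grad_ops, no_grad_set):
    kept = [op for op in grad_ops
            if not all(out in no_grad_set for out in op['outputs'])]

    produced = {out for op in kept for out in op['outputs']}
    result = []
    for op in kept:
        for name in op['inputs']:
            if name.endswith('@GRAD') and name in no_grad_set and name not in produced:
                # The producer was skipped; feed zeros instead of uninitialized memory.
                result.append({'type': 'fill_zeros_like',
                               'inputs': [name[:-len('@GRAD')]],
                               'outputs': [name]})
                produced.add(name)
        result.append(op)
    return result

ops = [{'type': 'relu_grad', 'inputs': ['z@GRAD'], 'outputs': ['y@GRAD']},
       {'type': 'fc_grad',   'inputs': ['y@GRAD'], 'outputs': ['w@GRAD', 'b@GRAD']}]
print([op['type'] for op in remove_no_grad_branch(ops, {'y@GRAD'})])
# ['fill_zeros_like', 'fc_grad']   (relu_grad is skipped; its output is zero-filled)
```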

### Creating Backward Variables

Up to now, we have completed all the creating and adjusting jobs of `grad_op`s. However, the backward variables have not been created yet; they are only represented by the `grad_op`s' input and output arguments. The backward variable creation job is done by:

```python
def _append_backward_vars_(block,
                           start_op_idx,
                           grad_to_var,
                           grad_info_map):
    """
    Create new variables required by the backward pass.

    Args:
        block(Block): the block where new variables will be created
        start_op_idx(int): Only variables required by ops in block.ops[start_op_idx : ] will be created
        grad_to_var(dict):
            key(str): grad variable name
            val(str): corresponding forward variable name
            In most cases, this dict is generated by _append_backward_ops_()
        grad_info_map(dict)(output argument):
            key(str): forward variable name
            val(tuple): a tuple of (str, int); str is the corresponding grad name, int is the block index
    """
```

Given a `block`, this function traverses all the `grad_op`s in it (the argument `start_op_idx` indicates where the grad_op sequence starts) and creates all the uncreated outputs. The *pseudo-code* shows this process:

```
for op in block.ops[start_op_idx : ]:

    if op has an attribute named 'sub_block':
        Get the sub-block (`s_block`) from op's attribute.
        Invoke _append_backward_vars_(), with `block=s_block`.

    for var_name in op.all_output_names():
        if block.has_var_recursive(var_name) or var_name is the name of the empty variable:
            continue
        Create a new variable named `var_name` in block.
        if var_name in grad_to_var:
            Set grad_info_map[grad_to_var[var_name]] to the tuple (var_name, block).

    Do op's var type inference.
    Do op's shape inference.
```
@@ -0,0 +1,163 @@
# Design Doc: Concurrent Programming with Fluid

With PaddlePaddle Fluid, users describe a program rather than a model. The program is a [`ProgramDesc`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/framework.proto) protobuf message. TensorFlow/MxNet/Caffe2 applications generate protobuf messages too, but their protobuf messages represent the model, a graph of operators, not the program that trains/uses the model.

Many know that when we program TensorFlow, we can specify the device on which each operator runs. This allows us to create a concurrent/parallel AI application. An interesting question is **how does a `ProgramDesc` represent a concurrent program?**

The answer relies on the fact that a `ProgramDesc` is similar to an abstract syntax tree (AST) that describes a program. So users can write a concurrent program just as they would with any concurrent programming language, e.g., [Go](https://golang.org).

## An Analogy

The following table compares concepts in Fluid and Go:

| Go | Fluid |
|----|-------|
| user-defined functions | [layers](https://github.com/PaddlePaddle/Paddle/tree/develop/python/paddle/v2/fluid) |
| control-flow and built-in functions | [intrinsics/operators](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/operators) |
| goroutines, channels | [class ThreadPool](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/framework/thread_pool.h) |
| runtime | [class Executor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/executor.h) |

## An Example Concurrent Program

To review all the above concepts in an example, let us take a simple program and write its distributed version.

Suppose that we want to parallelize a naive Fluid program (written in Go and calling Fluid's Go binding) that multiplies two tensors.

```go
import "fluid"

func paddlepaddle() {
  X = fluid.read(...)
  W = fluid.Tensor(...)
  Y = fluid.mult(X, W)
}
```

Please be aware that Fluid's Go binding provides the default `main` function, which calls the `paddlepaddle` function. In this case, `paddlepaddle` is defined in the above program and creates the following `ProgramDesc` message.

```protobuf
message ProgramDesc {
  block[0] = Block {
    vars = [X, W, Y],
    ops = [
      read(output = X)
      assign(input = ..., output = W)
      mult(input = {X, W}, output = Y)
    ],
  }
}
```

Then, the default `main` function calls `fluid.run()`, which creates an instance of the [`class Executor`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/executor.h) and calls `Executor.Run(block[0])`, where `block[0]` is the first and only block defined in the above `ProgramDesc` message.

The default `main` function is defined as follows:

```go
func main() {
  paddlepaddle()
  fluid.run()
}
```
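
For comparison, the Python binding already exposes this run-the-first-block behavior. A rough sketch using the `paddle.fluid` Python API (the program construction is elided; treat the exact module paths as an assumption about the then-current API):

```python
# Build the ProgramDesc implicitly through the Python layers, then hand its
# block 0 to an Executor, mirroring the Go driver above.
import paddle.fluid as fluid

# ... define the program here, e.g. layers that read X, create W, and multiply ...

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())   # initialize parameters such as W
exe.run(fluid.default_main_program())      # roughly Executor.Run(block[0])
```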

## The Concurrent Version

By parallelizing the above program, we could support a very big tensor X by splitting it into small pieces {x_1, x_2, ...} and sending each piece to a worker process/node for parallel multiplication.

In this case, we can write a transpiler that takes a `ProgramDesc` message representing the above example program and outputs two `ProgramDesc` messages, one for running on the master process/node, and the other for the worker processes/nodes.
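
The following framework-free sketch, using NumPy and a thread pool, illustrates why this split is valid: multiplying each row chunk of X by W and concatenating the results gives the same Y as the serial program.

```python
# Split X row-wise, let each "worker" multiply its slice by W, then reassemble Y.
# NumPy and a thread pool stand in for the send/recv ops and the worker pods.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

X = np.random.rand(8, 3)
W = np.random.rand(3, 4)
pieces = np.array_split(X, 4)                      # {x_1, x_2, ...}

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_results = list(pool.map(lambda x: x.dot(W), pieces))

Y = np.concatenate(partial_results)                # gather into the "tensor array"
assert np.allclose(Y, X.dot(W))                    # same result as the serial program
```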

### The Master Program

The master program could look like the following:

```protobuf
message ProgramDesc {
  block[0] = Block {
    vars = [X, L, Y],
    ops = [
      read(output = X)
      kube_get_workers_addrs(output = L)
      Y = tensor_array(len(L))
      parallel_for(input = X, output = Y,
                   attrs = {L, block_id(1)}) # referring to block 1
    ]
  }

  block[1] = Block {
    parent = 0,
    vars = [x, y, index],
    ops = [
      slice(input = [X, index], output = x) # index is initialized by parallel_for
      send(input = x, attrs = L[index])
      recv(outputs = y, attrs = L[index])
      assign(input = y, output = Y[index])
    ]
  }
}
```

The equivalent Fluid program (calling the Go binding) is:

```go
func main() {  //// block 0
  X = fluid.read(...)
  L = fluid.k8s.get_worker_addrs()
  Y = fluid.tensor_array(len(L))
  fluid.parallel_for(X, L,
                     func(index int) {  //// block 1
                       x = X[index]
                       fluid.send(L[index], x)
                       y = fluid.recv(L[index])
                       Y[index] = y
                     })
}
```

An explanation of the above program:

- `fluid.k8s` is a package that provides access to the Kubernetes API.
- `fluid.k8s.get_worker_addrs` returns the list of IPs and ports of all pods of the current job except for the current one (the master pod).
- `fluid.tensor_array` creates a [tensor array](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/lod_tensor_array.h). `fluid.parallel_for` creates a `ParallelFor` intrinsic, which, when executed (see the sketch after this list),

  1. creates `len(L)` scopes, each for the concurrent running of the sub-block (block 1 in this case), and initializes a variable named "index" in each scope to an integer value in the range `[0, len(L)-1]`, and
  2. creates `len(L)` threads by calling into the `ThreadPool` singleton, each of which
     1. creates an Executor instance, and
     2. calls `Executor.Run(block)`, where `block` is block 1 as explained above.

  Note that block 1 is a sub-block of block 0, so ops in block 1 can refer to variables defined in block 0.
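
A minimal Python sketch of these `ParallelFor` semantics: plain dicts play the role of scopes, and a thread pool plays the role of the `ThreadPool` singleton. This illustrates the execution model only, not Fluid's C++ operator.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for(L, run_block):
    # One scope per concurrent run; each scope holds its own "index" variable.
    scopes = [{'index': i} for i in range(len(L))]
    with ThreadPoolExecutor(max_workers=len(L)) as pool:      # stand-in for ThreadPool
        # Each thread plays the role of an Executor running block 1 in its scope.
        list(pool.map(lambda scope: run_block(scope['index']), scopes))

workers = ['10.0.0.1:8000', '10.0.0.2:8000']                  # stand-in for L
Y = [None] * len(workers)

def block1(index):
    # The real block 1 would slice X, send the slice to the worker, and recv y.
    Y[index] = 'result from ' + workers[index]

parallel_for(workers, block1)
print(Y)
```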

### The Worker Program

The worker program looks like:

```go
func main() {
  W = Tensor(...)
  x = fluid.listen_and_do(
        fluid.k8s.self_addr(),
        func(input Tensor) {
          output = fluid.mult(input, W)
        })
}
```

where

- `fluid.listen_and_do` creates a `ListenAndDo` intrinsic, which, when executed (see the sketch after this list),
  1. listens on the current pod's IP address, as returned by `fluid.k8s.self_addr()`,
  2. once a connection is established,
     1. creates a scope with two parameters, "input" and "output",
     2. reads a [Fluid variable](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/variable.h) and saves it into "input", and
     3. creates an Executor instance and calls `Executor.Run(block)`, where the block is generated by running the lambda specified as the second parameter of `fluid.listen_and_do`.
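
A minimal Python sketch of this `ListenAndDo` behavior: listen on an address and, for each connection, build a fresh scope, read "input", run the block, and write back "output". A TCP server, pickled NumPy tensors, and a plain matmul stand in for the intrinsic, the Fluid variables, and the generated block; everything here is illustrative.

```python
import pickle
import socketserver
import numpy as np

W = np.random.rand(3, 4)                       # the worker-side parameter

class ListenAndDo(socketserver.StreamRequestHandler):
    def handle(self):
        scope = {}                                          # a fresh scope per connection
        scope['input'] = pickle.loads(self.rfile.read())    # read the incoming variable
        scope['output'] = scope['input'].dot(W)             # run the block: mult(input, W)
        self.wfile.write(pickle.dumps(scope['output']))     # send the result back

if __name__ == '__main__':
    # Stand-in for fluid.k8s.self_addr(); the port is arbitrary.
    with socketserver.TCPServer(('0.0.0.0', 8000), ListenAndDo) as server:
        server.serve_forever()
```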

## Summary

From the above example, we see that:

1. Fluid enables the imperative programming paradigm by:
   1. letting users describe a program, but not a model (a sequence of layers, or a graph of operators), and
   2. calling the `fluid.run` function that runs the program implicitly.
2. The program is described as a `ProgramDesc` protobuf message.
3. Function `Executor.Run` takes a block, instead of a `ProgramDesc`, as its parameter.
4. `fluid.run` calls `Executor.Run` to run the first block in the `ProgramDesc` message.
5. `Executor.Run`'s implementation is extremely simple -- it doesn't plan the execution nor create threads; instead, it runs on the current thread and executes intrinsics/operators' `Run` methods sequentially as they appear in the `Block.ops` array.
6. Intrinsics/operators' `Run` methods might create threads. For example, the `ListenAndDo` operator creates a thread to handle each incoming request.
7. Threads are not necessarily OS threads; instead, they could be [green threads](https://en.wikipedia.org/wiki/Green_threads) managed by ThreadPool. Multiple green threads might run on the same OS thread. An example of green threads is Go's [goroutines](https://tour.golang.org/concurrency/1).