update to develop branch.

revert-4814-Add_sequence_project_op
dangqingqing 7 years ago
commit b325213150

@ -31,6 +31,3 @@
- id: go-fmt
  types:
  - go
- id: gometalinter
  types:
  - go

@ -125,3 +125,8 @@ simple_attention
    :members: simple_attention
    :noindex:

dot_product_attention
---------------------
.. automodule:: paddle.v2.networks
    :members: dot_product_attention
    :noindex:

@ -0,0 +1,23 @@
# Executor Design Doc
## Motivation
We use an executor to do the runtime evaluation of a `ProgramDesc`.
## Overview
An executor takes a `ProgramDesc`, a `block_id` and a `Scope`. The `ProgramDesc` is a list of blocks, and each block contains the protobuf definition of all the parameters and operators. The `block_id` specifies the entrance block. And the `Scope` is the container of all the variable instances, which is persistent across different runs.
### What does executor do?
It evaluates all the operators in the `block_id`th block of a `ProgramDesc`.
### What does executor NOT do?
It does not do runtime optimization, meaning it does not intelligently parse the dependencies of each op, choose which ones to run, or decide the order in which they should be run.
It does not do graph partitioning, meaning dividing the `ProgramDesc` into several small pieces and executing them on different devices.
## Implementation
`Executor` evaluates a `ProgramDesc`. Essentially, it instantiates Variables and Operators, then runs all the operators in sequence. [[code]](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/executor.cc)
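The loop below is a minimal, self-contained sketch of that evaluation order. The struct definitions are toy stand-ins for illustration only; the real `ProgramDesc`, `Scope` and operator classes live in `paddle/framework` and differ in detail.
```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Toy stand-ins for the framework types named above (assumptions, not the
// real paddle::framework definitions).
struct OpDesc { std::string type; };
struct BlockDesc {
  std::vector<std::string> vars;  // variables declared in this block
  std::vector<OpDesc> ops;        // operators in execution order
};
struct ProgramDesc { std::vector<BlockDesc> blocks; };
struct Scope { std::map<std::string, float> vars; };  // persistent variable container

// The executor loop: instantiate every variable of the entrance block in the
// scope, then run the block's operators in declaration order.
void Run(const ProgramDesc& pdesc, int block_id, Scope* scope) {
  const BlockDesc& block = pdesc.blocks[block_id];
  for (const std::string& name : block.vars) scope->vars.emplace(name, 0.f);
  for (const OpDesc& op : block.ops) {
    // A real executor would create the operator here and dispatch its kernel.
    std::cout << "running op: " << op.type << "\n";
  }
}

int main() {
  ProgramDesc prog;
  prog.blocks.push_back({{"x", "w", "out"}, {{"mul"}, {"relu"}}});
  Scope scope;
  Run(prog, /*block_id=*/0, &scope);
  return 0;
}
```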


@ -0,0 +1,78 @@
# Design Doc: InferVarType
## The Problem Posed
A variable in our design can hold different types, such as `LoDTensor` and `SelectedRows`. An operator should be able to infer the variable types of its outputs.
For example, a `lookup table` operator takes two `LoDTensor`s; one is a float tensor holding the embedding table, the other is an int tensor holding word IDs. The gradient operator of `lookup table` will generate a `SelectedRows` as its output. A `sum` operator can take both `LoDTensor` and `SelectedRows` as its inputs and will generate a `LoDTensor` if any of its inputs is a `LoDTensor`; otherwise, the `sum` operator will generate `SelectedRows` as its output.
The variable type is constant at runtime. Every variable's type is either set by the user (input data and parameters) or inferred by the operator at compile time.
## Proposed Solution
The `InferVarType` is a compile-time function which is registered to each operator. The interface of that function is:
```c++
using InferVarTypeFN = std::function<
    void(const OpDescBind& /*op_desc*/, BlockDescBind* /*block*/)>;
```
It takes an operator description as its input and writes the output variable types into the block description.
The `InferVarTypeFN` will be registered in `OpInfo`, stored in its `infer_var_type_` field. The `OpInfo` should be:
```cpp
struct OpInfo {
  InferVarTypeFN infer_var_type_;
  ...
};
```
The default `InferVarType` sets the output type to `LoDTensor`. It can be implemented via `GetInferVarType()`:
```cpp
void DefaultInferVarType(const OpDescBind& op_desc, BlockDescBind* block) {
  // set the output type of variable as `LoDTensor`.
  // ...
}

struct OpInfo {
  InferVarTypeFN infer_var_type_;
  InferVarTypeFN GetInferVarType() const {
    if (infer_var_type_) {
      return infer_var_type_;
    } else {
      return DefaultInferVarType;
    }
  }
};
```
## Register InferVarType
We provide a thin base class for registering an `InferVarTypeFN`. Using a base class eases the implementation of the registry, since we can detect whether a registry entry is an `InferVarTypeFN` or not.
```cpp
class VarTypeInferer {
 public:
  virtual void operator()(const OpDescBind& op_desc, BlockDescBind* block) const = 0;
};
```
Operator developers can write a specialized `VarTypeInferer` as follows.
```cpp
class SpecialVarTypeInferer : public VarTypeInferer {
 public:
  virtual void operator()(const OpDescBind& op_desc, BlockDescBind* block) const {
    // .. own logic
  }
};
```
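For instance, the `sum` rule described in the problem statement above could be written as the following inferer. This is a sketch: the accessor names `Input`, `Output`, `Var`, `GetType` and `SetType` on `OpDescBind`/`BlockDescBind` are assumptions for illustration.
```cpp
// Hypothetical inferer for `sum`: the output is a `LoDTensor` if any input
// is a `LoDTensor`; otherwise it is a `SelectedRows`.
class SumVarTypeInferer : public VarTypeInferer {
 public:
  void operator()(const OpDescBind& op_desc, BlockDescBind* block) const override {
    auto out_type = VarDesc::SELECTED_ROWS;
    for (auto& name : op_desc.Input("X")) {
      if (block->Var(name)->GetType() == VarDesc::LOD_TENSOR) {
        out_type = VarDesc::LOD_TENSOR;
        break;
      }
    }
    block->Var(op_desc.Output("Out").front())->SetType(out_type);
  }
};
```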
Then users can register the `InferVarType` just like `GradOpDescMaker` and `OpInfoMaker`:
```cpp
REGISTER_OPERATOR(some_op, OpType, SpecialVarTypeInferer, ...);
```

@ -0,0 +1,63 @@
# Prune
## Motivation
We want to support running inference, training and checkpointing in one `ProgramDesc`. We implement a
`void Prune(const ProgramDesc* input, ProgramDesc* output)` function, which takes a `ProgramDesc`
and generates a pruned `ProgramDesc`.
## Challenge
Pruning needs to support both variables and operators as evaluation targets. Consider the following
situations.
```python
# Case 1: run the forward pass.
cost_np = session.run(target=cost)
# Case 2: run the backward pass.
opts_np, _ = session.run(target=[cost, opt])
# Case 3: run checkpointing
_ = session.run(target=checkpoint)
```
## Solution
To support evaluation of operators, we add an `is_target` field in the `OpDesc`.
```c++
message OpDesc {
  required string type = 3;
  repeated Var inputs = 1;
  repeated Var outputs = 2;
  repeated Attr attrs = 4;
  optional bool is_target = 5 [ default = false ];
};
```
To support evaluation of variables, we add [fetch_op](https://github.com/PaddlePaddle/Paddle/pull/4599).
For each variable in the `target`, we insert a `fetch_op` into the `ProgramDesc`, with the variable being
the `fetch_op`'s input. Then we also mark the `fetch_op` as a target.
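A sketch of that insertion, assuming the protobuf-generated mutators for the `OpDesc` message shown above and a `BlockDesc` container holding the op list (both names are illustrative here):
```c++
// Append a fetch_op that consumes `var_name` and mark the new op itself as
// a target. The method names follow the usual protobuf-generated API and
// are assumptions in this sketch.
void AppendFetchOp(const std::string& var_name, BlockDesc* block) {
  OpDesc* fetch = block->add_ops();
  fetch->set_type("fetch_op");
  fetch->add_inputs()->add_arguments(var_name);
  fetch->set_is_target(true);
}
```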
### Algorithm
If an operator needs to be run, it must fall into one of the following cases:
1. It is the target.
2. Some other op depends on it, meaning its output is some other op's input.
The first case can be checked by `op_desc.is_target()`. The second case can be implemented as:
```c++
bool HasDependentVar(const OpDesc& op_desc, const std::set<std::string>& dependent_vars) {
  for (auto& var : op_desc.outputs()) {
    for (auto& argu : var.arguments()) {
      if (dependent_vars.count(argu) != 0) {
        return true;
      }
    }
  }
  return false;
}
```
Then the whole algorithm can be implemented as the following [code](https://github.com/tonyyang-svail/Paddle/blob/prune_impl/paddle/framework/prune.cc).
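In outline, that code does the following (a sketch with assumed accessors mirroring the protobuf messages above; the linked `prune.cc` is authoritative): scan the block's ops in reverse, keep an op when it is a target or feeds an already-kept op, and record the inputs of every kept op as new dependencies.
```c++
void Prune(const ProgramDesc& input, ProgramDesc* output, int block_id) {
  std::set<std::string> dependent_vars;
  std::vector<OpDesc> kept;
  const auto& ops = input.blocks(block_id).ops();
  // Reverse scan: an op is kept if it is a target or some kept op needs it.
  for (auto it = ops.rbegin(); it != ops.rend(); ++it) {
    if (it->is_target() || HasDependentVar(*it, dependent_vars)) {
      for (const auto& var : it->inputs()) {
        for (const auto& argu : var.arguments()) dependent_vars.insert(argu);
      }
      kept.push_back(*it);
    }
  }
  std::reverse(kept.begin(), kept.end());  // restore execution order
  // ... copy `kept` into the corresponding block of `output`.
}
```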

@ -179,40 +179,104 @@ init_attr={
`optimize_op_attrs` is not in the `VarDesc` message, but kept in the Python instance, as it will be used in the Python space when creating the optimize operator's `OpDesc`, and will be in the `OpDesc` message.
## Layer Function
A layer is a Python function that creates some operators and variables. Layers simplify the work of application programmers.
Layer functions take `Variable` and configuration parameters as their input and return the output variable(s).
For example, `FullyConnected` takes one or more variables as its input. The input could be input data or another layer's output. There are many configuration options for a `FullyConnected` layer, such as layer size, activation, parameter names, initialization strategies of parameters, and so on. The `FullyConnected` layer will return an output variable.
### Necessity for reusing code between layer functions
There is a lot of code that can be reused, such as:
* Giving default configuration values, e.g., the default initialization strategy for parameters is uniform random with `min = -1.0`, `max = 1.0`, and the default initialization strategy for bias is to fill it with zero.
* Appending the activation operator.
* Creating a temporary variable.
* Creating a parameter.
* Generating a unique name.
* Adding a bias.
* ...
A mechanism to reuse code between layer functions is necessary. It would take around [150 lines of code](https://github.com/PaddlePaddle/Paddle/pull/4724/files#diff-823b27e07e93914ada859232ae23f846R12) to write a `FullyConnected` layer without any helper functions.
### Comparison between global functions and helper class
The `FullyConnected` layer will be as follows when we provide global functions:
```python
def fc_layer(input, size, param_attr=None, bias_attr=None, act=None, name=None):
    if name is None:
        name = unique_name("fc")
    input = multiple_input(input)
    param_attr = default_param_attr(param_attr)
    param_attr = multiple_param_attr(param_attr, len(input))

    # mul
    mul_results = []
    for ipt, attr in zip(input, param_attr):
        shape = ipt.shape[1:] + [size]
        w = g_program.global_block().create_parameter(shape, ipt.dtype, name, attr)
        tmp = create_tmp_var(name)
        g_program.current_block().append_op("mul", {ipt, w}, {tmp})
        mul_results.append(tmp)

    # add sum
    ...
    # add bias
    ...
    # add activation
    ...
    return out
```
We can provide many helper functions for layer developers. However, there are several disadvantages to global helper functions:
1. We need a namespace for these methods, so that layer developers can quickly figure out which methods they can use.
2. Global functions force layer developers to pass their parameters again and again.
So we provide a helper class, `LayerHelper`, to share code between layer functions. The `FullyConnected` layer will be as follows.
```python
def fc_layer(input, size, param_attr=None, bias_attr=None, act=None, name=None):
    helper = LayerHelper(locals())  # pass all parameters to LayerHelper

    mul_results = []
    for ipt, param in helper.iter_multiple_input_and_param():
        w = helper.create_parameter(shape=ipt.shape[1:] + [size], dtype=ipt.dtype)
        tmp = helper.create_tmp_variable()
        helper.append_op('mul', {ipt, w}, {tmp})
        mul_results.append(tmp)

    pre_bias = helper.add_sum(mul_results)
    pre_activation = helper.add_bias(pre_bias)
    return helper.add_activation(pre_activation)
```
This not only lets us write `fc_layer` with fewer lines of code but also makes the code clearer to understand. At the same time, layer developers can figure out which functions they can invoke by typing `helper.` in a Python editor.
### Implementation of layer helper
We keep all parameters of a layer function in a dictionary, stored in the layer helper as a private data member. Every method of the layer helper looks up this dictionary when it is invoked. That way, we can implement one layer helper for all layer functions, even though some layers do not contain some operators. For example, `activation` is used by fully connected and convolution layers, but a cross-entropy layer does not use it. The example code of `add_activation` is:
```python
class LayerHelper(object):
    def __init__(self, **kwargs):  # kwargs is short for `keyword arguments`
        self.kwargs = kwargs

    def add_activation(self, input_var):
        act = self.kwargs.get("act", None)  # default value is None
        if act is None:  # do nothing if no act
            return input_var
        tmp = self.create_tmp_variable()
        self.append_op(type=act, input=input_var, output=tmp)
        return tmp
```
## Optimizer

@ -177,9 +177,6 @@ REGISTER_OP(op_type, op_class, op_maker_class, grad_op_type, grad_op_class)
REGISTER_OP_WITHOUT_GRADIENT(op_type, op_class, op_maker_class)
```
### USE Macros
Make sure the registration process is executed and linked.
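For example (a sketch; `USE_OP` is the historical macro name in `paddle/framework/op_registry.h`):
```cpp
// Referencing the op in the translation unit that uses it keeps the static
// registration object from being discarded at link time.
USE_OP(mul);
```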
---
# Registration Process
1. Write an Op class and its gradient Op class, if required.
@ -188,8 +185,6 @@ Make sure the registration process is executed and linked.
1. Call maker class to complete `proto` and `checker`
2. Using the completed `proto` and `checker`, it will add a new key-value pair to the `OpInfoMap`
4. Invoke the `USE` macro in the file where the Op is used, to make sure that it is linked.
---
# Backward Module (1/2)
### Create Backward Operator

@ -0,0 +1,103 @@
# Regularization in PaddlePaddle
## Introduction to Regularization
A central problem in machine learning is how to design an algorithm that will perform well not just on the training data, but also on new data. Many strategies are used by machine learning practitioners to reduce the test error, possibly at the expense of increased training error. These strategies are collectively known as **regularization**.
### Parameter Norm Penalties
Most common regularization approaches in deep learning are based on limiting the capacity of the models by adding a parameter norm penalty to the objective function `J`. This is given as follows:
<img src="./images/loss_equation.png" align="center"/><br/>
The parameter `alpha` is a hyperparameter that weights the contribution of the norm penalty term, `omega`, relative to the standard objective function `J`.
The most commonly used norm penalties are the L2 norm penalty and the L1 norm penalty. These are given as follows:
##### L2 Regularization:
<img src="./images/l2_regularization.png" align="center"/><br/>
##### L1 Regularization
<img src="./images/l1_regularization.png" align="center"/><br/>
A much more detailed mathematical background of regularization can be found [here](http://www.deeplearningbook.org/contents/regularization.html).
## How to do Regularization in PaddlePaddle
On surveying existing frameworks like TensorFlow, PyTorch, Caffe, etc., it can be seen that there are two common approaches to regularization:
1. Making regularization a part of the optimizer, using an attribute like `weight_decay` to control the scale of the L2 penalty. This approach is used in PyTorch as follows:
```python
opt = torch.optim.SGD(params, lr=0.2, weight_decay=0.2)
```
At every optimization step, this code will add the gradient of the L2 norm of the params to the gradient of the params with respect to the loss function. This can be seen in the following code snippet:
```python
if weight_decay != 0:
    d_p.add_(weight_decay, p.data)
```
This is a very restrictive way of doing regularization and does not give users enough flexibility.
**Advantages**:
- It is easy for us to implement.
- Faster execution of the backward pass. However, it can be done manually by advanced users too.
**Disadvantages**:
- Not flexible for other regularizations such as L1/L0 regularization.
- Does not allow different regularization coefficients for different parameters. For example, in most models, only the weight matrices are regularized, while the bias vectors are unregularized.
- Tightly couples the optimizer and regularization implementations.
2. Adding regularization ops to the graph through the Python API. This approach is used by TensorFlow and Caffe. Using this approach, we manually add regularization ops to the graph and then add the regularization loss to the final loss function before sending it to the optimizer.
**Advantages**:
- Allows greater flexibility to the users of Paddle. Using this approach, users can apply different regularization to different parameters and also choose parameters that are not part of regularization.
- Makes it easy for the users to customize and extend the framework.
**Disadvantages**:
- The implementation requires a comprehensive design and more time.
## Proposal for Regularization in PaddlePaddle
### Low-Level implementation
In the new design, we propose to create new operations for regularization. For now, we can add two ops that correspond to the most frequently used regularizations:
- L2_regularization_op
- L1_regularization_op
These ops can be like any other ops, with their own CPU/GPU implementations, either using Eigen or separate CPU and GPU kernels. As the initial implementation, we can implement their kernels using Eigen, following the abstraction pattern implemented for [Activation Ops](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/accuracy_op.h). This abstraction pattern can make it very easy to implement new regularization schemes other than L1 and L2 norm penalties.
The idea of building ops for regularization is in sync with the refactored Paddle philosophy of using operators to represent any computation unit. The way these ops are added to the computation graph will be decided by the [layer functions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/python_api.md#layer-function) in the Python API.
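To make the computation concrete, the following is a minimal sketch of what an `L2_regularization_op` kernel would compute, written against plain Eigen rather than Paddle's actual `OpKernel` interface (the `0.5` factor is the usual convention for the L2 penalty; the values are illustrative):
```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  // A parameter matrix and a regularization coefficient (illustrative values).
  Eigen::MatrixXf W = Eigen::MatrixXf::Random(3, 4);
  const float alpha = 0.01f;

  // Forward: the norm penalty added to the objective, 0.5 * alpha * ||W||^2.
  const float penalty = 0.5f * alpha * W.squaredNorm();

  // Backward: gradient of the penalty w.r.t. W, added to W's loss gradient.
  const Eigen::MatrixXf dW = alpha * W;

  std::cout << "penalty = " << penalty << ", grad norm = " << dW.norm() << "\n";
  return 0;
}
```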
### Computation Graph
Below is an example of a really simple feed forward neural network.
<img src="./images/feed_forward.png" align="center"/><br/>
The Python API will modify this computation graph to add regularization operators. The modified computation graph will look as follows:
<img src="./images/feed_forward_regularized.png" align="center"/><br/>
### Python API implementation for Regularization
Using the low-level ops, `L2_regularization_op` and `L1_regularization_op`, any user can add regularization to their computation graphs. However, this would require many lines of code, so we should design Python APIs that support regularization. An example of such an API can be seen in [Keras](https://keras.io/regularizers/). As per the PaddlePaddle [Python API design](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/python_api.md), the layer functions are responsible for creating operators, operator parameters and variables. Since regularization is a property of parameters, it makes sense to create these ops in the layer functions.
#### Creation of Regularization ops
There are two possibilities for creating the regularization ops:
1. We create these ops immediately while building the computation graph.
2. We add these ops in a lazy manner, just before the backward pass, similar to the way the optimization ops are added.
The proposal is to add these ops in a lazy manner just before the backward pass.
#### Storage of Regularization attributes
Since we want to create the regularization ops in a lazy manner, the regularization attributes (the type of regularization and the weight of the regularization penalty) can be stored as attributes of the [`Parameter`](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/framework/framework.py#L421) class. This is because regularization is a property of the parameters, and storing regularization properties with `Parameter`s also allows for shared parameters.
#### High-level API
In the PaddlePaddle Python API, users will primarily rely on [layer functions](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/python_api.md#layer-function) to create neural network layers. Hence, we also need to provide regularization functionality in layer functions. The detailed design of these APIs can be postponed for now. A good reference for these APIs can be found in [Keras](https://keras.io/regularizers/) and also by looking at TensorFlow's [`tf.contrib.layers`](https://www.tensorflow.org/api_guides/python/contrib.layers).

@ -1,6 +1,6 @@
# Design Doc: Selected Rows
`SelectedRows` is a sparse tensor data type designed to support `embedding` operators. The gradient of an embedding table is a sparse tensor: only a few rows are non-zero values in this tensor. It is straightforward to represent a sparse tensor by the following sparse tensor data structure:
```cpp
class SelectedRows {
@ -11,7 +11,7 @@ class SelectedRows {
};
```
The field `height_` is the first dimension of `SelectedRows`. The `rows` are the indices of the non-zero rows of `SelectedRows`. The `value_` field is an N-dim tensor of shape `[rows.size() /* NUM_ROWS */, ...]`, which supplies values for each row. The dimension of `SelectedRows` satisfies `[height_] + value_.shape[1:]`.
Suppose that a SelectedRows-typed variable `x` has many rows, but only two of them have values -- row 73 is `[1, 2]` and row 84 is `[3, 4]`; the `SelectedRows` representation would be:
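In the document's pseudo-notation this looks roughly as follows (the `height` value here is illustrative, not taken from the original):
```
x = SelectedRows {
  rows = [73, 84],          // indices of the non-zero rows
  height = 8192,            // illustrative first dimension
  value = [[1, 2], [3, 4]]  // one stored row per index in `rows`
}
```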
@ -25,7 +25,7 @@ x = SelectedRow {
## SelectedRows in Protobuf
`SelectedRows` is a type of `Variable`. `VarDesc` in protobuf should describe the `SelectedRows` information. Only the tensor dimension of a `SelectedRows` will be described at compile time, because `rows_` and `value_` depend on the training data.
So we use `TensorDesc` to unify `data_type` and `dims`. A `LoDTensorDesc` contains a `TensorDesc` and `lod_level`. The description of `SelectedRows` is a tensor description.
```proto
@ -54,7 +54,7 @@ message VarDesc {
## InferShape for Selected Rows
Just like `LoD` information, the `InferShape` method will infer the output tensor type as well. The operator should decide whether its output is a `SelectedRows` or a dense tensor.
For example, the gradient operator of `TableLookup` will always generate `SelectedRows`. Its `InferShape` method should be like the following:
@ -68,7 +68,7 @@ void TableLookupGrad::InferShape(context) {
## Sparse Operators
There are several operators that need to be written to support `SelectedRows`. These are:
1. Operators which generate `SelectedRows` gradients, e.g. the gradient of `TableLookupOp`.
2. Optimize operators which support `SelectedRows` gradients, e.g. `SGD` or `AdaGrad` for `SelectedRows`. However, there should be only one `SGD` operator; `OpWithKernel::Run` should select a suitable kernel for either a dense tensor or `SelectedRows`.

@ -1,9 +1,46 @@
# Building the PaddlePaddle Library for Android
Users can cross-compile a PaddlePaddle library suitable for the Android platform in either of the following two ways:
- a Docker-container-based build
- a build based on a Linux cross-compiling environment
## Docker-Container-Based Build
Docker can run on all major operating systems, including Linux, Mac OS X and Windows, so with the Docker-container-based approach users can build the Android PaddlePaddle library on whichever development platform they are familiar with.
### Building PaddlePaddle's Android development image
We package PaddlePaddle's cross-compiling environment into an image, called the development image, which contains all the build tools needed to cross-compile the Android version of the PaddlePaddle library.
```bash
$ git clone https://github.com/PaddlePaddle/Paddle.git
$ cd Paddle
$ docker build -t username/paddle-android:dev . -f Dockerfile.android
```
### Building the PaddlePaddle C-API library
Once the development image is built, it can be used to build the Android PaddlePaddle C-API library.
The Android Docker development image provides two configurable arguments:
| Argument | Optional Values | Default |
|----------|-----------------|---------|
| `ANDROID_ABI` | `armeabi-v7a, arm64-v8a` | `armeabi-v7a` |
| `ANDROID_API` | `>= 21` | `21` |
- Build the PaddlePaddle library for `armeabi-v7a` and `Android API 21`:
```bash
$ docker run -it --rm -v $PWD:/paddle -e "ANDROID_ABI=armeabi-v7a" -e "ANDROID_API=21" username/paddle-android:dev
```
- Build the PaddlePaddle library for `arm64-v8a` and `Android API 21`:
```bash
$ docker run -it --rm -v $PWD:/paddle -e "ANDROID_ABI=arm64-v8a" -e "ANDROID_API=21" username/paddle-android:dev
```
When the above `docker run` commands are executed, the container runs the [paddle/scripts/docker/build_android.sh](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/scripts/docker/build_android.sh) script by default. The script records the CMake configuration commonly used for cross-compiling the Android PaddlePaddle library, automatically builds a standalone toolchain according to `ANDROID_ABI` and `ANDROID_API`, and then compiles and installs the library. Since the arm64 architecture requires an Android API of at least 21, when `ANDROID_ABI=arm64-v8a` and `ANDROID_API<21`, `Android API 21` will be used inside the Docker container. After the script finishes, the container will contain the generated PaddlePaddle C-API library under `$PWD/install_android`, with the third-party libraries under `$PWD/install_android/third_party`.
## Build Based on a Linux Cross-Compiling Environment
This section takes the Linux x86-64 platform as an example to introduce the method and steps for cross-compiling the Android PaddlePaddle library.
### Preparing the cross-compiling environment
To cross-compile PaddlePaddle from source, users need to prepare the cross-compiling environment in advance. The C/C++ cross-compiling toolchain for the Android platform is the [Android NDK](https://developer.android.com/ndk/downloads/index.html?hl=zh-cn); users can download a prebuilt package themselves, or fetch it with the following commands:
@ -13,18 +50,27 @@ unzip -q android-ndk-r14b-linux-x86_64.zip
```
The Android NDK contains the build tools and system libraries needed for all Android API levels and all architectures (arm/arm64/x86/mips). Users can build a [standalone toolchain](https://developer.android.google.cn/ndk/guides/standalone_toolchain.html?hl=zh-cn) for their target architecture and the minimum Android API level they need to support.
For example:
- Build a standalone toolchain for `armeabi-v7a` and `Android API 21`:
```bash
your/path/to/android-ndk-r14b-linux-x86_64/build/tools/make-standalone-toolchain.sh \
  --arch=arm --platform=android-21 --install-dir=your/path/to/arm_standalone_toolchain
```
This command generates a standalone toolchain in the `your/path/to/arm_standalone_toolchain` directory, targeting the 32-bit ARM architecture, supporting Android API level 21 as the minimum, and providing the compilers `arm-linux-androideabi-gcc (GCC) 4.9` and `clang 3.8`.
- Build a standalone toolchain for `arm64-v8a` and `Android API 21`:
```bash
your/path/to/android-ndk-r14b-linux-x86_64/build/tools/make-standalone-toolchain.sh \
  --arch=arm64 --platform=android-21 --install-dir=your/path/to/arm64_standalone_toolchain
```
This command generates a standalone toolchain in the `your/path/to/arm64_standalone_toolchain` directory, targeting the 64-bit ARM64 architecture, supporting Android API level 21 as the minimum, and providing the compilers `arm-linux-androideabi-gcc (GCC) 4.9` and `clang 3.8`.
Note: **PaddlePaddle requires that the build toolchain support an Android API level of at least 21**.
### Configuring cross-compiling arguments
CMake supports cross-compiling ([cmake-toolchains](https://cmake.org/cmake/help/v3.0/manual/cmake-toolchains.7.html#cross-compiling)). To simplify the cmake configuration, PaddlePaddle provides a toolchain file, [cmake/cross_compiling/android.cmake](https://github.com/PaddlePaddle/Paddle/blob/develop/cmake/cross_compiling/android.cmake), which supplies default compiler and build-flag settings for cross-compiling. Note that starting from version 3.7, CMake officially provides generic support for cross-compiling to Android; when PaddlePaddle detects that the user's CMake version is not lower than 3.7, it passes the user-supplied configuration straight through to CMake and lets CMake itself handle it. See [cmake-toolchains](https://cmake.org/cmake/help/v3.7/manual/cmake-toolchains.7.html#cross-compiling) for a detailed description of the arguments.
@ -36,32 +82,57 @@ CMake系统对交叉编译提供了支持[cmake-toolchains](https://cmake.org/cm
Optional configuration arguments for the Android platform:
- `ANDROID_STANDALONE_TOOLCHAIN`: the absolute path of the standalone toolchain, or its path relative to the build directory. PaddlePaddle's CMake system deduces the cross-compiler, the sysroot and the Android API level from this value; otherwise, users have to set these values manually when running cmake. No default value.
- `ANDROID_TOOLCHAIN`: the toolchain to use. Can be set to `gcc/clang`; the default is `clang`.
  - With CMake 3.7 and above, the `clang` toolchain is always used; below CMake 3.7, `ANDROID_TOOLCHAIN=gcc` can be set to use the `gcc` toolchain.
  - The `clang` compiler shipped with the official Android NDK requires the system to support `GLIBC 2.15` or above.
- `ANDROID_ABI`: the target architecture ABI. Currently `armeabi-v7a` and `arm64-v8a` are supported; the default is `armeabi-v7a`.
- `ANDROID_NATIVE_API_LEVEL`: the Android API level of the toolchain. If not set explicitly, PaddlePaddle deduces it from the value of `ANDROID_STANDALONE_TOOLCHAIN`.
- `ANDROID_ARM_MODE`: whether to use ARM mode.
  - When `ANDROID_ABI=armeabi-v7a`, it can be set to `ON/OFF`; the default is `ON`.
  - When `ANDROID_ABI=arm64-v8a`, it does not need to be set.
- `ANDROID_ARM_NEON`: whether to use NEON instructions.
  - When `ANDROID_ABI=armeabi-v7a`, it can be set to `ON/OFF`; the default is `ON`.
  - When `ANDROID_ABI=arm64-v8a`, it does not need to be set.
Other configuration arguments:
- `USE_EIGEN_FOR_BLAS`: whether to use the Eigen library for matrix computation. Can be set to `ON/OFF`; the default is `OFF`.
- `HOST_C/CXX_COMPILER`: the host C/C++ compiler, used when building the host-side protoc executable and the target-side OpenBLAS library. Defaults to the value of the environment variable `CC`; if `CC` is not set, it defaults to the `cc` compiler.
Commonly used cmake configurations are as follows:
```bash
cmake -DCMAKE_SYSTEM_NAME=Android \
      -DANDROID_STANDALONE_TOOLCHAIN=your/path/to/arm_standalone_toolchain \
      -DANDROID_ABI=armeabi-v7a \
      -DANDROID_ARM_NEON=ON \
      -DANDROID_ARM_MODE=ON \
      -DUSE_EIGEN_FOR_BLAS=ON \
      -DCMAKE_INSTALL_PREFIX=your/path/to/install \
      -DWITH_C_API=ON \
      -DWITH_SWIG_PY=OFF \
      ..
```
```bash
cmake -DCMAKE_SYSTEM_NAME=Android \
      -DANDROID_STANDALONE_TOOLCHAIN=your/path/to/arm64_standalone_toolchain \
      -DANDROID_ABI=arm64-v8a \
      -DUSE_EIGEN_FOR_BLAS=OFF \
      -DCMAKE_INSTALL_PREFIX=your/path/to/install \
      -DWITH_C_API=ON \
      -DWITH_SWIG_PY=OFF \
      ..
```
Users can also set other build arguments according to their own needs. For example, to minimize the size of the generated library, set `CMAKE_BUILD_TYPE` to `MinSizeRel`; for the fastest execution speed, set `CMAKE_BUILD_TYPE` to `Release`. You can also influence the PaddlePaddle build by manually setting `CMAKE_C/CXX_FLAGS_MINSIZEREL/RELEASE`.
**Performance TIPS**: to achieve the fastest computation speed, the following CMake configuration is recommended:
- set `CMAKE_BUILD_TYPE` to `Release`
- use the `clang` toolchain
- for `armeabi-v7a`, set `USE_EIGEN_FOR_BLAS=ON` to use Eigen for matrix computation; for `arm64-v8a`, set `USE_EIGEN_FOR_BLAS=OFF` to use OpenBLAS for matrix computation
### Building and installing
After the CMake configuration is done, execute the following commands; PaddlePaddle will automatically download and build all third-party dependencies, then build and install the PaddlePaddle inference library.
@ -72,4 +143,4 @@ make install
Note: if you have previously built PaddlePaddle for another platform in the source directory, first delete the `third_party` and `build` directories with `rm -rf`, to make sure that all third-party dependencies and the PaddlePaddle code are rebuilt against the new CMake configuration.
After the install command finishes, the `your/path/to/install` directory will contain `include`, `lib` and `third_party` subdirectories. `include` holds the C-API header files, `lib` holds the PaddlePaddle libraries built for the different Android ABIs, and `third_party` holds all the third-party libraries they depend on. PaddlePaddle is now installed; the generated files under `your/path/to/install` can be used in deep-learning Android apps. See the C-API documentation for how to call them.

@ -21,7 +21,7 @@ wmt14数据的提供文件在 `python/paddle/v2/dataset/wmt14.py <https://github
A recurrent neural network processes the sequence step by step over time. An example of the LSTM architecture is shown below.

.. image:: src/bi_lstm.jpg
   :align: center
Generally speaking, a recurrent network performs the following operations from :math:`t=1` to :math:`t=T`, or reversely from :math:`t=T` to :math:`t=1`.
@ -96,7 +96,7 @@ Sequence to Sequence Model with Attention
We will use the sequence to sequence model with attention
as an example to demonstrate how to configure complex recurrent neural network models. An illustration of the model is shown in the following figure.

.. image:: src/encoder-decoder-attention-model.png
   :align: center
In this model, the source sequence :math:`S = \{s_1, \dots, s_T\}`

@ -19,7 +19,7 @@ Simple Gated Recurrent Neural Network
A recurrent neural network processes a sequence one time step at a time. An example of the LSTM architecture is shown below.
.. image:: src/bi_lstm.jpg
   :align: center
Generally speaking, a recurrent network performs the following operations from :math:`t=1` to :math:`t=T`, or reversely from :math:`t=T` to :math:`t=1`.
@ -78,7 +78,7 @@ Sequence to Sequence Model with Attention
-----------------------------------------
We will use the sequence to sequence model with attention as an example to demonstrate how you can configure complex recurrent neural network models. An illustration of the sequence to sequence model with attention is shown in the following figure.
.. image:: src/encoder-decoder-attention-model.png
   :align: center
In this model, the source sequence :math:`S = \{s_1, \dots, s_T\}` is encoded with a bidirectional gated recurrent neural network. The hidden states of the bidirectional gated recurrent neural network :math:`H_S = \{H_1, \dots, H_T\}` are called the *encoder vector*. The decoder is a gated recurrent neural network. When decoding each token :math:`y_t`, the gated recurrent neural network generates a set of weights :math:`W_S^t = \{W_1^t, \dots, W_T^t\}`, which are used to compute a weighted sum of the encoder vector. The weighted sum of the encoder vector is utilized to condition the generation of the token :math:`y_t`.


@ -1,205 +0,0 @@
Image Classification Tutorial
=============================
In this tutorial, we will use the CIFAR-10 dataset to train a convolutional neural network and use it to classify images. As shown in the figure below, a convolutional neural network can recognize the main object in an image and output the classification result.
<center>![Image Classification](./image_classification.png)</center>
## Data Preparation
First, download the CIFAR-10 dataset. The official website of the CIFAR-10 dataset is:
<https://www.cs.toronto.edu/~kriz/cifar.html>
We have prepared a script that downloads the CIFAR-10 dataset from the official website, converts it to jpeg files, and stores them into a specific directory. Before using the script, please make sure that pillow and its dependencies are installed. You can install them with the following commands:
1. Install pillow
```bash
sudo apt-get install libjpeg-dev
pip install pillow
```
2. Download the dataset
```bash
cd demo/image_classification/data/
sh download_cifar.sh
```
The CIFAR-10 dataset contains 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The figure below shows all ten classes, with 10 images randomly drawn from each class.
<center>![Image Classification](./cifar.png)</center>
After the script finishes, we should get a folder named cifar-out, whose subfolder structure is as follows:
```
train
---airplane
---automobile
---bird
---cat
---deer
---dog
---frog
---horse
---ship
---truck
test
---airplane
---automobile
---bird
---cat
---deer
---dog
---frog
---horse
---ship
---truck
```
cifar-out contains the `train` and `test` folders, which hold the training set and the test set of CIFAR-10, respectively. Each of the two folders has 10 subfolders, and each subfolder stores the images of the corresponding class. After the images are organized into this structure, we are ready to train the classification model.
## Preprocessing
After downloading, the data still needs to be preprocessed into the Paddle format. We can do the preprocessing with the following commands:
```
cd demo/image_classification/
sh preprocess.sh
```
`preprocess.sh` calls `./demo/image_classification/preprocess.py` to preprocess the images:
```sh
export PYTHONPATH=$PYTHONPATH:../../
data_dir=./data/cifar-out
python preprocess.py -i $data_dir -s 32 -c 1
```
`./demo/image_classification/preprocess.py` takes the following arguments:
- `-i` or `--input` specifies the input data directory;
- `-s` or `--size` specifies the image size;
- `-c` or `--color` specifies whether the images are color or grayscale.
## Model Training
Before training, we first need to create a model configuration file. An example configuration is given below. **Note** that it is slightly different from the `vgg_16_cifar.py` file, because that file is also usable for prediction.
```python
from paddle.trainer_config_helpers import *

data_dir = 'data/cifar-out/batches/'
meta_path = data_dir + 'batches.meta'

args = {'meta': meta_path, 'mean_img_size': 32,
        'img_size': 32, 'num_classes': 10,
        'use_jpeg': 1, 'color': "color"}

define_py_data_sources2(train_list=data_dir + "train.list",
                        test_list=data_dir + 'test.list',
                        module='image_provider',
                        obj='processData',
                        args=args)

settings(
    batch_size=128,
    learning_rate=0.1 / 128.0,
    learning_method=MomentumOptimizer(0.9),
    regularization=L2Regularization(0.0005 * 128))

img = data_layer(name='image', size=3 * 32 * 32)
lbl = data_layer(name="label", size=10)

# small_vgg is predefined in trainer_config_helpers.network
predict = small_vgg(input_image=img, num_channels=3)
outputs(classification_cost(input=predict, label=lbl))
```
The first line imports the functions used to define networks.
```python
from paddle.trainer_config_helpers import *
```
`define_py_data_sources2` then sets up the Python data provider, where `args` is used in `image_provider.py`, which generates the image data and passes it to the Paddle system:
- `meta`: the mean value of the training set.
- `mean_img_size`: the height and width of the mean feature map.
- `img_size`: the height and width of the input image.
- `num_classes`: the number of classes.
- `use_jpeg`: the data storage format used during preprocessing.
- `color`: whether the images are in color.
`settings` configures the training algorithm. In the example below, the learning rate is set to 0.1 divided by the batch size, and the weight decay is 0.0005 multiplied by the batch size.
```python
settings(
    batch_size=128,
    learning_rate=0.1 / 128.0,
    learning_method=MomentumOptimizer(0.9),
    regularization=L2Regularization(0.0005 * 128)
)
```
`small_vgg` defines the network structure. Here we use a small VGG network. For a description of the VGG convolutional neural network, see [http://www.robots.ox.ac.uk/~vgg/research/very_deep/](http://www.robots.ox.ac.uk/~vgg/research/very_deep/).
```python
# small_vgg is predefined in trainer_config_helpers.network
predict = small_vgg(input_image=img, num_channels=3)
```
After the configuration is created, we can run the train.sh script to train the model.
```bash
config=vgg_16_cifar.py
output=./cifar_vgg_model
log=train.log
paddle train \
--config=$config \
--dot_period=10 \
--log_period=100 \
--test_all_data_in_one_period=1 \
--use_gpu=1 \
--save_dir=$output \
2>&1 | tee $log
python -m paddle.utils.plotcurve -i $log > plot.png
```
- Here we train in GPU mode. If you have no GPU environment, set `use_gpu=0`.
- `./demo/image_classification/vgg_16_cifar.py` is the network and data configuration file. A detailed description of each argument can be found in the command-line-argument documentation.
- The script `plotcurve.py` depends on Python's `matplotlib` module, so if it fails, you may need to install `matplotlib`.
After training finishes, the training and test error curves are saved to `plot.png` by the `plotcurve.py` script. An example of the plot is shown below:
<center>![Training and testing curves.](./plot.png)</center>
## Prediction
After training, the model and its parameters are saved under the path `./cifar_vgg_model/pass-%05d`. For example, the model of the 300th pass is saved at `./cifar_vgg_model/pass-00299`.
To classify an image, we can use `predict.sh`, which will output the predicted label:
```
sh predict.sh
```
predict.sh:
```
model=cifar_vgg_model/pass-00299/
image=data/cifar-out/test/airplane/seaplane_s_000978.png
use_gpu=1
python prediction.py $model $image $use_gpu
```
## Exercise
Train a bird image classifier with the VGG model on the CUB-200 dataset. The bird dataset, which contains photos of 200 bird species (mostly North American), can be downloaded at:
<http://www.vision.caltech.edu/visipedia/CUB-200.html>
## Delve into Details
### Convolutional Neural Network
A convolutional neural network is a feedforward neural network that uses convolution layers; it is well suited to building models that understand image content. A typical network is shown below:
![Convolutional Neural Network](./lenet.png)
A convolutional neural network contains the following layers:
- Convolutional layer: extracts features from an image or a feature map with the convolution operation.
- Pooling layer: downsamples feature maps with max-pooling.
- Fully connected layer: fully connects the neurons between the input and hidden layers.
Convolutional neural networks achieve astonishing performance on image classification because they exploit two important kinds of information in images: local correlation and spatial invariance. By alternating convolution and pooling operations, a convolutional neural network can represent both kinds of information well.
For how to define the layers of a network and how to connect them, please refer to the Layer documentation.

@ -1,221 +0,0 @@
Image Classification Tutorial
==============================
This tutorial will guide you through training a convolutional neural network to classify objects using the CIFAR-10 image classification dataset.
As shown in the following figure, the convolutional neural network can recognize the main object in images, and output the classification result.
<center>![Image Classification](./image_classification.png)</center>
## Data Preparation
First, download CIFAR-10 dataset. CIFAR-10 dataset can be downloaded from its official website.
<https://www.cs.toronto.edu/~kriz/cifar.html>
We have prepared a script to download and process the CIFAR-10 dataset. The script will download the CIFAR-10 dataset from the official website.
It will convert it to jpeg images and organize them into a directory with the structure required by this tutorial. Make sure that you have installed pillow and its dependencies.
Consider the following commands:
1. Install pillow and its dependencies
```bash
sudo apt-get install libjpeg-dev
pip install pillow
```
2. Download and prepare the data
```bash
cd demo/image_classification/data/
sh download_cifar.sh
```
The CIFAR-10 dataset consists of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
Here are the classes in the dataset, as well as 10 random images from each:
<center>![Image Classification](./cifar.png)</center>
After downloading and converting, we should find a directory (cifar-out) containing the dataset in the following format:
```
train
---airplane
---automobile
---bird
---cat
---deer
---dog
---frog
---horse
---ship
---truck
test
---airplane
---automobile
---bird
---cat
---deer
---dog
---frog
---horse
---ship
---truck
```
It has two directories: `train` and `test`. These two directories contain the training data and testing data of CIFAR-10, respectively. Each of these two folders contains 10 sub-folders, ranging from `airplane` to `truck`. Each sub-folder contains images with the corresponding label. After the images are organized into this structure, we are ready to train an image classification model.
## Preprocess
After the data has been downloaded, it needs to be pre-processed into the Paddle format. We can run the following command for preprocessing.
```
cd demo/image_classification/
sh preprocess.sh
```
`preprocess.sh` calls `./demo/image_classification/preprocess.py` to preprocess image data.
```sh
export PYTHONPATH=$PYTHONPATH:../../
data_dir=./data/cifar-out
python preprocess.py -i $data_dir -s 32 -c 1
```
`./demo/image_classification/preprocess.py` has the following arguments:
- `-i` or `--input` specifies the input data directory.
- `-s` or `--size` specifies the processed size of images.
- `-c` or `--color` specifies whether images are color images or gray images.
## Model Training
We need to create a model config file before training the model. An example of the config file (vgg_16_cifar.py) is listed below. **Note** that it is slightly different from the `vgg_16_cifar.py` file, which can also be used for prediction.
```python
from paddle.trainer_config_helpers import *

data_dir = 'data/cifar-out/batches/'
meta_path = data_dir + 'batches.meta'

args = {'meta': meta_path, 'mean_img_size': 32,
        'img_size': 32, 'num_classes': 10,
        'use_jpeg': 1, 'color': "color"}

define_py_data_sources2(train_list=data_dir + "train.list",
                        test_list=data_dir + 'test.list',
                        module='image_provider',
                        obj='processData',
                        args=args)

settings(
    batch_size=128,
    learning_rate=0.1 / 128.0,
    learning_method=MomentumOptimizer(0.9),
    regularization=L2Regularization(0.0005 * 128))

img = data_layer(name='image', size=3 * 32 * 32)
lbl = data_layer(name="label", size=10)

# small_vgg is predefined in trainer_config_helpers.network
predict = small_vgg(input_image=img, num_channels=3)
outputs(classification_cost(input=predict, label=lbl))
```
The first line imports Python functions for defining networks.
```python
from paddle.trainer_config_helpers import *
```
`define_py_data_sources2` is then used with the Python data provider
interface. The arguments in `args` are used in `image_provider.py`, which
yields image data and transforms it to the Paddle format.
- `meta`: the mean value of the training set.
- `mean_img_size`: the size of the mean feature map.
- `img_size`: the height and width of the input image.
- `num_classes`: the number of classes.
- `use_jpeg`: the data storage type used when preprocessing.
- `color`: specifies whether the images are in color.
`settings` specifies the training algorithm. In the following example,
it specifies the learning rate as 0.1 divided by the batch size, and the weight decay
as 0.0005 multiplied by the batch size.
```python
settings(
    batch_size=128,
    learning_rate=0.1 / 128.0,
    learning_method=MomentumOptimizer(0.9),
    regularization=L2Regularization(0.0005 * 128)
)
```
The `small_vgg` specifies the network. We use a small version of the VGG convolutional network as our network
for classification. A description of the VGG network can be found here: [http://www.robots.ox.ac.uk/~vgg/research/very_deep/](http://www.robots.ox.ac.uk/~vgg/research/very_deep/).
```python
# small_vgg is predefined in trainer_config_helpers.network
predict = small_vgg(input_image=img, num_channels=3)
```
After writing the config, we can train the model by running the script train.sh.
```bash
config=vgg_16_cifar.py
output=./cifar_vgg_model
log=train.log
paddle train \
--config=$config \
--dot_period=10 \
--log_period=100 \
--test_all_data_in_one_period=1 \
--use_gpu=1 \
--save_dir=$output \
2>&1 | tee $log
python -m paddle.utils.plotcurve -i $log > plot.png
```
- Here we use GPU mode to train. If you have no GPU environment, just set `use_gpu=0`.
- `./demo/image_classification/vgg_16_cifar.py` is the network and data configuration file. The meaning of the other flags can be found in the documentation of the command line flags.
- The script `plotcurve.py` requires the Python module `matplotlib`, so if it fails, maybe you need to install `matplotlib`.
After training finishes, the training and testing error curves will be saved to `plot.png` by the `plotcurve.py` script. An example of the plot is shown below:
<center>![Training and testing curves.](./plot.png)</center>
## Prediction
After we train the model, the model file as well as the model parameters are stored under the path `./cifar_vgg_model/pass-%05d`. For example, the model of the 300th pass is stored at `./cifar_vgg_model/pass-00299`.
To make a prediction for an image, one can run `predict.sh` as follows. The script will output the label of the classification.
```
sh predict.sh
```
predict.sh:
```
model=cifar_vgg_model/pass-00299/
image=data/cifar-out/test/airplane/seaplane_s_000978.png
use_gpu=1
python prediction.py $model $image $use_gpu
```
## Exercise
Train a bird image classifier using the VGG model and the CUB-200 dataset. The bird dataset, which contains photos of 200 bird species (mostly North American), can be downloaded at:
<http://www.vision.caltech.edu/visipedia/CUB-200.html>
## Delve into Details
### Convolutional Neural Network
A Convolutional Neural Network is a feedforward neural network that uses convolution layers. It is very suitable for building neural networks that process and understand images. A standard convolutional neural network is shown below:
![Convolutional Neural Network](./lenet.png)
A convolutional neural network contains the following layers:
- Convolutional layer: It uses convolution operation to extract features from an image or a feature map.
- Pooling layer: It uses max-pooling to downsample feature maps.
- Fully Connected layer: It transforms features through fully connected connections.
Convolutional Neural Networks achieve amazing performance for image classification because they exploit two important characteristics of images: *local correlation* and *spatial invariance*. By iteratively applying convolution and max-pooling operations, convolutional neural networks can represent these two characteristics of images well.
For more details of how to define layers and their connections, please refer to the documentation of layers.

