From 1c710053292394154d41db6c44ea808a8feaf65c Mon Sep 17 00:00:00 2001 From: Helin Wang Date: Sun, 10 Sep 2017 15:03:58 -0700 Subject: [PATCH 1/9] Design Doc: Session --- doc/design/session.md | 62 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 62 insertions(+) create mode 100644 doc/design/session.md diff --git a/doc/design/session.md b/doc/design/session.md new file mode 100644 index 0000000000..2e8c0ece7a --- /dev/null +++ b/doc/design/session.md @@ -0,0 +1,62 @@ +# Design Doc: Session + +## Abstract + +This design doc proposes to have an object called *Session* which +encapsulates the environment in which the computation graph is +executed. + +## Background + +A computation graph is executed in an environment which contains the +[scope](./scope.md) and other states. PaddlePaddle used to only have +an implicit global session on which `paddle.eval()` is executed. + +This has the limitation that the user can not create two independent +environments. For example, in reinforcement learning, the user may +want to have a stale model for inference and a fresh model for +training, and only replace the stale model with the fresh model +periodically. Also, we have no concept that can encapsulate a remote +environment that could execute a computation graph. + +## Session + +Session is an object that owns all runtime states such as scope, +reader OP's file handles, connection to a remote PaddlePaddle cluster, +etc. + +Session has two methods: `eval` and `close`. `eval` executes the +target OP in a given graph, and `close` closes the session and +releases all related resources: + +```Python +a = paddle.constant(1.0) +b = paddle.constant(2.0) +c = a + b +sess = paddle.session() +sess.eval(c) +sess.close() +``` + +### Remote Session + +Paddle Cloud will support user creating a remote session pointing to +the Paddle Cloud cluster. The user can send the computation graph to +be executed on the Paddle Cloud. In this way, the user can control a +cluster from her local computer: + +```Python +reader = paddle.reader.recordio("/pfs/home/peter/mnist-train-*") # data stored on Paddle Cloud +image = reader.column(0) +label = reader.column(1) +fc1 = paddle.op.fc(image, size=256, act="sigmoid") +fc2 = paddle.op.fc(fc1, size=10, act="softmax") +cost = paddle.op.cross_entropy(fc2) +opt = paddle.optimizer.sgd(cost) + +remote_config = ... # remote configuration such as endpoint, number of nodes and authentication. +sess = paddle.remoteSession(remote_config) +for i in range(1000): + sess.eval(opt) +sess.close() +``` From 94dfd8649e06108bc0c03e6f53eb43ab13f30332 Mon Sep 17 00:00:00 2001 From: Helin Wang Date: Mon, 25 Sep 2017 16:02:58 -0700 Subject: [PATCH 2/9] fix according to comments --- doc/design/session.md | 34 ++++++++++++++++++++++------------ 1 file changed, 22 insertions(+), 12 deletions(-) diff --git a/doc/design/session.md b/doc/design/session.md index 2e8c0ece7a..dc034c3906 100644 --- a/doc/design/session.md +++ b/doc/design/session.md @@ -6,26 +6,36 @@ This design doc proposes to have an object called *Session* which encapsulates the environment in which the computation graph is executed. +The session is able to distinguish running a graph locally or +remotely, using CPU only or using one or more GPUs. Different sessions +have different runtime environments such as [scopes](./scope.md) and +device contexts. + + ## Background -A computation graph is executed in an environment which contains the -[scope](./scope.md) and other states. 
PaddlePaddle used to only have -an implicit global session on which `paddle.eval()` is executed. +A computation graph runs in an environment which contains states such +as the scope and device contexts. The current design has an implicit +global session on which `paddle.eval()` is executed. + +Since the user is not able to explicitly switch between runtime +environments such as the scope and the device contexts, the user +cannot run a topology in two independent environments. For example, in +reinforcement learning, the user may want to have a stale model for +inference and a fresh model for training, and only replace the stale +model with the fresh model periodically. Also, we have no concept that +can encapsulate a remote environment that could execute a computation +graph. -This has the limitation that the user can not create two independent -environments. For example, in reinforcement learning, the user may -want to have a stale model for inference and a fresh model for -training, and only replace the stale model with the fresh model -periodically. Also, we have no concept that can encapsulate a remote -environment that could execute a computation graph. +We need a session concept to address above issues. ## Session -Session is an object that owns all runtime states such as scope, +A session is an object that owns all runtime states such as scope, reader OP's file handles, connection to a remote PaddlePaddle cluster, etc. -Session has two methods: `eval` and `close`. `eval` executes the +The session has two methods: `eval` and `close`. `eval` executes the target OP in a given graph, and `close` closes the session and releases all related resources: @@ -51,7 +61,7 @@ image = reader.column(0) label = reader.column(1) fc1 = paddle.op.fc(image, size=256, act="sigmoid") fc2 = paddle.op.fc(fc1, size=10, act="softmax") -cost = paddle.op.cross_entropy(fc2) +cost = paddle.op.cross_entropy(fc2, label) opt = paddle.optimizer.sgd(cost) remote_config = ... # remote configuration such as endpoint, number of nodes and authentication. From f24b5dffc42063ddfe28229fdb242c3df4ec1aa7 Mon Sep 17 00:00:00 2001 From: Helin Wang Date: Tue, 26 Sep 2017 18:03:37 -0700 Subject: [PATCH 3/9] Update Session design doc --- doc/design/refactor/session.md | 160 +++++++++++++++++++++++++++++++++ doc/design/session.md | 72 --------------- 2 files changed, 160 insertions(+), 72 deletions(-) create mode 100644 doc/design/refactor/session.md delete mode 100644 doc/design/session.md diff --git a/doc/design/refactor/session.md b/doc/design/refactor/session.md new file mode 100644 index 0000000000..5f58148f01 --- /dev/null +++ b/doc/design/refactor/session.md @@ -0,0 +1,160 @@ +# Design Doc: Session + +## Abstract + +The *session* object encapsulates the environment in which the +computation graph is executed. + +We will have *local* session and *remote* session, they offer the +same [interface](#interface). The local session encapsulates the local +runtime environment and the remote session encapsulates the cluster +runtime envrionment. + +The local runtime envrionment contains: + +1. computation devices (i.e., CPU, GPU) handles, and +1. the [scope](../scope.md) which holds all variables. + +The remote runtime envrionment contains: + +1. computation devices (i.e., CPU and GPU on node 0, 1) in a cluster, + and +1. the distributed [scope](../scope.md) in a cluster which holds all + variables. + +The user can create a remote session on Paddle Cloud and evaluate the +computation graph with it. 
In this way, the user can control the +remote computation resource in a cluster from his local computer. + + +## Background + +The current design has an implicit global session on which +`paddle.eval()` is executed. The pain point is: + +Since the user is not able to explicitly switch between runtime +environments such as the scope and the device contexts, the user +cannot run a topology in two independent environments. + +For example, in reinforcement learning, the user may want to have a +stale model for inference and a fresh model for training, and only +replace the stale model with the fresh model periodically. + +Furthermore, we have no concept that encapsulates a remote environment +that executes a computation graph. + +We need the session object to address above issues. + + +## Session + +A session is an object that owns the runtime environment. All +computations are executed through `session.eval`. + + +### Interface + +``` +eval( + targets, + feed_dict=None, +) +``` + +Evaluates the target Operations or Variables in `targets`. + +- *targets*: the evaluation targets. Can be a single Operation or + Variable, or a list with the Operations or Variables as elements. + + The value returned by `eval()` has the same shape as the `target` + argument. + + The computation graph is implicitly inferred from the targets. + +- *feed_dict*: a dictionary that contains the tensors which overrides + the edges of the computation graph. + +``` +close() +``` + +Closes the session. Calling this method releases the scope. + + +### Create a Local Session + +``` +session( + gpu_ids=None +) +``` + +Creates a new session. One session owns one scope, so creating +multiple sessions will create different scopes. + +- *gpu_ids*: a single `int` or a list of `int` of the GPU IDs to be + used as the computation devices. If not specified, all avaiable GPUs + will be used. + + +#### Example + +```Python +a = paddle.constant(1.0) +b = paddle.constant(2.0) +c = a + b +sess = paddle.session(gpu_ids=[0,1]) +sess.eval(c) +sess.close() +``` + +### Create a Remote Session + +``` +create_cloud_job( + name, + num_trainer, + mem_per_trainer, + gpu_per_trainer, + cpu_per_trainer, + num_ps, + mem_per_ps, + cpu_per_ps, +) +``` + +Creates a Paddle Cloud job. Fails if the job name exists. + +``` +get_cloud_job( + name +) +``` + +Gets a Paddle Cloud job. + +``` +remote_session( + job +) +``` + +- *job*: the Paddle Cloud job. + +#### Example + +```Python +reader = paddle.reader.recordio("/pfs/home/peter/mnist-train-*") # data stored on Paddle Cloud +image = reader.column(0) +label = reader.column(1) +fc1 = paddle.op.fc(image, size=256, act="sigmoid") +fc2 = paddle.op.fc(fc1, size=10, act="softmax") +cost = paddle.op.cross_entropy(fc2, label) +opt = paddle.optimizer.sgd(cost) + +job = paddle.create_cloud_job("test", 3, "1G", 1, 1, 2, "1G", 1) +sess = paddle.remote_ession(job) +for i in range(1000): + sess.eval(opt) +sess.close() +``` diff --git a/doc/design/session.md b/doc/design/session.md deleted file mode 100644 index dc034c3906..0000000000 --- a/doc/design/session.md +++ /dev/null @@ -1,72 +0,0 @@ -# Design Doc: Session - -## Abstract - -This design doc proposes to have an object called *Session* which -encapsulates the environment in which the computation graph is -executed. - -The session is able to distinguish running a graph locally or -remotely, using CPU only or using one or more GPUs. Different sessions -have different runtime environments such as [scopes](./scope.md) and -device contexts. 
- - -## Background - -A computation graph runs in an environment which contains states such -as the scope and device contexts. The current design has an implicit -global session on which `paddle.eval()` is executed. - -Since the user is not able to explicitly switch between runtime -environments such as the scope and the device contexts, the user -cannot run a topology in two independent environments. For example, in -reinforcement learning, the user may want to have a stale model for -inference and a fresh model for training, and only replace the stale -model with the fresh model periodically. Also, we have no concept that -can encapsulate a remote environment that could execute a computation -graph. - -We need a session concept to address above issues. - -## Session - -A session is an object that owns all runtime states such as scope, -reader OP's file handles, connection to a remote PaddlePaddle cluster, -etc. - -The session has two methods: `eval` and `close`. `eval` executes the -target OP in a given graph, and `close` closes the session and -releases all related resources: - -```Python -a = paddle.constant(1.0) -b = paddle.constant(2.0) -c = a + b -sess = paddle.session() -sess.eval(c) -sess.close() -``` - -### Remote Session - -Paddle Cloud will support user creating a remote session pointing to -the Paddle Cloud cluster. The user can send the computation graph to -be executed on the Paddle Cloud. In this way, the user can control a -cluster from her local computer: - -```Python -reader = paddle.reader.recordio("/pfs/home/peter/mnist-train-*") # data stored on Paddle Cloud -image = reader.column(0) -label = reader.column(1) -fc1 = paddle.op.fc(image, size=256, act="sigmoid") -fc2 = paddle.op.fc(fc1, size=10, act="softmax") -cost = paddle.op.cross_entropy(fc2, label) -opt = paddle.optimizer.sgd(cost) - -remote_config = ... # remote configuration such as endpoint, number of nodes and authentication. -sess = paddle.remoteSession(remote_config) -for i in range(1000): - sess.eval(opt) -sess.close() -``` From 757c76b83f3701c29efc88c546ec90a18952f98a Mon Sep 17 00:00:00 2001 From: Helin Wang Date: Thu, 28 Sep 2017 15:07:17 -0700 Subject: [PATCH 4/9] update according to comments --- doc/design/refactor/session.md | 74 +++++++++++++++++++++------------- 1 file changed, 47 insertions(+), 27 deletions(-) diff --git a/doc/design/refactor/session.md b/doc/design/refactor/session.md index 5f58148f01..9a7451ece5 100644 --- a/doc/design/refactor/session.md +++ b/doc/design/refactor/session.md @@ -5,17 +5,17 @@ The *session* object encapsulates the environment in which the computation graph is executed. -We will have *local* session and *remote* session, they offer the +We will have the *local* session and *remote* session, they offer the same [interface](#interface). The local session encapsulates the local runtime environment and the remote session encapsulates the cluster -runtime envrionment. +runtime environment. -The local runtime envrionment contains: +The local runtime environment contains: 1. computation devices (i.e., CPU, GPU) handles, and 1. the [scope](../scope.md) which holds all variables. -The remote runtime envrionment contains: +The remote runtime environment contains: 1. computation devices (i.e., CPU and GPU on node 0, 1) in a cluster, and @@ -29,12 +29,12 @@ remote computation resource in a cluster from his local computer. 
## Background -The current design has an implicit global session on which +The current design has an implicit global session in which `paddle.eval()` is executed. The pain point is: Since the user is not able to explicitly switch between runtime -environments such as the scope and the device contexts, the user -cannot run a topology in two independent environments. +environments, the user cannot run a topology in two independent +environments. For example, in reinforcement learning, the user may want to have a stale model for inference and a fresh model for training, and only @@ -49,12 +49,12 @@ We need the session object to address above issues. ## Session A session is an object that owns the runtime environment. All -computations are executed through `session.eval`. +computations are executed through `session.eval()`. ### Interface -``` +```python eval( targets, feed_dict=None, @@ -64,37 +64,57 @@ eval( Evaluates the target Operations or Variables in `targets`. - *targets*: the evaluation targets. Can be a single Operation or - Variable, or a list with the Operations or Variables as elements. + Variable, or a list with the Operations or Variables as + elements. The value returned by `eval()` has the same shape as the + `target` argument. + + The PaddlePaddle program is represented by + the [ProgramDesc](../design/program.md), `eval()` will infer the + ProgramDesc from the given targets and run the PaddlePaddle + program. Please + see + [this graph](./distributed_architecture.md#local-training-architecture) for + the detailed illustration for the local session + and + [this graph](./distributed_architecture.md#distributed-training-architecture) for + the detailed illustration for the remote session. + +- *feed_dict*: a dictionary that contains the tensors which override + the edges of the computation graph. - The value returned by `eval()` has the same shape as the `target` - argument. + feed_dict not only can provide the input data, it can override any + OP's input as well: - The computation graph is implicitly inferred from the targets. + ```python + a = pd.constant(1.0, name="a") + b = pd.constant(2.0) + c = pd.mul(a,b) + sess.eval(targets=c, feed_dict={"a":3.0}) # returns 6.0 + ``` -- *feed_dict*: a dictionary that contains the tensors which overrides - the edges of the computation graph. - -``` +```python close() ``` -Closes the session. Calling this method releases the scope. +Closes the session and releases the scope that the session owns. ### Create a Local Session -``` +```python session( - gpu_ids=None + devices=None ) ``` Creates a new session. One session owns one scope, so creating multiple sessions will create different scopes. -- *gpu_ids*: a single `int` or a list of `int` of the GPU IDs to be - used as the computation devices. If not specified, all avaiable GPUs - will be used. +- *devices*: a single `string` or a list of `string` of device names, + the corresponding devices will be the computation devices for + `eval()`. If not specified, all available devices (e.g., all GPUs) + will be used. The user doesn't need to specify the CPU device since + it will be always used. #### Example @@ -103,14 +123,14 @@ multiple sessions will create different scopes. 
a = paddle.constant(1.0) b = paddle.constant(2.0) c = a + b -sess = paddle.session(gpu_ids=[0,1]) +sess = paddle.session(devices=["gpu:0", "gpu:1", "fpga:0"]) sess.eval(c) sess.close() ``` ### Create a Remote Session -``` +```python create_cloud_job( name, num_trainer, @@ -125,7 +145,7 @@ create_cloud_job( Creates a Paddle Cloud job. Fails if the job name exists. -``` +```python get_cloud_job( name ) @@ -133,7 +153,7 @@ get_cloud_job( Gets a Paddle Cloud job. -``` +```python remote_session( job ) From 62de57e1ee28bdb349148028d079b4e3192ecb46 Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Tue, 3 Oct 2017 14:01:22 -0700 Subject: [PATCH 5/9] Update lod_tensor.md --- paddle/framework/lod_tensor.md | 168 +++++++++++++++++++-------------- 1 file changed, 97 insertions(+), 71 deletions(-) diff --git a/paddle/framework/lod_tensor.md b/paddle/framework/lod_tensor.md index 07bbdf9416..0fa14f3470 100644 --- a/paddle/framework/lod_tensor.md +++ b/paddle/framework/lod_tensor.md @@ -1,147 +1,173 @@ # Design Doc: LoD (Level-of-Detail) Tensor -PaddlePaddle's RNN doesn't require that all instances have the same length. To do so, we introduce an extension to Tensor, namely, LoD Tensor. +As other deep learning systems, PaddlePaddle supports training models from sequence data. Also, like other systems, PaddlePaddle represent a mini-batch of sequences as a Tensor. What is different is that PaddlePaddle doesn't require that all sequences in a mini-batch are of the same length. Thus no need for padding zeros. -## Challenge of Variable-length Inputs +| | TensorFlow | PaddlePaddle | +|-----------------------|------------|--------------| +| RNN | Support | Support | +| recursive RNN | Support | Support | +| padding zeros | Must | No need | +| blob data type | Tensor | LoDTensor | -People usually represent a mini-batch by a Tensor. For example, a mini-batch of 10 images, each of size 32x32, is a 10x32x32 Tensor. So a transformation, T, of all images can be a matrix multiplication of the 10xOx32-dimensional tensor T and the 10x32x32 Tensor. +PaddlePaddle achieves this flexibility by passing through a new data type, *LoD Tensor*, which is a Tensor attached with segmentation index known as *LoD*, between operators. The LoD index doesn't only segments a tensor, but also recursively segments sub-sequences. This document presents the design of LoD and LoDTensor. -Another example is that each mini-batch contains 32 sentences, where each word is a D-dimensional one-hot vector. If all sentences have the same length L, we can represent this mini-batch by a 32xLxD tensor. However, in most cases, sentences have variable lengths, and we will need an index data structure to record these variable lengths. -## LoD as a Solution +## The Challenge: Variable-length Sequences -### Mini-Batch of variable-length sentences +Most deep learning systems represent a mini-batch as a Tensor. For example, a mini-batch of 10 images, each of size 32x32, is a 10x32x32 Tensor. Another example is that each mini-batch contains N sentences, where each word is a D-dimensional one-hot vector. Suppose that all sentences have the same length L, we can represent this mini-batch by a NxLxD tensor. -Let's imagine a mini-batch of 3 variable lengths sentences, containing 3, 1, and 2 words respectively. We can represent it by a (3+1+2)xD tensor plus some index information: +Both examples show that the elements of sequences are usually of the same size. In the first example, all images are 32x32, and in the second one, all words are D-dimensional vectors. 
It doesn't make sense to allow variable-sized images, as that would require transformations like convolution represented by variable-sized Tensors. + +The real challenge is that in most cases, sentences have variable lengths, and we will need an index data structure to segment the tensor into sequences. Also, sequences might consist of sub-sequences. + +## A Solution: The LoD Index + +Let is visit this challenge from examples. + +### A Mini-Batch of Sentences + +Let's imagine a mini-batch of 3 variable lengths sentences composed by 3, 1, and 2 words respectively. We can represent it by a (3+1+2)xD tensor plus some index information: ``` - 3 3 1 2 ||| | || ``` -Each `|` represents a D-dimensional word vectors. The number 3 on top indicate 3 sentences, and numbers 3, 1, and 2 on the second level represent the number of words in each sentence. +where each `|` represents a D-dimensional word vector. The numbers, 3, 1, and 2, form a 1-level LoD. + +### Recursive Sequences + +Let check another example of a 2-level LoD Tensor. Consider a mini-batch of three articles with 3, 1, and 2 sentences, and each sentence consists of words: + +``` +3 1 2 +3 2 4 1 2 3 +||| || |||| | || ||| +``` -### Mini-Batch of variable-length videos +### A Mini-Batch of Videos -This approach generalizes to the case where elements are not words, but higher dimensional objects, like images. Suppose that a mini-batch contains videos of the same frame size 640x480. If a mini-batch contains 3 videos of 3, 1, and 2 frames respectively. The underlying tensor is of size (3+1+2)x640x480. The index information illustrates as: +LoD Tensor generalizes to the case where elements are higher dimensional objects, like images. Suppose that a mini-batch contains videos of the same frame size 640x480. Here is a mini-batch of 3 videos with 3, 1, and 2 frames respectively. ``` - 3 3 1 2 口口口 口 口口 ``` -where each `口` represents an image. +The underlying tensor is of size (3+1+2)x640x480, and each `口` represents a 640x480 image. -### Mini-Batch of fixed-size images +### A Mini-Batch of Images -Let's get back to a typical example, image classification, where each mini-batch has M fixed-sized images. The LoD Tensor representation is +In traditional cases like a mini-batch with N fixed-sized images, the LoD Tensor representation is as ``` - M 1 1 1 1 1 口口口口 ... 口 ``` -The many 1's on the second level seem duplicated. For this particular case of 2 levels and the second level always have length 1, we can ignore the LoD index. - -### Design and summarization - -In summary, as long as that the essential elements (words or images) have the same size, we can represent mini-batches by a LoD Tensor: +It doesn't loss anything to ignore the many 1's in the index and to consider this LoD Tensor a usual Tensor: -- The underlying tensor has size LxD1xD2x..., where D1xD2... is the size of the essential elements, and -- The first dimension size L has an additonal property -- a LoD index as a nested vector: +``` +口口口口 ... 口 +``` - ```c++ - typedef std::vector> LoD; - ``` +### Model Parameters -- The LoD index is not necessary when there are only two levels and all elements of the second level have length 1. +A model parameter is just a usual Tensor, which, just like the above example, is a **0-level LoD Tensor**. -## Slicing of LoD Tensor +## The LoD Tensor -Consider that we have a network with three levels of RNN: the top level one handles articles, the second level one handles sentences, and the basic level one handles words. 
This network requires that mini-batches represented by 3 level LoD Tensor, for example, +Let us revisit above example of the 2-level LoD Tensor ``` - 3 3 1 2 3 2 4 1 2 3 ||| || |||| | || ||| ``` -To allow each level of RNN to handle its input, we define **the slicing of a LoD Tensor is defined as getting the j-th sequence on level i, or the -slice** +It is indeed a tree, where leaves are elementary sequences identified by **branches**. + +For example, the third sentence in above example is identified by branch <0,2>, where 0 indicates the first article with length 3, and 2 indicates the third sentence in this article with length 4. + +### The LoD Index -For example, the <2,1>-slice of above slice is +We can save the LoD index in above example ``` -2 -|| +3 1 2 +3 2 4 1 2 3 ``` -and the <1,2>-slice of above example is +in a not-full 2D matrix: +```c++ +typedef std::vector > LoD; ``` -2 -2 3 -|| ||| -``` -Let's go on slicing this slice. Its <1,1>-slice is +where + +- `LoD.size()` is the number of levels, or the maximum length of branches, +- `LoD[i][j]` is the length of the j-th segment at the i-th level. + +## The Offset Representation + +To quickly access elementary sequences, we adopt an offset representation -- instead of saving the lengths, we save the beginning and ending elements of sequences. + +In the above example, we accumulate the length of elementary sequences: ``` -1 -1 -| +3 2 4 1 2 3 ``` -### The Slicing Algorithm +into offsets -The algorithm, with over-simplified data structure, is defined as +``` +0 3 5 9 10 12 15 + = = = = = = + 3 2+3 4+5 1+9 2+10 3+12 +``` -```c++ -typedef std::vector> LoD; +so we know that the first sentence is from word 0 to word 3, and the second sentence from work 3 to word 5. -struct LoDTensor { - LoD lod_; - float* tensor_; -}; +Similarly, lengths in the top level LoD -LoDTensor Slice(const LoDTensor& lodt, int level, int sequence); +``` +3 1 2 ``` -Let us revisit the example above +is transformed into offsets of elements/words: ``` - 3 -3 1 2 -3 2 4 1 2 3 -||| || |||| | || ||| +0 9 10 15 + = = = + 3+2+4 1+9 2+3+10 ``` -Suppose that we want to retrieve the <1,2>-slice +so we can tell that the first article is from word 0 to word 9, and the second article is from word 9 to word 10. + +The complete offset representation is as follows: ``` -2 -2 3 -|| ||| +0 9 10 15 +0 3 5 9 10 12 15 +||| || |||| | || ||| ``` -we will need to find out the starting position of this slice by summing over all leaf nodes in `LoD` to the left of the slice, i.e., 3 + 2 + 4 + 1 = 10. +## Slicing of LoD Tensors + +When we use the above 2-level LoD Tensor as the input to a nested-RNN, we need to retrieve certain sequences. Here we define the sequence identified by branch as the **-slice**. -To avoid the traversal of the LoD tree at slicing time, we can do it at the construction time -- instead of saving the lengths of the next level in the LoD tree, we can save the starting offset of the next level. 
For example, above LoD Tensor can be transformed into +For example, the <2>-slice of above example is ``` - 0 -0 9 10 -0 3 5 9 10 12 -||| || |||| | || ||| +10 15 +10 12 15 + || ||| ``` -We don't really need the 0 on top, so the LoD Tensor could be +and the <2,0>-slice of above slice is ``` -0 9 10 -0 3 5 9 10 12 -||| || |||| | || ||| +10 12 + || ``` From 48a9ab4a0896b3102637fb7606b27bbf6b097bc3 Mon Sep 17 00:00:00 2001 From: Markus Kliegl Date: Tue, 3 Oct 2017 14:21:42 -0700 Subject: [PATCH 6/9] minor language fixes --- paddle/framework/lod_tensor.md | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/paddle/framework/lod_tensor.md b/paddle/framework/lod_tensor.md index 0fa14f3470..597bc48cf3 100644 --- a/paddle/framework/lod_tensor.md +++ b/paddle/framework/lod_tensor.md @@ -1,6 +1,6 @@ # Design Doc: LoD (Level-of-Detail) Tensor -As other deep learning systems, PaddlePaddle supports training models from sequence data. Also, like other systems, PaddlePaddle represent a mini-batch of sequences as a Tensor. What is different is that PaddlePaddle doesn't require that all sequences in a mini-batch are of the same length. Thus no need for padding zeros. +Like other deep learning systems, PaddlePaddle supports training models from sequence data. Also, like other systems, PaddlePaddle represent a mini-batch of sequences as a Tensor. What is different is that PaddlePaddle doesn't require all sequences in a mini-batch to be of the same length. Thus no need for padding zeros. | | TensorFlow | PaddlePaddle | |-----------------------|------------|--------------| @@ -9,24 +9,24 @@ As other deep learning systems, PaddlePaddle supports training models from seque | padding zeros | Must | No need | | blob data type | Tensor | LoDTensor | -PaddlePaddle achieves this flexibility by passing through a new data type, *LoD Tensor*, which is a Tensor attached with segmentation index known as *LoD*, between operators. The LoD index doesn't only segments a tensor, but also recursively segments sub-sequences. This document presents the design of LoD and LoDTensor. +PaddlePaddle achieves this flexibility by passing through a new data type, *LoD Tensor*, which is a Tensor attached with segmentation index known as *LoD*, between operators. The LoD index doesn't only segment a tensor, but also recursively segments sub-sequences. This document presents the design of LoD and LoDTensor. ## The Challenge: Variable-length Sequences Most deep learning systems represent a mini-batch as a Tensor. For example, a mini-batch of 10 images, each of size 32x32, is a 10x32x32 Tensor. Another example is that each mini-batch contains N sentences, where each word is a D-dimensional one-hot vector. Suppose that all sentences have the same length L, we can represent this mini-batch by a NxLxD tensor. -Both examples show that the elements of sequences are usually of the same size. In the first example, all images are 32x32, and in the second one, all words are D-dimensional vectors. It doesn't make sense to allow variable-sized images, as that would require transformations like convolution represented by variable-sized Tensors. +Both examples show that the elements of sequences are usually of the same size. In the first example, all images are 32x32, and in the second one, all words are D-dimensional vectors. It doesn't make sense to allow variable-sized images, as that would require transformations like convolution to handle variable-sized Tensors. 
The real challenge is that in most cases, sentences have variable lengths, and we will need an index data structure to segment the tensor into sequences. Also, sequences might consist of sub-sequences. ## A Solution: The LoD Index -Let is visit this challenge from examples. +To understand our solution, it is best to look at some examples. ### A Mini-Batch of Sentences -Let's imagine a mini-batch of 3 variable lengths sentences composed by 3, 1, and 2 words respectively. We can represent it by a (3+1+2)xD tensor plus some index information: +Let's imagine a mini-batch of 3 variable lengths sentences composed of 3, 1, and 2 words, respectively. We can represent the mini-batch by a (3+1+2)xD tensor plus some index information: ``` 3 1 2 @@ -37,7 +37,7 @@ where each `|` represents a D-dimensional word vector. The numbers, 3, 1, and 2 ### Recursive Sequences -Let check another example of a 2-level LoD Tensor. Consider a mini-batch of three articles with 3, 1, and 2 sentences, and each sentence consists of words: +Let check another example of a 2-level LoD Tensor. Consider a mini-batch of three articles with 3, 1, and 2 sentences, and each sentence consists of a variable number of words: ``` 3 1 2 @@ -47,7 +47,7 @@ Let check another example of a 2-level LoD Tensor. Consider a mini-batch of thr ### A Mini-Batch of Videos -LoD Tensor generalizes to the case where elements are higher dimensional objects, like images. Suppose that a mini-batch contains videos of the same frame size 640x480. Here is a mini-batch of 3 videos with 3, 1, and 2 frames respectively. +LoD tensors generalize to the case where elements are higher dimensional objects, like images. Suppose that a mini-batch contains videos of the same frame size 640x480. Here is a mini-batch of 3 videos with 3, 1, and 2 frames, respectively. ``` 3 1 2 @@ -65,7 +65,7 @@ In traditional cases like a mini-batch with N fixed-sized images, the LoD Tenso 口口口口 ... 口 ``` -It doesn't loss anything to ignore the many 1's in the index and to consider this LoD Tensor a usual Tensor: +In this case, we don't lose any information by ignoring the many 1's in the index and simply considering this LoD Tensor as a usual Tensor: ``` 口口口口 ... 口 @@ -91,7 +91,7 @@ For example, the third sentence in above example is identified by branch <0,2>, ### The LoD Index -We can save the LoD index in above example +We can save the LoD index in the above example ``` 3 1 2 @@ -129,13 +129,13 @@ into offsets so we know that the first sentence is from word 0 to word 3, and the second sentence from work 3 to word 5. 
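A minimal sketch of this length-to-offset conversion (the helper name `LengthsToOffsets` is illustrative only and not part of the existing code base):

```c++
#include <vector>

// Convert the per-sequence lengths of one LoD level into offsets.
// For example, lengths {3, 2, 4, 1, 2, 3} become {0, 3, 5, 9, 10, 12, 15},
// and the i-th sequence then spans elements [offsets[i], offsets[i+1]).
std::vector<size_t> LengthsToOffsets(const std::vector<size_t>& lengths) {
  std::vector<size_t> offsets(lengths.size() + 1, 0);
  for (size_t i = 0; i < lengths.size(); ++i) {
    offsets[i + 1] = offsets[i] + lengths[i];
  }
  return offsets;
}
```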
-Similarly, lengths in the top level LoD +Similarly, the lengths in the top level LoD ``` 3 1 2 ``` -is transformed into offsets of elements/words: +are transformed into offsets of elements/words as follows: ``` 0 9 10 15 @@ -148,9 +148,9 @@ so we can tell that the first article is from word 0 to word 9, and the second a The complete offset representation is as follows: ``` -0 9 10 15 -0 3 5 9 10 12 15 -||| || |||| | || ||| +0 9 10 15 +0 3 5 9 10 12 15 + ||| || |||| | || ||| ``` ## Slicing of LoD Tensors From e08367c80678804fc388004ba2ab72f754bc1143 Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Tue, 3 Oct 2017 15:53:42 -0700 Subject: [PATCH 7/9] Add few blank lines --- paddle/framework/lod_tensor.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/paddle/framework/lod_tensor.md b/paddle/framework/lod_tensor.md index 597bc48cf3..d147f1c425 100644 --- a/paddle/framework/lod_tensor.md +++ b/paddle/framework/lod_tensor.md @@ -20,6 +20,7 @@ Both examples show that the elements of sequences are usually of the same size. The real challenge is that in most cases, sentences have variable lengths, and we will need an index data structure to segment the tensor into sequences. Also, sequences might consist of sub-sequences. + ## A Solution: The LoD Index To understand our solution, it is best to look at some examples. @@ -75,6 +76,7 @@ In this case, we don't lose any information by ignoring the many 1's in the inde A model parameter is just a usual Tensor, which, just like the above example, is a **0-level LoD Tensor**. + ## The LoD Tensor Let us revisit above example of the 2-level LoD Tensor From a9e298bebef29390b815e431e1a475ab1417015a Mon Sep 17 00:00:00 2001 From: Helin Wang Date: Wed, 4 Oct 2017 13:40:32 -0700 Subject: [PATCH 8/9] fix according to comments --- doc/design/refactor/session.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/doc/design/refactor/session.md b/doc/design/refactor/session.md index 9a7451ece5..1d9a26683c 100644 --- a/doc/design/refactor/session.md +++ b/doc/design/refactor/session.md @@ -86,10 +86,10 @@ Evaluates the target Operations or Variables in `targets`. OP's input as well: ```python - a = pd.constant(1.0, name="a") - b = pd.constant(2.0) + a = pd.constant(2.0, name="a") + b = pd.variable(name="b") c = pd.mul(a,b) - sess.eval(targets=c, feed_dict={"a":3.0}) # returns 6.0 + sess.eval(targets=c, feed_dict={"b":3.0}) # returns 6.0 ``` ```python @@ -107,14 +107,14 @@ session( ) ``` -Creates a new session. One session owns one scope, so creating +Creates a new session. One session owns one global scope, so creating multiple sessions will create different scopes. - *devices*: a single `string` or a list of `string` of device names, the corresponding devices will be the computation devices for `eval()`. If not specified, all available devices (e.g., all GPUs) will be used. The user doesn't need to specify the CPU device since - it will be always used. + it will be always used. Multiple sessions can use the same device. 
#### Example From 2b204f048bf6599bdb9ba799769404dc5fd206a8 Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Wed, 4 Oct 2017 14:09:19 -0700 Subject: [PATCH 9/9] Rename platform::GetDeviceCount into platform::GetCUDADeviceCount --- paddle/memory/memory.cc | 2 +- paddle/platform/device_context_test.cc | 4 ++-- paddle/platform/gpu_info.cc | 4 ++-- paddle/platform/gpu_info.h | 2 +- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/paddle/memory/memory.cc b/paddle/memory/memory.cc index 6d5a74dafe..f816962890 100644 --- a/paddle/memory/memory.cc +++ b/paddle/memory/memory.cc @@ -77,7 +77,7 @@ BuddyAllocator* GetGPUBuddyAllocator(int gpu_id) { // GPU buddy allocator initialization std::call_once(gpu_allocator_flag, [&]() { - int gpu_num = platform::GetDeviceCount(); + int gpu_num = platform::GetCUDADeviceCount(); allocators.reserve(gpu_num); for (int gpu = 0; gpu < gpu_num; gpu++) { platform::SetDeviceId(gpu); diff --git a/paddle/platform/device_context_test.cc b/paddle/platform/device_context_test.cc index f4b00c57de..8bf5174c4a 100644 --- a/paddle/platform/device_context_test.cc +++ b/paddle/platform/device_context_test.cc @@ -20,7 +20,7 @@ TEST(Device, Init) { using paddle::platform::CUDADeviceContext; using paddle::platform::GPUPlace; - int count = paddle::platform::GetDeviceCount(); + int count = paddle::platform::GetCUDADeviceCount(); for (int i = 0; i < count; i++) { DeviceContext* device_context = new CUDADeviceContext(GPUPlace(i)); Eigen::GpuDevice* gpu_device = @@ -34,7 +34,7 @@ TEST(Device, CUDADeviceContext) { using paddle::platform::CUDADeviceContext; using paddle::platform::GPUPlace; - int count = paddle::platform::GetDeviceCount(); + int count = paddle::platform::GetCUDADeviceCount(); for (int i = 0; i < count; i++) { CUDADeviceContext* device_context = new CUDADeviceContext(GPUPlace(i)); Eigen::GpuDevice* gpu_device = device_context->eigen_device(); diff --git a/paddle/platform/gpu_info.cc b/paddle/platform/gpu_info.cc index be381a4e26..70ad611d5d 100644 --- a/paddle/platform/gpu_info.cc +++ b/paddle/platform/gpu_info.cc @@ -26,11 +26,11 @@ DEFINE_double(fraction_of_gpu_memory_to_use, 0.95, namespace paddle { namespace platform { -int GetDeviceCount() { +int GetCUDADeviceCount() { int count; PADDLE_ENFORCE( cudaGetDeviceCount(&count), - "cudaGetDeviceCount failed in paddle::platform::GetDeviceCount"); + "cudaGetDeviceCount failed in paddle::platform::GetCUDADeviceCount"); return count; } diff --git a/paddle/platform/gpu_info.h b/paddle/platform/gpu_info.h index ac884386dd..276783bbe4 100644 --- a/paddle/platform/gpu_info.h +++ b/paddle/platform/gpu_info.h @@ -28,7 +28,7 @@ const std::string kEnvFractionGpuMemoryToUse = "PADDLE_FRACTION_GPU_MEMORY_TO_USE"; //! Get the total number of GPU devices in system. -int GetDeviceCount(); +int GetCUDADeviceCount(); //! Get the current GPU device id in system. int GetCurrentDeviceId();
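For reference, a minimal sketch of a call site for the renamed helper, using only functions that appear in this patch (`GetCUDADeviceCount`, `SetDeviceId`, `GetCurrentDeviceId`); the standalone `main` wrapper is illustrative only:

```c++
#include <cstdio>

#include "paddle/platform/gpu_info.h"

int main() {
  // Enumerate every CUDA device visible to the process, making each one
  // current in turn -- the same pattern GetGPUBuddyAllocator uses above.
  int count = paddle::platform::GetCUDADeviceCount();
  for (int i = 0; i < count; ++i) {
    paddle::platform::SetDeviceId(i);
    std::printf("device %d of %d is current (id=%d)\n", i, count,
                paddle::platform::GetCurrentDeviceId());
  }
  return 0;
}
```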