Merge branch 'develop' of https://github.com/baidu/Paddle into conv_op
commit 3705de6ddd

@@ -0,0 +1,106 @@
# Design Doc: Operation Graph Based Parameter Server

## Abstract

We propose an approach to implement the parameter server. In this
approach, there is no fundamental difference between the trainer and
the parameter server: they both run subgraphs, but the subgraphs serve
different purposes.

## Background

The previous implementation of the parameter server does not run a
subgraph. Parameter initialization, optimizer computation, network
communication, and checkpointing are implemented twice: once on the
trainer and once on the parameter server.

It would be great if we could write the code once and use it on both
the trainer and the parameter server: this reduces code duplication
and improves extensibility. Given that after the current refactor we
represent everything as a computing graph on the trainer, representing
everything as a computing graph on the parameter server becomes a
natural extension.

## Design

### Graph Converter

The *graph converter* converts the user-defined operation (OP) graph
into subgraphs to be scheduled on different nodes with the following
steps:

1. OP placement: the OPs will be placed on different nodes according
   to a heuristic that minimizes the estimated total computation
   time. Currently we use a simple heuristic that puts parameter
   variables on parameter server workers and everything else on
   trainer workers (a sketch follows these steps).

1. Add communication OPs to enable communication between nodes.

   We will need these OPs: *Send*, *Recv*, *Enqueue*, *Dequeue*.
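
As a rough illustration of step 1, here is a minimal C++ sketch of the
placement pass. Everything in it is an assumption for illustration (the
`OpDesc` struct, the `PlaceOps` function, the `"trainer"`/`"pserver:<i>"`
labels, and the round-robin assignment of parameters to parameter
servers), not PaddlePaddle's actual converter API.

```cpp
// Sketch of the simple heuristic: parameter variables go to parameter
// server workers (round-robin here, an assumption); every other OP
// stays on the trainer.
#include <string>
#include <unordered_map>
#include <vector>

struct OpDesc {
  std::string name;
  bool is_parameter;  // true for parameter variables such as W
};

std::unordered_map<std::string, std::string> PlaceOps(
    const std::vector<OpDesc>& ops, int num_pservers) {
  std::unordered_map<std::string, std::string> placement;
  int next = 0;
  for (const auto& op : ops) {
    if (op.is_parameter) {
      placement[op.name] = "pserver:" + std::to_string(next);
      next = (next + 1) % num_pservers;  // spread parameters across pservers
    } else {
      placement[op.name] = "trainer";
    }
  }
  return placement;
}
```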

Below is an example of converting the user-defined graph to the
subgraphs for the trainer and the parameter server:

<img src="src/local-graph.png" width="300"/>

After converting:

<img src="src/dist-graph.png" width="700"/>

1. The parameter variable W and its optimizer subgraph are placed on the parameter server.
1. Operators are added to the subgraphs.
   - *Send* sends data to the connected *Recv* operator. The
     scheduler on the receiving node will only schedule the *Recv*
     operator to run after the *Send* operator has run (the *Send* OP
     will mark the *Recv* OP runnable automatically).
   - *Enqueue* enqueues the input variable; it can block until space
     becomes available in the queue.
   - *Dequeue* outputs a configurable number of tensors from the
     queue. It will block until the queue holds the required number of
     tensors. A minimal sketch of these two queue OPs follows this
     list.
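
To make the *Enqueue*/*Dequeue* semantics above concrete, here is a
minimal C++ sketch of the blocking queue they describe. It is an
illustration under assumptions, not PaddlePaddle's implementation: the
`TensorQueue` name, the `capacity` bound, and the `min_count` parameter
are hypothetical stand-ins for whatever the real OPs use.

```cpp
// Sketch of a blocking tensor queue: Enqueue blocks while the queue is
// full; Dequeue blocks until at least `min_count` tensors are present.
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <iterator>
#include <mutex>
#include <vector>

template <typename Tensor>
class TensorQueue {
 public:
  explicit TensorQueue(size_t capacity) : capacity_(capacity) {}

  // Block until space becomes available, then push the tensor.
  void Enqueue(Tensor t) {
    std::unique_lock<std::mutex> lock(mu_);
    not_full_.wait(lock, [this] { return queue_.size() < capacity_; });
    queue_.push_back(std::move(t));
    not_empty_.notify_all();
  }

  // Block until `min_count` tensors are available (assumes
  // min_count <= capacity, otherwise this would deadlock), then pop
  // and return exactly that many.
  std::vector<Tensor> Dequeue(size_t min_count) {
    std::unique_lock<std::mutex> lock(mu_);
    not_empty_.wait(lock,
                    [this, min_count] { return queue_.size() >= min_count; });
    auto first = queue_.begin();
    auto last = first + static_cast<std::ptrdiff_t>(min_count);
    std::vector<Tensor> out(std::make_move_iterator(first),
                            std::make_move_iterator(last));
    queue_.erase(first, last);
    not_full_.notify_all();
    return out;
  }

 private:
  std::mutex mu_;
  std::condition_variable not_full_;
  std::condition_variable not_empty_;
  std::deque<Tensor> queue_;
  const size_t capacity_;
};
```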

### Benefits

- Model parallelism becomes easier to implement: it is an extension of
  the trainer-parameter server approach. We already have the
  communication OPs, but need to extend the graph converter's
  placement functionality.

- A user-defined optimizer is easier to add: the user can now express
  it as a subgraph.

- There is no more duplicated logic inside the trainer and the
  parameter server, as mentioned in the background section.

### Challenges

- It might be hard for the graph converter to cut a general graph
  (without any hint for which subgraph is the optimizer). We may need
  to label which subgraph inside the OP graph is the optimizer.

- It is important to balance the parameter shards across multiple
  parameter servers. If a single parameter is very big (e.g., in some
  word-embedding, fully connected, or softmax layers), we need to
  automatically partition the single parameter onto different
  parameter servers when possible. This is safe when the optimizer is
  element-wise, since then each element of the update depends only on
  the corresponding element of the parameter variable (a partitioning
  sketch follows this list).
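
To make the partitioning idea concrete, below is a hypothetical C++
sketch of row-wise sharding of one large parameter matrix across
parameter servers. The `Shard` struct and `PartitionRows` function are
made-up names for illustration; splitting by rows is only safe under
the element-wise-optimizer condition stated above.

```cpp
// Sketch: split `num_rows` rows of a parameter as evenly as possible
// across `num_servers` parameter servers.
#include <cstddef>
#include <vector>

struct Shard {
  size_t row_begin;  // first row of the shard (inclusive)
  size_t row_end;    // one past the last row of the shard (exclusive)
  int server;        // index of the parameter server that owns the shard
};

std::vector<Shard> PartitionRows(size_t num_rows, int num_servers) {
  std::vector<Shard> shards;
  const size_t base = num_rows / num_servers;
  const size_t extra = num_rows % num_servers;  // first `extra` shards get one extra row
  size_t row = 0;
  for (int s = 0; s < num_servers; ++s) {
    const size_t count = base + (static_cast<size_t>(s) < extra ? 1 : 0);
    shards.push_back({row, row + count, s});
    row += count;
  }
  return shards;
}
```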

### Discussion

- In the "Async SGD" figure, the "W" variable on the parameter server
  could be read and written concurrently. What is our locking
  strategy? E.g., should each variable have a C++ lock method to be
  invoked by every OP, or should there be a lock OP? (A toy version of
  the first option is sketched after this list.)

- Can the Enqueue OP be implemented under our current tensor design
  (i.e., by putting the input tensor into the queue tensor)?

- The *Dequeue* OP will have a variable number of outputs (depending
  on the `min_count` attribute). Does our current design support this?
  (A similar question applies to the *Add* OP.)
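
Purely as a toy illustration of the first option above (a C++ lock per
variable), and as an assumption rather than a decision, a sketch might
look like the following; `Variable` and `SgdUpdate` are hypothetical
names, and the tensor is simplified to a `std::vector<float>`.

```cpp
#include <mutex>
#include <vector>

// A variable that owns its lock; every OP touching `value` must hold `mu`.
struct Variable {
  std::vector<float> value;
  std::mutex mu;
};

// An optimizer OP performing w -= lr * grad under the variable's lock,
// so a concurrent reader or writer never observes a half-updated W.
// Assumes grad.size() == w->value.size().
void SgdUpdate(Variable* w, const std::vector<float>& grad, float lr) {
  std::lock_guard<std::mutex> guard(w->mu);
  for (size_t i = 0; i < w->value.size(); ++i) {
    w->value[i] -= lr * grad[i];
  }
}
```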

### References

[1] [TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45166.pdf)

@@ -0,0 +1,52 @@
/*
Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

#include <cuda.h>
#include <cuda_runtime.h>
#include <gtest/gtest.h>

#include "paddle/framework/lod_tensor.h"
#include "paddle/platform/assert.h"

// Doubles every element of `a` in place using a grid-stride loop; the
// host side later checks the result to confirm the LoD storage is
// accessible from the GPU.
__global__ void test(size_t* a, int size) {
  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < size;
       i += blockDim.x * gridDim.x) {
    a[i] *= 2;
  }
}

TEST(LoDTensor, LoDInGPU) {
  paddle::framework::Tensor tensor;
  paddle::framework::LoDTensor lod_tensor;
  paddle::platform::GPUPlace place(0);

  // One LoD level with 8 offsets; the last offset (14) must match the
  // first dimension of the tensor below.
  paddle::framework::LoD src_lod;
  src_lod.push_back(std::vector<size_t>{0, 2, 4, 6, 8, 10, 12, 14});

  tensor.Resize({14, 16});
  tensor.mutable_data<float>(place);

  lod_tensor.set_lod(src_lod);
  lod_tensor.set_tensor(&tensor);
  // lod_element(level, i) returns the i-th offset at the given level.
  CHECK_EQ(lod_tensor.lod_element(0, 2), 4UL);
  CHECK_EQ(lod_tensor.lod_element(0, 4), 8UL);

  auto lod = lod_tensor.lod();

  // Double every offset on the device, then wait for the kernel to finish.
  test<<<1, 8>>>(lod[0].data(), lod[0].size());
  cudaDeviceSynchronize();

  // Every offset should now be exactly twice its original value.
  for (size_t i = 0; i < src_lod[0].size(); ++i) {
    CHECK_EQ(lod[0].data()[i], src_lod[0].data()[i] * 2);
  }
}