diff --git a/benchmark/IntelOptimizedPaddle.md b/benchmark/IntelOptimizedPaddle.md
index 040f5ffa41..16c2390fd3 100644
--- a/benchmark/IntelOptimizedPaddle.md
+++ b/benchmark/IntelOptimizedPaddle.md
@@ -12,11 +12,11 @@ Machine:

 System: CentOS release 6.3 (Final), Docker 1.12.1.

-PaddlePaddle: paddlepaddle/paddle:latest (TODO: will rerun after 0.11.0)
-
-- MKL-DNN tag v0.10
-- MKLML 2018.0.20170720
+PaddlePaddle: paddlepaddle/paddle:latest (for MKLML and MKL-DNN), paddlepaddle/paddle:latest-openblas (for OpenBLAS)
+- MKL-DNN tag v0.11
+- MKLML 2018.0.1.20171007
 - OpenBLAS v0.2.20
+(TODO: will rerun after 0.11.0)

 On each machine, we will test and compare the performance of training on single node using MKL-DNN / MKLML / OpenBLAS respectively.
@@ -31,17 +31,37 @@ Input image size - 3 * 224 * 224, Time: images/second

 | BatchSize    | 64    | 128   | 256    |
 |--------------|-------|-------|--------|
-| OpenBLAS     | 7.82  | 8.62  | 10.34  |
-| MKLML        | 11.02 | 12.86 | 15.33  |
-| MKL-DNN      | 27.69 | 28.8  | 29.27  |
+| OpenBLAS     | 7.80  | 9.00  | 10.80  |
+| MKLML        | 12.12 | 13.70 | 16.18  |
+| MKL-DNN      | 28.46 | 29.83 | 30.44  |
+
+chart on batch size 128
+TBD
+
+ - ResNet-50
+
+| BatchSize    | 64    | 128   | 256    |
+|--------------|-------|-------|--------|
+| OpenBLAS     | 25.22 | 25.68 | 27.12  |
+| MKLML        | 32.52 | 31.89 | 33.12  |
+| MKL-DNN      | 81.69 | 82.35 | 84.08  |

 chart on batch size 128
 TBD

- - ResNet
 - GoogLeNet
+
+| BatchSize    | 64    | 128   | 256    |
+|--------------|-------|-------|--------|
+| OpenBLAS     | 89.52 | 96.97 | 108.25 |
+| MKLML        | 128.46| 137.89| 158.63 |
+| MKL-DNN      | 250.46| 264.83| 269.50 |
+
+chart on batch size 128
+TBD
+
 ### Laptop
 TBD
 ### Desktop
diff --git a/doc/design/reader/README.md b/doc/design/reader/README.md
index 320dccec3d..2cd4b6225b 100644
--- a/doc/design/reader/README.md
+++ b/doc/design/reader/README.md
@@ -1,25 +1,25 @@
 # Python Data Reader Design Doc

-At training and testing time, PaddlePaddle programs need to read data. To ease the users' work to write data reading code, we define that
+During the training and testing phases, PaddlePaddle programs need to read data. To help users write code that reads input data, we define the following:

-- A *reader* is a function that reads data (from file, network, random number generator, etc) and yields data items.
-- A *reader creator* is a function that returns a reader function.
-- A *reader decorator* is a function, which accepts one or more readers, and returns a reader.
-- A *batch reader* is a function that reads data (from *reader*, file, network, random number generator, etc) and yields a batch of data items.
+- A *reader*: a function that reads data (from file, network, random number generator, etc.) and yields data items.
+- A *reader creator*: a function that returns a reader function.
+- A *reader decorator*: a function that takes in one or more readers and returns a reader.
+- A *batch reader*: a function that reads data (from *reader*, file, network, random number generator, etc.) and yields a batch of data items.

-and provide function which converts reader to batch reader, frequently used reader creators and reader decorators.
+and also provide a function which converts a reader into a batch reader, along with frequently used reader creators and reader decorators.
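To make the four definitions above concrete, here is a minimal, self-contained sketch; the helper names (`random_int_reader_creator`, `limit`) are illustrative only and not part of the PaddlePaddle API:

```python
import random

# A *reader creator*: returns a reader function.
def random_int_reader_creator(low, high):
    # The returned *reader* yields single data items, never batches.
    def reader():
        while True:
            yield random.randint(low, high)
    return reader

# A *reader decorator*: takes a reader, returns a wrapped reader.
def limit(reader, n):
    def limited():
        for i, item in enumerate(reader()):
            if i >= n:
                return
            yield item
    return limited

reader = limit(random_int_reader_creator(0, 9), 5)
print(list(reader()))  # e.g. [3, 7, 1, 0, 8]
```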

 ## Data Reader Interface

-Indeed, *data reader* doesn't have to be a function that reads and yields data items. It can be any function with no parameter that creates a iterable (anything can be used in `for x in iterable`):
+A *data reader* doesn't have to be a function that reads and yields data items. It can be any function without parameters that creates an iterable (anything that can be used in `for x in iterable`), as follows:

 ```
 iterable = data_reader()
 ```

-Element produced from the iterable should be a **single** entry of data, **not** a mini batch. That entry of data could be a single item, or a tuple of items. Item should be of [supported type](http://www.paddlepaddle.org/doc/ui/data_provider/pydataprovider2.html?highlight=dense_vector#input-types) (e.g., numpy 1d array of float32, int, list of int)
+The item produced from the iterable should be a **single** entry of data and **not** a mini batch. The entry of data could be a single item or a tuple of items. Each item should be one of the [supported types](http://www.paddlepaddle.org/doc/ui/data_provider/pydataprovider2.html?highlight=dense_vector#input-types) (e.g., numpy 1d array of float32, int, list of int, etc.)

-An example implementation for single item data reader creator:
+An example implementation for a single item data reader creator is as follows:

 ```python
 def reader_creator_random_image(width, height):
@@ -29,7 +29,7 @@ def reader_creator_random_image(width, height):
     return reader
 ```

-An example implementation for multiple item data reader creator:
+An example implementation for a multiple item data reader creator is as follows:

 ```python
 def reader_creator_random_image_and_label(width, height, label):
     def reader():
@@ -40,9 +40,10 @@ def reader_creator_random_image_and_label(width, height, label):

 ## Batch Reader Interface

-*batch reader* can be any function with no parameter that creates a iterable (anything can be used in `for x in iterable`). The output of the iterable should be a batch (list) of data items. Each item inside the list must be a tuple.
+*Batch reader* can be any function without parameters that creates an iterable (anything that can be used in `for x in iterable`). The output of the iterable should be a batch (list) of data items. Each item inside the list should be a tuple.
+
+Here are some valid outputs:

-Here are valid outputs:
 ```python
 # a mini batch of three data items. Each data item consists of three columns of data, each of which is 1.
 [(1, 1, 1),
@@ -58,20 +59,22 @@ Here are valid outputs:

 Please note that each item inside the list must be a tuple; below is an invalid output:
 ```python
  # wrong, [1,1,1] needs to be inside a tuple: ([1,1,1],).
- # Otherwise it's ambiguous whether [1,1,1] means a single column of data [1, 1, 1],
- # or three column of datas, each of which is 1.
+ # Otherwise it is ambiguous whether [1,1,1] means a single column of data [1, 1, 1],
+ # or three columns of data, each of which is 1.
 [[1,1,1],
 [2,2,2],
 [3,3,3]]
 ```

-It's easy to convert from reader to batch reader:
+It is easy to convert from a reader to a batch reader:
+
 ```python
 mnist_train = paddle.dataset.mnist.train()
 mnist_train_batch_reader = paddle.batch(mnist_train, 128)
 ```

-Also easy to create custom batch reader:
+It is also straightforward to create a custom batch reader:
+
 ```python
 def custom_batch_reader():
     while True:
@@ -85,7 +88,8 @@ mnist_random_image_batch_reader = custom_batch_reader

 ## Usage

-batch reader, mapping from item(s) read to data layer, batch size and number of total pass will be passed into `paddle.train`:
+The batch reader, a mapping from item(s) to the data layer, the batch size, and the number of total passes are passed into `paddle.train` as follows:

 ```python
 # two data layers are created:
@@ -99,13 +103,13 @@ paddle.train(batch_reader, {"image":0, "label":1}, 128, 10, ...)

 ## Data Reader Decorator

-*Data reader decorator* takes a single or multiple data reader, returns a new data reader. It is similar to a [python decorator](https://wiki.python.org/moin/PythonDecorators), but it does not use `@` syntax.
+The *data reader decorator* takes in a single data reader or multiple data readers and returns a new data reader. It is similar to a [python decorator](https://wiki.python.org/moin/PythonDecorators), but it does not use the `@` syntax.

-Since we have a strict interface for data readers (no parameter, return a single data item). Data reader can be used flexiable via data reader decorators. Following are a few examples:
+Since we have a strict interface for data readers (no parameters, return a single data item), a data reader can be used in a flexible way via data reader decorators. Following are a few examples:

 ### Prefetch Data

-Since reading data may take time and training can not proceed without data. It is generally a good idea to prefetch data.
+Since reading data may take some time and training cannot proceed without data, it is generally a good idea to prefetch the data.

 Use `paddle.reader.buffered` to prefetch data:
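The unchanged usage snippet is elided between the hunks here. For intuition, a minimal sketch of what a buffering decorator can do, using a background thread and a bounded queue; this is illustrative only, not the actual `paddle.reader.buffered` implementation:

```python
import threading
from queue import Queue

def buffered(reader, size):
    # Wrap `reader` so items are prefetched into a bounded queue
    # by a background thread while the consumer keeps training.
    def buffered_reader():
        q = Queue(maxsize=size)
        end = object()  # sentinel marking exhaustion

        def fill():
            for item in reader():
                q.put(item)
            q.put(end)

        threading.Thread(target=fill, daemon=True).start()
        while True:
            item = q.get()
            if item is end:
                return
            yield item
    return buffered_reader
```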
@@ -117,9 +121,9 @@ buffered_reader = paddle.reader.buffered(paddle.dataset.mnist.train(), 100)

 ### Compose Multiple Data Readers

-For example, we want to use a source of real images (reusing mnist dataset), and a source of random images as input for [Generative Adversarial Networks](https://arxiv.org/abs/1406.2661).
+For example, say we want to use a source of real images (reusing the mnist dataset) and a source of random images as input for [Generative Adversarial Networks](https://arxiv.org/abs/1406.2661).

-We can do:
+We can do the following:

 ```python
 def reader_creator_random_image(width, height):
@@ -139,13 +143,13 @@ false_reader = reader_creator_bool(False)

 reader = paddle.reader.compose(paddle.dataset.mnist.train(), data_reader_creator_random_image(20, 20), true_reader, false_reader)
 # Skipped 1 because paddle.dataset.mnist.train() produces two items per data entry.
-# And we don't care second item at this time.
+# And we don't care about the second item at this time.
 paddle.train(paddle.batch(reader, 128), {"true_image":0, "fake_image": 2, "true_label": 3, "false_label": 4}, ...)
 ```

 ### Shuffle

-Given shuffle buffer size `n`, `paddle.reader.shuffle` will return a data reader that buffers `n` data entries and shuffle them before a data entry is read.
+Given the shuffle buffer size `n`, `paddle.reader.shuffle` returns a data reader that buffers `n` data entries and shuffles them before a data entry is read.

 Example:
 ```python
@@ -154,21 +158,21 @@ reader = paddle.reader.shuffle(paddle.dataset.mnist.train(), 512)
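The shuffle example itself is unchanged and therefore elided from the hunk above. Conceptually, a buffer-of-`n` shuffle decorator can be sketched as follows; again illustrative, not the actual `paddle.reader.shuffle` source:

```python
import random

def shuffle(reader, buf_size):
    # Accumulate up to `buf_size` entries, shuffle the buffer,
    # then drain it; repeat until the underlying reader is exhausted.
    def shuffled_reader():
        buf = []
        for item in reader():
            buf.append(item)
            if len(buf) >= buf_size:
                random.shuffle(buf)
                for b in buf:
                    yield b
                buf = []
        random.shuffle(buf)
        for b in buf:
            yield b
    return shuffled_reader
```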

 ## Q & A

-### Why reader return only a single entry, but not a mini batch?
+### Why does a reader return only a single entry, and not a mini batch?

-Always returning a single entry make reusing existing data readers much easier (e.g., if existing reader return not a single entry but 3 entries, training code will be more complex because it need to handle cases like batch size 2).
+Returning a single entry makes reusing existing data readers much easier (for example, if an existing reader returns 3 entries instead of a single entry, the training code will be more complicated because it needs to handle cases like a batch size of 2).

-We provide function `paddle.batch` to turn (single entry) reader into batch reader.
+We provide a function, `paddle.batch`, to turn a (single entry) reader into a batch reader.

-### Why do we need batch reader, isn't train take reader and batch_size as arguments sufficient?
+### Why do we need a batch reader, isn't it sufficient to give the reader and batch_size as arguments during training?

-In most of the case, train taking reader and batch_size as arguments would be sufficent. However sometimes user want to customize order of data entries inside a mini batch. Or even change batch size dynamically.
+In most cases, it would be sufficient to give the reader and batch_size as arguments to the train method. However, sometimes the user wants to customize the order of data entries inside a mini batch, or even change the batch size dynamically. For these cases, using a batch reader is very efficient and helpful.

-### Why use a dictionary but not a list to provide mapping?
+### Why use a dictionary instead of a list to provide mapping?

-We decided to use dictionary (`{"image":0, "label":1}`) instead of list (`["image", "label"]`) is because that user can easily resue item (e.g., using `{"image_a":0, "image_b":0, "label":1}`) or skip item (e.g., using `{"image_a":0, "label":2}`).
+Using a dictionary (`{"image":0, "label":1}`) instead of a list (`["image", "label"]`) gives the advantage that the user can easily reuse an item (e.g., using `{"image_a":0, "image_b":0, "label":1}`) or even skip an item (e.g., using `{"image_a":0, "label":2}`).

-### How to create custom data reader creator
+### How to create a custom data reader creator?

 ```python
 def image_reader_creator(image_path, label_path, n):
@@ -192,7 +196,7 @@ paddle.train(paddle.batch(reader, 128), {"image":0, "label":1}, ...)

 ### How is `paddle.train` implemented

-An example implementation of paddle.train could be:
+An example implementation of paddle.train is:

 ```python
 def train(batch_reader, mapping, batch_size, total_pass):
diff --git a/paddle/capi/examples/model_inference/dense/main.c b/paddle/capi/examples/model_inference/dense/main.c
index 876af2aa76..5eeaf7e31f 100644
--- a/paddle/capi/examples/model_inference/dense/main.c
+++ b/paddle/capi/examples/model_inference/dense/main.c
@@ -1,5 +1,6 @@
 #include <paddle/capi.h>
 #include <time.h>
+
 #include "../common/common.h"

 #define CONFIG_BIN "./trainer_config.bin"
@@ -27,20 +28,19 @@ int main() {
   CHECK(paddle_arguments_resize(in_args, 1));

   // Create input matrix.
-  paddle_matrix mat = paddle_matrix_create(/* sample_num */ 10,
+  paddle_matrix mat = paddle_matrix_create(/* sample_num */ 1,
                                            /* size */ 784,
                                            /* useGPU */ false);
   srand(time(0));

-  std::vector<float> input;
-  input.resize(784 * 10);
+  paddle_real* array;
+
+  // Get first row.
+  CHECK(paddle_matrix_get_row(mat, 0, &array));

-  for (int i = 0; i < input.size(); ++i) {
-    input[i] = rand() / ((float)RAND_MAX);
+  for (int i = 0; i < 784; ++i) {
+    array[i] = rand() / ((float)RAND_MAX);
   }
-
-  // Set value for the input matrix
-  CHECK(paddle_matrix_set_value(mat, input.data()));

   CHECK(paddle_arguments_set_value(in_args, 0, mat));
@@ -53,17 +53,18 @@ int main() {

   CHECK(paddle_arguments_get_value(out_args, 0, prob));

-  std::std::vector<float> result;
-  int height;
-  int width;
+  uint64_t height;
+  uint64_t width;

-  CHECK(paddle_matrix_get_shape(prob, &height, &width);
-  result.resize(height * width);
-  CHECK(paddle_matrix_get_value(prob, result.data()));
+  CHECK(paddle_matrix_get_shape(prob, &height, &width));
+  CHECK(paddle_matrix_get_row(prob, 0, &array));

-  printf("Prob: ");
+  printf("Prob: \n");
   for (int i = 0; i < height * width; ++i) {
-    printf("%.2f ", result[i]);
+    printf("%.4f ", array[i]);
+    if ((i + 1) % width == 0) {
+      printf("\n");
+    }
   }
   printf("\n");
diff --git a/paddle/gserver/layers/BatchNormBaseLayer.cpp b/paddle/gserver/layers/BatchNormBaseLayer.cpp
index bc7d1c83a4..925af31289 100644
--- a/paddle/gserver/layers/BatchNormBaseLayer.cpp
+++ b/paddle/gserver/layers/BatchNormBaseLayer.cpp
@@ -41,6 +41,7 @@ bool BatchNormBaseLayer::init(const LayerMap& layerMap,
     useGlobalStats_ = config_.use_global_stats();
   }
   movingAvgFraction_ = config_.moving_average_fraction();
+  epsilon_ = config_.epsilon();

   weight_.reset(new Weight(1, channels_, parameters_[0]));
   movingMean_.reset(new Weight(1, channels_, parameters_[1]));
diff --git a/paddle/gserver/layers/BatchNormBaseLayer.h b/paddle/gserver/layers/BatchNormBaseLayer.h
index e721d2d267..2ac3cd9d67 100644
--- a/paddle/gserver/layers/BatchNormBaseLayer.h
+++ b/paddle/gserver/layers/BatchNormBaseLayer.h
@@ -94,6 +94,8 @@ protected:
   bool useGlobalStats_;
   // used to compute moving mean and variance.
   real movingAvgFraction_;
+  // Epsilon is a small constant added to the variance in batch normalization for numerical stability.
+  real epsilon_;
 };

 }  // namespace paddle
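For context on the `epsilon_` field introduced above, a minimal NumPy sketch of the batch normalization formula it stabilizes; this is illustrative and independent of the Paddle implementation:

```python
import numpy as np

def batch_norm(x, gamma, beta, epsilon=1e-5):
    # Normalize each feature over the batch dimension; epsilon keeps the
    # division stable when the batch variance is close to zero.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta

x = np.random.randn(8, 4).astype(np.float32)
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```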
diff --git a/paddle/gserver/layers/BatchNormalizationLayer.cpp b/paddle/gserver/layers/BatchNormalizationLayer.cpp
index dacff25e59..25ab5cd927 100644
--- a/paddle/gserver/layers/BatchNormalizationLayer.cpp
+++ b/paddle/gserver/layers/BatchNormalizationLayer.cpp
@@ -22,8 +22,6 @@ namespace paddle {

 REGISTER_LAYER(batch_norm, BatchNormalizationLayer);

-const real BatchNormalizationLayer::EPS = 1E-5;
-
 bool BatchNormalizationLayer::init(const LayerMap& layerMap,
                                    const ParameterMap& parameterMap) {
   /* Initialize the basic parent class */
@@ -53,7 +51,7 @@ void BatchNormalizationLayer::calMeanAndStd(const MatrixPtr& mat) {

   calMovingMeanAndVar();

-  savedInvVar_->subScalar(-EPS);
+  savedInvVar_->subScalar(-epsilon_);
   savedInvVar_->sqrt2(*savedInvVar_);
 }

@@ -74,7 +72,7 @@ void BatchNormalizationLayer::setMeanAndStd() {
   savedInvVar_->copyFrom(*(movingVar_->getW()));
   savedInvVar_->downClip(real(0.0));

-  savedInvVar_->subScalar(-EPS);
+  savedInvVar_->subScalar(-epsilon_);
   savedInvVar_->sqrt2(*savedInvVar_);
 }

diff --git a/paddle/gserver/layers/BatchNormalizationLayer.h b/paddle/gserver/layers/BatchNormalizationLayer.h
index f6115801fc..1fdb5e2070 100644
--- a/paddle/gserver/layers/BatchNormalizationLayer.h
+++ b/paddle/gserver/layers/BatchNormalizationLayer.h
@@ -39,9 +39,6 @@ public:
   void backward(const UpdateCallback& callback = nullptr) override;

 protected:
-  /// Epsilon value used in the batch normalization formula.
-  static const real EPS;
-
   /// Load pre-calculated mean and std.
   void setMeanAndStd();

diff --git a/paddle/gserver/layers/CudnnBatchNormLayer.cpp b/paddle/gserver/layers/CudnnBatchNormLayer.cpp
index 49a9540c0b..8390b55026 100644
--- a/paddle/gserver/layers/CudnnBatchNormLayer.cpp
+++ b/paddle/gserver/layers/CudnnBatchNormLayer.cpp
@@ -21,8 +21,6 @@ namespace paddle {

 REGISTER_LAYER(cudnn_batch_norm, CudnnBatchNormLayer);

-const double CudnnBatchNormLayer::EPS = 1E-5;
-
 bool CudnnBatchNormLayer::init(const LayerMap& layerMap,
                                const ParameterMap& parameterMap) {
   /* Initialize the basic parent class */
@@ -61,6 +59,9 @@ void CudnnBatchNormLayer::forward(PassType passType) {
   real* movingMean = movingMean_->getW()->getData();
   real* movingVar = movingVar_->getW()->getData();

+  // cuDNN does not allow an epsilon value less than CUDNN_BN_MIN_EPSILON.
+  eps_ = std::max(CUDNN_BN_MIN_EPSILON, static_cast<double>(epsilon_));
+
   if (!useGlobalStats_) {
     REGISTER_TIMER_INFO("CudnnBatchFwTimer", getName().c_str());
     real* savedMean = savedMean_->getData();
@@ -75,7 +76,7 @@ void CudnnBatchNormLayer::forward(PassType passType) {
                                  1.0 - movingAvgFraction_,
                                  movingMean,
                                  movingVar,
-                                 EPS,
+                                 eps_,
                                  savedMean,
                                  savedInvVar);
   } else {
@@ -90,7 +91,7 @@ void CudnnBatchNormLayer::forward(PassType passType) {
           beta,
           movingMean,
           movingVar,
-          EPS);
+          eps_);
     } else {
       // There is a limitation in cudnn library.
       // When the batch size is larger than 1024 in cuDNN v5.1,
@@ -101,7 +102,7 @@ void CudnnBatchNormLayer::forward(PassType passType) {
           beta,
           movingMean,
           movingVar,
-          EPS,
+          eps_,
           batchSize,
           channels_,
           imageH_ * imageD_,
@@ -128,6 +129,9 @@ void CudnnBatchNormLayer::backward(const UpdateCallback& callback) {
   real* savedMean = savedMean_->getData();
   real* savedInvVar = savedInvVar_->getData();

+  // cuDNN does not allow an epsilon value less than CUDNN_BN_MIN_EPSILON.
+  eps_ = std::max(CUDNN_BN_MIN_EPSILON, static_cast<double>(epsilon_));
+
   auto create = [](MatrixPtr& m, size_t h, size_t w, real** p) {
     Matrix::resizeOrCreate(m, h, w, false, true);
     m->zeroMem();
@@ -157,7 +161,7 @@ void CudnnBatchNormLayer::backward(const UpdateCallback& callback) {
                                 gamma,
                                 gammaGrad,
                                 betaGrad,
-                                EPS,
+                                eps_,
                                 savedMean,
                                 savedInvVar);

diff --git a/paddle/gserver/layers/CudnnBatchNormLayer.h b/paddle/gserver/layers/CudnnBatchNormLayer.h
index 413efd4d3e..1a3f0c0cbf 100644
--- a/paddle/gserver/layers/CudnnBatchNormLayer.h
+++ b/paddle/gserver/layers/CudnnBatchNormLayer.h
@@ -14,6 +14,7 @@ limitations under the License. */

 #pragma once

+#include <cudnn.h>
 #include "BatchNormBaseLayer.h"
 #include "Layer.h"
 #include "paddle/utils/Stat.h"
@@ -46,12 +47,9 @@ public:

   void backward(const UpdateCallback& callback = nullptr) override;

 protected:
-  /**
-   * Epsilon value used in the batch normalization formula.
-   * Minimum allowed value is CUDNN_BN_MIN_EPSILON defined in cudnn.h.
-   * Same epsilon value should be used in forward and backward functions.
-   */
-  static const double EPS;
+  /// Epsilon value used in the batch normalization formula.
+  /// Same epsilon value should be used in forward and backward functions.
+  double eps_;

   /// Input/output tensor descriptor desc
   hl_tensor_descriptor ioDesc_;
diff --git a/paddle/gserver/layers/MKLDNNAddtoLayer.cpp b/paddle/gserver/layers/MKLDNNAddtoLayer.cpp
index 0f2b67fd75..39bffc26f7 100644
--- a/paddle/gserver/layers/MKLDNNAddtoLayer.cpp
+++ b/paddle/gserver/layers/MKLDNNAddtoLayer.cpp
@@ -38,12 +38,13 @@ bool MKLDNNAddtoLayer::init(const LayerMap& layerMap,
 }

 void MKLDNNAddtoLayer::reshape(
-    int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) {
+    int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) {
   CHECK_EQ(layerSize_, getSize()) << "this layer size can not be changed";
   reshapeInput(bs, ih, iw);
   ic = inputLayers_[0]->getSize() / ih / iw;
   CHECK_EQ((size_t)ic * ih * iw, inputLayers_[0]->getSize());
-  CHECK_EQ(inputElemenCnt_, (size_t)bs * ic * ih * iw);
+  CHECK_EQ(inputLayers_[0]->getOutputValue()->getElementCnt(),
+           (size_t)bs * ic * ih * iw);
   for (size_t i = 0; i < inputLayers_.size(); i++) {
     CHECK_EQ(int64_t(bs), inputLayers_[i]->getOutput().getBatchSize());
     CHECK_EQ(layerSize_, inputLayers_[i]->getSize());
@@ -57,47 +58,43 @@ void MKLDNNAddtoLayer::reshape(
 }

 void MKLDNNAddtoLayer::resetFwd(std::vector<primitive>& pipeline,
-                                MKLDNNMatrixPtr& in,
-                                MKLDNNMatrixPtr& wgt,
-                                MKLDNNMatrixPtr& bias,
+                                std::vector<MKLDNNMatrixPtr>& inputs,
                                 MKLDNNMatrixPtr& out) {
-  resetFwdBuffers(inVals_, bias, out);
-  in = inVals_[0];
+  resetFwdBuffers(inputs, biasVal_, out);

   std::shared_ptr<sum::primitive_desc> fwdPD;
   std::shared_ptr<sum::primitive_desc> biasPD;
-  resetFwdPD(fwdPD, biasPD, inVals_, bias, out);
+  resetFwdPD(fwdPD, biasPD, inputs, biasVal_, out);

-  resetFwdPipeline(pipeline, fwdPD, biasPD, inVals_, bias, out);
+  resetFwdPipeline(pipeline, fwdPD, biasPD, inputs, biasVal_, out);
 }

 void MKLDNNAddtoLayer::resetBwd(std::vector<primitive>& pipeline,
-                                MKLDNNMatrixPtr& in,
-                                MKLDNNMatrixPtr& wgt,
-                                MKLDNNMatrixPtr& bias,
+                                std::vector<MKLDNNMatrixPtr>& inputs,
                                 MKLDNNMatrixPtr& out) {
-  resetBwdBuffers(inGrads_, bias, out);
-  in = inGrads_[0];
+  resetBwdBuffers(inputs, biasGrad_, out);

   // backward only need share output grad to input grad
-  for (size_t i = 0; i < inGrads_.size(); i++) {
-    if (inGrads_[i] != nullptr) {
-      inGrads_[i] = out;
-      inputLayers_[i]->getOutputGrad()->setData(inGrads_[i]->getData());
+  for (size_t i = 0; i < inputs.size(); i++) {
+    if (inputs[i] != nullptr) {
+      inputs[i] = out;
+
inputLayers_[i]->getOutputGrad()->setData(inputs[i]->getData()); } } // backward bias bwdBias_ = nullptr; - if (bias) { + if (biasGrad_) { std::vector scales(bs_, 1.0); - std::vector srcPDs(bs_, bias->getPrimitiveDesc()); - auto biasPD = sum::primitive_desc(bias->getMemoryDesc(), scales, srcPDs); + std::vector srcPDs(bs_, + biasGrad_->getPrimitiveDesc()); + auto biasPD = + sum::primitive_desc(biasGrad_->getMemoryDesc(), scales, srcPDs); std::vector srcs; for (size_t i = 0; i < grads_.size(); ++i) { srcs.push_back(*(grads_[i])); } - bwdBias_.reset(new sum(biasPD, srcs, *bias)); + bwdBias_.reset(new sum(biasPD, srcs, *biasGrad_)); pipeline.push_back(*bwdBias_); } } @@ -208,7 +205,7 @@ void MKLDNNAddtoLayer::resetBwdBuffers(std::vector& inputs, inputs.resize(inputLayers_.size()); for (size_t i = 0; i < inputs.size(); i++) { - resetInGrad(inputs[i], inVal_->getPrimitiveDesc(), i); + resetInGrad(inputs[i], inVals_[i]->getPrimitiveDesc(), i); CHECK_PRIMITIVE_DESC_EQ(inputs[i], out->getPrimitiveDesc()); } diff --git a/paddle/gserver/layers/MKLDNNAddtoLayer.h b/paddle/gserver/layers/MKLDNNAddtoLayer.h index 24504b7b4f..0ea3e208e5 100644 --- a/paddle/gserver/layers/MKLDNNAddtoLayer.h +++ b/paddle/gserver/layers/MKLDNNAddtoLayer.h @@ -26,9 +26,6 @@ namespace paddle { */ class MKLDNNAddtoLayer : public MKLDNNLayer { protected: - std::vector inVals_; - std::vector inGrads_; - // layer size == ic * ih * iw == oc * oh *ow, and can not be changed size_t layerSize_; @@ -50,52 +47,19 @@ public: const ParameterMap& parameterMap) override; void reshape( - int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) override; + int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) override; void resetFwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void resetBwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void updateWeights(const UpdateCallback& callback) override; - void printValueFormat() override { - for (size_t i = 0; i < inVals_.size(); ++i) { - VLOG(MKLDNN_FMTS) << i << " input: " << inVals_[i]->getFormat() << " >>>"; - } - if (outVal_) { - VLOG(MKLDNN_FMTS) << outVal_->getFormat() << " >>> "; - } - if (extOutVal_) { - VLOG(MKLDNN_FMTS) << extOutVal_->getFormat(); - } - } - - void printGradFormat() override { - if (extOutGrad_) { - VLOG(MKLDNN_FMTS) << extOutGrad_->getFormat(); - } - if (outGrad_) { - VLOG(MKLDNN_FMTS) << outGrad_->getFormat() << " <<< "; - } - for (size_t i = 0; i < inGrads_.size(); ++i) { - VLOG(MKLDNN_FMTS) << i << " input: " << inGrads_[i]->getFormat() << "<<<"; - } - } - protected: - /** - * Forward functions: reset buffers(inputs, output, bias), - * reset primitive descriptor, - * reset pipeline. 
- */ void resetFwdBuffers(std::vector& inputs, MKLDNNMatrixPtr& bias, MKLDNNMatrixPtr& out); @@ -110,17 +74,10 @@ protected: std::vector& inputs, MKLDNNMatrixPtr& bias, MKLDNNMatrixPtr& out); - - /** - * Backward functions: reset buffers(inputs, output, bias) - */ void resetBwdBuffers(std::vector& inputs, MKLDNNMatrixPtr& bias, MKLDNNMatrixPtr& out); - /** - * prepare for bias - */ void prepareBias(MKLDNNMatrixPtr& bias, const MatrixPtr& biasMat, const MKLDNNMatrixPtr& out, diff --git a/paddle/gserver/layers/MKLDNNBatchNormLayer.cpp b/paddle/gserver/layers/MKLDNNBatchNormLayer.cpp index 071bdf54d5..7faca0f8b7 100644 --- a/paddle/gserver/layers/MKLDNNBatchNormLayer.cpp +++ b/paddle/gserver/layers/MKLDNNBatchNormLayer.cpp @@ -21,8 +21,6 @@ namespace paddle { REGISTER_LAYER(mkldnn_batch_norm, MKLDNNBatchNormLayer); -const real MKLDNNBatchNormLayer::EPS = 1E-5; - bool MKLDNNBatchNormLayer::init(const LayerMap& layerMap, const ParameterMap& parameterMap) { if (!MKLDNNLayer::init(layerMap, parameterMap)) { @@ -50,6 +48,8 @@ bool MKLDNNBatchNormLayer::init(const LayerMap& layerMap, useGlobalStats_ = config_.use_global_stats(); } movingAvgFraction_ = config_.moving_average_fraction(); + epsilon_ = config_.epsilon(); + VLOG(MKLDNN_BASE) << "--- " << (useGlobalStats_ ? "use" : "do not use") << " --- global stats"; VLOG(MKLDNN_BASE) << "Moving average fraction: " << movingAvgFraction_; @@ -116,21 +116,20 @@ void MKLDNNBatchNormLayer::calMovingMeanAndVar() { } void MKLDNNBatchNormLayer::reshape( - int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) { + int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) { reshapeInput(bs, ih, iw); oh = ih; ow = iw; // ic_ and oc can not be changed - CHECK_EQ(inputElemenCnt_ / bs / ih / iw, (size_t)ic) + CHECK_EQ((size_t)ic, + inputLayers_[0]->getOutputValue()->getElementCnt() / bs / ih / iw) << "Input channel can not be changed"; reshapeOutput(oh, ow); resizeOutput(bs, oc * oh * ow); } void MKLDNNBatchNormLayer::resetFwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) { // In training phase, it will always calculate mean and var, // so useGlobalStats must be false. 
@@ -140,25 +139,23 @@ void MKLDNNBatchNormLayer::resetFwd(std::vector& pipeline, useGlobalStats_ = false; } - resetFwdBuffers(in, wgt, out); + resetFwdBuffers(inputs[0], wgtVal_, out); - resetFwdPD(fwdPD_, in, wgt, out); + resetFwdPD(fwdPD_, inputs[0], wgtVal_, out); - resetFwdPipeline(pipeline, fwdPD_, in, wgt, out); + resetFwdPipeline(pipeline, fwdPD_, inputs[0], wgtVal_, out); } void MKLDNNBatchNormLayer::resetBwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) { std::shared_ptr pd; - resetBwdBuffers(in, wgt, out); + resetBwdBuffers(inputs[0], wgtGrad_, out); - resetBwdPD(pd, in, wgt, out); + resetBwdPD(pd, inputs[0], wgtGrad_, out); - resetBwdPipeline(pipeline, pd, in, wgt, out); + resetBwdPipeline(pipeline, pd, inputs[0], wgtGrad_, out); } void MKLDNNBatchNormLayer::forward(PassType passType) { @@ -213,7 +210,7 @@ void MKLDNNBatchNormLayer::resetFwdPD( if (wgt) { flags_ = (flags_ | batch_normalization_flag::use_scale_shift); } - auto fwdDesc = bn_fwd::desc(pk, in->getMemoryDesc(), EPS, flags_); + auto fwdDesc = bn_fwd::desc(pk, in->getMemoryDesc(), epsilon_, flags_); pd.reset(new bn_fwd::primitive_desc(fwdDesc, engine_)); CHECK_PRIMITIVE_DESC_EQ(out, pd->dst_primitive_desc()); if (wgt) { @@ -260,9 +257,9 @@ void MKLDNNBatchNormLayer::resetFwdPipeline( void MKLDNNBatchNormLayer::resetBwdBuffers(MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& out) { - CHECK(inVal_ && outVal_); + CHECK(inVals_[0] && outVal_); resetOutGrad(out, outVal_->getPrimitiveDesc()); - resetInGrad(in, inVal_->getPrimitiveDesc()); + resetInGrad(in, inVals_[0]->getPrimitiveDesc()); if (gradScaleShift_) { CHECK(wgtVal_); resetWithMatrix(wgt, gradScaleShift_, wgtVal_->getPrimitiveDesc()); @@ -280,7 +277,7 @@ void MKLDNNBatchNormLayer::resetBwdPD( } CHECK_PRIMITIVE_DESC_EQ(out, in->getPrimitiveDesc()); auto md = in->getMemoryDesc(); - auto bwdDesc = bn_bwd::desc(prop_kind::backward, md, md, EPS, flags_); + auto bwdDesc = bn_bwd::desc(prop_kind::backward, md, md, epsilon_, flags_); pd.reset(new bn_bwd::primitive_desc(bwdDesc, engine_, *fwdPD_)); CHECK(pd->weights_primitive_desc() == fwdPD_->weights_primitive_desc()); CHECK_PRIMITIVE_DESC_EQ(wgt, pd->diff_weights_primitive_desc()); @@ -297,11 +294,12 @@ void MKLDNNBatchNormLayer::resetBwdPipeline( if (pd == nullptr) { return; } - CHECK(inVal_); + CHECK(inVals_[0]); bwdData_.reset( wgt && wgtVal_ - ? new bn_bwd(*pd, *inVal_, *mean_, *var_, *out, *wgtVal_, *in, *wgt) - : new bn_bwd(*pd, *inVal_, *mean_, *var_, *out, *in)); + ? new bn_bwd( + *pd, *inVals_[0], *mean_, *var_, *out, *wgtVal_, *in, *wgt) + : new bn_bwd(*pd, *inVals_[0], *mean_, *var_, *out, *in)); pipeline.push_back(*bwdData_); } diff --git a/paddle/gserver/layers/MKLDNNBatchNormLayer.h b/paddle/gserver/layers/MKLDNNBatchNormLayer.h index 456c0424ec..1cf33cb34f 100644 --- a/paddle/gserver/layers/MKLDNNBatchNormLayer.h +++ b/paddle/gserver/layers/MKLDNNBatchNormLayer.h @@ -32,7 +32,8 @@ protected: std::shared_ptr fwdPD_; // Epsilon value used in the batch normalization formula. 
- static const real EPS; + real epsilon_; + // weight and bias in paddle std::unique_ptr weight_; std::unique_ptr biases_; @@ -73,18 +74,14 @@ public: void forward(PassType passType) override; void reshape( - int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) override; + int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) override; void resetFwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void resetBwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void updateWeights(const UpdateCallback& callback) override; @@ -98,11 +95,7 @@ protected: * moving = moving * AvgFraction + local * (1 - AvgFraction) */ void calMovingMeanAndVar(); - /** - * Forward functions: reset buffers(input, weight, output), - * reset primitive descriptor, - * reset pipeline. - */ + void resetFwdBuffers(MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& out); @@ -115,12 +108,6 @@ protected: MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& out); - - /** - * Backward functions: reset buffers(input, weight, output), - * reset primitive descriptor, - * reset pipeline. - */ void resetBwdBuffers(MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& out); diff --git a/paddle/gserver/layers/MKLDNNConcatLayer.cpp b/paddle/gserver/layers/MKLDNNConcatLayer.cpp index c9099297cc..44bb0883b8 100644 --- a/paddle/gserver/layers/MKLDNNConcatLayer.cpp +++ b/paddle/gserver/layers/MKLDNNConcatLayer.cpp @@ -32,17 +32,16 @@ bool MKLDNNConcatLayer::init(const LayerMap& layerMap, } void MKLDNNConcatLayer::reshape( - int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) { + int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) { reshapeInput(bs, ih, iw); ic = inputLayers_[0]->getSize() / ih / iw; CHECK_EQ((size_t)ic * ih * iw, inputLayers_[0]->getSize()); - CHECK_EQ(inputElemenCnt_, (size_t)bs * ic * ih * iw); + CHECK_EQ(inputLayers_[0]->getOutputValue()->getElementCnt(), + (size_t)bs * ic * ih * iw); CHECK_GT(inputLayers_.size(), 1UL); channels_.resize(inputLayers_.size()); channels_[0] = ic; - // need change the output channel, so use oc_ instead - // TODO(TJ): change API, use &oc - oc_ = ic; + oc = ic; for (size_t i = 1; i < inputLayers_.size(); i++) { int batchsize, height, witdh; reshapeInput(batchsize, height, witdh, i); @@ -52,37 +51,31 @@ void MKLDNNConcatLayer::reshape( channels_[i] = inputLayers_[i]->getSize() / height / witdh; CHECK_EQ((size_t)channels_[i] * height * witdh, inputLayers_[i]->getSize()); - oc_ += channels_[i]; + oc += channels_[i]; } oh = ih; ow = iw; reshapeOutput(oh, ow); - resizeOutput(bs, oc_ * oh * ow); + resizeOutput(bs, oc * oh * ow); } void MKLDNNConcatLayer::resetFwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) { - resetFwdBuffers(inVals_, out); - in = inVals_[0]; + resetFwdBuffers(inputs, out); std::shared_ptr fwdPD; - resetFwdPD(fwdPD, inVals_, out); + resetFwdPD(fwdPD, inputs, out); - resetFwdPipeline(pipeline, fwdPD, inVals_, out); + resetFwdPipeline(pipeline, fwdPD, inputs, out); } void MKLDNNConcatLayer::resetBwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) { - resetBwdBuffers(inGrads_, out); - in = inGrads_[0]; + resetBwdBuffers(inputs, out); - 
resetBwdPipeline(pipeline, bwds_, inGrads_, out); + resetBwdPipeline(pipeline, bwds_, inputs, out); } void MKLDNNConcatLayer::resetFwdBuffers(std::vector& inputs, @@ -90,10 +83,7 @@ void MKLDNNConcatLayer::resetFwdBuffers(std::vector& inputs, inputs.resize(inputLayers_.size()); bool has8c = false, has16c = false, hasnc = false; for (size_t i = 0; i < inputs.size(); i++) { - // resetInValue will use ic_ so temporary change as current input's channel - // TODO(TJ): change ic_ as vector then can remove channels_ - ic_ = channels_[i]; - resetInValue(inputs[i], nullptr, i); + resetInValue(inputs[i], nullptr, i, channels_[i]); CHECK(inputs[i]); auto dm = inputs[i]->getDims(); // inputs format can be different, but ndims must equal @@ -114,8 +104,6 @@ void MKLDNNConcatLayer::resetFwdBuffers(std::vector& inputs, has16c = true; } } - // change back, ic_ always save the input 0 size - ic_ = channels_[0]; format outFmt; if (has16c && oc_ % 16 == 0) { @@ -168,14 +156,9 @@ void MKLDNNConcatLayer::resetBwdBuffers(std::vector& inputs, inputs.resize(inputLayers_.size()); for (size_t i = 0; i < inputs.size(); i++) { CHECK(inVals_[i]); - // resetInGrad will use inVal_ - // TODO(TJ): change move inVals_ to MKLDNNLayer ans remove inVal_ - inVal_ = inVals_[i]; resetInGrad(inputs[i], inVals_[i]->getPrimitiveDesc(), i); CHECK_PRIMITIVE_DESC_EQ(inputs[i], inVals_[i]->getPrimitiveDesc()); } - // change back, inVal_ always save the input 0 - inVal_ = inVals_[0]; } void MKLDNNConcatLayer::resetBwdPipeline( diff --git a/paddle/gserver/layers/MKLDNNConcatLayer.h b/paddle/gserver/layers/MKLDNNConcatLayer.h index d5749d327e..37f3a26c5e 100644 --- a/paddle/gserver/layers/MKLDNNConcatLayer.h +++ b/paddle/gserver/layers/MKLDNNConcatLayer.h @@ -26,8 +26,6 @@ namespace paddle { */ class MKLDNNConcatLayer : public MKLDNNLayer { protected: - std::vector inVals_; - std::vector inGrads_; std::vector> bwds_; // input channel numbers std::vector channels_; @@ -47,18 +45,14 @@ public: const ParameterMap& parameterMap) override; void reshape( - int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) override; + int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) override; void resetFwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void resetBwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void printSizeInfo() override { @@ -72,38 +66,16 @@ public: << ", " << ow_; } - void printValueFormat() override { - for (size_t i = 0; i < inVals_.size(); ++i) { - VLOG(MKLDNN_FMTS) << "Input " << i << ", " << inputLayers_[i]->getName() - << ": " << inVals_[i]->getFormat() << " >>>"; - } - if (outVal_) { - VLOG(MKLDNN_FMTS) << outVal_->getFormat() << " >>> "; - } - if (extOutVal_) { - VLOG(MKLDNN_FMTS) << extOutVal_->getFormat(); - } - } - - void printGradFormat() override { - if (extOutGrad_) { - VLOG(MKLDNN_FMTS) << extOutGrad_->getFormat(); - } - if (outGrad_) { - VLOG(MKLDNN_FMTS) << outGrad_->getFormat() << " <<< "; - } - for (size_t i = 0; i < inGrads_.size(); ++i) { - VLOG(MKLDNN_FMTS) << "Input " << i << ", " << inputLayers_[i]->getName() - << ": " << inGrads_[i]->getFormat() << "<<<"; + size_t keepCondition() { + // reset when the total element size of all inputs changed + size_t totalSize = inputLayers_[0]->getOutputValue()->getElementCnt(); + for (size_t i = 1; i < inputLayers_.size(); ++i) { + totalSize += 
inputLayers_[i]->getOutputValue()->getElementCnt(); } + return totalSize; } protected: - /** - * Forward functions: reset buffers(inputs, output, bias), - * reset primitive descriptor, - * reset pipeline. - */ void resetFwdBuffers(std::vector& inputs, MKLDNNMatrixPtr& out); void resetFwdPD(std::shared_ptr& pd, @@ -113,11 +85,6 @@ protected: std::shared_ptr& pd, std::vector& inputs, MKLDNNMatrixPtr& out); - - /** - * Backward functions: reset buffers(inputs, output, bias) - * reset primitives and pipeline - */ void resetBwdBuffers(std::vector& inputs, MKLDNNMatrixPtr& out); void resetBwdPipeline(std::vector& pipeline, diff --git a/paddle/gserver/layers/MKLDNNConvLayer.cpp b/paddle/gserver/layers/MKLDNNConvLayer.cpp index 8aa54e0a9e..ab1d0f7b04 100644 --- a/paddle/gserver/layers/MKLDNNConvLayer.cpp +++ b/paddle/gserver/layers/MKLDNNConvLayer.cpp @@ -90,7 +90,7 @@ void MKLDNNConvLayer::convertWeightsToPaddle() { } void MKLDNNConvLayer::reshape( - int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) { + int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) { reshapeInput(bs, ih, iw); // cal output sizes @@ -105,21 +105,17 @@ void MKLDNNConvLayer::reshape( } void MKLDNNConvLayer::resetFwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) { resetFwdPD(fwdPD_); - resetFwdBuffers(fwdPD_, in, wgt, bias, out); + resetFwdBuffers(fwdPD_, inputs[0], wgtVal_, biasVal_, out); - resetFwdPipeline(pipeline, fwdPD_, in, wgt, bias, out); + resetFwdPipeline(pipeline, fwdPD_, inputs[0], wgtVal_, biasVal_, out); } void MKLDNNConvLayer::resetBwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) { std::shared_ptr bwdWgtPD; std::shared_ptr bwdDataPD; @@ -128,9 +124,10 @@ void MKLDNNConvLayer::resetBwd(std::vector& pipeline, resetBwdDataPD(bwdDataPD); - resetBwdBuffers(bwdWgtPD, bwdDataPD, in, wgt, bias, out); + resetBwdBuffers(bwdWgtPD, bwdDataPD, inputs[0], wgtGrad_, biasGrad_, out); - resetBwdPipeline(pipeline, bwdWgtPD, bwdDataPD, in, wgt, bias, out); + resetBwdPipeline( + pipeline, bwdWgtPD, bwdDataPD, inputs[0], wgtGrad_, biasGrad_, out); } void MKLDNNConvLayer::updateWeights(const UpdateCallback& callback) { @@ -236,14 +233,14 @@ void MKLDNNConvLayer::resetBwdWgtPD( loadConvSettings(wgtDims, biasDims, strides, dilations, padL, padR); // create backward weight using input, output and weight value memory desc - CHECK(inVal_) << "Should have internal input value"; + CHECK(inVals_[0]) << "Should have internal input value"; CHECK(outVal_) << "Should have internal output value"; CHECK(wgtVal_) << "Should have weight value"; algorithm algo = algorithm::convolution_direct; padding_kind padKind = padding_kind::zero; auto bwdWgtDesc = biasVal_ != nullptr ? 
conv_bwdWgt::desc(algo, - inVal_->getMemoryDesc(), + inVals_[0]->getMemoryDesc(), wgtVal_->getMemoryDesc(), biasVal_->getMemoryDesc(), outVal_->getMemoryDesc(), @@ -252,7 +249,7 @@ void MKLDNNConvLayer::resetBwdWgtPD( padR, padKind) : conv_bwdWgt::desc(algo, - inVal_->getMemoryDesc(), + inVals_[0]->getMemoryDesc(), wgtVal_->getMemoryDesc(), outVal_->getMemoryDesc(), strides, @@ -260,7 +257,7 @@ void MKLDNNConvLayer::resetBwdWgtPD( padR, padKind); pd.reset(new conv_bwdWgt::primitive_desc(bwdWgtDesc, engine_, *fwdPD_)); - CHECK_PRIMITIVE_DESC_EQ(inVal_, pd->src_primitive_desc()); + CHECK_PRIMITIVE_DESC_EQ(inVals_[0], pd->src_primitive_desc()); CHECK_PRIMITIVE_DESC_EQ( outVal_, pd->diff_dst_primitive_desc(), @@ -280,12 +277,12 @@ void MKLDNNConvLayer::resetBwdDataPD( memory::dims wgtDims, biasDims, strides, dilations, padL, padR; loadConvSettings(wgtDims, biasDims, strides, dilations, padL, padR); - CHECK(inVal_) << "Should have internal input value"; + CHECK(inVals_[0]) << "Should have internal input value"; CHECK(outVal_) << "Should have internal output value"; // create backward data using input and output value memory desc // but using weight memory desc with any format auto bwdDataDesc = conv_bwdData::desc(algorithm::convolution_direct, - inVal_->getMemoryDesc(), + inVals_[0]->getMemoryDesc(), MKLDNNMatrix::createMemoryDesc(wgtDims), outVal_->getMemoryDesc(), strides, @@ -294,7 +291,7 @@ void MKLDNNConvLayer::resetBwdDataPD( padding_kind::zero); pd.reset(new conv_bwdData::primitive_desc(bwdDataDesc, engine_, *fwdPD_)); CHECK_PRIMITIVE_DESC_EQ( - inVal_, + inVals_[0], pd->diff_src_primitive_desc(), "primitive desc of in value and grad should be equal"); CHECK_PRIMITIVE_DESC_EQ( @@ -346,12 +343,12 @@ void MKLDNNConvLayer::resetBwdPipeline( MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& bias, MKLDNNMatrixPtr& out) { - CHECK(inVal_); + CHECK(inVals_[0]); // add bwdWgt handle if (bias) { - bwdWgt_.reset(new conv_bwdWgt(*wgtPD, *inVal_, *out, *wgt, *bias)); + bwdWgt_.reset(new conv_bwdWgt(*wgtPD, *inVals_[0], *out, *wgt, *bias)); } else { - bwdWgt_.reset(new conv_bwdWgt(*wgtPD, *inVal_, *out, *wgt)); + bwdWgt_.reset(new conv_bwdWgt(*wgtPD, *inVals_[0], *out, *wgt)); } pipeline.push_back(*bwdWgt_); diff --git a/paddle/gserver/layers/MKLDNNConvLayer.h b/paddle/gserver/layers/MKLDNNConvLayer.h index 9c69136684..3e754a0e65 100644 --- a/paddle/gserver/layers/MKLDNNConvLayer.h +++ b/paddle/gserver/layers/MKLDNNConvLayer.h @@ -69,18 +69,14 @@ public: const ParameterMap& parameterMap) override; void reshape( - int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) override; + int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) override; void resetFwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void resetBwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void updateWeights(const UpdateCallback& callback) override; @@ -107,48 +103,26 @@ protected: mkldnn::memory::dims& padL, mkldnn::memory::dims& padR); - /** - * reset the forward primitive descriptor. - */ void resetFwdPD(std::shared_ptr& pd); - /** - * reset the MKLDNNMatrix buffers used in forward. - */ void resetFwdBuffers(std::shared_ptr& pd, MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& bias, MKLDNNMatrixPtr& out); - /** - * reset the forward pipeline. 
- */ void resetFwdPipeline(std::vector& pipeline, std::shared_ptr& pd, MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& bias, MKLDNNMatrixPtr& out); - - /** - * reset the backward weight primitive descriptor. - */ void resetBwdWgtPD(std::shared_ptr& pd); - /** - * reset the backward data primitive descriptor. - */ void resetBwdDataPD(std::shared_ptr& pd); - /** - * reset the MKLDNNMatrix buffers used in backward. - */ void resetBwdBuffers(std::shared_ptr& wgtPD, std::shared_ptr& dataPD, MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& bias, MKLDNNMatrixPtr& out); - /** - * reset the backward pipeline. - */ void resetBwdPipeline(std::vector& pipeline, std::shared_ptr& wgtPD, std::shared_ptr& dataPD, diff --git a/paddle/gserver/layers/MKLDNNFcLayer.cpp b/paddle/gserver/layers/MKLDNNFcLayer.cpp index 350ec65fff..c8778bdd07 100644 --- a/paddle/gserver/layers/MKLDNNFcLayer.cpp +++ b/paddle/gserver/layers/MKLDNNFcLayer.cpp @@ -74,7 +74,7 @@ void MKLDNNFcLayer::convertWeightsToPaddle() { } void MKLDNNFcLayer::reshape( - int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) { + int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) { reshapeInput(bs, ih, iw); CHECK_EQ(iLayerSize_, inputLayers_[0]->getSize()); @@ -87,32 +87,29 @@ void MKLDNNFcLayer::reshape( } void MKLDNNFcLayer::resetFwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) { - resetFwdBuffers(in, wgt, bias, out); + resetFwdBuffers(inputs[0], wgtVal_, biasVal_, out); - resetFwdPD(fwdPD_, in, wgt, bias, out); + resetFwdPD(fwdPD_, inputs[0], wgtVal_, biasVal_, out); - resetFwdPipeline(pipeline, fwdPD_, in, wgt, bias, out); + resetFwdPipeline(pipeline, fwdPD_, inputs[0], wgtVal_, biasVal_, out); } void MKLDNNFcLayer::resetBwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) { std::shared_ptr bwdWgtPD; std::shared_ptr bwdDataPD; - resetBwdBuffers(in, wgt, bias, out); + resetBwdBuffers(inputs[0], wgtGrad_, biasGrad_, out); - resetBwdWgtPD(bwdWgtPD, wgt, bias, out); + resetBwdWgtPD(bwdWgtPD, wgtGrad_, biasGrad_, out); - resetBwdDataPD(bwdDataPD, in, out); + resetBwdDataPD(bwdDataPD, inputs[0], out); - resetBwdPipeline(pipeline, bwdWgtPD, bwdDataPD, in, wgt, bias, out); + resetBwdPipeline( + pipeline, bwdWgtPD, bwdDataPD, inputs[0], wgtGrad_, biasGrad_, out); } void MKLDNNFcLayer::updateWeights(const UpdateCallback& callback) { @@ -193,9 +190,9 @@ void MKLDNNFcLayer::resetBwdBuffers(MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& bias, MKLDNNMatrixPtr& out) { - CHECK(inVal_ && outVal_); + CHECK(inVals_[0] && outVal_); resetOutGrad(out, outVal_->getPrimitiveDesc()); - resetInGrad(in, inVal_->getPrimitiveDesc()); + resetInGrad(in, inVals_[0]->getPrimitiveDesc()); CHECK(wgtVal_); resetWithMatrix(wgt, weight_->getWGrad(), wgtVal_->getPrimitiveDesc()); @@ -212,14 +209,15 @@ void MKLDNNFcLayer::resetBwdWgtPD( MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& bias, MKLDNNMatrixPtr& out) { - CHECK(inVal_); - fc_bwdWgt::desc bwdWgtDesc = bias ? fc_bwdWgt::desc(inVal_->getMemoryDesc(), - wgt->getMemoryDesc(), - bias->getMemoryDesc(), - out->getMemoryDesc()) - : fc_bwdWgt::desc(inVal_->getMemoryDesc(), - wgt->getMemoryDesc(), - out->getMemoryDesc()); + CHECK(inVals_[0]); + fc_bwdWgt::desc bwdWgtDesc = + bias ? 
fc_bwdWgt::desc(inVals_[0]->getMemoryDesc(), + wgt->getMemoryDesc(), + bias->getMemoryDesc(), + out->getMemoryDesc()) + : fc_bwdWgt::desc(inVals_[0]->getMemoryDesc(), + wgt->getMemoryDesc(), + out->getMemoryDesc()); pd.reset(new fc_bwdWgt::primitive_desc(bwdWgtDesc, engine_, *fwdPD_)); } @@ -245,11 +243,11 @@ void MKLDNNFcLayer::resetBwdPipeline( MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& bias, MKLDNNMatrixPtr& out) { - CHECK(inVal_); + CHECK(inVals_[0]); if (bias) { - bwdWgt_.reset(new fc_bwdWgt(*bwdWgtPD, *inVal_, *out, *wgt, *bias)); + bwdWgt_.reset(new fc_bwdWgt(*bwdWgtPD, *inVals_[0], *out, *wgt, *bias)); } else { - bwdWgt_.reset(new fc_bwdWgt(*bwdWgtPD, *inVal_, *out, *wgt)); + bwdWgt_.reset(new fc_bwdWgt(*bwdWgtPD, *inVals_[0], *out, *wgt)); } pipeline.push_back(*bwdWgt_); diff --git a/paddle/gserver/layers/MKLDNNFcLayer.h b/paddle/gserver/layers/MKLDNNFcLayer.h index ee861763ff..283dc9b540 100644 --- a/paddle/gserver/layers/MKLDNNFcLayer.h +++ b/paddle/gserver/layers/MKLDNNFcLayer.h @@ -52,18 +52,14 @@ public: const ParameterMap& parameterMap) override; void reshape( - int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) override; + int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) override; void resetFwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void resetBwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void updateWeights(const UpdateCallback& callback) override; @@ -73,11 +69,6 @@ public: void convertWeightsToPaddle() override; protected: - /** - * Forward functions: reset buffers(input, output, weight and bias), - * reset primitive descriptor, - * reset pipeline. - */ void resetFwdBuffers(MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& bias, @@ -93,13 +84,6 @@ protected: MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& bias, MKLDNNMatrixPtr& out); - - /** - * Backward functions: reset buffers(input, output, weight and bias), - * reset primitive descriptor for backward weight, - * reset primitive descriptor for backward data, - * reset pipeline. - */ void resetBwdBuffers(MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& wgt, MKLDNNMatrixPtr& bias, diff --git a/paddle/gserver/layers/MKLDNNLayer.cpp b/paddle/gserver/layers/MKLDNNLayer.cpp index cf42da0735..6fbf3c7fde 100644 --- a/paddle/gserver/layers/MKLDNNLayer.cpp +++ b/paddle/gserver/layers/MKLDNNLayer.cpp @@ -48,31 +48,20 @@ void MKLDNNLayer::forward(PassType passType) { REGISTER_TIMER_INFO("mkldnn_FwdTimer", getName().c_str()); CHECK(!inputLayers_.empty()); copySeqInfoToOutputs(); - size_t elemenCnt = inputLayers_[0]->getOutputValue()->getElementCnt(); - if (inputElemenCnt_ != elemenCnt) { + if (condition_ != keepCondition()) { VLOG(MKLDNN_BASE) << getName() << " reset mkldnn forward"; - // reset when input total sizes changed, not only the batchsize - inputElemenCnt_ = elemenCnt; - pipelineFwd_.clear(); + condition_ = keepCondition(); reshape(bs_, ic_, ih_, iw_, oc_, oh_, ow_); - // all cpu device output grad or value share output's + printSizeInfo(); + // the output_.value and output_.grad are shared with CPU device shareCPUDevice(); - resetFwd(pipelineFwd_, inVal_, wgtVal_, biasVal_, outVal_); - // MKLDNNLayer output value should be MKLDNNMatrix - // so external output value is necessary. - // Then external input value is not necessary, - // since input may be mkldnn internal buffer. 
- CHECK(extOutVal_) << "external output value is necessary"; - output_.value = std::dynamic_pointer_cast(extOutVal_); - CHECK(inVal_ && outVal_) << "internal memories are necessary"; - if (cvtInVal_) { - pipelineFwd_.insert(pipelineFwd_.begin(), *cvtInVal_); - } - if (cvtOutVal_) { - pipelineFwd_.push_back(*cvtOutVal_); - } + pipelineFwd_.clear(); + inVals_.resize(inputLayers_.size(), nullptr); + extInVals_.resize(inputLayers_.size(), nullptr); + cvtInVals_.resize(inputLayers_.size(), nullptr); + resetFwd(pipelineFwd_, inVals_, outVal_); + prepareValueConversions(pipelineFwd_); convertWeightsFromPaddle(); - printSizeInfo(); printValueFormat(); needResetBwd_ = true; } @@ -80,8 +69,8 @@ void MKLDNNLayer::forward(PassType passType) { if (inputLayers_[0]->getType() == "data" && inputLayers_.size() == 1) { // Update input value data when input layer is "data" type, // since the input value data address might be changed. - CHECK(extInVal_); - extInVal_->setData(getInputValue(0, CPU_DEVICE)->getData()); + CHECK(extInVals_[0]); + extInVals_[0]->setData(getInputValue(0, CPU_DEVICE)->getData()); } if (!outputOnlyMKLDNN_) { @@ -99,22 +88,13 @@ void MKLDNNLayer::backward(const UpdateCallback& callback) { if (needResetBwd_) { VLOG(MKLDNN_BASE) << getName() << " reset mkldnn backward"; pipelineBwd_.clear(); + inGrads_.resize(inputLayers_.size(), nullptr); + extInGrads_.resize(inputLayers_.size(), nullptr); + cvtInGrads_.resize(inputLayers_.size(), nullptr); pipelineMergeGrad_.clear(); mergeGrad_ = nullptr; - resetBwd(pipelineBwd_, inGrad_, wgtGrad_, biasGrad_, outGrad_); - // external output grad is not necessary - // since output may be mkldnn internal buffer or merge them directly. - CHECK(outGrad_) << "internal output grad is necessary"; - if (extOutGrad_) { - CHECK_EQ(extOutGrad_->getData(), output_.grad->getData()) - << "the external buffer should share the same data with output_.grad"; - } - if (cvtOutGrad_) { - pipelineBwd_.insert(pipelineBwd_.begin(), *cvtOutGrad_); - } - if (cvtInGrad_) { - pipelineBwd_.push_back(*cvtInGrad_); - } + resetBwd(pipelineBwd_, inGrads_, outGrad_); + prepareGradConversions(pipelineBwd_); printGradFormat(); needResetBwd_ = false; } @@ -141,8 +121,8 @@ void MKLDNNLayer::backward(const UpdateCallback& callback) { void MKLDNNLayer::reshapeInput(int& batchsize, int& height, int& width, - size_t inputIdx) { - const Argument& input = inputLayers_[inputIdx]->getOutput(); + size_t idx) { + const Argument& input = inputLayers_[idx]->getOutput(); batchsize = input.getBatchSize(); int h = input.getFrameHeight(); int w = input.getFrameWidth(); @@ -176,27 +156,30 @@ void MKLDNNLayer::resetWithMatrix(MKLDNNMatrixPtr& dnn, void MKLDNNLayer::resetInValue( MKLDNNMatrixPtr& in, const std::shared_ptr& intPD, - size_t inputIdx) { - cvtInVal_ = nullptr; - extInVal_ = nullptr; + size_t idx, + int inputChannel) { + cvtInVals_[idx] = nullptr; + extInVals_[idx] = nullptr; in = nullptr; - CHECK_GT(bs_ * ic_ * ih_ * iw_, 0); + inputChannel = inputChannel == 0 ? 
ic_ : inputChannel; + CHECK_GT(bs_ * inputChannel * ih_ * iw_, 0); auto extPD = MKLDNNMatrix::createPrimitiveDesc( - {bs_, ic_, ih_, iw_}, format::nchw, engine_); - const MatrixPtr& inMat = inputLayers_[inputIdx]->getOutputValue(); - extInVal_ = std::dynamic_pointer_cast(inMat); - CHECK_EQ(inputIsOnlyMKLDNN(), extInVal_ != nullptr); - if (extInVal_ == nullptr || extInVal_->getFormat() == format::nc) { - extInVal_ = MKLDNNMatrix::create(extPD, inMat); + {bs_, inputChannel, ih_, iw_}, format::nchw, engine_); + const MatrixPtr& inMat = inputLayers_[idx]->getOutputValue(); + extInVals_[idx] = std::dynamic_pointer_cast(inMat); + CHECK_EQ(inputIsOnlyMKLDNN(), extInVals_[idx] != nullptr); + if (extInVals_[idx] == nullptr || + extInVals_[idx]->getFormat() == format::nc) { + extInVals_[idx] = MKLDNNMatrix::create(extPD, inMat); } - in = extInVal_; + in = extInVals_[idx]; if (nullptr == intPD || in->getPrimitiveDesc() == *intPD) { return; } // need create reorder in = MKLDNNMatrix::create(*intPD); - cvtInVal_ = MKLDNNMatrix::createReorder(extInVal_, in); - CHECK(cvtInVal_) << "should not be emptry"; + cvtInVals_[idx] = MKLDNNMatrix::createReorder(extInVals_[idx], in); + CHECK(cvtInVals_[idx]) << "should not be emptry"; } void MKLDNNLayer::resetOutValue(MKLDNNMatrixPtr& out, @@ -218,11 +201,11 @@ void MKLDNNLayer::resetOutValue(MKLDNNMatrixPtr& out, void MKLDNNLayer::resetInGrad(MKLDNNMatrixPtr& in, memory::primitive_desc intPD, - size_t inputIdx) { - cvtInGrad_ = nullptr; - extInGrad_ = nullptr; + size_t idx) { + cvtInGrads_[idx] = nullptr; + extInGrads_[idx] = nullptr; in = nullptr; - LayerPtr& input = inputLayers_[inputIdx]; + LayerPtr& input = inputLayers_[idx]; if (input->getOutputGrad() == nullptr) { // no need input grad return; @@ -237,23 +220,25 @@ void MKLDNNLayer::resetInGrad(MKLDNNMatrixPtr& in, in = MKLDNNMatrix::create(intPD, inMat); Argument& arg = input->getOutput(this->getName()); arg.grad = std::dynamic_pointer_cast(in); - CHECK_PRIMITIVE_DESC_EQ(inVal_, intPD); + CHECK_PRIMITIVE_DESC_EQ(inVals_[idx], intPD); if (inputIsOnlyMKLDNN()) { return; } - extInGrad_ = in; - if (isPaddleFormat(extInGrad_->getFormat())) { + extInGrads_[idx] = in; + if (isPaddleFormat(extInGrads_[idx]->getFormat())) { return; } // need create reorder - CHECK(extInVal_ != nullptr && isPaddleFormat(extInVal_->getFormat())) + CHECK(extInVals_[idx] != nullptr && + isPaddleFormat(extInVals_[idx]->getFormat())) << "should have external input value and the format must be nchw(nc)"; - extInGrad_ = MKLDNNMatrix::create(extInVal_->getPrimitiveDesc(), inMat); - CHECK_PRIMITIVE_DESC_EQ(inVal_, intPD); + extInGrads_[idx] = + MKLDNNMatrix::create(extInVals_[idx]->getPrimitiveDesc(), inMat); + CHECK_PRIMITIVE_DESC_EQ(inVals_[idx], intPD); in = MKLDNNMatrix::create(intPD); - cvtInGrad_ = MKLDNNMatrix::createReorder(in, extInGrad_); - CHECK(cvtInGrad_); + cvtInGrads_[idx] = MKLDNNMatrix::createReorder(in, extInGrads_[idx]); + CHECK(cvtInGrads_[idx]); } void MKLDNNLayer::resetOutGrad(MKLDNNMatrixPtr& out, @@ -309,22 +294,8 @@ void MKLDNNLayer::resetMergeGrad(MKLDNNMatrixPtr& out) { srcs.push_back(*src); } - // TODO(TJ): remove me when mkldnn sum support different formats - for (size_t i = 1; i < srcPDs.size(); ++i) { - CHECK(srcPDs[0] == srcPDs[i]); - } - tmpOutGrad_ = out; - tmpCvt_ = nullptr; - if (out->getPrimitiveDesc() != srcPDs[0]) { - tmpOutGrad_ = MKLDNNMatrix::create(srcPDs[0]); - tmpCvt_ = MKLDNNMatrix::createReorder(tmpOutGrad_, out); - CHECK(tmpCvt_); - pipelineMergeGrad_.push_back(*tmpCvt_); - } - - auto sumPD = - 
-      sum::primitive_desc(tmpOutGrad_->getMemoryDesc(), scales, srcPDs);
-  mergeGrad_.reset(new sum(sumPD, srcs, *tmpOutGrad_));
+  auto sumPD = sum::primitive_desc(out->getMemoryDesc(), scales, srcPDs);
+  mergeGrad_.reset(new sum(sumPD, srcs, *out));
   pipelineMergeGrad_.insert(pipelineMergeGrad_.begin(), *mergeGrad_);
 }
 
diff --git a/paddle/gserver/layers/MKLDNNLayer.h b/paddle/gserver/layers/MKLDNNLayer.h
index 4c42df1bee..e48b9b5a91 100644
--- a/paddle/gserver/layers/MKLDNNLayer.h
+++ b/paddle/gserver/layers/MKLDNNLayer.h
@@ -34,15 +34,16 @@ typedef std::shared_ptr<MKLDNNLayer> MKLDNNLayerPtr;
  */
 class MKLDNNLayer : public Layer {
 protected:
-  // input value element count
-  size_t inputElemenCnt_;
   // batch size
   int bs_;
+  // these sizes are always taken from the first input layer
   // input image channel, height and width
   int ic_, ih_, iw_;
   // output image channel, height and width
   int oc_, oh_, ow_;
+  // the cached condition used to decide whether forward needs to be reset
+  size_t condition_;
   // backward also needs resetting after the forward handle is reset
   bool needResetBwd_;
 
@@ -67,18 +68,18 @@ protected:
    * When all layers are mkldnn layers, they could save internal data.
    */
   // below MKLDNNMatrix buffers are all internal buffers
-  MKLDNNMatrixPtr inVal_;
-  MKLDNNMatrixPtr inGrad_;
+  std::vector<MKLDNNMatrixPtr> inVals_;
+  std::vector<MKLDNNMatrixPtr> inGrads_;
   MKLDNNMatrixPtr outVal_;
   MKLDNNMatrixPtr outGrad_;
   // below are external value and grad
-  MKLDNNMatrixPtr extInVal_;
-  MKLDNNMatrixPtr extInGrad_;
+  std::vector<MKLDNNMatrixPtr> extInVals_;
+  std::vector<MKLDNNMatrixPtr> extInGrads_;
   MKLDNNMatrixPtr extOutVal_;
   MKLDNNMatrixPtr extOutGrad_;
   // convert handle between external and internal buffers
-  std::shared_ptr<mkldnn::reorder> cvtInVal_;
-  std::shared_ptr<mkldnn::reorder> cvtInGrad_;
+  std::vector<std::shared_ptr<mkldnn::reorder>> cvtInVals_;
+  std::vector<std::shared_ptr<mkldnn::reorder>> cvtInGrads_;
   std::shared_ptr<mkldnn::reorder> cvtOutVal_;
   std::shared_ptr<mkldnn::reorder> cvtOutGrad_;
 
@@ -93,23 +94,11 @@ protected:
   std::vector<mkldnn::primitive> pipelineMergeGrad_;
   // tmp input argument to save input grad, only used to merge grad
   Argument tmpInArg_;
-  // since mkldnn sum do not support different formats:
-  // can refer to https://github.com/01org/mkl-dnn/issues/134
-  // so need create reorder manually and save tmp MKLDNNMatrix
-  MKLDNNMatrixPtr tmpOutGrad_;
-  std::shared_ptr<mkldnn::reorder> tmpCvt_;
 
 public:
   explicit MKLDNNLayer(const LayerConfig& config)
       : Layer(config),
-        inputElemenCnt_(0),
-        bs_(0),
-        ic_(0),
-        ih_(0),
-        iw_(0),
-        oc_(0),
-        oh_(0),
-        ow_(0),
+        condition_(0),
         needResetBwd_(true),
         outputOnlyMKLDNN_(false),
         engine_(mkldnn::engine::cpu, 0),
@@ -125,31 +114,28 @@ public:
   virtual void backward(const UpdateCallback& callback);
 
   /**
-   * reshape the input image sizes
-   * and reset output image and buffer size
-   * output channel can not be changed
+   * reshape the input and output channels and image sizes
+   * and reset output buffer size
   */
  virtual void reshape(
-      int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) = 0;
+      int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) = 0;
 
   /**
   * reset the mkldnn forward primitive and memories
   * would only be called when the input size changes
+  * weight and bias buffers should be covered by the child class itself
   */
   virtual void resetFwd(std::vector<mkldnn::primitive>& pipeline,
-                        MKLDNNMatrixPtr& in,
-                        MKLDNNMatrixPtr& wgt,
-                        MKLDNNMatrixPtr& bias,
+                        std::vector<MKLDNNMatrixPtr>& inputs,
                         MKLDNNMatrixPtr& out) = 0;
 
   /**
   * reset the mkldnn backward primitive and memories
   * would only be called when needed
+  * weight and bias buffers should be covered by the child class itself
   */
   virtual void resetBwd(std::vector<mkldnn::primitive>& pipeline,
-                        MKLDNNMatrixPtr& in,
-                        MKLDNNMatrixPtr& wgt,
-                        MKLDNNMatrixPtr& bias,
+                        std::vector<MKLDNNMatrixPtr>& inputs,
                         MKLDNNMatrixPtr& out) = 0;
 
   /**
@@ -175,13 +161,19 @@ public:
   void addOutputArgument(int deviceId) { Layer::addOutputArgument(deviceId); }
 
 protected:
+  /**
+   * Some layers may have a different condition for resetting the forward pass.
+   * This function returns the value which, as long as it stays unchanged,
+   * indicates that forward does not need to be reset.
+   */
+  inline virtual size_t keepCondition() {
+    // reset when the element count of the first input changes,
+    // not only when the batch size changes
+    return inputLayers_[0]->getOutputValue()->getElementCnt();
+  }
+
   /**
    * reshape the input image sizes and input batchsize
   */
-  void reshapeInput(int& batchsize,
-                    int& height,
-                    int& width,
-                    size_t inputIdx = 0);
+  void reshapeInput(int& batchsize, int& height, int& width, size_t idx = 0);
 
   /**
   * reshape output image sizes
@@ -199,11 +191,13 @@ protected:
   /**
   * reset input value from input MKLDNNMatrix and internal primitive desc.
   * reset both internal and external buffer and create reorder if necessary.
+  * the input channel may differ between inputs, e.g. in concat.
   */
  void resetInValue(
      MKLDNNMatrixPtr& in,
      const std::shared_ptr<mkldnn::memory::primitive_desc>& intPD = nullptr,
-      size_t inputIdx = 0);
+      size_t idx = 0,
+      int inputChannel = 0);
 
   /**
   * reset output value from internal primitive desc.
@@ -218,7 +212,7 @@
   */
  void resetInGrad(MKLDNNMatrixPtr& in,
                   mkldnn::memory::primitive_desc intPD,
-                   size_t inputIdx = 0);
+                   size_t idx = 0);
 
   /**
   * reset output grad from internal primitive desc.
@@ -296,17 +290,19 @@ protected:
   * print the mkldnn memory format of value
   */
  virtual void printValueFormat() {
-    if (extInVal_) {
-      VLOG(MKLDNN_FMTS) << extInVal_->getFormat() << " >>> ";
-    }
-    if (inVal_) {
-      VLOG(MKLDNN_FMTS) << inVal_->getFormat() << " >>>";
+    for (size_t i = 0; i < inVals_.size(); ++i) {
+      if (!inVals_[i]) {
+        continue;
+      }
+      VLOG(MKLDNN_FMTS) << "Input " << i << ", " << inputLayers_[i]->getName()
+                        << ": " << (extInVals_[i] ? extInVals_[i]->getFormat()
+                                                  : inVals_[i]->getFormat())
+                        << " >>> " << inVals_[i]->getFormat() << " >>>";
     }
     if (outVal_) {
-      VLOG(MKLDNN_FMTS) << outVal_->getFormat() << " >>> ";
-    }
-    if (extOutVal_) {
-      VLOG(MKLDNN_FMTS) << extOutVal_->getFormat();
+      VLOG(MKLDNN_FMTS) << outVal_->getFormat() << " >>> "
+                        << (extOutVal_ ? extOutVal_->getFormat()
+                                       : outVal_->getFormat());
     }
     if (wgtVal_) {
       VLOG(MKLDNN_FMTS) << "Weight value format: " << wgtVal_->getFormat();
@@ -320,17 +316,19 @@ protected:
   * print the mkldnn memory format of grad
   */
  virtual void printGradFormat() {
-    if (extOutGrad_) {
-      VLOG(MKLDNN_FMTS) << extOutGrad_->getFormat();
-    }
     if (outGrad_) {
-      VLOG(MKLDNN_FMTS) << outGrad_->getFormat() << " <<< ";
+      VLOG(MKLDNN_FMTS) << outGrad_->getFormat() << " <<< "
+                        << (extOutGrad_ ? extOutGrad_->getFormat()
+                                        : outGrad_->getFormat());
     }
-    if (inGrad_) {
-      VLOG(MKLDNN_FMTS) << inGrad_->getFormat() << " <<<";
-    }
-    if (extInGrad_) {
-      VLOG(MKLDNN_FMTS) << extInGrad_->getFormat() << " <<< ";
+    for (size_t i = 0; i < inGrads_.size(); ++i) {
+      if (!inGrads_[i]) {
+        continue;
+      }
+      VLOG(MKLDNN_FMTS) << "Input " << i << ", " << inputLayers_[i]->getName()
+                        << ": " << (extInGrads_[i] ? extInGrads_[i]->getFormat()
+                                                   : inGrads_[i]->getFormat())
+                        << " <<< " << inGrads_[i]->getFormat() << " <<<";
     }
     if (wgtGrad_) {
       VLOG(MKLDNN_FMTS) << "Weight grad format: " << wgtGrad_->getFormat();
@@ -437,6 +435,41 @@ private:
       outputOtherDevice_[i].cpuSequenceDims = output_.cpuSequenceDims;
     }
   }
+
+  void prepareValueConversions(std::vector<mkldnn::primitive>& pipeline) {
+    // The MKLDNNLayer output value should be an MKLDNNMatrix,
+    // so the external output value is necessary.
+    // The external input value, on the other hand, is not,
+    // since the input may be an mkldnn internal buffer.
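
As an aside on the `keepCondition()` hook above: a multi-input layer could override it so that a size change in any input, not only the first, forces a forward reset. A minimal sketch, assuming a hypothetical `MKLDNNConcatLayer`; this is illustrative, not the PR's actual concat code:

```cpp
// Hypothetical override of MKLDNNLayer::keepCondition(): sum the element
// counts of all inputs, so a change in any of them alters the returned
// value and makes the base class rerun resetFwd/resetBwd.
size_t MKLDNNConcatLayer::keepCondition() {
  size_t sum = 0;
  for (size_t i = 0; i < inputLayers_.size(); ++i) {
    sum += inputLayers_[i]->getOutputValue()->getElementCnt();
  }
  return sum;
}
```
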
+ CHECK(extOutVal_) << "external output value is necessary"; + output_.value = std::dynamic_pointer_cast(extOutVal_); + CHECK(inVals_[0] && outVal_) << "internal memories are necessary"; + for (size_t i = 0; i < cvtInVals_.size(); ++i) { + if (cvtInVals_[i]) { + pipeline.insert(pipeline.begin(), *cvtInVals_[i]); + } + } + if (cvtOutVal_) { + pipeline.push_back(*cvtOutVal_); + } + } + void prepareGradConversions(std::vector& pipeline) { + // external output grad is not necessary + // since output may be mkldnn internal buffer or merge them directly. + CHECK(outGrad_) << "internal output grad is necessary"; + if (extOutGrad_) { + CHECK_EQ(extOutGrad_->getData(), output_.grad->getData()) + << "the external buffer should share the same data with output_.grad"; + } + if (cvtOutGrad_) { + pipeline.insert(pipeline.begin(), *cvtOutGrad_); + } + for (size_t i = 0; i < cvtInGrads_.size(); ++i) { + if (cvtInGrads_[i]) { + pipeline.push_back(*cvtInGrads_[i]); + } + } + } }; } // namespace paddle diff --git a/paddle/gserver/layers/MKLDNNPoolLayer.cpp b/paddle/gserver/layers/MKLDNNPoolLayer.cpp index a18c455bea..a8252593c8 100644 --- a/paddle/gserver/layers/MKLDNNPoolLayer.cpp +++ b/paddle/gserver/layers/MKLDNNPoolLayer.cpp @@ -58,10 +58,11 @@ bool MKLDNNPoolLayer::init(const LayerMap& layerMap, } void MKLDNNPoolLayer::reshape( - int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) { + int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) { reshapeInput(bs, ih, iw); // ic_ and oc can not be changed - CHECK_EQ(inputElemenCnt_ / bs / ih / iw, (size_t)ic) + CHECK_EQ((size_t)ic, + inputLayers_[0]->getOutputValue()->getElementCnt() / bs / ih / iw) << "Input channel can not be changed"; // cal output sizes @@ -74,29 +75,25 @@ void MKLDNNPoolLayer::reshape( } void MKLDNNPoolLayer::resetFwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) { - resetFwdBuffers(in, out); + resetFwdBuffers(inputs[0], out); - resetFwdPD(fwdPD_, in, out); + resetFwdPD(fwdPD_, inputs[0], out); - resetFwdPipeline(pipeline, fwdPD_, in, out); + resetFwdPipeline(pipeline, fwdPD_, inputs[0], out); } void MKLDNNPoolLayer::resetBwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) { std::shared_ptr pd; - resetBwdBuffers(in, out); + resetBwdBuffers(inputs[0], out); - resetBwdPD(pd, in, out); + resetBwdPD(pd, inputs[0], out); - resetBwdPipeline(pipeline, pd, in, out); + resetBwdPipeline(pipeline, pd, inputs[0], out); } void MKLDNNPoolLayer::resetFwdBuffers(MKLDNNMatrixPtr& in, @@ -151,9 +148,9 @@ void MKLDNNPoolLayer::resetFwdPipeline( void MKLDNNPoolLayer::resetBwdBuffers(MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& out) { - CHECK(inVal_ && outVal_); + CHECK(inVals_[0] && outVal_); resetOutGrad(out, outVal_->getPrimitiveDesc()); - resetInGrad(in, inVal_->getPrimitiveDesc()); + resetInGrad(in, inVals_[0]->getPrimitiveDesc()); } void MKLDNNPoolLayer::resetBwdPD(std::shared_ptr& pd, diff --git a/paddle/gserver/layers/MKLDNNPoolLayer.h b/paddle/gserver/layers/MKLDNNPoolLayer.h index c5ec87828b..dad60156f0 100644 --- a/paddle/gserver/layers/MKLDNNPoolLayer.h +++ b/paddle/gserver/layers/MKLDNNPoolLayer.h @@ -53,18 +53,14 @@ public: const ParameterMap& parameterMap) override; void reshape( - int& bs, int& ic, int& ih, int& iw, int oc, int& oh, int& ow) override; + int& bs, int& ic, int& ih, int& iw, int& oc, int& oh, int& ow) override; void 
resetFwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void resetBwd(std::vector& pipeline, - MKLDNNMatrixPtr& in, - MKLDNNMatrixPtr& wgt, - MKLDNNMatrixPtr& bias, + std::vector& inputs, MKLDNNMatrixPtr& out) override; void printSizeInfo() override { @@ -75,11 +71,6 @@ public: } protected: - /** - * Forward functions: reset buffers(input, output), - * reset primitive descriptor, - * reset pipeline. - */ void resetFwdBuffers(MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& out); void resetFwdPD(std::shared_ptr& pd, MKLDNNMatrixPtr in, @@ -88,12 +79,6 @@ protected: std::shared_ptr& pd, MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& out); - - /** - * Backward functions: reset buffers(input, output), - * reset primitive descriptor, - * reset pipeline. - */ void resetBwdBuffers(MKLDNNMatrixPtr& in, MKLDNNMatrixPtr& out); void resetBwdPD(std::shared_ptr& pd, MKLDNNMatrixPtr& in, diff --git a/paddle/gserver/tests/test_MKLDNN.cpp b/paddle/gserver/tests/test_MKLDNN.cpp index 42644e9601..56b523f220 100644 --- a/paddle/gserver/tests/test_MKLDNN.cpp +++ b/paddle/gserver/tests/test_MKLDNN.cpp @@ -315,7 +315,7 @@ TEST(MKLDNNLayer, AddtoLayer) { static void getMKLDNNConcatConfig(TestConfig& cfg, const std::vector& inputs) { - CHECK_GE(inputs.size(), 2) << "at least two inputs"; + CHECK_GE(inputs.size(), 2UL) << "at least two inputs"; int oc = inputs[0].ic; for (size_t i = 1; i < inputs.size(); ++i) { CHECK_EQ(inputs[i].bs, inputs[0].bs); diff --git a/paddle/operators/activation_op.cc b/paddle/operators/activation_op.cc index 83d35a450d..c66d575d24 100644 --- a/paddle/operators/activation_op.cc +++ b/paddle/operators/activation_op.cc @@ -98,7 +98,6 @@ $y = \max(x, 0)$ } }; -template class LeakyReluOpMaker : public framework::OpProtoAndCheckerMaker { public: LeakyReluOpMaker(framework::OpProto *proto, @@ -106,8 +105,7 @@ class LeakyReluOpMaker : public framework::OpProtoAndCheckerMaker { : OpProtoAndCheckerMaker(proto, op_checker) { AddInput("X", "Input of LeakyRelu operator"); AddOutput("Y", "Output of LeakyRelu operator"); - AddAttr("alpha", "The small negative slope") - .SetDefault(static_cast(0.02f)); + AddAttr("alpha", "The small negative slope").SetDefault(0.02f); AddComment(R"DOC( LeakyRelu Activation Operator. @@ -117,7 +115,6 @@ $y = \max(x, \alpha * x)$ } }; -template class SoftShrinkOpMaker : public framework::OpProtoAndCheckerMaker { public: SoftShrinkOpMaker(framework::OpProto *proto, @@ -125,8 +122,7 @@ class SoftShrinkOpMaker : public framework::OpProtoAndCheckerMaker { : OpProtoAndCheckerMaker(proto, op_checker) { AddInput("X", "Input of Softshrink operator"); AddOutput("Y", "Output of Softshrink operator"); - AddAttr("lambda", "non-negative offset") - .SetDefault(static_cast(0.5f)); + AddAttr("lambda", "non-negative offset").SetDefault(0.5f); AddComment(R"DOC( Softshrink Activation Operator. 
@@ -173,7 +169,6 @@ $$y = x - \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$ } }; -template class HardShrinkOpMaker : public framework::OpProtoAndCheckerMaker { public: HardShrinkOpMaker(framework::OpProto *proto, @@ -181,8 +176,8 @@ class HardShrinkOpMaker : public framework::OpProtoAndCheckerMaker { : OpProtoAndCheckerMaker(proto, op_checker) { AddInput("X", "Input of HardShrink operator"); AddOutput("Y", "Output of HardShrink operator"); - AddAttr("threshold", "The value of threshold for HardShrink") - .SetDefault(static_cast(0.5)); + AddAttr("threshold", "The value of threshold for HardShrink") + .SetDefault(0.5f); AddComment(R"DOC( HardShrink Activation Operator. @@ -308,17 +303,16 @@ $$y = \frac{x}{1 + |x|}$$ } }; -template class BReluOpMaker : public framework::OpProtoAndCheckerMaker { public: BReluOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker) : OpProtoAndCheckerMaker(proto, op_checker) { AddInput("X", "Input of BRelu operator"); AddOutput("Y", "Output of BRelu operator"); - AddAttr("t_min", "The min marginal value of BRelu") - .SetDefault(static_cast(0)); - AddAttr("t_max", "The max marginal value of BRelu") - .SetDefault(static_cast(24)); + AddAttr("t_min", "The min marginal value of BRelu") + .SetDefault(static_cast(0)); + AddAttr("t_max", "The max marginal value of BRelu") + .SetDefault(static_cast(24)); AddComment(R"DOC( BRelu Activation Operator. @@ -328,7 +322,6 @@ $y = \max(\min(x, t_{min}), t_{max})$ } }; -template class SoftReluOpMaker : public framework::OpProtoAndCheckerMaker { public: SoftReluOpMaker(framework::OpProto *proto, @@ -336,8 +329,8 @@ class SoftReluOpMaker : public framework::OpProtoAndCheckerMaker { : OpProtoAndCheckerMaker(proto, op_checker) { AddInput("X", "Input of SoftRelu operator"); AddOutput("Y", "Output of SoftRelu operator"); - AddAttr("threshold", "The threshold value of SoftRelu") - .SetDefault(static_cast(40)); + AddAttr("threshold", "The threshold value of SoftRelu") + .SetDefault(40.0f); AddComment(R"DOC( SoftRelu Activation Operator. @@ -347,15 +340,13 @@ $y = \ln(1 + \exp(\max(\min(x, threshold), threshold))$ } }; -template class ELUOpMaker : public framework::OpProtoAndCheckerMaker { public: ELUOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker) : OpProtoAndCheckerMaker(proto, op_checker) { AddInput("X", "Input of ELU operator"); AddOutput("Y", "Output of ELU operator"); - AddAttr("alpha", "The alpha value of ELU") - .SetDefault(static_cast(1.0f)); + AddAttr("alpha", "The alpha value of ELU").SetDefault(1.0f); AddComment(R"DOC( ELU Activation Operator. @@ -368,15 +359,14 @@ $y = \max(0, x) + \min(0, \alpha * (e^x - 1))$ } }; -template class Relu6OpMaker : public framework::OpProtoAndCheckerMaker { public: Relu6OpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker) : OpProtoAndCheckerMaker(proto, op_checker) { AddInput("X", "Input of Relu6 operator"); AddOutput("Y", "Output of Relu6 operator"); - AddAttr("threshold", "The threshold value of Relu6") - .SetDefault(static_cast(6)); + AddAttr("threshold", "The threshold value of Relu6") + .SetDefault(6.0f); AddComment(R"DOC( Relu6 Activation Operator. 
@@ -386,15 +376,13 @@ $y = \min(\max(0, x), 6)$ } }; -template class PowOpMaker : public framework::OpProtoAndCheckerMaker { public: PowOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker) : OpProtoAndCheckerMaker(proto, op_checker) { AddInput("X", "Input of Pow operator"); AddOutput("Y", "Output of Pow operator"); - AddAttr("factor", "The exponential factor of Pow") - .SetDefault(static_cast(1)); + AddAttr("factor", "The exponential factor of Pow").SetDefault(1.0f); AddComment(R"DOC( Pow Activation Operator. @@ -404,17 +392,16 @@ $y = x^{factor}$ } }; -template class STanhOpMaker : public framework::OpProtoAndCheckerMaker { public: STanhOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker) : OpProtoAndCheckerMaker(proto, op_checker) { AddInput("X", "Input of STanh operator"); AddOutput("Y", "Output of STanh operator"); - AddAttr("scale_a", "The scale parameter of a for the input") - .SetDefault(static_cast(2 / 3)); - AddAttr("scale_b", "The scale parameter of b for the input") - .SetDefault(static_cast(1.7159)); + AddAttr("scale_a", "The scale parameter of a for the input") + .SetDefault(2.0f / 3.0f); + AddAttr("scale_b", "The scale parameter of b for the input") + .SetDefault(1.7159f); AddComment(R"DOC( STanh Activation Operator. @@ -424,7 +411,6 @@ $$y = b * \frac{e^{a * x} - e^{-a * x}}{e^{a * x} + e^{-a * x}}$$ } }; -template class ThresholdedReluOpMaker : public framework::OpProtoAndCheckerMaker { public: ThresholdedReluOpMaker(framework::OpProto *proto, @@ -432,8 +418,8 @@ class ThresholdedReluOpMaker : public framework::OpProtoAndCheckerMaker { : OpProtoAndCheckerMaker(proto, op_checker) { AddInput("X", "Input of ThresholdedRelu operator"); AddOutput("Y", "Output of ThresholdedRelu operator"); - AddAttr("threshold", "The threshold location of activation") - .SetDefault(static_cast(1.0)); + AddAttr("threshold", "The threshold location of activation") + .SetDefault(1.0f); AddComment(R"DOC( ThresholdedRelu Activation Operator. @@ -448,7 +434,6 @@ $$ } }; -template class HardSigmoidOpMaker : public framework::OpProtoAndCheckerMaker { public: HardSigmoidOpMaker(framework::OpProto *proto, @@ -456,10 +441,10 @@ class HardSigmoidOpMaker : public framework::OpProtoAndCheckerMaker { : OpProtoAndCheckerMaker(proto, op_checker) { AddInput("X", "Input of HardSigmoid operator"); AddOutput("Y", "Output of HardSigmoid operator"); - AddAttr("slope", "Slope for linear approximation of sigmoid") - .SetDefault(static_cast(0.2)); - AddAttr("offset", "Offset for linear approximation of sigmoid") - .SetDefault(static_cast(0.5)); + AddAttr("slope", "Slope for linear approximation of sigmoid") + .SetDefault(0.2f); + AddAttr("offset", "Offset for linear approximation of sigmoid") + .SetDefault(0.5f); AddComment(R"DOC( HardSigmoid Activation Operator. 
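
A gloss on the activation hunks above: removing `template <typename AttrType>` from the makers pins every attribute to float, so defaults like `static_cast<AttrType>(0.5)` collapse to plain float literals. A minimal sketch of the resulting maker shape; `FooOpMaker` and its `alpha` attribute are placeholders, not operators from this diff:

```cpp
#include "paddle/framework/op_registry.h"

// De-templated maker pattern: the attribute type is fixed to float,
// so SetDefault takes a float literal directly.
class FooOpMaker : public paddle::framework::OpProtoAndCheckerMaker {
 public:
  FooOpMaker(paddle::framework::OpProto *proto,
             paddle::framework::OpAttrChecker *op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("X", "Input of Foo operator");
    AddOutput("Y", "Output of Foo operator");
    AddAttr<float>("alpha", "An illustrative float attribute").SetDefault(1.0f);
    AddComment("Foo Activation Operator.");
  }
};
```
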
@@ -499,7 +484,7 @@ REGISTER_OP(tanh, ops::ActivationOp, ops::TanhOpMaker, tanh_grad, REGISTER_OP(tanh_shrink, ops::ActivationOp, ops::TanhShrinkOpMaker, tanh_shrink_grad, ops::ActivationOpGrad); -REGISTER_OP(softshrink, ops::ActivationOp, ops::SoftShrinkOpMaker, +REGISTER_OP(softshrink, ops::ActivationOp, ops::SoftShrinkOpMaker, softshrink_grad, ops::ActivationOpGrad); REGISTER_OP(sqrt, ops::ActivationOp, ops::SqrtOpMaker, sqrt_grad, @@ -523,35 +508,34 @@ REGISTER_OP(softplus, ops::ActivationOp, ops::SoftplusOpMaker, softplus_grad, REGISTER_OP(softsign, ops::ActivationOp, ops::SoftsignOpMaker, softsign_grad, ops::ActivationOpGrad); -REGISTER_OP(brelu, ops::ActivationOp, ops::BReluOpMaker, brelu_grad, +REGISTER_OP(brelu, ops::ActivationOp, ops::BReluOpMaker, brelu_grad, ops::ActivationOpGrad); -REGISTER_OP(leaky_relu, ops::ActivationOp, ops::LeakyReluOpMaker, +REGISTER_OP(leaky_relu, ops::ActivationOp, ops::LeakyReluOpMaker, leaky_relu_grad, ops::ActivationOpGrad); -REGISTER_OP(soft_relu, ops::ActivationOp, ops::SoftReluOpMaker, - soft_relu_grad, ops::ActivationOpGrad); +REGISTER_OP(soft_relu, ops::ActivationOp, ops::SoftReluOpMaker, soft_relu_grad, + ops::ActivationOpGrad); -REGISTER_OP(elu, ops::ActivationOp, ops::ELUOpMaker, elu_grad, +REGISTER_OP(elu, ops::ActivationOp, ops::ELUOpMaker, elu_grad, ops::ActivationOpGrad); -REGISTER_OP(relu6, ops::ActivationOp, ops::Relu6OpMaker, relu6_grad, +REGISTER_OP(relu6, ops::ActivationOp, ops::Relu6OpMaker, relu6_grad, ops::ActivationOpGrad); -REGISTER_OP(pow, ops::ActivationOp, ops::PowOpMaker, pow_grad, +REGISTER_OP(pow, ops::ActivationOp, ops::PowOpMaker, pow_grad, ops::ActivationOpGrad); -REGISTER_OP(stanh, ops::ActivationOp, ops::STanhOpMaker, stanh_grad, +REGISTER_OP(stanh, ops::ActivationOp, ops::STanhOpMaker, stanh_grad, ops::ActivationOpGrad); -REGISTER_OP(hard_shrink, ops::ActivationOp, ops::HardShrinkOpMaker, +REGISTER_OP(hard_shrink, ops::ActivationOp, ops::HardShrinkOpMaker, hard_shrink_grad, ops::ActivationOpGrad); -REGISTER_OP(thresholded_relu, ops::ActivationOp, - ops::ThresholdedReluOpMaker, thresholded_relu_grad, - ops::ActivationOpGrad); +REGISTER_OP(thresholded_relu, ops::ActivationOp, ops::ThresholdedReluOpMaker, + thresholded_relu_grad, ops::ActivationOpGrad); -REGISTER_OP(hard_sigmoid, ops::ActivationOp, ops::HardSigmoidOpMaker, +REGISTER_OP(hard_sigmoid, ops::ActivationOp, ops::HardSigmoidOpMaker, hard_sigmoid_grad, ops::ActivationOpGrad); #define REGISTER_ACTIVATION_CPU_KERNEL(act_type, functor, grad_functor) \ diff --git a/paddle/operators/adadelta_op.cc b/paddle/operators/adadelta_op.cc index b717e1647e..16a7794d5b 100644 --- a/paddle/operators/adadelta_op.cc +++ b/paddle/operators/adadelta_op.cc @@ -109,4 +109,5 @@ paramOut = param + paramUpdate$$ namespace ops = paddle::operators; REGISTER_OP_WITHOUT_GRADIENT(adadelta, ops::AdadeltaOp, ops::AdadeltaOpMaker); REGISTER_OP_CPU_KERNEL( - adadelta, ops::AdadeltaOpKernel); + adadelta, ops::AdadeltaOpKernel, + ops::AdadeltaOpKernel); diff --git a/paddle/operators/adadelta_op.cu b/paddle/operators/adadelta_op.cu index 3af1c8c8e9..9fb6185207 100644 --- a/paddle/operators/adadelta_op.cu +++ b/paddle/operators/adadelta_op.cu @@ -17,4 +17,5 @@ namespace ops = paddle::operators; REGISTER_OP_GPU_KERNEL( - adadelta, ops::AdadeltaOpKernel); + adadelta, ops::AdadeltaOpKernel, + ops::AdadeltaOpKernel); diff --git a/paddle/operators/adadelta_op.h b/paddle/operators/adadelta_op.h index d29e15c435..a8c5f0c8aa 100644 --- a/paddle/operators/adadelta_op.h +++ b/paddle/operators/adadelta_op.h 
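
The registration hunks above, and the kernel hunk just below, follow one recipe: register each optimizer kernel for both float and double, and promote the float-typed attributes to the kernel's scalar type T. A condensed sketch of the pattern, with the template arguments written out in the conventional `OpKernel<Place, T>` form used at this point in the codebase (a reconstruction, not a verbatim excerpt):

```cpp
// CPU registration for two element types; the GPU variant is analogous.
REGISTER_OP_CPU_KERNEL(
    adadelta, ops::AdadeltaOpKernel<paddle::platform::CPUPlace, float>,
    ops::AdadeltaOpKernel<paddle::platform::CPUPlace, double>);

// Inside the kernel: attributes are declared as float in the op proto,
// so a double-precision kernel casts them once and computes in T.
T rho = static_cast<T>(ctx.Attr<float>("rho"));
T epsilon = static_cast<T>(ctx.Attr<float>("epsilon"));
```
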
@@ -33,8 +33,8 @@ class AdadeltaOpKernel : public framework::OpKernel { avg_squared_grad_out_tensor->mutable_data(ctx.GetPlace()); avg_squared_update_out_tensor->mutable_data(ctx.GetPlace()); - float rho = ctx.Attr("rho"); - float epsilon = ctx.Attr("epsilon"); + T rho = static_cast(ctx.Attr("rho")); + T epsilon = static_cast(ctx.Attr("epsilon")); auto param = framework::EigenVector::Flatten( *ctx.Input("Param")); diff --git a/paddle/operators/adagrad_op.cu b/paddle/operators/adagrad_op.cu index 5b869e6bc5..1c870214b2 100644 --- a/paddle/operators/adagrad_op.cu +++ b/paddle/operators/adagrad_op.cu @@ -14,8 +14,8 @@ #define EIGEN_USE_GPU #include "paddle/operators/adagrad_op.h" -#include "paddle/operators/math/selected_rows_functor.h" #include "paddle/operators/math/math_function.h" +#include "paddle/operators/math/selected_rows_functor.h" #include "paddle/platform/cuda_helper.h" namespace paddle { @@ -134,8 +134,8 @@ struct SparseAdagradFunctor { T, 256><<(context) .stream()>>>(grad_merge_data, grad_merge->rows().data(), - lr, param_data, - moment_data, grad_width, epsilon); + lr, param_data, moment_data, grad_width, + epsilon); } }; diff --git a/paddle/operators/adam_op.cc b/paddle/operators/adam_op.cc index 97a091ae76..03faa2a7c5 100644 --- a/paddle/operators/adam_op.cc +++ b/paddle/operators/adam_op.cc @@ -127,4 +127,5 @@ paramOut = param - learningRate * moment_1/ ($\sqrt{(moment_2)} + \epsilon)$$ namespace ops = paddle::operators; REGISTER_OP_WITHOUT_GRADIENT(adam, ops::AdamOp, ops::AdamOpMaker); REGISTER_OP_CPU_KERNEL(adam, - ops::AdamOpKernel); + ops::AdamOpKernel, + ops::AdamOpKernel); diff --git a/paddle/operators/adam_op.cu b/paddle/operators/adam_op.cu index a3def912e5..6e34f7818c 100644 --- a/paddle/operators/adam_op.cu +++ b/paddle/operators/adam_op.cu @@ -17,4 +17,5 @@ namespace ops = paddle::operators; REGISTER_OP_GPU_KERNEL(adam, - ops::AdamOpKernel); + ops::AdamOpKernel, + ops::AdamOpKernel); diff --git a/paddle/operators/adam_op.h b/paddle/operators/adam_op.h index 45938006db..7f7fa1da1c 100644 --- a/paddle/operators/adam_op.h +++ b/paddle/operators/adam_op.h @@ -31,9 +31,9 @@ class AdamOpKernel : public framework::OpKernel { moment1_out_tensor->mutable_data(ctx.GetPlace()); moment2_out_tensor->mutable_data(ctx.GetPlace()); - float beta1 = ctx.Attr("beta1"); - float beta2 = ctx.Attr("beta2"); - float epsilon = ctx.Attr("epsilon"); + T beta1 = static_cast(ctx.Attr("beta1")); + T beta2 = static_cast(ctx.Attr("beta2")); + T epsilon = static_cast(ctx.Attr("epsilon")); auto param = framework::EigenVector::Flatten( *ctx.Input("Param")); diff --git a/paddle/operators/adamax_op.cc b/paddle/operators/adamax_op.cc index 14cf3841b3..d5bbc672e1 100644 --- a/paddle/operators/adamax_op.cc +++ b/paddle/operators/adamax_op.cc @@ -126,4 +126,5 @@ division by 0 error. 
namespace ops = paddle::operators; REGISTER_OP_WITHOUT_GRADIENT(adamax, ops::AdamaxOp, ops::AdamaxOpMaker); REGISTER_OP_CPU_KERNEL(adamax, - ops::AdamaxOpKernel); + ops::AdamaxOpKernel, + ops::AdamaxOpKernel); diff --git a/paddle/operators/adamax_op.cu b/paddle/operators/adamax_op.cu index fee3b6fc6b..057ef39025 100644 --- a/paddle/operators/adamax_op.cu +++ b/paddle/operators/adamax_op.cu @@ -17,4 +17,5 @@ namespace ops = paddle::operators; REGISTER_OP_GPU_KERNEL(adamax, - ops::AdamaxOpKernel); + ops::AdamaxOpKernel, + ops::AdamaxOpKernel); diff --git a/paddle/operators/adamax_op.h b/paddle/operators/adamax_op.h index 2c99832ec0..bf36ed7860 100644 --- a/paddle/operators/adamax_op.h +++ b/paddle/operators/adamax_op.h @@ -31,9 +31,9 @@ class AdamaxOpKernel : public framework::OpKernel { moment_out_tensor->mutable_data(ctx.GetPlace()); inf_norm_out_tensor->mutable_data(ctx.GetPlace()); - float beta1 = ctx.Attr("beta1"); - float beta2 = ctx.Attr("beta2"); - float epsilon = ctx.Attr("epsilon"); + T beta1 = static_cast(ctx.Attr("beta1")); + T beta2 = static_cast(ctx.Attr("beta2")); + T epsilon = static_cast(ctx.Attr("epsilon")); auto param = framework::EigenVector::Flatten( *ctx.Input("Param")); diff --git a/paddle/operators/beam_search_op.cc b/paddle/operators/beam_search_op.cc index 17926a813d..8c3e2a303f 100644 --- a/paddle/operators/beam_search_op.cc +++ b/paddle/operators/beam_search_op.cc @@ -139,7 +139,7 @@ bool BeamSearch::NextItemSet(std::vector *items) { items->reserve(framework::product(ids.dims())); for (size_t offset = abs_lod[lod_level_][sent_offset_]; offset < abs_lod[lod_level_][sent_offset_ + 1]; offset++) { - for (int d = 0; d < instance_dim; d++) { + for (size_t d = 0; d < instance_dim; d++) { const size_t dim_offset = offset * instance_dim + d; items->emplace_back(offset, ids_data[dim_offset], scores_data[dim_offset]); diff --git a/paddle/operators/ftrl_op.cc b/paddle/operators/ftrl_op.cc new file mode 100644 index 0000000000..cb7ae69196 --- /dev/null +++ b/paddle/operators/ftrl_op.cc @@ -0,0 +1,139 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
*/ + +#include "paddle/operators/ftrl_op.h" + +namespace paddle { +namespace operators { + +class FTRLOp : public framework::OperatorWithKernel { + public: + using framework::OperatorWithKernel::OperatorWithKernel; + + protected: + void InferShape(framework::InferShapeContext *ctx) const override { + PADDLE_ENFORCE(ctx->HasInput("Param"), + "Input(Param) of FTRL should not be null."); + PADDLE_ENFORCE(ctx->HasInput("SquaredAccumulator"), + "Input(SquaredAccumulator) of FTRL should not be null."); + PADDLE_ENFORCE(ctx->HasInput("LinearAccumulator"), + "Input(LinearAccumulator) of FTRL should not be null."); + PADDLE_ENFORCE(ctx->HasInput("Grad"), + "Input(Grad) of FTRL should not be null."); + PADDLE_ENFORCE(ctx->HasInput("LearningRate"), + "Input(LearningRate) of FTRL should not be null."); + + PADDLE_ENFORCE(ctx->HasOutput("ParamOut"), + "Output(ParamOut) of FTRL should not be null."); + PADDLE_ENFORCE(ctx->HasOutput("SquaredAccumOut"), + "Output(SquaredAccumOut) of FTRL should not be null."); + PADDLE_ENFORCE(ctx->HasOutput("LinearAccumOut"), + "Output(LinearAccumOut) of FTRL should not be null."); + + auto param_dim = ctx->GetInputDim("Param"); + PADDLE_ENFORCE_EQ(param_dim, ctx->GetInputDim("Grad"), + "Two input of FTRL Op's dimension must be same."); + + auto lr_dim = ctx->GetInputDim("LearningRate"); + PADDLE_ENFORCE_EQ(framework::product(lr_dim), 1, + "Learning Rate should be a scalar."); + + ctx->SetOutputDim("ParamOut", param_dim); + ctx->SetOutputDim("SquaredAccumOut", param_dim); + ctx->SetOutputDim("LinearAccumOut", param_dim); + } +}; + +class FTRLOpMaker : public framework::OpProtoAndCheckerMaker { + public: + FTRLOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker) + : OpProtoAndCheckerMaker(proto, op_checker) { + AddInput("Param", + "(Tensor, default Tensor) " + "Input parameter value that has to be updated."); + AddInput("SquaredAccumulator", + "(Tensor, default Tensor) " + "Accumulator that accumulates squared gradients."); + AddInput("LinearAccumulator", + "(Tensor, default Tensor) " + "Accumulator that accumulates linear gradients."); + AddInput("Grad", + "(Tensor, default Tensor) " + "Input gradient of the parameter."); + AddInput("LearningRate", + "(Tensor, default Tensor) " + "The learning rate should be a tensor of size 1."); + + AddOutput("ParamOut", "(Tensor) Output updated parameter value."); + AddOutput("SquaredAccumOut", + "(Tensor) Output accumulated squared" + " gradients."); + AddOutput("LinearAccumOut", + "(Tensor) Output accumulated linear" + " gradients."); + + AddAttr("l1", + "(float, default 0.0) " + "L1 regularization strength.") + .SetDefault(0.0f); + AddAttr("l2", + "(float, default 0.0) " + "L2 regularization strength.") + .SetDefault(0.0f); + AddAttr("lr_power", + "(float, default -0.5f) " + "Learning Rate Power.") + .SetDefault(-0.5f); + AddComment(R"DOC( +FTRL (Follow The Regularized Leader) Operator. 
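
Ahead of the update formulas spelled out in the DOC block below, a scalar walk-through may help. This is a minimal, self-contained sketch of one FTRL step for the common `lr_power == -0.5` branch; the plain doubles and their initial values are illustrative stand-ins for the operator's tensors:

```cpp
#include <cmath>
#include <cstdio>

// One scalar FTRL step, mirroring the lr_power == -0.5 branch of the
// kernel: accumulate the squared gradient, update the linear accumulator,
// then shrink the parameter to zero unless |linear_accum| > l1.
int main() {
  double param = 1.0, squared_accum = 0.1, linear_accum = 0.0;
  const double grad = 0.5, lr = 0.1, l1 = 0.1, l2 = 0.1;

  const double new_accum = squared_accum + grad * grad;
  linear_accum +=
      grad - (std::sqrt(new_accum) - std::sqrt(squared_accum)) / lr * param;

  const double sign = (linear_accum > 0.0) - (linear_accum < 0.0);
  const double x = l1 * sign - linear_accum;
  const double y = std::sqrt(new_accum) / lr + 2.0 * l2;
  param = std::fabs(linear_accum) > l1 ? x / y : 0.0;
  squared_accum += grad * grad;

  std::printf("param = %f\n", param);
  return 0;
}
```
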
+ +Optimizer that implements the FTRL algorithm: + +$$ +new\_accum = squared\_accum + grad^2 \\ +if (lr\_power == -0.5) { + linear\_accum += grad - (\surd(new\_accum) - \surd(squared\_accum)) / + (learning\_rate * param) \\ +} else { + linear\_accum += grad - + (new\_accum^{-lr\_power} - accum^{-lr\_power}) / + (learning\_rate * param) \\ +} + +x = (l1 * sign(linear\_accum) - linear\_accum) +if (lr\_power == -0.5) { + y = \frac{\surd(new\_accum)}{learning\_rate} + (2 * l2) \\ + pre\_shrink = \frac{x}{y} \\ + param = (abs(linear\_accum) > l1).select(pre\_shrink, 0.0) \\ +} else { + y = \frac{new\_accum^{-lr\_power}}{learning\_rate} + (2 * l2) \\ + pre\_shrink = \frac{x}{y} \\ + param = (abs(linear\_accum) > l1).select(pre\_shrink, 0.0) \\ +} +squared\_accum += grad^2; +$$ + +The paper that proposed Follow The Regularized Leader (FTRL): +(https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf) + +)DOC"); + } +}; +} // namespace operators +} // namespace paddle + +namespace ops = paddle::operators; +REGISTER_OP_WITHOUT_GRADIENT(ftrl, ops::FTRLOp, ops::FTRLOpMaker); +REGISTER_OP_CPU_KERNEL(ftrl, + ops::FTRLOpKernel); diff --git a/paddle/operators/ftrl_op.cu b/paddle/operators/ftrl_op.cu new file mode 100644 index 0000000000..97b36dade6 --- /dev/null +++ b/paddle/operators/ftrl_op.cu @@ -0,0 +1,19 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +You may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software distributed +under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR +CONDITIONS OF ANY KIND, either express or implied. See the License for the +specific language governing permissions and limitations under the License. */ + +#define EIGEN_USE_GPU +#include "paddle/operators/ftrl_op.h" + +namespace ops = paddle::operators; +REGISTER_OP_GPU_KERNEL(ftrl, + ops::FTRLOpKernel); diff --git a/paddle/operators/ftrl_op.h b/paddle/operators/ftrl_op.h new file mode 100644 index 0000000000..b040162f8d --- /dev/null +++ b/paddle/operators/ftrl_op.h @@ -0,0 +1,96 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
*/ + +#pragma once +#include "paddle/framework/eigen.h" +#include "paddle/framework/op_registry.h" + +namespace paddle { +namespace operators { + +using Tensor = framework::Tensor; +template +using EigenVector = framework::EigenVector; + +template +class FTRLOpKernel : public framework::OpKernel { + public: + void Compute(const framework::ExecutionContext& ctx) const override { + auto* param_out = ctx.Output("ParamOut"); + auto* sq_accum_out = ctx.Output("SquaredAccumOut"); + auto* lin_accum_out = ctx.Output("LinearAccumOut"); + + param_out->mutable_data(ctx.GetPlace()); + sq_accum_out->mutable_data(ctx.GetPlace()); + lin_accum_out->mutable_data(ctx.GetPlace()); + + auto grad = ctx.Input("Grad"); + + auto l1 = static_cast(ctx.Attr("l1")); + auto l2 = static_cast(ctx.Attr("l2")); + auto lr_power = static_cast(ctx.Attr("lr_power")); + + auto p = EigenVector::Flatten(*ctx.Input("Param")); + auto sq_accum = + EigenVector::Flatten(*ctx.Input("SquaredAccumulator")); + auto lin_accum = + EigenVector::Flatten(*ctx.Input("LinearAccumulator")); + auto g = EigenVector::Flatten(*grad); + auto lr = EigenVector::Flatten(*ctx.Input("LearningRate")); + + auto p_out = EigenVector::Flatten(*param_out); + auto s_acc_out = EigenVector::Flatten(*sq_accum_out); + auto l_acc_out = EigenVector::Flatten(*lin_accum_out); + auto place = ctx.GetEigenDevice(); + + Eigen::DSizes grad_dsize(grad->numel()); + + auto new_accum = sq_accum + g * g; + // Special case for lr_power = -0.5 + if (lr_power == static_cast(-0.5)) { + l_acc_out.device(place) = + lin_accum + g - + ((new_accum.sqrt() - sq_accum.sqrt()) / lr.broadcast(grad_dsize)) * p; + } else { + l_acc_out.device(place) = + lin_accum + g - + ((new_accum.pow(-lr_power) - sq_accum.pow(-lr_power)) / + lr.broadcast(grad_dsize)) * + p; + } + + auto x = (l_acc_out.constant(l1) * l_acc_out.sign() - l_acc_out); + if (lr_power == static_cast(-0.5)) { + auto y = (new_accum.sqrt() / lr.broadcast(grad_dsize)) + + l_acc_out.constant(static_cast(2) * l2); + auto pre_shrink = x / y; + p_out.device(place) = + (l_acc_out.abs() > l_acc_out.constant(l1)) + .select(pre_shrink, p.constant(static_cast(0))); + } else { + auto y = (new_accum.pow(-lr_power) / lr.broadcast(grad_dsize)) + + l_acc_out.constant(static_cast(2) * l2); + auto pre_shrink = x / y; + p_out.device(place) = + (l_acc_out.abs() > l_acc_out.constant(l1)) + .select(pre_shrink, p.constant(static_cast(0))); + } + + s_acc_out.device(place) = sq_accum + g * g; + } +}; + +} // namespace operators +} // namespace paddle diff --git a/paddle/operators/gru_unit_op.cc b/paddle/operators/gru_unit_op.cc index 89c027ff1e..877c969103 100644 --- a/paddle/operators/gru_unit_op.cc +++ b/paddle/operators/gru_unit_op.cc @@ -114,18 +114,19 @@ class GRUUnitOpMaker : public framework::OpProtoAndCheckerMaker { .SetDefault(sigmoid) .InEnum({identity, sigmoid, tanh, relu}); AddComment(R"DOC( -GRUUnit Operator. 
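
The doc hunk that begins here, together with the one-line kernel fix further below, changes how the final output is interpolated. As a cross-check of the sign change: the new kernel expression `u * (c - h_p) + h_p` regroups to the documented convex combination

$$h_t = u_t * ({h}_t - h_{t-1}) + h_{t-1} = dot((1 - u_t), h_{t-1}) + dot(u_t, {h}_t)$$

while the old `u * (h_p - c) + c` regrouped to $dot((1 - u_t), {h}_t) + dot(u_t, h_{t-1})$, which is what the old doc stated. Code and doc are updated together, so the kernel now follows the standard GRU convention of gating the candidate ${h}_t$ with $u_t$.
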
-
-This operator implements partial calculations of the GRU unit as follows:
+GRUUnit Operator implements partial calculations of the GRU unit as follows:
 
 $$
-update \ gate: u_t = actGate(xu_t + W_u * hidden_{prev} + bias_u) \\
-reset \ gate: r_t = actGate(xr_t + W_r * hidden_{prev} + bias_r) \\
-output \ candidate: {h}_t = actNode({xc}_t + W_c * dot(r_t, hidden_{prev}) + bias_c) \\
-output: h_t = dot((1-u_t), {h}_t) + dot(u_t, hidden_{prev})
+update \ gate: u_t = actGate(xu_t + W_u * h_{t-1} + b_u) \\
+reset \ gate: r_t = actGate(xr_t + W_r * h_{t-1} + b_r) \\
+output \ candidate: {h}_t = actNode(xc_t + W_c * dot(r_t, h_{t-1}) + b_c) \\
+output: h_t = dot((1 - u_t), h_{t-1}) + dot(u_t, {h}_t)
 $$
 
-The rest of GRU unit can be completed by using FCOp's output as the input of GRUUnitOp.
+which is the same as one time step of the GRU Operator.
+
+@note To implement the complete GRU unit, a fully-connected operator must be
+used beforehand to feed xu, xr and xc as the Input of the GRUUnit operator.
 
 )DOC");
   }
 
@@ -150,12 +151,6 @@ class GRUUnitGradOp : public framework::OperatorWithKernel {
                    "ResetHiddenPrev");
     PADDLE_ENFORCE(ctx->HasInput("Hidden"),
                    "Input(%s) of GRUUnitGradOp should not be null.", "Hidden");
-    PADDLE_ENFORCE(ctx->HasInput(framework::GradVarName("Gate")),
-                   "Input(%s@GRAD) of GRUUnitGradOp should not be null.",
-                   "Gate");
-    PADDLE_ENFORCE(ctx->HasInput(framework::GradVarName("ResetHiddenPrev")),
-                   "Input(%s@GRAD) of GRUUnitGradOp should not be null.",
-                   "ResetHiddenPrev");
     PADDLE_ENFORCE(ctx->HasInput(framework::GradVarName("Hidden")),
                    "Input(%s@GRAD) of GRUUnitGradOp should not be null.",
                    "Hidden");
diff --git a/paddle/operators/gru_unit_op.h b/paddle/operators/gru_unit_op.h
index c53e7d9827..050430d325 100644
--- a/paddle/operators/gru_unit_op.h
+++ b/paddle/operators/gru_unit_op.h
@@ -110,7 +110,7 @@ class GRUUnitKernel : public framework::OpKernel<T> {
     auto c = g.slice(c_offsets, extents);  // output candidate
 
     // calculate final output
-    h.device(place) = u * (h_p - c) + c;
+    h.device(place) = u * (c - h_p) + h_p;
   }
 };
 
@@ -146,35 +146,27 @@ class GRUUnitGradKernel : public framework::OpKernel<T> {
     auto* weight_grad =
         context.Output<Tensor>(framework::GradVarName("Weight"));
     auto* bias_grad = context.Output<Tensor>(framework::GradVarName("Bias"));
-    input_grad->mutable_data<T>(context.GetPlace());
-    hidden_prev_grad->mutable_data<T>(context.GetPlace());
-    weight_grad->mutable_data<T>(context.GetPlace());
     Tensor gate_grad;
-    gate_grad.mutable_data<T>(input->dims(), context.GetPlace());
     Tensor reset_hidden_prev_grad;
-    reset_hidden_prev_grad.mutable_data<T>(reset_hidden_prev->dims(),
-                                           context.GetPlace());
-
-    int batch_size = input->dims()[0];
-    int frame_size = hidden_prev->dims()[1];
 
     const T* hidden_prev_data = hidden_prev->data<T>();
-    T* hidden_prev_grad_data = hidden_prev_grad->data<T>();
     const T* weight_data = weight->data<T>();
-    T* weight_grad_data = weight_grad->data<T>();
-    T* gate_grad_data = gate_grad.data<T>();
+    T* gate_grad_data =
+        gate_grad.mutable_data<T>(input->dims(), context.GetPlace());
     const T* reset_hidden_prev_data = reset_hidden_prev->data<T>();
-    T* reset_hidden_prev_grad_data = reset_hidden_prev_grad.data<T>();
+    T* reset_hidden_prev_grad_data = reset_hidden_prev_grad.mutable_data<T>(
+        reset_hidden_prev->dims(), context.GetPlace());
 
     auto h_p = EigenMatrix<T>::From(*hidden_prev);
     auto g = EigenMatrix<T>::From(*gate);
     auto d_h = EigenMatrix<T>::From(*hidden_grad);
-    auto d_x = EigenMatrix<T>::From(*input_grad);
-    auto d_h_p = EigenMatrix<T>::From(*hidden_prev_grad);
     auto d_g = EigenMatrix<T>::From(gate_grad);
     auto d_r_h_p =
EigenMatrix::From(reset_hidden_prev_grad); auto place = context.GetEigenDevice(); + int batch_size = input->dims()[0]; + int frame_size = hidden_prev->dims()[1]; + Eigen::array extents({{batch_size, frame_size}}); Eigen::array u_offsets({{0, 0}}); auto u = g.slice(u_offsets, extents); // update gate @@ -185,38 +177,52 @@ class GRUUnitGradKernel : public framework::OpKernel { // backward for unactivated update gate ActGradCompute(context.Attr("gate_activation"), place, u, u, - d_g.slice(u_offsets, extents), d_h * (h_p - c)); + d_g.slice(u_offsets, extents), d_h * (c - h_p)); // backward for unactivated output candidate ActGradCompute(context.Attr("activation"), place, c, c, - d_g.slice(c_offsets, extents), d_h * (u.constant(T(1)) - u)); + d_g.slice(c_offsets, extents), d_h * u); // backward for reset_hidden_prev math::gemm(context.device_context(), false, true, batch_size, frame_size, frame_size, 1, gate_grad_data + frame_size * 2, frame_size * 3, weight_data + frame_size * frame_size * 2, frame_size, 0, reset_hidden_prev_grad_data, frame_size); - // backward for state_weight - math::gemm( - context.device_context(), true, false, frame_size, frame_size, - batch_size, 1, reset_hidden_prev_data, frame_size, - gate_grad_data + frame_size * 2, frame_size * 3, 0, - weight_grad_data + frame_size * frame_size * 2, frame_size); // backward for unactivated reset gate ActGradCompute(context.Attr("gate_activation"), place, r, r, d_g.slice(r_offsets, extents), d_r_h_p * h_p); - // backward for update_gate_weight and reset_gate_weight - math::gemm(context.device_context(), true, false, frame_size, - frame_size * 2, batch_size, 1, hidden_prev_data, - frame_size, gate_grad_data, frame_size * 3, 0, - weight_grad_data, frame_size * 2); + // backward for weight + if (weight_grad) { + T* weight_grad_data = weight_grad->mutable_data(context.GetPlace()); + // backward for state_weight + math::gemm( + context.device_context(), true, false, frame_size, frame_size, + batch_size, 1, reset_hidden_prev_data, frame_size, + gate_grad_data + frame_size * 2, frame_size * 3, 0, + weight_grad_data + frame_size * frame_size * 2, frame_size); + + // backward for update_gate_weight and reset_gate_weight + math::gemm(context.device_context(), true, false, frame_size, + frame_size * 2, batch_size, 1, hidden_prev_data, + frame_size, gate_grad_data, frame_size * 3, 0, + weight_grad_data, frame_size * 2); + } // backward for hidden_prev - d_h_p.device(place) = d_r_h_p * r + d_h * u; - math::gemm(context.device_context(), false, true, batch_size, - frame_size, frame_size * 2, 1, gate_grad_data, - frame_size * 3, weight_data, frame_size * 2, 1, - hidden_prev_grad_data, frame_size); + if (hidden_prev_grad) { + T* hidden_prev_grad_data = + hidden_prev_grad->mutable_data(context.GetPlace()); + auto d_h_p = EigenMatrix::From(*hidden_prev_grad); + d_h_p.device(place) = d_r_h_p * r + d_h * (u.constant(T(1)) - u); + math::gemm(context.device_context(), false, true, batch_size, + frame_size, frame_size * 2, 1, gate_grad_data, + frame_size * 3, weight_data, frame_size * 2, 1, + hidden_prev_grad_data, frame_size); + } // backward for input - d_x.device(place) = d_g; + if (input_grad) { + input_grad->mutable_data(context.GetPlace()); + auto d_x = EigenMatrix::From(*input_grad); + d_x.device(place) = d_g; + } // backward for bias if (bias_grad) { bias_grad->mutable_data(context.GetPlace()); diff --git a/paddle/operators/linear_chain_crf_op.h b/paddle/operators/linear_chain_crf_op.h index ddf7398175..872f659fed 100644 --- 
a/paddle/operators/linear_chain_crf_op.h +++ b/paddle/operators/linear_chain_crf_op.h @@ -271,7 +271,7 @@ class LinearChainCRFOpKernel : public framework::OpKernel { ll -= std::log(sum); // Now ll is equal to -log(Z). - const int* lbl = label.data(); + const int64_t* lbl = label.data(); PADDLE_ENFORCE_LT( static_cast(*std::max_element(lbl, lbl + seq_length)), tag_num, "An invalid tag label that execesses the largest tag number."); @@ -449,7 +449,7 @@ class LinearChainCRFGradOpKernel : public framework::OpKernel { Tensor* emission_grad) const { const T* w_exps = transition_exps.data(); const T* x_exps = emission_exps.data(); - const int* label_value = label.data(); + const int64_t* label_value = label.data(); T* beta_value = beta->data(); auto x_dims = emission_exps.dims(); diff --git a/paddle/operators/sequence_conv_op.cc b/paddle/operators/sequence_conv_op.cc index 41cadce4c6..c5533732d4 100644 --- a/paddle/operators/sequence_conv_op.cc +++ b/paddle/operators/sequence_conv_op.cc @@ -179,7 +179,9 @@ REGISTER_OP(sequence_conv, ops::SequenceConvOp, ops::SequenceConvOpMaker, sequence_conv_grad, ops::SequenceConvGradOp); REGISTER_OP_CPU_KERNEL( - sequence_conv, ops::SequenceConvKernel); + sequence_conv, ops::SequenceConvKernel, + ops::SequenceConvKernel); REGISTER_OP_CPU_KERNEL( sequence_conv_grad, - ops::SequenceConvGradKernel); + ops::SequenceConvGradKernel, + ops::SequenceConvGradKernel); diff --git a/paddle/operators/sequence_conv_op.cu.cc b/paddle/operators/sequence_conv_op.cu.cc index 6106b0e46c..c8136dbcb3 100644 --- a/paddle/operators/sequence_conv_op.cu.cc +++ b/paddle/operators/sequence_conv_op.cu.cc @@ -16,7 +16,9 @@ namespace ops = paddle::operators; REGISTER_OP_GPU_KERNEL( - sequence_conv, ops::SequenceConvKernel); + sequence_conv, ops::SequenceConvKernel, + ops::SequenceConvKernel); REGISTER_OP_GPU_KERNEL( sequence_conv_grad, - ops::SequenceConvGradKernel); + ops::SequenceConvGradKernel, + ops::SequenceConvGradKernel); diff --git a/paddle/scripts/docker/build.sh b/paddle/scripts/docker/build.sh index 595d25fd48..fda2a2f1b7 100644 --- a/paddle/scripts/docker/build.sh +++ b/paddle/scripts/docker/build.sh @@ -144,7 +144,7 @@ function gen_dockerfile() { DOCKERFILE_GPU_ENV="" DOCKERFILE_CUDNN_DSO="" if [[ ${WITH_GPU:-OFF} == 'ON' ]]; then - DOCKERFILE_GPU_ENV="ENV LD_LIBRARY_PATH /usr/lib/x86_64-linux-gnu:${LD_LIBRARY_PATH}" + DOCKERFILE_GPU_ENV="ENV LD_LIBRARY_PATH /usr/lib/x86_64-linux-gnu:\${LD_LIBRARY_PATH}" DOCKERFILE_CUDNN_DSO="RUN ln -s /usr/lib/x86_64-linux-gnu/libcudnn.so.5 /usr/lib/x86_64-linux-gnu/libcudnn.so" fi diff --git a/paddle/trainer/Trainer.cpp b/paddle/trainer/Trainer.cpp index 88e684849d..3e4a2b5fa8 100644 --- a/paddle/trainer/Trainer.cpp +++ b/paddle/trainer/Trainer.cpp @@ -138,7 +138,7 @@ void Trainer::init(const std::shared_ptr& config, } if (FLAGS_use_mkldnn) { - CHECK_EQ(FLAGS_trainer_count, 1UL) << "MKLDNN only need 1 trainer"; + CHECK_EQ(FLAGS_trainer_count, 1) << "MKLDNN only need 1 trainer"; } if (testing) { diff --git a/paddle/trainer/tests/CMakeLists.txt b/paddle/trainer/tests/CMakeLists.txt index 80665551ec..2739878b7f 100644 --- a/paddle/trainer/tests/CMakeLists.txt +++ b/paddle/trainer/tests/CMakeLists.txt @@ -11,7 +11,6 @@ add_unittest_without_exec(test_Trainer test_Trainer.cpp) add_test(NAME test_Trainer COMMAND ${PADDLE_SOURCE_DIR}/paddle/.set_python_path.sh -d ${PADDLE_SOURCE_DIR}/python/ - ${PYTHON_EXECUTABLE} ${PADDLE_SOURCE_DIR}/paddle/trainer/tests/gen_proto_data.py && ${PADDLE_SOURCE_DIR}/paddle/.set_python_path.sh -d 
${PADDLE_SOURCE_DIR}/python/
        ${CMAKE_CURRENT_BINARY_DIR}/test_Trainer
        WORKING_DIRECTORY ${PADDLE_SOURCE_DIR}/paddle/)
diff --git a/paddle/trainer/tests/chunking.conf b/paddle/trainer/tests/chunking.conf
deleted file mode 100644
index d88df919df..0000000000
--- a/paddle/trainer/tests/chunking.conf
+++ /dev/null
@@ -1,125 +0,0 @@
-#edit-mode: -*- python -*-
-# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#Todo(luotao02) This config is only used for unitest. It is out of date now, and will be updated later.
-
-TrainData(ProtoData(
-    files = 'trainer/tests/train_files.txt',
-    usage_ratio = 1.0,
-))
-
-TestData(ProtoData(
-    files = 'trainer/tests/test_files.txt'
-))
-
-default_initial_std(1)
-default_decay_rate(4e-4)
-default_device(0)
-
-Inputs("features", "word", "pos", "chunk")
-
-Outputs("crf")
-
-Layer(
-    name = "features",
-    type = "data",
-    size = 4339,
-)
-
-Layer(
-    name = "word",
-    type = "data",
-    size = 478,
-)
-
-Layer(
-    name = "pos",
-    type = "data",
-    size = 45
-)
-
-Layer(
-    name = "chunk",
-    type = "data",
-    size = 23
-)
-
-Layer(
-    name = "output",
-    type = "mixed",
-    size = 23,
-    bias = False,
-    device = -1,
-    inputs = [
-        FullMatrixProjection("features", parameter_name="feature_weights"),
-        # TableProjection("word"),
-        # TableProjection("pos"),
-    ],
-)
-
-Layer(
-    name = "crf",
-    type = "crf",
-    size = 23,
-    device = -1,
-    inputs = [
-        Input("output", parameter_name="crfw"),
-        "chunk"
-    ]
-)
-
-Layer(
-    name = "crf_decoding",
-    type = "crf_decoding",
-    size = 23,
-    device = -1,
-    inputs = [
-        Input("output", parameter_name="crfw"),
-        "chunk"
-    ]
-)
-
-Evaluator(
-    name = "error",
-    type = "sum",
-    inputs = "crf_decoding",
-)
-
-'''
-# chuck evaluator cannot be used for GPU training
-Evaluator(
-    name = "chunk_f1",
-    type = "chunk",
-    inputs = ["crf_decoding", "chunk"],
-    chunk_scheme = "IOB",
-    num_chunk_types = 11,
-)
-'''
-
-Settings(
-    algorithm = 'sgd',
-    batch_size = 100,
-    average_window = 0.5,
-    max_average_window = 2500,
-    learning_rate = 1e-1,
-    learning_rate_decay_a = 5e-7,
-    learning_rate_decay_b = 0.75,
-    l1weight = 0,
-    l2weight = 1,
-    c1 = 0.0001,
-    backoff = 0.5,
-    owlqn_steps = 100,
-    max_backoff = 5,
-)
diff --git a/paddle/trainer/tests/compare_sparse_data b/paddle/trainer/tests/compare_sparse_data
deleted file mode 100644
index 18fc654138..0000000000
Binary files a/paddle/trainer/tests/compare_sparse_data and /dev/null differ
diff --git a/paddle/trainer/tests/data_bin_part b/paddle/trainer/tests/data_bin_part
deleted file mode 100644
index 66ede391b0..0000000000
Binary files a/paddle/trainer/tests/data_bin_part and /dev/null differ
9XR9  9R9YW9XB9ԎB@>54WFR9B9 IC70FŔ6ADMIַ;70DB9B9B>54WFR9B9 IC0FŔ61I7 9XR9  9R9&$KX/9CR=U93ATX9CR=U93A 9XR9  9R9DB9XCK29R5>9XWA/1C2ODKOD539C2R5>9WA1C2ODKOD 9XR9  9R9PNMRF=:9X94.б H>N̛<;TTН?T("'!53MRF=:994.б H>N;Tܞ? 9XR9  9R9&$9X>KTCΚIRН?>AT9>KCΚIR?A 9XR9  9R9&$KX/9CR=U93ATX9CR=U93A 9XR9  9R920CCTC7VCEICַ;C;-CTCCCVĸIַ;C-C 9XR9  9R9PNMRF=:9X94.б H>N̛<;TTН?T("'!53MRF=:994.б H>N;Tܞ? 9XR9  9R9DBHW:9XB9ԎB@=ʼnEDWFR9B99XCT86HW:9B9B=ʼnEDWFR9B99C 9XR9  9R9&$KX/9CR=U93ATX9CR=U93A 9XR9  9R99X@T9XR0ܥ69@T9Rܥ6 9XR9  9R9PNMRF=:9X94.б H>N̛<;TTН?T("'!53MRF=:994.б H>N;Tܞ? 9XR9  9R9G7;CT G7;C 9XR9  9R9&$KX/9CR=U93ATX9CR=U93A 9XR9  9R9A?Hʜ2RA@RS9@>9X3>)כ$>;GB;9Hʜ2RA@RS9@>93>)כ$>;G 9XR9  9R9PNMRF=:9X94.б H>N̛<;TTН?T("'!53MRF=:994.б H>N;Tܞ? 9XR9  9R9><NR=9XC9S99׵AAKEAABC/;9NR=9C9S99׵AAKEAABC/=@KE= =@E=><@Q0H@KûAQH@KûAQ,HPHCB020@0H@ûAQH@ûAQ,HPHB0=@KE= =@E==@J@KI5@=@J@I5@=@KE= =@E=/-7ûAK3@3@K7KK3!#!7ûAK3@3@7K3=@KE= =@E=86=@KAKCK-3O?3377CT)'=@AKCK-.?.7C=@KE= =@E=/-K6S5@KE=4I,S@@@)'K6S5@E=4I,S@@=@KE= =@E= @K@?@@=@KE= =@E=)'C@ַ;C@GC@K=@AB&$C@ַ;C@GC@=@AB=@KE= =@E=DBIK@KQOַ;OE6V=ԋ J>JT7LJ653IK@QOַ;OE6V=JJ7LJ6ԃP;ܢE4JAˑ+86Q FM1UܢE4NԃP;O4HН?U,T#!Q FM1UAOH,ԃP;ܢE4JAˑ+,*ԃP;Q8ȘIK5ܢE4N>4OJAQ8K5>4OԃP;ܢE4JAˑ+ ԃP;1ܢE4NН?̛4׶K21T)'AHQ8K5C>4׶K21ԃP;ܢE4JAˑ+86ԃP;HQ8ȘIK5ܢE4NC>4б XQT)'AHQ8K5C>4б XQԃP;ܢE4JAˑ+><ԃP;HQ8ȘIK5ܢE4NC>4HН?Н?>HT,*AHQ8K5C>4H?HTԃP;ܢE4JAˑ+MKԃP;HQ8ȘIK5NC>4ԃP;Q:33ȘIJ82THA>4AQ:33ȘIJ82HԃP;ܢE4JAˑ+53ԃP;HQ8ȘIK5ܢE4NC>4TН?T&$AHQ8K5C>4Tܞ?ԃP;ܢE4JAˑ+/-ԃP;HL-TܢE4NC41TН?> AHL-TC41?ԃP;ܢE4JAˑ+Dֈ;0OFԃP;ܢE4JAˑ+,*ԃP;HQ8ȘIK5ܢE4NC>4 AHQ8K5C>4ԃP;ܢE4JAˑ+86Q FM1UܢE4NԃP;O4HН?U,T#!Q FM1UAOH,ԃP;ܢE4JAˑ+)'ԃP;E72TܢE4NŇ7̛4б 3QT)'AHQ8K5C>4б 3QԃP;ܢE4JAˑ+86ԃP;HQ8ȘIK5ܢE4NC>4׶K21T)'AHQ8K5C>4׶K21ԃP;ܢE4JAˑ+86ԃP;HQ8ȘIK5ܢE4NCT?TCܢE0&$AHQ8K5CT?T/ԃP;ܢE4JAˑ+><ԃP;HQ8ȘIK5ܢE4NC>4HН?Н?>HT,*AHQ8K5C>4H?HTԃP;ܢE4JAˑ+86ԃP;HQ8ȘIK5ܢE4NC>4XН?2J&$AHQ8K5C>4X2ԃP;ܢE4JAˑ+53ԃP;HQ8ȘIK5ܢE4NC>4TН?T&$AHQ8K5C>4Tܞ?ԃP;ܢE4JAˑ+86ԃP;HQ8ȘIK5ܢE4NC>4Н?̛4?ETԃP;ܢE4JAˑ+Dֈ;0OFԃP;ܢE4JAˑ+86ԃP;HQ8ȘIK5ܢE4NC>4Н?̛4?UԃP;ܢE4JAˑ+86Q FM1UܢE4NԃP;O4HН?U,T#!Q FM1UAOH,ԃP;ܢE4JAˑ+#!Q1NÚQ8ȘIKTԃP;4Q1N8KTA4ԃP;ܢE4JAˑ+ ԃP;1ܢE4NН?̛C1A1J>=)'-AHC;>C1Aܹ1>=ԃP;ܢE4JAˑ+86ԃP;HQ8ȘIK5ܢE4NC>4׶K21T)'AHQ8K5C>4׶K21ԃP;ܢE4JAˑ+~4NU.̤3@>ϥJ=T.-0ܢE4N5H01ԃP;R:?=N.̤3@>PTT>JFF8G3b`4NU.LϥJ=T.-05H01AR:=N.LPT>JFF8GԃP;ܢE4JAˑ+><ԃP;HQ8ȘIK5ܢE4NC>4HН?Н?>HT,*AHQ8K5C>4H?HTԃP;ܢE4JAˑ+20ԃP;߽4Q8ȘIK5ܢE4N,4U/T&$A߽4Q8K5,4U/TԃP;ܢE4JAˑ+53ԃP;HQ8ȘIK5ܢE4NC>4TН?T&$AHQ8K5C>4Tܞ?ԃP;ܢE4JAˑ+,*Q1ʡH9BXTܢE4NН?̛<7TQ19XT?7ԃP;ܢE4JAˑ+Dֈ;0OFԃP;ܢE4JAˑ+86ԃP;HQ8ȘIK5ܢE4NC>4Н?>RT&$AHQ8K5C>4?RԃP;ܢE4JAˑ+86Q FM1UܢE4NԃP;O4HН?U,T#!Q FM1UAOH,ԃP;ܢE4JAˑ+20ԃP;߽4Q8ȘIK5ܢE4N,4XQT#!A߽4Q8K5,4XQԃP;ܢE4JAˑ+ ԃP;1ܢE4NН?̛4C-HН?̛<&##!)'AHQ8K5C>4*? 
ԃP;ܢE4JAˑ+86ԃP;HQ8ȘIK5ܢE4NC>4׶K21T)'AHQ8K5C>4׶K21ԃP;ܢE4JAˑ+SQԃP;HQʡHɤUBUHMܢE4NCT۹/8HMT>JT8:G3>JT:GԃP;ܢE4JAˑ+><ԃP;HQ8ȘIK5ܢE4NC>4HН?Н?>HT,*AHQ8K5C>4H?HTԃP;ܢE4JAˑ+ecԃP;߽4Q8ȘIK5ܢE4N,4ԃP;ܢE4N5NģCF4QO1MJEа.TН?>;9A߽4Q8K5,4AQO-Eа.T?ԃP;ܢE4JAˑ+53ԃP;HQ8ȘIK5ܢE4NC>4TН?T&$AHQ8K5C>4Tܞ?ԃP;ܢE4JAˑ+/-ԃP;HUܢE4NCRKD?TيR̛<&$AHUCRKD?TيR̛<ԃP;ܢE4JAˑ+Dֈ;0OFԃP;ܢE4JAˑ+,*7ԃP;E72TܢE4NН?>AT7AE7T?AԃP;ܢE4JAˑ+86Q FM1UܢE4NԃP;O4HН?U,T#!Q FM1UAOH,ԃP;ܢE4JAˑ+86߹-JН?̛<ԃP;HQ8ȘIK5ܢE4NC>4&$-?AHQ8K5C>4ԃP;ܢE4JAˑ+ ԃP;1ܢE4NН?̛4б XQT)'AHQ8K5C>4б XQԃP;ܢE4JAˑ+86ԃP;HQ8ȘIK5ܢE4NC>4׶K21T)'AHQ8K5C>4׶K21ԃP;ܢE4JAˑ+#!ԃP;߽4UL6.TܢE4NA߽4UL6TԃP;ܢE4JAˑ+><ԃP;HQ8ȘIK5ܢE4NC>4HН?Н?>HT,*AHQ8K5C>4H?HTԃP;ܢE4JAˑ+20ԃP;߽4U72TܢE4NԃP;߽4TН?T A߽4U7TA߽4Tܞ?ԃP;ܢE4JAˑ+53ԃP;HQ8ȘIK5ܢE4NC>4TН?T&$AHQ8K5C>4Tܞ?ԃP;ܢE4JAˑ+;9ԃP;HQ8ȘIK5ܢE4NC>4 0̛4 0QTIOT TIOT 53RP4JTIOT> Sߢ?U>9@Ԛ<,*R4TIOT> S?>9@Ԛ<TIOT TIOT /-TIOTބ2BJ768T7P4J#!TIOTބ2BќJ6874TIOT TIOT &$TIOT0Q7J6J7&$TIOT0Q7J6J7TIOT TIOT JHRTIOT4/ >BԚԚ U@Ԛ< RTIOT> U@Ԛ<TIOT TIOT 53RP4JTIOT> Sߢ?U>9@Ԛ<,*R4TIOT> S?>9@Ԛ<TIOT TIOT 53RP4JTIOT> Sߢ?U>9@Ԛ<,*R4TIOT> S?>9@Ԛ<TIOT TIOT #!TIOTބ2B>TV>T#!TIOTބ2B>TV>TTIOT TIOT &$TIOT0Q7J6J7&$TIOT0Q7J6J7TIOT TIOT SQRP4D3TMɾSBTIOTL;U$ N,%!@Ԛ<;9R4D3TMBTIOTL;U N,@Ԛ<TIOT TIOT  RTIOT> U@Ԛ< RTIOT> U@Ԛ<TIOT TIOT 86R9TIOT> BK1١-JL;@@@/-R9TIOT> BK1١-8@@TIOT TIOT 53RP4JTIOT> Sߢ?U>9@Ԛ<,*R4TIOT> S?>9@Ԛ<TIOT TIOT trT>IOTմ2O̤@ROWBǞV<>MɾS3D UJDP>W>5ֈD,DL9ADSDAkiT>IOTմ2@ROWBȞV>M3D UJDP>W>5ֈD,DL9ADSDATIOT TIOT &$TIOT0Q7J6J7&$TIOT0Q7J6J7TIOT TIOT 53TIOT*B6J768T7P4J2)'TIOT*B6ќJ68742TIOT TIOT  RTIOT> U@Ԛ< RTIOT> U@Ԛ<TIOT TIOT ,*TIOT> ,:%!@Ԛ< TIOT> ,:@Ԛ<TIOT TIOT 53RP4JTIOT> Sߢ?U>9@Ԛ<,*R4TIOT> S?>9@Ԛ<TIOT TIOT A?T7IOT> 3D,R,SUUP4J@@@53T7IOT> 3D,R,SU4@@TIOT TIOT &$TIOT0Q7J6J7&$TIOT0Q7J6J7TIOT TIOT 53RP4JTIOT> Sߢ?U>9@Ԛ<,*R4TIOT> S?>9@Ԛ<TIOT TIOT  RTIOT> U@Ԛ< RTIOT> U@Ԛ<TIOT TIOT 86RT>IOTK>SF> P4J@@@)'RT>IOTKS> 4@@TIOT TIOT 53RP4JTIOT> Sߢ?U>9@Ԛ<,*R4TIOT> S?>9@Ԛ<TIOT TIOT /-TIOTB62LCP4J>T#!TIOTB62C4>TIOT TIOT &$TIOT0Q7J6J7&$TIOT0Q7J6J7TIOT TIOT MK9QDT7IOT>SFDU>F> ;/?BRÙKBT><9QDT7IOTSDU>F> ;/BEBTIOT TIOT  RTIOT> U@Ԛ< RTIOT> U@Ԛ<TIOT TIOT 20P4JTIOTSUXߢ?U,6XT&$4TIOTSUX?6XTTIOT TIOT 53RP4JTIOT> Sߢ?U>9@Ԛ<,*R4TIOT> S?>9@Ԛ<TIOT TIOT 20TIOT47>4 3DFDSDA,*TIOT4> 3DFDSDATIOT TIOT &$TIOT0Q7J6J7&$TIOT0Q7J6J7TIOT TIOT 20TIOTB6J768T7P4BT)'TIOTB6ќJ6874BTTIOT TIOT  RTIOT> U@Ԛ< RTIOT> U@Ԛ<TIOT TIOT hfRT>IOT> UP4>4—P=AN,:L%!**P4>٬J=$@Ԛ<SQRT>IOT> U4>4=AN,:L**4>٬J=$@Ԛ<TIOT TIOT 53RP4JTIOT> Sߢ?U>9@Ԛ<,*R4TIOT> S?>9@Ԛ<TIOT TIOT DBRP4JTIOT>MKJIOTKK DPDA>MKJIOTKK DPDAA,G߇;G߇;%>MA,G߇;G߇;%>M\ZAPIDK4,G,G,G߇;5>,VCʿ7NPI>>>V0>@Ԛ<\ZAPIDK4,G,G,G߇;5>,VCʿ7NPI>>>V0>@Ԛ<A,G߇;G߇;%>MA,G߇;G߇;%>M%A%AG  %AAA,G߇;G߇;%>MA,G߇;G߇;%>M%A%A%AAA,G߇;G߇;%>MA,G߇;G߇;%>M&'%IIA$ۏ"&'%IIAG&'%II :AGD3AT(%!AG}{&'%IIA&'%IIA&'%II :AD3ATVAA,G߇;G߇;%>MA,G߇;G߇;%>M\ZAPIDK4,G,G,G߇;5>,VCʿ7NPI>>>V0>@Ԛ<\ZAPIDK4,G,G,G߇;5>,VCʿ7NPI>>>V0>@Ԛ<A,G߇;G߇;%>MA,G߇;G߇;%>M%A%A %AA A,G߇;G߇;%>MA,G߇;G߇;%>M%A%A%AAA,G߇;G߇;%>MA,G߇;G߇;%>M20%CV2%0J%2CWFTOWW)'%CV2%0%2WFTO9A,G߇;G߇;%>MA,G߇;G߇;%>M\ZAPIDK4,G,G,G߇;5>,VCʿ7NPI>>>V0>@Ԛ<\ZAPIDK4,G,G,G߇;5>,VCʿ7NPI>>>V0>@Ԛ<A,G߇;G߇;%>MA,G߇;G߇;%>M20%BF%JW DG%AG@F:=#!%<%J D%A@:=A,G߇;G߇;%>MA,G߇;G߇;%>M%A%A%AAA,G߇;G߇;%>MA,G߇;G߇;%>MJHD9GM>AQٟ@DBU,G߇;G3MVٟ@6DPDA>AQٟ@DK,G߇;G3MV5DPDAA,G߇;G߇;%>MA,G߇;G߇;%>M\ZAPIDK4,G,G,G߇;5>,VCʿ7NPI>>>V0>@Ԛ<\ZAPIDK4,G,G,G߇;5>,VCʿ7NPI>>>V0>@Ԛ<A,G߇;G߇;%>MA,G߇;G߇;%>M/-AG%;̽>MŹ(Źʿ@@@)'AG%;>Ź(Źʿ@@A,G߇;G߇;%>MA,G߇;G߇;%>M%A%A%AAA,G߇;G߇;%>MA,G߇;G߇;%>M20%DJW.>=V%JW G%A)'%DJW.>=V%J GA 2EֈD$& 2EֈD)'$ 2̙EֈD>ܤK"6"&#!$ 2̙EֈD>ܤK"6" 2EֈD$& 2EֈD86$& C2̙EϪJֈDT9J9@AB/- C2̙EϪJֈDTJ9@AB 2EֈD$& 2EֈD)'$ 2̙EֈD>ܤK"6"&#!$ 2̙EֈD>ܤK"6" 2EֈD$& 2EֈD)' 2EC$&E̛<0>WT 2ECE0>W 2EֈD$& 2EֈD)'$ 
2̙EֈD>ܤK"6"&#!$ 2̙EֈD>ܤK"6" 2EֈD$& 2EֈD)'$& C2GE9ֈD@Ԛ<#! C2GE9ֈD@Ԛ< 2EֈD$& 2EֈD)'$ 2̙EֈD>ܤK"6"&#!$ 2̙EֈD>ܤK"6" 2EֈD$& 2EֈD;9Sޡ8$&>&2̙E ֈD>ܤK$'&9Q')'S>&2̙E ֈD>ܤKƋQ' 2EֈD$& 2EֈD)'$ 2̙EֈD>ܤK"6"&#!$ 2̙EֈD>ܤK"6" 2EֈD$& 2EֈD~6AB6T 2EۈXD:ۈX>ў7&B$&,&ίB>T7>KUVJJKUQTI1R/0Qec6AB6T 2EۈXD:ۈX>ў7&B,&ίB>T7KVQI1R/Q 2EֈD$& 2EֈD)'$ 2̙EֈD>ܤK"6"&#!$ 2̙EֈD>ܤK"6" 2EֈD$& 2EֈD$&2@ 8,T2@ ,T 2EֈD$& 2EֈD)'$ 2̙EֈD>ܤK"6"&#!$ 2̙EֈD>ܤK"6" 2EֈD$& 2EֈD_]$֗>AS 19EŹ4(>&24 EB߻WֈD1H%,9: >I\Z$֗>AS 19EŹ4(>&24 EB߻WֈD1H%,: >I 2EֈD$& 2EֈD)'$ 2̙EֈD>ܤK"6"&#!$ 2̙EֈD>ܤK"6" 2EֈD$& 2EֈDA?$& ۈX2@QTWNEܾW,;PT,T86 ۈX2@QTWNEܾW,;ٱP,T 2EֈD$& 2EֈD)'$ 2̙EֈD>ܤK"6"&#!$ 2̙EֈD>ܤK"6" 2EֈD$& 2EֈD53ޥ0CE$&0> 2EֈDJ<=@,*ޥ0CE0> 2EֈDJ=@;1>DH ;1>D,*;>DH66;DH9FA@Ԛ<#!;>D6;D9FA@Ԛ<;1>DH ;1>D;DHؕ7;EE@;Dؕ7;EE@;1>DH ;1>D,*;>DH66;DH9FA@Ԛ<#!;>D6;D9FA@Ԛ<;1>DH ;1>DXŷ5D/D/ Xŷ5DD;1>DH ;1>D,*;>DH66;DH9FA@Ԛ<#!;>D6;D9FA@Ԛ<;1>DH ;1>D ;DHBU>UW6T;DΑB>U6;1>DH ;1>D,*;>DH66;DH9FA@Ԛ<#!;>D6;D9FA@Ԛ<;1>DH ;1>D;DHDHDHT;DDDT;1>DH ;1>D,*;>DH66;DH9FA@Ԛ<#!;>D6;D9FA@Ԛ<;1>DH ;1>D ;DH>  ;D>;1>DH ;1>D,*;>DH66;DH9FA@Ԛ<#!;>D6;D9FA@Ԛ<;1>DH ;1>D BD/>  BD>;1>DH ;1>D,*;>DH66;DH9FA@Ԛ<#!;>D6;D9FA@Ԛ<;1>DH ;1>D;DH=DH ;1>D,*;>DH66;DH9FA@Ԛ<#!;>D6;D9FA@Ԛ<;1>DH ;1>D;ӈ5UD>DHDH;5D>DDE1?0;E1?0;ַ;E1?,;@Ԛ<ַ;E1?,;@Ԛ<E1?0;E1?0;;9K6>HE1K/Q4DGKIAB86K6>HE1K/Q4GKIABE1?0;E1?0;GEDKOFHE1K/Q4DGKOJܤK>6DG@K20HE1K/Q4GܤK>6D@KE1?0;E1?0;#!DE1ߢ?08IDE1?1BT/>׆B/1/69IPTR;I@Ԛ<MKܤ5ַ;>E1?1BT/>׆B/1/69IPTR;I@Ԛ<E1?0;E1?0;GEDKOFHE1K/Q4DGKOJܤK>6DG@K20HE1K/Q4GܤK>6D@KE1?0;E1?0;A?A׆B?KUEI3R>7DE1?P;66@Ԛ<;9A׆B?KUEI3>7DE1?P;6@Ԛ<QE1?0;E1?0;1A?Iַ;  1AIE1?0;E1?0;ַ;E1?,;@Ԛ<ַ;E1?,;@Ԛ<E1?0;E1?0;53AUE1AIٟ@;N?985D@@@/-AUE1A@;N?985D@@G=ݰFBSF G=FF#!BN0ݰFBSF2Uа.TBNFF2*G=ݰFBSF G=FF)'AOݰFBFASF>LS2 AOFFAF>LSG=ݰFBSF G=FF#!BN0ݰFBSF2Uа.TBNFF2*G=ݰFBSF G=FF86ݰFBSFQBJ768T7QݰFBSFB&$FFQBќJ687QFFBG=ݰFBSF G=FF#!BN0ݰFBSF2Uа.TBNFF2*G=ݰFBSF G=FF AסET/ݰFBٟ@3@Ԛ<ATFٟ@3@Ԛ<G=ݰFBSF G=FF#!BN0ݰFBSF2Uа.TBNFF2*G=ݰFBSF G=FFSFUR7T FU7T11F֎TPAJ11F֎TPAJ20ڶ>S:—PG2&**11F֎T@Ԛ<,*S:I2&**11F֎T@Ԛ<11F֎TPAJ11F֎TPAJ11F֎T V>б 11F֎T Vб 11F֎TPAJ11F֎TPAJA?7D2T:֎T11F֎T=?N;TTK;2072T:֎T11F֎TTTK;11F֎TPAJ11F֎TPAJ/-SAS11F֎T=>щQCE@@@,*SAS11F֎T=>щQCE@@11F֎TPAJ11F֎TPAJ20ڶ>S:—PG2&**11F֎T@Ԛ<,*S:I2&**11F֎T@Ԛ<11F֎TPAJ11F֎TPAJ)'11F֎T=?N;78K11F֎T7K11F֎TPAJ11F֎TPAJA?7D2T:֎T11F֎T=?N;TTK;2072T:֎T11F֎TTTK;11F֎TPAJ11F֎TPAJ=?N;C;MC;M11F֎TPAJ11F֎TPAJ20ڶ>S:—PG2&**11F֎T@Ԛ<,*S:I2&**11F֎T@Ԛ<11F֎TPAJ11F֎TPAJJH$U-£-E7-Ҳ0AʡH9DS&11F֎T7J6!A?$U-£-E7-Ҳ0AʡH9DS&11F֎T7611F֎TPAJ11F֎TPAJA?7D2T:֎T11F֎T=?N;TTK;2072T:֎T11F֎TTTK;11F֎TPAJ11F֎TPAJ;911F֎TBJHį-HUHڶ>2>AR@Ԛ<;911F֎TBJHį-HUHڶ>2>AR@Ԛ<11F֎TPAJ11F֎TPAJ20ڶ>S:—PG2&**11F֎T@Ԛ<,*S:I2&**11F֎T@Ԛ<11F֎TPAJ11F֎TPAJ ӪN11F֎TE@@@ӪN11F֎TE@@11F֎TPAJ11F֎TPAJA?7D2T:֎T11F֎T=?N;TTK;2072T:֎T11F֎TTTK;11F֎TPAJ11F֎TPAJ8611F֎T03VCJ768T711F֎T2011F֎T03VCќJ68711F֎T11F֎TPAJ11F֎TPAJ20ڶ>S:—PG2&**11F֎T@Ԛ<,*S:I2&**11F֎T@Ԛ<11F֎TPAJ11F֎TPAJ11F֎T@?11F֎T@11F֎TPAJ11F֎TPAJA?7D2T:֎T11F֎T=?N;TTK;2072T:֎T11F֎TTTK;11F֎TPAJ11F֎TPAJSQDR07>I8Ҳ02AXڃN>11F֎TAKAٟ@HDPDAPNDR07>8Ҳ02AXڃN>11F֎TAKAٟ@HDPDA11F֎TPAJ11F֎TPAJ20ڶ>S:—PG2&**11F֎T@Ԛ<,*S:I2&**11F֎T@Ԛ<11F֎TPAJ11F֎TPAJ5311F֎TW")$IK46)'11F֎TW")I411F֎TPAJ11F֎TPAJA?7D2T:֎T11F֎T=?N;TTK;2072T:֎T11F֎TTTK;11F֎TPAJ11F֎TPAJJH7&:֎T11F֎TTTT=?N;T!537&:֎T11F֎TTTTTK11F֎TPAJ11F֎TPAJ20ڶ>S:—PG2&**11F֎T@Ԛ<,*S:I2&**11F֎T@Ԛ<11F֎TPAJ11F֎TPAJ&$CE>11F֎T@0=@Ԛ<&$CE>11F֎T@0=@Ԛ<11F֎TPAJ11F֎TPAJA?7D2T:֎T11F֎T=?N;TTK;2072T:֎T11F֎TTTK;11F֎TPAJ11F֎TPAJPN11F֎T=?N;7=?N;GTTT - !.,11F֎T7GTTT+11F֎TPAJ11F֎TPAJ20ڶ>S:—PG2&**11F֎T@Ԛ<,*S:I2&**11F֎T@Ԛ<11F֎TPAJ11F֎TPAJki11F֎TKSħ;S C9>>4K.TRҲ0AGB@>=?N;)ʪ\Z11F֎TKSS 
Cޖ>>4K.TRҲ0AGB@>)ʪ11F֎TPAJ11F֎TPAJA?7D2T:֎T11F֎T=?N;TTK;2072T:֎T11F֎TTTK;11F֎TPAJ11F֎TPAJ53&11F֎TRBOEVCE@@@,*&11F֎TRBOECE@@11F֎TPAJ11F֎TPAJ20ڶ>S:—PG2&**11F֎T@Ԛ<,*S:I2&**11F֎T@Ԛ<11F֎TPAJ11F֎TPAJ,*7DT11F֎T6U=?N;7T11F֎T611F֎TPAJ11F֎TPAJA?7D2T:֎T11F֎T=?N;TTK;2072T:֎T11F֎TTTK;11F֎TPAJ11F֎TPAJ_]$U-£-E7-Ҳ0AʡH9DS&11F֎T$U-CɤUTҲ0AB!YW$U-£-E7-Ҳ0AʡH9DS&11F֎T$U-CɤUTҲ0AB86X,19CK/ - NW=HDEģCKX1KNW/DCVTX,19CʡH97/ - NW=HDEģCKGģC:7BWT53X1ʡH97NW/DCGģC7BW86X,19CK/ - NW=HDEģCKX1KNW/DCJHX,19CʡH97/ - NW=HDE8KDG@K/-X1ʡH97΂NW/D8KD@K86X,19CK/ - NW=HDEģCKX1KNW/DCGEX,19CʡH97/ - NWCHDEģCK΂:6T)'X1ʡH97NW޻/DC΂:686X,19CK/ - NW=HDEģCKX1KNW/DC\Z-AX,19CʡH97/ - NW=HDEģCK -:K48?:T><-AX1ʡH97NW/DC -:48?:T86X,19CK/ - NW=HDEģCKX1KNW/DC;9X,19CK/ - NW=HDEGI#!X1KNW/DGI86X,19CK/ - NW=HDEģCKX1KNW/DCb`X,19CʡH97/ - NW=HDEK?IU>DE?T΂:C̛<A?X1ʡH97NW/DE?IUD?΂:C̛<86X,19CK/ - NW=HDEģCKX1KNW/DC86X,19CK/ - NW=HDEģCKX1KNW/DC86X,19CK/ - NW=HDEģCKX1KNW/DCGEX,19CʡH97/ - NW=HDE8K΂:4T/-X1ʡH97NW/D8K΂:4T86X,19CK/ - NW=HDEģCKX1KNW/DCVTX,19CʡH97/ - NW=HDEģCKGģC:7BWT53X1ʡH97NW/DCGģC7BW86X,19CK/ - NW=HDEģCKX1KNW/DC86X,19CK/ - NW=HDE>KX1KNW/D>86X,19CK/ - NW=HDEģCKX1KNW/DCGEX,19CʡH97/ - NWCHDEģCK΂:6T)'X1ʡH97NW޻/DC΂:686X,19CK/ - NW=HDEģCKX1KNW/DCSQX,19CʡH97/ - NW=HDEOKDOGDO6G20X1ʡH97NW/DODGD6G86X,19CK/ - NW=HDEģCKX1KNW/DC;9X,19CK/ - NW=HDEGI#!X1KNW/DGI86X,19CK/ - NW=HDEģCKX1KNW/DCJHX,19CʡH97/ - /@CHWDEģCKùBNL,*X1ʡH97N޻/WDCùBNL86X,19CK/ - NW=HDEģCKX1KNW/DC86X,19CK/ - NW=HDEģCKX1KNW/DC86X,19CK/ - NW=HDEģCKX1KNW/DCqoX,19CʡH97/ - NW=HDE>KL28AWT6O0U—PD7>6;PNX1ʡH97NW/D>LPAW6O0U—PD7>6;86X,19CK/ - NW=HDEģCKX1KNW/DCVTX,19CʡH97/ - NW=HDEģCKGģC:7BWT53X1ʡH97NW/DCGģC7BW86X,19CK/ - NW=HDEģCKX1KNW/DC>E6DSDA53-II6I6I66U>E6DSDANB-<66N-<66#!NB-<66ODSDAN-<66DSDANB-<66N-<66_]-I6DD9D66>=/,ֈ;N?KCL3;ނB/6/7TNؕ7؄/ESQ-I6D966>=/,ֈ;N?KCL3ނB/6/7TNڕ7ENB-<66N-<66><-I66OE60FǂSHAVTJD8DAP/--I66E6FǂSHAVTD8ANB-<66N-<6686-II6I6I66OU>E6DSDA53-II6I6I66U>E6DSDANB-<66N-<66#!NB-<6OC8A99N-<6OC8A9NB-<66N-<66_]-I6DD9D66>=/,ֈ;N?KCL3;ނB/6/7TNؕ7؄/ESQ-I6D966>=/,ֈ;N?KCL3ނB/6/7TNڕ7ENB-<66N-<66E6DSDA53-II6I6I66U>E6DSDANB-<66N-<66)'D-IHD6/E6-116)'D-IHD6/E6-116NB-<66N-<66_]-I6DD9D66>=/,ֈ;N?KCL3;ނB/6/7TNؕ7؄/ESQ-I6D966>=/,ֈ;N?KCL3ނB/6/7TNڕ7ENB-<66N-<66 кB-<ԋ/C66JƱCTкB-<ԋ/C66JϱCNB-<66N-<6686-II6I6I66OU>E6DSDA53-II6I6I66U>E6DSDANB-<66N-<66,*NB-=/,ֈ;N?KCL3;ނB/6/7TNؕ7؄/ESQ-I6D966>=/,ֈ;N?KCL3ނB/6/7TNڕ7ENB-<66N-<66865-Н?T  R>ܞ? 
İU7/ İU7/204UİU7/5.W@ßNWF/ÐWW/-4UİU7/5.W@ßNW/ÐWW İU7/ İU7//-UİU7/.W@ßN1T7̛<,*UİU7/.W@ßN17̛< İU7/ İU7/  -NUİU7/.@K  -NUİU7/.@K İU7/ İU7/534İU7/5:S9İU:4K"!,*4İU7/5:S9İU:4K" İU7/ İU7/86T14UİU7/5.:S9İUAWAT20T14UİU7/5.:S9İUAA İU7/ İU7/;94UİU7/5.W@ßNWF?9GHН?T204UİU7/5.W@ßNW?9G/ İU7/ İU7/204UİU7/5.W@ßNWF/ÐWW/-4UİU7/5.W@ßNW/ÐWW İU7/ İU7/GEUİU7/.W@ßNWF/ɴ9Н?Tɴ9ʡH9?/T;9UİU7/.W@ßNW/ɴ9ܞ?ɴ99/T İU7/ İU7/  -NUİU7/.@K  -NUİU7/.@K İU7/ İU7/#!4UİU7/5.W@ßN#!4UİU7/5.W@ßN İU7/ İU7/86T14UİU7/5.:S9İUAWAT20T14UİU7/5.:S9İUAA İU7/ İU7/864UİU7/5.W@ßNWF/̝5̛FˎWBDIKT)ʪ/-KFEڶ>FˎWBDIK)ʪį-KEˎWٟ@6֬4Jį-KEˎW5֬4J-K-Kį-KEˎWٟ@6֬4Jį-KEˎW5֬4J&$Sį-K>JNTCTT#!Sį-K>JϞNCTTį-KEˎWٟ@6֬4Jį-KEˎW5֬4J-K-Kį-KEˎWٟ@6֬4Jį-KEˎW5֬4J кB6Sį-KIKT:KкB6Sį-KIK:Kį-KEˎWٟ@6֬4Jį-KEˎW5֬4J-K-Kį-KEˎWٟ@6֬4Jį-KEˎW5֬4J#!;Kʗ,/Sտ7PC@;B ;Kʗ,/Sտ7PC;Bį-KEˎWٟ@6֬4Jį-KEˎW5֬4J-K-Kį-KEˎWٟ@6֬4Jį-KEˎW5֬4J203BBDK6S9A@S@060T203BBDK6S9A@S@060Tį-KEˎWٟ@6֬4Jį-KEˎW5֬4J-K-Kį-KEˎWٟ@6֬4Jį-KEˎW5֬4J/-;Kʗ,/—PL>CBFRKAKB,*;Kʗ,/—PL>CBFRKAKį-KEˎWٟ@6֬4Jį-KEˎW5֬4J-K-Kį-KEˎWٟ@6֬4Jį-KEˎW5֬4J)';Kʗ,/SKD͙7IRN͙7T&$;Kʗ,/SKDIRN͙7Tį-KEˎWٟ@6֬4Jį-KEˎW5֬4J-K-Kį-KEˎWٟ@6֬4Jį-KEˎW5֬4J#!SKб J768T7U>SKб ќJ687U>R/,B;R/,B;)'NЃB;W$,BΞ)9"@@@&$NЃB;W$,BΞ)9"@@R/,B;R/,B;,BR/>47,BR/>47R/,B;R/,B;)'NЃB;W$,BΞ)9"@@@&$NЃB;W$,BΞ)9"@@R/,B;R/,B;;9ѹ67,BƸ=DJ7.K/B9A=B@@@&$չ6,BƸ=DJ*/BA@@R/,B;R/,B;)'NЃB;W$,BΞ)9"@@@&$NЃB;W$,BΞ)9"@@R/,B;R/,B;;9R/B,B.P԰'0VAUѹ6FG,*R/B,B.P0VA"Uݹ6GR/,B;R/,B;)'NЃB;W$,BΞ)9"@@@&$NЃB;W$,BΞ)9"@@R/,B;R/,B;zxR/,BCMR/@BBR-P2KONJ768T7;2/ޟEŮß1QİL R/Ξ),BWβI3I@K/->ß1QİL R/Ξ),BWβI3I@R/,B;R/,B;)'NЃB;W$,BΞ)9"@@@&$NЃB;W$,BΞ)9"@@R/,B;R/,B;\Z(<7N6B=G;3>7K  #!<K  R/,B;R/,B;)'NЃB;W$,BΞ)9"@@@&$NЃB;W$,BΞ)9"@@R/,B;R/,B;)'N6@4,BHAR/D@Ԛ<&$N@4,BHAR/D@Ԛ<R/,B;R/,B;)'NЃB;W$,BΞ)9"@@@&$NЃB;W$,BΞ)9"@@R/,B;R/,B;/-R/,B@Hٟ@ʜ2IAN6@@@)'R/,B@Hٟ@ʜ2IAN@@R/,B;R/,B;)'NЃB;W$,BΞ)9"@@@&$NЃB;W$,BΞ)9"@@R/,B;R/,B;,*/>,BJ>,BJ>,BAB,*/>,BJ>,BJ>,BABB78;U B8;UB;U>C@KB;U>C@KB78;U B8;U,*ʡH9=7B;U>CEJCEJC7CC78N@>;GB B;>8N@>;GB78;U B8;U,*B7;>8N@Ɓ-67Ɓ-6HT#!B;>8N@ȁ-7ȁ-HTB78;U B8;U7B;U>C8,T7B;U>C,TB78;U B8;UB;ULC8,TB;ULC,TB78;U B8;UB;U>C@KB;U>C@KB78;U B8;U)'7B;U>CBU8JCBU8JC7CC7;UN8C.VI<7; B>;UN8C.I7FU/J.ʭB/ FJ.ϭBMKDVD:JTʭB/>ڶ>9ԚGJE@Ԛ<A?DVD:JTϭB>9ԚGJE@Ԛ<FU/J.ʭB/ FJ.ϭBJ.ʭB/@? 
J.ϭB@FU/J.ʭB/ FJ.ϭB#!J.ʭB/L FUO@KJ.ϭBL FO@KFU/J.ʭB/ FJ.ϭB)'J.ʭB/L F;F?8,T J.ϭBL F;F,TFU/J.ʭB/ FJ.ϭBMKDVD:JTʭB/>ڶ>9ԚGJE@Ԛ<A?DVD:JTϭB>9ԚGJE@Ԛ<FU/J.ʭB/ FJ.ϭB,*J.ʭB/L FUO'GNOC&$J.ϭBL FO'GNOCFU/J.ʭB/ FJ.ϭB#!J.ʭB/L FUO@KJ.ϭBL FO@KFU/J.ʭB/ FJ.ϭB,*J.ʭB/L FUOLBڶ>9ԚGJE@Ԛ<A?DVD:JTϭB>9ԚGJE@Ԛ<FU/J.ʭB/ FJ.ϭBJ.ʭB/>LJ.ϭB>LFU/J.ʭB/ FJ.ϭB#!J.ʭB/L FUO@KJ.ϭBL FO@KFU/J.ʭB/ FJ.ϭB53J.ʭB/8NJ.ʭB/G>98F>T,*J.ϭB8NJ.ϭBG>98F>FU/J.ʭB/ FJ.ϭBMKDVD:JTʭB/>ڶ>9ԚGJE@Ԛ<A?DVD:JTϭB>9ԚGJE@Ԛ<FU/J.ʭB/ FJ.ϭB/-J.ʭB/8IC¨03?;9<>TJ.ϭB8IϨ0-<>FU/J.ʭB/ FJ.ϭB#!J.ʭB/L FUO@KJ.ϭBL FO@KFU/J.ʭB/ FJ.ϭBJ.ʭB/;J6J.ϭB;J6G8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=/-?;8WB=&;WɾS2SCI9)'?;8WB=&;W2SC9G8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=/-H޽B;8AE0WB=щQUP.T,*H޽B;8AE0WB=щQUP.G8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=JHWBRPI9=50׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=20PG,DNG806WB=C=S7,*PG,DNG85WB=CS7G8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=GEW=D,?R;G0G8DN@WG7ӽDIECӽDI>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=20޽BR0WB>=M>I?;8щQ@Ԛ<20޽BR0WB>=M>I?;8щQ@Ԛ<G8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=)';80WB=D>щQDSDA&$;80WB=ӗ>щQDSDAG8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=JHԓ459D0ԓ4B=SRJ>E;86ST!!";9ԓ45D0ԓ4B=SRJ>E;86STXG8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=#!;8>E6QWB=@N ;>E6QWB=@NG8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=R8G8>=>PR8G8>=>PG8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=VTԓ459D0ԓ4B=O׽RG6ST!!"DBԓ45D0ԓ4B=O׽RG6STXG8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=DBG׫;@2>H8GK0G8WB=F?HG,H,DBG׫;@2>H8GK0G8WB=F?HG,H,G8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=\Z7WCȻ22HG/CNK08W=ߌ,3=GGև9>TYW7WCȻ22HG/CNK08W=ߌ,3=GGև9>G8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=H$,GG88W-BGHHH$,GG88W-BGHHQH$,GG88W-BGHHDETLBL=,KH$,GG88W-BGHH$,GG88W-BGHQH$,GG88W-BGHDETLBL,KG8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=86G,DNG806WB=C=Pֈ;̛׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=DBTCRJG<8QG8O60G6U<8Gڶ>S=86CJG<8QG8O60G6<8GS=G8ԓ4BWC=G8ԓ4BWC=SQDŽPB;8>׽RG>G8;?Sԓ459D0ԓ4B=R/AEATMKDŽPB;8>׽RG>G8;?Sԓ45D0ԓ4B=R/AEAG8ԓ4BWC=G8ԓ4BWC=DBS9I/CD<8JGԓ4GWB-RN= -KF7DBS9I/CD<8JGԓ4GWB-RN= -KF7 ʉ5  ʉ5ʉ5 @K ʉ5@K ʉ5  ʉ520 Ͳ4ʉ5/%DHGAAOC4ˉ5%DHAAOC ʉ5  ʉ5ʉ5 @K ʉ5@K ʉ5  ʉ5  ʉ5ޚTDG@K5D@K ʉ5  ʉ5ʉ5 @K ʉ5@K ʉ5  ʉ5&$ ۚKʉ5RG̛<"&ۚK݉5G̛<" ʉ5  ʉ5ʉ5 @K ʉ5@K ʉ5  ʉ5ʉ5 8,T ʉ5,T ʉ5  ʉ5ʉ5 @K ʉ5@K ʉ5  ʉ5207 ʉ5ޚT4L/ȈXʉ5B-AB#!H6=>ʉ5B-AB ʉ5  ʉ5ʉ5 @K ʉ5@K ʉ5  ʉ5;9 ۚK4ʉ5G8OE>έ;LSDʡH9;,*ۚK4ʉ5GOE>٭;SDʡH9; ʉ5  ʉ5ʉ5 @K ʉ5@K ʉ5  ʉ5#!@ >ʉ5DSDA@>ʉ5DSDA ʉ5  ʉ5ʉ5 @K ʉ5@K ʉ5  ʉ5&$$6 6ʉ5@Ԛ<$66ʉ5@Ԛ<,*,BA@D64AE54A6O&$,BA@64AE54A6/-D4A,HB54A6OUP.T)'D4A,HB54A6UP.,*,BA@D64AE54A6O&$,BA@64AE54A6 @K-; 
@K-;,*,BA@D64AE54A6O&$,BA@64AE54A6/-D4A,HB54A6OUP.T)'D4A,HB54A6UP.,*,BA@D64AE54A6O&$,BA@64AE54A653.HB@M64A6OI0щQUP.T/-.HB@M64A6I0щQUP.,*,BA@D64AE54A6O&$,BA@64AE54A6/-D4A,HB54A6OUP.T)'D4A,HB54A6UP.,*,BA@D64AE54A6O&$,BA@64AE54A6><.HB@M64A6OHAVTJD8DAP20.HB@M64A6HAVTD8A,*,BA@D64AE54A6O&$,BA@64AE54A6/-D4A,HB54A6OUP.T)'D4A,HB54A6UP.,*,BA@D64AE54A6O&$,BA@64AE54A6)'.49B3I6OFUPUT#!.49B3I6FUPU,*,BA@D64AE54A6O&$,BA@64AE54A6/-D4A,HB54A6OUP.T)'D4A,HB54A6UP.,*,BA@D64AE54A6O&$,BA@64AE54A6A?ڤ55D>.1B@D4A= @6OG;P20ܤ5D>.1B@4A= @6G;P,*,BA@D64AE54A6O&$,BA@64AE54A6/-D4A,HB54A6OUP.T)'D4A,HB54A6UP.,*,BA@D64AE54A6O&$,BA@64AE54A6GE.JS=HB@DH4ADAP;0T?6T)!,*.SHB@H4AA;T6T),*,BA@D64AE54A6O&$,BA@64AE54A6/-D4A,HB54A6OUP.T)'D4A,HB54A6UP.,*,BA@D64AE54A6O&$,BA@64AE54A620.BKM4AHAVTJD8DAP)'.BKM4AHAVTD8A,*,BA@D64AE54A6O&$,BA@64AE54A6/-D4A,HB54A6OUP.T)'D4A,HB54A6UP.,*,BA@D64AE54A6O&$,BA@64AE54A6/-.HB@D4A=6OGUP9T#!.HB@4ASGUP9 @GMT  @GM>  BIɤU1.@GMTC3G9/-VN>BIɤU1.@GMC3G9 @GMT  @GMDB@G.MTA/B@G.MTQ8ޚTNGKTOT,*@G6A/@G6Q8+KTO @GMT  @GM;9@G.MT,;MT73;E=57TIַ;)'@G6,;M7;E57TI @GMT  @GM)'@G@MT/-56P9?ַ;#!@G@M/-56P9? @GMT  @GM@G.MTG@=@GMT.@MTC3G3G9ܞNTTOC3G98Iַ;@G.MTG@=@GMT.@MTki@G6G=@GM.@MC3G3G9NTC3G9I@G6G=@GM.@M @GMT  @GM86@G.4@ϚL4MT;M4߹-WHԓ6Iַ;&$@G.@4M6߹-WHԓ6I @GMT  @GM&$@G.MT߹-5TOOIַ;@G6߹-5TOI @GMT  @GM.@MT.MT@MTܞND>.MTE=.MT=.MTIϪJ1.M@G.@MTDC3G98Iַ;\Z.@M6@MN>6E=6=6IϪJ1.M@G.@MDC3G9I @GMT  @GM&$.M@GMTJ-U@ؙDT#!.M@GMJ-U@ؙDT @GMT  @GM)''=.@GMTIB.<.M@GM6.@M@MEM=6C3G9I @GMT  @GM/-@G=@G.MT=.MTIG@ @G=@G6=6IG @GMT  @GM>  BIɤU1.@GMTC3G9/-VN>BIɤU1.@GMC3G9 @GMT  @GMDBMU@G@MT@MTMTMU,HP5ѳBʈFP?53M@G@M@MMM,HP5ѳBʈFP? @GMT  @GM;9@G.MT,;MT73;E=57TIַ;)'@G6,;M7;E57TI @GMT  @GM53@G.MTַ;@G.MTD,BPַ;Υ6&$@G6ַ;@G6D,Pַ;Υ6cI6;0ڳQ  +0ڳQ I6;ٟ@9ٟ@0A@Ԛ<+90A@Ԛ<KI6;0ڳQ  +0ڳQI6;-N  +-NI6;0ڳQ  +0ڳQ20I6;0ʭBќ:-WI6;I6>S2&$+0ʭBќ:-WI6I6>SoI6;0ڳQ  +0ڳQ&$UII6;-N1D@@@UI+-N1ځD@I6;0ڳQ  +0ڳQSQI6;096WI-:PUPޜFTI—PRMTI6ޜF6JH+096WI-:PUPޜFTIRMTI6ޜF6I6;0ڳQ  +0ڳQA?Q2?EC=E@.=9QCB9QCͦ(!)'Q2?EC=@ƋQCBƋQCiI6;0ڳQ  +0ڳQ .IWI6;8TAB.IW+8TABI6;0ڳQ  +0ڳQ86I6;6U=9=>C<ʡH6IHC<ʡH6IHTI6>6;DPDA86>I6>6;DPDA<I6>6;DPDA86>I6>6;DPDA,*ä=FBNLI6>6;DPDA86>I6>6;DPDA3PϪJBE҄JJ9R>9ֈDCSW9ٟ@192D>9ED>9@S6;,DP>=/UP.T.M@D>3PϪJBEԄJ9R>DCSW@192D>BD>9@S6;,DP>=/UP.I6>6;DPDA86>I6>6;DPDAI6>6;DPDA86>I6>6;DPDA6E>6E>I6>6;DPDA86>I6>6;DPDAI6>6;DPDA86>I6>6;DPDASE>C=,B/7Ȼ;T=.LGENA=C,B/7Ȼ;T=LȥW> 1ڶ>SGȥW> 1SG,*A> Q5=Qڶ>SȥW@@@&$A> Q5=QSȥW@@ȥW> 1ڶ>SGȥW> 1SG53>W5CȥWG8E<=?N;†M8T)'>W5CȥWG8E<†M8ȥW> 1ڶ>SGȥW> 1SGDB> @GWC;9Q66BW4 ȥW@@@><> @GWC;9Q6BW4 ȥW@@ȥW> 1ڶ>SGȥW> 1SGhf;>>WȥW,:K>;=?N;7=?N;GTTT - !FD;>>WȥW,:K>;7GTTT+ȥW> 1ڶ>SGȥW> 1SG,*A> Q5=Qڶ>SȥW@@@&$A> Q5=QSȥW@@ȥW> 1ڶ>SGȥW> 1SGDB>W5CWȥWG8E<=?N;TTTG8̛<86>W5CWȥWG8E<TTTG8ȥW> 1ڶ>SGȥW> 1SGDB> @GWC;9Q66BW4 ȥW@@@><> @GWC;9Q6BW4 ȥW@@ȥW> 1ڶ>SGȥW> 1SG20ȥW>W2G/I֣.ŞG9/;7;20ȥW>W2G/I֣.ŞG9/;7;ȥW> 1ڶ>SGȥW> 1SG,*A> Q5=Qڶ>SȥW@@@&$A> Q5=QSȥW@@ȥW> 1ڶ>SGȥW> 1SG20> >QR@8S֗T7ȥW@@@/-> >QR@8S֗T7ȥW@@ȥW> 1ڶ>SGȥW> 1SGDB> @GWC;9Q66BW4 ȥW@@@><> @GWC;9Q6BW4 ȥW@@ȥW> 1ڶ>SGȥW> 1SG20> ȥWS8D0;T=?N;)'> ȥWS8D0;TȥW> 1ڶ>SGȥW> 1SG,*A> Q5=Qڶ>SȥW@@@&$A> Q5=QSȥW@@ȥW> 1ڶ>SGȥW> 1SG#!ȥWȥWKȥW,:ĝ ȥWȥWKȥW,:؝ȥW> 1ڶ>SGȥW> 1SGDB> @GWC;9Q66BW4 ȥW@@@><> @GWC;9Q6BW4 ȥW@@ȥW> 1ڶ>SGȥW> 1SG_]N9UL=>˾3ȥW> G/NIǡ6TTT=?N;T!DBNU=>˾3ȥW> G/NIǡ6TTTTK  ?J=  ?J=)'VHDJ>4=5D3Ȼ;>T VD>4=5D3Ȼ;>  ?J=  ?J=DJ>?=DJ>?=}  ?J=  ?J=?J=Uа.T ?J=*  ?J=  ?J= J?,= J?,=  ?J=  ?J=;9?EJ=׍Q7E70 NʡH -H064T53?EJ=׍Q,0 NʡH -H064T  ?J=  ?J=DJ>?=GĊA>TDJ>?=GĊA>  ?J=  ?J=/-D9DDG?>J>,NDSDA#!9G?>J>=DSDA  ?J=  ?J=?EJ׍QDG@K?EJ׍QD@K  ?J=  ?J= D/F;  DF;  ?J=  ?J=,*DJ>?=E?NKLF9@K)'DJ>?=E?NKLF9@  ?J=  ?J=?=EJ=׍QPB6?=EJ=׍QPB  ?J=  
?J=;?1KEJ>=׍QCPDCK9K>ٟ@9@9W>4R/ҾWB1.O>NB9KJK>N9͝,ڪ3.WȻBDEA¶7ģC:Q;?1KEJ>=׍QCPDC9>ٟ@9@9W>4R/ҾWB1.O>NB8J>N9Ν,.WȻBDENģC:Q  ?J=  ?J=20?>?J>,N166==@Ԛ<)'?>?J>=16=@Ԛ<  ?J=  ?J=DJ>?=@KDJ>?=@K  ?J=  ?J=>?=4FSCܞN/OJ-0E/-DJ>?=4FSNOJ7E  ?J=  ?J=?J=4Н?A3AT?J=4AA  ?J=  ?J=)'VHDJ>4=5D3Ȼ;>T VD>4=5D3Ȼ;>  ?J=  ?J= ?EJ=׍QFK AB ?EJ=׍QFK AB  ?J=  ?J=?J=Uа.T ?J=*  ?J=  ?J=)'J>?=ʡH۩RV-T.6.T&$J>?=ʡH۩RV-T.6.  ?J=  ?J=;9?EJ=׍Q7E70 NʡH -H064T53?EJ=׍Q,0 NʡH -H064T  ?J=  ?J=20UWX=6?KJJ=3WН?>AT,*UWX=6?KJJ=3W?A  ?J=  ?J=/-D9DDG?>J>,NDSDA#!9G?>J>=DSDA  ?J=  ?J=864?߸3ѝ6B5-0IJ?߸3==I̛=F>>@>T#!DJ7>=F>>@>IFET> IFT>_]IFE71UC56K7WE>VWA75SJS24.@7Uև9>TVTIF71UC6K7WE>VWA75SJS24.@7Uև9>IFET> IFT>&$1FEWK.WKC:ET1FWKWKC:EIFET> IFT>,*IFED6AS1F՟?>>@Ԛ<#!IFD6Aū1?>>@Ԛ<IFET> IFT>;9IFEAW̋?6FF1UK>626::@20IFA̋?6.1UK>626::@IFET> IFT>_]IFE71UC56K7WE>VWA75SJS24.@7Uև9>TVTIF71UC6K7WE>VWA75SJS24.@7Uև9>IFET> IFT>&$IKMFE->CϨHQRTIKMF-CΨQRTIFET> IFT>,*IFED6AS1F՟?>>@Ԛ<#!IFD6Aū1?>>@Ԛ<IFET> IFT>20IFED6AS1F՟?>>DSDA)'IFD6Aū1?>>DSDAIFET> IFT>_]IFE71UC56K7WE>VWA75SJS24.@7Uև9>TVTIF71UC6K7WE>VWA75SJS24.@7Uև9>IFET> IFT>53FE>>M*ɬI*I*55TH>M*ɬI*I*5THTIFET> IFT>,*IFED6AS1F՟?>>@Ԛ<#!IFD6Aū1?>>@Ԛ<IFET> IFT>53HFE>>@IU>J-F>TLP20HF>>@IU>J-F>TLPIFET> IFT>_]IFE71UC56K7WE>VWA75SJS24.@7Uև9>TVTIF71UC6K7WE>VWA75SJS24.@7Uև9>IFET> IFT>20I—P=E>>FEDH>QIB,ܔN)'I=E>>FDH>QIBG DEO; DEO;:O;J@Ԛ<:;J@Ԛ< DEO; DEO;:O;4P@Ԛ<:;4P@Ԛ< DEO; DEO;  N:O;BF8@K N:;BF8@K DEO; DEO; -:O;WL/?T -:;W. DEO; DEO;:O;J@Ԛ<:;J@Ԛ< DEO; DEO;O:4;DG@KO:;D@K DEO; DEO;  N:O;BF8@K N:;BF8@K DEO; DEO;:OD>;@K:D>;@K DEO; DEO;:O;J@Ԛ<:;J@Ԛ< DEO; DEO; DO;2  D;2 DEO; DEO;  N:O;BF8@K N:;BF8@K DEO; DEO; :O;2,LDG@K:;2,D@K DEO; DEO;:O;J@Ԛ<:;J@Ԛ< DEO; DEO; :O;2  :;2 DEO; DEO;  N:O;BF8@K N:;BF8@K DEO; DEO;:O;28,T:;2,T DEO; DEO;:O;J@Ԛ<:;J@Ԛ< DEO; DEO;&$ :OƔ>;21ET!! :Ɣ>;21ET DEO; DEO;  N:O;BF8@K N:;BF8@K DEO; DEO;:O;28,T:;2,T DEO; DEO;:O;J@Ԛ<:;J@Ԛ< DEO; DEO; P:O8;:I̺@:TP:8;:@ DEO; DEO;  N:O;BF8@K N:;BF8@K DEO; DEO;#!:O;J:O4974T:;J:474T DEO; DEO;:O;J@Ԛ<:;J@Ԛ< DEO; DEO;DO;2:TD;2:T DEO; DEO;  N:O;BF8@K N:;BF8@K DEO; DEO;:O;2DG@K:;2D@K DEO; DEO;:O;J@Ԛ<:;J@Ԛ< DEO; DEO;:O;27Cͺ?@Ԛ<(TR;>ͺ?@Ԛ< (T;ͺ? (T;ͺ?&$/IMTS;ͺ?ٟ@6A7BITS;ͺ?5+ (T;ͺ? (T;ͺ?(TR;>ͺ?@Ԛ<(TR;>ͺ?@Ԛ< (T;ͺ? (T;ͺ?&$(TS;>6/IM@@@(TS;>6I@@ (T;ͺ? (T;ͺ?(TR;>ͺ?@Ԛ<(TR;>ͺ?@Ԛ< (T;ͺ? (T;ͺ?86/IMTR;>>VBͺ?C7=V-AB)'ITR;>>Bͺ?C7VAB (T;ͺ? (T;ͺ?(TR;>ͺ?@Ԛ<(TR;>ͺ?@Ԛ< (T;ͺ? (T;ͺ?(TS64ͺ?(TS64ͺ? (T;ͺ? (T;ͺ?(TR;>ͺ?@Ԛ<(TR;>ͺ?@Ԛ< (T;ͺ? (T;ͺ? /IMT;ͺ?DSDAIT;ͺ?DSDA (T;ͺ? (T;ͺ?(TR;>ͺ?@Ԛ<(TR;>ͺ?@Ԛ< (T;ͺ? (T;ͺ?A?/IMPD;Fͺ?M7K/1I-I-@Ԛ<53IPD;Fͺ?MK/I-I-@Ԛ< (T;ͺ? (T;ͺ?(TR;>ͺ?@Ԛ<(TR;>ͺ?@Ԛ< (T;ͺ? (T;ͺ? /IMF̽>S6>NBIF̽>S6>NB (T;ͺ? (T;ͺ?(TR;>ͺ?@Ԛ<(TR;>ͺ?@Ԛ< (T;ͺ? (T;ͺ?53;ͺ?9T./I/J@/TA/IMT,*;ͺ?9T.I/J@/TAIT (T;ͺ? (T;ͺ?(TR;>ͺ?@Ԛ<(TR;>ͺ?@Ԛ< (T;ͺ? (T;ͺ?86/IM̺ٟ@6ʔ7;Vͺ?2(/IMI@)'I̺5ʔ7;Vͺ?2(II@G>SEU G>S8/-L44ȣ8G>EUSIBEU̍ L4ȣ8G>8SIB8G>SEU G>S8&$UJG>SIBEU3H8UG>SIB8H8G>SEU G>S8/-L44ȣ8G>EUSIBEU̍ L4ȣ8G>8SIB8G>SEU G>S8zx7HܞNDG>SEU7HܞNDG>SEUQ7HܞNDG>SEUDET߹-8Lԓ6Iַ;C=.b`7HNG>S87HNG>S8Q7HNG>S8DET߹-8Lԓ6IC=G>SEU G>S8/-L44ȣ8G>EUSIBEU̍ L4ȣ8G>8SIB8G>SEU G>S8ki/K@G>SEUSTSUQ=WBSEUSIBEU߹-=EMSIַ;BU1TPN/KG>S8SŘSEU G>S8/-L44ȣ8G>EUSIBEU̍ L4ȣ8G>8SIB8G>SEU G>S8DB-ܞNDG>! 
)SEUQ-Q;ۓRTCG0/--NG>S8Q-Q;ۓRCG0G>SEU G>S8/-L44ȣ8G>EUSIBEU̍ L4ȣ8G>8SIB8G>SEU G>S886RNUG>SEUIBSEU) :/B#!NG>S8IBS8:/G>SEU G>S8/-L44ȣ8G>EUSIBEU̍ L4ȣ8G>8SIB8G>SEU G>S8/-ܞNDG>SIBEU;SIBEU&$NG>SIB8;SIB8G>SEU G>S8/-L44ȣ8G>EUSIBEU̍ L4ȣ8G>8SIB8G>SEU G>S8SQ-ܞNDG>SEUQD2VFȣ84XIUҔB<֗TI7Iַ;ŒATJH-NG>S8QD2VFȣ84XIUҔB<֗TI7IŒATG>SEU G>S8/-L44ȣ8G>EUSIBEU̍ L4ȣ8G>8SIB8G>SEU G>S8 P PG>SEU G>S8/-L44ȣ8G>EUSIBEU̍ L4ȣ8G>8SIB8G>SEU G>S820DGIBEUSEUV;EUBEU#!DGIB8S8V8B8 DBCDIٟ@964>DBR@54>,*BDCD94>,6ODPDA&$BDR94>,6DPDA DBCDIٟ@964>DBR@54>#!BCDO94>6O@Ԛ<BRO94>6@Ԛ< DBCDIٟ@964>DBR@54>DBDBD>CD.NA>%>R6Iٟ@97DSDA53B>R.NA>%>R6@4DSDA DBCDIٟ@964>DBR@54>20DCD97UDE4Oٟ@6AA7B DR5UDMOٟ@6+ DBCDIٟ@964>DBR@54>,*BDCD94>,6ODPDA&$BDR94>,6DPDA DBCDIٟ@964>DBR@54>PNDBCDIٟ@964E>йSDK9ٟ@9SM>BU-щQ@Ԛ<>޹S @9SM>BU-щQ@Ԛ< DBCDIٟ@964>DBR@54>DBDBD>CD.NA>%>R6Iٟ@97DSDA53B>R.NA>%>R6@4DSDA DBCDIٟ@964>DBR@54>PNOD6>D=7ADBDCD=9>DIٟ@OD2O@@@>D=7ABR9>D@OD2O@@ DBCDIٟ@964>DBR@54>,*BDCD94>,6ODPDA&$BDR94>,6DPDA DBCDIٟ@964>DBR@54>b`DBDCDCٟ@9ɤKE7>RɤK/ϪJ>H=Q996ɤKA>A910TDPNBR@9ɤKE7>RɤK/ϪJ>H=Q95ɤKA>A10TD DBCDIٟ@964>DBR@54>DBDBD>CD.NA>%>R6Iٟ@97DSDA53B>R.NA>%>R6@4DSDA DBCDIٟ@964>DBR@54>_]DBCDN59OH348BD4R4O@4WOŮPO4/TDOTDBDBRN5O38BD4MO@4WX޵+TOT DBCDIٟ@964>DBR@54>,*BDCD94>,6ODPDA&$BDR94>,6DPDA DBCDIٟ@964>DBR@54>#!BCD94>A6O@Ԛ<BR94>A6@Ԛ< DBCDIٟ@964>DBR@54>DBDBD>CD.NA>%>R6Iٟ@97DSDA53B>R.NA>%>R6@4DSDA DBCDIٟ@964>DBR@54> UCD94>A6?,UR94>A6?, BU06˩5FE91PBU06FE1PA?BTS6˩5؇9?˩5OMR9I1FUFFPJ86BS6؇9?˩5OMR9I1FUFPJ BU06˩5FE91PBU06FE1P/-B؇96˩5ֲR1FQ?ٟ@SPG3&$B؇96ֲR1FQ?ٟ@SG BU06˩5FE91PBU06FE1P20UC>B06˩5NR31SFщQ@Ԛ</-UC>B06NR31SFщQ@Ԛ< BU06˩5FE91PBU06FE1P#!BOFR6˩5֛7>3PJBOFR673PJ BU06˩5FE91PBU06FE1PA?BTS6˩5؇9?˩5OMR9I1FUFFPJ86BS6؇9?˩5OMR9I1FUFPJ BU06˩5FE91PBU06FE1PkiBTS6˩50QN?9H9RIJIН?TXLI/I/I/B=6I6B=-0YWBS60QN?9H9RIJIܞ?ɜXI/II/B=6I6B=0 BU06˩5FE91PBU06FE1P20UC>B06˩5NR31SFщQ@Ԛ</-UC>B06NR31SFщQ@Ԛ< BU06˩5FE91PBU06FE1PDB05OȨKFD9IVBTELȨKF9IV:TН?>/-05OӨKDIVBELӨKIV:? BU06˩5FE91PBU06FE1PA?BTS6˩5؇9?˩5OMR9I1FUFFPJ86BS6؇9?˩5OMR9I1FUFPJ BU06˩5FE91PBU06FE1P20BTS6˩50BT6˩51T7H;T#!BS60B617H;T BU06˩5FE91PBU06FE1P20UC>B06˩5NR31SFщQ@Ԛ</-UC>B06NR31SFщQ@Ԛ< BU06˩5FE91PBU06FE1P BT66˩50QGН?>B660QG? 
BU06˩5FE91PBU06FE1PA?BTS6˩5؇9?˩5OMR9I1FUFFPJ86BS6؇9?˩5OMR9I1FUFPJ BU06˩5FE91PBU06FE1PBR6˩51?FBTBR61?BT BU06˩5FE91PBU06FE1P20UC>B06˩5NR31SFщQ@Ԛ</-UC>B06NR31SFщQ@Ԛ< BU06˩5FE91PBU06FE1P BT؇96˩5M5RFFB؇96M5RFFCE.TGTK6TCE.TGK6T><.,9T>GTP6ڜ>KDTWAПCDSDA;9.,9T>GP6ڜ>KDTWAПCDSDACE.TGTK6TCE.TGK6T.TGT6>?>P.TG6>?>PCE.TGTK6TCE.TGK6T><.,9T>GTP6ڜ>KDTWAПCDSDA;9.,9T>GP6ڜ>KDTWAПCDSDACE.TGTK6TCE.TGK6T@N>PC @N>PCE.TGTK6TCE.TGK6T><.,9T>GTP6ڜ>KDTWAПCDSDA;9.,9T>GP6ڜ>KDTWAПCDSDACE.TGTK6TCE.TGK6T;9G߹-.TGTޚT>9BKR9KϋIL‡KAB20G߹-.TGޚT>BR9KϋIL‡KABCE.TGTK6TCE.TGK6T><.,9T>GTP6ڜ>KDTWAПCDSDA;9.,9T>GP6ڜ>KDTWAПCDSDACE.TGTK6TCE.TGK6T)'.TGT6>7KM?U>T .TG6>7KM?,CE.TGTK6TCE.TGK6T><.,9T>GTP6ڜ>KDTWAПCDSDA;9.,9T>GP6ڜ>KDTWAПCDSDACE.TGTK6TCE.TGK6T20.T9Kʉ55>A>BK=U;Н?T)'.T9Kʉ55>A>BU;ܞ?CE.TGTK6TCE.TGK6T><.,9T>GTP6ڜ>KDTWAПCDSDA;9.,9T>GP6ڜ>KDTWAПCDSDACE.TGTK6TCE.TGK6T/-.T9Kʉ55>A>BK=3RT&$.T9Kʉ55>A>B3ҔRCE.TGTK6TCE.TGK6T><.,9T>GTP6ڜ>KDTWAПCDSDA;9.,9T>GP6ڜ>KDTWAПCDSDACE.TGTK6TCE.TGK6TDBO<>TRIO.TRIOVTIOB<ȬTIQ>86O<>TRO.TROVTIOB<ЬTQCE.TGTK6TCE.TGK6T><.,9T>GTP6ڜ>KDTWAПCDSDA;9.,9T>GP6ڜ>KDTWAПCDSDACE.TGTK6TCE.TGK6TMK.TGT9.DS>9>AK@—PB@ A6O:@@@><.TG9.DS>IAK@B@ A6:@@CE.TGTK6TCE.TGK6T><.,9T>GTP6ڜ>KDTWAПCDSDA;9.,9T>GP6ڜ>KDTWAПCDSDACE.TGTK6TCE.TGK6T.TGT6>@Ԛ<.TG6>@Ԛ<̾-,AJT0ޡ8;-AJT0AJT?@KAJT?@K̾-,AJT0ޡ8;-AJT0;9̾-,4FE4AJT54T?1WQ̛<7T/-;-4E4AJT54T?WQ̛<7̾-,AJT0ޡ8;-AJT0\Z̾--,̾-,6.ΩWH8443AJT4Q۹/85?1WK40ޡ8>1MK̾--,;-6ΩWH8443AJT4Q۹/85?WåK0>1̾-,AJT0ޡ8;-AJT0PN̾--H,̾-,6.ΩW4L5/B4W5H/OTANя7>1GE̾--H,;-6ΩW4L5/B4W5H/OTAN>1̾-,AJT0ޡ8;-AJT0,*7H984AJT54>0ޡ87̾-,AJT0ޡ8;-AJT0 ̾-/XT ̾-/XT̾-,AJT0ޡ8;-AJT0  ̾-CT  ̾-CT̾-,AJT0ޡ8;-AJT0,*H84AJT540ޡ8>1@K)'H84AJT540>1@K̾-,AJT0ޡ8;-AJT0AJT?@KAJT?@K̾-,AJT0ޡ8;-AJT0\ZH,̾-,XΩW84ALT540ޡ8>11DD>7U ̾-X̾-X-TMKH,;-X84ALT540>11D>7U ̾-X̾-X-̾-,AJT0ޡ8;-AJT0\Z̾--,̾-,6.ΩWH8443AJT4Q۹/85?1WK40ޡ8>1MK̾--,;-6ΩWH8443AJT4Q۹/85?WåK0>1̾-,AJT0ޡ8;-AJT0JH8AJTOC6̾-,84L5/TS:-1QBU/;868AJTOC6;-84L5/S:-ڠ#/̾-,AJT0ޡ8;-AJT0,*7H984AJT54>0ޡ87̾-,AJT0ޡ8;-AJT0/-̾-//?84AJT5T;U/T&$̾-//84AJT5T;*̾-,AJT0ޡ8;-AJT0  ̾-CT  ̾-CT̾-,AJT0ޡ8;-AJT0;9̾-,EAJTTDɍPMA:7.U/T/-;-EAJTTDӍPA:7.*̾-,AJT0ޡ8;-AJT0AJT?@KAJT?@K̾-,AJT0ޡ8;-AJT0ILIL̾-,AJT0ޡ8;-AJT0\Z̾--,̾-,6.ΩWH8443AJT4Q۹/85?1WK40ޡ8>1MK̾--,;-6ΩWH8443AJT4Q۹/85?WåK0>1̾-,AJT0ޡ8;-AJT0&$̾-4AT95/?V/?T ̾-4AT95/@?T)'FBUQDND6S?F: FBUQND6S?,*7F:BP1ND?F:@Ԛ<#!7:BP1ND?@Ԛ<)'FBUQDND6S?F: FBUQND6S?R6!8,TR6,T)'FBUQDND6S?F: FBUQND6S?207F:BP1ND?F:6S@Ԛ<)'7:BP1ND?6S@Ԛ<)'FBUQDND6S?F: FBUQND6S?531K>QP?F:Bб 4D=3-AB,*1K>QP?Bб 4D=-AB)'FBUQDND6S?F: FBUQND6S?,*7F:BP1ND?F:@Ԛ<#!7:BP1ND?@Ԛ<)'FBUQDND6S?F: FBUQND6S?;94F:̔6BUPV715CS?F:@Ԛ<204:̔6BUPV715CS?@Ԛ<)'FBUQDND6S?F: FBUQND6S?207F:BP1ND?F:6S@Ԛ<)'7:BP1ND?6S@Ԛ<)'FBUQDND6S?F: FBUQND6S?&$̔6ַ;IBUVԋ/CS?F:1IBUVԋ/CS?)'FBUQDND6S?F: FBUQND6S?,*7F:BP1ND?F:@Ԛ<#!7:BP1ND?@Ԛ<)'FBUQDND6S?F: FBUQND6S?GEF:̔6BU>ȣ89071KK6S?F:DSDA><:̔6BU>ȣ89071KK6S?DSDA)'FBUQDND6S?F: FBUQND6S?207F:BP1ND?F:6S@Ԛ<)'7:BP1ND?6S@Ԛ<)'FBUQDND6S?F: FBUQND6S? DA7O=—PRߑ4PTDA7=Rߑ4PT)'FBUQDND6S?F: FBUQND6S?,*7F:BP1ND?F:@Ԛ<#!7:BP1ND?@Ԛ<)'FBUQDND6S?F: FBUQND6S? 
?F:6S>JK2@Ԛ<JHU2QNDHF/@SKDND SC>K2@Ԛ<UNDHF/UDHF/#!0-0:Nٟ@HFVFT0-:ٟ@HFFTUNDHF/UDHF/GEM:5UND8F/?PS6 1B>UDF?PS6UNDHF/UDHF/PNU2QN5DHF/Bٟ@SKDND SC>K2@Ԛ<JHU2QNDHF/@SKDND SC>K2@Ԛ<UNDHF/UDHF//-ӟ;N@R>8FS/"ҥ3!@;6&$ӟ;NR>8FS"ҥ3!@6UNDHF/UDHF/GEM:5K2@Ԛ<JHU2QNDHF/@SKDND SC>K2@Ԛ<UNDHF/UDHF/&$ FS5/ FS/UNDHF/UDHF/GEM:5K2@Ԛ<JHU2QNDHF/@SKDND SC>K2@Ԛ<UNDHF/UDHF/>/@K8FENܜ>@K  ,ݠ.A,A_]O70CT,ݠ.7>DGܤKP04TVAV07>?Q;GEO߫B>GK04TVAV07>?Q;  ,ݠ.A,A86,ݠ.>O/19O616ABTGA7B/-,>O/19O616ABTG+  ,ݠ.A,A,ݠ.ݠ.O ,ݠ.O  ,ݠ.A,AO,ݠ.B:DG@KO,BD@K  ,ݠ.A,A_]O70CT,ݠ.7>DGܤKP04TVAV07>?Q;GEO߫B>GK04TVAV07>?Q;  ,ݠ.A,A CN,ݠ.QADPDACN,QADPDA  ,ݠ.A,A,ݠ.ݠ.O ,ݠ.O  ,ݠ.A,A)'Iַ;DN0CT,ݠ.AщQ@Ԛ<Iַ;DNAщQ@Ԛ<  ,ݠ.A,A_]O70CT,ݠ.7>DGܤKP04TVAV07>?Q;GEO߫B>GK04TVAV07>?Q;  ,ݠ.A,A,*>T,ݠ.9ABAA4˛5DA4>,9ABA˛5DAn  ,ݠ.A,A,ݠ.ݠ.O ,ݠ.O  ,ݠ.A,A NT,ݠ.Nĵ*  ,ݠ.A,A_]O70CT,ݠ.7>DGܤKP04TVAV07>?Q;GEO߫B>GK04TVAV07>?Q;  ,ݠ.A,APNX>T9;;>X>QA7AO7RN;X7:U>E8DBX>;>X>QA7AO7N;X7:U>E8  ,ݠ.A,A,ݠ.ݠ.O ,ݠ.O  ,ݠ.A,AA?O߹-5,ݠ.߹-,ݠ.:߹-HİUMANC)O8,T53O߹-5,߹-,:߹-HMANC)O,TFIֈD:0DFI:0DIֈD:0@? I:0@FIֈD:0DFI:0D IֈDN0D:DG@KIN0DD@KFIֈD:0DFI:0D20D3ԚIֈD0>D:DSDA#!3IF>I>DDSDAFIֈD:0DFI:0DIֈDGC?DIGC?DFIֈD:0DFI:0DIֈD:0@? I:0@FIֈD:0DFI:0DD:IֈD14  DI1FIֈD:0DFI:0D20D3ԚIֈD0>D:DSDA#!3IF>I>DDSDAFIֈD:0DFI:0D20D3ԚIֈD0>D:DSDA#!3IF>I>DDSDAFIֈD:0DFI:0DIֈD:0@? I:0@FIֈD:0DFI:0D><يRIֈD:0DيR4IֈD:0BIيR4TC,>)'يRI:0DRI:0BIRCFIֈD:0DFI:0D20D3ԚIֈD0>D:DSDA#!3IF>I>DDSDAFIֈD:0DFI:0D;9IֈD>0EFR4:0>ğCѭDӮD:ٟ@H@Ԛ</-I>0EFM:0>ɟCܮDٟ@H@Ԛ<FIֈD:0DFI:0DIֈD:0@? I:0@FIֈD:0DFI:0D,*IֈD:0ߢ?DT7N79UAT#!I:0ߢ?D7N79UAFIֈD:0DFI:0D20D3ԚIֈD0>D:DSDA#!3IF>I>DDSDAFIֈD:0DFI:0D><0IֈD:0D04IֈD:0BI04TC,>/-0I:0D04I:0BI04CC@N= ->>C@N= ->>DB - -HE07!7LJ677/-H0LJ677C@N= ->>C@N= ->>;9E87CC@N.H˱U=FCסE@@@2087CC@N.H˱U=FC@@C@N= ->>C@N= ->>DB - -HE07!7LJ677/-H0LJ677C@N= ->>C@N= ->>GE= ->C@N7U0>ٟ@6MVIW>EDSDA><= ->C@NU0>ٟ@6V=>EDSDAC@N= ->>C@N= ->>DB - -HE07!7LJ677/-H0LJ677C@N= ->>C@N= ->>A?C@N= -F>EMӛ?ߤ8>4FC@N@@@;9C@N= -F>EMӛ?ߤ8>4C@N@@C@N= ->>C@N= ->>DB - -HE07!7LJ677/-H0LJ677C@N= ->>C@N= ->>DB= ->CסEC@NDE0**ԑ49A*/@@@;9= ->CC@ND0**ԑ49A*/@@C@N= ->>C@N= ->>DB - -HE07!7LJ677/-H0LJ677C@N= ->>C@N= ->>b`B˩55=>O*7C@N7C@ĕ6TFR/HFH4ĕ6TPNB˩55=>O*C@ĕ6TFR/HF4ĕ6TC@N= ->>C@N= ->>DB - -HE07!7LJ677/-H0LJ677C@N= ->>C@N= ->>_]6ɵO=>C@Nð.A ->>ٟ@9ٟ@DDܢESܤKA@CסESܤKA@Ԛ<SQ6ɵO=>C@Nð.A ->>9DܢESܤKA@CSܤKA@Ԛ<C@N= ->>C@N= ->>DB - -HE07!7LJ677/-H0LJ677C@N= ->>C@N= ->>/-7C@N7 -=FSÐW7#! 
-=FSÐW7C@N= ->>C@N= ->>DB - -HE07!7LJ677/-H0LJ677C@N= ->>C@N= ->>><= -F>C@NPEMӛ?M>>Fٟ@6@Ԛ<;9= -F>C@NPEMӛ?M>>F5@Ԛ<C@N= ->>C@N= ->>DB - -HE07!7LJ677/-H0LJ677C@N= ->>C@N= ->>)'C@N= -F(D>M@Ԛ<)'C@N= -F(D>M@Ԛ<HS/ON4/:HSON4/VT7H>S/OB4ʡH9ɰ5ȥ7/:į?I EL/ EHS/ON4/:HSON4/JHùBLW¶7/J7H>/B/WȥOB4784/:ĹBN/J84/HS/ON4/:HSON4/,*HS/ON4/:ҁX?L-T#!HSON4/ҁX?L-HS/ON4/:HSON4/&$HW>S/94/:AƭIHW>S94/AHS/ON4/:HSON4/)'7H/B/>ON47/:/HS/ON4/:HSON4/;9H>S/>4NO/://9¶7JùBL/#!>4NO//NJĹB/HS/ON4/:HSON4/;97H>S/>OB47BR:0ABBR:0ABHS/ON4/:HSON4/20/:7H>S/OB47/://HS/ON4/:HSON4/VT7H>S/OB4ʡH9ɰ5ȥ7/:į?I EL/ EHS/ON4/:HSON4/&$H>S/OB4">OB4>HS/ON4/:HSON4/,*HS/ON4/:ҁX?L-T#!HSON4/ҁX?L-HS/ON4/:HSON4/PN7HS/47/: ȥǶ,W¶7/>;GB20/ ȥǶ,N/>;GHS/ON4/:HSON4/)'7H/B/>ON47/:/HS/ON4/:HSON4/kiùBL9¶7/J7HS/9ȥ4NO7/:9¶7//:66ȈX4&20ĹBN/J/N//66ȈX4HS/ON4/:HSON4/;97H>S/>OB47BR:0ABBR:0ABHS/ON4/:HSON4/><ʡHU٨I7HS/47/::,AF> ʡHU٨I/:,>HS/ON4/:HSON4/VT7H>S/OB4ʡH9ɰ5ȥ7/:į?I EL/ EHS/ON4/:HSON4/HN1,;TLH1,;TLX:86˩5 X:86&$X:86˩5>X:86˩57 X:86>X:867X:86˩5 X:8686X:86˩54X:864VDT(!)'X:864X:86VDTX:86˩5 X:86&$X:86˩5>X:86˩57 X:86>X:867X:86˩5 X:86)'E8:X66˩5H38@@@#!E8:X66H38@@X:86˩5 X:86&$X:86˩5>X:86˩57 X:86>X:867X:86˩5 X:8620X:86ӻBOX:86˩5Q464T#!X:80X:86Q6TX:86˩5 X:86&$X:86˩5>X:86˩57 X:86>X:867X:86˩5 X:86GEX:86ӻBOX:86ӻBOX:86H6T$!20X:80X:86OX:866TX:86˩5 X:86&$X:86˩5>X:86˩57 X:86>X:867X:86˩5 X:86#!5X:8>6˩56R@Ԛ< 5X:8>66R@Ԛ<߹-U.:D>߹-U.:D>_]AU.?:D>>Rٟ@6U>G1@:?;0@W;0:GȻ;T=.LYWAU.?:D>>Rٟ@6>G1@:?;0@W;0:GȻ;T=L߹-U.:D>߹-U.:D> ߹-U.8 ߹-U.8߹-U.:D>߹-U.:D>#!߹-U.:DM=6S@Ԛ<#!߹-U.:DM=6S@Ԛ<߹-U.:D>߹-U.:D>/-U.>DP?14:щQȻ;T=.L)'U.>D?14:щQȻ;T=L߹-U.:D>߹-U.:D>ec1O .9.RU.,D6/EDCD>ڤ55J@C;ϵ>͵ATJ.PTVT1O.9.RU.,D6/EDCD>ܤ5J@Aϵ>͵ATJ.P߹-U.:D>߹-U.:D>)'кBU.6:DP߇;Ȼ;T=.L#!кBU.6:D߇;Ȼ;T=L߹-U.:D>߹-U.:D>_]AU.?:D>>Rٟ@6U>G1@:?;0@W;0:GȻ;T=.LYWAU.?:D>>Rٟ@6>G1@:?;0@W;0:GȻ;T=L߹-U.:D>߹-U.:D> U.8߹-U..ʺBPT U.8߹-U..ʺBPT߹-U.:D>߹-U.:D>#!߹-U.:DM=6S@Ԛ<#!߹-U.:DM=6S@Ԛ<߹-U.:D>߹-U.:D> ߹-U.:/0EFT6 ߹-U.:/0EFT6߹-U.:D>߹-U.:D>ec1O .9.RU.,D6/EDCD>ڤ55J@C;ϵ>͵ATJ.PTVT1O.9.RU.,D6/EDCD>ܤ5J@Aϵ>͵ATJ.P߹-U.:D>߹-U.:D>&$U.VP1B,, 5&$U.VP1B,, 5߹-U.:D>߹-U.:D>_]AU.?:D>>Rٟ@6U>G1@:?;0@W;0:GȻ;T=.LYWAU.?:D>>Rٟ@6>G1@:?;0@W;0:GȻ;T=L߹-U.:D>߹-U.:D>߹-V;T64߹-V;T6߹-U.:D>߹-U.:D>#!߹-U.:DM=6S@Ԛ<#!߹-U.:DM=6S@Ԛ<߹-U.:D>߹-U.:D>MK߹-U.6>P5,A߹-U.˭V6,3T߹-˭V6܈IU?90GE߹-U.6>P59߹-U.˭V6,3T߹-˭V6߈I?90߹-U.:D>߹-U.:D>ec1O .9.RU.,D6/EDCD>ڤ55J@C;ϵ>͵ATJ.PTVT1O.9.RU.,D6/EDCD>ܤ5J@Aϵ>͵ATJ.P߹-U.:D>߹-U.:D>MK N =.H= F0BU.$D: N =MPMPJH N =U= F0BU.$D: N =MPMP߹-U.:D>߹-U.:D>_]AU.?:D>>Rٟ@6U>G1@:?;0@W;0:GȻ;T=.LYWAU.?:D>>Rٟ@6>G1@:?;0@W;0:GȻ;T=L߹-U.:D>߹-U.:D>/-߹-U.DI429-DIV=RJ#!߹-U.D429-+RJ߹-U.:D>߹-U.:D>#!߹-U.:DM=6S@Ԛ<#!߹-U.:DM=6S@Ԛ<߹-U.:D>߹-U.:D>53߹-U./0Bб DD7=EUT۹/UD,*߹-U./0Bб DD7,U/D߹-U.:D>߹-U.:D>ec1O .9.RU.,D6/EDCD>ڤ55J@C;ϵ>͵ATJ.PTVT1O.9.RU.,D6/EDCD>ܤ5J@Aϵ>͵ATJ.P߹-U.:D>߹-U.:D>;9߹-U.0NUOބ2E   =ĪC'AB&$߹-U.0NUOǷ. =AB߹-U.:D>߹-U.:D>_]AU.?:D>>Rٟ@6U>G1@:?;0@W;0:GȻ;T=.LYWAU.?:D>>Rٟ@6>G1@:?;0@W;0:GȻ;T=L߹-U.:D>߹-U.:D>SQ:Aб =>U.=9V>D>9ԚU.V>D>1OISÄN989FT6߹-U.:D>߹-U.:D>#!߹-U.:DM=6S@Ԛ<#!߹-U.:DM=6S@Ԛ<߹-U.:D>߹-U.:D>)'U.>DP6:,Ȼ;T=.L#!U.>D6:,Ȼ;T=L߹-U.:D>߹-U.:D>ec1O .9.RU.,D6/EDCD>ڤ55J@C;ϵ>͵ATJ.PTVT1O.9.RU.,D6/EDCD>ܤ5J@Aϵ>͵ATJ.P߹-U.:D>߹-U.:D> ߹-1U.9TDSDA ߹-1U.9TDSDA߹-U.:D>߹-U.:D>_]AU.?:D>>Rٟ@6U>G1@:?;0@W;0:GȻ;T=.LYWAU.?:D>>Rٟ@6>G1@:?;0@W;0:GȻ;T=L߹-U.:D>߹-U.:D>GE3Ԛ9VC=6RMK -PT61TP6531U.D9VC6RK -PT61TڀP߹-U.:D>߹-U.:D>#!߹-U.:DM=6S@Ԛ<#!߹-U.:DM=6S@Ԛ<߹-U.:D>߹-U.:D>;9߹-U.:SM?B;BɵOMSB#**.T53߹-U.:SM?B;BֵOSB#**.  
U8SUS/-1۠N -FɹKU=S5ۓR:ϡSFAT#!ޠN -FU=S5ۓR:ݡSA  U8SUS&$FMGMM>.3ˠS87TFMGMM>.87  U8SUSA?N,ˏR0#>ˌD3U=SNۥN&7><N,ˏR0#>ьDU=SNۥN&7  U8SUS)'VX?AM—PS>SM8GJ#!VX?AM—PS>SٶM1  U8SUS/-1۠N -FɹKU=S5ۓR:ϡSFAT#!ޠN -FU=S5ۓR:ݡSA  U8SUS/-U=Sб .65J?O4ʄ/&87&$U=Sб .6JO4ʄ/&8  U8SUSA?N,ˏR0#>ˌD3U=SNۥN&7><N,ˏR0#>ьDU=SNۥN&7  U8SUSVT70:7KU6A8>C¾98T—P7—PX>¾987;>C¾98—P7—PX>¾98;  U8SUS/-1۠N -FɹKU=S5ۓR:ϡSFAT#!ޠN -FU=S5ۓR:ݡSA  U8SUSA?U=Sб 7̛<87E7C77̛<(!53U=Sб 7̛<87E7C77̛<  U8SUSA?N,ˏR0#>ˌD3U=SNۥN&7><N,ˏR0#>ьDU=SNۥN&7  U8SUS86W7IU8>SESٟ@M߫U@U'@@@20W7IU>SESٟ@M߫U@U'@@  U8SUS/-1۠N -FɹKU=S5ۓR:ϡSFAT#!ޠN -FU=S5ۓR:ݡSA  U8SUS>< -4J6 NLF;8T786 -4J6 NLF87 N/,QEO. N/+O.53 N/,QI/E4OL/.DQET)' N/Q@E4OL/.T N/,QEO. N/+O.;9/,<7F NCN:QEI/4O5.L,*/<7F NN:+@4O5L N/,QEO. N/+O.53 N/,QI/E4OL/.DQET)' N/Q@E4OL/.T N/,QEO. N/+O.b`/QET N/QEVK/QEL9O/Լ=ET/QE/4/VQE1WJH/+T N/+V/+L9O/=T/+//V71W N/,QEO. N/+O.53 N/,QI/E4OL/.DQET)' N/Q@E4OL/.T N/,QEO. N/+O.&$ NCN/QEL)5E6>  )@?)@)ٟ@6E6>)5E6>)'UC9S;ٟ@>6E6>@Ԛ<)'UC9S;ٟ@>6E6>@Ԛ<)ٟ@6E6>)5E6>  )@?)@)ٟ@6E6>)5E6>&$VX,)EBٟ@&EϜVQTVX)E@&EϜVQ)ٟ@6E6>)5E6>  )@?)@)ٟ@6E6>)5E6>DBDԚ<(!ٟ@6ES>)%"6"&)'D5ES>)%"6")ٟ@6E6>)5E6>  )@?)@)ٟ@6E6>)5E6>#!;ښL)E6??OKT;ښL)E6?AT)ٟ@6E6>)5E6>  )@?)@)ٟ@6E6>)5E6>&$8V1)ٟ@>6E6>@Ԛ<#!81)ٟ@>6E6>@Ԛ<)ٟ@6E6>)5E6>  )@?)@)ٟ@6E6>)5E6>53K:S;ٟ@Sٟ@>6E66>GA7B/-K:S;ٟ@Sٟ@>6E66>G+)ٟ@6E6>)5E6>  )@?)@)ٟ@6E6>)5E6>\Z$B)KFE6>RDI6PGH>R5K9>66;NDSPԮK߀3VT$B)KFE6>RDI6PGH>RK9>66;NDSPٮK)ٟ@6E6>)5E6>  )@?)@)ٟ@6E6>)5E6>/-R—Pٟ@)%ٟ@6E6>DPDA&$R—Pٟ@)5E6>DPDA)ٟ@6E6>)5E6>  )@?)@)ٟ@6E6>)5E6>GE!Rٟ@6E6>ٟ@щQKB)B$&9U>щQ@Ԛ<;9!R5E6>ٟ@щQKB)BU>щQ@Ԛ< ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$hf$>I?9TWO$8$>I?9TWO$8Q$>I?9TWO$8,9PMK$>ɞ9WO$8$>ɞ9WO$8Q$>ɞ9WO$8,9P ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$)'֥>$8?9TW8QH.T#!֥>$8ɞ9W8QH.T ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$)'U"҈$4T޲F?9TU"4T޲Fɞ9 ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$trL:V1T>B;W8׫B!UH?I?9T$8CWO?98W8ɳQWQBHO_]L:V1T>;W8׫B!UH?Iɞ9$8CWOǞ9W8ɳQWвQH ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$;9µ$?9Tµ$?9T@M@>KT@/Bɞ9ɞ9@ܱM>K@/ ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$86<4T޲Fɞ9"A/4T޲Fɞ9"Q8+KTO ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$&$?9TQ0"lj:?9TɳQQɞ9Q0"lj:ɞ9ɳQQ ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$ec?9T88I?9T$8WO888O?98QD2CI0C98>ŒATSQɞ988Iɞ9$8WO888O?98QD2C0C9>ŒAT ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$20?9T$8ܞND֥>W8ݶ;UW89T#!ɞ9$8N֥>W΀8U89T ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$,*"҈$4T޲F?9Tlj:""4T޲Fɞ9lj:" ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$53"҈$4T޲F?9TQD2DT#!"4T޲Fɞ9QD2DT ?9T$ɞ9$A?W6J/?9T$8:W6J,HPHCI9I20WF/ɞ9$8:WF,HPHI9I ?9T$ɞ9$A?µ$?9TA=ULEQ?DZ.߰4>N.X8BLFJO;5ORܠ9ݩ5N.FJO:ݩ5ʅ>߰4>OXFJORܠ9љ55KUS̛SQT07>S7S˩5W˩5U˩5ORܠ9ݩ5JUS̛<N4NX8BLFաO5OMݩ5NFO:ݩ54OXFOMљ55US̛<աO3US̛SQT07>S7S˩5W˩5UOMߩ5US̛</N/40O;  N0աOMK/N/4ʅ>߰4>N.OX,FJO:9/N/4@@@,*N4NOX,FO:9N@@/N/40O;  N0աO/N/4ʅ>߰4>N.X8BLFJO;5ORܠ9ݩ5N.FJO:ݩ5ʅ>߰4>OXFJORܠ9љ55KUS̛SQT07>S7S˩5W˩5U˩5ORܠ9ݩ5JUS̛<N4NX8BLFաO5OMݩ5NFO:ݩ54OXFOMљ55US̛<աO3US̛SQT07>S7S˩5W˩5UOMߩ5US̛</N/40O;  N0աO>߰4>N.Xҥ3߫UBWOFJUQJ&$N4NXUBWOFUQJ/N/40O;  N0աO/N/4ʅ>߰4>N.X8BLFJO;5ORܠ9ݩ5N.FJO:ݩ5ʅ>߰4>OXFJORܠ9љ55KUS̛SQT07>S7S˩5W˩5U˩5ORܠ9ݩ5JUS̛<N4NX8BLFաO5OMݩ5NFO:ݩ54OXFOMљ55US̛<աO3US̛SQT07>S7S˩5W˩5UOMߩ5US̛</N/40O;  N0աO6Mӛ?6Mӛ?O;O/N/47>6744B9HS1HŞ1Kį?Dߋ5 Gބ24PK ۥNɿCR S2ބ2B@Bބ2ͩ- ۥN BDBM/N7߰4>N.X8BLFJO;5ORܠ9ݩ5N.FJO:ݩ5ʅ>߰4>OXFJORܠ9љ55KUS̛SQT07>S7S˩5W˩5U˩5ORܠ9ݩ5JUS̛<N4NX8BLFաO5OMݩ5NFO:ݩ54OXFOMљ55US̛<աO3US̛SQT07>S7S˩5W˩5UOMߩ5US̛</N/40O;  N0աO86/N/4ʅ>߰4>N.XWBOFJUQJ#!N4NXWBOFUQJ/N/40O;  
N0աO/N/4ʅ>߰4>N.X8BLFJO;5ORܠ9ݩ5N.FJO:ݩ5ʅ>߰4>OXFJORܠ9љ55KUS̛SQT07>S7S˩5W˩5U˩5ORܠ9ݩ5JUS̛<N4NX8BLFաO5OMݩ5NFO:ݩ54OXFOMљ55US̛<աO3US̛SQT07>S7S˩5W˩5UOMߩ5US̛</N/40O;  N0աOSQ4/N/45F>JFJIݩ5ORܠ94/N/45F>JFJ,*4N54FIݩ5OM4N54F/N/40O;  N0աO/N/4ʅ>߰4>N.X8BLFJO;5ORܠ9ݩ5N.FJO:ݩ5ʅ>߰4>OXFJORܠ9љ55KUS̛SQT07>S7S˩5W˩5U˩5ORܠ9ݩ5JUS̛<N4NX8BLFաO5OMݩ5NFO:ݩ54OXFOMљ55US̛<աO3US̛SQT07>S7S˩5W˩5UOMߩ5US̛</N/40O;  N0աO 00,B4.Iַ;@?0,B4.I@/N/40O;  N0աO/N/4ʅ>߰4>N.X8BLFJO;5ORܠ9ݩ5N.FJO:ݩ5ʅ>߰4>OXFJORܠ9љ55KUS̛SQT07>S7S˩5W˩5U˩5ORܠ9ݩ5JUS̛<N4NX8BLFաO5OMݩ5NFO:ݩ54OXFOMљ55US̛<աO3US̛SQT07>S7S˩5W˩5UOMߩ5US̛</N/40O;  N0աO&$9:9;2—PX>9:;#!9:9;—PX>9:;,6BJ>P7BJ>P><76NJF3P;7N@N;JT;JQ;J/-7NJF3P;7NN;T;Q;,6BJ>P7BJ>P)'76BJPT;<̖@@TML&$7BJPT;<̖@@TML,6BJ>P7BJ>P/-176NJǭ;J2=>PQ@@@)'17NJǭ;J2=>PQ@@,6BJ>P7BJ>P,*CF76BJԿ7;˨OO/JIַ;&$CF7BJԿ7;˨OO/JI,6BJ>P7BJ>PUFJB76͎?/UFJB7͎?/,6BJ>P7BJ>P—PHIL2COJ—PHIL2COJسSB6BJDʿ7E>P—PHIL2COJ—PHIL2COJQ0N>>KJNBIL2COJBIL2COJ۳S6BJϿ7E>PBIL2COJBIL2COJQ0N׎>KɏJ,6BJ>P7BJ>P&$76BJ>PP/MGQT 7BJ>PPMGQT,6BJ>P7BJ>P,*JRJCJD6PV.6;JT)'JRJCJD6PV.6;T,6BJ>P7BJ>P 6BJDʿ7E>P@Ԛ<6BJϿ7E>P@Ԛ<,6BJ>P7BJ>P)'ARJBJD6PщQU;7P&$ARJBJD6PщQU;7,6BJ>P7BJ>P/-76BJF6F,QVMG.D6,*7BJF6F,QVMG.D6,6BJ>P7BJ>P53BܥNFCS7B76BR6HJ>AP/-BܥNFCS7B7B7HJ>AP,6BJ>P7BJ>P><76NJF3P;7N@N;JT;JQ;J/-7NJF3P;7NN;T;Q;,6BJ>P7BJ>PYWJǭ;N,6>PJǭ;DƂGщQJǭ;D@щQ,6>G3.ٟ@DƂGщQ@Ԛ<SQJǭ;N7>PJǭ;DƂGщQJǭ;D@щQ7>G3.ٟ@DƂGщQ@Ԛ<,6BJ>P7BJ>P/-176NJǭ;J2=>PQ@@@)'17NJǭ;J2=>PQ@@,6BJ>P7BJ>P/-FJō/NJD0PL36>;GB&$JNJD0PL36>;G,6BJ>P7BJ>PUFJB76͎?/UFJB7͎?/,6BJ>P7BJ>P#!6ǭ;>Q6NJ>P;7 6>Q6NJ>P;7,6BJ>P7BJ>P&$76BJ>PP/MGQT 7BJ>PPMGQT,6BJ>P7BJ>PCFJB6ǭ;@Ԛ<CFJB6@Ԛ<,6BJ>P7BJ>P 6BJDʿ7E>P@Ԛ<6BJϿ7E>P@Ԛ<,6BJ>P7BJ>P 6BJD6E>P@Ԛ< 6BJD6E>P@Ԛ<,6BJ>P7BJ>P/-76BJF6F,QVMG.D6,*7BJF6F,QVMG.D6,6BJ>P7BJ>P 76BJ>PHڶ>@Ԛ<7BJ>PHڶ>@Ԛ<,6BJ>P7BJ>P><76NJF3P;7N@N;JT;JQ;J/-7NJF3P;7NN;T;Q;,6BJ>P7BJ>P Lǭ;BϨHJ>PA7BLBϨHJ>P+,6BJ>P7BJ>P/-176NJǭ;J2=>PQ@@@)'17NJǭ;J2=>PQ@@,6BJ>P7BJ>P#!Lǭ;BϨHJ>PDG@KLBϨHJ>PD@K,6BJ>P7BJ>PUFJB76͎?/UFJB7͎?/,6BJ>P7BJ>P ;-M= ;-M=,6BJ>P7BJ>P&$76BJ>PP/MGQT 7BJ>PPMGQT,6BJ>P7BJ>P ;-M= ;-M=,6BJ>P7BJ>P 6BJDʿ7E>P@Ԛ<6BJϿ7E>P@Ԛ<,6BJ>P7BJ>P/-ϨHJō/BJ>PϨHJō/BJڶ>F=/-ϨHJō/BJ>PϨHJō/BJڶ>F=,6BJ>P7BJ>P/-76BJF6F,QVMG.D6,*7BJF6F,QVMG.D6,6BJ>P7BJ>P2076BCJ>P/G=Q>BD>ÐW,*7BCJ>PG=Q>BD>ÐW,6BJ>P7BJ>P><76NJF3P;7N@N;JT;JQ;J/-7NJF3P;7NN;T;Q;,6BJ>P7BJ>P Lǭ;BϨHJ>P:ÐW4LBϨHJ>P:ÐW44T7@<<>47@<>JH>CT<7@<6R>16R>7,O9ϪJ1<>@Ԛ<;9>CT<7@6>16>7,91<>@Ԛ<4T7@<<>47@<> <6>7T<@9:T<6>7T?9:4T7@<<>47@<>JH>CT<7@<6R>16R>7,O9ϪJ1<>@Ԛ<;9>CT<7@6>16>7,91<>@Ԛ<4T7@<<>47@<> M4TCT7@<@Ԛ<M4CT7@@Ԛ<4T7@<<>47@<>JH>CT<7@<6R>16R>7,O9ϪJ1<>@Ԛ<;9>CT<7@6>16>7,91<>@Ԛ<4T7@<<>47@<>539TB@>TK7<:7@<ǭ;?AB,*9TB>TK7<:7@ՄNAB4T7@<<>47@<>JH>CT<7@<6R>16R>7,O9ϪJ1<>@Ԛ<;9>CT<7@6>16>7,91<>@Ԛ<4T7@<<>47@<>A?>T<@>/26SCSET<@>-/7B6;9>T?>/26SCSET?>-/7B64T7@<<>47@<>JH>CT<7@<6R>16R>7,O9ϪJ1<>@Ԛ<;9>CT<7@6>16>7,91<>@Ԛ<4T7@<<>47@<>/-T@47@<>JH>CT<7@<6R>16R>7,O9ϪJ1<>@Ԛ<;9>CT<7@6>16>7,91<>@Ԛ<4T7@<<>47@<>,*4T<@HAVTJD8DAP4?HAVTD8A4T7@<<>47@<>JH>CT<7@<6R>16R>7,O9ϪJ1<>@Ԛ<;9>CT<7@6>16>7,91<>@Ԛ<4T7@<<>47@<>4T47@<>JH>CT<7@<6R>16R>7,O9ϪJ1<>@Ԛ<;9>CT<7@6>16>7,91<>@Ԛ<4T7@<<>47@<>;94TRF7@<5@2D0O6P6T,*4RI@5@20O6P64T7@<<>47@<>JH>CT<7@<6R>16R>7,O9ϪJ1<>@Ԛ<;9>CT<7@6>16>7,91<>@Ԛ<4T7@<<>47@<>,*C>8T<7@<1>DPDA&$C>8<7@1>DPDA6NBUC6O  @U66NBV1UC6O@V1U66NBUC6O  @U6866NBUC-9ԚETBET&$@6OCN3>ETBET,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9 VHLć?OD6L@Ԛ< VHLć?OD6L@Ԛ<,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9/-DHLKD‡?OAO6:,A7B)'DHLKD‡?OAO6:,+,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9 VHLć?OD6L@Ԛ< VHLć?OD6L@Ԛ<,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O986H:!DƇ>O-8WHOWK-4=RJ53H:!DƇ>O-8WHOWK-4RJ,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9 VHLć?OD6L@Ԛ< 
VHLć?OD6L@Ԛ<,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O986NA9=H5D‡?OJٟ@6:G2@@@,*N9H5D‡?OJ5:G2@@,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9 VHLć?OD6L@Ԛ< VHLć?OD6L@Ԛ<,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9SQDǭ;DQDUH:DO>&DB7DOOJDIPAFE>6MKDǭ;DQDUH:DO>&DB7DOOJDPAF>6,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9 VHLć?OD6L@Ԛ< VHLć?OD6L@Ԛ<,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9&$H=Dć?O=9=ϷAH@H=Dć?O9A@,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9 VHLć?OD6L@Ԛ< VHLć?OD6L@Ԛ<,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9A?,O-HDBٟ@;?=1PK@‡?O=9=@Ԛ<53,O-HD@?=1PK@‡?O9@Ԛ<,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9 VHLć?OD6L@Ԛ< VHLć?OD6L@Ԛ<,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9JHH=WK=:B:D‡?O:D1=@9=D9D5@Ԛ<>O=9=&$@CӽD=HK:=-Ƈ>O9 VHLć?OD6L@Ԛ< VHLć?OD6L@Ԛ<,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9&$H=Dć?O=9=DSDA H=Dć?O9DSDA,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9 VHLć?OD6L@Ԛ< VHLć?OD6L@Ԛ<,*@CӽD=HK:=-Ƈ>O=9=&$@CӽD=HK:=-Ƈ>O9>1#!H8V27Cî7/T>1GE2,߀3՟?4H8V2,7C7G/T>1?TJQ>;92߀3՟?4H8V27Cî7/T>1?TJQ)'H8V2,7C7G/T>1#!H8V27Cî7/T>120H,82,ֈ;04VC7G/T>1)'H,82ڈ;4VCî7/T>1)'H8V2,7C7G/T>1#!H8V27Cî7/T>1GE2,߀3՟?4H8V2,7C7G/T>1?TJQ>;92߀3՟?4H8V27Cî7/T>1?TJQ)'H8V2,7C7G/T>1#!H8V27Cî7/T>1,*V@,1V2,7C7G.V@ M,1V27Cî7.M)'H8V2,7C7G/T>1#!H8V27Cî7/T>1GE2,߀3՟?4H8V2,7C7G/T>1?TJQ>;92߀3՟?4H8V27Cî7/T>1?TJQ)'H8V2,7C7G/T>1#!H8V27Cî7/T>153H82,7C7G/T>1?TJQ>,*H827Cî7/T>1?TJQ)'H8V2,7C7G/T>1#!H8V27Cî7/T>1GE2,߀3՟?4H8V2,7C7G/T>1?TJQ>;92߀3՟?4H8V27Cî7/T>1?TJQ)'H8V2,7C7G/T>1#!H8V27Cî7/T>1/-W?A;OV2,7C7GA.T#!W?ҞMOV27Cî7A.)'H8V2,7C7G/T>1#!H8V27Cî7/T>1GE2,߀3՟?4H8V2,7C7G/T>1?TJQ>;92߀3՟?4H8V27Cî7/T>1?TJQ)'H8V2,7C7G/T>1#!H8V27Cî7/T>12,>B-45J2>B-5)'H8V2,7C7G/T>1#!H8V27Cî7/T>1GE2,߀3՟?4H8V2,7C7G/T>1?TJQ>;92߀3՟?4H8V27Cî7/T>1?TJQ)'H8V2,7C7G/T>1#!H8V27Cî7/T>1hfH8 -N2,ԓ4DC7G77BK;9/T>1KL/U5 -5>2,WFVTH8 -N24Cî77BK;9/T>1KL/U5 -5>2W)'H8V2,7C7G/T>1#!H8V27Cî7/T>1GE2,߀3՟?4H8V2,7C7G/T>1?TJQ>;92߀3՟?4H8V27Cî7/T>1?TJQ)'H8V2,7C7G/T>1#!H8V27Cî7/T>186H,7H82,RNVC7G/T7>1 H,>1)'H8V2,7C7G/T>1#!H8V27Cî7/T>1GE2,߀3՟?4H8V2,7C7G/T>1?TJQ>;92߀3՟?4H8V27Cî7/T>1?TJQ)'H8V2,7C7G/T>1#!H8V27Cî7/T>12C2C)'H8V2,7C7G/T>1#!H8V27Cî7/T>1GE2,߀3՟?4H8V2,7C7G/T>1?TJQ>;92߀3՟?4H8V27Cî7/T>1?TJQ)'H8V2,7C7G/T>1#!H8V27Cî7/T>1V2,7C7G¶;V27Cî7¶;BRADK BRADDBRADKC5—P=—Pބ2RADKC58:-9ET86RADC5=܉2RADC58:-9ETBRADK BRAD><@GD5ՂPRA5HDKϲLK2!QH9T;9@GD5ՂPRA5HDϲLK2!QH9TBRADK BRADJHH 5ՂP2CDKLARAK3DKMK5DKև9>TA?H 5ՂP2RKLARAK3DMK5DKև9>BRADK BRAD,*F7CPL߫WA=RADKS7)'F7CPL߫WA=RADS7BRADK BRADPNՂPLE;ߏGKCRADKCBAMKCK?KCCPD7LRABADK><,9;DR؇9U8ȴS>CPD7LRABADBRADK BRADDBRADKC5—P=—Pބ2RADKC58:-9ET86RADC5=܉2RADC58:-9ETBRADK BRAD\Z:DKCLCBCL5?LFL>HDKCRAK?MKߏGKCBùFPN:DCLCBC5?LL>HDCRAK?MKߏGKCBùFBRADK BRADJHH 5ՂP2CDKLARAK3DKMK5DKև9>TA?H 5ՂP2RKLARAK3DMK5DKև9>BRADK BRADqo -2CDKՂPLARAK3DKMKߏGK HӒC,NDK5=TUߋ5,,=>:J_] -2RKՂPLARAK3DMKߏGK HӒC,D5=TUߋ5,=>: 6ѤI; 6ѤI; ѤI;@?  ѤI;@ 6ѤI; 6ѤI;ѤI;;KFABѤI;;KFAB 6ѤI; 6ѤI; ѤI;@?  ѤI;@ 6ѤI; 6ѤI; ѤI;S>>ٟ@6;@Ԛ<ѤI;S>>5;@Ԛ< 6ѤI; 6ѤI; ѤI;@?  ѤI;@ 6ѤI; 6ѤI;JH3Ԛ<ѤI;>6;6SF;.TTD6;6SF;.TDTMSѤI22E7>>2OD@TDBѤI;A @69>TMSѤI22E7>>2OD@T 6ѤI; 6ѤI; ѤI;@?  ѤI;@ 6ѤI; 6ѤI;><ѤI;B2ѤI;2ѤI;0ѤI;SNѤI;NOFT><ѤI;B2ѤI;2ѤI;0ѤI;SNѤI;NOFT 6ѤI; 6ѤI; ѤI;@?  ѤI;@ 6ѤI; 6ѤI;53ѤI;>>;U0>;D6PGDSDA53ѤI;>>;U0>;D6PGDSDA 6ѤI; 6ѤI; ѤI;@?  ѤI;@ 6ѤI; 6ѤI;ѤI;M@KѤI;M@K 6ѤI; 6ѤI; ѤI;@?  
\ No newline at end of file diff --git a/paddle/trainer/tests/gen_proto_data.py b/paddle/trainer/tests/gen_proto_data.py deleted file mode 100644 index 8cc6d44673..0000000000 --- a/paddle/trainer/tests/gen_proto_data.py +++ /dev/null @@ -1,279 +0,0 @@ -# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from cStringIO import StringIO - -import paddle.proto.DataFormat_pb2 as DataFormat -from google.protobuf.internal.encoder import _EncodeVarint - -import logging -import pprint - -logging.basicConfig( - format='[%(levelname)s %(asctime)s %(filename)s:%(lineno)s] %(message)s', ) -logger = logging.getLogger('paddle') -logger.setLevel(logging.INFO) - -OOV_POLICY_IGNORE = 0 -OOV_POLICY_USE = 1 -OOV_POLICY_ERROR = 2 - -num_original_columns = 3 - -# Feature combination patterns. -# [[-1,0], [0,0]] means previous token at column 0 and current token at -# column 0 are combined as one feature. -patterns = [ - [[-2, 0]], - [[-1, 0]], - [[0, 0]], - [[1, 0]], - [[2, 0]], - [[-1, 0], [0, 0]], - [[0, 0], [1, 0]], - [[-2, 1]], - [[-1, 1]], - [[0, 1]], - [[1, 1]], - [[2, 1]], - [[-2, 1], [-1, 1]], - [[-1, 1], [0, 1]], - [[0, 1], [1, 1]], - [[1, 1], [2, 1]], - [[-2, 1], [-1, 1], [0, 1]], - [[-1, 1], [0, 1], [1, 1]], - [[0, 1], [1, 1], [2, 1]], -] - - -def make_features(sequence): - length = len(sequence) - num_features = len(sequence[0]) - - def get_features(pos): - if pos < 0: - return ['#B%s' % -pos] * num_features - if pos >= length: - return ['#E%s' % (pos - length + 1)] * num_features - return sequence[pos] - - for i in xrange(length): - for pattern in patterns: - fname = '/'.join([get_features(i + pos)[f] for pos, f in pattern]) - sequence[i].append(fname) - - -''' -Source file format: -Each line is for one timestep. The features are separated by space. -An empty line indicates end of a sequence.
- -cutoff: a list of numbers. If count of a feature is smaller than this, - it will be ignored. -if oov_policy[i] is OOV_POLICY_USE, id 0 is reserved for OOV features of -i-th column. - -return a list of dict for each column -''' - - -def create_dictionaries(filename, cutoff, oov_policy): - def add_to_dict(sequence, dicts): - num_features = len(dicts) - for features in sequence: - l = len(features) - assert l == num_features, "Wrong number of features " + line - for i in xrange(l): - if features[i] in dicts[i]: - dicts[i][features[i]] += 1 - else: - dicts[i][features[i]] = 1 - - num_features = len(cutoff) - dicts = [] - for i in xrange(num_features): - dicts.append(dict()) - - f = open(filename, 'rb') - - sequence = [] - - for line in f: - line = line.strip() - if not line: - make_features(sequence) - add_to_dict(sequence, dicts) - sequence = [] - continue - features = line.split(' ') - sequence.append(features) - - for i in xrange(num_features): - dct = dicts[i] - n = 1 if oov_policy[i] == OOV_POLICY_USE else 0 - todo = [] - for k, v in dct.iteritems(): - if v < cutoff[i]: - todo.append(k) - else: - dct[k] = n - n += 1 - - if oov_policy[i] == OOV_POLICY_USE: - # placeholder so that len(dct) will be the number of features - # including OOV - dct['#OOV#'] = 0 - - logger.info('column %d dict size=%d, ignored %d' % (i, n, len(todo))) - for k in todo: - del dct[k] - - f.close() - return dicts - - -def encode_varint(v): - out = StringIO() - _EncodeVarint(out.write, v) - return out.getvalue() - - -def write_proto(file, message): - s = message.SerializeToString() - packed_len = encode_varint(len(s)) - file.write(packed_len + s) - - -''' -if oov_policy[i] == OOV_POLICY_USE, features in i-th column which are not -existed in dicts[i] will be assigned to id 0. -if oov_policy[i] == OOV_POLICY_ERROR, all features in i-th column MUST exist -in dicts[i]. 
-''' - - -def gen_proto_file(input_file, dicts, oov_policy, output_file): - def write_sequence(out, sequence): - num_features = len(dicts) - is_beginning = True - for features in sequence: - assert len(features) == num_features, \ - "Wrong number of features: " + line - sample = DataFormat.DataSample() - for i in xrange(num_original_columns): - id = dicts[i].get(features[i], -1) - if id != -1: - sample.id_slots.append(id) - elif oov_policy[i] == OOV_POLICY_IGNORE: - sample.id_slots.append(0xffffffff) - elif oov_policy[i] == OOV_POLICY_ERROR: - logger.fatal("Unknown token: %s" % features[i]) - else: - sample.id_slots.append(0) - - if patterns: - dim = 0 - vec = sample.vector_slots.add() - for i in xrange(num_original_columns, num_features): - id = dicts[i].get(features[i], -1) - if id != -1: - vec.ids.append(dim + id) - elif oov_policy[i] == OOV_POLICY_IGNORE: - pass - elif oov_policy[i] == OOV_POLICY_ERROR: - logger.fatal("Unknown token: %s" % features[i]) - else: - vec.ids.append(dim + 0) - - dim += len(dicts[i]) - - sample.is_beginning = is_beginning - is_beginning = False - write_proto(out, sample) - - num_features = len(dicts) - f = open(input_file, 'rb') - out = open(output_file, 'wb') - - header = DataFormat.DataHeader() - if patterns: - slot_def = header.slot_defs.add() - slot_def.type = DataFormat.SlotDef.VECTOR_SPARSE_NON_VALUE - slot_def.dim = sum( - [len(dicts[i]) for i in xrange(num_original_columns, len(dicts))]) - logger.info("feature_dim=%s" % slot_def.dim) - - for i in xrange(num_original_columns): - slot_def = header.slot_defs.add() - slot_def.type = DataFormat.SlotDef.INDEX - slot_def.dim = len(dicts[i]) - - write_proto(out, header) - - num_sequences = 0 - sequence = [] - for line in f: - line = line.strip() - if not line: - make_features(sequence) - write_sequence(out, sequence) - sequence = [] - num_sequences += 1 - continue - features = line.split(' ') - sequence.append(features) - - f.close() - out.close() - - logger.info("num_sequences=%s" % num_sequences) - - -dict2 = { - 'B-ADJP': 0, - 'I-ADJP': 1, - 'B-ADVP': 2, - 'I-ADVP': 3, - 'B-CONJP': 4, - 'I-CONJP': 5, - 'B-INTJ': 6, - 'I-INTJ': 7, - 'B-LST': 8, - 'I-LST': 9, - 'B-NP': 10, - 'I-NP': 11, - 'B-PP': 12, - 'I-PP': 13, - 'B-PRT': 14, - 'I-PRT': 15, - 'B-SBAR': 16, - 'I-SBAR': 17, - 'B-UCP': 18, - 'I-UCP': 19, - 'B-VP': 20, - 'I-VP': 21, - 'O': 22 -} - -if __name__ == '__main__': - cutoff = [3, 1, 0] - cutoff += [3] * len(patterns) - oov_policy = [OOV_POLICY_IGNORE, OOV_POLICY_ERROR, OOV_POLICY_ERROR] - oov_policy += [OOV_POLICY_IGNORE] * len(patterns) - dicts = create_dictionaries('trainer/tests/train.txt', cutoff, oov_policy) - dicts[2] = dict2 - gen_proto_file('trainer/tests/train.txt', dicts, oov_policy, - 'trainer/tests/train_proto.bin') - gen_proto_file('trainer/tests/test.txt', dicts, oov_policy, - 'trainer/tests/test_proto.bin') diff --git a/paddle/trainer/tests/test.txt b/paddle/trainer/tests/test.txt deleted file mode 100644 index 3ad503b34f..0000000000 --- a/paddle/trainer/tests/test.txt +++ /dev/null @@ -1,1000 +0,0 @@ -Confidence NN B-NP -in IN B-PP -the DT B-NP -pound NN I-NP -is VBZ B-VP -widely RB I-VP -expected VBN I-VP -to TO I-VP -take VB I-VP -another DT B-NP -sharp JJ I-NP -dive NN I-NP -if IN B-SBAR -trade NN B-NP -figures NNS I-NP -for IN B-PP -September NNP B-NP -, , O -due JJ B-ADJP -for IN B-PP -release NN B-NP -tomorrow NN B-NP -, , O -fail VB B-VP -to TO I-VP -show VB I-VP -a DT B-NP -substantial JJ I-NP -improvement NN I-NP -from IN B-PP -July NNP B-NP -and CC I-NP -August NNP I-NP -'s 
POS B-NP -near-record JJ I-NP -deficits NNS I-NP -. . O - -Chancellor NNP O -of IN B-PP -the DT B-NP -Exchequer NNP I-NP -Nigel NNP B-NP -Lawson NNP I-NP -'s POS B-NP -restated VBN I-NP -commitment NN I-NP -to TO B-PP -a DT B-NP -firm NN I-NP -monetary JJ I-NP -policy NN I-NP -has VBZ B-VP -helped VBN I-VP -to TO I-VP -prevent VB I-VP -a DT B-NP -freefall NN I-NP -in IN B-PP -sterling NN B-NP -over IN B-PP -the DT B-NP -past JJ I-NP -week NN I-NP -. . O - -But CC O -analysts NNS B-NP -reckon VBP B-VP -underlying VBG B-NP -support NN I-NP -for IN B-PP -sterling NN B-NP -has VBZ B-VP -been VBN I-VP -eroded VBN I-VP -by IN B-PP -the DT B-NP -chancellor NN I-NP -'s POS B-NP -failure NN I-NP -to TO B-VP -announce VB I-VP -any DT B-NP -new JJ I-NP -policy NN I-NP -measures NNS I-NP -in IN B-PP -his PRP$ B-NP -Mansion NNP I-NP -House NNP I-NP -speech NN I-NP -last JJ B-NP -Thursday NNP I-NP -. . O - -This DT B-NP -has VBZ B-VP -increased VBN I-VP -the DT B-NP -risk NN I-NP -of IN B-PP -the DT B-NP -government NN I-NP -being VBG B-VP -forced VBN I-VP -to TO I-VP -increase VB I-VP -base NN B-NP -rates NNS I-NP -to TO B-PP -16 CD B-NP -% NN I-NP -from IN B-PP -their PRP$ B-NP -current JJ I-NP -15 CD I-NP -% NN I-NP -level NN I-NP -to TO B-VP -defend VB I-VP -the DT B-NP -pound NN I-NP -, , O -economists NNS B-NP -and CC O -foreign JJ B-NP -exchange NN I-NP -market NN I-NP -analysts NNS I-NP -say VBP B-VP -. . O - -`` `` O -The DT B-NP -risks NNS I-NP -for IN B-PP -sterling NN B-NP -of IN B-PP -a DT B-NP -bad JJ I-NP -trade NN I-NP -figure NN I-NP -are VBP B-VP -very RB B-ADVP -heavily RB I-ADVP -on IN B-PP -the DT B-NP -down JJ I-NP -side NN I-NP -, , O -'' '' O -said VBD B-VP -Chris NNP B-NP -Dillow NNP I-NP -, , O -senior JJ B-NP -U.K. NNP I-NP -economist NN I-NP -at IN B-PP -Nomura NNP B-NP -Research NNP I-NP -Institute NNP I-NP -. . O - -`` `` O -If IN B-SBAR -there EX B-NP -is VBZ B-VP -another DT B-NP -bad JJ I-NP -trade NN I-NP -number NN I-NP -, , O -there EX B-NP -could MD B-VP -be VB I-VP -an DT B-NP -awful JJ I-NP -lot NN I-NP -of IN B-PP -pressure NN B-NP -, , O -'' '' O -noted VBD B-VP -Simon NNP B-NP -Briscoe NNP I-NP -, , O -U.K. NNP B-NP -economist NN I-NP -for IN B-PP -Midland NNP B-NP -Montagu NNP I-NP -, , O -a DT B-NP -unit NN I-NP -of IN B-PP -Midland NNP B-NP -Bank NNP I-NP -PLC NNP I-NP -. . O - -Forecasts NNS B-NP -for IN B-PP -the DT B-NP -trade NN I-NP -figures NNS I-NP -range VBP B-VP -widely RB B-ADVP -, , O -but CC O -few JJ B-NP -economists NNS I-NP -expect VBP B-VP -the DT B-NP -data NNS I-NP -to TO B-VP -show VB I-VP -a DT B-NP -very RB I-NP -marked VBN I-NP -improvement NN I-NP -from IN B-PP -the DT O -# # O -2 CD O -billion CD O --LRB- ( O -$ $ B-ADJP -3.2 CD O -billion CD O --RRB- ) O -deficit NN B-NP -in IN B-PP -the DT B-NP -current JJ I-NP -account NN I-NP -reported VBD B-VP -for IN B-PP -August NNP B-NP -. . O - -The DT B-NP -August NNP I-NP -deficit NN I-NP -and CC O -the DT B-NP -# # I-NP -2.2 CD I-NP -billion CD I-NP -gap NN I-NP -registered VBN B-VP -in IN B-PP -July NNP B-NP -are VBP B-VP -topped VBN I-VP -only RB B-ADVP -by IN B-PP -the DT B-NP -# # I-NP -2.3 CD I-NP -billion CD I-NP -deficit NN I-NP -of IN B-PP -October NNP B-NP -1988 CD I-NP -. . O - -Sanjay NNP B-NP -Joshi NNP I-NP -, , O -European JJ B-NP -economist NN I-NP -at IN B-PP -Baring NNP B-NP -Brothers NNPS I-NP -& CC I-NP -Co. 
NNP I-NP -, , O -said VBD B-VP -there EX B-NP -is VBZ B-VP -no DT B-NP -sign NN I-NP -that IN B-SBAR -Britain NNP B-NP -'s POS B-NP -manufacturing NN I-NP -industry NN I-NP -is VBZ B-VP -transforming VBG I-VP -itself PRP B-NP -to TO B-VP -boost VB I-VP -exports NNS B-NP -. . O - -At IN B-PP -the DT B-NP -same JJ I-NP -time NN I-NP -, , O -he PRP B-NP -remains VBZ B-VP -fairly RB B-ADJP -pessimistic JJ I-ADJP -about IN B-PP -the DT B-NP -outlook NN I-NP -for IN B-PP -imports NNS B-NP -, , O -given VBN B-PP -continued VBD B-NP -high JJ I-NP -consumer NN I-NP -and CC I-NP -capital NN I-NP -goods NNS I-NP -inflows NNS I-NP -. . O - -He PRP B-NP -reckons VBZ B-VP -the DT B-NP -current JJ I-NP -account NN I-NP -deficit NN I-NP -will MD B-VP -narrow VB I-VP -to TO B-PP -only RB B-NP -# # I-NP -1.8 CD I-NP -billion CD I-NP -in IN B-PP -September NNP B-NP -. . O - -However RB B-ADVP -, , O -Mr. NNP B-NP -Dillow NNP I-NP -said VBD B-VP -he PRP B-NP -believes VBZ B-VP -that IN B-SBAR -a DT B-NP -reduction NN I-NP -in IN B-PP -raw JJ B-NP -material NN I-NP -stockbuilding VBG I-NP -by IN B-PP -industry NN B-NP -could MD B-VP -lead VB I-VP -to TO B-PP -a DT B-NP -sharp JJ I-NP -drop NN I-NP -in IN B-PP -imports NNS B-NP -. . O - -Combined VBN B-PP -with IN B-PP -at IN B-ADVP -least JJS I-ADVP -some DT B-NP -rebound NN I-NP -in IN B-PP -exports NNS B-NP -after IN B-PP -August NNP B-NP -'s POS B-NP -unexpected JJ I-NP -decline NN I-NP -, , O -the DT B-NP -deficit NN I-NP -could MD B-VP -narrow VB I-VP -to TO B-PP -as RB B-NP -little JJ I-NP -as IN I-NP -# # I-NP -1.3 CD I-NP -billion CD I-NP -. . O - -Mr. NNP B-NP -Briscoe NNP I-NP -, , O -who WP B-NP -also RB B-ADVP -forecasts VBZ B-VP -a DT B-NP -# # I-NP -1.3 CD I-NP -billion CD I-NP -current JJ I-NP -account NN I-NP -gap NN I-NP -, , O -warns VBZ B-VP -that IN B-SBAR -even RB B-SBAR -if IN I-SBAR -the DT B-NP -trade NN I-NP -figures NNS I-NP -are VBP B-VP -bullish JJ B-ADJP -for IN B-PP -sterling NN B-NP -, , O -the DT B-NP -currency NN I-NP -wo MD B-VP -n't RB I-VP -advance VB I-VP -much JJ B-NP -because IN B-SBAR -investors NNS B-NP -will MD B-VP -want VB I-VP -to TO I-VP -see VB I-VP -further JJ B-NP -evidence NN I-NP -of IN B-PP -the DT B-NP -turnaround NN I-NP -before IN B-PP -adjusting VBG B-VP -positions NNS B-NP -. . O - -Nevertheless RB B-ADVP -, , O -he PRP B-NP -noted VBD B-VP -, , O -`` `` O -No DT B-NP -one PRP I-NP -will MD B-VP -want VB I-VP -to TO I-VP -go VB I-VP -into IN B-PP -the DT B-NP -trade NN I-NP -figures NNS I-NP -without IN B-PP -a DT B-NP -flat JJ I-NP -position NN I-NP -'' '' O -in IN B-PP -the DT B-NP -pound NN I-NP -. . O - -Meanwhile RB B-ADVP -, , O -overall JJ B-NP -evidence NN I-NP -on IN B-PP -the DT B-NP -economy NN I-NP -remains VBZ B-VP -fairly RB B-ADJP -clouded VBN I-ADJP -. . O - -In IN B-PP -his PRP$ B-NP -Mansion NNP I-NP -House NNP I-NP -speech NN I-NP -, , O -Mr. NNP B-NP -Lawson NNP I-NP -warned VBD B-VP -that IN B-SBAR -a DT B-NP -further JJ I-NP -slowdown NN I-NP -can MD B-VP -be VB I-VP -expected VBN I-VP -as IN B-SBAR -the DT B-NP -impact NN I-NP -of IN B-PP -the DT B-NP -last JJ I-NP -rise NN I-NP -in IN B-PP -interest NN B-NP -rates NNS I-NP -earlier RBR B-NP -this DT I-NP -month NN I-NP -takes VBZ B-VP -effect NN B-NP -. . O - -U.K. JJ B-NP -base NN I-NP -rates NNS I-NP -are VBP B-VP -at IN B-PP -their PRP$ B-NP -highest JJS I-NP -level NN I-NP -in IN B-PP -eight CD B-NP -years NNS I-NP -. . 
O - -But CC O -consumer NN B-NP -expenditure NN I-NP -data NNS I-NP -released VBD B-VP -Friday NNP B-NP -do VBP B-VP -n't RB I-VP -suggest VB I-VP -that IN B-SBAR -the DT B-NP -U.K. NNP I-NP -economy NN I-NP -is VBZ B-VP -slowing VBG I-VP -that DT B-ADVP -quickly RB I-ADVP -. . O - -The DT B-NP -figures NNS I-NP -show VBP B-VP -that DT O -spending NN B-NP -rose VBD B-VP -0.1 CD B-NP -% NN I-NP -in IN B-PP -the DT B-NP -third JJ I-NP -quarter NN I-NP -from IN B-PP -the DT B-NP -second JJ I-NP -quarter NN I-NP -and CC O -was VBD B-VP -up IN B-ADVP -3.8 CD B-NP -% NN I-NP -from IN B-PP -a DT B-NP -year NN I-NP -ago RB B-ADVP -. . O - -This DT B-NP -compares VBZ B-VP -with IN B-PP -a DT B-NP -1.6 CD I-NP -% NN I-NP -rise NN I-NP -in IN B-PP -the DT B-NP -second NN I-NP -from IN B-PP -the DT B-NP -first JJ I-NP -quarter NN I-NP -and CC O -a DT B-NP -5.4 CD I-NP -% NN I-NP -increase NN I-NP -from IN B-PP -the DT B-NP -second JJ I-NP -quarter NN I-NP -of IN B-PP -1988 CD B-NP -. . O - -Mr. NNP B-NP -Dillow NNP I-NP -said VBD B-VP -the DT B-NP -data NNS I-NP -show VBP B-VP -the DT B-NP -economy NN I-NP -`` `` O -is VBZ B-VP -still RB B-ADVP -quite RB B-ADJP -strong JJ I-ADJP -, , O -'' '' O -but CC O -suggestions NNS B-NP -that IN B-SBAR -much NN B-NP -of IN B-PP -the DT B-NP -spending NN I-NP -went VBD B-VP -on IN B-PP -services NNS B-NP -rather RB B-PP -than IN I-PP -consumer NN B-NP -goods NNS I-NP -should MD B-VP -reduce VB I-VP -fears NNS B-NP -of IN B-PP -more JJR B-NP -import NN I-NP -rises NNS I-NP -. . O - -Certainly RB B-ADVP -, , O -the DT B-NP -chancellor NN I-NP -has VBZ B-VP -made VBN I-VP -it PRP B-NP -clear JJ B-ADJP -that IN B-SBAR -he PRP B-NP -is VBZ B-VP -prepared VBN I-VP -to TO I-VP -increase VB I-VP -interest NN B-NP -rates NNS I-NP -again RB B-ADVP -if IN B-SBAR -necessary JJ B-ADJP -to TO B-VP -both DT I-VP -ensure VB I-VP -that IN B-SBAR -a DT B-NP -substantial JJ I-NP -slowdown NN I-NP -does VBZ B-VP -take VB I-VP -place NN B-NP -and CC O -that DT O -sterling NN B-NP -does VBZ B-VP -n't RB I-VP -decline VB I-VP -further JJ B-ADVP -. . O - -Thursday NNP B-NP -, , O -he PRP B-NP -reminded VBD B-VP -his PRP$ B-NP -audience NN I-NP -that IN B-SBAR -the DT B-NP -government NN I-NP -`` `` O -can MD B-VP -not RB I-VP -allow VB I-VP -the DT B-NP -necessary JJ I-NP -rigor NN I-NP -of IN B-PP -monetary JJ B-NP -policy NN I-NP -to TO B-VP -be VB I-VP -undermined VBN I-VP -by IN B-PP -exchange NN B-NP -rate NN I-NP -weakness NN I-NP -. . O -'' '' O - -Analysts NNS B-NP -agree VBP B-VP -there EX B-NP -is VBZ B-VP -little JJ B-NP -holding NN B-VP -sterling NN B-NP -firm NN B-ADJP -at IN B-PP -the DT B-NP -moment NN I-NP -other JJ B-ADJP -than IN B-PP -Mr. NNP B-NP -Lawson NNP I-NP -'s POS B-NP -promise NN I-NP -that IN B-SBAR -rates NNS B-NP -will MD B-VP -be VB I-VP -pushed VBN I-VP -higher JJR B-ADJP -if IN B-SBAR -necessary JJ B-ADJP -. . O - -And CC O -, , O -they PRP B-NP -warn VBP B-VP -, , O -any DT B-NP -further JJ I-NP -drop NN I-NP -in IN B-PP -the DT B-NP -government NN I-NP -'s POS B-NP -popularity NN I-NP -could MD B-VP -swiftly RB I-VP -make VB I-VP -this DT B-NP -promise NN I-NP -sound NN B-VP -hollow JJ B-ADJP -. . O - -Sterling NNP B-NP -was VBD B-VP -already RB I-VP -showing VBG I-VP -some DT B-NP -signs NNS I-NP -of IN B-PP -a DT B-NP -lack NN I-NP -of IN B-PP -confidence NN B-NP -in IN B-PP -Mr. NNP B-NP -Lawson NNP I-NP -'s POS B-NP -promise NN I-NP -Friday NNP B-NP -. . 
O - -In IN B-PP -European JJ B-NP -trading NN I-NP -it PRP B-NP -declined VBD B-VP -to TO B-PP -$ $ B-NP -1.5890 CD I-NP -and CC O -2.9495 CD B-NP -marks NNS I-NP -from IN B-PP -$ $ B-NP -1.5940 CD I-NP -and CC O -2.9429 CD B-NP -marks NNS I-NP -late JJ B-NP -Thursday NNP I-NP -. . O - -Economists NNS B-NP -suggested VBD B-VP -that IN B-SBAR -if IN B-SBAR -the DT B-NP -pound NN I-NP -falls VBZ B-VP -much JJ B-NP -below IN B-PP -2.90 CD B-NP -marks NNS I-NP -, , O -the DT B-NP -government NN I-NP -will MD B-VP -be VB I-VP -forced VBN I-VP -to TO I-VP -increase VB I-VP -rates NNS B-NP -to TO B-PP -16 CD B-NP -% NN I-NP -, , O -both DT B-VP -to TO I-VP -halt VB B-VP -any DT B-NP -further JJ I-NP -decline NN I-NP -and CC O -ensure VB B-VP -that IN B-SBAR -the DT B-NP -balance NN I-NP -of IN B-PP -monetary JJ B-NP -policy NN I-NP -remains VBZ B-VP -unchanged JJ B-ADJP -. . O - -Friday NNP B-NP -'s POS B-NP -Market NNP I-NP -Activity NN I-NP - -The DT B-NP -dollar NN I-NP -posted VBD B-VP -gains NNS B-NP -in IN B-PP -quiet JJ B-NP -trading NN I-NP -as IN B-SBAR -concerns NNS B-NP -about IN B-PP -equities NNS B-NP -abated VBN B-VP -. . O - -Foreign JJ B-NP -exchange NN I-NP -dealers NNS I-NP -said VBD B-VP -that IN B-SBAR -the DT B-NP -currency NN I-NP -market NN I-NP -has VBZ B-VP -begun VBN I-VP -to TO I-VP -distance VB I-VP -itself PRP B-NP -from IN B-PP -the DT B-NP -volatile JJ I-NP -stock NN I-NP -exchange NN I-NP -, , O -which WDT B-NP -has VBZ B-VP -preoccupied VBN I-VP -the DT B-NP -market NN I-NP -since IN B-PP -Oct. NNP B-NP -13 CD I-NP -, , O -when WRB B-ADVP -the DT B-NP -Dow NNP I-NP -Jones NNP I-NP -Industrial NNP I-NP -Average NNP I-NP -plunged VBD B-VP -more JJR B-NP -than IN I-NP -190 CD I-NP -points NNS I-NP -. . O - -Currency NN B-NP -analysts NNS I-NP -predict VBP B-VP -that IN B-SBAR -in IN B-PP -the DT B-NP -coming VBG I-NP -week NN I-NP -the DT B-NP -foreign JJ I-NP -exchange NN I-NP -market NN I-NP -will MD B-VP -shift VB I-VP -its PRP$ B-NP -focus NN I-NP -back RB B-ADVP -to TO B-PP -economic JJ B-NP -fundamentals NNS I-NP -, , O -keeping VBG B-VP -a DT B-NP -close NN I-NP -eye NN I-NP -out IN B-ADVP -for IN B-PP -any DT B-NP -signs NNS I-NP -of IN B-PP -monetary JJ B-NP -easing NN I-NP -by IN B-PP -U.S. NNP B-NP -Federal NNP I-NP -Reserve NNP I-NP -. . O - -Late RB B-ADVP -in IN B-PP -the DT B-NP -New NNP I-NP -York NNP I-NP -trading NN I-NP -day NN I-NP -, , O -the DT B-NP -dollar NN I-NP -was VBD B-VP -quoted VBN I-VP -at IN B-PP -1.8578 CD B-NP -marks NNS I-NP -, , O -up IN B-ADVP -from IN B-PP -1.8470 CD B-NP -marks NNS I-NP -late JJ B-NP -Thursday NNP I-NP -in IN B-PP -New NNP B-NP -York NNP I-NP -. . O - -The DT B-NP -U.S. NNP I-NP -currency NN I-NP -was VBD B-VP -also RB I-VP -changing VBG I-VP -hands NNS B-NP -at IN B-PP -142.43 CD B-NP -yen NN I-NP -, , O -up IN B-ADVP -from IN B-PP -141.70 CD B-NP -yen NN I-NP -in IN B-PP -New NNP B-NP -York NNP I-NP -late JJ B-NP -Thursday NNP I-NP -. . O - -In IN B-PP -Tokyo NNP B-NP -on IN B-PP -Monday NNP B-NP -, , O -the DT B-NP -U.S. 
NNP I-NP -currency NN I-NP -opened VBD B-VP -for IN B-PP -trading NN B-NP -at IN B-PP -141.95 CD B-NP -yen NN I-NP -, , O -up IN B-ADVP -from IN B-PP -Friday NNP B-NP -'s POS B-NP -Tokyo NNP I-NP diff --git a/paddle/trainer/tests/test_Trainer.cpp b/paddle/trainer/tests/test_Trainer.cpp index 425b3d10a3..394038cf73 100644 --- a/paddle/trainer/tests/test_Trainer.cpp +++ b/paddle/trainer/tests/test_Trainer.cpp @@ -24,7 +24,6 @@ using namespace std; // NOLINT static const string& configFile1 = "trainer/tests/sample_trainer_config.conf"; static const string& configFile2 = "trainer/tests/sample_trainer_config_hsigmoid.conf"; -static const string& configFile3 = "trainer/tests/chunking.conf"; static const string& configFile4 = "trainer/tests/sample_trainer_config_parallel.conf"; @@ -95,13 +94,6 @@ TEST(checkGradient, multi) { TEST(checkGradient, hsigmoid) { checkGradientTest(configFile2, false, false); } -TEST(checkGradient, chunk) { - checkGradientTest(configFile3, false, false); -#ifdef PADDLE_WITH_CUDA - checkGradientTest(configFile3, true, true); -#endif -} - TEST(checkGradient, non_parallel) { checkGradientTest(configFile4, false, false); } diff --git a/paddle/trainer/tests/test_config.conf b/paddle/trainer/tests/test_config.conf index d1bb9b877f..2f86aaa753 100644 --- a/paddle/trainer/tests/test_config.conf +++ b/paddle/trainer/tests/test_config.conf @@ -15,12 +15,7 @@ from paddle.trainer_config_helpers import * -TrainData(ProtoData( - files = "dummy_list", - constant_slots = [1.0], - async_load_data = True)) - -TestData(SimpleData( +TrainData(SimpleData( files = "trainer/tests/sample_filelist.txt", feat_dim = 3, context_len = 0, diff --git a/paddle/trainer/tests/test_files.txt b/paddle/trainer/tests/test_files.txt deleted file mode 100644 index 49002677a8..0000000000 --- a/paddle/trainer/tests/test_files.txt +++ /dev/null @@ -1 +0,0 @@ -trainer/tests/test_proto.bin diff --git a/paddle/trainer/tests/train.list b/paddle/trainer/tests/train.list deleted file mode 100644 index f41e8e8893..0000000000 --- a/paddle/trainer/tests/train.list +++ /dev/null @@ -1 +0,0 @@ -trainer/tests/data_bin_part diff --git a/paddle/trainer/tests/train.txt b/paddle/trainer/tests/train.txt deleted file mode 100644 index 2313aee987..0000000000 --- a/paddle/trainer/tests/train.txt +++ /dev/null @@ -1,5000 +0,0 @@ -Confidence NN B-NP -in IN B-PP -the DT B-NP -pound NN I-NP -is VBZ B-VP -widely RB I-VP -expected VBN I-VP -to TO I-VP -take VB I-VP -another DT B-NP -sharp JJ I-NP -dive NN I-NP -if IN B-SBAR -trade NN B-NP -figures NNS I-NP -for IN B-PP -September NNP B-NP -, , O -due JJ B-ADJP -for IN B-PP -release NN B-NP -tomorrow NN B-NP -, , O -fail VB B-VP -to TO I-VP -show VB I-VP -a DT B-NP -substantial JJ I-NP -improvement NN I-NP -from IN B-PP -July NNP B-NP -and CC I-NP -August NNP I-NP -'s POS B-NP -near-record JJ I-NP -deficits NNS I-NP -. . O - -Chancellor NNP O -of IN B-PP -the DT B-NP -Exchequer NNP I-NP -Nigel NNP B-NP -Lawson NNP I-NP -'s POS B-NP -restated VBN I-NP -commitment NN I-NP -to TO B-PP -a DT B-NP -firm NN I-NP -monetary JJ I-NP -policy NN I-NP -has VBZ B-VP -helped VBN I-VP -to TO I-VP -prevent VB I-VP -a DT B-NP -freefall NN I-NP -in IN B-PP -sterling NN B-NP -over IN B-PP -the DT B-NP -past JJ I-NP -week NN I-NP -. . 
O - -But CC O -analysts NNS B-NP -reckon VBP B-VP -underlying VBG B-NP -support NN I-NP -for IN B-PP -sterling NN B-NP -has VBZ B-VP -been VBN I-VP -eroded VBN I-VP -by IN B-PP -the DT B-NP -chancellor NN I-NP -'s POS B-NP -failure NN I-NP -to TO B-VP -announce VB I-VP -any DT B-NP -new JJ I-NP -policy NN I-NP -measures NNS I-NP -in IN B-PP -his PRP$ B-NP -Mansion NNP I-NP -House NNP I-NP -speech NN I-NP -last JJ B-NP -Thursday NNP I-NP -. . O - -This DT B-NP -has VBZ B-VP -increased VBN I-VP -the DT B-NP -risk NN I-NP -of IN B-PP -the DT B-NP -government NN I-NP -being VBG B-VP -forced VBN I-VP -to TO I-VP -increase VB I-VP -base NN B-NP -rates NNS I-NP -to TO B-PP -16 CD B-NP -% NN I-NP -from IN B-PP -their PRP$ B-NP -current JJ I-NP -15 CD I-NP -% NN I-NP -level NN I-NP -to TO B-VP -defend VB I-VP -the DT B-NP -pound NN I-NP -, , O -economists NNS B-NP -and CC O -foreign JJ B-NP -exchange NN I-NP -market NN I-NP -analysts NNS I-NP -say VBP B-VP -. . O - -`` `` O -The DT B-NP -risks NNS I-NP -for IN B-PP -sterling NN B-NP -of IN B-PP -a DT B-NP -bad JJ I-NP -trade NN I-NP -figure NN I-NP -are VBP B-VP -very RB B-ADVP -heavily RB I-ADVP -on IN B-PP -the DT B-NP -down JJ I-NP -side NN I-NP -, , O -'' '' O -said VBD B-VP -Chris NNP B-NP -Dillow NNP I-NP -, , O -senior JJ B-NP -U.K. NNP I-NP -economist NN I-NP -at IN B-PP -Nomura NNP B-NP -Research NNP I-NP -Institute NNP I-NP -. . O - -`` `` O -If IN B-SBAR -there EX B-NP -is VBZ B-VP -another DT B-NP -bad JJ I-NP -trade NN I-NP -number NN I-NP -, , O -there EX B-NP -could MD B-VP -be VB I-VP -an DT B-NP -awful JJ I-NP -lot NN I-NP -of IN B-PP -pressure NN B-NP -, , O -'' '' O -noted VBD B-VP -Simon NNP B-NP -Briscoe NNP I-NP -, , O -U.K. NNP B-NP -economist NN I-NP -for IN B-PP -Midland NNP B-NP -Montagu NNP I-NP -, , O -a DT B-NP -unit NN I-NP -of IN B-PP -Midland NNP B-NP -Bank NNP I-NP -PLC NNP I-NP -. . O - -Forecasts NNS B-NP -for IN B-PP -the DT B-NP -trade NN I-NP -figures NNS I-NP -range VBP B-VP -widely RB B-ADVP -, , O -but CC O -few JJ B-NP -economists NNS I-NP -expect VBP B-VP -the DT B-NP -data NNS I-NP -to TO B-VP -show VB I-VP -a DT B-NP -very RB I-NP -marked VBN I-NP -improvement NN I-NP -from IN B-PP -the DT O -# # O -2 CD O -billion CD O --LRB- ( O -$ $ B-ADJP -3.2 CD O -billion CD O --RRB- ) O -deficit NN B-NP -in IN B-PP -the DT B-NP -current JJ I-NP -account NN I-NP -reported VBD B-VP -for IN B-PP -August NNP B-NP -. . O - -The DT B-NP -August NNP I-NP -deficit NN I-NP -and CC O -the DT B-NP -# # I-NP -2.2 CD I-NP -billion CD I-NP -gap NN I-NP -registered VBN B-VP -in IN B-PP -July NNP B-NP -are VBP B-VP -topped VBN I-VP -only RB B-ADVP -by IN B-PP -the DT B-NP -# # I-NP -2.3 CD I-NP -billion CD I-NP -deficit NN I-NP -of IN B-PP -October NNP B-NP -1988 CD I-NP -. . O - -Sanjay NNP B-NP -Joshi NNP I-NP -, , O -European JJ B-NP -economist NN I-NP -at IN B-PP -Baring NNP B-NP -Brothers NNPS I-NP -& CC I-NP -Co. NNP I-NP -, , O -said VBD B-VP -there EX B-NP -is VBZ B-VP -no DT B-NP -sign NN I-NP -that IN B-SBAR -Britain NNP B-NP -'s POS B-NP -manufacturing NN I-NP -industry NN I-NP -is VBZ B-VP -transforming VBG I-VP -itself PRP B-NP -to TO B-VP -boost VB I-VP -exports NNS B-NP -. . 
O - -At IN B-PP -the DT B-NP -same JJ I-NP -time NN I-NP -, , O -he PRP B-NP -remains VBZ B-VP -fairly RB B-ADJP -pessimistic JJ I-ADJP -about IN B-PP -the DT B-NP -outlook NN I-NP -for IN B-PP -imports NNS B-NP -, , O -given VBN B-PP -continued VBD B-NP -high JJ I-NP -consumer NN I-NP -and CC I-NP -capital NN I-NP -goods NNS I-NP -inflows NNS I-NP -. . O - -He PRP B-NP -reckons VBZ B-VP -the DT B-NP -current JJ I-NP -account NN I-NP -deficit NN I-NP -will MD B-VP -narrow VB I-VP -to TO B-PP -only RB B-NP -# # I-NP -1.8 CD I-NP -billion CD I-NP -in IN B-PP -September NNP B-NP -. . O - -However RB B-ADVP -, , O -Mr. NNP B-NP -Dillow NNP I-NP -said VBD B-VP -he PRP B-NP -believes VBZ B-VP -that IN B-SBAR -a DT B-NP -reduction NN I-NP -in IN B-PP -raw JJ B-NP -material NN I-NP -stockbuilding VBG I-NP -by IN B-PP -industry NN B-NP -could MD B-VP -lead VB I-VP -to TO B-PP -a DT B-NP -sharp JJ I-NP -drop NN I-NP -in IN B-PP -imports NNS B-NP -. . O - -Combined VBN B-PP -with IN B-PP -at IN B-ADVP -least JJS I-ADVP -some DT B-NP -rebound NN I-NP -in IN B-PP -exports NNS B-NP -after IN B-PP -August NNP B-NP -'s POS B-NP -unexpected JJ I-NP -decline NN I-NP -, , O -the DT B-NP -deficit NN I-NP -could MD B-VP -narrow VB I-VP -to TO B-PP -as RB B-NP -little JJ I-NP -as IN I-NP -# # I-NP -1.3 CD I-NP -billion CD I-NP -. . O - -Mr. NNP B-NP -Briscoe NNP I-NP -, , O -who WP B-NP -also RB B-ADVP -forecasts VBZ B-VP -a DT B-NP -# # I-NP -1.3 CD I-NP -billion CD I-NP -current JJ I-NP -account NN I-NP -gap NN I-NP -, , O -warns VBZ B-VP -that IN B-SBAR -even RB B-SBAR -if IN I-SBAR -the DT B-NP -trade NN I-NP -figures NNS I-NP -are VBP B-VP -bullish JJ B-ADJP -for IN B-PP -sterling NN B-NP -, , O -the DT B-NP -currency NN I-NP -wo MD B-VP -n't RB I-VP -advance VB I-VP -much JJ B-NP -because IN B-SBAR -investors NNS B-NP -will MD B-VP -want VB I-VP -to TO I-VP -see VB I-VP -further JJ B-NP -evidence NN I-NP -of IN B-PP -the DT B-NP -turnaround NN I-NP -before IN B-PP -adjusting VBG B-VP -positions NNS B-NP -. . O - -Nevertheless RB B-ADVP -, , O -he PRP B-NP -noted VBD B-VP -, , O -`` `` O -No DT B-NP -one PRP I-NP -will MD B-VP -want VB I-VP -to TO I-VP -go VB I-VP -into IN B-PP -the DT B-NP -trade NN I-NP -figures NNS I-NP -without IN B-PP -a DT B-NP -flat JJ I-NP -position NN I-NP -'' '' O -in IN B-PP -the DT B-NP -pound NN I-NP -. . O - -Meanwhile RB B-ADVP -, , O -overall JJ B-NP -evidence NN I-NP -on IN B-PP -the DT B-NP -economy NN I-NP -remains VBZ B-VP -fairly RB B-ADJP -clouded VBN I-ADJP -. . O - -In IN B-PP -his PRP$ B-NP -Mansion NNP I-NP -House NNP I-NP -speech NN I-NP -, , O -Mr. NNP B-NP -Lawson NNP I-NP -warned VBD B-VP -that IN B-SBAR -a DT B-NP -further JJ I-NP -slowdown NN I-NP -can MD B-VP -be VB I-VP -expected VBN I-VP -as IN B-SBAR -the DT B-NP -impact NN I-NP -of IN B-PP -the DT B-NP -last JJ I-NP -rise NN I-NP -in IN B-PP -interest NN B-NP -rates NNS I-NP -earlier RBR B-NP -this DT I-NP -month NN I-NP -takes VBZ B-VP -effect NN B-NP -. . O - -U.K. JJ B-NP -base NN I-NP -rates NNS I-NP -are VBP B-VP -at IN B-PP -their PRP$ B-NP -highest JJS I-NP -level NN I-NP -in IN B-PP -eight CD B-NP -years NNS I-NP -. . O - -But CC O -consumer NN B-NP -expenditure NN I-NP -data NNS I-NP -released VBD B-VP -Friday NNP B-NP -do VBP B-VP -n't RB I-VP -suggest VB I-VP -that IN B-SBAR -the DT B-NP -U.K. NNP I-NP -economy NN I-NP -is VBZ B-VP -slowing VBG I-VP -that DT B-ADVP -quickly RB I-ADVP -. . 
[... remainder of a deleted CoNLL-style chunking test-data file (one `token POS-tag IOB-chunk-tag` triple per line, over Wall Street Journal text) omitted ...]
diff --git a/paddle/trainer/tests/train_files.txt b/paddle/trainer/tests/train_files.txt
deleted file mode 100644
index 1c26891495..0000000000
--- a/paddle/trainer/tests/train_files.txt
+++ /dev/null
@@ -1 +0,0 @@
-trainer/tests/train_proto.bin
diff --git a/paddle/trainer/tests/train_sparse.list b/paddle/trainer/tests/train_sparse.list
deleted file mode 100644
index 6ea020e220..0000000000
--- a/paddle/trainer/tests/train_sparse.list
+++ /dev/null
@@ -1 +0,0 @@
-trainer/tests/compare_sparse_data
diff --git a/proto/ModelConfig.proto b/proto/ModelConfig.proto
index c68387c413..2fcdbbc8bd 100644
--- a/proto/ModelConfig.proto
+++ b/proto/ModelConfig.proto
@@ -541,8 +541,12 @@ message LayerConfig {
   // for switch order layer
   optional ReshapeConfig reshape_conf = 59;
 
+  // for batch normalization layer
+  // The small constant added to the variance to improve numeric stability.
+  optional double epsilon = 60 [ default = 0.00001 ];
+
   // for factorization machine layer
-  optional uint32 factor_size = 60;
+  optional uint32 factor_size = 61;
 }
 
 message EvaluatorConfig {
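The new `epsilon` field above is the standard batch-normalization stabilizer. As a quick illustration of where the constant enters the computation, here is a minimal NumPy sketch; it is not PaddlePaddle's kernel (which is implemented in C++), and the function name and shapes are illustrative only:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, epsilon=1e-5):
    # Normalize each feature over the batch; adding epsilon to the variance
    # keeps the denominator away from zero when a feature barely varies.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta

x = np.random.randn(8, 4).astype('float32')
y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
assert y.shape == x.shape
```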
@config_func
def MultiData(sub_data=[]):
@@ -2066,13 +2037,20 @@ class ParameterReluLayer(LayerBase):
     def __init__(self, name, inputs, partial_sum=1, **args):
         super(ParameterReluLayer, self).__init__(
             name, self.layer_type, 0, inputs=inputs, **args)
+
         input_layer = self.get_input_layer(0)
         config_assert(len(self.inputs) == 1, "prelu layer has only one input.")
         config_assert(input_layer.size % partial_sum == 0,
                       "a wrong setting for partial_sum")
+
+        dims = [1, input_layer.size / partial_sum]
         self.set_layer_size(input_layer.size)
         self.config.partial_sum = partial_sum
-        self.create_input_parameter(0, input_layer.size / partial_sum)
+        self.create_input_parameter(0, input_layer.size / partial_sum, dims)
+
+        self.set_layer_height_width(self.get_input_layer(0).height, \
+                                    self.get_input_layer(0).width)
+        self.set_layer_depth(self.get_input_layer(0).depth)

@@ -2434,6 +2412,7 @@ class BatchNormLayer(LayerBase):
                  bias=True,
                  img3D=False,
                  use_global_stats=True,
+                 epsilon=1e-5,
                  moving_average_fraction=0.9,
                  batch_norm_type=None,
                  mean_var_names=None,
@@ -2482,6 +2461,9 @@ class BatchNormLayer(LayerBase):
             self.config.use_global_stats = use_global_stats
         if moving_average_fraction is not None:
             self.config.moving_average_fraction = moving_average_fraction
+        if epsilon is not None:
+            assert epsilon >= 1e-5, "epsilon must be no less than 1e-5."
+            self.config.epsilon = epsilon

         input_layer = self.get_input_layer(0)
         image_conf = self.config.inputs[0].image_conf
@@ -2714,7 +2696,7 @@ Usage:
                   max_sort_size = -1, inputs = ["output", "score"])

   Input data: Samples of the same query should be loaded as a sequence,
-          by ProtoDataProvider or PyDataProvider etc.. User should provide
+          by PyDataProvider etc. The user should provide
           scores for each sample. The score slot should be the 2nd
           input of lambdaRank layer.

diff --git a/python/paddle/trainer_config_helpers/evaluators.py b/python/paddle/trainer_config_helpers/evaluators.py
index 57979db4de..95797fba8f 100644
--- a/python/paddle/trainer_config_helpers/evaluators.py
+++ b/python/paddle/trainer_config_helpers/evaluators.py
@@ -297,7 +297,7 @@ def auc_evaluator(
 def pnpair_evaluator(
         input,
         label,
-        info,
+        query_id,
         weight=None,
         name=None, ):
     """
@@ -308,16 +308,20 @@ def pnpair_evaluator(

     .. code-block:: python

-       eval = pnpair_evaluator(input, label, info)
+       eval = pnpair_evaluator(input, label, query_id)

     :param input: Input Layer name. The output prediction of network.
     :type input: LayerOutput
     :param label: Label layer name.
     :type label: LayerOutput
-    :param info: Info layer name. (TODO, explaination)
-    :type info: LayerOutput
+    :param query_id: Query_id layer name. Query_id indicates which query
+                     each sample belongs to. Its shape should be
+                     the same as the output of the Label layer.
+    :type query_id: LayerOutput
     :param weight: Weight Layer name. It should be a matrix with size
-                  [sample_num, 1]. (TODO, explaination)
+                  [sample_num, 1], which indicates the weight of each sample.
+                  The default weight of each sample is 1 if the weight layer is
+                  None. The pair weight is the mean of the two samples' weights.
     :type weight: LayerOutput
     :param name: Evaluator name.
    :type name: None|basestring
@@ -326,8 +330,8 @@ def pnpair_evaluator(
        input = [input]
    if label:
        input.append(label)
-    if info:
-        input.append(info)
+    if query_id:
+        input.append(query_id)
    evaluator_base(
        input=input,
        type="pnpair",
diff --git a/python/paddle/trainer_config_helpers/layers.py b/python/paddle/trainer_config_helpers/layers.py
index 5b8c37464c..32287cce6c 100644
--- a/python/paddle/trainer_config_helpers/layers.py
+++ b/python/paddle/trainer_config_helpers/layers.py
@@ -2510,12 +2510,12 @@ def img_conv_layer(input,
    input is raw pixels of image(mono or RGB), or it may be the previous layer's
    num_filters * num_group.

-    There are several group of filter in PaddlePaddle implementation.
-    Each group will process some channel of the inputs. For example, if an input
+    There are several groups of filters in the PaddlePaddle implementation.
+    Each group will process some channels of the input. For example, if
    num_channel = 256, group = 4, num_filter=32, PaddlePaddle will create
-    32*4 = 128 filters to process inputs. The channels will be split into 4
-    pieces. First 256/4 = 64 channels will process by first 32 filters. The
-    rest channels will be processed by rest group of filters.
+    32*4 = 128 filters to process the input. The channels will be split into 4
+    pieces. The first 256/4 = 64 channels will be processed by the first 32
+    filters. The remaining channels will be processed by the remaining groups
+    of filters.

    The example usage is:

@@ -2531,53 +2531,68 @@ def img_conv_layer(input,
    :type name: basestring
    :param input: The input of this layer.
    :type input: LayerOutput
-    :param filter_size: The x dimension of a filter kernel. Or input a tuple for
-                        two image dimension.
+    :param filter_size: The dimensions of the filter kernel. If the parameter is
+                        set to one integer, the two dimensions on the x and y axes
+                        will be the same when filter_size_y is not set. If it is set
+                        to a list, the first element indicates the dimension on
+                        the x axis, and the second is used to specify the dimension
+                        on the y axis when filter_size_y is not provided.
    :type filter_size: int | tuple | list
-    :param filter_size_y: The y dimension of a filter kernel. Since PaddlePaddle
-                          currently supports rectangular filters, the filter's
-                          shape will be (filter_size, filter_size_y).
-    :type filter_size_y: int | None
+    :param filter_size_y: The dimension of the filter kernel on the y axis. If the
+                          parameter is not set, it will be set automatically
+                          according to filter_size.
+    :type filter_size_y: int
    :param num_filters: The number of filters in each filter group.
    :param act: Activation type. ReluActivation is the default activation.
    :type act: BaseActivation
-    :param groups: Group size of filters.
+    :param groups: The group number. 1 is the default group number.
    :type groups: int
-    :param stride: The x dimension of the stride. Or input a tuple for two image
-                   dimension.
+    :param stride: The strides. If the parameter is set to one integer, the strides
+                   on the x and y axes will be the same when stride_y is not set.
+                   If it is set to a list, the first element indicates the stride
+                   on the x axis, and the second is used to specify the stride on
+                   the y axis when stride_y is not provided. 1 is the default value.
    :type stride: int | tuple | list
-    :param stride_y: The y dimension of the stride.
+    :param stride_y: The stride on the y axis.
    :type stride_y: int
-    :param padding: The x dimension of the padding. Or input a tuple for two
-                    image dimension
+    :param padding: The padding sizes. If the parameter is set to one integer, the
+                    padding sizes on the x and y axes will be the same when padding_y
+                    is not set. If it is set to a list, the first element indicates
+                    the padding size on the x axis, and the second is used to specify
+                    the padding size on the y axis when padding_y is not provided.
+                    0 is the default padding size.
    :type padding: int | tuple | list
-    :param padding_y: The y dimension of the padding.
+    :param padding_y: The padding size on the y axis.
    :type padding_y: int
-    :param dilation: The x dimension of the dilation. Or input a tuple for two
-                     image dimension
+    :param dilation: The dimensions of the dilation. If the parameter is set to one
+                     integer, the two dimensions on the x and y axes will be the same
+                     when dilation_y is not set. If it is set to a list, the first
+                     element indicates the dimension on the x axis, and the second is
+                     used to specify the dimension on the y axis when dilation_y is
+                     not provided. 1 is the default dimension.
    :type dilation: int | tuple | list
-    :param dilation_y: The y dimension of the dilation.
+    :param dilation_y: The dimension of the dilation on the y axis.
    :type dilation_y: int
    :param bias_attr: The bias attribute. If the parameter is set to False or an object
                      whose type is not ParameterAttribute, no bias is defined. If the
                      parameter is set to True, the bias is initialized to zero.
    :type bias_attr: ParameterAttribute | None | bool | Any
-    :param num_channels: number of input channels. If None will be set
-                         automatically from previous output.
+    :param num_channels: The number of input channels. If the parameter is not set or
+                         set to None, its actual value will be automatically set to
+                         the channel number of the input.
    :type num_channels: int
-    :param param_attr: Convolution param attribute. None means default attribute
+    :param param_attr: The parameter attribute. See ParameterAttribute for
+                       details.
    :type param_attr: ParameterAttribute
-    :param shared_biases: Is biases will be shared between filters or not.
+    :param shared_biases: Whether biases will be shared between filters or not.
    :type shared_biases: bool
-    :param layer_attr: Layer Extra Attribute.
+    :param layer_attr: The extra layer attributes. See ExtraLayerAttribute for
+                       details.
    :type layer_attr: ExtraLayerAttribute
-    :param trans: true if it is a convTransLayer, false if it is a convLayer
+    :param trans: True if it is a convTransLayer, False if it is a convLayer.
    :type trans: bool
-    :param layer_type: specify the layer_type, default is None. If trans=True,
-                       layer_type has to be "exconvt" or "cudnn_convt",
-                       otherwise layer_type has to be either "exconv" or
-                       "cudnn_conv"
-    :type layer_type: String
+    :param layer_type: Specify the layer type. If the dilation's dimension on one
+                       axis is larger than 1, layer_type has to be "cudnn_conv" or
+                       "cudnn_convt". If trans=True, layer_type has to be "exconvt"
+                       or "cudnn_convt", otherwise layer_type has to be either
+                       "exconv" or "cudnn_conv".
+    :type layer_type: basestring
    :return: LayerOutput object.
    :rtype: LayerOutput
    """
@@ -2682,7 +2697,7 @@ def img_pool_layer(input,
    """
    Image pooling Layer.

-    The details of pooling layer, please refer ufldl's pooling_ .
+    For details of the pooling layer, please refer to ufldl's pooling_ .

    .. _pooling: http://ufldl.stanford.edu/tutorial/supervised/Pooling/

@@ -2714,32 +2729,37 @@ def img_pool_layer(input,
                          padding_y=2,
                          pool_type=MaxPooling())

-    :param padding: pooling padding width.
+    :param padding: The padding size on the x axis. 0 is the default padding size.
    :type padding: int
-    :param padding_y: pooling padding height. It's equal to padding by default.
-    :type padding_y: int | None
-    :param name: name of pooling layer
-    :type name: basestring.
+    :param padding_y: The padding size on the y axis. If the parameter is not set
+                      or set to None, it will be set to 'padding' automatically.
+    :type padding_y: int
+    :param name: The name of this layer. It is optional.
+    :type name: basestring
    :param input: The input of this layer.
    :type input: LayerOutput
-    :param pool_size: pooling window width
+    :param pool_size: The pooling window length on the x axis.
    :type pool_size: int
-    :param pool_size_y: pooling window height. It's eaqual to pool_size by default.
-    :type pool_size_y: int | None
+    :param pool_size_y: The pooling window length on the y axis. If the parameter is
+                        not set or set to None, its actual value will be automatically
+                        set to pool_size.
+    :type pool_size_y: int
-    :param num_channels: number of input channel.
+    :param num_channels: The number of input channels. If the parameter is not set or
+                         set to None, its actual value will be automatically set to
+                         the channel number of the input.
    :type num_channels: int
-    :param pool_type: pooling type. MaxPooling or AvgPooling. Default is
-                      MaxPooling.
+    :param pool_type: Pooling type. MaxPooling is the default pooling.
    :type pool_type: BasePoolingType
-    :param stride: stride width of pooling.
+    :param stride: The stride on the x axis. 1 is the default value.
    :type stride: int
-    :param stride_y: stride height of pooling. It is equal to stride by default.
-    :type stride_y: int | None
+    :param stride_y: The stride on the y axis. If the parameter is not set or set to
+                     None, its actual value will be automatically set to 'stride'.
+    :type stride_y: int
-    :param layer_attr: Extra Layer attribute.
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
    :type layer_attr: ExtraLayerAttribute
-    :param ceil_mode: Wether to use ceil mode to calculate output height and with.
-                      Defalut is True. If set false, Otherwise use floor.
-
+    :param ceil_mode: Whether to use the ceil function to calculate output height
+                      and width. True is the default. If it is set to False, the
+                      floor function will be used.
    :type ceil_mode: bool
    :return: LayerOutput object.
    :rtype: LayerOutput
@@ -2845,24 +2865,32 @@ def img_pool3d_layer(input,
    :param padding: pooling padding width.
    :type padding: int | tuple | list
-    :param name: name of pooling layer
+    :param name: The name of this layer. It is optional.
    :type name: basestring.
    :param input: The input of this layer.
    :type input: LayerOutput
-    :param pool_size: pooling window width
+    :param pool_size: The pooling window lengths along three axes. If the parameter
+                      is set to one integer, the three lengths will be the same.
    :type pool_size: int | tuple | list
-    :param num_channels: number of input channel.
+    :param num_channels: The number of input channels. If the parameter is not set or
+                         set to None, its actual value will be automatically set to
+                         the channel number of the input.
    :type num_channels: int
-    :param pool_type: pooling type. MaxPooling or AvgPooling. Default is
-                      MaxPooling.
+    :param pool_type: Pooling type. MaxPooling is the default pooling.
    :type pool_type: BasePoolingType
-    :param stride: stride width of pooling.
+    :param stride: The strides of the pooling along three axes. If the parameter
+                   is set to one integer, the three strides will be the same. 1 is
+                   the default value.
    :type stride: int | tuple | list
-    :param layer_attr: Extra Layer attribute.
+    :param padding: The sizes of padding along three axes. If the parameter is set
+                    to one integer, they will be the same. 0 is the default padding
+                    size.
+    :type padding: int | tuple | list
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
    :type layer_attr: ExtraLayerAttribute
-    :param ceil_mode: Wether to use ceil mode to calculate output height and with.
-                      Defalut is True. If set false, Otherwise use floor.
-
+    :param ceil_mode: Whether to use the ceil function to calculate output height
+                      and width. True is the default. If it is set to False, the
+                      floor function will be used.
    :type ceil_mode: bool
    :return: LayerOutput object.
    :rtype: LayerOutput
@@ -2941,9 +2969,11 @@ def spp_layer(input,
              pyramid_height=None,
              layer_attr=None):
    """
-    Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.
-    The details please refer to
-    `Kaiming He's paper `_.
+    A layer that performs spatial pyramid pooling.
+
+    Reference:
+        Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
+        https://arxiv.org/abs/1406.4729

    The example usage is:

@@ -2958,13 +2988,16 @@ def spp_layer(input,
    :type name: basestring
    :param input: The input of this layer.
    :type input: LayerOutput
-    :param num_channels: number of input channel.
+    :param num_channels: The number of input channels. If the parameter is not set or
+                         set to None, its actual value will be automatically set to
+                         the channel number of the input.
    :type num_channels: int
-    :param pool_type: Pooling type. MaxPooling or AveragePooling. Default is MaxPooling.
+    :param pool_type: Pooling type. MaxPooling is the default pooling.
    :type scale: BasePoolingType
-    :param pyramid_height: pyramid height.
+    :param pyramid_height: The pyramid height of this pooling.
    :type pyramid_height: int
-    :param layer_attr: Extra Layer Attribute.
+    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
+                       details.
    :type layer_attr: ExtraLayerAttribute
    :return: LayerOutput object.
    :rtype: LayerOutput
@@ -3088,6 +3121,7 @@ def batch_norm_layer(input,
                     param_attr=None,
                     layer_attr=None,
                     batch_norm_type=None,
+                    epsilon=1e-5,
                     moving_average_fraction=0.9,
                     use_global_stats=None,
                     mean_var_names=None):
@@ -3158,6 +3192,8 @@ def batch_norm_layer(input,
                             will use the mean and variance of the current batch
                             of test data.
    :type use_global_stats: bool | None.
+    :param epsilon: The small constant added to the variance to improve numeric
+                    stability.
+    :type epsilon: float.
    :param moving_average_fraction: Factor used in the moving average computation.
                                    :math:`runningMean = newMean*(1-factor) + runningMean*factor`
    :type moving_average_fraction: float.
@@ -3175,6 +3211,7 @@ def batch_norm_layer(input,
    assert (batch_norm_type is None) or (batch_norm_type == "batch_norm") or \
           (batch_norm_type == "mkldnn_batch_norm") or \
           (batch_norm_type == "cudnn_batch_norm")
+
    l = Layer(
        name=name,
        img3D=img3D,
@@ -3184,6 +3221,7 @@ def batch_norm_layer(input,
        type=LayerType.BATCH_NORM_LAYER,
        batch_norm_type=batch_norm_type,
        bias=ParamAttr.to_bias(bias_attr),
+        epsilon=epsilon,
        moving_average_fraction=moving_average_fraction,
        use_global_stats=use_global_stats,
        mean_var_names=mean_var_names,
@@ -4697,7 +4735,7 @@ def conv_projection(input,
                        will be the same when filter_size_y is not set. If it is set
                        to a list, the first element indicates the dimension on
                        the x axis, and the second is used to specify the dimension
-                       on the y axis when filter_size is not provided.
+                       on the y axis when filter_size_y is not provided.
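The ceil_mode flag documented above decides whether partially covered border windows still produce an output. A minimal, self-contained sketch of the output-size arithmetic it controls (the formula below is the standard pooling output-size rule, stated here as an assumption rather than taken from this patch):

```python
import math


def pooled_output_size(in_size, pool_size, padding, stride, ceil_mode=True):
    """Output length along one axis: (in_size - pool_size + 2 * padding) / stride + 1,
    rounded up when ceil_mode is True and down otherwise."""
    span = float(in_size - pool_size + 2 * padding)
    rounded = math.ceil(span / stride) if ceil_mode else math.floor(span / stride)
    return int(rounded) + 1


# A 224-wide axis with a 3-wide window, padding 1 and stride 2:
print(pooled_output_size(224, 3, 1, 2, ceil_mode=True))   # 113
print(pooled_output_size(224, 3, 1, 2, ceil_mode=False))  # 112
```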
    :type filter_size: int | tuple | list
    :param filter_size_y: The dimension of the filter kernel on the y axis. If the
                          parameter is not set, it will be set automatically
                          according to filter_size.
@@ -6574,10 +6612,11 @@ def row_conv_layer(input,

 @layer_support()
 @wrap_name_default()
-@wrap_param_attr_default()
 def prelu_layer(input,
                 name=None,
                 partial_sum=1,
+                channel_shared=None,
+                num_channels=None,
                 param_attr=None,
                 layer_attr=None):
    """
@@ -6608,6 +6647,14 @@ def prelu_layer(input,
        - partial_sum = number of outputs, indicates all elements share the same weight.

    :type partial_sum: int
+    :param channel_shared: Whether or not the parameters are shared across channels.
+
+        - channel_shared = True, we set the partial_sum to the number of outputs.
+        - channel_shared = False, we set the partial_sum to the number of elements in one channel.
+
+    :type channel_shared: bool
+    :param num_channels: The number of input channels.
+    :type num_channels: int
    :param param_attr: The parameter attribute. See ParameterAttribute for details.
    :type param_attr: ParameterAttribute
    :param layer_attr: The extra layer attribute. See ExtraLayerAttribute for
@@ -6618,7 +6665,25 @@ def prelu_layer(input,
    """
    assert isinstance(input, LayerOutput), 'prelu_layer accepts only one input.'
-    assert isinstance(param_attr, ParameterAttribute)
+
+    if not param_attr:
+        param_attr = ParamAttr(initial_mean=0.25, initial_std=0.0)
+    else:
+        assert isinstance(param_attr, ParameterAttribute)
+
+    if num_channels is None:
+        assert input.num_filters is not None, \
+            'the number of input channels cannot be detected, please specify the num_channels parameter'
+        num_channels = input.num_filters
+
+    if channel_shared is not None:
+        assert isinstance(channel_shared, bool)
+        assert (input.height != 0 and input.width != 0), \
+            'input height and width must be set'
+        if channel_shared:
+            partial_sum = input.height * input.width * num_channels
+        else:
+            partial_sum = input.height * input.width

    l = Layer(
        name=name,
@@ -6630,6 +6695,7 @@ def prelu_layer(input,
        name=name,
        layer_type=LayerType.PRELU,
        parents=input,
+        num_filters=num_channels,
        size=l.config.size)

@@ -7079,7 +7145,7 @@ def img_conv3d_layer(input,
    :type layer_attr: ExtraLayerAttribute
    :param trans: True if it is a convTransLayer, False if it is a convLayer
    :type trans: bool
-    :param layer_type: Specify the layer_type. If the parameter is set, it must be "deconv3d"
+    :param layer_type: Specify the layer type. If the parameter is set, it must be "deconv3d"
                       when trans=True. If not set, it will be automatically set to "deconv3d"
                       when trans=True and "conv3d" when trans=False.
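To make the channel_shared semantics concrete: partial_sum is derived from the input geometry, and the number of learnable weights is input.size / partial_sum. The helper below is a hypothetical illustration, not part of the patch; its counts agree with the regenerated test_prelu_layer.protostr that follows:

```python
def prelu_param_count(height, width, num_channels, channel_shared):
    """Number of learnable PReLU weights when every group of `partial_sum`
    consecutive outputs shares one weight, as in the patched prelu_layer."""
    size = height * width * num_channels
    if channel_shared:
        partial_sum = height * width * num_channels  # one weight in total
    else:
        partial_sum = height * width  # one weight per channel
    assert size % partial_sum == 0, "a wrong setting for partial_sum"
    return size // partial_sum


# For the 10 x 10 x 3 input used in test_prelu_layer.py:
print(prelu_param_count(10, 10, 3, channel_shared=True))   # 1
print(prelu_param_count(10, 10, 3, channel_shared=False))  # 3
```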
:type layer_type: basestring diff --git a/python/paddle/trainer_config_helpers/tests/configs/protostr/img_layers.protostr b/python/paddle/trainer_config_helpers/tests/configs/protostr/img_layers.protostr index b14121e82c..3e0f957648 100644 --- a/python/paddle/trainer_config_helpers/tests/configs/protostr/img_layers.protostr +++ b/python/paddle/trainer_config_helpers/tests/configs/protostr/img_layers.protostr @@ -65,6 +65,7 @@ layers { height: 227 width: 227 depth: 1 + epsilon: 1e-05 } layers { name: "__crmnorm_0__" diff --git a/python/paddle/trainer_config_helpers/tests/configs/protostr/img_trans_layers.protostr b/python/paddle/trainer_config_helpers/tests/configs/protostr/img_trans_layers.protostr index c7a487a112..a18a4652e1 100644 --- a/python/paddle/trainer_config_helpers/tests/configs/protostr/img_trans_layers.protostr +++ b/python/paddle/trainer_config_helpers/tests/configs/protostr/img_trans_layers.protostr @@ -65,6 +65,7 @@ layers { height: 256 width: 256 depth: 1 + epsilon: 1e-05 } layers { name: "__crmnorm_0__" diff --git a/python/paddle/trainer_config_helpers/tests/configs/protostr/test_BatchNorm3D.protostr b/python/paddle/trainer_config_helpers/tests/configs/protostr/test_BatchNorm3D.protostr index 832ed24a31..9b69ae4a3b 100644 --- a/python/paddle/trainer_config_helpers/tests/configs/protostr/test_BatchNorm3D.protostr +++ b/python/paddle/trainer_config_helpers/tests/configs/protostr/test_BatchNorm3D.protostr @@ -36,6 +36,7 @@ layers { height: 6 width: 20 depth: 3 + epsilon: 1e-05 } parameters { name: "___batch_norm_0__.w0" diff --git a/python/paddle/trainer_config_helpers/tests/configs/protostr/test_prelu_layer.protostr b/python/paddle/trainer_config_helpers/tests/configs/protostr/test_prelu_layer.protostr index 94ad56cab0..63fb38c650 100644 --- a/python/paddle/trainer_config_helpers/tests/configs/protostr/test_prelu_layer.protostr +++ b/python/paddle/trainer_config_helpers/tests/configs/protostr/test_prelu_layer.protostr @@ -4,6 +4,8 @@ layers { type: "data" size: 300 active_type: "" + height: 10 + width: 10 } layers { name: "__prelu_layer_0__" @@ -15,6 +17,9 @@ layers { input_parameter_name: "___prelu_layer_0__.w0" } partial_sum: 1 + height: 10 + width: 10 + depth: 1 } layers { name: "__prelu_layer_1__" @@ -26,6 +31,9 @@ layers { input_parameter_name: "___prelu_layer_1__.w0" } partial_sum: 1 + height: 10 + width: 10 + depth: 1 } layers { name: "__prelu_layer_2__" @@ -37,41 +45,100 @@ layers { input_parameter_name: "___prelu_layer_2__.w0" } partial_sum: 5 + height: 10 + width: 10 + depth: 1 +} +layers { + name: "__prelu_layer_3__" + type: "prelu" + size: 300 + active_type: "" + inputs { + input_layer_name: "input" + input_parameter_name: "___prelu_layer_3__.w0" + } + partial_sum: 300 + height: 10 + width: 10 + depth: 1 +} +layers { + name: "__prelu_layer_4__" + type: "prelu" + size: 300 + active_type: "" + inputs { + input_layer_name: "input" + input_parameter_name: "___prelu_layer_4__.w0" + } + partial_sum: 100 + height: 10 + width: 10 + depth: 1 } parameters { name: "___prelu_layer_0__.w0" size: 300 - initial_mean: 0.0 - initial_std: 0.057735026919 + initial_mean: 0.25 + initial_std: 0.0 + dims: 1 + dims: 300 initial_strategy: 0 - initial_smart: true + initial_smart: false } parameters { name: "___prelu_layer_1__.w0" size: 300 - initial_mean: 0.0 - initial_std: 0.057735026919 + initial_mean: 0.25 + initial_std: 0.0 + dims: 1 + dims: 300 initial_strategy: 0 - initial_smart: true + initial_smart: false } parameters { name: "___prelu_layer_2__.w0" size: 60 - initial_mean: 0.0 - 
initial_std: 0.129099444874
+  initial_mean: 0.25
+  initial_std: 0.0
+  dims: 1
+  dims: 60
+  initial_strategy: 0
+  initial_smart: false
+}
+parameters {
+  name: "___prelu_layer_3__.w0"
+  size: 1
+  initial_mean: 0.25
+  initial_std: 0.0
+  dims: 1
+  dims: 1
+  initial_strategy: 0
+  initial_smart: false
+}
+parameters {
+  name: "___prelu_layer_4__.w0"
+  size: 3
+  initial_mean: 0.25
+  initial_std: 0.0
+  dims: 1
+  dims: 3
   initial_strategy: 0
-  initial_smart: true
+  initial_smart: false
 }
 input_layer_names: "input"
-output_layer_names: "__prelu_layer_2__"
+output_layer_names: "__prelu_layer_4__"
 sub_models {
   name: "root"
   layer_names: "input"
   layer_names: "__prelu_layer_0__"
   layer_names: "__prelu_layer_1__"
   layer_names: "__prelu_layer_2__"
+  layer_names: "__prelu_layer_3__"
+  layer_names: "__prelu_layer_4__"
   input_layer_names: "input"
-  output_layer_names: "__prelu_layer_2__"
+  output_layer_names: "__prelu_layer_4__"
   is_recurrent_layer_group: false
 }
diff --git a/python/paddle/trainer_config_helpers/tests/configs/test_prelu_layer.py b/python/paddle/trainer_config_helpers/tests/configs/test_prelu_layer.py
index aae90fab32..45b02fbf32 100644
--- a/python/paddle/trainer_config_helpers/tests/configs/test_prelu_layer.py
+++ b/python/paddle/trainer_config_helpers/tests/configs/test_prelu_layer.py
@@ -1,8 +1,10 @@
 from paddle.trainer_config_helpers import *

-data = data_layer(name='input', size=300)
-prelu = prelu_layer(input=data)
-prelu = prelu_layer(input=data, partial_sum=1)
-prelu = prelu_layer(input=data, partial_sum=5)
+data = data_layer(name='input', size=300, height=10, width=10)
+prelu = prelu_layer(input=data, num_channels=3)
+prelu = prelu_layer(input=data, partial_sum=1, num_channels=3)
+prelu = prelu_layer(input=data, partial_sum=5, num_channels=3)
+prelu = prelu_layer(input=data, channel_shared=True, num_channels=3)
+prelu = prelu_layer(input=data, channel_shared=False, num_channels=3)

 outputs(prelu)
diff --git a/python/paddle/v2/__init__.py b/python/paddle/v2/__init__.py
index 7bbe3eaaa6..33a0829ba8 100644
--- a/python/paddle/v2/__init__.py
+++ b/python/paddle/v2/__init__.py
@@ -62,21 +62,15 @@ __all__ = [
 cp.begin_parse()

-def init(**kwargs):
-    import py_paddle.swig_paddle as api
-    args = []
-    args_dict = {}
-    # NOTE: append arguments if they are in ENV
-    for ek, ev in os.environ.iteritems():
-        if ek.startswith("PADDLE_INIT_"):
-            args_dict[ek.replace("PADDLE_INIT_", "").lower()] = str(ev)
+def set_omp_mkl_env_vars(trainer_count):
+    '''Auto-set the CPU environment variables if they have not been set before.
+    Export KMP_AFFINITY and OMP_DYNAMIC according to the Hyper-Threading status.
+    Export OMP_NUM_THREADS and MKL_NUM_THREADS according to trainer_count.
+ ''' + import platform + if not platform.system() in ['Linux', 'Darwin']: + return - args_dict.update(kwargs) - # NOTE: overwrite arguments from ENV if it is in kwargs - for key in args_dict.keys(): - args.append('--%s=%s' % (key, str(args_dict[key]))) - - # auto set cpu environment def set_env(key, value): '''If the key has not been set in the environment, set it with value.''' assert isinstance(key, str) @@ -85,22 +79,59 @@ def init(**kwargs): if envset is None: os.environ[key] = value - ht = os.popen("lscpu |grep \"per core\"|awk -F':' '{print $2}'|xargs") - ht = int(ht.read()) - if ht == 1: # ht is off - set_env("OMP_DYNAMIC", "false") - set_env("KMP_AFFINITY", "granularity=fine,compact,0,0") - else: + def num_physical_cores(): + '''Get the number of physical cores''' + if platform.system() == "Linux": + num_sockets = int( + os.popen("lscpu |grep \"Socket\" |awk -F':' '{print $2}'|xargs") + .read()) + num_cores_per_socket = int( + os.popen( + "lscpu |grep \"per socket\" |awk -F':' '{print $2}'|xargs") + .read()) + return num_sockets * num_cores_per_socket + else: + cmds = {"Darwin": "sysctl -n hw.physicalcpu"} + return int(os.popen(cmds.get(platform.system(), "expr 1")).read()) + + def num_logical_processors(): + '''Get the number of logical processors''' + cmds = { + "Linux": "grep \"processor\" /proc/cpuinfo|sort -u|wc -l", + "Darwin": "sysctl -n hw.logicalcpu" + } + return int(os.popen(cmds.get(platform.system(), "expr 1")).read()) + + num_cores = num_physical_cores() + num_processors = num_logical_processors() + if num_processors > num_cores: # Hyper Threading is enabled set_env("OMP_DYNAMIC", "true") set_env("KMP_AFFINITY", "granularity=fine,compact,1,0") - processors = os.popen("grep \"processor\" /proc/cpuinfo|sort -u|wc -l") - processors = int(processors.read()) - trainers = kwargs.get('trainer_count', 1) - threads = processors / trainers + else: + set_env("OMP_DYNAMIC", "false") + set_env("KMP_AFFINITY", "granularity=fine,compact,0,0") + threads = num_processors / trainer_count threads = '1' if threads < 1 else str(threads) set_env("OMP_NUM_THREADS", threads) set_env("MKL_NUM_THREADS", threads) + +def init(**kwargs): + import py_paddle.swig_paddle as api + args = [] + args_dict = {} + # NOTE: append arguments if they are in ENV + for ek, ev in os.environ.iteritems(): + if ek.startswith("PADDLE_INIT_"): + args_dict[ek.replace("PADDLE_INIT_", "").lower()] = str(ev) + + args_dict.update(kwargs) + # NOTE: overwrite arguments from ENV if it is in kwargs + for key in args_dict.keys(): + args.append('--%s=%s' % (key, str(args_dict[key]))) + + set_omp_mkl_env_vars(kwargs.get('trainer_count', 1)) + if 'use_gpu' in kwargs: cp.g_command_config_args['use_gpu'] = kwargs['use_gpu'] if 'use_mkldnn' in kwargs: diff --git a/python/paddle/v2/fluid/framework.py b/python/paddle/v2/fluid/framework.py index acca6ba35c..7f7c310ad8 100644 --- a/python/paddle/v2/fluid/framework.py +++ b/python/paddle/v2/fluid/framework.py @@ -15,6 +15,37 @@ def unique_name(prefix): return "_".join([prefix, str(uid)]) +def convert_np_dtype_to_dtype_(np_dtype): + dtype = np.dtype(np_dtype) + if dtype == np.float32: + return core.DataType.FP32 + elif dtype == np.float64: + return core.DataType.FP64 + elif dtype == np.float16: + return core.DataType.FP16 + elif dtype == np.int32: + return core.DataType.INT32 + elif dtype == np.int16: + return core.DataType.INT16 + elif dtype == np.int64: + return core.DataType.INT64 + elif dtype == np.bool: + return core.DataType.BOOL + else: + raise ValueError("Not supported numpy dtype 
" + str(dtype)) + + +def dtype_is_floating(dtype): + if not isinstance(dtype, core.DataType): + dtype = convert_np_dtype_to_dtype_(dtype) + + if (dtype == core.DataType.FP16 or dtype == core.DataType.FP32 or + dtype == core.DataType.FP64): + return True + else: + return False + + def _debug_string_(proto, throw_on_error=True): error_fields = list() if not proto.IsInitialized(error_fields) and throw_on_error: @@ -66,7 +97,7 @@ class Variable(object): "matched.".format(self.name, old_shape, shape)) if dtype is not None: if not isinstance(dtype, core.DataType): - dtype = Variable._convert_np_dtype_to_dtype_(dtype) + dtype = convert_np_dtype_to_dtype_(dtype) if is_new_var: self.desc.set_data_type(dtype) else: @@ -148,26 +179,6 @@ class Variable(object): uid = core.unique_integer(prefix) # unique during whole process. return "_".join([prefix, str(uid)]) - @staticmethod - def _convert_np_dtype_to_dtype_(np_dtype): - dtype = np.dtype(np_dtype) - if dtype == np.float32: - return core.DataType.FP32 - elif dtype == np.float64: - return core.DataType.FP64 - elif dtype == np.float16: - return core.DataType.FP16 - elif dtype == np.int32: - return core.DataType.INT32 - elif dtype == np.int16: - return core.DataType.INT16 - elif dtype == np.int64: - return core.DataType.INT64 - elif dtype == np.bool: - return core.DataType.BOOL - else: - raise ValueError("Not supported numpy dtype " + str(dtype)) - def get_all_op_protos(): """ diff --git a/python/paddle/v2/fluid/initializer.py b/python/paddle/v2/fluid/initializer.py index ded144ecd5..1a9d804ee7 100644 --- a/python/paddle/v2/fluid/initializer.py +++ b/python/paddle/v2/fluid/initializer.py @@ -285,3 +285,86 @@ class XavierInitializer(Initializer): }) var.op = op return op + + +class MSRAInitializer(Initializer): + """Implements the MSRA initializer a.k.a. Kaiming Initializer + + This class implements the weight initialization from the paper + Delving Deep into Rectifiers: Surpassing Human-Level Performance on + ImageNet Classification[1] by Kaiming He, Xiangyu Zhang, Shaoqing Ren + and Jian Sun. This is a robust initialization method that particularly + considers the rectifier nonlinearities. In case of Uniform distribution, + the range is [-x, x], where x = sqrt(6 / fan_in). In case of Normal + distribution, the mean is 0 and the standard deviation + is sqrt(2/ fan_in). + + References: + [1] Delving Deep into Rectifiers: Surpassing Human-Level Performance + on ImageNet Classification + (https://arxiv.org/abs/1502.01852) + """ + + def __init__(self, uniform=True, fan_in=None, seed=0): + """Constructor for MSRAInitializer + + Args: + uniform: whether to use uniform or normal distribution + fan_in: fan_in for MSRAInitializer. If None, it is + inferred from the variable. + seed: random seed + + Note: It is recommended to set fan_in to None for most cases. 
+ """ + assert uniform is not None + assert seed is not None + super(MSRAInitializer, self).__init__() + self._uniform = uniform + self._fan_in = fan_in + self._seed = seed + + def __call__(self, var, block): + """Add MSRA initialization ops for a variable + + Args: + var: Variable that needs to be initialized + block: The block in which initialization ops + should be added + + Returns: + the initialization op + """ + assert isinstance(var, framework.Variable) + assert isinstance(block, framework.Block) + f_in, f_out = self._compute_fans(var) + + # If fan_in is passed, use it + fan_in = f_in if self._fan_in is None else self._fan_in + + if self._uniform: + limit = np.sqrt(6.0 / float(fan_in)) + op = block.prepend_op( + type="uniform_random", + outputs={"Out": var}, + attrs={ + "shape": var.shape, + "data_type": int(var.data_type), + "min": -limit, + "max": limit, + "seed": self._seed + }) + + else: + std = np.sqrt(2.0 / float(fan_in)) + op = block.prepend_op( + type="gaussian_random", + outputs={"Out": var}, + attrs={ + "shape": var.shape, + "data_type": int(var.data_type), + "mean": 0.0, + "std": std, + "seed": self._seed + }) + var.op = op + return op diff --git a/python/paddle/v2/fluid/layer_helper.py b/python/paddle/v2/fluid/layer_helper.py index a97e07982b..e40551ca73 100644 --- a/python/paddle/v2/fluid/layer_helper.py +++ b/python/paddle/v2/fluid/layer_helper.py @@ -2,7 +2,7 @@ import copy import itertools from paddle.v2.fluid.framework import Variable, g_main_program, \ - g_startup_program, unique_name, Program + g_startup_program, unique_name, Program, dtype_is_floating from paddle.v2.fluid.initializer import ConstantInitializer, \ UniformInitializer, XavierInitializer @@ -61,7 +61,7 @@ class LayerHelper(object): @property def param_attr(self): - default = {'name': None, 'initializer': XavierInitializer()} + default = {'name': None} actual = self.kwargs.get('param_attr', None) if actual is None: actual = default @@ -72,7 +72,7 @@ class LayerHelper(object): @property def bias_attr(self): - default = {'name': None, 'initializer': ConstantInitializer()} + default = {'name': None} bias_attr = self.kwargs.get('bias_attr', None) if bias_attr is None: bias_attr = default @@ -119,12 +119,17 @@ class LayerHelper(object): attr_copy = copy.deepcopy(attr) if initializer is not None: attr_copy['initializer'] = initializer + else: + attr_copy['initializer'] = self._get_default_initializer(dtype) if attr_copy['name'] is None: attr_copy['name'] = unique_name(".".join([self.name, suffix])) self.startup_program.global_block().create_parameter( dtype=dtype, shape=shape, **attr_copy) return self.main_program.global_block().create_parameter( - name=attr_copy['name'], dtype=dtype, shape=shape) + name=attr_copy['name'], + dtype=dtype, + shape=shape, + trainable=attr_copy.get('trainable', True)) def create_tmp_variable(self, dtype): return self.main_program.current_block().create_var( @@ -149,13 +154,19 @@ class LayerHelper(object): persistable=True, initializer=initializer) - def append_bias_op(self, input_var, dim_start=1, dim_end=None): + def append_bias_op(self, + input_var, + bias_initializer, + dim_start=1, + dim_end=None): """ Append bias operator and return its output. If the user does not set bias_attr, append_bias_op will return input_var - :param input_var: the input variable. The len(input_var.shape) is larger - or equal than 2. + :param input_var: the input variable. The len(input_var.shape) is + larger or equal than 2. 
+ :bias_initializer: an instance of a subclass of Initializer used to + initialize the bias :param dim_start: :param dim_end: the shape of the bias will be input_var.shape[dim_start:dim_end]. The bias is broadcasted to other @@ -167,7 +178,11 @@ class LayerHelper(object): return input_var b = self.create_parameter( - attr=bias_attr, shape=size, dtype=input_var.data_type, suffix='b') + attr=bias_attr, + shape=size, + dtype=input_var.data_type, + suffix='b', + initializer=bias_initializer) tmp = self.create_tmp_variable(dtype=input_var.data_type) self.append_op( type='elementwise_add', @@ -191,3 +206,10 @@ class LayerHelper(object): outputs={"Y": [tmp]}, attrs=act) return tmp + + def _get_default_initializer(self, dtype): + if dtype is None or dtype_is_floating(dtype) is True: + return XavierInitializer() + else: + # For integer and boolean types, initialize with all zeros + return ConstantInitializer() diff --git a/python/paddle/v2/fluid/layers.py b/python/paddle/v2/fluid/layers.py index 02ad2ecd72..fac91aac97 100644 --- a/python/paddle/v2/fluid/layers.py +++ b/python/paddle/v2/fluid/layers.py @@ -3,7 +3,7 @@ import paddle.v2.fluid.proto.framework_pb2 as framework_pb2 from paddle.v2.fluid.framework import OpProtoHolder, Variable, Program, \ Operator from paddle.v2.fluid.initializer import ConstantInitializer, \ - NormalInitializer + NormalInitializer, XavierInitializer from paddle.v2.fluid.layer_helper import LayerHelper, unique_name import re import cStringIO @@ -17,11 +17,13 @@ __all__ = [ def fc(input, size, + num_flatten_dims=1, param_attr=None, + param_initializer=None, bias_attr=None, - name=None, + bias_initializer=None, act=None, - num_flatten_dims=1, + name=None, main_program=None, startup_program=None): """ @@ -30,11 +32,15 @@ def fc(input, Args: input: The input tensor to the function size: The size of the layer + num_flatten_dims: Number of columns in input param_attr: The parameters/weights to the FC Layer + param_initializer: Initializer used for the weight/parameter. + If None, XavierInitializer() is used bias_attr: The bias parameter for the FC layer - name: Name/alias of the function + bias_initializer: Initializer used for the bias. + If None, then ConstantInitializer() is used act: Activation to be applied to the output of FC layer - num_flatten_dims: Number of columns in input + name: Name/alias of the function main_program: Name of the main program that calls this startup_program: Name of the startup program @@ -50,10 +56,23 @@ def fc(input, to the LayerHelper constructor. 
""" + + def _get_default_param_initializer(): + return XavierInitializer() + + def _get_default_bias_initializer(): + return ConstantInitializer() + helper = LayerHelper('fc', **locals()) dtype = helper.input_dtype() + if param_initializer is None: + param_initializer = _get_default_param_initializer() + + if bias_initializer is None: + bias_initializer = _get_default_bias_initializer() + mul_results = [] for input_var, param_attr in helper.iter_inputs_and_params(): input_shape = input_var.shape @@ -61,7 +80,10 @@ def fc(input, reduce(lambda a, b: a * b, input_shape[num_flatten_dims:], 1) ] + [size] w = helper.create_parameter( - attr=param_attr, shape=param_shape, dtype=dtype) + attr=param_attr, + initializer=param_initializer, + shape=param_shape, + dtype=dtype) tmp = helper.create_tmp_variable(dtype) helper.append_op( type="mul", @@ -82,16 +104,17 @@ def fc(input, helper.append_op( type="sum", inputs={"X": mul_results}, outputs={"Out": pre_bias}) # add bias - pre_activation = helper.append_bias_op(pre_bias) + pre_activation = helper.append_bias_op(pre_bias, bias_initializer) # add activation return helper.append_activation(pre_activation) def embedding(input, size, - data_type='float32', is_sparse=False, + param_initializer=None, param_attr=None, + data_type='float32', main_program=None, startup_program=None): """ @@ -100,9 +123,9 @@ def embedding(input, Args: input: The input to the function size: The size of the layer - data_type: The type of data : float32, float_16, int etc is_sparse: A flag that decleares whether the input is sparse param_attr: Parameters for this layer + data_type: The type of data : float32, float_16, int etc main_program: Name of the main program that calls this startup_program: Name of the startup program @@ -114,9 +137,16 @@ def embedding(input, to the LayerHelper constructor. """ + + def _get_default_param_initializer(): + return XavierInitializer() + helper = LayerHelper('embedding', **locals()) w = helper.create_parameter( - attr=helper.param_attr, shape=size, dtype=data_type) + attr=helper.param_attr, + shape=size, + dtype=data_type, + initializer=param_initializer or _get_default_param_initializer()) tmp = helper.create_tmp_variable(data_type) helper.append_op( type='lookup_table', @@ -130,7 +160,6 @@ def embedding(input, # TODO(qijun): expose H0 and C0 def dynamic_lstm(input, size, - data_type='float32', param_attr=None, bias_attr=None, use_peepholes=True, @@ -138,6 +167,7 @@ def dynamic_lstm(input, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', + data_type='float32', main_program=None, startup_program=None): helper = LayerHelper('lstm', **locals()) @@ -178,9 +208,9 @@ def dynamic_lstm(input, def data(name, shape, + append_batch_size=True, data_type='float32', type=core.VarDesc.VarType.LOD_TENSOR, - append_batch_size=True, main_program=None, startup_program=None, stop_gradient=True): @@ -190,9 +220,9 @@ def data(name, Args: name: The name/alias of the function shape: Tuple declaring the shape. + append_batch_size: Whether or not to append the data as a batch. data_type: The type of data : float32, float_16, int etc type: The output type. By default it is LOD_TENSOR. - append_batch_size: Whether or not to append the data as a batch. main_program: Name of the main program that calls this startup_program: Name of the startup program stop_gradient: A boolean that mentions whether gradient should flow. 
@@ -226,7 +256,7 @@ def data(name, stop_gradient=stop_gradient) -def create_tensor(dtype, name=None, main_program=None): +def create_tensor(dtype, name=None, main_program=None, startup_program=None): helper = LayerHelper("create_tensor", **locals()) return helper.create_variable(name=helper.name, dtype=dtype) @@ -390,30 +420,12 @@ _create_op_func_('mul') _create_op_func_('elementwise_add') _create_op_func_('dropout') _create_op_func_('reshape') -_create_op_func_('elementwise_add') _create_op_func_('sigmoid') _create_op_func_('scale') _create_op_func_('reshape') _create_op_func_('transpose') -def fill_constant(data_type, shape, value=None, program=None): - """ - This function creates a tensor , with shape as mentioned in the input and - specified data_type and fills this up with a constant value that - comes in the input. - """ - helper = LayerHelper('fill_constant', **locals()) - out = helper.create_tmp_variable(dtype=data_type) - helper.append_op( - type='fill_constant', - outputs={'Out': [out]}, - attrs={'data_type': data_type, - 'shape': shape, - 'value': value}) - return out - - def cast(x, data_type, main_program=None): """ This function takes in the input with input_data_type @@ -456,7 +468,42 @@ def sums(input, main_program=None, startup_program=None): return out -def assign(input, output, main_program=None): +def linear_chain_crf(input, + label, + param_attr=None, + param_initializer=None, + main_program=None, + startup_program=None): + def _get_default_param_initializer(): + return XavierInitializer() + + helper = LayerHelper('linear_chain_crf', **locals()) + size = input.shape[1] + transition = helper.create_parameter( + attr=helper.param_attr, + shape=[size + 2, size], + dtype=helper.input_dtype(), + initializer=param_initializer or _get_default_param_initializer()) + alpha = helper.create_tmp_variable(dtype=helper.input_dtype()) + emission_exps = helper.create_tmp_variable(dtype=helper.input_dtype()) + transition_exps = helper.create_tmp_variable(dtype=helper.input_dtype()) + log_likelihood = helper.create_tmp_variable(dtype=helper.input_dtype()) + helper.append_op( + type='linear_chain_crf', + inputs={"Emission": [input], + "Transition": transition, + "Label": label}, + outputs={ + "Alpha": [alpha], + "EmissionExps": [emission_exps], + "TransitionExps": transition_exps, + "LogLikelihood": log_likelihood + }) + + return log_likelihood + + +def assign(input, output, main_program=None, startup_program=None): helper = LayerHelper('assign', **locals()) helper.append_op( type='scale', @@ -468,7 +515,7 @@ def assign(input, output, main_program=None): def split_lod_tensor(input, mask, - level, + level=0, main_program=None, startup_program=None): helper = LayerHelper('split_lod_tensor', **locals()) @@ -490,11 +537,11 @@ def merge_lod_tensor(in_true, in_false, x, mask, - level, + level=0, main_program=None, startup_program=None): helper = LayerHelper('merge_lod_tensor', **locals()) - out = helper.create_tmp_variable(dtype=x.data_type) + out = helper.create_tmp_variable(dtype=in_true.data_type) helper.append_op( type='merge_lod_tensor', inputs={'X': x, @@ -596,10 +643,12 @@ def sequence_conv(input, num_filters, filter_size=3, filter_stride=1, - act=None, padding=None, bias_attr=None, + bias_initializer=None, param_attr=None, + param_initializer=None, + act=None, main_program=None, startup_program=None): """ @@ -607,6 +656,13 @@ def sequence_conv(input, other convolutional configurations for the filters and stride as given in the input parameters to the function. 
""" + + def _get_default_bias_initializer(): + return ConstantInitializer() + + def _get_default_param_initializer(): + return XavierInitializer() + # FIXME(dzh) : want to unify the argument of python layer # function. So we ignore some unecessary attributes. # such as, padding_trainable, context_start. @@ -614,9 +670,17 @@ def sequence_conv(input, helper = LayerHelper('sequence_conv', **locals()) dtype = helper.input_dtype() + if param_initializer is None: + param_initializer = _get_default_param_initializer() + if bias_initializer is None: + bias_initializer = _get_default_bias_initializer() + filter_shape = [filter_size * input.shape[1], num_filters] filter = helper.create_parameter( - attr=helper.param_attr, shape=filter_shape, dtype=dtype) + attr=helper.param_attr, + shape=filter_shape, + dtype=dtype, + initializer=param_initializer) pre_bias = helper.create_tmp_variable(dtype) helper.append_op( @@ -631,20 +695,22 @@ def sequence_conv(input, 'contextStart': -int(filter_size / 2), 'contextLength': filter_size }) - pre_act = helper.append_bias_op(pre_bias) + pre_act = helper.append_bias_op(pre_bias, bias_initializer) return helper.append_activation(pre_act) def conv2d(input, num_filters, - name=None, - filter_size=[1, 1], - act=None, - groups=None, + filter_size, stride=[1, 1], padding=None, - bias_attr=None, + groups=None, param_attr=None, + param_initializer=None, + bias_attr=None, + bias_initializer=None, + act=None, + name=None, main_program=None, startup_program=None): """ @@ -654,6 +720,14 @@ def conv2d(input, This funciton can also append an activation on top of the conv-2d output, if mentioned in the input parameters. """ + + def _get_default_bias_initializer(): + return ConstantInitializer() + + def _get_default_param_initializer(filter_size, num_channels): + std = (2.0 / (filter_size[0]**2 * num_channels))**0.5 + return NormalInitializer(0.0, std, 0) + helper = LayerHelper('conv2d', **locals()) dtype = helper.input_dtype() @@ -675,12 +749,17 @@ def conv2d(input, input_shape = input.shape filter_shape = [num_filters, num_filter_channels] + filter_size - std = (2.0 / (filter_size[0]**2 * num_channels))**0.5 + if param_initializer is None: + param_initializer = _get_default_param_initializer(filter_size, + num_channels) + if bias_initializer is None: + bias_initializer = _get_default_bias_initializer() + filter = helper.create_parameter( attr=helper.param_attr, shape=filter_shape, dtype=dtype, - initializer=NormalInitializer(0.0, std, 0)) + initializer=param_initializer) pre_bias = helper.create_tmp_variable(dtype) helper.append_op( @@ -694,7 +773,8 @@ def conv2d(input, 'paddings': padding, 'groups': groups}) - pre_act = helper.append_bias_op(pre_bias, dim_start=1, dim_end=2) + pre_act = helper.append_bias_op( + pre_bias, bias_initializer, dim_start=1, dim_end=2) return helper.append_activation(pre_act) @@ -1311,7 +1391,7 @@ def array_to_lod_tensor(x, table, main_program=None): return tmp -def fill_constant(shape, dtype, value, main_program=None): +def fill_constant(shape, dtype, value, main_program=None, startup_program=None): """ This function creates a tensor , with shape as mentioned in the input and specified data_type and fills this up with a constant value that @@ -1332,6 +1412,31 @@ def fill_constant(shape, dtype, value, main_program=None): return out +def fill_constant_batch_size_like(input, + shape, + dtype, + value, + input_dim_idx=0, + output_dim_idx=0, + main_program=None, + startup_program=None): + helper = LayerHelper("fill_constant_batch_size_like", **locals()) + 
out = helper.create_tmp_variable(dtype=dtype) + helper.append_op( + type='fill_constant_batch_size_like', + inputs={'Input': input}, + outputs={'Out': [out]}, + attrs={ + 'shape': shape, + 'data_type': out.data_type, + 'value': float(value), + 'input_dim_idx': input_dim_idx, + 'output_dim_idx': output_dim_idx + }) + out.stop_gradient = True + return out + + def ones(shape, dtype, main_program=None): """ This function performs the same function as fill_constant() declared above @@ -1394,7 +1499,7 @@ def create_array(dtype, main_program=None): dtype=dtype) -def less_than(x, y, cond=None, main_program=None): +def less_than(x, y, cond=None, main_program=None, **ignored): helper = LayerHelper("less_than", **locals()) if cond is None: cond = helper.create_tmp_variable(dtype='bool') @@ -1472,13 +1577,20 @@ class ConditionalBlockGuard(BlockGuard): class ConditionalBlock(object): - def __init__(self, inputs, name=None, main_program=None): + def __init__(self, + inputs, + name=None, + main_program=None, + startup_program=None): for each_input in inputs: if not isinstance(each_input, Variable): raise TypeError("Each input should be variable") self.inputs = inputs self.helper = LayerHelper( - 'conditional_block', name=name, main_program=main_program) + 'conditional_block', + name=name, + main_program=main_program, + startup_program=startup_program) def block(self): return ConditionalBlockGuard(self) @@ -1523,3 +1635,148 @@ class ConditionalBlock(object): outputs={'Out': out_list, 'Scope': [step_scope]}, attrs={'block': inside_block}) + + +class IfElseBlockGuard(object): + def __init__(self, is_true, ifelse): + if not isinstance(ifelse, IfElse): + raise TypeError("ifelse must be an instance of IfElse class") + + if ifelse.status != IfElse.OUT_IF_ELSE_BLOCKS: + raise ValueError("You cannot invoke IfElse.block() inside a block") + + self.is_true = is_true + self.ie = ifelse + if is_true: + self.cond_block = ifelse.conditional_true_block + else: + self.cond_block = ifelse.conditional_false_block + + if not isinstance(self.cond_block, ConditionalBlock): + raise TypeError("Unexpected situation") + + self.cond_block = self.cond_block.block() + + def __enter__(self): + self.ie.status = IfElse.IN_IF_ELSE_TRUE_BLOCKS if self.is_true else IfElse.IN_IF_ELSE_FALSE_BLOCKS + self.cond_block.__enter__() + + def __exit__(self, exc_type, exc_val, exc_tb): + if not self.cond_block.__exit__(exc_type, exc_val, exc_tb): + # re-raise inside exception + return False + if len(self.ie.output_table[1 if self.is_true else 0]) == 0: + raise ValueError("Must set output inside block") + self.ie.status = IfElse.OUT_IF_ELSE_BLOCKS + + +class IfElse(object): + OUT_IF_ELSE_BLOCKS = 0 + IN_IF_ELSE_TRUE_BLOCKS = 1 + IN_IF_ELSE_FALSE_BLOCKS = 2 + + def __init__(self, cond, name=None, main_program=None, + startup_program=None): + if not isinstance(cond, Variable): + raise TypeError("cond must be a Variable") + self.helper = LayerHelper( + 'ifelse', + name=name, + main_program=main_program, + startup_program=startup_program) + self.cond = cond + self.input_table = {} + self.status = IfElse.OUT_IF_ELSE_BLOCKS + self.conditional_true_block = ConditionalBlock(inputs=[self.cond]) + self.conditional_false_block = ConditionalBlock(inputs=[self.cond]) + self.output_table = ([], []) # (true_outs, false_outs) + + def input(self, x): + if self.status == IfElse.OUT_IF_ELSE_BLOCKS: + raise ValueError("input must in true/false blocks") + if id(x) not in self.input_table: + parent_block = self.parent_block() + out_true = parent_block.create_var( + 
name=unique_name('ifelse_input' + self.helper.name), + dtype=x.data_type) + + out_false = parent_block.create_var( + name=unique_name('ifelse_input' + self.helper.name), + dtype=x.data_type) + parent_block.append_op( + type='split_lod_tensor', + inputs={ + 'X': x, + 'Mask': self.cond, + }, + outputs={'OutTrue': out_true, + 'OutFalse': out_false}, + attrs={'level': 0}) + self.input_table[id(x)] = (out_true, out_false) + else: + out_true, out_false = self.input_table[id(x)] + + if self.status == IfElse.IN_IF_ELSE_TRUE_BLOCKS: + return out_true + else: + return out_false + + def parent_block(self): + current_block = self.helper.main_program.current_block() + return self.helper.main_program.block(current_block.parent_idx) + + def true_block(self): + return IfElseBlockGuard(True, self) + + def false_block(self): + return IfElseBlockGuard(False, self) + + def output(self, *outs): + if self.status == self.OUT_IF_ELSE_BLOCKS: + raise ValueError("output can only be invoked in the sub-block") + + out_table = self.output_table[1 if self.status == + self.IN_IF_ELSE_TRUE_BLOCKS else 0] + parent_block = self.parent_block() + for each_out in outs: + if not isinstance(each_out, Variable): + raise TypeError("Each output should be a variable") + # create outside tensor + outside_out = parent_block.create_var( + name=unique_name("_".join([self.helper.name, 'output'])), + dtype=each_out.data_type) + out_table.append(outside_out) + + # assign local var to outside + assign( + input=each_out, + output=outside_out, + main_program=self.helper.main_program, + startup_program=self.helper.startup_program) + + def __call__(self): + if self.status != self.OUT_IF_ELSE_BLOCKS: + raise ValueError("IfElse::__call__ must be out of sub-block") + false_len, true_len = map(len, self.output_table) + if false_len == 0 and true_len == 0: + raise ValueError("Must invoke true_block/false_block before " + "__call__") + elif false_len != true_len and false_len != 0 and true_len != 0: + raise ValueError("The output side must be same") + elif false_len == 0 or true_len == 0: + return self.output_table[0 if false_len != 0 else 1] + + # else none of false_len/true_len is zero + # merge together + rlist = [] + for false_var, true_var in zip(*self.output_table): + rlist.append( + merge_lod_tensor( + in_true=true_var, + in_false=false_var, + mask=self.cond, + x=self.cond, + level=0, + main_program=self.helper.main_program, + startup_program=self.helper.startup_program)) + return rlist diff --git a/python/paddle/v2/fluid/optimizer.py b/python/paddle/v2/fluid/optimizer.py index d2841df6af..87a478c290 100644 --- a/python/paddle/v2/fluid/optimizer.py +++ b/python/paddle/v2/fluid/optimizer.py @@ -170,7 +170,8 @@ class Optimizer(object): optimize_ops = [] for param_and_grad in parameters_and_grads: - if param_and_grad[1] is not None: + if param_and_grad[0].trainable is True and param_and_grad[ + 1] is not None: optimize_op = self._append_optimize_op(loss.block, param_and_grad) optimize_ops.append(optimize_op) diff --git a/python/paddle/v2/fluid/tests/book/test_label_semantic_roles.py b/python/paddle/v2/fluid/tests/book/test_label_semantic_roles.py new file mode 100644 index 0000000000..f66e6e748b --- /dev/null +++ b/python/paddle/v2/fluid/tests/book/test_label_semantic_roles.py @@ -0,0 +1,192 @@ +import numpy as np +import paddle.v2 as paddle +import paddle.v2.dataset.conll05 as conll05 +import paddle.v2.fluid.core as core +import paddle.v2.fluid.framework as framework +import paddle.v2.fluid.layers as layers +from paddle.v2.fluid.executor import 
Executor, g_scope +from paddle.v2.fluid.optimizer import SGDOptimizer + +word_dict, verb_dict, label_dict = conll05.get_dict() +word_dict_len = len(word_dict) +label_dict_len = len(label_dict) +pred_len = len(verb_dict) + +mark_dict_len = 2 +word_dim = 32 +mark_dim = 5 +hidden_dim = 512 +depth = 8 +mix_hidden_lr = 1e-3 + +IS_SPARSE = True +PASS_NUM = 10 +BATCH_SIZE = 20 + +embedding_name = 'emb' + + +def load_parameter(file_name, h, w): + with open(file_name, 'rb') as f: + f.read(16) # skip header. + return np.fromfile(f, dtype=np.float32).reshape(h, w) + + +def db_lstm(): + # 8 features + word = layers.data(name='word_data', shape=[1], data_type='int64') + predicate = layers.data(name='verb_data', shape=[1], data_type='int64') + ctx_n2 = layers.data(name='ctx_n2_data', shape=[1], data_type='int64') + ctx_n1 = layers.data(name='ctx_n1_data', shape=[1], data_type='int64') + ctx_0 = layers.data(name='ctx_0_data', shape=[1], data_type='int64') + ctx_p1 = layers.data(name='ctx_p1_data', shape=[1], data_type='int64') + ctx_p2 = layers.data(name='ctx_p2_data', shape=[1], data_type='int64') + mark = layers.data(name='mark_data', shape=[1], data_type='int64') + + predicate_embedding = layers.embedding( + input=predicate, + size=[pred_len, word_dim], + data_type='float32', + is_sparse=IS_SPARSE, + param_attr={'name': 'vemb'}) + + mark_embedding = layers.embedding( + input=mark, + size=[mark_dict_len, mark_dim], + data_type='float32', + is_sparse=IS_SPARSE) + + word_input = [word, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2] + emb_layers = [ + layers.embedding( + size=[word_dict_len, word_dim], + input=x, + param_attr={'name': embedding_name, + 'trainable': False}) for x in word_input + ] + emb_layers.append(predicate_embedding) + emb_layers.append(mark_embedding) + + hidden_0_layers = [ + layers.fc(input=emb, size=hidden_dim) for emb in emb_layers + ] + + hidden_0 = layers.sums(input=hidden_0_layers) + + lstm_0 = layers.dynamic_lstm( + input=hidden_0, + size=hidden_dim, + candidate_activation='relu', + gate_activation='sigmoid', + cell_activation='sigmoid') + + # stack L-LSTM and R-LSTM with direct edges + input_tmp = [hidden_0, lstm_0] + + for i in range(1, depth): + mix_hidden = layers.sums(input=[ + layers.fc(input=input_tmp[0], size=hidden_dim), + layers.fc(input=input_tmp[1], size=hidden_dim) + ]) + + lstm = layers.dynamic_lstm( + input=mix_hidden, + size=hidden_dim, + candidate_activation='relu', + gate_activation='sigmoid', + cell_activation='sigmoid', + is_reverse=((i % 2) == 1)) + + input_tmp = [mix_hidden, lstm] + + feature_out = layers.sums(input=[ + layers.fc(input=input_tmp[0], size=label_dict_len), + layers.fc(input=input_tmp[1], size=label_dict_len) + ]) + + return feature_out + + +def to_lodtensor(data, place): + seq_lens = [len(seq) for seq in data] + cur_len = 0 + lod = [cur_len] + for l in seq_lens: + cur_len += l + lod.append(cur_len) + flattened_data = np.concatenate(data, axis=0).astype("int64") + flattened_data = flattened_data.reshape([len(flattened_data), 1]) + res = core.LoDTensor() + res.set(flattened_data, place) + res.set_lod([lod]) + return res + + +def main(): + # define network topology + feature_out = db_lstm() + target = layers.data(name='target', shape=[1], data_type='int64') + crf_cost = layers.linear_chain_crf( + input=feature_out, + label=target, + param_attr={"name": 'crfw', + "learning_rate": mix_hidden_lr}) + avg_cost = layers.mean(x=crf_cost) + # TODO(qiao) + # 1. add crf_decode_layer and evaluator + # 2. 
use another optimizer and check why the output becomes NaN
+    sgd_optimizer = SGDOptimizer(learning_rate=0.0001)
+    opts = sgd_optimizer.minimize(avg_cost)
+
+    train_data = paddle.batch(
+        paddle.reader.shuffle(
+            paddle.dataset.conll05.test(), buf_size=8192),
+        batch_size=BATCH_SIZE)
+    place = core.CPUPlace()
+    exe = Executor(place)
+
+    exe.run(framework.default_startup_program())
+
+    embedding_param = g_scope.find_var(embedding_name).get_tensor()
+    embedding_param.set(
+        load_parameter(conll05.get_embedding(), word_dict_len, word_dim), place)
+
+    batch_id = 0
+    for pass_id in xrange(PASS_NUM):
+        for data in train_data():
+            word_data = to_lodtensor(map(lambda x: x[0], data), place)
+            ctx_n2_data = to_lodtensor(map(lambda x: x[1], data), place)
+            ctx_n1_data = to_lodtensor(map(lambda x: x[2], data), place)
+            ctx_0_data = to_lodtensor(map(lambda x: x[3], data), place)
+            ctx_p1_data = to_lodtensor(map(lambda x: x[4], data), place)
+            ctx_p2_data = to_lodtensor(map(lambda x: x[5], data), place)
+            verb_data = to_lodtensor(map(lambda x: x[6], data), place)
+            mark_data = to_lodtensor(map(lambda x: x[7], data), place)
+            target = to_lodtensor(map(lambda x: x[8], data), place)
+
+            outs = exe.run(framework.default_main_program(),
+                           feed={
+                               'word_data': word_data,
+                               'ctx_n2_data': ctx_n2_data,
+                               'ctx_n1_data': ctx_n1_data,
+                               'ctx_0_data': ctx_0_data,
+                               'ctx_p1_data': ctx_p1_data,
+                               'ctx_p2_data': ctx_p2_data,
+                               'verb_data': verb_data,
+                               'mark_data': mark_data,
+                               'target': target
+                           },
+                           fetch_list=[avg_cost])
+            avg_cost_val = np.array(outs[0])
+
+            if batch_id % 10 == 0:
+                print("avg_cost=" + str(avg_cost_val))
+
+            # exit early for CI
+            exit(0)
+
+            batch_id = batch_id + 1
+
+
+if __name__ == '__main__':
+    main()
diff --git a/python/paddle/v2/fluid/tests/book/test_understand_sentiment_lstm.py b/python/paddle/v2/fluid/tests/book/test_understand_sentiment_lstm.py
index 280f6e902c..9a51a2f207 100644
--- a/python/paddle/v2/fluid/tests/book/test_understand_sentiment_lstm.py
+++ b/python/paddle/v2/fluid/tests/book/test_understand_sentiment_lstm.py
@@ -54,17 +54,17 @@ def to_lodtensor(data, place):
     return res
 
 
-def chop_data(data, chop_len=80, batch_len=50):
+def chop_data(data, chop_len=80, batch_size=50):
     data = [(x[0][:chop_len], x[1]) for x in data if len(x[0]) >= chop_len]
 
-    return data[:batch_len]
+    return data[:batch_size]
 
 
 def prepare_feed_data(data, place):
     tensor_words = to_lodtensor(map(lambda x: x[0], data), place)
 
     label = np.array(map(lambda x: x[1], data)).astype("int64")
-    label = label.reshape([50, 1])
+    label = label.reshape([len(label), 1])
     tensor_label = core.LoDTensor()
     tensor_label.set(label, place)
 
@@ -72,33 +72,41 @@
 
 def main():
-    word_dict = paddle.dataset.imdb.word_dict()
-    cost, acc = lstm_net(dict_dim=len(word_dict), class_dim=2)
+    BATCH_SIZE = 100
+    PASS_NUM = 5
 
-    batch_size = 100
-    train_data = paddle.batch(
-        paddle.reader.buffered(
-            paddle.dataset.imdb.train(word_dict), size=batch_size * 10),
-        batch_size=batch_size)
+    word_dict = paddle.dataset.imdb.word_dict()
+    print "loaded word dict successfully"
+    dict_dim = len(word_dict)
+    class_dim = 2
 
-    data = chop_data(next(train_data()))
+    cost, acc = lstm_net(dict_dim=dict_dim, class_dim=class_dim)
+    train_data = paddle.batch(
+        paddle.reader.shuffle(
+            paddle.dataset.imdb.train(word_dict), buf_size=BATCH_SIZE * 10),
+        batch_size=BATCH_SIZE)
     place = core.CPUPlace()
-    tensor_words, tensor_label = prepare_feed_data(data, place)
     exe = Executor(place)
+    exe.run(framework.default_startup_program())
 
-    while True:
-        
outs = exe.run(framework.default_main_program(), - feed={"words": tensor_words, - "label": tensor_label}, - fetch_list=[cost, acc]) - cost_val = np.array(outs[0]) - acc_val = np.array(outs[1]) - - print("cost=" + str(cost_val) + " acc=" + str(acc_val)) - if acc_val > 0.9: - break + for pass_id in xrange(PASS_NUM): + for data in train_data(): + chopped_data = chop_data(data) + tensor_words, tensor_label = prepare_feed_data(chopped_data, place) + + outs = exe.run(framework.default_main_program(), + feed={"words": tensor_words, + "label": tensor_label}, + fetch_list=[cost, acc]) + cost_val = np.array(outs[0]) + acc_val = np.array(outs[1]) + + print("cost=" + str(cost_val) + " acc=" + str(acc_val)) + if acc_val > 0.7: + exit(0) + exit(1) if __name__ == '__main__': diff --git a/python/paddle/v2/fluid/tests/test_ftrl_op.py b/python/paddle/v2/fluid/tests/test_ftrl_op.py new file mode 100644 index 0000000000..f77ac4659a --- /dev/null +++ b/python/paddle/v2/fluid/tests/test_ftrl_op.py @@ -0,0 +1,62 @@ +import unittest +import numpy as np +from op_test import OpTest + + +class TestFTRLOp(OpTest): + def setUp(self): + self.op_type = "ftrl" + w = np.random.random((102, 105)).astype("float32") + g = np.random.random((102, 105)).astype("float32") + sq_accum = np.full((102, 105), 0.1).astype("float32") + linear_accum = np.full((102, 105), 0.1).astype("float32") + lr = np.array([0.01]).astype("float32") + l1 = 0.1 + l2 = 0.2 + lr_power = -0.5 + + self.inputs = { + 'Param': w, + 'SquaredAccumulator': sq_accum, + 'LinearAccumulator': linear_accum, + 'Grad': g, + 'LearningRate': lr + } + self.attrs = { + 'l1': l1, + 'l2': l2, + 'lr_power': lr_power, + 'learning_rate': lr + } + new_accum = sq_accum + g * g + if lr_power == -0.5: + linear_out = linear_accum + g - ( + (np.sqrt(new_accum) - np.sqrt(sq_accum)) / lr) * w + else: + linear_out = linear_accum + g - ((np.power( + new_accum, -lr_power) - np.power(sq_accum, -lr_power)) / lr) * w + + x = (l1 * np.sign(linear_out) - linear_out) + if lr_power == -0.5: + y = (np.sqrt(new_accum) / lr) + (2 * l2) + pre_shrink = x / y + param_out = np.where(np.abs(linear_out) > l1, pre_shrink, 0.0) + else: + y = (np.power(new_accum, -lr_power) / lr) + (2 * l2) + pre_shrink = x / y + param_out = np.where(np.abs(linear_out) > l1, pre_shrink, 0.0) + + sq_accum_out = sq_accum + g * g + + self.outputs = { + 'ParamOut': param_out, + 'SquaredAccumOut': sq_accum_out, + 'LinearAccumOut': linear_out + } + + def test_check_output(self): + self.check_output() + + +if __name__ == "__main__": + unittest.main() diff --git a/python/paddle/v2/fluid/tests/test_gru_unit_op.py b/python/paddle/v2/fluid/tests/test_gru_unit_op.py index f356f6e9ec..501d5aa579 100644 --- a/python/paddle/v2/fluid/tests/test_gru_unit_op.py +++ b/python/paddle/v2/fluid/tests/test_gru_unit_op.py @@ -28,8 +28,8 @@ def relu(x): class TestGRUUnitOp(OpTest): - batch_size = 3 - frame_size = 5 + batch_size = 5 + frame_size = 10 activate = { GRUActivationType.identity: identity, GRUActivationType.sigmoid: sigmoid, @@ -77,7 +77,7 @@ class TestGRUUnitOp(OpTest): c = self.activate[self.attrs['activation']](np.dot(r_h_p, w_c) + g[:, frame_size * 2:]) g = np.hstack((u_r, c)) - h = u * h_p + (1 - u) * c + h = u * c + (1 - u) * h_p self.outputs = { 'Gate': g.astype('float64'), 'ResetHiddenPrev': r_h_p.astype('float64'), @@ -92,10 +92,7 @@ class TestGRUUnitOp(OpTest): self.check_output() def test_check_grad(self): - self.check_grad( - ['Input', 'HiddenPrev', 'Weight'], - ['Hidden', 'ResetHiddenPrev', 'Gate'], - max_relative_error=0.007) 
+        self.check_grad(['Input', 'HiddenPrev', 'Weight'], ['Hidden'])
 
 
 class TestGRUUnitOpWithBias(TestGRUUnitOp):
@@ -104,18 +101,20 @@ class TestGRUUnitOpWithBias(TestGRUUnitOp):
         frame_size = self.frame_size
         super(TestGRUUnitOpWithBias, self).set_inputs()
         self.inputs['Bias'] = np.random.uniform(
-            -0.1, 0.1, (1, frame_size * 3)).astype('float32')
+            -0.1, 0.1, (1, frame_size * 3)).astype('float64')
         self.attrs = {
             'activation': GRUActivationType.identity,
             'gate_activation': GRUActivationType.sigmoid
         }
 
     def test_check_grad(self):
+        self.check_grad(['Input', 'HiddenPrev', 'Weight', 'Bias'], ['Hidden'])
+
+    def test_check_grad_ignore_input(self):
         self.check_grad(
-            ['Input', 'HiddenPrev', 'Weight', 'Bias'], ['Hidden'],
-            max_relative_error=0.007)
+            ['HiddenPrev', 'Weight', 'Bias'], ['Hidden'],
+            no_grad_set=set(['Input']))
 
 
 if __name__ == '__main__':
-    exit(0)  # FIXME(yuyang18): This unittest is not pass. Fix it later
     unittest.main()
diff --git a/python/paddle/v2/fluid/tests/test_initializer.py b/python/paddle/v2/fluid/tests/test_initializer.py
index f2eb79b209..6c20203f8e 100644
--- a/python/paddle/v2/fluid/tests/test_initializer.py
+++ b/python/paddle/v2/fluid/tests/test_initializer.py
@@ -223,5 +223,109 @@ class TestXavierInitializer(unittest.TestCase):
         self.assertEqual(init_op.attr('seed'), 134)
 
 
+class TestMSRAInitializer(unittest.TestCase):
+    def test_uniform_msra_initializer(self):
+        """Test MSRA initializer with uniform distribution
+           for matrix multiply.
+        """
+        program = framework.Program()
+        block = program.global_block()
+        param = block.create_parameter(
+            dtype="float32",
+            shape=[5, 10],
+            lod_level=0,
+            name="param",
+            initializer=initializer.MSRAInitializer())
+        self.assertEqual(len(block.ops), 1)
+        init_op = block.ops[0]
+        self.assertEqual(init_op.type, 'uniform_random')
+        limit = np.sqrt(6.0 / param.shape[0])
+        self.assertAlmostEqual(init_op.attr('min'), -limit, delta=DELTA)
+        self.assertAlmostEqual(init_op.attr('max'), limit, delta=DELTA)
+        self.assertEqual(init_op.attr('seed'), 0)
+
+    def test_uniform_msra_initializer_conv(self):
+        """Test MSRA initializer with uniform distribution
+           for convolutions.
+        """
+        program = framework.Program()
+        block = program.global_block()
+        param = block.create_parameter(
+            dtype="float32",
+            shape=[5, 10, 15, 20],
+            lod_level=0,
+            name="param",
+            initializer=initializer.MSRAInitializer())
+        self.assertEqual(len(block.ops), 1)
+        init_op = block.ops[0]
+        self.assertEqual(init_op.type, 'uniform_random')
+        receptive_field_size = float(15 * 20)
+        limit = np.sqrt(6.0 / (param.shape[1] * receptive_field_size))
+        self.assertAlmostEqual(init_op.attr('min'), -limit, delta=DELTA)
+        self.assertAlmostEqual(init_op.attr('max'), limit, delta=DELTA)
+        self.assertEqual(init_op.attr('seed'), 0)
+
+    def test_normal_msra_initializer(self):
+        """Test MSRA initializer with normal distribution
+           for matrix multiply.
+        """
+        program = framework.Program()
+        block = program.global_block()
+        param = block.create_parameter(
+            dtype="float32",
+            shape=[5, 10],
+            lod_level=0,
+            name="param",
+            initializer=initializer.MSRAInitializer(uniform=False))
+        self.assertEqual(len(block.ops), 1)
+        init_op = block.ops[0]
+        self.assertEqual(init_op.type, 'gaussian_random')
+        std = np.sqrt(2.0 / param.shape[0])
+        self.assertAlmostEqual(init_op.attr('mean'), 0.0, delta=DELTA)
+        self.assertAlmostEqual(init_op.attr('std'), std, delta=DELTA)
+        self.assertEqual(init_op.attr('seed'), 0)
+
+    def test_normal_msra_initializer_conv(self):
+        """Test MSRA initializer with normal distribution
+           for convolutions.
+        """
+        program = framework.Program()
+        block = program.global_block()
+        param = block.create_parameter(
+            dtype="float32",
+            shape=[5, 10, 15, 20],
+            lod_level=0,
+            name="param",
+            initializer=initializer.MSRAInitializer(uniform=False))
+        self.assertEqual(len(block.ops), 1)
+        init_op = block.ops[0]
+        self.assertEqual(init_op.type, 'gaussian_random')
+        receptive_field_size = float(15 * 20)
+        std = np.sqrt(2.0 / (param.shape[1] * receptive_field_size))
+        self.assertAlmostEqual(init_op.attr('mean'), 0.0, delta=DELTA)
+        self.assertAlmostEqual(init_op.attr('std'), std, delta=DELTA)
+        self.assertEqual(init_op.attr('seed'), 0)
+
+    def test_msra_initializer_supplied_arguments(self):
+        """Test the MSRA initializer with supplied arguments.
+        """
+        program = framework.Program()
+        block = program.global_block()
+        block.create_parameter(
+            dtype="float32",
+            shape=[5, 10],
+            lod_level=0,
+            name="param",
+            initializer=initializer.MSRAInitializer(
+                fan_in=12, seed=134))
+        self.assertEqual(len(block.ops), 1)
+        init_op = block.ops[0]
+        self.assertEqual(init_op.type, 'uniform_random')
+        limit = np.sqrt(6.0 / 12)
+        self.assertAlmostEqual(init_op.attr('min'), -limit, delta=DELTA)
+        self.assertAlmostEqual(init_op.attr('max'), limit, delta=DELTA)
+        self.assertEqual(init_op.attr('seed'), 134)
+
+
 if __name__ == '__main__':
     unittest.main()
diff --git a/python/paddle/v2/fluid/tests/test_layers.py b/python/paddle/v2/fluid/tests/test_layers.py
index 3d18e7ce3a..d3dc45742d 100644
--- a/python/paddle/v2/fluid/tests/test_layers.py
+++ b/python/paddle/v2/fluid/tests/test_layers.py
@@ -1,8 +1,8 @@
+import unittest
+
 import paddle.v2.fluid.layers as layers
 import paddle.v2.fluid.nets as nets
 from paddle.v2.fluid.framework import Program
-import paddle.v2.fluid.core as core
-import unittest
 
 
 class TestBook(unittest.TestCase):
@@ -20,7 +20,8 @@ class TestBook(unittest.TestCase):
         avg_cost = layers.mean(x=cost, main_program=program)
         self.assertIsNotNone(avg_cost)
         program.append_backward(avg_cost)
-        print str(program)
+
+        # print str(program)
 
     def test_recognize_digits_mlp(self):
         program = Program()
@@ -49,7 +50,7 @@ class TestBook(unittest.TestCase):
             input=predict, label=label, main_program=program)
         avg_cost = layers.mean(x=cost, main_program=program)
         self.assertIsNotNone(avg_cost)
-        print str(program)
+        # print str(program)
 
     def test_simple_conv2d(self):
         program = Program()
@@ -64,7 +65,7 @@ class TestBook(unittest.TestCase):
             filter_size=[4, 4],
             main_program=program)
 
-        print str(program)
+        # print str(program)
 
     def test_recognize_digits_conv(self):
         program = Program()
@@ -103,7 +104,7 @@ class TestBook(unittest.TestCase):
 
         program.append_backward(avg_cost)
 
-        print str(program)
+        # print str(program)
 
     def test_word_embedding(self):
         program = Program()
@@ -164,7 +165,24 @@ class TestBook(unittest.TestCase):
         avg_cost = layers.mean(x=cost, main_program=program)
         self.assertIsNotNone(avg_cost)
-        print str(program)
+        # print str(program)
+
+    def test_linear_chain_crf(self):
+        program = Program()
+
+        # build these layers in a dedicated program rather than g_program
+        images = layers.data(
+            name='pixel',
+            shape=[784],
+            data_type='float32',
+            main_program=program)
+        label = layers.data(
+            name='label', shape=[1], data_type='int32', main_program=program)
+        hidden = layers.fc(input=images, size=128, main_program=program)
+        crf = layers.linear_chain_crf(
+            input=hidden, label=label, main_program=program)
+
+        # print str(program)
 
 
 if __name__ == '__main__':
diff --git a/python/paddle/v2/fluid/tests/test_linear_chain_crf_op.py b/python/paddle/v2/fluid/tests/test_linear_chain_crf_op.py
index 6f06a66c82..c26634ff20 100644
--- a/python/paddle/v2/fluid/tests/test_linear_chain_crf_op.py
+++ b/python/paddle/v2/fluid/tests/test_linear_chain_crf_op.py
@@ -104,7 +104,7 @@ class TestLinearChainCrfOp(OpTest):
         transition_exps = np.exp(transition)
 
         labels = np.random.randint(
-            low=0, high=TAG_NUM, size=(lod[-1][-1], 1), dtype="int32")
+            low=0, high=TAG_NUM, size=(lod[-1][-1], 1), dtype="int64")
 
         self.inputs = {
             "Emission": (emission, lod),
diff --git a/python/paddle/v2/fluid/tests/test_mnist_if_else_op.py b/python/paddle/v2/fluid/tests/test_mnist_if_else_op.py
new file mode 100644
index 0000000000..8af99005dc
--- /dev/null
+++ b/python/paddle/v2/fluid/tests/test_mnist_if_else_op.py
@@ -0,0 +1,156 @@
+import paddle.v2.fluid.layers as layers
+from paddle.v2.fluid.framework import Program
+from paddle.v2.fluid.executor import Executor
+from paddle.v2.fluid.optimizer import MomentumOptimizer
+import paddle.v2.fluid.core as core
+import paddle.v2 as paddle
+import unittest
+import numpy as np
+
+
+class TestMNISTIfElseOp(unittest.TestCase):
+    def test_raw_api(self):
+        kwargs = {'startup_program': Program(), 'main_program': Program()}
+        image = layers.data(
+            name='x', shape=[784], data_type='float32', **kwargs)
+
+        label = layers.data(name='y', shape=[1], data_type='int64', **kwargs)
+
+        limit = layers.fill_constant_batch_size_like(
+            input=label, dtype='int64', shape=[1], value=5.0, **kwargs)
+
+        cond = layers.less_than(x=label, y=limit, **kwargs)
+        # samples with label < 5 go to the true branch, the rest to false
+        true_image, false_image = layers.split_lod_tensor(
+            input=image, mask=cond, **kwargs)
+
+        true_out = layers.create_tensor(dtype='float32', **kwargs)
+        true_cond = layers.ConditionalBlock([true_image], **kwargs)
+
+        with true_cond.block():
+            hidden = layers.fc(input=true_image, size=100, act='tanh', **kwargs)
+            prob = layers.fc(input=hidden, size=10, act='softmax', **kwargs)
+            layers.assign(input=prob, output=true_out, **kwargs)
+
+        false_out = layers.create_tensor(dtype='float32', **kwargs)
+        false_cond = layers.ConditionalBlock([false_image], **kwargs)
+
+        with false_cond.block():
+            hidden = layers.fc(input=false_image,
+                               size=200,
+                               act='tanh',
+                               **kwargs)
+            prob = layers.fc(input=hidden, size=10, act='softmax', **kwargs)
+            layers.assign(input=prob, output=false_out, **kwargs)
+
+        prob = layers.merge_lod_tensor(
+            in_true=true_out, in_false=false_out, mask=cond, x=image, **kwargs)
+        loss = layers.cross_entropy(input=prob, label=label, **kwargs)
+        avg_loss = layers.mean(x=loss, **kwargs)
+
+        optimizer = MomentumOptimizer(learning_rate=0.001, momentum=0.9)
+        optimizer.minimize(avg_loss, kwargs['startup_program'])
+
+        train_reader = paddle.batch(
+            paddle.reader.shuffle(
+                paddle.dataset.mnist.train(), buf_size=8192),
+            batch_size=200)
+
+        place = core.CPUPlace()
+        exe = Executor(place)
+
+        exe.run(kwargs['startup_program'])
+        PASS_NUM = 100
+        for pass_id in range(PASS_NUM):
+            for data in train_reader():
+                x_data = np.array(map(lambda x: x[0], data)).astype("float32")
+                y_data = np.array(map(lambda x: x[1], data)).astype("int64")
+                y_data = np.expand_dims(y_data, axis=1)
+
+                tensor_x = core.LoDTensor()
+                tensor_x.set(x_data, place)
+
+                tensor_y = core.LoDTensor()
+                tensor_y.set(y_data, place)
+
+                outs = map(np.array,
+                           exe.run(kwargs['main_program'],
+                                   feed={'x': tensor_x,
+                                         'y': tensor_y},
+                                   fetch_list=[avg_loss]))
+                print outs[0]
+                if outs[0] < 1.0:
+                    return
+        self.fail("average loss never dropped below 1.0")
+
+    def test_ifelse(self):
+        kwargs = {'startup_program': Program(), 'main_program': Program()}
+        image = layers.data(
+            name='x', shape=[784], data_type='float32', **kwargs)
+
+        label = layers.data(name='y', shape=[1], data_type='int64', **kwargs)
+
+        limit = layers.fill_constant_batch_size_like(
+            input=label, dtype='int64', shape=[1], value=5.0, **kwargs)
+
+        cond = layers.less_than(x=label, y=limit, **kwargs)
+
+        # IfElse wraps the split/merge pattern written out in test_raw_api
+        ie = layers.IfElse(cond, **kwargs)
+
+        with ie.true_block():
+            true_image = ie.input(image)
+            hidden = layers.fc(input=true_image, size=100, act='tanh', **kwargs)
+            prob = layers.fc(input=hidden, size=10, act='softmax', **kwargs)
+            ie.output(prob)
+
+        with ie.false_block():
+            false_image = ie.input(image)
+            hidden = layers.fc(input=false_image,
+                               size=200,
+                               act='tanh',
+                               **kwargs)
+            prob = layers.fc(input=hidden, size=10, act='softmax', **kwargs)
+            ie.output(prob)
+
+        prob = ie()
+        loss = layers.cross_entropy(input=prob[0], label=label, **kwargs)
+        avg_loss = layers.mean(x=loss, **kwargs)
+
+        optimizer = MomentumOptimizer(learning_rate=0.001, momentum=0.9)
+        optimizer.minimize(avg_loss, kwargs['startup_program'])
+        train_reader = paddle.batch(
+            paddle.reader.shuffle(
+                paddle.dataset.mnist.train(), buf_size=8192),
+            batch_size=200)
+
+        place = core.CPUPlace()
+        exe = Executor(place)
+
+        exe.run(kwargs['startup_program'])
+        PASS_NUM = 100
+        for pass_id in range(PASS_NUM):
+            for data in train_reader():
+                x_data = np.array(map(lambda x: x[0], data)).astype("float32")
+                y_data = np.array(map(lambda x: x[1], data)).astype("int64")
+                y_data = np.expand_dims(y_data, axis=1)
+
+                tensor_x = core.LoDTensor()
+                tensor_x.set(x_data, place)
+
+                tensor_y = core.LoDTensor()
+                tensor_y.set(y_data, place)
+
+                outs = map(np.array,
+                           exe.run(kwargs['main_program'],
+                                   feed={'x': tensor_x,
+                                         'y': tensor_y},
+                                   fetch_list=[avg_loss]))
+                print outs[0]
+                if outs[0] < 1.0:
+                    return
+        self.fail("average loss never dropped below 1.0")
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/python/paddle/v2/fluid/tests/test_variable.py b/python/paddle/v2/fluid/tests/test_variable.py
index a3e60a7517..c3e1f9ac0a 100644
--- a/python/paddle/v2/fluid/tests/test_variable.py
+++ b/python/paddle/v2/fluid/tests/test_variable.py
@@ -1,5 +1,5 @@
 import unittest
-from paddle.v2.fluid.framework import Variable, g_main_program, Program
+from paddle.v2.fluid.framework import g_main_program, Program, convert_np_dtype_to_dtype_
 import paddle.v2.fluid.core as core
 import numpy as np
 
@@ -7,7 +7,7 @@ import numpy as np
 class TestVariable(unittest.TestCase):
     def test_np_dtype_convert(self):
         DT = core.DataType
-        convert = Variable._convert_np_dtype_to_dtype_
+        convert = convert_np_dtype_to_dtype_
         self.assertEqual(DT.FP32, convert(np.float32))
         self.assertEqual(DT.FP16, convert("float16"))
         self.assertEqual(DT.FP64, convert("float64"))