* Fix SRL hang on exit.
* The error occurred when Async Load was enabled in TestDataProvider.
* It happens because DataProvider calls getNextBatchInternal in one thread while the DataProvider is being destructed in another.
* Add a wait routine to the DataProvider destruction path.
* Also fix another bug: destructing TestDataProvider without having read any test data.
Fix #286
* Follow review comments: using a mutex is cool! (The synchronization idea is sketched below.)
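A minimal sketch of the synchronization idea behind this fix, written in Python for illustration only (the actual DataProvider is C++, and every name below is hypothetical): shutdown takes the same lock as the async loader and waits until the loading thread has finished before the object is torn down.

```python
import threading


class AsyncLoader:
    """Illustrative only: shutdown waits for the async loading thread."""

    def __init__(self):
        self._lock = threading.Lock()
        self._done = threading.Condition(self._lock)
        self._loading = False
        self._thread = None
        self.batches = []

    def start_async_load(self):
        with self._lock:
            self._loading = True
        self._thread = threading.Thread(target=self._load_batches)
        self._thread.start()

    def _load_batches(self):
        try:
            # getNextBatchInternal-style work would happen here.
            self.batches = [list(range(10)) for _ in range(3)]
        finally:
            with self._lock:
                self._loading = False
                self._done.notify_all()

    def shutdown(self):
        # The fix: wait until loading is finished before tearing down state.
        with self._lock:
            while self._loading:
                self._done.wait()
        if self._thread is not None:
            self._thread.join()
```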
* Add elementwise math operations
This allows us to use expressions like y = log(1 + exp(x)) in the config (sketched below).
Also added unit tests for ActivationFunction.
* Enforce keyword arguments for non-positional arguments
* Add LogActivation to doc
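A minimal config-level sketch of the kind of expression this enables. The import paths and helper names below (paddle.trainer_config_helpers.math in particular) are assumptions about the v1 config API, so treat this as illustrative rather than the exact interface.

```python
# Illustrative sketch; the math helper module path is an assumption.
from paddle.trainer_config_helpers import data_layer
import paddle.trainer_config_helpers.math as layer_math

x = data_layer(name='x', size=128)

# With elementwise math operations, a softplus-style expression
# y = log(1 + exp(x)) can be written directly on layer outputs.
y = layer_math.log(1 + layer_math.exp(x))
```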
For multiple installations of paddle, there might be multiple versions of the python package at opt/paddle/share/wheels/. We should install the right version.
Ideally, we would remove the wrong versions at install time, but that is not easy to do with cmake.
Change-Id: Ida8a8d60643ad9e42cf1c85776de9122d5ba1392
* Add benchmarks for PaddlePaddle, TensorFlow and Caffe
* ConvProjection to reduce memory for GoogLeNet
* Add unit test for ConvProjection.
1. Add a unit test in test_LayerGrad.
2. Compare ConvProjection with CudnnConvLayer, and also compare concat_layer + img_conv_layer with concat_layer + conv_projection (a config-level sketch of the two equivalent setups follows this list).
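A config-level sketch of the comparison the unit test makes. Exact argument names are recalled from the v1 config helpers and may differ slightly, so treat this as illustrative.

```python
# Illustrative v1-config sketch; exact argument names are assumptions.
from paddle.trainer_config_helpers import (
    data_layer, img_conv_layer, mixed_layer, conv_projection)

img = data_layer(name='image', size=3 * 32 * 32)

# Baseline: a standard convolution layer.
conv_layer_out = img_conv_layer(input=img, filter_size=3, num_channels=3,
                                num_filters=64, stride=1, padding=1)

# The same convolution expressed as a projection inside a mixed_layer, which
# is what ConvProjection enables and what saves memory in GoogLeNet-style nets.
conv_proj_out = mixed_layer(
    input=conv_projection(input=img, filter_size=3, num_channels=3,
                          num_filters=64, stride=1, padding=1),
    size=64 * 32 * 32)
```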
* Reduce cudnn_conv memory and add benchmark document.
1. Use TmpMatrix as the workspace in cudnn_conv to reduce GPU memory. This saves a lot of memory.
2. Add benchmark document.
3. Fix smallnet_mnist_cifar.py in paddle.
* Add job=time and refine cudnn_conv to reduce GPU memory and speed it up
* Refine cudnn_conv and the shared-biases operation in concat_layer and mixed_layer.
* follow comments
* follow comments
* Use unique_ptr to prevent memory leaks in CudnnConvLayer.
* Because a cluster may use many machines to train a model, some parameters can be too small for the ParameterServer, so some of the pservers end up without any ParamBlock (a small worked example follows).
* Also, give a runtime warning when ports_num or ports_num_for_sparse is too large.
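A back-of-the-envelope sketch of why this happens, with made-up numbers and a simplified round-robin splitting rule (not Paddle's actual partitioning code):

```python
# Simplified illustration: split a parameter into fixed-size blocks and deal
# them round-robin to pservers.  The rule and numbers are illustrative only.
def blocks_per_pserver(param_size, block_size, num_pservers):
    num_blocks = (param_size + block_size - 1) // block_size
    counts = [0] * num_pservers
    for b in range(num_blocks):
        counts[b % num_pservers] += 1
    return counts

# A tiny parameter (e.g. a 256-element bias) split into 128-element blocks
# across 8 pservers leaves 6 of them without any ParamBlock.
print(blocks_per_pserver(param_size=256, block_size=128, num_pservers=8))
# -> [1, 1, 0, 0, 0, 0, 0, 0]
```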
* Fix an interface bug of block_expand_layer and add a unit test
* Auto-compute num_channels
* The default value of num_channels is None
* Adjust the input order of block_expand (a usage sketch follows this list)
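A minimal usage sketch reflecting the interface change: num_channels can now be omitted (it defaults to None and is inferred from the input), and the remaining settings are passed as keyword arguments. Argument names are recalled from the v1 config helpers and may differ slightly.

```python
# Illustrative sketch; argument names are assumptions based on the v1 helpers.
from paddle.trainer_config_helpers import data_layer, block_expand_layer

img = data_layer(name='image', size=1 * 28 * 28)

# num_channels now defaults to None and is inferred from the input layer,
# so it no longer has to be passed explicitly.
blocks = block_expand_layer(input=img,
                            block_x=4, block_y=4,
                            stride_x=4, stride_y=4)
```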
* Add a runtime check of sparse input data for sparse layers,
to avoid invalid data access at the pserver end while doing prefetch
* The remote sparse design supports both binary sparse and float sparse inputs (a data-provider sketch follows)
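A small data-provider sketch of the two remote-sparse input formats. The PyDataProvider2 type names (sparse_binary_vector, sparse_vector, integer_value) are recalled from the v1 API and should be double-checked against the installed version.

```python
# Illustrative sketch; type names are recalled from the v1 PyDataProvider2 API.
from paddle.trainer.PyDataProvider2 import (
    provider, sparse_binary_vector, sparse_vector, integer_value)

DICT_DIM = 10000


@provider(input_types=[sparse_binary_vector(DICT_DIM),  # binary sparse: ids only
                       sparse_vector(DICT_DIM),         # float sparse: (id, value)
                       integer_value(2)])               # label
def process(settings, filename):
    # Binary sparse input: just the non-zero indices.
    binary_ids = [3, 17, 256]
    # Float sparse input: (index, value) pairs.
    float_vals = [(3, 0.5), (17, 1.25), (256, 0.1)]
    yield binary_ids, float_vals, 1
```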