Merge branch 'develop' into conv

avx_docs
Luo Tao 8 years ago
commit d114d8976a

@ -3,7 +3,7 @@ cmake_minimum_required(VERSION 2.8)
 project(paddle CXX C)
 set(PADDLE_MAJOR_VERSION 0)
 set(PADDLE_MINOR_VERSION 9)
-set(PADDLE_PATCH_VERSION 0a0)
+set(PADDLE_PATCH_VERSION 0)
 set(PADDLE_VERSION ${PADDLE_MAJOR_VERSION}.${PADDLE_MINOR_VERSION}.${PADDLE_PATCH_VERSION})
 set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake")

@ -0,0 +1,69 @@
# Release v0.9.0
## New Features:
* New Layers (see the config sketch after this list)
  * bilinear interpolation layer.
  * spatial pyramid pooling layer.
  * de-convolution layer.
  * maxout layer.
* Support rectangular padding, stride, window and input for the pooling operation.
* Add `--job=time` in trainer, which can be used to print time info without building with the cmake option `-DWITH_TIMER=ON`.
* Expose cost_weight/nce_layer in `trainer_config_helpers`.
* Add FAQ, concepts, and hierarchical RNN docs.
* Add Bidi-LSTM and DB-LSTM to the quick start demo (@alvations).
* Add usage track scripts.
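
For orientation, a sketch of wiring the new layers in a model config; the helper names come from this release's `trainer_config_helpers`, while the exact keyword arguments and sizes are assumptions:

```python
# Hypothetical config snippet using the new maxout and bilinear layers.
from paddle.trainer_config_helpers import *

data = data_layer(name='image', size=64 * 64)
conv = img_conv_layer(input=data, filter_size=3, num_channels=1,
                      num_filters=8, act=ReluActivation())
mo = maxout_layer(input=conv, groups=2)        # new maxout layer
up = bilinear_interp_layer(input=mo,           # new bilinear interpolation layer
                           out_size_x=128, out_size_y=128)
```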
## Improvements
* Add Travis-CI for Mac OS X. Enable the swig unit test in Travis-CI. Skip Travis-CI when only docs are changed.
* Add code coverage tools.
* Refine the convolution layer to speed it up and reduce GPU memory usage.
* Speed up PyDataProvider2.
* Add Ubuntu deb package build scripts.
* Make Paddle use the git-flow branching model.
* Allow PServer to run with no parameter blocks.
## Bug Fixes
* Add zlib link to py_paddle.
* Add a runtime check of sparse input data for sparse layers.
* Fix a bug in sparse matrix multiplication.
* Fix the floating-point overflow problem of tanh.
* Fix some nvcc compile options.
* Fix a bug when yielding dictionaries in DataProvider.
* Fix SRL demo hang on exit.
# Release v0.8.0beta.1
New features:

* Mac OS X is supported by source code. #138
* Both GPU and CPU versions of PaddlePaddle are supported.
* Support CUDA 8.0.
* Enhance `PyDataProvider2`
  * Add dictionary yield format: `PyDataProvider2` can yield a dictionary whose keys are data_layer names and whose values are the corresponding features (a minimal sketch follows this list).
  * Add `min_pool_size` to control the memory pool in the provider.
* Add `deb` install package & docker image for no_avx machines.
  * Especially for cloud computing and virtual machines.
* Automatically disable `avx` instructions in cmake when the machine's CPU does not support them.
* Add Parallel NN API in `trainer_config_helpers`.
* Add Travis CI for GitHub.
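
For reference, a minimal sketch of the dictionary yield format and `min_pool_size` described above, assuming a hypothetical data file with one `<label>\t<word ids>` pair per line (the layer names and sizes are illustrative only):

```python
# Hypothetical provider; keys of the yielded dict must match data_layer names.
from paddle.trainer.PyDataProvider2 import provider, integer_value, \
    integer_value_sequence


@provider(input_types={'word': integer_value_sequence(30000),
                       'label': integer_value(2)},
          min_pool_size=1000)  # buffer at most ~1000 samples in memory
def process(settings, file_name):
    with open(file_name) as f:
        for line in f:
            label, words = line.strip().split('\t')
            yield {'word': [int(w) for w in words.split()],
                   'label': int(label)}
```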
Bug fixes:

* Fix several bugs in `trainer_config_helpers` and complete its unit tests.
* Check whether PaddlePaddle is installed before running unit tests.
* Fix bugs on GTX-series GPUs.
* Fix a bug in MultinomialSampler.

More documentation has also been written since the last release.
# Release v0.8.0beta.0
PaddlePaddle v0.8.0beta.0 release. This is a pre-release version; the install package is not yet stable.

@ -0,0 +1,9 @@
This dataset consists of electronics product reviews associated with
binary labels (positive/negative) for sentiment classification.
The preprocessed data can be downloaded with the script `get_data.sh`.
The data was derived from reviews_Electronics_5.json.gz at
http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz
If you want to process the raw data yourself, use the script `proc_from_raw_data/get_data.sh`.

@ -17,14 +17,11 @@ set -e
DIR="$( cd "$(dirname "$0")" ; pwd -P )" DIR="$( cd "$(dirname "$0")" ; pwd -P )"
cd $DIR cd $DIR
echo "Downloading Amazon Electronics reviews data..." # Download the preprocessed data
# http://jmcauley.ucsd.edu/data/amazon/ wget http://paddlepaddle.bj.bcebos.com/demo/quick_start_preprocessed_data/preprocessed_data.tar.gz
wget http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz
echo "Downloading mosesdecoder..." # Extract package
#https://github.com/moses-smt/mosesdecoder tar zxvf preprocessed_data.tar.gz
wget https://github.com/moses-smt/mosesdecoder/archive/master.zip
unzip master.zip # Remove compressed package
rm master.zip rm preprocessed_data.tar.gz
echo "Done."

@ -1,2 +0,0 @@
the device is cute , but that 's just about all that 's good. the specs are what you 'd expect : it 's a wifi mic , with some noise filter options. the app has the option to upload your baby 's name and photo , which is a cutesy touch. but the app is otherwise unstable and useless unless you upgrade for $ 60 / year.set up involves downloading the app , turning on the mic , switching your phone to the wifi network of the mic , telling the app your wifi settings , switching your wifi back to your home router. the app is then directly connected to your mic.the app is adware ! the main screen says " cry notifications on / off : upgrade to evoz premium and receive a text message of email when your baby is crying " .but the adware points out an important limitation , this monitor is only intended to be used from your home network. if you want to access it remotely , get a webcam. this app would make a lot more sense of the premium features were included with the hardware .
don 't be fooled by my one star rating. if there was a zero , i would have selected it. this product was a waste of my money.it has never worked like the company said it supposed to. i only have one device , an iphone 4gs. after charging the the iphone mid way , the i.sound portable power max 16,000 mah is completely drained. the led light no longer lit up. when plugging the isound portable power max into a wall outlet to charge , it would charge for about 20-30 minutes and then all four battery led indicator lit up showing a full charge. i would leave it on to charge for the full 8 hours or more but each time with the same result upon using. don 't buy this thing. put your money to good use elsewhere .

@ -16,10 +16,26 @@
 # 1. size of pos : neg = 1:1.
 # 2. size of testing set = min(25k, len(all_data) * 0.1), the rest is the training set.
 # 3. distinct train set and test set.
+# 4. build dict

 set -e

+DIR="$( cd "$(dirname "$0")" ; pwd -P )"
+cd $DIR
+
+# Download data
+echo "Downloading Amazon Electronics reviews data..."
+# http://jmcauley.ucsd.edu/data/amazon/
+wget http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz
+echo "Downloading mosesdecoder..."
+# https://github.com/moses-smt/mosesdecoder
+wget https://github.com/moses-smt/mosesdecoder/archive/master.zip
+unzip master.zip
+rm master.zip
+
+##################
+# Preprocess data
+echo "Preprocess data..."
 export LC_ALL=C
 UNAME_STR=`uname`

@ -29,11 +45,11 @@ else
   SHUF_PROG='gshuf'
 fi

-mkdir -p data/tmp
-python preprocess.py -i data/reviews_Electronics_5.json.gz
+mkdir -p tmp
+python preprocess.py -i reviews_Electronics_5.json.gz

 # uniq and shuffle
-cd data/tmp
-echo 'uniq and shuffle...'
+cd tmp
+echo 'Uniq and shuffle...'
 cat pos_*|sort|uniq|${SHUF_PROG}> pos.shuffed
 cat neg_*|sort|uniq|${SHUF_PROG}> neg.shuffed

@ -53,11 +69,11 @@ cat train.pos train.neg | ${SHUF_PROG} >../train.txt
 cat test.pos test.neg | ${SHUF_PROG} >../test.txt
 cd -

-echo 'data/train.txt' > data/train.list
-echo 'data/test.txt' > data/test.list
+echo 'train.txt' > train.list
+echo 'test.txt' > test.list

 # use 30k dict
-rm -rf data/tmp
-mv data/dict.txt data/dict_all.txt
-cat data/dict_all.txt | head -n 30001 > data/dict.txt
+rm -rf tmp
+mv dict.txt dict_all.txt
+cat dict_all.txt | head -n 30001 > dict.txt

-echo 'preprocess finished'
+echo 'Done.'

@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 """
-1. (remove HTML before or not)tokensizing
+1. Tokenize the words and punctuation
 2. pos sample : rating score 5; neg sample: rating score 1-2.

 Usage:

@ -76,7 +76,11 @@ def tokenize(sentences):
     sentences : a list of input sentences.
     return: a list of processed text.
     """
-    dir = './data/mosesdecoder-master/scripts/tokenizer/tokenizer.perl'
+    dir = './mosesdecoder-master/scripts/tokenizer/tokenizer.perl'
+    if not os.path.exists(dir):
+        sys.exit(
+            "The ./mosesdecoder-master/scripts/tokenizer/tokenizer.perl does not exist."
+        )
     tokenizer_cmd = [dir, '-l', 'en', '-q', '-']
     assert isinstance(sentences, list)
     text = "\n".join(sentences)

@ -104,7 +108,7 @@ def tokenize_batch(id):
         num_batch, instance, pre_fix = parse_queue.get()
         if num_batch == -1:  ### parse_queue finished
             tokenize_queue.put((-1, None, None))
-            sys.stderr.write("tokenize theread %s finish\n" % (id))
+            sys.stderr.write("Thread %s finished\n" % (id))
             break
         tokenize_instance = tokenize(instance)
         tokenize_queue.put((num_batch, tokenize_instance, pre_fix))
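
For context, the hunks above surround the Moses tokenizer invocation; a minimal sketch of how such a pipe-based call typically looks (the subprocess plumbing here is an assumption, not quoted from the file):

```python
# Sketch: feed newline-joined sentences to tokenizer.perl over a pipe.
from subprocess import Popen, PIPE


def tokenize_sketch(sentences,
                    tokenizer='./mosesdecoder-master/scripts/tokenizer/tokenizer.perl'):
    tokenizer_cmd = [tokenizer, '-l', 'en', '-q', '-']
    proc = Popen(tokenizer_cmd, stdin=PIPE, stdout=PIPE)
    out, _ = proc.communicate("\n".join(sentences).encode('utf-8'))
    # one tokenized sentence per output line
    return out.decode('utf-8').strip().split('\n')
```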

@ -14,10 +14,10 @@
 # limitations under the License.

 set -e
 wget http://www.cs.upc.edu/~srlconll/conll05st-tests.tar.gz
-wget https://www.googledrive.com/host/0B7Q8d52jqeI9ejh6Q1RpMTFQT1k/semantic_role_labeling/verbDict.txt --no-check-certificate
-wget https://www.googledrive.com/host/0B7Q8d52jqeI9ejh6Q1RpMTFQT1k/semantic_role_labeling/targetDict.txt --no-check-certificate
-wget https://www.googledrive.com/host/0B7Q8d52jqeI9ejh6Q1RpMTFQT1k/semantic_role_labeling/wordDict.txt --no-check-certificate
-wget https://www.googledrive.com/host/0B7Q8d52jqeI9ejh6Q1RpMTFQT1k/semantic_role_labeling/emb --no-check-certificate
+wget http://paddlepaddle.bj.bcebos.com/demo/srl_dict_and_embedding/verbDict.txt
+wget http://paddlepaddle.bj.bcebos.com/demo/srl_dict_and_embedding/targetDict.txt
+wget http://paddlepaddle.bj.bcebos.com/demo/srl_dict_and_embedding/wordDict.txt
+wget http://paddlepaddle.bj.bcebos.com/demo/srl_dict_and_embedding/emb
 tar -xzvf conll05st-tests.tar.gz
 rm conll05st-tests.tar.gz
 cp ./conll05st-release/test.wsj/words/test.wsj.words.gz .

@ -0,0 +1,14 @@
ABOUT
=======

PaddlePaddle is an easy-to-use, efficient, flexible and scalable deep learning platform,
which was originally developed by Baidu scientists and engineers for the purpose of applying deep learning to many products at Baidu.

PaddlePaddle is now open source but far from complete, and it is intended to be built upon, improved, scaled, and extended.
We hope to build an active open source community, both by providing feedback and by actively contributing to the source code.

Credits
--------

We owe many thanks to `all contributors and developers <https://github.com/PaddlePaddle/Paddle/blob/develop/authors>`_ of PaddlePaddle!

@ -1,7 +0,0 @@
Algorithm Tutorial
==================
.. toctree::
   :maxdepth: 1

   rnn/rnn.rst

@ -1 +0,0 @@
../../demo/sentiment_analysis/bi_lstm.jpg

@ -1 +0,0 @@
../../demo/text_generation/encoder-decoder-attention-model.png

@ -1,5 +1,5 @@
-DataProvider Introduction
-=========================
+Introduction
+==============

 DataProvider is a module that loads training or testing data into CPU or GPU
 memory for the following training or testing process.

@ -1,5 +1,5 @@
-How to use PyDataProvider2
-==========================
+PyDataProvider2
+=================

 We highly recommend that users use PyDataProvider2 to provide training or testing
 data to PaddlePaddle. The user only needs to focus on how to read a single

@ -0,0 +1,36 @@
API
====

DataProvider API
----------------

.. toctree::
   :maxdepth: 1

   data_provider/index.rst
   data_provider/pydataprovider2.rst

Model Config API
----------------

.. toctree::
   :maxdepth: 1

   trainer_config_helpers/index.rst
   trainer_config_helpers/optimizers.rst
   trainer_config_helpers/data_sources.rst
   trainer_config_helpers/layers.rst
   trainer_config_helpers/activations.rst
   trainer_config_helpers/poolings.rst
   trainer_config_helpers/networks.rst
   trainer_config_helpers/evaluators.rst
   trainer_config_helpers/attrs.rst

Applications API
----------------

.. toctree::
   :maxdepth: 1

   predict/swig_py_paddle_en.rst

@ -1,5 +1,5 @@
-Python Prediction API
-=====================
+Python Prediction
+==================

 PaddlePaddle offers a set of clean prediction interfaces for Python with the help of
 SWIG. The main steps to predict values in Python are:
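
A rough sketch of that flow with the legacy `py_paddle` SWIG bindings follows; the config path, model directory, and input dimension are placeholders, not taken from this diff:

.. code-block:: python

   # Sketch: load a trained model and run one forward pass for prediction.
   from py_paddle import swig_paddle, DataProviderConverter
   from paddle.trainer.PyDataProvider2 import dense_vector
   from paddle.trainer.config_parser import parse_config

   swig_paddle.initPaddle("--use_gpu=0")
   conf = parse_config("trainer_config.py", "is_predict=1")  # placeholder config
   network = swig_paddle.GradientMachine.createFromConfigProto(conf.model_config)
   network.loadParameters("output/pass-00000")  # placeholder model directory
   converter = DataProviderConverter([dense_vector(784)])  # assumed input size
   output = network.forwardTest(converter([[[0.0] * 784]]))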

@ -0,0 +1,5 @@
Parameter Attributes
=======================
.. automodule:: paddle.trainer_config_helpers.attrs
    :members:

Some files were not shown because too many files have changed in this diff.