Commit Graph

72 Commits (f46257fa4a57ab0afb9c155e8faabf3adca2e775)

Author | SHA1 | Message | Date
Yibing Liu | f46257fa4a | Merge branch 'develop' of upstream into add_lstm_doc | 7 years ago
Yibing Liu | f050390754 | Polish the doc of dynamic_lstm | 7 years ago
ying | e043c2ce45 | Merge branch 'develop' into wraper_for_l2_normalize | 7 years ago
guosheng | 66054984cd | Merge branch 'develop' of https://github.com/PaddlePaddle/paddle into add-dot_product_attention | 7 years ago
guosheng | 9bcb2d268e | Add python wrapper for matmul_op and dot_product_attention | 7 years ago
ying | 87a59d65d6 | Merge branch 'develop' into wraper_for_l2_normalize | 7 years ago
Yibing Liu | aab4cfeb65 | Add doc for dynamic_lstm python api | 7 years ago
caoying03 | 6497bff901 | add python wrapper for l2 normalize. | 7 years ago
dzhwinter | b9b75377a2 | Feature/hooks (#7513) | 7 years ago
guosheng | 234013a9d7 | Add python wrapper for matmul_op | 7 years ago
guosheng | ef129718ea | Merge branch 'develop' of https://github.com/PaddlePaddle/paddle into add-python-glu | 7 years ago
guosheng | c083a60d7a | Add python split and glu | 7 years ago
dzhwinter | 5ad1aef051 | "cudnn operators change to cudnn kernel" (#6660) | 7 years ago
ying | 8ac744f372 | add wrapper for elementwise math operator. | 7 years ago
Abhinav Arora | ea782e38a6 | Fix typo in batch norm bias initialization (#7449) | 7 years ago
QI JUN | 87f9b58363 | set stop gradient for mask in dropout layer (#7390) | 7 years ago
QI JUN | fe341bacde | refine batch norm python layer (#7348) | 7 years ago
yangyaming | 67b8c09210 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix-7211 | 7 years ago
yangyaming | 872b1c8884 | Stop gradient when pool_type=='max' | 7 years ago
Siddharth Goyal | 691b5cac61 | Fix equation for gru op (#7274) | 7 years ago
Cao Ying | 0b6c5e6d25 | Merge pull request #7223 from PaddlePaddle/emailweixu-patch-1 | 7 years ago
Siddharth Goyal | f3c42f607c | Add doc for gru_unit op (in fluid) (#7151) | 7 years ago
emailweixu | ebc616b0f9 | Fix comment to layers.fc | 7 years ago
yangyaming | 60fecce43d | Fix unit test for lstm_unit. | 7 years ago
yangyaming | c0f6f492bc | Add shape info for arguments. | 7 years ago
yangyaming | d6ec963047 | Minor correction. | 7 years ago
yangyaming | f0e797e5b7 | Doc fix and enhancement for lstm_unit python wrapper. | 7 years ago
Siddharth Goyal | 87f46ebb36 | Add squared error layers doc (#6862) | 7 years ago
Yibing Liu | 3a93fa7708 | Merge pull request #7069 from kuke/fix_cross_entropy_doc | 7 years ago
chengduo | f9a1229666 | Merge pull request #6850 from chengduoZH/feature/conv2d_python_doc | 7 years ago
chengduoZH | 3d2b2d408f | refine doc | 7 years ago
Yibing Liu | e2c2652fc0 | amend comments in cross_entropy_op | 7 years ago
Yibing Liu | 4177e80545 | Add line feed character in the doc of cross_entropy | 7 years ago
Yibing Liu | c67c54a8e7 | Polish the doc of cross_entropy | 7 years ago
fengjiayi | bff0cbfcd3 | Merge pull request #7025 from JiayiFeng/rename_output_of_softmax_and_activitions | 7 years ago
fengjiayi | e41a71cea8 | fix errors | 7 years ago
guosheng | 97d47ca3fc | Add python wrapper for reduce_max and reduce_min | 7 years ago
chengduoZH | 9e7c068677 | fix embedding example | 7 years ago
chengduoZH | 1d936f1dfa | refine | 7 years ago
Luo Tao | 36acbba674 | Merge branch 'develop' into seq_pool_doc | 7 years ago
Guo Sheng | c5a2672a07 | Merge pull request #6779 from guoshengCS/add-python-reduceMean | 7 years ago
Luo Tao | f3fc8de1d5 | add doc for sequence_first/last_step | 7 years ago
caoying03 | 852cd544a9 | fix latex equation in fluid fc layer. | 7 years ago
Luo Tao | d7a9bb6e19 | add python wrap for sequence_first/last_step | 7 years ago
Cao Ying | 298dc8958d | Merge pull request #6792 from lcy-seso/refine_doc | 7 years ago
Abhinav Arora | 91911f4b56 | Fix documentation of embedding layer (#6854) | 7 years ago
chengduoZH | e902c36cdf | add conv2d_python doc | 7 years ago
caoying03 | a74db488f7 | follow comments. | 7 years ago
chengduoZH | dcf5e948b0 | remove conflict | 7 years ago
caoying03 | ebe4425ffa | Merge branch 'develop' into refine_doc | 7 years ago