Commit Graph

629 Commits (a4024a5f3dc852bc955f8c4c7aa30750d8da4c6f)

Author          SHA1        Date         Message
tensor-tang     89cb3a249c  7 years ago  follow comments, refine comment and function name
tensor-tang     adf79faaca  7 years ago  Merge remote-tracking branch 'upstream/develop' into mkl_packed
tensor-tang     df2b054b13  7 years ago  follow comments refine code
Yu Yang         15e8c80ee0  7 years ago  Rename API of DeviceContext (#7055)
tensor-tang     4360615850  7 years ago  fix compile error
tensor-tang     8209103551  7 years ago  follow comments and refine code
tensor-tang     0b080a42da  7 years ago  add recurrent layer header
tensor-tang     0f8aad2934  7 years ago  fix compile error
tensor-tang     624e3e5208  7 years ago  add MKL Packed RecurrentLayer
tensor-tang     9c27c13e46  7 years ago  follow comments using macro to separate the original implements
tensor-tang     84cb542c13  7 years ago  use intel openmp to speedup seq2batch when WITH_MKL
dangqingqing    0fce0fe698  7 years ago  Reduce memory usage in conv layer and RoI layer for mobile inference.
Tao Luo         a34fc8b36b  7 years ago  Merge pull request #6213 from tensor-tang/mkldnn_lrn
whs             e09e21beee  7 years ago  Merge pull request #6188 from wanghaoshuang/conv_fix
tensor-tang     54205c99b6  7 years ago  add MKLDNNLRNLayer
wanghaoshuang   b25ee3ae60  7 years ago  Fix ConvTransProjection bug.
guosheng        fe6af6b6ac  7 years ago  Enhance the AvgPooling to support optional exclude-mode
guosheng        6ed135413a  7 years ago  Fix useGpu in HierarchicalSigmoidLayer
peterzhang2029  bb61e90ffc  7 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into hsigmoid_gpu
Wang Meng       95cdbfec19  7 years ago  Merge pull request #4859 from will-am/factorization_machine_layer
peterzhang2029  539462839b  7 years ago  bug fix when using hsigmoid with gpu
peterzhang2029  cda3a7747a  7 years ago  bug fix when using hsigmoid with gpu
dangqingqing    a21fe4ac0d  7 years ago  Fix bug in RoI pooling.
dangqingqing    cc9a761a87  7 years ago  Fix bug in RoI pooling.
wangmeng28      89e63b138f  7 years ago  Merge remote-tracking branch 'upstream/develop' into factorization_machine_layer
Cao Ying        657776012b  7 years ago  Merge pull request #5692 from peterzhang2029/add_bn_eq
tensor-tang     63ee7290f2  7 years ago  remove the tmp buffer
peterzhang2029  90e05a4b8c  7 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add_bn_eq
peterzhang2029  5502abb95b  7 years ago  refine docstrings
Luo Tao         67fa0de2a7  7 years ago  fix some warning with MKLDNN related codes and etc
wangmeng28      74a699a72e  7 years ago  change clone to resizeOrCreate in fm layer
wangmeng28      6fed6f2079  7 years ago  Add support of sparse_binary_vector as input for fm layer
tensor-tang     c961fbf09a  7 years ago  change the condition to reset the forward in MKLDNNLayer
tensor-tang     a8eeef86ac  7 years ago  make MKLDNNLayer input grad as a vector
tensor-tang     bc0d255796  7 years ago  make MKLDNNLayer input value as a vector
tensor-tang     3117d97784  7 years ago  add inputChannel in resetInValue for concat layer
tensor-tang     c397599dfd  7 years ago  remove weight and bias in MKLDNN reset function, since not all layers have weight and bias.
tensor-tang     a9490a1053  7 years ago  make output channels changeable in reshape function
wangmeng28      5392a503a7  7 years ago  Merge remote-tracking branch 'upstream/develop' into factorization_machine_layer
wangmeng28      d5a6c81dc5  7 years ago  Update docs for factorization machine layer
wangmeng28      5ee63bb67c  7 years ago  Merge remote-tracking branch 'upstream/develop' into factorization_machine_layer
peterzhang2029  9580c45077  7 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add_bn_eq
caoying03       29fc94b265  7 years ago  Merge branch 'develop' into l2_distance
ranqiu92        093c526d43  7 years ago  Merge pull request #5724 from ranqiu92/dot_product
caoying03       4772b78ced  7 years ago  add config_helper.
Tao Luo         56f28a0f65  7 years ago  Merge pull request #5732 from tensor-tang/rename
Tao Luo         9b56074083  7 years ago  Merge pull request #5705 from tensor-tang/mkldnn_concat
ranqiu          2e1cd3313d  7 years ago  Update dot_prod_layer
tensor-tang     f5df46e1a4  7 years ago  rename all Mkldnn to MKLDNN
caoying03       dfc5d1f19a  7 years ago  add the l2 distance layer.