Commit Graph

4610 Commits (e0d4e04bdd51f3c401c13c09f866f232841655df)

Author        SHA1        Date         Message
Tao Luo       e0d4e04bdd  6 years ago  fix some compiler warning
Tao Luo       8ea13e336a  6 years ago  add in_num_col_dims for fc
Wu Yi         9f33593910  6 years ago  human readable memory warns (#14361)
Tao Luo       9eb0ab1db3  6 years ago  Merge pull request #14384 from tensor-tang/refine/lrn
Qiao Longfei  e65cbd3b06  6 years ago  Merge pull request #14387 from jacquesqiao/lookup_sparse_table_add_test_mode
Qiao Longfei  6cf8f24b1b  6 years ago  Merge pull request #14389 from jacquesqiao/fix_sgd_op_optimize_sparse_table
Zeng Jinle    7066b3850a  6 years ago  Merge pull request #14395 from sneaxiy/fix_num_threads_in_fast_pe
Xin Pan       10ab177f89  6 years ago  Merge pull request #14403 from PaddlePaddle/revert-14337-prv-dam-softmax
Yan Chunwei   9f252e0032  6 years ago  Combine Inference Analysis with IR (#13914)
Tao Luo       5b9c62faee  6 years ago  Revert "Softmax op optimization for inference "
Tao Luo       6490bb2765  6 years ago  Merge pull request #14337 from jczaja/prv-dam-softmax
Zeng Jinle    38d32c98b8  6 years ago  merge develop
sneaxiy       eb18d532a5  6 years ago  fix num_threads in fast_pe
chengduo      9f68e9a7fe  6 years ago  fix auc op (#14385)
Qiao Longfei  efb5c03f60  6 years ago  sgd_op optimize selected rows do not enforce id < height
Qiao Longfei  51f3838f96  6 years ago  add log for not exist code
Qiao Longfei  7aa8b2ccf2  6 years ago  optimize code
Qiao Longfei  8d205c853c  6 years ago  add is_test for lookup_sparse_table
tensor-tang   b4dfba1779  6 years ago  refine lrn_op cpu forward and speedup
tensor-tang   1be85d011d  6 years ago  add mkl vsqr and vpow
Yibing Liu    6c7b64cc20  6 years ago  Support softmax return in softmax_with_cross_entropy (#14367)
Tao Luo       6c32945556  6 years ago  Merge pull request #14372 from luotao1/speedup_analysis
ruri          4a55fb5f5b  6 years ago  Add density_prior_box_op (#14226)
Tao Luo       668ae523d2  6 years ago  speedup DetectPatterns
Yan Chunwei   9a6e239281  6 years ago  fix mac graph detector sort (#14356)
Qiao Longfei  c27554ac33  6 years ago  Merge pull request #14336 from jacquesqiao/add_bilinear_tensor_product_layer
Tao Luo       991a9f7b72  6 years ago  Merge pull request #14358 from NHZlX/add_serial_and_filter_logs
Yibing Liu    bd2943788b  6 years ago  Fix gather & stack op (#14355)
Yu Yang       8f9bfad246  6 years ago  perf(compile): speed up reduce_op compile by splitting files (#14294)
nhzlx         397de907ed  6 years ago  merge develops
nhzlx         d6ff006903  6 years ago  add serial to trt test and do not print log for unused trt logs
Jacek Czaja   03299ed46c  6 years ago  - Fix to linking for GPU builds of softmax inference
Jacek Czaja   0756343767  6 years ago  - Fix GPU compilation
Jacek Czaja   d332326847  6 years ago  - Added unit tests for softmax is_test=True op
Jacek Czaja   c1fccc29c1  6 years ago  - Noise adding removed for Test phase of softmax
Tao Luo       573e68eb40  6 years ago  Merge pull request #14348 from luotao1/speedup_analysis
Xin Pan       ff28b1ffc0  6 years ago  Merge pull request #14071 from barrierye/add_similarity_focus_op
li099         688ed60116  6 years ago  Add lod tensor array to tensor op (#13990)
chengduo      6c6e638550  6 years ago  Add InferVarType for some op (#14201)
Kaipeng Deng  0b38822624  6 years ago  Merge pull request #14345 from heavengate/fix_grid_sampler
Tao Luo       433fc7c1d4  6 years ago  skip mkldnn related pass when use_mkldnn=false
Qiyang Min    698698f2fa  6 years ago  Merge branch 'develop' into fix_vlog
qingqing01    abe209234f  6 years ago  Exhaustive search for cuDNN conv. (#14286)
Kaipeng Deng  f215534ecf  6 years ago  Merge pull request #14205 from heavengate/nearest_interp
dengkaipeng   72108d8dbe  6 years ago  fix win compile error: EigenTenor * float unsupport. test=develop
tensor-tang   22125ebaef  6 years ago  Merge pull request #14321 from tensor-tang/fea/jit/vscal
Tao Luo       f1046d7e37  6 years ago  Merge pull request #14335 from wojtuss/wojtuss/add-graph-viz
Tao Luo       34e9e59f4a  6 years ago  Merge pull request #14333 from kbinias/change-hardcoded-format-and-bump-mkldnn-version
Qiao Longfei  3f91e0f001  6 years ago  update API.spec
minqiyang     87450b9ad4  6 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix_vlog