Commit Graph

22187 Commits (4c8feae43d226646397f812b945a4b5b07e5a6f9)

Author  SHA1  Message  Date
dengkaipeng  db8ff57a61  remove useless code and update doc. test=develop  6 years ago
dengkaipeng  577a92d992  use typename DeviceContext. test=develop  6 years ago
dengkaipeng  0c4acc8305  imporve yolo loss implement. test=develop  6 years ago
dengkaipeng  2fbfef2ec9  fix no box expression. test=develop  6 years ago
dengkaipeng  c0fa8d2eec  use L1Loss for w, h. test=develop  6 years ago
dengkaipeng  3841983aa0  fix division error in mean process. test=develop  6 years ago
dengkaipeng  192d293854  use stable Sigmoid Cross Entropy implement. test=develop  6 years ago
Tao Luo  245b1f0579  Merge pull request #15570 from luotao1/bert  6 years ago
tink2123  909f864a9b  remove unnecessary flags  6 years ago
JiabinYang  bb881199f2  test=develop, polish code and fix wrong change in /paddle/fluid/inference/utils/CMakeLists.txt  6 years ago
tink2123  6961a94e94  avoid out_size less than 1  6 years ago
Jiabin Yang  075df09f86  Merge pull request #15470 from JiabinYang/feature/imperative  6 years ago
Qiyang Min  b69996c2d3  Merge pull request #15558 from velconia/imperative_resnet  6 years ago
luotao1  5504425eb3  fix compiler error, use len20 dataset for bert  6 years ago
Yan Chunwei  655179089f  AnalysisConfig remove contrib namespace (#15540)  6 years ago
shanyi15  57320942e4  Merge branch 'fast_install_1.3' of git://github.com/JiabinYang/Paddle into JiabinYang-fast_install_1.3  6 years ago
jerrywgz  7bc8481c62  Merge pull request #15418 from jerrywgz/refine_nms  6 years ago
Wu Yi  ab4715840d  fix default create_parameter dtype maching initializers (#15521)  6 years ago
tensor-tang  d59f733551  refine softmax and use with cache  6 years ago
tensor-tang  7383eefd2d  add softmax mix and mkl code  6 years ago
tensor-tang  50945685f2  add hmax, hsum jitcode  6 years ago
tensor-tang  8117725852  add jit kernel hsum, hmax and softmax refer code  6 years ago
Tao Luo  67e4450c34  Merge pull request #15485 from luotao1/fc500110-bert_test  6 years ago
Qiyang Min  6000a6e76e  Merge pull request #15312 from velconia/add_pyramid_dnn_support  6 years ago
Jiabin Yang  fd286f3596  Merge pull request #15534 from JiabinYang/fix/multi_output_support_imperative  6 years ago
minqiyang  07822fef2c  Clear all parameters' gradient  6 years ago
Zeng Jinle  bf7dedcbc7  Merge pull request #15545 from sneaxiy/fix_debug_nccl_error  6 years ago
minqiyang  49a7fba848  Polish code  6 years ago
minqiyang  159c407328  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_resnet  6 years ago
minqiyang  edf742cfac  Polish code  6 years ago
minqiyang  5c7768776c  Fix batch_norm's stop_gradient bug  6 years ago
luotao1  e31aef9f6e  Merge branch 'develop' into fc500110-bert_test  6 years ago
qingqing01  a6910f900e  Always create variables in analysis_predictor before OptimizeInferenceProgram. (#15533)  6 years ago
tink2123  e7eb08febe  fix api.spec  6 years ago
Tao Luo  748c2d3ea2  Merge pull request #15530 from luotao1/remove_with_doc  6 years ago
dzhwinter  ee3aae56cd  merge develop branch. test=develop  6 years ago
dzhwinter  d6d3e6afe2  add more skip strategy  6 years ago
JiabinYang  fff67a9481  test=develop, use parameters() to get parameters  6 years ago
Yan Chunwei  b62b756b28  add version support (#15469)  6 years ago
Yan Chunwei  526790e652  infer get program (#15511)  6 years ago
JiabinYang  2e309b11c2  test=develop, merge develop  6 years ago
JiabinYang  0ea7c9c129  remove test split op in imperative  6 years ago
minqiyang  79d62c5402  Fix mnist  6 years ago
JiabinYang  3dfbef290b  polish code and add comments for Embedding  6 years ago
tensor-tang  3c224e7e79  Merge pull request #15537 from baojun-nervana/rm_ngraph_operator  6 years ago
jerrywgz  aaf756272f  remove inplace arg, test=develop  6 years ago
jerrywgz  cee2e1b089  refine code, test=develop  6 years ago
Xin Pan  c11afdb5cb  Merge pull request #15516 from panyx0718/imperative3  6 years ago
sneaxiy  ba4f43fd62  fix compile error in distributed mode  6 years ago
tink2123  a0c63f1106  add align_flag  6 years ago