Commit Graph

14254 Commits (a1d200a5dea60dfe23c26f50f09cfa7c02f5ac4b)

Author SHA1 Message Date
nhzlx a1d200a5de cherry-pick from feature/anakin-engine: Anakin support facebox #16111 — 6 years ago
flame a32d420043 cherry-pick from feature/anakin-engine: batch norm (#16110) — 6 years ago
flame 0945b97f07 cherry-pick feature/anakin-engine: add anakin softmax/transpose/batch_norm/flatten/reshape op (#16020) — 6 years ago
nhzlx b21770a2aa cherry-pick from feature/anakin-engine: Add subgraph fuse support and anakin engine #16018 — 6 years ago
nhzlx 084310f536 paddle-anakin: concat, split, pool2d converter#16003 — 6 years ago
flame be523baad2 Add anakin conv2d/relu/sigmoid/tanh converter (#15997) — 6 years ago
Yan Chunwei d0ce6a9044 fix anakin converter registry (#15993) — 6 years ago
Tao Luo a5124ee0bb Merge pull request #16301 from luotao1/runtime_context_pass — 6 years ago
baojun 2de263a5d9 Add softmax_with_cross_entropy_op to ngraph engine (#16304) — 6 years ago
ruri a3b8028d46 Merge pull request #16202 from shippingwang/add_sqrt_doc — 6 years ago
chengduo f26ba5bddd Fuse AllReduce (#15921) — 6 years ago
Zeng Jinle d0ef682552 Merge pull request #16274 from sneaxiy/fix_grad_maker — 6 years ago
baojun 804afc51db Minor ngraph fix (#16270) — 6 years ago
Tao Luo 9195c3bb03 Merge pull request #16280 from luotao1/cos_sim_infershape — 6 years ago
Wu Yi 6382b62f6b Collective ops (#15572) — 6 years ago
sneaxiy 023a3a3d62 fix op grad maker — 6 years ago
luotao1 82af8031d9 add runtime_context_cache_pass — 6 years ago
Tao Luo b9fc80a133 Merge pull request #16287 from PaddlePaddle/revert-16002-runtime_context — 6 years ago
whs 18911b6eea [enhence] Make step_input of dynamic_rnn support custom lod level. (#15972) — 6 years ago
luotao1 c05af910bc refine cos_sim infershape — 6 years ago
Hongyu Liu d3acf68044 Merge pull request #16258 from phlrain/fix_concat_1 — 6 years ago
Tao Luo 7d2740db83 Revert "cache runtime_context" — 6 years ago
tensor-tang ead558b7f6 Merge pull request #16256 from tensor-tang/refine/seqenum — 6 years ago
Qiyang Min c7f1f3ed0c Merge pull request #16214 from velconia/imperative_infer_var_type — 6 years ago
Zeng Jinle f8df9eb32e fix api doc (#16201) — 6 years ago
Jacek Czaja 13816dd4ac [MKL-DNN] Fix to crash of Transformer when mkldnn is to be used (#16233) — 6 years ago
Yibing Liu 7e20e7691e Fix the bug in fp16 backward kernel (#16269) — 6 years ago
shippingwang 97c6051822 add api.spec, test=develop — 6 years ago
Wojciech Uss af03008890 Add cpu_quantize_placement_pass for C-API quantization (#16265) — 6 years ago
Tao Luo dbb92ee4b1 Merge pull request #16002 from luotao1/runtime_context — 6 years ago
shippingwang 4f42504eef Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add_sqrt_doc — 6 years ago
minqiyang 565b19b7a5 fix set data type bug — 6 years ago
minqiyang 8364688c30 Fix py_func_op's problem — 6 years ago
Zeng Jinle 6429d2a887 Merge pull request #16188 from sneaxiy/fix_const_cast — 6 years ago
minqiyang b40e41fbd1 Polish code style — 6 years ago
xiaolil1 e818fa1004 Enable INT8 transpose kernel for MobileNet-SSD improvement. (#16159) — 6 years ago
Xin Pan 374abcf361 Merge pull request #16247 from panyx0718/imperative — 6 years ago
Tao Luo c072998ac1 Merge pull request #16219 from luotao1/fc_infershape — 6 years ago
tangwei12 8ea4218ce1 update load persistables for increment, test=develop (#15576) — 6 years ago
phlrain dcba2e7236 fix conncat; test=develop — 6 years ago
phlrain 955fad7a90 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into develop — 6 years ago
phlrain a7fe3b508e fix concat; test=develop — 6 years ago
tensor-tang 50931dee1d refine seq enum op — 6 years ago
Qiyang Min 8e4ad008fb Merge pull request #16198 from velconia/imperative_train_speed — 6 years ago
minqiyang 36dce65bb3 Take DataType and VarType apart — 6 years ago
Xin Pan 3e9319f3ab add more imperative layer tests. — 6 years ago
luotao1 d9f0e7252a refine with comments — 6 years ago
luotao1 6fa52f83ba Merge branch 'develop' into fc_infershape — 6 years ago
luotao1 cc0ae1f1a1 refine with comments — 6 years ago
luotao1 a275fd6e0c Merge branch 'develop' into runtime_context — 6 years ago