Commit Graph

13 Commits (f04f2b232a22c9aba3ee4538ab708acf9f77c813)

Author         SHA1         Message                                                                                    Date
chengduo       8281497030   Fix warning info of build_strategy (#19805)                                                6 years ago
chengduo       056fdedde3   Open fuse all reduce option (#19765)                                                       6 years ago
chengduo       5866a7a5fe   Enable fused_all_reduce_op_handle support GPU and CPU Gradients (#19418)                   6 years ago
tangwei12      65c7368400   Fix the correctness of async mode at distributed training (#18863)                         6 years ago
Zeng Jinle     8008ab4e6b   Remove legacy C++ memory optimization codes (#18834)                                       6 years ago
chengduo       4140fe11a4   Open fuse optimization ops (#18741)                                                        6 years ago
chengduo       fd3aad6cb3   Make fuse_optimizer_op_pass also work when the model contains sparse gradients. (#18664)   6 years ago
chengduo       7453857324   Make fuse_all_reduce_op_pass support mix_precision (#17652)                                7 years ago
gongweibao     f5caf3443c   Fix reinitialized ncclid error! (#18025)                                                   7 years ago
gongweibao     fbbdc9ccad   Add backward and optimizer operator dependency pass. (#17746)                              7 years ago
gongweibao     65bbf950ee   Add multi-ncclcomm and 2D ncclallreduce support. (#17263)                                  7 years ago
Qiao Longfei   58f7695ab2   Async exe support communicator (#17386)                                                    7 years ago
chengduo       04bd413acb   Code Clean: Move all pass to paddle::framework::ir (#17228)                                7 years ago