Commit Graph

44 Commits (test_benchmark_ci)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| lilong12 | 0205e9f84e | remove the send/recv of tensor size (#31460) | 5 years ago |
| liuyuhui | 4a8b8b4547 | [Kunlun] add gen_bkcl_id_op, support multi XPU cards training using multiprocess (#30858) | 5 years ago |
| WangXi | 31ed9c9eed | Fleet distributed strategy support pure fp16 (#30754) | 5 years ago |
| lilong12 | 8126a41d73 | fix the bug of all_reduce pipeline gradient multiple times (#30437) | 5 years ago |
| hutuxian | 9fec1618d2 | Ascend Framework Part3: Ascend Parser (#30391) | 5 years ago |
| JZ-LIANG | 75936d838f | Recompute Offload (#30233) | 5 years ago |
| Chengmo | d479ae1725 | 【Paddle.Fleet】Support local save sparse param (#30175) | 5 years ago |
| Chengmo | 528e03fc08 | 【Paddle.Fleet】Fix tensor table (#30075) | 5 years ago |
| WangXi | ab04997846 | [fleet] combine amp and gradient merge, test=develop (#30086) | 5 years ago |
| lilong12 | 01950ceb42 | fix the bug in pipeline data parallelism (#29731) | 5 years ago |
| tangwei12 | 032414ca2a | [Feature] one ps (3/4) (#29604) | 5 years ago |
| WangXi | 9cbcc6cadc | fleet sync build strategy, test=develop (#29732) | 5 years ago |
| JZ-LIANG | d33d468f02 | [Sharding] add hybrid-dp feature (#29518) | 5 years ago |
| WangXi | 0c2a51d240 | optimizer amp, all use fp16 communication, overlap last comm and compute (#28957) | 5 years ago |
| JZ-LIANG | 0dadacc4eb | [sharding] doc, api, bug fixed (#28983) | 5 years ago |
| WangXi | e931c7baf9 | Fix multi nccl comm & wait server ready (#28663) | 5 years ago |
| Leo Chen | 3815d7aa40 | Upgrade string literals to raw string (#28989) | 5 years ago |
| lilong12 | f77a78cdee | enable pipeline to run with Executor.run() (#28373) | 5 years ago |
| JZ-LIANG | 5a9f6889c1 | [Sharding] add new features (#28568) | 5 years ago |
| Chengmo | 4dc8c44ba1 | 【Paddle.Fleet】Fix fleetrun heter (#28252) | 5 years ago |
| mapingshuo | 81244fbfab | add sharding strategy in fleet(#27900) | 6 years ago |
| WangXi | fb641c915e | 【paddle.fleet】fleet add _get_applied_meta_list and _get_applied_graph_list (#27952) | 6 years ago |
| Chengmo | 328cb289ed | 【paddle.fleet】fix sparse load (#27680) | 6 years ago |
| mapingshuo | 8d2cb14f98 | support gradient merge with recompute, test=develop (#27834) | 6 years ago |
| Chengmo | c5f2802d56 | 【paddle.fleet】Update fleetrun & ps-heter (#27472) | 6 years ago |
| WangXi | 0a1862d1d2 | fleet combine amp dgc recompute meta optimizer (#27643) | 6 years ago |
| WangXi | e550fc02ae | fleet2.0 add fp16 grad compression (#27480) | 6 years ago |
| tangwei12 | d6b54de467 | 【paddle.fleet】Fix/role maker api fix (#27326) | 6 years ago |
| ShenLiang | 746a8ded29 | fix comment of adaptive lsgd (#27362) | 6 years ago |
| ShenLiang | 54b81fa32c | add adaptivelsgd in meta_optimizer (#27289) | 6 years ago |
| mapingshuo | 9dedafa0df | fix strategy, test=develop (#27323) | 6 years ago |
| ShenLiang | 2b6a5793fe | remove auto mode from localsgd optimizer (#27237) | 6 years ago |
| 123malin | 60c3ef3ab8 | 【paddle.fleet】parameter_server_optimizer support auto_strategy (#27181) | 6 years ago |
| JZ-LIANG | 5d039f4086 | modified the implement of Lars optimizer (#26733) | 6 years ago |
| 123malin | f2d68d3ed5 | 【paddle.fleet】parameter_server_optimizer support auto_strategy (#26838) | 6 years ago |
| ShenLiang | aca450f6fb | fix the localsgd optimizer (#27094) | 6 years ago |
| Dong Daxiang | 0443b480b8 | 【paddle.fleet】add auto parallel L1 implementations (#27090) | 6 years ago |
| Chengmo | 7f2aa2db3c | 【paddle.fleet】Support Heter Parameter Server (#25998) | 6 years ago |
| Dong Daxiang | 994217ea05 | 【paddle.fleet】fix api documents (#26777) | 6 years ago |
| Dong Daxiang | 83cd185947 | 【paddle.fleet】Meta from optimizer (#26392) | 6 years ago |
| mapingshuo | cd48bdad31 | add feature to fleet2.0 role_maker, distribute_strategy, test=develop (#26267) | 6 years ago |
| Dong Daxiang | 4ec51e0205 | 【paddle.fleet】Clear disable (#26334) | 6 years ago |
| Yi Liu | 3b2c580a66 | 【paddle.fleet】make fleet_localsgd_meta_optimizer work (#26213) | 6 years ago |
| Dong Daxiang | 50a5bcfc9d | 【paddle.fleet】paddle.fleet -> paddle.distributed.fleet. (#26186) | 6 years ago |