Commit Graph

20 commits (latest: c3974d0e2a6353f3a134e8925aeb15cac7f0e48b)

Author      SHA1        Date         Message
lilong12    c3974d0e2a  4 years ago  [3D-parallel] Reformat pipeline parallel (#31786)
huangxu96   342d62de60  4 years ago  add amp example document (#30314)
Zhen Wang   7f7dfccf20  4 years ago  Support pure fp16 training for AMP API. (#29544)
huangxu96   c05170d3d8  5 years ago  add alias for fluid.contrib.mixed_precision (#29562)
Zhen Wang   be3777a50a  5 years ago  Add pure fp16 training with master weights. (#27712)
furnace     7584bb5096  5 years ago  Layer norm fp16 (#29169)
Leo Chen    71d6220772  5 years ago  Skip reader op in mixed_precision decorator (#28353)
Zhang Ting  906e7f921e  5 years ago  add fuse_bn_act op (#27230)
Zhen Wang   d708b21074  5 years ago  Update amp_check_finite_and_scale_op and add an updating_loss_scaling op for static graph amp training. (#26240)
Zhen Wang   bcdbac1753  5 years ago  fix some cast error. (#26884)
mapingshuo  f0e743f136  5 years ago  fix AMP and recompute (#23551)
Zhen Wang   be2e3e67d9  6 years ago  Fix some typos in AMP. (#21354)
gongweibao  3255fe69bb  6 years ago  Add custom black variable name set in amp interface. (#20875)
Jie Fang    d9db94d752  6 years ago  Optimize amp for multi-gpu to enable FP16 gradients transfer across gpus. (#19714)
Jie Fang    c6a598a276  6 years ago  init new amp, optimize inserting cast op for batchnorm (#18596)
gongweibao  abaf87be2b  6 years ago  Change backward_guard to optimize_guard to maximize the allreduce overlap. (#19506)
Jie Fang    2b4ef509ea  6 years ago  init custom black white list (#18377)
Jie Fang    172c2facef  6 years ago  init black/white lists (#17847)
Jie Fang    30e178fa2c  6 years ago  init auto loss scaling (#17194)
Yibing Liu  beda78258f  6 years ago  Init mixed precision training interface (#16856)