Commit Graph

217 Commits (8002b2beb4a787d7f21a272289f4b1a36953c371)

Author        SHA1        Date         Message
Chen Weihang  98acfe97ec  6 years ago  Polish English APIs' doc of several Optimizers (#20166)
Zeng Jinle    02c6edc0d5  6 years ago  refine optimizer name doc, test=develop, test=document_fix (#20074)
Zeng Jinle    20f0878f70  6 years ago  Fix en docs of apis (#20050)
WangXi        8d92b36d51  6 years ago  Refine document of DGCMomentumOptimizer (#19960)
mapingshuo    d62360fe5f  6 years ago  fix doc of apply_optimize (#19965)
Zeng Jinle    4a5ce4feb1  6 years ago  Add AdadeltaOptimizer doc (#19875)
jhjiangcs     766bd529d1  6 years ago  add optimizer:dpsgd,test=develop (#19915)
mapingshuo    9901f69677  6 years ago  Forward recompute3 (#19913)
chengduo      ae31faaa87  6 years ago  refine optimier function (#19886)
gongweibao    6c2bc29cc0  6 years ago  Fix float16 optimizer. (#19682)
Jiabin Yang   e9233d1c1e  6 years ago  Refactor dygraph (#19107)
chengduo      bfb6ac816e  6 years ago  Fix optimizer bug (#19410)
mapingshuo    d5ac87ec22  6 years ago  Lookahead optimizer (#19386)
gongweibao    29d8781240  6 years ago  Polish fleet API to support cuda collective mode and nccl2 mode. (#18966)
LielinJiang   e5b9753a18  6 years ago  Fix ExponentialMovingAverage api bug in python3, test=develop (#18775)
xsrobin       47e2ef38e9  6 years ago  add "import paddle.fluid as fluid" to examples lack of it
hutuxian      8a39e5c110  6 years ago  update api format (#18413)
Yibing Liu    23941e43ec  6 years ago  Update lamb optimizer (#18333)
hutuxian      6ed73830c2  6 years ago  add api desc for pipeline training (#18293)
Yibing Liu    412951d7d2  6 years ago  Fix ema's example & fp16 update (#18273)
wopeizl       222c9fe57e  6 years ago  fix doc for LarsMomentumOptimizer test=develop (#18208)
hutuxian      969e6378b9  6 years ago  Pipeline Concurrency (#17402)
Jiabin Yang   022dfed4fc  6 years ago  Add optimizer save and load (#16986)
Yibing Liu    d6d33fd748  6 years ago  Add update method for ema (#17812)
Zeng Jinle    3a6ead24ad  6 years ago  Add no_grad decorator to dygraph (#17790)
lujun         ed9d603a8a  6 years ago  fix api doc: Optimizer.ModelAverage (#17395)
Hongyu Liu    9f85f21880  6 years ago  Add new gard clip [old gradient clip not support in dy graph] (#17523)
wopeizl       058f1f1e1b  6 years ago  fix the api example for create_global_var, create_parameter, SGDOptim… (#17371)
Yibing Liu    4f4f0993c1  6 years ago  Bias correction for exponential moving average (#17677)
Zeng Jinle    887a39f050  6 years ago  Fix dygraph unique name bug (#17592)
Hongyu Liu    e53119f5b1  6 years ago  Fix decayed adagrad example (#17390)
Yibing Liu    6e11f97708  6 years ago  Add exponential moving average (#17562)
Qiao Longfei  58f7695ab2  6 years ago  Async exe support communicator (#17386)
Yibing Liu    f9796b1249  6 years ago  Add LAMB Optimizer support (#17489)
Zeng Jinle    65dd7ec2d6  6 years ago  add clear ops in dygraph optimizers,test=develop (#17484)
Jiabin Yang   15453d05a8  6 years ago  test=develop, fix AdgradOptimizer example code (#17401)
chengduo      d915a04907  6 years ago  Add examples for AdamaxOptimizer (#17381)
gongweibao    91784f8ec3  6 years ago  Fix code in document. (#17237)
gongweibao    cbdb8a17b1  6 years ago  Polish DGC code (#16818)
minqiyang     08a7cdee11  6 years ago  Polish code
minqiyang     20ead9e659  6 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_fix_growing_dict
lujun         9bd44b94da  6 years ago  Merge pull request #16561 from junjun315/move-api-to-root
gongweibao    95410652e8  6 years ago  Fix dgc spelling mistakes. (#16733)
minqiyang     9434f9a6f9  6 years ago  Fix auto growth bug of optimizer in dygraph mode
lujun         92c8ac8a74  6 years ago  merge conflict, test=develop
gongweibao    8b793d0efd  6 years ago  Fix DGC bug. (#16697)
lujun         01f4f2d7e4  6 years ago  merge confict, test=develop
gongweibao    0342f01249  6 years ago  Fix dgc bug. (#16602)
Qiyang Min    d8d73ff3db  6 years ago  Merge pull request #15584 from velconia/imperative_lr_scheduler
chengduo      bb80dae7d0  6 years ago  Add DecoupledWeightDecay (#16427)
minqiyang     34426e761e  6 years ago  Polish code
minqiyang     3e57981294  6 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
lujun         1c9aaeebe0  6 years ago  move imperative to dygraph, test=develop
minqiyang     48f3cbdf55  6 years ago  Polish code
minqiyang     35c89f38c3  6 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
gongweibao    eb83abeac3  6 years ago  Add DGC(Deep Gradient Compression) interface. (#15841)
minqiyang     99128a5c72  6 years ago  Implement Cosine and Noam Decay
Xin Pan       f8c279b11c  6 years ago  Merge pull request #16454 from panyx0718/imperative2
minqiyang     4278be8c49  6 years ago  Merge branch 'imperative_lr_scheduler' of https://github.com/velconia/Paddle into imperative_lr_scheduler
minqiyang     b5bbb13ac1  6 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
Jiabin Yang   f735102eab  6 years ago  add layer norm to Layers, add transformer test in imperative mode (#16092)
Xin Pan       fd24ab47ab  6 years ago  polish
phlrain       77a08750e9  6 years ago  add var name in optimizer; test=develop
Qiyang Min    1f4aa7a202  6 years ago  Imperative remove all descs (#16045)
minqiyang     45c9f2a68a  6 years ago  Fix bugs in piecewise decay
minqiyang     feb39028c6  6 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
Jiabin Yang   654825cfe3  6 years ago  test=develop, reconstruct layer helper to fit imperative usage (#15938)
xuezhong      46fcadec18  6 years ago  add parameter description
xuezhong      57294fa890  6 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix_bug_adagrad
xuezhong      794b90c93f  6 years ago  for backward compatibility
minqiyang     700495e11f  6 years ago  Fix FtrlOptimizer's API comment
sneaxiy       7e399b0628  6 years ago  rename
sneaxiy       f85245b409  6 years ago  test=develop
xuezhong      20e579ef2a  6 years ago  add initial_accumulator_value for adagrad
minqiyang     1e0a78556d  6 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
minqiyang     0ec53f987c  6 years ago  Support imperative learning rate decay in optimizer
minqiyang     3ce2d295c0  6 years ago  Refine stop_gradient
minqiyang     c8965dc1ab  6 years ago  Polish code
minqiyang     8ce198b2e1  6 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_resnet
minqiyang     dbd4d058af  7 years ago  Add static implementation and fix fc layer
minqiyang     315b133e67  7 years ago  Add single GPU support to imperative
Qiao Longfei  a6b3bf6069  7 years ago  add attr min_row_size_to_use_multithread in op config test=develop
Qiao Longfei  8c516a24e5  7 years ago  remote min_row_size_to_use_multithread in adam interface test=develop
Qiao Longfei  9b4fe283e1  7 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into multithread-sparse-adam
minqiyang     d0b640dca1  7 years ago  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_shared_ptr
Wu Yi         fd85418329  7 years ago  [Feature] support mix precision training for resnet (#14899)
minqiyang     7aab39af15  7 years ago  Change grads to VarBase
Qiao Longfei  44b300556d  7 years ago  change min_row_size_to_use_multithread to parameter of adam
minqiyang     336160e651  7 years ago  Complete imperative optimizer implementation
minqiyang     28013a5048  7 years ago  Polish code
minqiyang     5822f7f1d8  7 years ago  Polish code
minqiyang     fff44af83f  7 years ago  Support simple optimizer
minqiyang     68e9b841ab  7 years ago  Add support for optimizer
typhoonzero   da87f7a698  7 years ago  Revert "[Feature] Fp16 training for resnet50 (#14850)"
Wu Yi         3d750f9c5a  7 years ago  [Feature] Fp16 training for resnet50 (#14850)
Qiao Longfei  eb5d427d39  7 years ago  add comment for lazy_mode adam optimizer
Qiao Longfei  c624417c6f  7 years ago  change sparse mode to lazy mode
Qiao Longfei  fc6ec6bd14  7 years ago  add sparse mode adam
Qiao Longfei  d03cbd1b8c  7 years ago  follow comment test=develop
Qiao Longfei  373f64986d  7 years ago  add comment and unit test