Commit Graph

251 Commits (e2d01eb650dba6267046c1cfd6e64cf8cfd74267)

Author SHA1 Message Date
hutuxian 8a39e5c110 update api format (#18413) (7 years ago)
Yibing Liu 23941e43ec Update lamb optimizer (#18333) (7 years ago)
hutuxian 6ed73830c2 add api desc for pipeline training (#18293) (7 years ago)
Yibing Liu 412951d7d2 Fix ema's example & fp16 update (#18273) (7 years ago)
wopeizl 222c9fe57e fix doc for LarsMomentumOptimizer test=develop (#18208) (7 years ago)
hutuxian 969e6378b9 Pipeline Concurrency (#17402) (7 years ago)
Jiabin Yang 022dfed4fc Add optimizer save and load (#16986) (7 years ago)
Yibing Liu d6d33fd748 Add update method for ema (#17812) (7 years ago)
Zeng Jinle 3a6ead24ad Add no_grad decorator to dygraph (#17790) (7 years ago)
lujun ed9d603a8a fix api doc: Optimizer.ModelAverage (#17395) (7 years ago)
Hongyu Liu 9f85f21880 Add new gard clip [old gradient clip not support in dy graph] (#17523) (7 years ago)
wopeizl 058f1f1e1b fix the api example for create_global_var, create_parameter, SGDOptim… (#17371) (7 years ago)
Yibing Liu 4f4f0993c1 Bias correction for exponential moving average (#17677) (7 years ago)
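The "Add exponential moving average" (#17562) and "Bias correction for exponential moving average" (#17677) commits track a decayed average of each parameter for evaluation. A minimal sketch of the idea (a hypothetical standalone class, not Paddle's actual `ExponentialMovingAverage` API): the shadow value is initialized at zero, so early averages are biased toward zero until divided by `1 - decay**step`, the same correction Adam applies to its moment estimates.

```python
class EMA:
    """Sketch of an exponential moving average of parameters with bias
    correction (illustrative only, not the Paddle implementation)."""

    def __init__(self, decay=0.999):
        self.decay = decay
        self.step = 0
        self.shadow = {}  # name -> running average, zero-initialized

    def update(self, params):
        """Blend current parameter values into the shadow copies."""
        self.step += 1
        for name, value in params.items():
            prev = self.shadow.get(name, 0.0)
            self.shadow[name] = self.decay * prev + (1 - self.decay) * value

    def averaged(self, name):
        """Bias-corrected average: dividing by 1 - decay**step removes
        the bias toward the zero initialization at early steps."""
        return self.shadow[name] / (1 - self.decay ** self.step)
```

With the correction, a constant parameter yields exactly that constant from the very first step, instead of a value shrunk toward zero.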
Zeng Jinle 887a39f050 Fix dygraph unique name bug (#17592) (7 years ago)
Hongyu Liu e53119f5b1 Fix decayed adagrad example (#17390) (7 years ago)
Yibing Liu 6e11f97708 Add exponential moving average (#17562) (7 years ago)
Qiao Longfei 58f7695ab2 Async exe support communicator (#17386) (7 years ago)
Yibing Liu f9796b1249 Add LAMB Optimizer support (#17489) (7 years ago)
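The "Add LAMB Optimizer support" (#17489) and "Update lamb optimizer" (#18333) commits add LAMB, which enables large-batch training by scaling each layer's update by a trust ratio `||w|| / ||update||`. A simplified sketch of that scaling (hypothetical helpers; the Adam moment estimates that full LAMB applies to the gradient are omitted for brevity):

```python
import math

def lamb_trust_ratio(param_norm, update_norm):
    # LAMB's layer-wise trust ratio: ||w|| / ||update||, falling back
    # to 1.0 when either norm is zero (illustrative helper only).
    if param_norm > 0 and update_norm > 0:
        return param_norm / update_norm
    return 1.0

def lamb_step(w, grad, lr=0.01, weight_decay=0.01):
    # One simplified LAMB step for a single layer stored as a list.
    # Real LAMB would use bias-corrected Adam moments here instead of
    # the raw gradient; this sketch keeps only the trust-ratio scaling.
    update = [g + weight_decay * x for g, x in zip(grad, w)]
    w_norm = math.sqrt(sum(x * x for x in w))
    u_norm = math.sqrt(sum(u * u for u in update))
    r = lamb_trust_ratio(w_norm, u_norm)
    return [x - lr * r * u for x, u in zip(w, update)]
```

The per-layer ratio keeps layers with small weights from being swamped by large updates, which is what makes very large batch sizes trainable.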
Zeng Jinle 65dd7ec2d6 add clear ops in dygraph optimizers,test=develop (#17484) (7 years ago)
Jiabin Yang 15453d05a8 test=develop, fix AdgradOptimizer example code (#17401) (7 years ago)
chengduo d915a04907 Add examples for AdamaxOptimizer (#17381) (7 years ago)
gongweibao 91784f8ec3 Fix code in document. (#17237) (7 years ago)
gongweibao cbdb8a17b1 Polish DGC code (#16818) (7 years ago)
minqiyang 08a7cdee11 Polish code (7 years ago)
minqiyang 20ead9e659 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_fix_growing_dict (7 years ago)
lujun 9bd44b94da Merge pull request #16561 from junjun315/move-api-to-root (7 years ago)
gongweibao 95410652e8 Fix dgc spelling mistakes. (#16733) (7 years ago)
minqiyang 9434f9a6f9 Fix auto growth bug of optimizer in dygraph mode (7 years ago)
lujun 92c8ac8a74 merge conflict, test=develop (7 years ago)
gongweibao 8b793d0efd Fix DGC bug. (#16697) (7 years ago)
lujun 01f4f2d7e4 merge confict, test=develop (7 years ago)
gongweibao 0342f01249 Fix dgc bug. (#16602) (7 years ago)
Qiyang Min d8d73ff3db Merge pull request #15584 from velconia/imperative_lr_scheduler (7 years ago)
chengduo bb80dae7d0 Add DecoupledWeightDecay (#16427) (7 years ago)
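The "Add DecoupledWeightDecay" (#16427) commit separates weight decay from the gradient-based update, the AdamW idea. A sketch of the distinction (hypothetical helpers, shown for plain SGD where the two forms happen to coincide; they differ once adaptive per-parameter scaling such as Adam's is involved, which is the point of decoupling):

```python
def sgd_l2_step(w, grad, lr, wd):
    # Classic L2 regularization: the decay term wd * w is folded into
    # the gradient before the optimizer sees it.
    return w - lr * (grad + wd * w)

def sgd_decoupled_step(w, grad, lr, wd):
    # Decoupled weight decay: shrink the weight directly, independently
    # of the gradient path. For SGD this equals the L2 form; under Adam
    # the L2 term would get rescaled by the adaptive denominator, while
    # the decoupled term would not.
    return w * (1 - lr * wd) - lr * grad
```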
minqiyang 34426e761e Polish code (7 years ago)
minqiyang 3e57981294 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler (7 years ago)
lujun 1c9aaeebe0 move imperative to dygraph, test=develop (7 years ago)
minqiyang 48f3cbdf55 Polish code (7 years ago)
minqiyang 35c89f38c3 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler (7 years ago)
gongweibao eb83abeac3 Add DGC(Deep Gradient Compression) interface. (#15841) (7 years ago)
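The "Add DGC(Deep Gradient Compression) interface" (#15841) commit and the later DGC fixes concern reducing communication in distributed training by sending only the largest-magnitude gradient entries and accumulating the rest locally. A sketch of that top-k sparsification step (hypothetical helper; real DGC also applies momentum correction and gradient clipping before selection):

```python
import heapq

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a dense gradient.
    Returns (sparse, residual): `sparse` is what would be communicated,
    `residual` is accumulated locally and added back before the next
    selection, so small gradients are delayed rather than lost.
    (Illustrative helper, not Paddle's DGC implementation.)"""
    if k >= len(grad):
        return list(grad), [0.0] * len(grad)
    idx = heapq.nlargest(k, range(len(grad)), key=lambda i: abs(grad[i]))
    keep = set(idx)
    sparse = [g if i in keep else 0.0 for i, g in enumerate(grad)]
    residual = [0.0 if i in keep else g for i, g in enumerate(grad)]
    return sparse, residual
```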
minqiyang 99128a5c72 Implement Cosine and Noam Decay (7 years ago)
Xin Pan f8c279b11c Merge pull request #16454 from panyx0718/imperative2 (7 years ago)
minqiyang 4278be8c49 Merge branch 'imperative_lr_scheduler' of https://github.com/velconia/Paddle into imperative_lr_scheduler (7 years ago)
minqiyang b5bbb13ac1 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler (7 years ago)
Jiabin Yang f735102eab add layer norm to Layers, add transformer test in imperative mode (#16092) (7 years ago)
Xin Pan fd24ab47ab polish (7 years ago)
phlrain 77a08750e9 add var name in optimizer; test=develop (7 years ago)
Qiyang Min 1f4aa7a202 Imperative remove all descs (#16045) (7 years ago)
minqiyang 45c9f2a68a Fix bugs in piecewise decay (7 years ago)
minqiyang feb39028c6 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler (7 years ago)
Jiabin Yang 654825cfe3 test=develop, reconstruct layer helper to fit imperative usage (#15938) (7 years ago)
xuezhong 46fcadec18 add parameter description (7 years ago)
xuezhong 57294fa890 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix_bug_adagrad (7 years ago)
xuezhong 794b90c93f for backward compatibility (7 years ago)
minqiyang 700495e11f Fix FtrlOptimizer's API comment (7 years ago)
sneaxiy 7e399b0628 rename (7 years ago)
sneaxiy f85245b409 test=develop (7 years ago)
xuezhong 20e579ef2a add initial_accumulator_value for adagrad (7 years ago)
minqiyang 1e0a78556d Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler (7 years ago)
minqiyang 0ec53f987c Support imperative learning rate decay in optimizer (7 years ago)
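Several commits above ("Support imperative learning rate decay in optimizer", "Fix bugs in piecewise decay", "Implement Cosine and Noam Decay") concern learning-rate schedules. The simplest of these, piecewise-constant decay, can be sketched as follows (hypothetical helper, not Paddle's scheduler API): `values` holds one learning rate per interval, so it has one more entry than `boundaries`.

```python
def piecewise_decay(step, boundaries, values):
    """Piecewise-constant schedule: return values[i] while
    step < boundaries[i], and the final value after the last boundary.
    (Illustrative sketch; requires len(values) == len(boundaries) + 1.)"""
    for boundary, value in zip(boundaries, values):
        if step < boundary:
            return value
    return values[len(boundaries)]
```

For example, `boundaries=[10, 20]` with `values=[0.1, 0.01, 0.001]` uses 0.1 for steps 0-9, 0.01 for steps 10-19, and 0.001 afterwards.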
minqiyang 3ce2d295c0 Refine stop_gradient (7 years ago)
minqiyang c8965dc1ab Polish code (7 years ago)
minqiyang 8ce198b2e1 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_resnet (7 years ago)
minqiyang dbd4d058af Add static implementation and fix fc layer (7 years ago)
minqiyang 315b133e67 Add single GPU support to imperative (7 years ago)
Qiao Longfei a6b3bf6069 add attr min_row_size_to_use_multithread in op config test=develop (7 years ago)
Qiao Longfei 8c516a24e5 remote min_row_size_to_use_multithread in adam interface test=develop (7 years ago)
Qiao Longfei 9b4fe283e1 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into multithread-sparse-adam (7 years ago)
minqiyang d0b640dca1 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_shared_ptr (7 years ago)
Wu Yi fd85418329 [Feature] support mix precision training for resnet (#14899) (7 years ago)
minqiyang 7aab39af15 Change grads to VarBase (7 years ago)
Qiao Longfei 44b300556d change min_row_size_to_use_multithread to parameter of adam (7 years ago)
minqiyang 336160e651 Complete imperative optimizer implementation (7 years ago)
minqiyang 28013a5048 Polish code (7 years ago)
minqiyang 5822f7f1d8 Polish code (7 years ago)
minqiyang fff44af83f Support simple optimizer (7 years ago)
minqiyang 68e9b841ab Add support for optimizer (7 years ago)
typhoonzero da87f7a698 Revert "[Feature] Fp16 training for resnet50 (#14850)" (7 years ago)
Wu Yi 3d750f9c5a [Feature] Fp16 training for resnet50 (#14850) (7 years ago)
Qiao Longfei eb5d427d39 add comment for lazy_mode adam optimizer (7 years ago)
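The "add sparse mode adam" and "change sparse mode to lazy mode" commits below add a lazy update path for Adam: when the gradient is sparse (e.g. from an embedding lookup), only the parameter rows that actually received a gradient are updated. A minimal sketch of the row-skipping idea (hypothetical helper, not Paddle's `AdamOptimizer(lazy_mode=True)` itself; the per-row Adam moment updates are reduced to plain SGD for brevity):

```python
def lazy_sparse_update(rows, grad_rows, lr=0.1):
    """Update only the rows present in the sparse gradient.
    `rows` is a list of parameter rows (lists of floats); `grad_rows`
    maps a row index to that row's gradient. Untouched rows keep their
    values, and in real lazy-mode Adam their moment statistics are not
    advanced either, which is what makes the mode 'lazy'."""
    for i, g in grad_rows.items():
        rows[i] = [x - lr * gi for x, gi in zip(rows[i], g)]
    return rows
```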
Qiao Longfei c624417c6f change sparse mode to lazy mode (7 years ago)
Qiao Longfei fc6ec6bd14 add sparse mode adam (7 years ago)
Qiao Longfei d03cbd1b8c follow comment test=develop (7 years ago)
Qiao Longfei 373f64986d add comment and unit test (7 years ago)
Qiao Longfei 55edfca2b8 revert unused change (7 years ago)
Qiao Longfei fec0b192a2 fix unit test (7 years ago)
Qiao Longfei 3d8077e9fb update optimizer (7 years ago)
Qiao Longfei fbcdb29d8c fix import issue (7 years ago)
Qiao Longfei 866d6bfe59 dist table support other optimize and regular config (7 years ago)
Wu Yi 26200f2e42 [1.1] [project] train imagenet using large batch size (#13766) (7 years ago)
Xin Pan d5d09672c8 better fix (7 years ago)
tangwei12 e3964e5a43 lookup table bug fix about lr, test=develop (#13946) (7 years ago)
chengduo 8e2fdc54b1 Add check for opt op (#13840) (7 years ago)
sneaxiy 0633095c74 fix_api_kwargs (7 years ago)
Xin Pan 88ae3f169d further clean (7 years ago)
Xin Pan 2030958eee covert **kwargs to explicit arguments (7 years ago)
Wu Yi efafc72f62 Hide program APIs (#12315) (7 years ago)
Qiao Longfei 6e03f7900f Add centered mode rmsprop (#13161) (7 years ago)
Xin Pan 51ef0ad766 allow to use name_scope for debugging and visiualization (7 years ago)
whs 9be39bb4b7 Enhence optimizer. (#13004) (7 years ago)
minqiyang 99d3f08920 Add print_function for all python files (7 years ago)
minqiyang 91f0573bc1 Fix the overfix of 2to3 for print function (8 years ago)
minqiyang 559d36328c Apply 2to3 to current paddle main python code (8 years ago)
qingqing01 873a50ce35 Fix serious bug in nesterov momentum optimizer. (#12231) (8 years ago)
Luo Tao e33017f23e fix _clone_variable in optimizer.py (8 years ago)
Wu Yi db67d60e31 Remove block api (#12107) (8 years ago)
chengduo 86b0a72576 Refine multi thread cpu parallel exe (#11406) (8 years ago)
qiaolongfei 2ce1ed3dbd add optimized_guard for ModelAverage (8 years ago)
qiaolongfei 971cf70517 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix-optimizer-accumulator (8 years ago)
qiaolongfei d8220ccb91 add optimized_guard for optimizer finish_update (8 years ago)
qiaolongfei 7ce0d45efa fix adam and adamax optimizer (8 years ago)
yuyang18 5e725dc52b Hide Optimizer methods (8 years ago)
whs 02e521e3ac Fix model average on multi-GPUs. (#11814) (8 years ago)
yuyang18 706f383933 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add-none-layers-api-doc (8 years ago)
qiaolongfei fe5de04bde optimize doc for MomentumOptimizer (8 years ago)
qiaolongfei ca341db258 add FtrlOptimizer and it's doc (8 years ago)
qiaolongfei 69d568bd3c add doc for DecayedAdagradOptimizer (8 years ago)
qiaolongfei 1bee5129c9 add doc for AdamaxOptimizer (8 years ago)
qiaolongfei 2053b6b756 add doc fo AdamOptimizer (8 years ago)
qiaolongfei 156617d34b polish doc of RMSPropOptimizer (8 years ago)
qiaolongfei 5e8646ab30 add doc for AdagradOptimizer (8 years ago)
qiaolongfei d2b791a0cc add SGD and momentum optimizer doc (8 years ago)
Wu Yi 53d1d0f0f2 add LARS support (#10374) (8 years ago)
Yu Yang 8653cf3004 Merge pull request #10656 from reyoung/feature/support_op_role (8 years ago)
weixing02 7f40cff913 yapf adjust (8 years ago)
yuyang18 017bba1664 Add op role (8 years ago)
dzhwinter 62c51e44d2 "add float64 tests" (#10450) (8 years ago)
Yu Yang 1bb579a3f5 A naive trainer implementation (8 years ago)
weixing 84ceffd02d Fix api display errors in fluid (#10051) (8 years ago)
wanghaoshuang a7c6bf771c Change do_model_average_for_mean_and_var to boolean in batch_normal. (8 years ago)
wanghaoshuang 2e40660e7a Fix some issues. (8 years ago)
wanghaoshuang 9708b21f19 Refine average model option (8 years ago)
wanghaoshuang e1290c4fd7 Make Average Model support for 'moving mean' and 'moving variance' of batch_normal op (8 years ago)
wanghaoshuang edb4e29ab7 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into average_model (8 years ago)
qingqing01 30b70323b4 Expose RMSProp optimizer. (#9247) (8 years ago)
wanghaoshuang ad63722ed9 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into average_model (8 years ago)
wanghaoshuang 68c9f6ef11 Fix error while params_grads[1]==None (8 years ago)
wanghaoshuang 7c59ac484f Refine doc and use 'raise' instead of assert (8 years ago)
wanghaoshuang d22f4de794 Refine sum_accumulates_op. (8 years ago)
wanghaoshuang de2d7299ae Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into adadelta (8 years ago)
wanghaoshuang 72847ad031 Add python API for Adadelta optimizer. (8 years ago)
wanghaoshuang cad4d7f325 Refine initial and API of ModelAverage API (8 years ago)
wanghaoshuang 92a01d4994 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into average_model (8 years ago)
wanghaoshuang 87fe52c109 Add ModelAverage class to optimizer.py (8 years ago)
Yu Yang 41d8bcdc06 Fix models #725 (8 years ago)
qiaolongfei 4fdd114d34 a little optimize of optimizer (8 years ago)
Yu Yang 2af9aac2cb Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into feature/add_global_step (8 years ago)
Yu Yang 175cf6e024 Add global_step in nn.py (8 years ago)
qiaolongfei ea9e62b8fc optimize code (8 years ago)
qiaolongfei a636aa585b Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix-optimize-multi-program (8 years ago)