Commit Graph

161 Commits (766bd529d155c601516d00dc09abe3750222b59a)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| minqiyang | 99128a5c72 | Implement Cosine and Noam Decay | 6 years ago |
| Xin Pan | f8c279b11c | Merge pull request #16454 from panyx0718/imperative2 | 6 years ago |
| minqiyang | 4278be8c49 | Merge branch 'imperative_lr_scheduler' of https://github.com/velconia/Paddle into imperative_lr_scheduler | 6 years ago |
| minqiyang | b5bbb13ac1 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler | 6 years ago |
| Jiabin Yang | f735102eab | add layer norm to Layers, add transformer test in imperative mode (#16092) | 6 years ago |
| Xin Pan | fd24ab47ab | polish | 6 years ago |
| phlrain | 77a08750e9 | add var name in optimizer; test=develop | 7 years ago |
| Qiyang Min | 1f4aa7a202 | Imperative remove all descs (#16045) | 7 years ago |
| minqiyang | 45c9f2a68a | Fix bugs in piecewise decay | 7 years ago |
| minqiyang | feb39028c6 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler | 7 years ago |
| Jiabin Yang | 654825cfe3 | test=develop, reconstruct layer helper to fit imperative usage (#15938) | 7 years ago |
| xuezhong | 46fcadec18 | add parameter description | 7 years ago |
| xuezhong | 57294fa890 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix_bug_adagrad | 7 years ago |
| xuezhong | 794b90c93f | for backward compatibility | 7 years ago |
| minqiyang | 700495e11f | Fix FtrlOptimizer's API comment | 7 years ago |
| sneaxiy | 7e399b0628 | rename | 7 years ago |
| sneaxiy | f85245b409 | test=develop | 7 years ago |
| xuezhong | 20e579ef2a | add initial_accumulator_value for adagrad | 7 years ago |
| minqiyang | 1e0a78556d | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler | 7 years ago |
| minqiyang | 0ec53f987c | Support imperative learning rate decay in optimizer | 7 years ago |
| minqiyang | 3ce2d295c0 | Refine stop_gradient | 7 years ago |
| minqiyang | c8965dc1ab | Polish code | 7 years ago |
| minqiyang | 8ce198b2e1 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_resnet | 7 years ago |
| minqiyang | dbd4d058af | Add static implementation and fix fc layer | 7 years ago |
| minqiyang | 315b133e67 | Add single GPU support to imperative | 7 years ago |
| Qiao Longfei | a6b3bf6069 | add attr min_row_size_to_use_multithread in op config test=develop | 7 years ago |
| Qiao Longfei | 8c516a24e5 | remote min_row_size_to_use_multithread in adam interface test=develop | 7 years ago |
| Qiao Longfei | 9b4fe283e1 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into multithread-sparse-adam | 7 years ago |
| minqiyang | d0b640dca1 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_shared_ptr | 7 years ago |
| Wu Yi | fd85418329 | [Feature] support mix precision training for resnet (#14899) | 7 years ago |
| minqiyang | 7aab39af15 | Change grads to VarBase | 7 years ago |
| Qiao Longfei | 44b300556d | change min_row_size_to_use_multithread to parameter of adam | 7 years ago |
| minqiyang | 336160e651 | Complete imperative optimizer implementation | 7 years ago |
| minqiyang | 28013a5048 | Polish code | 7 years ago |
| minqiyang | 5822f7f1d8 | Polish code | 7 years ago |
| minqiyang | fff44af83f | Support simple optimizer | 7 years ago |
| minqiyang | 68e9b841ab | Add support for optimizer | 7 years ago |
| typhoonzero | da87f7a698 | Revert "[Feature] Fp16 training for resnet50 (#14850)" | 7 years ago |
| Wu Yi | 3d750f9c5a | [Feature] Fp16 training for resnet50 (#14850) | 7 years ago |
| Qiao Longfei | eb5d427d39 | add comment for lazy_mode adam optimizer | 7 years ago |
| Qiao Longfei | c624417c6f | change sparse mode to lazy mode | 7 years ago |
| Qiao Longfei | fc6ec6bd14 | add sparse mode adam | 7 years ago |
| Qiao Longfei | d03cbd1b8c | follow comment test=develop | 7 years ago |
| Qiao Longfei | 373f64986d | add comment and unit test | 7 years ago |
| Qiao Longfei | 55edfca2b8 | revert unused change | 7 years ago |
| Qiao Longfei | fec0b192a2 | fix unit test | 7 years ago |
| Qiao Longfei | 3d8077e9fb | update optimizer | 7 years ago |
| Qiao Longfei | fbcdb29d8c | fix import issue | 7 years ago |
| Qiao Longfei | 866d6bfe59 | dist table support other optimize and regular config | 7 years ago |
| Wu Yi | 26200f2e42 | [1.1] [project] train imagenet using large batch size (#13766) | 7 years ago |
| Xin Pan | d5d09672c8 | better fix | 7 years ago |
| tangwei12 | e3964e5a43 | lookup table bug fix about lr, test=develop (#13946) | 7 years ago |
| chengduo | 8e2fdc54b1 | Add check for opt op (#13840) | 7 years ago |
| sneaxiy | 0633095c74 | fix_api_kwargs | 7 years ago |
| Xin Pan | 88ae3f169d | further clean | 7 years ago |
| Xin Pan | 2030958eee | covert **kwargs to explicit arguments | 7 years ago |
| Wu Yi | efafc72f62 | Hide program APIs (#12315) | 7 years ago |
| Qiao Longfei | 6e03f7900f | Add centered mode rmsprop (#13161) | 7 years ago |
| Xin Pan | 51ef0ad766 | allow to use name_scope for debugging and visiualization | 7 years ago |
| whs | 9be39bb4b7 | Enhence optimizer. (#13004) | 7 years ago |
| minqiyang | 99d3f08920 | Add print_function for all python files | 7 years ago |
| minqiyang | 91f0573bc1 | Fix the overfix of 2to3 for print function | 7 years ago |
| minqiyang | 559d36328c | Apply 2to3 to current paddle main python code | 7 years ago |
| qingqing01 | 873a50ce35 | Fix serious bug in nesterov momentum optimizer. (#12231) | 7 years ago |
| Luo Tao | e33017f23e | fix _clone_variable in optimizer.py | 7 years ago |
| Wu Yi | db67d60e31 | Remove block api (#12107) | 7 years ago |
| chengduo | 86b0a72576 | Refine multi thread cpu parallel exe (#11406) | 7 years ago |
| qiaolongfei | 2ce1ed3dbd | add optimized_guard for ModelAverage | 7 years ago |
| qiaolongfei | 971cf70517 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix-optimizer-accumulator | 7 years ago |
| qiaolongfei | d8220ccb91 | add optimized_guard for optimizer finish_update | 7 years ago |
| qiaolongfei | 7ce0d45efa | fix adam and adamax optimizer | 7 years ago |
| yuyang18 | 5e725dc52b | Hide Optimizer methods | 7 years ago |
| whs | 02e521e3ac | Fix model average on multi-GPUs. (#11814) | 7 years ago |
| yuyang18 | 706f383933 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add-none-layers-api-doc | 7 years ago |
| qiaolongfei | fe5de04bde | optimize doc for MomentumOptimizer | 7 years ago |
| qiaolongfei | ca341db258 | add FtrlOptimizer and it's doc | 7 years ago |
| qiaolongfei | 69d568bd3c | add doc for DecayedAdagradOptimizer | 7 years ago |
| qiaolongfei | 1bee5129c9 | add doc for AdamaxOptimizer | 7 years ago |
| qiaolongfei | 2053b6b756 | add doc fo AdamOptimizer | 7 years ago |
| qiaolongfei | 156617d34b | polish doc of RMSPropOptimizer | 7 years ago |
| qiaolongfei | 5e8646ab30 | add doc for AdagradOptimizer | 7 years ago |
| qiaolongfei | d2b791a0cc | add SGD and momentum optimizer doc | 7 years ago |
| Wu Yi | 53d1d0f0f2 | add LARS support (#10374) | 7 years ago |
| Yu Yang | 8653cf3004 | Merge pull request #10656 from reyoung/feature/support_op_role | 7 years ago |
| weixing02 | 7f40cff913 | yapf adjust | 7 years ago |
| yuyang18 | 017bba1664 | Add op role | 7 years ago |
| dzhwinter | 62c51e44d2 | "add float64 tests" (#10450) | 7 years ago |
| Yu Yang | 1bb579a3f5 | A naive trainer implementation | 7 years ago |
| weixing | 84ceffd02d | Fix api display errors in fluid (#10051) | 7 years ago |
| wanghaoshuang | a7c6bf771c | Change do_model_average_for_mean_and_var to boolean in batch_normal. | 7 years ago |
| wanghaoshuang | 2e40660e7a | Fix some issues. | 7 years ago |
| wanghaoshuang | 9708b21f19 | Refine average model option | 7 years ago |
| wanghaoshuang | e1290c4fd7 | Make Average Model support for 'moving mean' and 'moving variance' of batch_normal op | 7 years ago |
| wanghaoshuang | edb4e29ab7 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into average_model | 8 years ago |
| qingqing01 | 30b70323b4 | Expose RMSProp optimizer. (#9247) | 8 years ago |
| wanghaoshuang | ad63722ed9 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into average_model | 8 years ago |
| wanghaoshuang | 68c9f6ef11 | Fix error while params_grads[1]==None | 8 years ago |
| wanghaoshuang | 7c59ac484f | Refine doc and use 'raise' instead of assert | 8 years ago |
| wanghaoshuang | d22f4de794 | Refine sum_accumulates_op. | 8 years ago |
| wanghaoshuang | de2d7299ae | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into adadelta | 8 years ago |