minqiyang
3e57981294
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
test=develop
6 years ago
lujun
1c9aaeebe0
move imperative to dygraph, test=develop
6 years ago
minqiyang
48f3cbdf55
Polish code
test=develop
6 years ago
minqiyang
35c89f38c3
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
test=develop
6 years ago
gongweibao
eb83abeac3
Add DGC (Deep Gradient Compression) interface. (#15841)
6 years ago
minqiyang
99128a5c72
Implement Cosine and Noam Decay
test=develop
6 years ago
Xin Pan
f8c279b11c
Merge pull request #16454 from panyx0718/imperative2
polish deepCF model to support real dataset
6 years ago
minqiyang
4278be8c49
Merge branch 'imperative_lr_scheduler' of https://github.com/velconia/Paddle into imperative_lr_scheduler
test=develop
6 years ago
minqiyang
b5bbb13ac1
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
6 years ago
Jiabin Yang
f735102eab
add layer norm to Layers, add transformer test in imperative mode (#16092)
* add layer norm to Layers, add transformer prepare encoding
* little change
* finish encoder part
* add decoder part
* finish model part
* add test case and part of data feed
* add transformer test
* add to_parameter, add remove in set_attr
* test=develop, fix pos encoding bug, create_parameter with standard name
* test=develop, rm dropout test in imperative
* test=develop, fix cpu error
* test=develop, fix minimize bug
* test=develop, fix one hot not stop gradient
* test=develop, fix one hot not stop gradient
* test=develop, refine parameter name
* test=develop, fix transformer test in imperative mode
* test=develop, fix transformer test in imperative mode
* test=develop, fix boost and mkl download error
* test=develop, fix boost and mkl download error
* test=develop, fix ci and refine code
* test=develop, fix ci and refine code
6 years ago
Xin Pan
fd24ab47ab
polish
test=develop
6 years ago
phlrain
77a08750e9
add var name in optimizer; test=develop
6 years ago
Qiyang Min
1f4aa7a202
Imperative remove all descs (#16045)
* Remove Desc in Forward Pass
* Refactor VarBase
* Add dbg info
* Only check type in imperative mode
* Polish code and support optimizer
test=develop
* Fix stop gradient problem in PyLayer
test=develop
6 years ago
minqiyang
45c9f2a68a
Fix bugs in piecewise decay
test=develop
6 years ago
minqiyang
feb39028c6
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
6 years ago
Jiabin Yang
654825cfe3
test=develop, reconstruct layer helper to fit imperative usage (#15938)
* test=develop, reconstruct layer helper to fit imperative usage
* test=develop, fix import error on py35
* test=develop, fix rnn gradient error
* test=develop, delete test use code
* test=develop, remove helper from imperative usage
* test=develop, fix test_base_layer using new helper
* test=develop, reconstruct layerhelper for imperative mode
* test=develop, reconstruct layerhelper for imperative mode
* test=develop, fix bug
* test=develop, fix test failed bug
* test=develop, fix test failed bug
* test=develop, fix test failed bug
* test=develop, fix bug
* test=develop, polish code
6 years ago
xuezhong
46fcadec18
add parameter description
test=develop
6 years ago
xuezhong
57294fa890
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix_bug_adagrad
test=develop
6 years ago
xuezhong
794b90c93f
for backward compatibility
6 years ago
minqiyang
700495e11f
Fix FtrlOptimizer's API comment
test=develop
6 years ago
sneaxiy
7e399b0628
rename
test=develop
6 years ago
sneaxiy
f85245b409
test=develop
6 years ago
xuezhong
20e579ef2a
add initial_accumulator_value for adagrad
test=develop
6 years ago
minqiyang
1e0a78556d
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
6 years ago
minqiyang
0ec53f987c
Support imperative learning rate decay in optimizer
6 years ago
minqiyang
3ce2d295c0
Refine stop_gradient
test=develop
6 years ago
minqiyang
c8965dc1ab
Polish code
test=develop
6 years ago
minqiyang
8ce198b2e1
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_resnet
test=develop
6 years ago
minqiyang
dbd4d058af
Add static implementation and fix fc layer
6 years ago
minqiyang
315b133e67
Add single GPU support to imperative
6 years ago
Qiao Longfei
a6b3bf6069
add attr min_row_size_to_use_multithread in op config test=develop
6 years ago
Qiao Longfei
8c516a24e5
remove min_row_size_to_use_multithread in adam interface test=develop
6 years ago
Qiao Longfei
9b4fe283e1
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into multithread-sparse-adam
test=develop
6 years ago
minqiyang
d0b640dca1
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_shared_ptr
test=develop
6 years ago
Wu Yi
fd85418329
[Feature] support mix precision training for resnet (#14899)
* clip softmax for fp16
* updates
* fuse xent support fp16 test=develop
* wip
* wip
* add simple row reduce
* wip fp16 accurate softmax
* add accurate softmax kernel for fp16 test=develop
* update test=develop
* fix cpu build test=develop
* update api.spec test=develop
* follow comments test=develop
* fix build test=develop
* fix trt build test=develop
* fix inference build test=develop
* fix merge test=develop
* update test=develop
* try fix build test=develop
* fix build test=develop
* rename real_exp test=develop
* fortest
* remove hacky kernels test=develop
* clean up test=develop
6 years ago
minqiyang
7aab39af15
Change grads to VarBase
6 years ago
Qiao Longfei
44b300556d
change min_row_size_to_use_multithread to parameter of adam
test=develop
6 years ago
minqiyang
336160e651
Complete imperative optimizer implementation
test=develop
6 years ago
minqiyang
28013a5048
Polish code
test=develop
6 years ago
minqiyang
5822f7f1d8
Polish code
test=develop
6 years ago
minqiyang
fff44af83f
Support simple optimizer
test=develop
6 years ago
minqiyang
68e9b841ab
Add support for optimizer
6 years ago
typhoonzero
da87f7a698
Revert "[Feature] Fp16 training for resnet50 (#14850)"
This reverts commit 3d750f9c5a.
6 years ago
Wu Yi
3d750f9c5a
[Feature] Fp16 training for resnet50 (#14850)
...
* wip
* wip
* wip
* wip for test
* add fp16 tests test=develop
* fix cpu build test=develop
* fix test=develop
* fix py3 tests test=develop
* fix lr_scheduler dtype test=develop
* fix test=develop
* test fix ci compile test=develop
* fix build and merge test=develop
* fallback momentumop change to general test=develop
6 years ago
Qiao Longfei
eb5d427d39
add comment for lazy_mode adam optimizer
6 years ago
Qiao Longfei
c624417c6f
change sparse mode to lazy mode
6 years ago
Qiao Longfei
fc6ec6bd14
add sparse mode adam
6 years ago
Qiao Longfei
d03cbd1b8c
follow comment test=develop
6 years ago
Qiao Longfei
373f64986d
add comment and unit test
test=develop
6 years ago
Qiao Longfei
55edfca2b8
revert unused change
6 years ago