Commit Graph

94 Commits (c21a979790aebefffdde5c470dc406a2a81959e7)

Author | SHA1 | Message | Date
furnace | 8ff3550658 | refactor momentum op to combine weight (#27414) | 4 years ago
taixiurong | fad4744aa4 | fix crash in adam in xpu, *test=kunlun (#28433) | 4 years ago
yinhaofeng | 6f0c3d1f06 | xpu adam op (#28031) | 4 years ago
Chengmo | 5f04875c30 | Fix xpu error message (#28061) | 4 years ago
MRXLT | 263a9e97fd | Fix adam (#27778) | 4 years ago
Chen Weihang | 4ba977c720 | Polish some error message in opeators (#27876) | 4 years ago
Chengmo | 1607e87cb9 | add xpu sgd & momentum (#27728) | 4 years ago
Chengmo | d014e29fc6 | fix error message (#27318) | 4 years ago
123malin | a04524759e | Enhance Op's Error Message (#27455) | 4 years ago
MRXLT | f936adbd2d | fix adam (#27343) | 4 years ago
JZ-LIANG | 5d039f4086 | modified the implement of Lars optimizer (#26733) | 5 years ago
Jiawei Wang | a1b99fae07 | Adadelta Optimizer (#26590) | 5 years ago
lilong12 | 5f524efe56 | modify error report message, test=develop (#26743) | 5 years ago
Chen Weihang | 0b54d54fd8 | Fix index overflow bug of the CUDA kernel loop increment (#25435) | 5 years ago
leesusu | a6beb96dd0 | FTRL with sparse update, test=develop (#22092) | 5 years ago
gongweibao | f1c57d648c | Enhance error message of prefetch_op, proximal_adagrad_op, proximal_gd_op (#24436) | 5 years ago
MRXLT | 71ff32b65d | update error message for unstack op and lamb op; test=develop (#24439) | 5 years ago
zhang wenhui | 621a4085b9 | enhance cvm bpr_loss adam adagrad adamax ftrl error message, test=develop (#24452) | 5 years ago
liuwei1031 | 9a93f6aae0 | improve efficiency of runtime InferVarType (#22778) | 5 years ago
wangchaochaohu | 29c4fae112 | Tensor value support (#23491) | 5 years ago
Chen Weihang | 16315d3d9e | Delete Ref & VectorRef and add GetDataSafely (#22997) | 5 years ago
zhaoyuchen2018 | 72dde4abde | Refine adam op to improve performance, test=develop (#22346) | 5 years ago
zhongpu | d0f0a2520c | test Optimizer in dygraph (#21949) | 5 years ago
Aurelius84 | 51a86d2b6b | Optimize adam speed (#21777) | 5 years ago
Huihuang Zheng | 1dcf6a7212 | Add Much Complex Test and Fix Bugs for Control Flow cond API (#21532) | 5 years ago
Chen Weihang | 664f958a02 | Fix optimizer op infershape failed in dygraph multi-cards mode (#21374) | 5 years ago
hong | ac8546701d | Add dygraph execution context (#20157) | 5 years ago
Kaipeng Deng | ebfb720a63 | add Adam beta1/beta2 support Variable (#21234) | 5 years ago
WangXi | 8ac7687e36 | Fix dgc accuracy by mv regularization to local (#21278) | 5 years ago
hong | 8c4573a3cb | GradMaker for dygraph (#19706) | 5 years ago
Chen Weihang | 26cc1fe508 | Replace risky GetInputType method with secure IndicateVarDataType interface (#20668) | 5 years ago
WangXi | 250e72d254 | Fix DGC algorithm flow to make it the same as paper (#20758) | 5 years ago
jhjiangcs | 766bd529d1 | add optimizer:dpsgd,test=develop (#19915) | 5 years ago
Chen Weihang | 8cb54ede8c | Add user-friendly error message in optimizer ops to give a hint about the position sensitive problem of run(startup_program) (#19605) | 6 years ago
Tao Luo | 75d1571995 | refine PADDLE_ENFORCE codes for unify PADDLE_ASSERT_MSG (#19603) | 6 years ago
chengduo | 7453857324 | Make fuse_all_reduce_op_pass support mix_precision (#17652) | 6 years ago
Yibing Liu | 23941e43ec | Update lamb optimizer (#18333) | 6 years ago
Yibing Liu | e8990e64f6 | Fix trust ratio in lamb (#17614) | 6 years ago
Yibing Liu | f9796b1249 | Add LAMB Optimizer support (#17489) | 6 years ago
Qiyang Min | c7f1f3ed0c | Merge pull request #16214 from velconia/imperative_infer_var_type | 6 years ago
minqiyang | b40e41fbd1 | Polish code style | 6 years ago
minqiyang | ca392c7e97 | Implement infer var type context | 6 years ago
sneaxiy | f0d108f589 | fix const_cast | 6 years ago
tensor-tang | 14a764c930 | simplify the jitkernel templates and tests | 6 years ago
tensor-tang | 802f362ac4 | unify the kernelfuncs cache and add unit test | 6 years ago
tensor-tang | a0c37662b9 | enable sgd jitkernel refer code and test | 6 years ago
peizhilin | cd562f8fb7 | disable the parallel mode for adam op on windows test=develop | 6 years ago
Qiao Longfei | 8c516a24e5 | remote min_row_size_to_use_multithread in adam interface test=develop | 6 years ago
Qiao Longfei | 9b4fe283e1 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into multithread-sparse-adam | 6 years ago
Qiao Longfei | 44b300556d | change min_row_size_to_use_multithread to parameter of adam | 6 years ago