Commit Graph

4439 Commits (72efd830b9dfdabe36ecd135a8ea9f46236bcbca)

Author SHA1 Message Date
Chengmo 328cb289ed
[paddle.fleet] fix sparse load (#27680)
6 years ago
furnace 8e70b18e6c
add paddle.nn.initializer API, including: Normal, TruncatedNormal, Uniform, XavierNormal, XavierUniform, Assign (#27769)
6 years ago
123malin a4f850748a
[paddle.fleet] bug fix for parameter_recv (#27838)
6 years ago
QingshuChen 2712d07644
support kunlun matmul_v2 (#27910)
6 years ago
zhang wenhui 7a58431c0a
fix norm api doc, test=develop (#27652)
6 years ago
yinhaofeng 3eb106da6d
Lookup table v2 xpu (#27888)
6 years ago
pangyoki 6150cc86e3
fix Error of gpu version paddle when CUDA device is not set properly (#27819)
6 years ago
YUNSHEN XIE 6898746f1d
disable ut (#27913)
6 years ago
zhulei 62556d5e74
Add api of KaimingUniform & KaimingNormal in paddle.nn.initializer (#27811)
6 years ago
hutuxian 3f2a6ab65d
fix error msg (#27887)
6 years ago
xiaoting ae01801f0a
Add dropout and log_loss for kunlun (#27790)
6 years ago
WangXi 50619cd842
use floyd algorithm to find meta optimizer max path, test=develop (#27867)
6 years ago
Guanghua Yu 70c8c31371
support mean,softmax_with_cross_entropy on Baidu Kunlun (#27792)
6 years ago
Chengmo 1607e87cb9
add xpu sgd & momentum (#27728)
6 years ago
Leo Chen 049696bf67
Refine the format of printing tensor (#27673)
6 years ago
hong19860320 c90d35564b
Add batch_norm and layer_norm XPU kernels (#27818)
6 years ago
xiaoting 6da7a7458b
add conv for xpu, test=kunlun (#27809)
6 years ago
huangjun12 74092635f8
Add local_response_norm in nn.functional and nn.layer (#27725)
6 years ago
Thunderbrook 04be37c57f
add xpu slice op (#27349)
6 years ago
mapingshuo 8d2cb14f98
support gradient merge with recompute, test=develop (#27834)
6 years ago
QingshuChen 79b5db135e
bug: fix mul unittest bug (#27852)
6 years ago
wanghuancoder 92708a2723
modify test_load_op save path from /tmp to ./ (#27872)
6 years ago
Chengmo 9637d963f7
update index sample (#27839)
6 years ago
ShenLiang 6d63cd2b93
add gather_op xpu, test=kunlun (#27822)
6 years ago
wangxinxin08 e6a4d1705a
modify dtype in doublegrad matmul ut (#27868)
6 years ago
Steffy-zxf 92b3a71705
Update api 2.0 for some ops
6 years ago
Zhou Wei e122e16456
fix english doc, unittest, and remove useless alias of 2.0 lr_scheduler (#27686)
6 years ago
Steffy-zxf 9215ad96ca
Update code examples for api2.0
6 years ago
Peihan af57537ec7
remove dy2static test_lac predictor run case (#27844)
6 years ago
Chengmo c5f2802d56
[paddle.fleet] Update fleetrun & ps-heter (#27472)
6 years ago
Bai Yifan 6cdf2c9604
migrate deformable_conv to deform_conv2d (#27841)
6 years ago
guofei 2e1bca99ca
Refine the gradient calculation errors caused by renaming in while_grad (#27814)
6 years ago
wanghuancoder 8fa4c09889
add load_op_xpu for Baidu Kunlun (#27817)
6 years ago
MRXLT 84d8e49de8
refine adam/strided_slice && fix doc for rmsprop/unstack (#27740)
6 years ago
joejiong 2bcb7c0a2f
Multiply allows non-tensor data input (#27690)
6 years ago
Jacek Czaja 55e63763ec
[oneDNN] adaptive pool support (#27747)
6 years ago
GaoWei8 36bb056ed6
Add flattened weight of lstm (#27192)
6 years ago
zhupengyang 659d04df2c
hsigmoid -> hsigmoid_loss/HSigmoidLoss; refine docs (#27745)
6 years ago
hong19860320 f3e2580cf0
Fix the param of swish (#27824)
6 years ago
TeslaZhao 070ac9590c
Add double grad in Squeeze and Unsqueeze (#27810)
6 years ago
Jack Zhou d4359b0f39
add the kunlun kernel for paddle 2.0
6 years ago
mapingshuo 840d54de9b
add XPU support for shape op and reshape op (#27804)
6 years ago
WangXi 0a1862d1d2
fleet combine amp dgc recompute meta optimizer (#27643)
6 years ago
Chen Weihang 9b49f02441
Polish jit.save/load design & remove paddle.SaveLoadConfig (#27623)
6 years ago
hong19860320 74d3a55072
Add Swish and ThresholdedReLU for API 2.0 (#27758)
6 years ago
wangxinxin08 ad99e638fd
add double grad op for matmul (#27776)
6 years ago
Yiqun Liu bf187c7577
Polish the documentation and examples of paddle.static.nn.fc. (#27768)
6 years ago
zhupengyang 0025e0d87b
refine APIs: brelu, hardsigmoid, hardswish, maxout (#27658)
6 years ago
zhupengyang 5098891fdf
add softmax xpu kernel (#27700)
6 years ago
Leo Chen 65c06141b6
disable_fuse_all_reduce (#27746)
6 years ago