Commit Graph

180 Commits (1dbc4e9eafba5c693a95ada87b3b3372aded7c94)

Author | SHA1 | Message | Date
panyifeng | 44e74ad5aa | Apply indexed_slices | 5 years ago
leilei_snow | 9b21420b3e | update SSIM loss, add MSSSIM loss feature; add their ut testcases. | 5 years ago
peixu_ren | bef1fc7f19 | add sample functions in normal and bernoulli distributions | 5 years ago
Xun Deng | 0aa26c1815 | add high level abstract class Distribution and two example classes: | 5 years ago
panyifeng | d6635bbbe2 | Add IndexedSlices | 5 years ago
wangnan39@huawei.com | 172728a6a6 | support weight decay for sparse optimizer | 5 years ago
zhaozhenlong | 71d33b087e | limit ssim input img type to fp32 and fp16 | 5 years ago
Ziyan | 41ddc153a6 | modify lars interface | 5 years ago
panyifeng | 3c2057297e | support multi param for tuple grad | 5 years ago
lilei | 497067d7b2 | add sparse proximal ada grad optimizer | 5 years ago
simson | ca988e9e69 | fix the condition when activation name is 0 | 5 years ago
liuxiao | aa73abc2f7 | Add image.CentralCrop | 5 years ago
mindspore-ci-bot | 2d84011504 | !2071 optimizer support loss scale for sparse situation | 5 years ago
wangnan39@huawei.com | d4e3d69f37 | support loss scale for sparse situation | 5 years ago
kingfo | 9708e58259 | fix TupleToArray & Cast operator issue | 5 years ago
mindspore-ci-bot | 3536185f5b | !2007 add lazy adam optimizer and support sparse adam & ftrl for cpu backend | 5 years ago
wangnan39@huawei.com | 4042f16ce4 | add lazy adam optim and support sparse adam & ftrl for cpu backend | 5 years ago
mindspore-ci-bot | 5c7cb7bd71 | !2023 add op CosineEmbeddingLoss | 5 years ago
zhaozhenlong | 19c5921c06 | composed op CosineEmbeddingLoss | 5 years ago
mindspore-ci-bot | f859dfecc8 | !1920 SupportPynativeIndexing | 5 years ago
huangdongrun | 9522f59b87 | support for tensor indexing in pynative | 5 years ago
mindspore-ci-bot | 9dfb1011fe | !1854 add SparseApplyAdam and SparseApplyLazyAdam ops | 5 years ago
wangnan39@huawei.com | de21dbdaef | add ops SparseApplyAdam and SparseApplyLazyAdam | 5 years ago
mindspore-ci-bot | 3b8edd5a5b | !1918 sparse grad for gatherv2 | 5 years ago
panyifeng | acaa66a738 | sparse grad for gatherv2 | 5 years ago
lilei | 36d9e353a5 | add proximal_ada_grad optimizer | 5 years ago
mindspore-ci-bot | c82a8bf483 | !1678 modify print | 5 years ago
jinyaohui | 5e43edc474 | clean pylint | 5 years ago
wangnan39@huawei.com | c9b7d95c2c | fix lr check bug in AdamWeightDecayDynamicLR | 5 years ago
chenhaozhe | 435fc12e28 | optimize clip_norm | 5 years ago
jinyaohui | 86d197dfeb | clean pylint | 5 years ago
mindspore-ci-bot | 19ce0c372a | !1257 Implicit type conversion | 5 years ago
wangnan39@huawei.com | 810ccf80d8 | fix_bug_in_check_lamb_warmup_step | 5 years ago
candanzg | 2429da19fb | implicit type conversion | 5 years ago
chenhaozhe | b6aceddeab | update bert scripts according to rules of modelzoo | 5 years ago
liuxiao | ebbccca78b | pylint clean | 5 years ago
liuxiao | f4542f810b | pylint clean | 5 years ago
liuxiao | 382a0124c3 | pylint clean | 5 years ago
mindspore-ci-bot | 62c716b68e | !1349 Fix some functions in group parameters and optimizer | 5 years ago
jinyaohui | fbdba6e4da | clean pylint | 5 years ago
guohongzilong | 2d2f9ba8fd | fix group parameter code for check | 5 years ago
jinyaohui | 5a914994ba | clean pylint | 5 years ago
jiangjinsheng | e45532b78c | fixed transpose | 5 years ago
candanzg | 2cc85bdc93 | Support weight compile according to shape | 5 years ago
jinyaohui | 26fd75895d | pylint warning clean | 5 years ago
mindspore-ci-bot | 66667d727e | !1034 Gpu Support Dropout operator | 5 years ago
chenzomi | 661f9dfaf8 | add dropout primitive | 5 years ago
mindspore-ci-bot | 8a45ab1125 | !906 fix a bug to support dilation greater than 1 in conv2dTranspose ops | 5 years ago
mindspore-ci-bot | deae380969 | !637 Learning rate and weight decay making group params | 5 years ago
guohongzilong | 824bc30a94 | learning rate and weight decay support group mode | 5 years ago
mindspore-ci-bot | 3d3b9d5474 | !840 [BUG] fix conv2dtranspose bug | 5 years ago
jinyaohui | 88e763a98f | modify conv2dtranspose | 5 years ago
zhaojichen | 039c75af8e | fix avgpool and add check | 5 years ago
yangyongjie | 3f1c6b7b47 | solve the problem when dilation greater than 1 in Conv2dTranspose ops | 5 years ago
zhaozhenlong | 66e7a36846 | ImageGradients check 4d | 5 years ago
wangnan39@huawei.com | 7f602016f4 | add parameter verification for rmsprop, and modify default value in annotation | 5 years ago
zhaozhenlong | c88edfb31d | psnr check two inputs same shape and type | 5 years ago
mindspore-ci-bot | 818acd46d4 | !569 fix bug in checkpoint when save scalar | 5 years ago
wangnan39@huawei.com | f38d18c665 | fix bug in checkpoint when save scalar | 5 years ago
mindspore-ci-bot | 67057d1309 | !541 add average pooling 1D | 5 years ago
mindspore-ci-bot | 72f42fc37c | !170 Add prim name to error message for operators in nn_ops.py | 5 years ago
wangnan39@huawei.com | b812b18c02 | support update parameter for vm | 5 years ago
fary86 | 6dd72f654a | Add prim name to error message for nn_ops.py | 5 years ago
zhaojichen | 94c99998ae | add AvgPooling layer | 5 years ago
fary86 | 8cbbbd950e | Add cell name to error message | 5 years ago
zhaozhenlong | aa8fbcc06e | add cell psnr | 5 years ago
zhaojichen | 04c522d0c6 | Add Group Normalization | 5 years ago
zhaojichen | 0b7de6968f | Add Group Normalization | 5 years ago
zhaojichen | ebe6efff71 | Add Group Normalization | 5 years ago
zhaozhenlong | 6a2cf4b6e6 | ssim impl code | 5 years ago
mindspore-ci-bot | 7ffb8bb19f | !250 Add nn.pad to support three modes | 5 years ago
gaojing | 2db3e64ff2 | add operation | 5 years ago
root | 7d700295f8 | add dynamic lr and enhance optim | 5 years ago
chenzomi | d64f662c76 | quantization aware training frontend operators define. | 5 years ago
buxue | 7541d3b067 | Develop op MaxPoolWithArgMax | 5 years ago
zhaozhenlong | f9d180d413 | add api image gradients | 5 years ago
mindspore-ci-bot | 352c6faf85 | !18 enable use float type learning rate in lars optimizer | 5 years ago
buxue | 0da0bdcf40 | Fix bug structure output when there is depend whose first input is constant in outputs | 5 years ago
Ziyan | 4cbcd8e907 | enable use float type learning rate in lars optimizer | 5 years ago
zhunaipan | 930a1fb0a8 | initial version | 5 years ago