mindspore-ci-bot | 8634675e2d | !14499 [GraphKernel] split UMonad in inputs of op (From: @wenfangpei; Reviewed-by: @dayschan, @ckey_dou, @gaoxiong1; Signed-off-by: @gaoxiong1) | 4 years ago
wenfangpei | 0085a273e7 | split UMonad in inputs of op | 4 years ago
mindspore-ci-bot | 18d79d35b6 | !14498 [GraphKernel] remove redundant cast bias for matmul (From: @lingyunli63; Reviewed-by: @gaoxiong1, @ckey_dou; Signed-off-by: @ckey_dou) | 4 years ago
lingyunli63 | 8b3823b22c | optimize Matmul | 4 years ago
mindspore-ci-bot | b8e35c663f | !14508 add float64 support to SigmoidCrossEntropyWithLogits gpu (From: @TFbunny; Reviewed-by: @tom__chen, @robingrosman; Signed-off-by: @robingrosman) | 4 years ago
mindspore-ci-bot | efb53fb9c0 | !14183 Support SparseTensorDenseMatmul for CPU (From: @xuguoyang5566) | 4 years ago
TFBunny | 4de6b25d23 | add float64 support to SigmoidCrossEntropyWithLogits and Grad | 4 years ago
xuguoyang | 7df6bfe7dd | support sparse tensor dense matmul for CPU | 4 years ago
mindspore-ci-bot | 69526df01e | !14314 [GraphKernel] unify graph kernel pass add_atomic_clean on Ascend and GPU back-end (From: @looop5; Reviewed-by: @gaoxiong1, @dylangeng; Signed-off-by: @dylangeng) | 4 years ago
mindspore-ci-bot | ddf75da542 | !14085 [GraphKernel] add some expander ops (From: @chenlei_autodiff) | 4 years ago
looop5 | 76d322464d | unify graph kernel pass add_atomic_clean on Ascend and GPU back-end; refactor CanActivateAtomicAdd; use smart pointer | 4 years ago
chenlei_autodiff | f4289d40f3 | add graph kernel expander ops | 4 years ago
mindspore-ci-bot | 7149e8c2c9 | !14045 [Graph Kernel] add compare test case (From: @zengzitao; Reviewed-by: @gaoxiong1) | 4 years ago
zengzitao | 72c6dad4ba | add compare_test case in gpu ci and update akg submodule | 4 years ago
mindspore-ci-bot | ad140a8bf4 | !14084 [GraphKernel] support matmul on D (From: @lingyunli63) | 4 years ago
lingyunli63 | 4b966ed40d | support matmul on D | 4 years ago
mindspore-ci-bot | f1fb0d9f3a | !13833 Add SparseToDense (From: @ZhengQihao3f3f3f) | 4 years ago
mindspore-ci-bot | d2ecf71ace | !13693 Slice op only supports input_x int32 and float32 at CPU backend (From: @wangyanling10) | 4 years ago
zhengqihao | 27f508760b | Add SparseToDense op | 4 years ago
w00535372 | 761a2e2127 | Bug fix for ISSUE #I3CN9Q | 4 years ago
wangyanling | fb64e14265 | fix slice op bug | 4 years ago
zhuyuxiao | a11287c332 | adagrad: support output on gpu | 4 years ago
mindspore-ci-bot | 38b48fb0e8 | !13608 Add CPU LogSoftMax (From: @zhao_ting_v; Reviewed-by: @wuxuejian; Signed-off-by: @wuxuejian) | 4 years ago
mindspore-ci-bot | 5504116718 | !13647 BroadcastTo add general -1 dim behavior (From: @peilin-wang; Reviewed-by: @liangchenghui, @wuxuejian; Signed-off-by: @liangchenghui) | 4 years ago
zhaoting | 8754aaeb74 | add CPU LogSoftMax | 4 years ago
mindspore-ci-bot | 7bb35d8ce4 | !13635 add float64 support to Addn gpu (From: @TFbunny; Reviewed-by: @liangchenghui, @wuxuejian; Signed-off-by: @liangchenghui) | 4 years ago
Peilin Wang | 6cead43bdf | add general -1 dim behavior for BroadcastTo op | 4 years ago
wangyanling | 6268f660fb | add cpu broadcast_to op | 4 years ago
TFBunny | b780e5737c | add float64 to Addn gpu | 4 years ago
mindspore-ci-bot | 5b95409022 | !13512 add some expander ops (From: @zengzitao) | 4 years ago
mindspore-ci-bot | 2fadad0875 | !13121 expander lamb_apply_optimizer_assign (From: @wenfangpei) | 4 years ago
mindspore-ci-bot | 8e8f3043f9 | !12115 IR operators of GPU and CPU are unified as batchnorm (From: @ding_fei_fei) | 4 years ago
wenfangpei | 043a558ae2 | expander lamb_apply_optimizer_assign | 4 years ago
zengzitao | d0a656f3cd | add some expander ops | 4 years ago
dingpeifei | 87e41aaeee | IR operators of GPU and CPU are unified as batchnorm | 4 years ago
mindspore-ci-bot | b20e964760 | !13429 Refactor some cpu ops (From: @wuxuejian; Reviewed-by: @kisnwang, @liangchenghui; Signed-off-by: @liangchenghui) | 4 years ago
mindspore-ci-bot | b61aa9b4cf | !13342 Throw exception when tensor with 0 shape is constructed (From: @liangzhibo) | 4 years ago
mindspore-ci-bot | 0fdf5c54e3 | !13069 Add dynamic shape testcases for Sub (From: @TFbunny; Reviewed-by: @tom__chen, @robingrosman; Signed-off-by: @robingrosman) | 4 years ago
wuxuejian | 5498103990 | Refactor some cpu ops | 4 years ago
mindspore-ci-bot | 2545e2c5f1 | !13335 Fix check of input dims for BiasAdd and attr of maxpool3d (From: @liu_xiao_93; Reviewed-by: @liangchenghui; Signed-off-by: @liangchenghui) | 4 years ago
liuxiao93 | a6b7de5df9 | fix check of input dims for BiasAdd | 4 years ago
l00591931 | bd777e0710 | throw exception when tensor has zero in shape | 4 years ago
mindspore-ci-bot | 18580ac506 | !13140 Operator[StridedSlice] does not support bool tensor in cpu env (From: @wangyanling10) | 4 years ago
wangyanling | 42aa96a1a2 | support bool data type for StridedSlice op | 4 years ago
yanzhenxiang2020 | f6677628c0 | add StackPush and StackPop for aicpu | 4 years ago
mindspore-ci-bot | 5a6bb251b0 | !12724 add Dropout2D and rename Dropout3d to Dropout3D (From: @yanzhenxiang2020) | 4 years ago
TFBunny | 4d35303265 | support string in GPU print | 4 years ago
TFBunny | 3f397503c0 | add dynamic shape testcases for Sub | 4 years ago
mindspore-ci-bot | 70024d3ab1 | !12189 Add CPU Pad op (From: @wanyiming) | 4 years ago
mindspore-ci-bot | 802e756c9b | !12897 Add float64 support to reducemax grad gpu op (From: @peilin-wang; Reviewed-by: @liangchenghui, @wuxuejian; Signed-off-by: @liangchenghui) | 4 years ago