Commit Graph

153 Commits (88709569546b702b9dd84711b85facd04ffb0900)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| mindspore-ci-bot | 8870956954 | !2441 add fake quant test case for gpu | 5 years ago |
| chenzomi | 8873f9dc7e | add fake quant test case for gpu | 5 years ago |
| mindspore-ci-bot | a2cd05339f | !2180 Gpu Gelu kernel support fp16 | 5 years ago |
| mindspore-ci-bot | 0327d7e79b | !2424 Add aicpu op: CTCLoss, ReverseSequence and CropAndResize | 5 years ago |
| mindspore-ci-bot | c9b8a8da0a | !2369 add cpu reduce op and cpu softmax_cross_entropy_with_logits op | 5 years ago |
| xutianchun | 2bfc86f5b7 | Add aicpu op: ReverseSequence, CropAndResize, CTCLoss | 5 years ago |
| gong chen | a6dfa281ea | Init GraphKernel. | 5 years ago |
| mindspore-ci-bot | d57decc8a3 | !2338 Gpu Minimum & Maximum kernels support int32 | 5 years ago |
| mindspore-ci-bot | 88a3c0fa53 | !2379 remove ftrl operator st on pynative mode | 5 years ago |
| lizhenyu | eb68c9953d | change ftrl operator st | 5 years ago |
| pkuliuliu | acf46bafbb | add Normal op | 5 years ago |
| baihuawei | 862522376f | add reduce and softmax_cross_entropy_with_logits | 5 years ago |
| wilfChen | 480bf4151b | Gpu Minimum & Maximum kernels support int32 | 5 years ago |
| yanzhenxiang2020 | 8621c032d9 | add pack op for aicpu | 5 years ago |
| mindspore-ci-bot | a9d06edae9 | !2282 remove _quant_op.py from __init__.py | 5 years ago |
| mindspore-ci-bot | fce37a5fbe | !2281 add Sigmoid and SigmoidGrad operation of GPU | 5 years ago |
| wilfChen | 8f4cd76582 | gpu Gelu kernel support fp16 | 5 years ago |
| chenzomi | bbce6faff9 | remove _quant_ops.py from __init__.py | 5 years ago |
| mindspore-ci-bot | 2e002ab64c | !2292 gpu fix all nop node graph execute | 5 years ago |
| limingqi107 | 0f4397cece | fix all nop node graph execute | 5 years ago |
| lizhenyu | ea0cd5ccdd | add Sigmoid and SigmoidGrad operation of GPU | 5 years ago |
| mindspore-ci-bot | 74c3e15675 | !2194 fix FakeQuantPerLayer/FakeQuantPerLayerGrad symmetric=True calculation error bug | 5 years ago |
| mindspore-ci-bot | 1e12b07afd | !2251 Add implementation of SparseApplyProximalAdagrad cpu kernel | 5 years ago |
| mindspore-ci-bot | 19e66f06e2 | !2150 Gpu Tanh kernel support fp16 | 5 years ago |
| yujianfeng | c3871d98cc | Add implementation of SparseApplyProximalAdagrad cpu kernel | 5 years ago |
| mindspore-ci-bot | fe797aaf10 | !2229 add ftrl optimizer | 5 years ago |
| mindspore-ci-bot | 95d887a35b | !2226 add adam op for wide&deep model | 5 years ago |
| mindspore-ci-bot | c4863683ef | !2235 add SigmoidCrossEntropyWithLogitsGrad operation | 5 years ago |
| mindspore-ci-bot | 116ed509bf | !2234 add SigmoidCrossEntropyWithLogits op | 5 years ago |
| lizhenyu | 636b8e2b88 | add SigmoidCrossEntropyWithLogitsGrad op | 5 years ago |
| mindspore-ci-bot | 4642df207a | !2210 gpu optimize the max device memory config | 5 years ago |
| lizhenyu | 694a8213b7 | add adam optimizer | 5 years ago |
| lizhenyu | ac2217dbae | add SigmoidCrossEntropyWithLogits op | 5 years ago |
| lizhenyu | c3360a84cd | add ftrl optimizer | 5 years ago |
| wilfChen | 9201ea5ed2 | replace tanh implement with cudnn | 5 years ago |
| limingqi107 | 55b3557c0d | gpu optimize the max device memory config | 5 years ago |
| 王东旭 | 4e09ae83eb | fix FakeQuantPerLayer/FakeQuantPerLayerGrad symmetric bug | 5 years ago |
| liuxiao | df63a3195d | fix input value check for SparseApplyFtrl and SparseApplyAdagrad | 5 years ago |
| mindspore-ci-bot | 8593c4d841 | !2051 support host mpi | 5 years ago |
| chenjianping | 6034f9c1e2 | support host reduce scatter and mpi config | 5 years ago |
| mindspore-ci-bot | d4a7c87b22 | !2093 GPU add argmaxwithvalue | 5 years ago |
| VectorSL | 17377912ba | gpu add argmaxwithvalue | 5 years ago |
| buxue | 66bbdb4a31 | change tensor dtype and shape from function to attr | 5 years ago |
| mindspore-ci-bot | 87fa15de80 | !2021 GPU add akg kernel greaterequal notequal | 5 years ago |
| VectorSL | cf2fc1cecf | gpu add notequal greaterequal akg kernel | 5 years ago |
| buxue | 0cd57ddc5d | check arg is tensor with vm backend | 5 years ago |
| mindspore-ci-bot | d6cc7089fc | !1888 Add cpu kernel implement of sparse adam | 5 years ago |
| yujianfeng | c956dfff51 | Add SparseAdam and SparseLazyAdam cpu kernel | 5 years ago |
| mindspore-ci-bot | c4b3534913 | !1905 update cpu lstm | 5 years ago |
| baihuawei | 9c74e39b12 | update cpu lstm | 5 years ago |