Commit Graph

971 Commits (3ece8dd0901ec79334cdf738943f4a8bb0499554)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| heleiwang | 3ece8dd090 | 1. support get_all_edges, get_nodes_from_edge, get_sampled_neighbors, get_neg_sampled_neighbors and graph_info API | 5 years ago |
| mindspore-ci-bot | 444d9484d7 | !1916 add ProximalAdagrad Optimizer | 5 years ago |
| mindspore-ci-bot | 38109a386b | !1957 fix cmakelists | 5 years ago |
| jiangjinsheng | 709704d339 | fixed CMakeLists.txt | 5 years ago |
| lilei | 36d9e353a5 | add proximal_ada_grad optimizer | 5 years ago |
| mindspore-ci-bot | bfc18f3adc | !1873 synchronize latest ascend software 04 Jun 2020 | 5 years ago |
| mindspore-ci-bot | 5eb95599f6 | !1874 Updates to string tensor | 5 years ago |
| yanghaoran | 8da4c1a763 | synchronize latest ascend software 04 Jun 2020 | 5 years ago |
| mindspore-ci-bot | 0e7839826e | !1945 [bug]fix bug in '=', use signature to support auto cast in assign. | 5 years ago |
| hesham | f837ddc956 | - Bug when empty strings sent to Python | 5 years ago |
| Wei Luning | ee8420aefa | Make assign operator to use the signature. | 5 years ago |
| zhaoting | b16a552d41 | Revert "Revert "add pattern AdjustAllReduceMulAdduse the old opadd test case for bugtemp fix try"" | 5 years ago |
| mindspore-ci-bot | beb714d2d0 | !1911 add a function to charge the node input and output is a scalar | 5 years ago |
| liuxiao | 6856c2ac7a | Adapt ops ApplyProximalAdagrad for GE | 5 years ago |
| mindspore-ci-bot | cf04a7b190 | !1741 summary support cpu | 5 years ago |
| yujianfeng | 2ff9e74d07 | Add unique process for duplicated indices | 5 years ago |
| wenkai | ab6b6add0b | cpu support summary | 5 years ago |
| WilliamLian | 9808e47663 | change checkAicpu to CheckAICPU & add charge Scalar function to charge the input or output is scalar | 5 years ago |
| mindspore-ci-bot | 10fd781b15 | !1831 Add order parameter function in group params | 5 years ago |
| mindspore-ci-bot | b350ee0c00 | !1824 add single batchnorm fission pass | 5 years ago |
| mindspore-ci-bot | c82a8bf483 | !1678 modify print | 5 years ago |
| guohongzilong | 85a06b00c6 | add order function in group params | 5 years ago |
| mindspore-ci-bot | 0a897b0ce7 | !1865 add inv,invgrad&invert for vm | 5 years ago |
| huanghui | b4c0ed4b36 | add signle batchnorm fission pass | 5 years ago |
| zhaojichen | cdb7ec937b | add inv,invgrad&invert for vm | 5 years ago |
| zhaozhenlong | 1f342fb926 | add op BroadcastTo | 5 years ago |
| mindspore-ci-bot | 65eacc9593 | !1787 optimize transdata for pynative mode | 5 years ago |
| mindspore-ci-bot | 1c640face9 | !1826 fix bug when check learning_rate in AdamWeightDecayDynamicLR | 5 years ago |
| chujinjin | 7465abc798 | optimize transdata for pynative | 5 years ago |
| mindspore-ci-bot | bd34c6ec8b | !1853 Fix initializer | 5 years ago |
| mindspore-ci-bot | 1973594bd1 | !1774 [MD] minddataset support padding samples | 5 years ago |
| mindspore-ci-bot | b236beae28 | !1615 convert constant bool tensor to bool | 5 years ago |
| liyong | feff8899ac | support padding samples | 5 years ago |
| mindspore-ci-bot | c51d90d84e | !1767 Move LayerNormGrad split pass ahead of kernel select | 5 years ago |
| mindspore-ci-bot | 5c21616293 | !1807 Implemented Ngram TensorOp for dataset | 5 years ago |
| huangdongrun | 9081041199 | fix initiliazer | 5 years ago |
| mindspore-ci-bot | 769ae609b4 | !1808 consistent design for num_samples | 5 years ago |
| Zirui Wu | dbf9936ec4 | Implemented n-gram for dataset TensorOp | 5 years ago |
| huangdongrun | beacc26077 | * add isconstant primitive | 5 years ago |
| mindspore-ci-bot | fac6c56db5 | !1851 Add batch norm fusion pattern for mix precision | 5 years ago |
| jinyaohui | 5e43edc474 | clean pylint | 5 years ago |
| yujianfeng | e87ac6525e | Add batch norm fusion pattern for mix precision | 5 years ago |
| mindspore-ci-bot | 97524b9ddd | !1823 support vm for ConfusionMatrix | 5 years ago |
| huanghui | cf87218fb7 | place layernormgrad split pass before kernel select | 5 years ago |
| jiangjinsheng | fc4cf5a470 | add vm support for ConfusionMatrix | 5 years ago |
| mindspore-ci-bot | 0c1674496f | !1819 Restricting modify non_Parameter class members | 5 years ago |
| Jamie Nisbet | 51bc0c0460 | consistent design for num_samples | 5 years ago |
| mindspore-ci-bot | bc7a3a1bef | !1806 Add crop size check to python RandomCrop op | 5 years ago |
| wangnan39@huawei.com | c9b7d95c2c | fix lr check bug in AdamWeightDecayDynamicLR | 5 years ago |
| buxue | 94c9019d8e | restricting modify non_Parameter class members | 5 years ago |