# MindSpore 1.2.0 Release Notes

## MindSpore

### API Change

#### Backwards Incompatible Change

##### Python API

###### Turn `ops.MakeRefKey` into an internal interface ([!12010](https://gitee.com/mindspore/mindspore/pulls/12010))

Previously, `MakeRefKey` was an external interface that saw no external use. It is now an internal interface with the same usage. We do not recommend using this interface, and we will remove its documentation from the official website.

###### `ops.ApplyFtrl`, `ops.ApplyMomentum`, `ops.ApplyRMSProp`, `ops.ApplyCenteredRMSProp` change the output on the Ascend backend from multiple tensors to a single tensor ([!11895](https://gitee.com/mindspore/mindspore/pulls/11895))

Previously, the number of outputs of these operators differed between backends. To unify their definitions, their output on the Ascend backend is changed from multiple tensors to a single tensor (a hedged calling-convention sketch follows the `mindspore.numpy` comparison below).

###### `P.FusedBatchNorm`, `P.FusedBatchNormEx` deleted ([!12115](https://gitee.com/mindspore/mindspore/pulls/12115))

The `FusedBatchNorm` and `FusedBatchNormEx` interfaces have been deleted. Please use a batch normalization operator instead (a hedged replacement sketch also follows the comparison below).

###### `MetaTensor` deleted ([!10325](https://gitee.com/mindspore/mindspore/pulls/10325))

The `MetaTensor` interface has been deleted. Its functionality has been integrated into `Tensor`.

###### `mindspore.numpy.array()`, `mindspore.numpy.asarray()`, `mindspore.numpy.asfarray()`, `mindspore.numpy.copy()` now support GRAPH mode, but can no longer accept `numpy.ndarray` as input arguments ([!12726](https://gitee.com/mindspore/mindspore/pulls/12726))

Previously, these interfaces could accept a `numpy.ndarray` argument and convert it to a `Tensor`, but they could not be used in GRAPH mode. Because the MindSpore parser cannot parse `numpy.ndarray` in a JIT graph, `numpy.ndarray` support had to be removed in order to support these interfaces in graph mode. Users can still use `Tensor` to convert a `numpy.ndarray` to a tensor.
1.1.0:

```python
>>> import mindspore.numpy as mnp
>>> import numpy
>>>
>>> nd_array = numpy.array([1, 2, 3])
>>> tensor = mnp.asarray(nd_array)  # this line cannot be parsed in GRAPH mode
```

1.2.0:

```python
>>> import mindspore.numpy as mnp
>>>
>>> tensor = mnp.asarray([1, 2, 3])  # this line can be parsed in GRAPH mode
```
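As noted above, the `Apply*` optimizer operators now return a single tensor on Ascend. Below is a minimal sketch of the unified calling convention using `ops.ApplyMomentum`; the shapes and the learning-rate and momentum values are illustrative assumptions, not part of the release notes.

```python
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>>
>>> var = Parameter(Tensor(np.ones([2, 2]), mindspore.float32), name="var")
>>> accum = Parameter(Tensor(np.zeros([2, 2]), mindspore.float32), name="accum")
>>> grad = Tensor(np.full((2, 2), 0.1), mindspore.float32)
>>> # As of 1.2.0 this call yields one tensor on Ascend as well,
>>> # matching the GPU and CPU backends.
>>> output = ops.ApplyMomentum()(var, accum, 0.01, grad, 0.9)
```

For the deleted `P.FusedBatchNorm`/`P.FusedBatchNormEx`, the following is a minimal replacement sketch assuming `nn.BatchNorm2d` as the stand-in batch normalization layer; the feature count and input shape are illustrative.

```python
>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>>
>>> # nn.BatchNorm2d wraps the regular BatchNorm kernels that replace
>>> # the deleted fused variants.
>>> bn = nn.BatchNorm2d(num_features=3)
>>> x = Tensor(np.ones([1, 3, 32, 32]), mindspore.float32)
>>> output = bn(x)
>>> print(output.shape)
(1, 3, 32, 32)
```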
###### `mindspore.numpy` interfaces remove support for the `out` and `where` keyword arguments

1.1.0:

```python
>>> import mindspore.numpy as np
>>>
>>> a = np.ones((3, 3))
>>> b = np.ones((3, 3))
>>> out = np.zeros((3, 3))
>>> where = np.asarray([[True, False, True], [False, False, True], [True, True, True]])
>>> res = np.add(a, b, out=out, where=where)  # `out` cannot be used as a reference, therefore it is misleading
```

1.2.0:

```python
>>> import mindspore.numpy as np
>>>
>>> a = np.ones((3, 3))
>>> b = np.ones((3, 3))
>>> out = np.zeros((3, 3))
>>> where = np.asarray([[True, False, True], [False, False, True], [True, True, True]])
>>> res = np.add(a, b)
>>> out = np.where(where, x=res, y=out)  # instead of np.add(a, b, out=out, where=where)
```
###### `nn.MatMul` is deprecated, use `ops.matmul` instead

1.1.0:

```python
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>>
>>> x = Tensor(np.ones((2, 3)).astype(np.float32))
>>> y = Tensor(np.ones((3, 4)).astype(np.float32))
>>> nn.MatMul()(x, y)
```

1.2.0:

```python
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>>
>>> x = Tensor(np.ones((2, 3)).astype(np.float32))
>>> y = Tensor(np.ones((3, 4)).astype(np.float32))
>>> ops.matmul(x, y)
```
###### `ops.AvgPool`, `ops.MaxPool`, `ops.MaxPoolWithArgmax` rename the attributes `ksize` and `padding` to `kernel_size` and `pad_mode`

1.1.0:

```python
>>> import mindspore.ops as ops
>>>
>>> avg_pool = ops.AvgPool(ksize=2, padding='same')
>>> max_pool = ops.MaxPool(ksize=2, padding='same')
>>> max_pool_with_argmax = ops.MaxPoolWithArgmax(ksize=2, padding='same')
```

1.1.1:

```python
>>> import mindspore.ops as ops
>>>
>>> avg_pool = ops.AvgPool(kernel_size=2, pad_mode='same')
>>> max_pool = ops.MaxPool(kernel_size=2, pad_mode='same')
>>> max_pool_with_argmax = ops.MaxPoolWithArgmax(kernel_size=2, pad_mode='same')
```
###### `ops.TensorAdd` renamed to `ops.Add`

1.1.0:

```python
>>> import mindspore.ops as ops
>>>
>>> add = ops.TensorAdd()
```

1.1.1:

```python
>>> import mindspore.ops as ops
>>>
>>> add = ops.Add()
```
###### `ops.Gelu`, `ops.GeluGrad`, `ops.FastGelu`, `ops.FastGeluGrad` renamed to `ops.GeLU`, `ops.GeLUGrad`, `ops.FastGeLU`, `ops.FastGeLUGrad`

1.1.0:

```python
>>> import mindspore.ops as ops
>>>
>>> gelu = ops.Gelu()
>>> gelu_grad = ops.GeluGrad()
>>> fast_gelu = ops.FastGelu()
>>> fast_gelu_grad = ops.FastGeluGrad()
```

1.1.1:

```python
>>> import mindspore.ops as ops
>>>
>>> gelu = ops.GeLU()
>>> gelu_grad = ops.GeLUGrad()
>>> fast_gelu = ops.FastGeLU()
>>> fast_gelu_grad = ops.FastGeLUGrad()
```
###### `ops.GatherV2` renamed to `ops.Gather`

1.1.0:

```python
>>> import mindspore.ops as ops
>>>
>>> gather = ops.GatherV2()
```

1.1.1:

```python
>>> import mindspore.ops as ops
>>>
>>> gather = ops.Gather()
```
###### `ops.Pack`, `ops.Unpack` renamed to `ops.Stack`, `ops.Unstack`

1.1.0:

```python
>>> import mindspore.ops as ops
>>>
>>> pack = ops.Pack()
>>> unpack = ops.Unpack()
```

1.1.1:

```python
>>> import mindspore.ops as ops
>>>
>>> stack = ops.Stack()
>>> unstack = ops.Unstack()
```
###### `ops.ControlDepend` is deprecated, use `ops.Depend` instead

1.1.0:

```python
Note:
    This operation does not work in `PYNATIVE_MODE`.
```

1.1.1:

```python
Note:
    This operation does not work in `PYNATIVE_MODE`.
    `ControlDepend` is deprecated from version 1.1 and will be removed in a future version, use `Depend` instead.
```
###### `ops.Depend` documentation updated to describe dependency handling and migration from `ControlDepend`

1.1.0:

```python
Depend is used for processing side-effect operations.

Inputs:
    - **value** (Tensor) - the real value to return for depend operator.
    - **expr** (Expression) - the expression to execute with no outputs.

Outputs:
    Tensor, the value passed by last operator.

Supported Platforms:
    ``Ascend`` ``GPU`` ``CPU``
```

1.1.1:

```python
Depend is used for processing dependency operations.

In some side-effect scenarios, we need to ensure the execution order of operators.
In order to ensure that operator A is executed before operator B, it is recommended
to insert the Depend operator between operators A and B.

Previously, the ControlDepend operator was used to control the execution order.
Since the ControlDepend operator will be deprecated from version 1.2, it is
recommended to use the Depend operator instead. The replacement method is as follows::

    a = A(x)                --->        a = A(x)
    b = B(y)                --->        y = Depend(y, a)
    ControlDepend(a, b)     --->        b = B(y)

Inputs:
    - **value** (Tensor) - the real value to return for depend operator.
    - **expr** (Expression) - the expression to execute with no outputs.

Outputs:
    Tensor, the value passed by last operator.

Supported Platforms:
    ``Ascend`` ``GPU`` ``CPU``

Examples:
    >>> import numpy as np
    >>> import mindspore
    >>> import mindspore.nn as nn
    >>> import mindspore.ops.operations as P
    >>> from mindspore import Tensor
    >>> class Net(nn.Cell):
    ...     def __init__(self):
    ...         super(Net, self).__init__()
    ...         self.softmax = P.Softmax()
    ...         self.depend = P.Depend()
    ...
    ...     def construct(self, x, y):
    ...         mul = x * y
    ...         y = self.depend(y, mul)
    ...         ret = self.softmax(y)
    ...         return ret
    ...
    >>> x = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
    >>> y = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
    >>> net = Net()
    >>> output = net(x, y)
    >>> print(output)
    [[0.2 0.2 0.2 0.2 0.2]
     [0.2 0.2 0.2 0.2 0.2]
     [0.2 0.2 0.2 0.2 0.2]
     [0.2 0.2 0.2 0.2 0.2]]
```
##### C++ API

###### The C++ API namespace changed from `mindspore::api` to `mindspore`

1.1.0:

```c++
namespace ms = mindspore::api;
```

1.1.1:

```c++
namespace ms = mindspore;
```
###### `Context::Instance` replaced by the static `GlobalContext` interfaces

1.1.0:

```c++
ms::Context::Instance().SetDeviceTarget(ms::kDeviceTypeAscend310).SetDeviceID(0);
```

1.1.1:

```c++
ms::GlobalContext::SetGlobalDeviceTarget(ms::kDeviceTypeAscend310);
ms::GlobalContext::SetGlobalDeviceID(0);
```
###### `ms::Tensor` renamed to `ms::MSTensor`

1.1.0:

```c++
ms::Tensor a;
```

1.1.1:

```c++
ms::MSTensor a;
```
###### `Model` takes the model context at construction instead of passing options to `Build`

1.1.0:

```c++
ms::Model model(graph_cell);
model.Build(model_options);
```

1.1.1:

```c++
ms::Model model(graph_cell, model_context);
model.Build();
```
###### `Model::GetInputsInfo` replaced by `Model::GetInputs`, which returns tensors directly

1.1.0:

```c++
std::vector<std::string> names;
std::vector<ms::DataType> types;
std::vector<std::vector<int64_t>> shapes;
std::vector<size_t> mem_sizes;
model.GetInputsInfo(&names, &types, &shapes, &mem_sizes);
std::cout << "Input 0 name: " << names[0] << std::endl;
```

1.1.1:

```c++
auto inputs = model.GetInputs();
std::cout << "Input 0 name: " << inputs[0].Name() << std::endl;
```
###### `Model::Predict` parameter type changed from `ms::Buffer` to `ms::MSTensor`

1.1.0:

```c++
std::vector<ms::Buffer> inputs;
std::vector<ms::Buffer> outputs;
model.Predict(inputs, &outputs);
```

1.1.1:

```c++
std::vector<ms::MSTensor> inputs;
std::vector<ms::MSTensor> outputs;
model.Predict(inputs, &outputs);
```
##### Python API

###### `nn.Conv2d` `weight_init`: pass an initializer name string or an `Initializer` subclass instance directly

1.0.1:

```python
>>> import mindspore.nn as nn
>>> from mindspore.common import initializer
>>> from mindspore import dtype as mstype
>>>
>>> def conv3x3(in_channels, out_channels):
...     weight = initializer('XavierUniform', shape=(3, 2, 32, 32), dtype=mstype.float32)
...     return nn.Conv2d(in_channels, out_channels, weight_init=weight, has_bias=False, pad_mode="same")
```

1.1.0:

```python
>>> import mindspore.nn as nn
>>> from mindspore.common.initializer import XavierUniform
>>>
>>> # 1) using a string
>>> def conv3x3(in_channels, out_channels):
...     return nn.Conv2d(in_channels, out_channels, weight_init='XavierUniform', has_bias=False, pad_mode="same")
>>>
>>> # 2) using a subclass of Initializer
>>> def conv3x3(in_channels, out_channels):
...     return nn.Conv2d(in_channels, out_channels, weight_init=XavierUniform(), has_bias=False, pad_mode="same")
```
###### A single `Initializer` instance can be reused to initialize multiple layers

1.0.1:

```python
>>> import mindspore.nn as nn
>>> from mindspore.common.initializer import XavierUniform
>>>
>>> weight_init_1 = XavierUniform(gain=1.1)
>>> conv1 = nn.Conv2d(3, 6, weight_init=weight_init_1)
>>> weight_init_2 = XavierUniform(gain=1.1)
>>> conv2 = nn.Conv2d(6, 10, weight_init=weight_init_2)
```

1.1.0:

```python
>>> import mindspore.nn as nn
>>> from mindspore.common.initializer import XavierUniform
>>>
>>> weight_init = XavierUniform(gain=1.1)
>>> conv1 = nn.Conv2d(3, 6, weight_init=weight_init)
>>> conv2 = nn.Conv2d(6, 10, weight_init=weight_init)
```
###### `nn.LinSpace` removed, use `ops.LinSpace` instead

1.0.1:

```python
>>> from mindspore import nn
>>>
>>> start = 1
>>> stop = 10
>>> num = 5
>>> linspace = nn.LinSpace(start, stop, num)
>>> output = linspace()
```

1.1.0:

```python
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore import ops
>>>
>>> linspace = ops.LinSpace()
>>> start = Tensor(1, mindspore.float32)
>>> stop = Tensor(10, mindspore.float32)
>>> num = 5
>>> output = linspace(start, stop, num)
```
###### Optimizers set the execution target of sparse operators through the `target` attribute instead of `add_prim_attr`

1.0.1:

```python
>>> from mindspore.nn import Adam
>>>
>>> net = LeNet5()
>>> optimizer = Adam(filter(lambda x: x.requires_grad, net.get_parameters()))
>>> optimizer.sparse_opt.add_prim_attr("primitive_target", "CPU")
```

1.1.0:

```python
>>> from mindspore.nn import Adam
>>>
>>> net = LeNet5()
>>> optimizer = Adam(filter(lambda x: x.requires_grad, net.get_parameters()))
>>> optimizer.target = 'CPU'
```
###### `mindspore.train.quant.quant.export` replaced by `mindspore.export` with a `quant_mode` argument

1.0.1:

```python
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.train.quant import quant
>>>
>>> network = LeNetQuant()
>>> inputs = Tensor(np.ones([1, 1, 32, 32]), mindspore.float32)
>>> quant.export(network, inputs, file_name="lenet_quant.mindir", file_format='MINDIR')  # produces lenet_quant.mindir
```

1.1.0:

```python
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, export
>>>
>>> network = LeNetQuant()
>>> inputs = Tensor(np.ones([1, 1, 32, 32]), mindspore.float32)
>>> export(network, inputs, file_name="lenet_quant", file_format='MINDIR', quant_mode='AUTO')  # produces lenet_quant.mindir
```
###### `nn.Dense` `activation` accepts a `Cell` or `Primitive` instance instead of a string

1.0.1:

```python
>>> import mindspore.nn as nn
>>>
>>> dense = nn.Dense(1, 1, activation='relu')
```

1.1.0:

```python
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>> dense = nn.Dense(1, 1, activation=nn.ReLU())
>>> dense = nn.Dense(1, 1, activation=ops.ReLU())
```
###### `Tensor.size()` and `Tensor.dim()` replaced by the properties `Tensor.size` and `Tensor.ndim`

1.0.1:

```python
>>> from mindspore import Tensor
>>>
>>> Tensor((1, 2, 3)).size()
>>> Tensor((1, 2, 3)).dim()
```

1.1.0:

```python
>>> from mindspore import Tensor
>>>
>>> Tensor((1, 2, 3)).size
>>> Tensor((1, 2, 3)).ndim
```
###### `nn.EmbeddingLookup` adds a `sparse` argument

1.0.1:

```python
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.nn import EmbeddingLookup
>>>
>>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
>>> result = EmbeddingLookup(4, 2)(input_indices)
>>> print(result.shape)
(2, 2, 2)
```

1.1.0:

```python
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.nn import EmbeddingLookup
>>>
>>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
>>> result = EmbeddingLookup(4, 2, sparse=False)(input_indices)
>>> print(result.shape)
(2, 2, 2)
```
###### `msb.PowerTransform` takes a float `power` instead of an integer

1.0.1:

```python
>>> import mindspore.nn.probability.bijector as msb
>>>
>>> power = 2
>>> bijector = msb.PowerTransform(power=power)
```

1.1.0:

```python
>>> import mindspore.nn.probability.bijector as msb
>>>
>>> power = 2.0
>>> bijector = msb.PowerTransform(power=power)
```
###### `msb.GumbelCDF` removes the `dtype` argument

1.0.1:

```python
>>> import mindspore.nn.probability.bijector as msb
>>> from mindspore import dtype as mstype
>>>
>>> bijector = msb.GumbelCDF(loc=0.0, scale=1.0, dtype=mstype.float32)
```

1.1.0:

```python
>>> import mindspore.nn.probability.bijector as msb
>>>
>>> bijector = msb.GumbelCDF(loc=0.0, scale=1.0)
```
###### `Conv2dBnAct`, `DenseBnAct` moved from `mindspore.nn.layer.quant` to `mindspore.nn`

1.0.1:

```python
>>> from mindspore.nn.layer.quant import Conv2dBnAct, DenseBnAct
```

1.1.0:

```python
>>> from mindspore.nn import Conv2dBnAct, DenseBnAct
```