# MindSpore 1.2.0 Release Notes

## MindSpore

### API Change

#### Backwards Incompatible Change

##### Python API

###### Turn `ops.MakeRefKey` into an internal interface ([!12010](https://gitee.com/mindspore/mindspore/pulls/12010))

Previously, `MakeRefKey` was an external interface that was never used. It is now an internal interface with the same usage. We do not recommend using this interface, and its description will be removed from the official website.

###### `ops.ApplyFtrl`, `ops.ApplyMomentum`, `ops.ApplyRMSProp`, `ops.ApplyCenteredRMSProp` change the output on the Ascend backend from multiple to a single ([!11895](https://gitee.com/mindspore/mindspore/pulls/11895))

Previously, the number of outputs of these operators differed across backends. To unify their definitions, their output on the Ascend backend is changed from multiple outputs to a single output.

# MindSpore 1.1.1 Release Notes

## MindSpore

### Major Features and Improvements

#### NewModels

- [STABLE] BGCF: a Bayesian Graph Collaborative Filtering (BGCF) framework used to model the uncertainty in the user-item interaction graph and thus recommend accurate and diverse items, on the Amazon recommendation dataset. (Ascend)
- [STABLE] GRU: a recurrent neural network architecture, like the LSTM (Long Short-Term Memory), on the Multi30K dataset. (Ascend)
- [STABLE] FastText: a simple and efficient text classification algorithm, on the AG's news topic classification, DBPedia Ontology classification and Yelp Review Polarity datasets. (Ascend)
- [STABLE] LSTM: a recurrent neural network architecture used to learn word vectors for sentiment analysis, on the aclImdb_v1 dataset. (Ascend)
- [STABLE] SimplePoseNet: a convolution-based neural network for human pose estimation and tracking, on the COCO2017 dataset. (Ascend)

#### FrontEnd

- [BETA] Support Tensor Fancy Index Getitem with tuple and list. (Ascend/GPU/CPU)

### Backwards Incompatible Change

#### Python API

##### `ops.AvgPool`, `ops.MaxPool`, `ops.MaxPoolWithArgmax` change attr names from 'ksize', 'padding' to 'kernel_size', 'pad_mode' ([!11350](https://gitee.com/mindspore/mindspore/pulls/11350))

Previously, the kernel size and pad mode attrs of the pooling operators were named "ksize" and "padding", which was somewhat puzzling and inconsistent with the convolution operators. They are therefore renamed to "kernel_size" and "pad_mode".
1.1.0:

```python
>>> import mindspore.ops as ops
>>>
>>> avg_pool = ops.AvgPool(ksize=2, padding='same')
>>> max_pool = ops.MaxPool(ksize=2, padding='same')
>>> max_pool_with_argmax = ops.MaxPoolWithArgmax(ksize=2, padding='same')
```

1.1.1:

```python
>>> import mindspore.ops as ops
>>>
>>> avg_pool = ops.AvgPool(kernel_size=2, pad_mode='same')
>>> max_pool = ops.MaxPool(kernel_size=2, pad_mode='same')
>>> max_pool_with_argmax = ops.MaxPoolWithArgmax(kernel_size=2, pad_mode='same')
```
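For reference, a minimal usage sketch with the new attribute names; the input shape and the `strides` value below are illustrative and not taken from the original notes:

```python
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>>
>>> x = Tensor(np.ones([1, 3, 4, 4]), mindspore.float32)  # NCHW input
>>> avg_pool = ops.AvgPool(kernel_size=2, strides=2, pad_mode='same')
>>> print(avg_pool(x).shape)
(1, 3, 2, 2)
```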
##### `ops.TensorAdd` is renamed to `ops.Add`

1.1.0:

```python
>>> import mindspore.ops as ops
>>>
>>> add = ops.TensorAdd()
```

1.1.1:

```python
>>> import mindspore.ops as ops
>>>
>>> add = ops.Add()
```
##### `ops.Gelu`, `ops.GeluGrad`, `ops.FastGelu`, `ops.FastGeluGrad` are renamed to `ops.GeLU`, `ops.GeLUGrad`, `ops.FastGeLU`, `ops.FastGeLUGrad`

1.1.0:

```python
>>> import mindspore.ops as ops
>>>
>>> gelu = ops.Gelu()
>>> gelu_grad = ops.GeluGrad()
>>> fast_gelu = ops.FastGelu()
>>> fast_gelu_grad = ops.FastGeluGrad()
```

1.1.1:

```python
>>> import mindspore.ops as ops
>>>
>>> gelu = ops.GeLU()
>>> gelu_grad = ops.GeLUGrad()
>>> fast_gelu = ops.FastGeLU()
>>> fast_gelu_grad = ops.FastGeLUGrad()
```
##### `ops.GatherV2` is renamed to `ops.Gather`

1.1.0:

```python
>>> import mindspore.ops as ops
>>>
>>> gather = ops.GatherV2()
```

1.1.1:

```python
>>> import mindspore.ops as ops
>>>
>>> gather = ops.Gather()
```
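A minimal usage sketch of the renamed operator; the input values, indices and axis below are illustrative:

```python
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>>
>>> gather = ops.Gather()
>>> params = Tensor(np.array([1.0, 2.0, 3.0, 4.0]), mindspore.float32)
>>> indices = Tensor(np.array([0, 2]), mindspore.int32)
>>> print(gather(params, indices, 0))  # gather along axis 0
[1. 3.]
```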
##### `ops.Pack` and `ops.Unpack` are renamed to `ops.Stack` and `ops.Unstack`

1.1.0:

```python
>>> import mindspore.ops as ops
>>>
>>> pack = ops.Pack()
>>> unpack = ops.Unpack()
```

1.1.1:

```python
>>> import mindspore.ops as ops
>>>
>>> stack = ops.Stack()
>>> unstack = ops.Unstack()
```
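A minimal usage sketch of the renamed operators; the input tensors and the `axis` value are illustrative:

```python
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>>
>>> x1 = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> x2 = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> stacked = ops.Stack(axis=0)((x1, x2))    # stack two (3,) tensors into one (2, 3) tensor
>>> print(stacked.shape)
(2, 3)
>>> outputs = ops.Unstack(axis=0)(stacked)   # split back into a tuple of (3,) tensors
>>> print(len(outputs))
2
```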
##### The note of `ops.ControlDepend` is updated to state that it is deprecated and `ops.Depend` should be used instead

1.1.0:

```python
Note:
    This operation does not work in `PYNATIVE_MODE`.
```

1.1.1:

```python
Note:
    This operation does not work in `PYNATIVE_MODE`.
    `ControlDepend` is deprecated from version 1.1 and will be removed in a future version, use `Depend` instead.
```
##### The documentation of `ops.Depend` is updated to describe the migration from `ControlDepend` and to add an example

1.1.0:

```python
Depend is used for processing side-effect operations.

Inputs:
    - **value** (Tensor) - the real value to return for depend operator.
    - **expr** (Expression) - the expression to execute with no outputs.

Outputs:
    Tensor, the value passed by last operator.

Supported Platforms:
    ``Ascend`` ``GPU`` ``CPU``
```

1.1.1:

```python
Depend is used for processing dependency operations.

In some side-effect scenarios, we need to ensure the execution order of operators.
In order to ensure that operator A is executed before operator B, it is recommended
to insert the Depend operator between operators A and B.

Previously, the ControlDepend operator was used to control the execution order.
Since the ControlDepend operator will be deprecated from version 1.2, it is
recommended to use the Depend operator instead. The replacement method is as follows::

    a = A(x)                --->        a = A(x)
    b = B(y)                --->        y = Depend(y, a)
    ControlDepend(a, b)     --->        b = B(y)

Inputs:
    - **value** (Tensor) - the real value to return for depend operator.
    - **expr** (Expression) - the expression to execute with no outputs.

Outputs:
    Tensor, the value passed by last operator.

Supported Platforms:
    ``Ascend`` ``GPU`` ``CPU``

Examples:
    >>> import numpy as np
    >>> import mindspore
    >>> import mindspore.nn as nn
    >>> import mindspore.ops.operations as P
    >>> from mindspore import Tensor
    >>> class Net(nn.Cell):
    ...     def __init__(self):
    ...         super(Net, self).__init__()
    ...         self.softmax = P.Softmax()
    ...         self.depend = P.Depend()
    ...
    ...     def construct(self, x, y):
    ...         mul = x * y
    ...         y = self.depend(y, mul)
    ...         ret = self.softmax(y)
    ...         return ret
    ...
    >>> x = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
    >>> y = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
    >>> net = Net()
    >>> output = net(x, y)
    >>> print(output)
    [[0.2 0.2 0.2 0.2 0.2]
     [0.2 0.2 0.2 0.2 0.2]
     [0.2 0.2 0.2 0.2 0.2]
     [0.2 0.2 0.2 0.2 0.2]]
```
#### C++ API

##### The namespace `mindspore::api` is changed to `mindspore`

1.1.0:

```c++
namespace ms = mindspore::api;
```

1.1.1:

```c++
namespace ms = mindspore;
```
##### The device target and device id are set through `GlobalContext` instead of `Context::Instance()`

1.1.0:

```c++
ms::Context::Instance().SetDeviceTarget(ms::kDeviceTypeAscend310).SetDeviceID(0);
```

1.1.1:

```c++
ms::GlobalContext::SetGlobalDeviceTarget(ms::kDeviceTypeAscend310);
ms::GlobalContext::SetGlobalDeviceID(0);
```
##### `ms::Tensor` is renamed to `ms::MSTensor`

1.1.0:

```c++
ms::Tensor a;
```

1.1.1:

```c++
ms::MSTensor a;
```
##### `Model` receives the context at construction, and `Build` no longer takes options

1.1.0:

```c++
ms::Model model(graph_cell);
model.Build(model_options);
```

1.1.1:

```c++
ms::Model model(graph_cell, model_context);
model.Build();
```
##### Model input information is obtained through `Model::GetInputs`

1.1.0:

```c++
std::vector
```

1.1.1:

```c++
auto inputs = model.GetInputs();
std::cout << "Input 0 name: " << inputs[0].Name() << std::endl;
```
1.1.0:

```c++
std::vector
```

1.1.1:

```c++
std::vector
```
# MindSpore 1.1.0 Release Notes

## MindSpore

### Backwards Incompatible Change

#### Python API

##### `weight_init` can be given as a string or an `Initializer` instance directly, instead of a tensor created by `initializer`

1.0.1:

```python
>>> import mindspore.nn as nn
>>> from mindspore.common import initializer
>>> from mindspore import dtype as mstype
>>>
>>> def conv3x3(in_channels, out_channels):
>>>     weight = initializer('XavierUniform', shape=(3, 2, 32, 32), dtype=mstype.float32)
>>>     return nn.Conv2d(in_channels, out_channels, weight_init=weight, has_bias=False, pad_mode="same")
```

1.1.0:

```python
>>> import mindspore.nn as nn
>>> from mindspore.common.initializer import XavierUniform
>>>
>>> # 1) using a string
>>> def conv3x3(in_channels, out_channels):
>>>     return nn.Conv2d(in_channels, out_channels, weight_init='XavierUniform', has_bias=False, pad_mode="same")
>>>
>>> # 2) using a subclass of class Initializer
>>> def conv3x3(in_channels, out_channels):
>>>     return nn.Conv2d(in_channels, out_channels, weight_init=XavierUniform(), has_bias=False, pad_mode="same")
```
1.0.1:

```python
>>> import mindspore.nn as nn
>>> from mindspore.common import initializer
>>> from mindspore.common.initializer import XavierUniform
>>>
>>> weight_init_1 = XavierUniform(gain=1.1)
>>> conv1 = nn.Conv2d(3, 6, weight_init=weight_init_1)
>>> weight_init_2 = XavierUniform(gain=1.1)
>>> conv2 = nn.Conv2d(6, 10, weight_init=weight_init_2)
```

1.1.0:

```python
>>> import mindspore.nn as nn
>>> from mindspore.common import initializer
>>> from mindspore.common.initializer import XavierUniform
>>>
>>> weight_init = XavierUniform(gain=1.1)
>>> conv1 = nn.Conv2d(3, 6, weight_init=weight_init)
>>> conv2 = nn.Conv2d(6, 10, weight_init=weight_init)
```
##### `nn.LinSpace` is replaced by `ops.LinSpace`

1.0.1:

```python
>>> from mindspore import nn
>>>
>>> start = 1
>>> stop = 10
>>> num = 5
>>> linspace = nn.LinSpace(start, stop, num)
>>> output = linspace()
```

1.1.0:

```python
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore import ops
>>>
>>> linspace = ops.LinSpace()
>>> start = Tensor(1, mindspore.float32)
>>> stop = Tensor(10, mindspore.float32)
>>> num = 5
>>> output = linspace(start, stop, num)
```
##### The execution target of a sparse optimizer is set through the `target` attribute instead of `add_prim_attr`

1.0.1:

```python
>>> from mindspore.nn import Adam
>>>
>>> net = LeNet5()
>>> optimizer = Adam(filter(lambda x: x.requires_grad, net.get_parameters()))
>>> optimizer.sparse_opt.add_prim_attr("primitive_target", "CPU")
```

1.1.0:

```python
>>> from mindspore.nn import Adam
>>>
>>> net = LeNet5()
>>> optimizer = Adam(filter(lambda x: x.requires_grad, net.get_parameters()))
>>> optimizer.target = 'CPU'
```
##### A quantization-aware training network is exported with `mindspore.export` instead of `mindspore.train.quant.quant.export`

1.0.1:

```python
>>> from mindspore.train.quant import quant
>>>
>>> network = LeNetQuant()
>>> inputs = Tensor(np.ones([1, 1, 32, 32]), mindspore.float32)
>>> quant.export(network, inputs, file_name="lenet_quant.mindir", file_format='MINDIR')
lenet_quant.mindir
```

1.1.0:

```python
>>> from mindspore import export
>>>
>>> network = LeNetQuant()
>>> inputs = Tensor(np.ones([1, 1, 32, 32]), mindspore.float32)
>>> export(network, inputs, file_name="lenet_quant", file_format='MINDIR', quant_mode='AUTO')
lenet_quant.mindir
```
##### The `activation` argument of `nn.Dense` accepts a layer or operator instance

1.0.1:

```python
>>> import mindspore.nn as nn
>>>
>>> dense = nn.Dense(1, 1, activation='relu')
```

1.1.0:

```python
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>> dense = nn.Dense(1, 1, activation=nn.ReLU())
>>> dense = nn.Dense(1, 1, activation=ops.ReLU())
```
##### `Tensor.size()` and `Tensor.dim()` become the properties `Tensor.size` and `Tensor.ndim`

1.0.1:

```python
>>> from mindspore import Tensor
>>>
>>> Tensor((1,2,3)).size()
>>> Tensor((1,2,3)).dim()
```

1.1.0:

```python
>>> from mindspore import Tensor
>>>
>>> Tensor((1,2,3)).size
>>> Tensor((1,2,3)).ndim
```
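As a quick illustration of the new properties (the values shown are simply those of a 1-D tensor with three elements):

```python
>>> from mindspore import Tensor
>>>
>>> t = Tensor((1, 2, 3))
>>> t.size   # total number of elements
3
>>> t.ndim   # number of dimensions
1
```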
##### `nn.EmbeddingLookup` adds a `sparse` parameter

1.0.1:

```python
>>> from mindspore.nn import EmbeddingLookup
>>>
>>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
>>> result = EmbeddingLookup(4,2)(input_indices)
>>> print(result.shape)
(2, 2, 2)
```

1.1.0:

```python
>>> from mindspore.nn import EmbeddingLookup
>>>
>>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
>>> result = EmbeddingLookup(4, 2, sparse=False)(input_indices)
>>> print(result.shape)
(2, 2, 2)
```
##### The `power` argument of `msb.PowerTransform` is passed as a float

1.0.1:

```python
>>> import mindspore.nn.probability.bijector as msb
>>>
>>> power = 2
>>> bijector = msb.PowerTransform(power=power)
```

1.1.0:

```python
>>> import mindspore.nn.probability.bijector as msb
>>>
>>> power = 2.0
>>> bijector = msb.PowerTransform(power=power)
```
##### `msb.GumbelCDF` no longer takes a `dtype` argument

1.0.1:

```python
>>> import mindspore.nn.probability.bijector as msb
>>> from mindspore import dtype as mstype
>>>
>>> bijector = msb.GumbelCDF(loc=0.0, scale=1.0, dtype=mstype.float32)
```

1.1.0:

```python
>>> import mindspore.nn.probability.bijector as msb
>>>
>>> bijector = msb.GumbelCDF(loc=0.0, scale=1.0)
```
##### `Conv2dBnAct` and `DenseBnAct` are imported from `mindspore.nn` instead of `mindspore.nn.layer.quant`

1.0.1:

```python
>>> from mindspore.nn.layer.quant import Conv2dBnAct, DenseBnAct
```

1.1.0:

```python
>>> from mindspore.nn import Conv2dBnAct, DenseBnAct
```