!8774 modify api example

From: @lijiaqi0612
Reviewed-by: @youui,@liangchenghui
Signed-off-by: @liangchenghui
pull/8774/MERGE
Committed by mindspore-ci-bot via Gitee, 4 years ago
commit d6eac77ffd

@ -383,7 +383,7 @@ class SoftmaxCrossEntropyWithLogits(GraphKernel):
Sets input logits as `X`, input label as `Y`, output as `loss`. Then,
.. math::
- p_{ij} = softmax(X_{ij}) = \frac{exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)}
+ p_{ij} = softmax(X_{ij}) = \frac{\exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)}
.. math::
loss_{ij} = -\sum_j{Y_{ij} * ln(p_{ij})}
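
For reference, the two formulas above can be checked with a NumPy-only sketch (this is an editorial illustration, not part of the commit; the names `logits` and `labels` are placeholders):

import numpy as np

def softmax_cross_entropy(logits, labels):
    # p_ij = exp(x_i) / sum_j exp(x_j), computed row-wise with a stability shift
    shifted = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    # loss_ij = -sum_j Y_ij * ln(p_ij)
    return -(labels * np.log(p)).sum(axis=-1)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])   # one-hot label
print(softmax_cross_entropy(logits, labels))  # ~[0.417]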
@ -666,7 +666,7 @@ class LogSoftmax(GraphKernel):
the Log Softmax function is shown as follows:
.. math::
- \text{output}(x_i) = \log \left(\frac{exp(x_i)} {\sum_{j = 0}^{N-1}\exp(x_j)}\right),
+ \text{output}(x_i) = \log \left(\frac{\exp(x_i)} {\sum_{j = 0}^{N-1}\exp(x_j)}\right),
where :math:`N` is the length of the Tensor.
@ -674,7 +674,7 @@ class LogSoftmax(GraphKernel):
axis (int): The axis to do the Log softmax operation. Default: -1.
Inputs:
- logits (Tensor): The input of Log Softmax.
+ - **logits** (Tensor) - The input of Log Softmax.
Outputs:
Tensor, with the same type and shape as the logits.

@ -127,7 +127,7 @@ def _make_axis_range(start, end):
class EmbeddingLookup(Cell):
r"""
- Returns a slice of input tensor based on the specified indices.
+ Returns a slice of the input tensor based on the specified indices.
Note:
When 'target' is set to 'CPU', this module will use

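A minimal usage sketch of `nn.EmbeddingLookup`, assuming the constructor of this MindSpore version takes the vocabulary size and embedding size positionally and that indices are int32 (a best-effort reading of the docstring, not verified against this commit):

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

# Look up rows 1, 0, 3, 2 of a 4 x 2 embedding table (randomly initialized).
input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
embedding = nn.EmbeddingLookup(4, 2)
output = embedding(input_indices)
print(output.shape)  # (2, 2, 2): one embedding vector per index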
@ -22,7 +22,7 @@ class Exp(PowerTransform):
This Bijector performs the operation:
.. math::
- Y = exp(x).
+ Y = \exp(x).
Args:
name (str): The name of the Bijector. Default: 'Exp'.

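A small sketch of the Exp bijector's forward mapping Y = exp(x); it assumes the usual `mindspore.nn.probability.bijector` import path and a `forward` method, which may differ slightly in this version:

import mindspore
import mindspore.nn.probability.bijector as msb
from mindspore import Tensor

exp_bijector = msb.Exp()
x = Tensor([0.0, 1.0, 2.0], mindspore.float32)
y = exp_bijector.forward(x)   # element-wise exp(x): [1.0, 2.718..., 7.389...]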
@ -24,7 +24,7 @@ Examples:
>>> import mindspore.ops as ops
Note:
- - The Primitive operators in operations need to be used after instantiation.
+ - The Primitive operators in operations need to be instantiated before being used.
- The composite operators are the pre-defined combination of operators.
- The functional operators are the pre-instantiated Primitive operators, which can be used directly as a function.
- For functional operators usage, please refer to

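To illustrate the note above, a Primitive operator such as `ops.ReLU` is instantiated first and then called like a function (a minimal sketch):

import numpy as np
import mindspore
import mindspore.ops as ops
from mindspore import Tensor

relu = ops.ReLU()                       # instantiate the Primitive first
x = Tensor(np.array([-1.0, 2.0, -3.0]), mindspore.float32)
print(relu(x))                          # [0. 2. 0.]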
@ -352,7 +352,7 @@ class GradOperation(GradOperation_):
class MultitypeFuncGraph(MultitypeFuncGraph_):
"""
- Generate overloaded functions.
+ Generates overloaded functions.
MultitypeFuncGraph is a class used to generate overloaded functions, considering different types as inputs.
Initialize an `MultitypeFuncGraph` object with name, and use `register` with input types as the decorator

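A sketch of the registration pattern described above, assuming overloads can be registered by type-name strings as in the MindSpore docs of this era; the overload bodies are illustrative only:

from mindspore.ops.composite import MultitypeFuncGraph

add = MultitypeFuncGraph('add')

@add.register("Number", "Number")
def _add_scalars(x, y):
    # Overload selected when both inputs are Python scalars.
    return x + y

@add.register("Tensor", "Tensor")
def _add_tensors(x, y):
    # Overload selected when both inputs are Tensors.
    return x + y

Such a graph is typically handed to `ops.composite.HyperMap` (or used inside a Cell) so the matching overload is dispatched per input type.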
@ -171,11 +171,11 @@ def TensorDot(x1, x2, axes):
axes = 2 is the same as axes = ((0,1),(1,2)) where length of input shape is 3 for both `a` and `b`
Inputs:
- - **x1** (Tensor): First tensor in TensorDot op with datatype float16 or float32
- - **x2** (Tensor): Second tensor in TensorDot op with datatype float16 or float32
- - **axes** (Union[int, tuple(int), tuple(tuple(int)), list(list(int))]): Single value or
- tuple/list of length 2 with dimensions specified for `a` and `b` each. If single value `N` passed,
- automatically picks up first N dims from `a` input shape and last N dims from `b` input shape.
+ - **x1** (Tensor) - First tensor in TensorDot op with datatype float16 or float32
+ - **x2** (Tensor) - Second tensor in TensorDot op with datatype float16 or float32
+ - **axes** (Union[int, tuple(int), tuple(tuple(int)), list(list(int))]) - Single value or
+ tuple/list of length 2 with dimensions specified for `a` and `b` each. If single value `N` passed,
+ automatically picks up first N dims from `a` input shape and last N dims from `b` input shape.
Outputs:
Tensor, the shape of the output tensor is :math:`(N + M)`. Where :math:`N` and :math:`M` are the free axes not

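As a rough analogy for the `axes` argument, NumPy's `tensordot` uses the same pair-of-axis-lists form (its integer shorthand picks dimensions slightly differently, so this only illustrates the pair syntax and the idea of free axes):

import numpy as np

a = np.ones((3, 4, 5), dtype=np.float32)
b = np.ones((4, 5, 6), dtype=np.float32)

# Contract axis 1 of `a` with axis 0 of `b`, and axis 2 of `a` with axis 1 of `b`.
out = np.tensordot(a, b, axes=((1, 2), (0, 1)))
print(out.shape)  # (3, 6): the free axes of `a` and `b` that were not contracted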
@ -342,7 +342,7 @@ class AiCPURegOp(RegOp):
class TBERegOp(RegOp):
- """Class for TBE op info register."""
+ """Class for TBE operator information register."""
def __init__(self, op_name):
super(TBERegOp, self).__init__(op_name)

File diff suppressed because it is too large.

@ -25,7 +25,7 @@ from ..primitive import PrimitiveWithInfer, PrimitiveWithCheck, prim_attr_regist
class ReduceOp:
"""
- Operation options for reduce tensors.
+ Operation options for reducing tensors.
There are four kinds of operation options, "SUM", "MAX", "MIN", and "PROD".

@ -23,7 +23,7 @@ from ..primitive import Primitive, PrimitiveWithInfer, prim_attr_register
class ControlDepend(Primitive):
"""
- Adds control dependency relation between source and destination operation.
+ Adds control dependency relation between source and destination operations.
In many cases, we need to control the execution order of operations. ControlDepend is designed for this.
ControlDepend will instruct the execution engine to run the operations in a specific order. ControlDepend

@ -84,7 +84,7 @@ class ScalarSummary(PrimitiveWithInfer):
class ImageSummary(PrimitiveWithInfer):
"""
- Outputs image tensor to protocol buffer through image summary operator.
+ Outputs the image tensor to protocol buffer through image summary operator.
Inputs:
- **name** (str) - The name of the input variable, it must not be an empty string.
@ -167,7 +167,7 @@ class TensorSummary(PrimitiveWithInfer):
class HistogramSummary(PrimitiveWithInfer):
"""
- Outputs tensor to protocol buffer through histogram summary operator.
+ Outputs the tensor to protocol buffer through histogram summary operator.
Inputs:
- **name** (str) - The name of the input variable.
@ -209,7 +209,7 @@ class HistogramSummary(PrimitiveWithInfer):
class InsertGradientOf(PrimitiveWithInfer):
"""
- Attaches callback to graph node that will be invoked on the node's gradient.
+ Attaches callback to the graph node that will be invoked on the node's gradient.
Args:
f (Function): MindSpore's Function. Callback function.
@ -325,7 +325,7 @@ class HookBackward(PrimitiveWithInfer):
class Print(PrimitiveWithInfer):
"""
- Outputs tensor or string to stdout.
+ Outputs the tensor or string to stdout.
Note:
In pynative mode, please use python print function.
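A minimal graph-mode sketch of `ops.Print` inside a Cell's `construct`; the surrounding Cell is a hypothetical example, not from the diff:

import numpy as np
import mindspore
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor, context

context.set_context(mode=context.GRAPH_MODE)

class PrintNet(nn.Cell):
    def __init__(self):
        super(PrintNet, self).__init__()
        self.print = ops.Print()

    def construct(self, x):
        self.print('x is:', x)   # emitted to stdout when the graph runs
        return x

net = PrintNet()
net(Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32))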
@ -368,7 +368,7 @@ class Print(PrimitiveWithInfer):
class Assert(PrimitiveWithInfer):
"""
- Asserts that the given condition is true.
+ Asserts that the given condition is True.
If input condition evaluates to false, print the list of tensor in data.
Args:

@ -23,7 +23,7 @@ from ..primitive import prim_attr_register, PrimitiveWithInfer
class ScalarCast(PrimitiveWithInfer):
"""
- Cast the input scalar to another type.
+ Casts the input scalar to another type.
Inputs:
- **input_x** (scalar) - The input scalar. Only constant value is allowed.

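A short usage sketch of `ops.ScalarCast`: instantiate the Primitive, then call it with the constant scalar and the target mindspore dtype:

import mindspore
import mindspore.ops as ops

scalar_cast = ops.ScalarCast()
output = scalar_cast(255.0, mindspore.int32)
print(output)  # 255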
File diff suppressed because it is too large.

@ -174,7 +174,7 @@ class LogSoftmax(PrimitiveWithInfer):
the Log Softmax function is shown as follows:
.. math::
- \text{output}(x_i) = \log \left(\frac{exp(x_i)} {\sum_{j = 0}^{N-1}\exp(x_j)}\right),
+ \text{output}(x_i) = \log \left(\frac{\exp(x_i)} {\sum_{j = 0}^{N-1}\exp(x_j)}\right),
where :math:`N` is the length of the Tensor.
@ -293,7 +293,7 @@ class Softsign(PrimitiveWithInfer):
class ReLU(PrimitiveWithInfer):
r"""
- Computes ReLU(Rectified Linear Unit) of input tensor element-wise.
+ Computes ReLU (Rectified Linear Unit) of input tensors element-wise.
It returns :math:`\max(x,\ 0)` element-wise.
@ -330,7 +330,7 @@ class ReLU(PrimitiveWithInfer):
class ReLU6(PrimitiveWithInfer):
r"""
- Computes ReLU(Rectified Linear Unit) upper bounded by 6 of input tensor element-wise.
+ Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input tensors element-wise.
It returns :math:`\min(\max(0,x), 6)` element-wise.
@ -367,7 +367,7 @@ class ReLU6(PrimitiveWithInfer):
class ReLUV2(PrimitiveWithInfer):
r"""
- Computes ReLU(Rectified Linear Unit) of input tensor element-wise.
+ Computes ReLU (Rectified Linear Unit) of input tensors element-wise.
It returns :math:`\max(x,\ 0)` element-wise.
@ -435,7 +435,18 @@ class ReLUV2(PrimitiveWithInfer):
class Elu(PrimitiveWithInfer):
r"""
- Computes exponential linear: `alpha * (exp(x) - 1)` if x < 0, `x` otherwise.
+ Computes exponential linear:
+ if x < 0:
+ .. math::
+     \text{x} = \alpha * (\exp(\text{x}) - 1)
+ if x >= 0:
+ .. math::
+     \text{x} = \text{x}
The data type of input tensor must be float.
Args:
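For reference, the piecewise definition above can be checked with a NumPy-only sketch (editorial illustration with alpha = 1.0, not part of the commit):

import numpy as np

def elu(x, alpha=1.0):
    # alpha * (exp(x) - 1) where x < 0, x otherwise
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)

print(elu(np.array([-1.0, 0.0, 2.0])))  # [-0.632  0.     2.   ]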
@ -523,7 +534,7 @@ class Sigmoid(PrimitiveWithInfer):
Computes Sigmoid of input element-wise. The Sigmoid function is defined as:
.. math::
- \text{sigmoid}(x_i) = \frac{1}{1 + exp(-x_i)},
+ \text{sigmoid}(x_i) = \frac{1}{1 + \exp(-x_i)},
where :math:`x_i` is the element of the input.
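A minimal usage sketch for `ops.Sigmoid` (instantiate the Primitive, then call it on a Tensor):

import numpy as np
import mindspore
import mindspore.ops as ops
from mindspore import Tensor

sigmoid = ops.Sigmoid()
x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
print(sigmoid(x))  # [0.7311 0.8808 0.9526], i.e. 1 / (1 + exp(-x_i))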
@ -640,7 +651,7 @@ class Tanh(PrimitiveWithInfer):
class FusedBatchNorm(Primitive):
r"""
- FusedBatchNorm is a BatchNorm that moving mean and moving variance will be computed instead of being loaded.
+ FusedBatchNorm is a BatchNorm. Moving mean and moving variance will be computed instead of being loaded.
Batch Normalization is widely used in convolutional networks. This operation applies
Batch Normalization over input to avoid internal covariate shift as described in the
@ -848,7 +859,7 @@ class FusedBatchNormEx(PrimitiveWithInfer):
class BNTrainingReduce(PrimitiveWithInfer):
"""
- For BatchNorm operator, this operator update the moving averages for training and is used in conjunction with
+ For the BatchNorm operation this operator update the moving averages for training and is used in conjunction with
BNTrainingUpdate.
Inputs:
@ -885,7 +896,7 @@ class BNTrainingReduce(PrimitiveWithInfer):
class BNTrainingUpdate(PrimitiveWithInfer):
"""
- For BatchNorm operator, this operator update the moving averages for training and is used in conjunction with
+ For the BatchNorm operation, this operator update the moving averages for training and is used in conjunction with
BNTrainingReduce.
Args:
@ -1508,7 +1519,7 @@ class MaxPool(_Pool):
class MaxPoolWithArgmax(_Pool):
r"""
- Perform max pooling on the input Tensor and return both max values and indices.
+ Performs max pooling on the input Tensor and returns both max values and indices.
Typically the input is of shape :math:`(N_{in}, C_{in}, H_{in}, W_{in})`, MaxPool outputs
regional maximum in the :math:`(H_{in}, W_{in})`-dimension. Given kernel size
@ -1915,7 +1926,7 @@ class SoftmaxCrossEntropyWithLogits(PrimitiveWithInfer):
Sets input logits as `X`, input label as `Y`, output as `loss`. Then,
.. math::
- p_{ij} = softmax(X_{ij}) = \frac{exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)}
+ p_{ij} = softmax(X_{ij}) = \frac{\exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)}
.. math::
loss_{ij} = -\sum_j{Y_{ij} * ln(p_{ij})}
@ -1966,7 +1977,7 @@ class SparseSoftmaxCrossEntropyWithLogits(PrimitiveWithInfer):
Sets input logits as `X`, input label as `Y`, output as `loss`. Then,
.. math::
- p_{ij} = softmax(X_{ij}) = \frac{exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)}
+ p_{ij} = softmax(X_{ij}) = \frac{\exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)}
.. math::
loss_{ij} = \begin{cases} -ln(p_{ij}), &j = y_i \cr -ln(1 - p_{ij}), & j \neq y_i \end{cases}
@ -2283,7 +2294,7 @@ class RNNTLoss(PrimitiveWithInfer):
class SGD(PrimitiveWithCheck):
"""
- Computes stochastic gradient descent (optionally with momentum).
+ Computes the stochastic gradient descent. Momentum is optional.
Nesterov momentum is based on the formula from On the importance of
initialization and momentum in deep learning.
@ -2775,7 +2786,7 @@ class DropoutDoMask(PrimitiveWithInfer):
class ResizeBilinear(PrimitiveWithInfer):
r"""
- Resizes the image to certain size using bilinear interpolation.
+ Resizes an image to a certain size using the bilinear interpolation.
The resizing only affects the lower two dimensions which represent the height and width. The input images
can be represented by different data types, but the data types of output images are always float32.
@ -3067,7 +3078,7 @@ class PReLU(PrimitiveWithInfer):
class LSTM(PrimitiveWithInfer):
"""
- Performs the long short term memory(LSTM) on the input.
+ Performs the Long Short-Term Memory (LSTM) on the input.
For detailed information, please refer to `nn.LSTM`.
@ -3227,7 +3238,7 @@ class SigmoidCrossEntropyWithLogits(PrimitiveWithInfer):
class Pad(PrimitiveWithInfer):
"""
- Pads input tensor according to the paddings.
+ Pads the input tensor according to the paddings.
Args:
paddings (tuple): The shape of parameter `paddings` is (N, 2). N is the rank of input data. All elements of
@ -3367,7 +3378,7 @@ class MirrorPad(PrimitiveWithInfer):
class ROIAlign(PrimitiveWithInfer):
"""
- Computes Region of Interest (RoI) Align operator.
+ Computes the Region of Interest (RoI) Align operator.
The operator computes the value of each sampling point by bilinear interpolation from the nearby grid points on the
feature map. No quantization is performed on any coordinates involved in the RoI, its bins, or the sampling
@ -3435,7 +3446,7 @@ class ROIAlign(PrimitiveWithInfer):
class Adam(PrimitiveWithInfer):
r"""
- Updates gradients by Adaptive Moment Estimation (Adam) algorithm.
+ Updates gradients by the Adaptive Moment Estimation (Adam) algorithm.
The Adam algorithm is proposed in `Adam: A Method for Stochastic Optimization <https://arxiv.org/abs/1412.6980>`_.
@ -3643,7 +3654,7 @@ class AdamNoUpdateParam(PrimitiveWithInfer):
class FusedSparseAdam(PrimitiveWithInfer):
r"""
- Merges the duplicate value of the gradient and then updates parameters by Adaptive Moment Estimation (Adam)
+ Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (Adam)
algorithm. This operator is used when the gradient is sparse.
The Adam algorithm is proposed in `Adam: A Method for Stochastic Optimization <https://arxiv.org/abs/1412.6980>`_.
@ -3780,7 +3791,7 @@ class FusedSparseAdam(PrimitiveWithInfer):
class FusedSparseLazyAdam(PrimitiveWithInfer):
r"""
- Merges the duplicate value of the gradient and then updates parameters by Adaptive Moment Estimation (Adam)
+ Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (LazyAdam)
algorithm. This operator is used when the gradient is sparse. The behavior is not equivalent to the
original Adam algorithm, as only the current indices parameters will be updated.
@ -4815,7 +4826,7 @@ class SparseApplyAdagrad(PrimitiveWithInfer):
class SparseApplyAdagradV2(PrimitiveWithInfer):
r"""
- Updates relevant entries according to the adagrad scheme.
+ Updates relevant entries according to the adagrad scheme, one more epsilon attribute than SparseApplyAdagrad.
.. math::
accum += grad * grad
@ -5357,7 +5368,7 @@ class ApplyPowerSign(PrimitiveWithInfer):
class ApplyGradientDescent(PrimitiveWithInfer):
r"""
- Updates relevant entries according to the following formula.
+ Updates relevant entries according to the following.
.. math::
var = var - \alpha * \delta
@ -5521,7 +5532,7 @@ class ApplyProximalGradientDescent(PrimitiveWithInfer):
class LARSUpdate(PrimitiveWithInfer):
"""
- Conducts lars (layer-wise adaptive rate scaling) update on the sum of squares of gradient.
+ Conducts LARS (layer-wise adaptive rate scaling) update on the sum of squares of gradient.
Args:
epsilon (float): Term added to the denominator to improve numerical stability. Default: 1e-05.
@ -5800,7 +5811,8 @@ class SparseApplyFtrl(PrimitiveWithCheck):
class SparseApplyFtrlV2(PrimitiveWithInfer):
"""
- Updates relevant entries according to the FTRL-proximal scheme.
+ Updates relevant entries according to the FTRL-proximal scheme. This class has one more attribute, named
+ l2_shrinkage, than class SparseApplyFtrl.
All of inputs except `indices` comply with the implicit type conversion rules to make the data types consistent.
If they have different data types, lower priority data type will be converted to
@ -6362,7 +6374,7 @@ class DynamicRNN(PrimitiveWithInfer):
class InTopK(PrimitiveWithInfer):
r"""
- Whether the targets are in the top `k` predictions.
+ Determines whether the targets are in the top `k` predictions.
Args:
k (int): Specifies the number of top elements to be used for computing precision.

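A usage sketch for `ops.InTopK` with k = 3; each row of the first input holds predictions and the second input holds the target class index per row:

import numpy as np
import mindspore
import mindspore.ops as ops
from mindspore import Tensor

in_top_k = ops.InTopK(3)
predictions = Tensor(np.array([[1., 8., 5., 2., 7.],
                               [4., 9., 1., 3., 5.]]), mindspore.float32)
targets = Tensor(np.array([1, 3]), mindspore.int32)
# [ True False]: class 1 is in row 0's top 3, class 3 is not in row 1's top 3
print(in_top_k(predictions, targets))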
@ -287,7 +287,7 @@ class PrimitiveWithCheck(Primitive):
class PrimitiveWithInfer(Primitive):
"""
- PrimitiveWithInfer is the base class of primitives in python defines functions for tracking inference in python.
+ PrimitiveWithInfer is the base class of primitives in python and defines functions for tracking inference in python.
There are four method can be overide to define the infer logic of the primitive: __infer__(), infer_shape(),
infer_dtype(), and infer_value(). If __infer__() is defined in primitive, the __infer__() has highest priority
@ -464,8 +464,8 @@ def prim_attr_register(fn):
def constexpr(fn=None, get_instance=True, name=None):
"""
- Make a PrimitiveWithInfer operator that can infer the value at compile time. We can use it to define a function to
- compute constant value using the constants in the constructor.
+ Creates a PrimitiveWithInfer operator that can infer the value at compile time. We can use it to define a function
+ to compute constant value using the constants in the constructor.
Args:
fn (function): A `fn` use as the infer_value of the output operator.

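A small sketch of the `constexpr` decorator: the decorated function runs at compile time on constant arguments, so it can build constants for the graph (the helper and Cell names are made up for illustration):

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.ops import constexpr

@constexpr
def compute_scale(n):
    # Evaluated at compile time because `n` is a constant in the graph.
    return float(n * n)

class ScaleNet(nn.Cell):
    def construct(self, x):
        return x * compute_scale(2)   # multiplies by the compile-time constant 4.0

net = ScaleNet()
print(net(Tensor(np.array([1.0, 2.0]), mindspore.float32)))  # [4. 8.]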
@ -37,7 +37,7 @@ Examples:
def get_vm_impl_fn(prim):
"""
- Get the virtual implementation function by a primitive object or primitive name.
+ Gets the virtual implementation function by a primitive object or primitive name.
Args:
prim (Union[Primitive, str]): primitive object or name for operator register.
