@@ -481,11 +481,67 @@ def conv2d(input,
act=None,
name=None):
"""
This function creates the op for a 2-dimensional Convolution.
It is configured with the filter parameters (size, dimensionality, etc.),
stride, and the other settings of a Convolution operation.
This function can also append an activation on top of the
conv-2d output, if one is specified in the input parameters.

**Convolution2D Layer**

The convolution2D layer calculates the output based on the input, filter,
strides, paddings, dilations, and groups parameters. Input(Input) and Output(Output)
are in NCHW format, where N is batch size, C is the number of channels, H is the height
of the feature, and W is the width of the feature.
For details of the convolution layer, please refer to UFLDL's `convolution
<http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/>`_ .
If bias_attr and an activation type are provided, bias is added to the output of the
convolution, and the corresponding activation function is applied to the final result.

For each input :math:`X`, the equation is:

.. math::

    Out = \sigma (W \ast X + b)

In the above equation:

* :math:`X`: Input value, a tensor with NCHW format.
* :math:`W`: Filter value, a tensor with MCHW format.
* :math:`b`: Bias value, added to the output of the convolution.
* :math:`\sigma`: Activation function.
* :math:`Out`: Output value, the shape of :math:`Out` and :math:`X` may be different.
Example:

    Input:

        Input shape: :math:`(N, C_{in}, H_{in}, W_{in})`

        Filter shape: :math:`(C_{out}, C_{in}, H_f, W_f)`

    Output:

        Output shape: :math:`(N, C_{out}, H_{out}, W_{out})`

    Where

    .. math::

        H_{out} = \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\
        W_{out} = \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1
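For instance, assuming the default stride 1, padding 0, and dilation 1, together with the
32x32 input and 3x3 filter used in the usage example further below, the formula above
works out as:

.. code-block:: python

    # Worked example of the output-size formula (assumed values: a 32x32
    # input, a 3x3 filter, stride 1, padding 0, dilation 1).
    H_in, H_f, stride, padding, dilation = 32, 3, 1, 0, 1
    H_out = (H_in + 2 * padding - (dilation * (H_f - 1) + 1)) // stride + 1
    # H_out == 30, and by symmetry W_out == 30, so the output shape is
    # (N, num_filters, 30, 30).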
All the input variables are passed in as local variables to the LayerHelper
constructor.
Args:
    input(Variable): The input tensor. The format of the input tensor is NCHW.
    num_filters(int): Number of filters.
    filter_size(list/int): Filter size of the Conv2d Layer.
    stride(list/int, optional): Strides(h_s, w_s) of the Conv2d Layer. Default: 1
    padding(list/int, optional): Paddings(h_pad, w_pad) of the Conv2d Layer. Default: 0
    groups(int, optional): The groups number of the Conv2d Layer. Default: 1
    param_attr(ParamAttr): The parameters to the Conv2d Layer. Default: None
    bias_attr(ParamAttr): Bias parameter for the Conv2d layer. Default: None
    act(str): Activation type. Default: None
    name(str): Name/alias of the function. Default: None
Returns:
    Variable: The tensor variable storing the convolution and
    non-linearity activation result.
Examples:
    .. code-block:: python

        import paddle.fluid as fluid

        data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
        conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")
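    A minimal sketch of a follow-up call (hypothetical shapes and values) using the list
    forms of filter_size, stride, and padding documented above, reusing the data variable
    from the example:

    .. code-block:: python

        # Hypothetical configuration: rectangular 3x5 filter with explicit
        # per-dimension stride and padding lists (assumed values).
        conv2d_rect = fluid.layers.conv2d(
            input=data, num_filters=4, filter_size=[3, 5],
            stride=[1, 2], padding=[1, 2], act="relu")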
"""
if stride is None: