nn_conv2d {torch}                                        R Documentation

Conv2D module

Description

Applies a 2D convolution over an input signal composed of several input planes.

Usage

nn_conv2d(
  in_channels,
  out_channels,
  kernel_size,
  stride = 1,
  padding = 0,
  dilation = 1,
  groups = 1,
  bias = TRUE,
  padding_mode = "zeros"
)

Arguments

in_channels

(int): Number of channels in the input image

out_channels

(int): Number of channels produced by the convolution

kernel_size

(int or tuple): Size of the convolving kernel

stride

(int or tuple, optional): Stride of the convolution. Default: 1

padding

(int or tuple or string, optional): Padding added to both sides of the input. It can be a string ('valid' or 'same'), a single int, or a tuple of ints giving the amount of implicit padding applied on both sides. Default: 0

dilation

(int or tuple, optional): Spacing between kernel elements. Default: 1

groups

(int, optional): Number of blocked connections from input channels to output channels. Default: 1

bias

(bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

padding_mode

(string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

Details

In the simplest case, the output value of the layer with input size (N, C_{\mbox{in}}, H, W) and output (N, C_{\mbox{out}}, H_{\mbox{out}}, W_{\mbox{out}}) can be precisely described as:

\mbox{out}(N_i, C_{\mbox{out}_j}) = \mbox{bias}(C_{\mbox{out}_j}) + \sum_{k = 0}^{C_{\mbox{in}} - 1} \mbox{weight}(C_{\mbox{out}_j}, k) \star \mbox{input}(N_i, k)

where \star is the valid 2D cross-correlation operator, N is the batch size, C denotes the number of channels, H is the height of the input planes in pixels, and W is the width in pixels.
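As a minimal sketch of this relationship, the module's forward pass gives the same result as the functional counterpart nnf_conv2d() applied with the module's own weight and bias:

x <- torch_randn(1, 3, 8, 8)
m <- nn_conv2d(in_channels = 3, out_channels = 2, kernel_size = 3)
# the module applies the cross-correlation described above
torch_allclose(m(x), nnf_conv2d(x, m$weight, m$bias))  # TRUE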

The parameters kernel_size, stride, padding and dilation can either be:

a single int, in which case the same value is used for the height and width dimensions

a tuple of two ints, in which case the first int is used for the height dimension and the second int for the width dimension
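For illustration, a small sketch: a single int gives a square kernel, a length-two vector gives distinct height and width values, and (with stride = 1) padding = "same" keeps the spatial size unchanged:

m_square <- nn_conv2d(8, 8, kernel_size = 3)       # 3 x 3 kernel
m_rect <- nn_conv2d(8, 8, kernel_size = c(3, 5))   # 3 x 5 kernel
m_same <- nn_conv2d(8, 8, kernel_size = 5, padding = "same")
x <- torch_randn(1, 8, 32, 32)
m_same(x)$shape  # 1 8 32 32: spatial size preserved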

Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known in the literature as a depthwise convolution. In other words, for an input of size (N, C_{in}, H_{in}, W_{in}), a depthwise convolution with a depthwise multiplier K can be constructed with the arguments (in_channels = C_{in}, out_channels = C_{in} \times K, ..., groups = C_{in}).
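A sketch of such a depthwise convolution, here with C_{in} = 16 and a depthwise multiplier K = 2:

x <- torch_randn(1, 16, 28, 28)
m_dw <- nn_conv2d(in_channels = 16, out_channels = 16 * 2,
                  kernel_size = 3, groups = 16)
m_dw(x)$shape  # 1 32 26 26: each input channel gets its own K filters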

In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting backends_cudnn_deterministic = TRUE.

Shape

Input: (N, C_{\mbox{in}}, H_{\mbox{in}}, W_{\mbox{in}})

Output: (N, C_{\mbox{out}}, H_{\mbox{out}}, W_{\mbox{out}}) where

H_{\mbox{out}} = \lfloor (H_{\mbox{in}} + 2 \times \mbox{padding}[0] - \mbox{dilation}[0] \times (\mbox{kernel\_size}[0] - 1) - 1) / \mbox{stride}[0] + 1 \rfloor

W_{\mbox{out}} = \lfloor (W_{\mbox{in}} + 2 \times \mbox{padding}[1] - \mbox{dilation}[1] \times (\mbox{kernel\_size}[1] - 1) - 1) / \mbox{stride}[1] + 1 \rfloor
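A quick numerical check of these formulas, as a sketch reusing the non-square configuration from the Examples below:

m <- nn_conv2d(16, 33, c(3, 5), stride = c(2, 1), padding = c(4, 2))
x <- torch_randn(20, 16, 50, 100)
m(x)$shape
# H_out = floor((50 + 2*4 - 1*(3 - 1) - 1)/2 + 1) = 28
# W_out = floor((100 + 2*2 - 1*(5 - 1) - 1)/1 + 1) = 100
# so the output shape is 20 33 28 100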

Attributes

weight (Tensor): the learnable weights of the module, of shape (out_channels, in_channels / groups, kernel_size[0], kernel_size[1]).

bias (Tensor): the learnable bias of the module, of shape (out_channels). Only present if bias = TRUE.

Examples

if (torch_is_installed()) {

# With square kernels and equal stride
m <- nn_conv2d(16, 33, 3, stride = 2)
# non-square kernels and unequal stride and with padding
m <- nn_conv2d(16, 33, c(3, 5), stride = c(2, 1), padding = c(4, 2))
# non-square kernels and unequal stride and with padding and dilation
m <- nn_conv2d(16, 33, c(3, 5), stride = c(2, 1), padding = c(4, 2), dilation = c(3, 1))
input <- torch_randn(20, 16, 50, 100)
output <- m(input)
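output$shape # 20 33 26 100, following the formula in the Shape section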
}

[Package torch version 0.13.0]