nn_mse_loss {torch}		R Documentation

MSE loss

Description

Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input x and target y. The unreduced (i.e. with reduction set to 'none') loss can be described as:

Usage

nn_mse_loss(reduction = "mean")

Arguments

reduction

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed.

Details

\ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \quad l_n = \left( x_n - y_n \right)^2,

where N is the batch size. If reduction is not 'none' (default 'mean'), then:

\ell(x, y) = \begin{cases} \mbox{mean}(L), & \mbox{if reduction} = \mbox{'mean';} \\ \mbox{sum}(L), & \mbox{if reduction} = \mbox{'sum'.} \end{cases}

x and y are tensors of arbitrary shapes with a total of n elements each.

The mean operation still operates over all the elements, and divides by n. The division by n can be avoided if one sets reduction = 'sum'.
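
The following is a minimal sketch of the three reduction modes, guarded by torch_is_installed() like the example further below; the variable names (x, y, per_element, total, avg) are purely illustrative. It shows that 'none' keeps the element-wise squared errors, 'sum' adds them up, and 'mean' divides that sum by the total number of elements n.

if (torch_is_installed()) {
  x <- torch_randn(2, 3)
  y <- torch_randn(2, 3)

  # 'none': element-wise squared errors (x_n - y_n)^2, same shape as x
  per_element <- nn_mse_loss(reduction = "none")(x, y)

  # 'sum' and 'mean': scalar tensors, where mean = sum / n with n = 6 elements here
  total <- nn_mse_loss(reduction = "sum")(x, y)
  avg <- nn_mse_loss(reduction = "mean")(x, y)

  all.equal(avg$item(), total$item() / 6, tolerance = 1e-6)
}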

Shape

Input: (*), where * means any number of dimensions.

Target: (*), same shape as the input.

Output: a scalar; if reduction is 'none', same shape as the input.

Examples

if (torch_is_installed()) {
loss <- nn_mse_loss()
# input requires gradients so the loss can be backpropagated
input <- torch_randn(3, 5, requires_grad = TRUE)
target <- torch_randn(3, 5)
output <- loss(input, target)
# populates input$grad
output$backward()
}
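
A further hedged sketch, continuing from the example above: with the default reduction = 'mean', the gradient accumulated in input$grad equals 2 * (input - target) / n, with n = 15 elements for the 3 x 5 input.

if (torch_is_installed()) {
# analytic gradient of the mean-reduced loss with respect to the input
expected <- (2 * (input - target) / 15)$detach()
all.equal(as_array(input$grad), as_array(expected), tolerance = 1e-6)
}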

[Package torch version 0.13.0]