nn_gelu {torch}    R Documentation

GELU module

Description

Applies the Gaussian Error Linear Units function:

\mbox{GELU}(x) = x * \Phi(x)

Usage

nn_gelu(approximate = "none")

Arguments

approximate

the GELU approximation algorithm to use: 'none' or 'tanh'. With 'tanh', a faster tanh-based approximation is used instead of the exact Gaussian CDF. Default: 'none'.
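
As a minimal sketch (assuming torch is installed), the two settings can be compared on the same input; the tanh variant closely tracks the exact form:

    library(torch)

    x <- torch_randn(5)
    y_exact <- nn_gelu(approximate = "none")(x)
    y_tanh  <- nn_gelu(approximate = "tanh")(x)

    # the maximum elementwise difference is small
    max(abs(as.numeric(y_exact) - as.numeric(y_tanh)))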

Details

where \Phi(x) is the cumulative distribution function of the standard Gaussian distribution.

When approximate = 'tanh', GELU is estimated with:

\mbox{GELU}(x) = 0.5 * x * (1 + \mbox{Tanh}(\sqrt{2 / \pi} * (x + 0.044715 * x^3)))
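
The exact form can be checked against base R's pnorm(), which computes \Phi (a sketch, assuming torch is installed):

    library(torch)

    x <- torch_randn(5)
    out <- nn_gelu()(x)

    # GELU(x) = x * Phi(x), with Phi the standard normal CDF
    manual <- as.numeric(x) * pnorm(as.numeric(x))
    all.equal(as.numeric(out), manual, tolerance = 1e-6)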

Shape

Input: (*), where * means any number of dimensions.

Output: (*), same shape as the input.
Examples

if (torch_is_installed()) {
# create the module and apply it elementwise to a random input tensor
m <- nn_gelu()
input <- torch_randn(2)
output <- m(input)
}

[Package torch version 0.13.0 Index]