network {leabRa}    R Documentation
Leabra network class
Description
Class to simulate a biologically realistic network of neurons (units) organized in layers.
Usage
network
Format
R6Class object.
Details
This class simulates a biologically realistic artificial neuronal network in
the Leabra framework (e.g. O'Reilly et al., 2016). It consists of several
layer objects in the variable (field) layers and some
network-specific variables.
Value
Object of R6Class with methods for calculating changes
of activation in a network of neurons organized in layers.
Fields
layers
A list of layer objects.

lrate
Learning rate; gain factor for how much the connection weights should change when the method chg_wt() is called.
Methods
new(dim_lays, cxn, g_i_gain = rep(2, length(dim_lays)), w_init_fun = function(x) runif(x, 0.3, 0.7), w_init = NULL)
Creates an object of this class with default parameters. (A construction sketch follows this argument list.)
  dim_lays: List of number pairs giving the rows and columns of each layer, e.g. list(c(5, 5), c(10, 10), c(5, 5)) for a 25 x 100 x 25 network.
  cxn: Matrix specifying the connection strength between layers: if layer j sends projections to layer i, then cxn[i, j] = strength > 0, and 0 otherwise. The strength specifies the relative strength of that connection with respect to the other projections to layer i.
  g_i_gain: Vector of inhibitory conductance gain values, one for every layer. This comes in handy to control the overall level of inhibition of specific layers. Default is 2 for every layer.
  w_init_fun: Function that specifies how random weights should be created; the default generates weights between 0.3 and 0.7 from a uniform distribution. The range is centered close to 0.5 because the weights are contrast-enhanced internally, so they will actually span a wider range.
  w_init: Matrix of initial weight matrices (like a cell array in 'MATLAB'); this is analogous to cxn, i.e. w_init[i, j] contains the initial weight matrix for the connection from layer j to layer i. If you specify w_init, w_init_fun is ignored. Use this if you want full control over the initial weight matrices.
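A minimal construction sketch, assuming the package is attached with library(leabRa); the layer dimensions, the connection matrix, and the narrower uniform range passed to w_init_fun are illustrative choices, not package defaults:

# two-layer network in which layer 1 projects to layer 2; initial weights are
# drawn from a narrower uniform range than the default
dim_lays <- list(c(3, 3), c(3, 3))
cxn <- matrix(c(0, 0,
                1, 0), nrow = 2, byrow = TRUE)
net <- network$new(dim_lays, cxn,
                   w_init_fun = function(x) runif(x, 0.4, 0.6))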
cycle(ext_inputs, clamp_inp)
Runs one time step of the network with the given external inputs. (A usage sketch follows this argument list.)
  ext_inputs: A list of matrices; ext_inputs[[i]] is a matrix that specifies the external input to each unit of layer i. An empty matrix (NULL) denotes no input to that layer. You can also use a vector instead of a matrix, because the matrix is vectorized internally anyway.
  clamp_inp: Logical variable; TRUE: external inputs are clamped to the activities of the units in the layers; FALSE: external inputs are summed to the excitatory conductance values (note: not to the activation) of the units in the layers.
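A brief sketch of cycling with clamped inputs, assuming net was built as in the two-layer sketch above; the pattern values 0.05/0.95 follow the package's convention for inactive/active units, and the number of cycles is an arbitrary illustrative choice:

# clamp a random pattern onto layer 1, give no external input to layer 2, and
# run 10 cycles so activation can propagate to layer 2
in_pattern <- matrix(sample(c(0.05, 0.95), 9, replace = TRUE), nrow = 3)
ext_inputs <- list(in_pattern, NULL)
for (i in 1:10) net$cycle(ext_inputs, clamp_inp = TRUE)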
chg_wt()
Changes the weights of the entire network with the XCAL learning equation.
reset(random = F)
Sets the activation of all units in all layers to 0 and sets all activation time averages to that value. Used to begin trials from a random stationary point. The activation values may also be set to random values.
  random: Logical variable; if TRUE, activations are set randomly between 0.05 and 0.95; if FALSE (the default), activations are set to 0.
create_inputs(which_layers, n_inputs, prop_active = 0.3)
Returns a list of length n_inputs with random input patterns (values of either 0.05 or 0.95) for the layers specified in which_layers. All other layers will have an input of NULL. (A usage sketch follows this argument list.)
  which_layers: Vector of layer numbers for which you want to create random inputs.
  n_inputs: Single numeric value; how many inputs should be created.
  prop_active: Average proportion of active units in the input patterns; default is 0.3.
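A brief sketch, assuming net was built as in the two-layer sketch above; checking the share of active (0.95) units illustrates what prop_active controls:

# ten random patterns for layer 1 only (layer 2 gets NULL)
patterns <- net$create_inputs(which_layers = 1, n_inputs = 10)
mean(patterns[[1]][[1]] == 0.95)  # should be roughly prop_active = 0.3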
learn_error_driven(inputs_minus, inputs_plus, lrate = 0.1, n_cycles_minus = 50, n_cycles_plus = 25, random_order = FALSE, show_progress = TRUE)
Learns to associate specific inputs with specific outputs in an error-driven fashion. (See the Examples section for a full walk-through.)
  inputs_minus: Inputs for the minus phase (the to-be-learned output is not presented).
  inputs_plus: Inputs for the plus phase (the to-be-learned output is presented).
  lrate: Learning rate; default is 0.1.
  n_cycles_minus: How many cycles to run in the minus phase; default is 50.
  n_cycles_plus: How many cycles to run in the plus phase; default is 25.
  random_order: Should the order of stimulus presentation be randomized? Default is FALSE.
  show_progress: Whether the progress of learning should be shown. Default is TRUE.
learn_self_organized(inputs, lrate = 0.1, n_cycles = 50, random_order = FALSE, show_progress = TRUE)
Learns to categorize inputs in a self-organized fashion. (A usage sketch follows this argument list.)
  inputs: Inputs for cycling.
  lrate: Learning rate; default is 0.1.
  n_cycles: How many cycles to run; default is 50.
  random_order: Should the order of stimulus presentation be randomized? Default is FALSE.
  show_progress: Whether the progress of learning should be shown. Default is TRUE.
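A brief sketch, assuming net was built as in the two-layer sketch above, with layer 1 receiving the input patterns; the number of epochs is an arbitrary illustrative choice:

# self-organized learning: present random patterns to layer 1 for a few epochs
so_inputs <- net$create_inputs(which_layers = 1, n_inputs = 10)
for (epoch in 1:5) {
  net$learn_self_organized(so_inputs, show_progress = FALSE)
}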
test_inputs(inputs, n_cycles = 50, show_progress = FALSE)
Tests inputs without changing the weights (i.e. without learning). This is usually done after several learning runs. (A usage sketch follows this argument list.)
  inputs: Inputs for cycling.
  n_cycles: How many cycles to run; default is 50.
  show_progress: Whether progress should be shown. Default is FALSE.
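A brief sketch, continuing from the self-organized example above; capturing the return value assumes it holds the resulting activations (check the package source for the exact structure):

# present the learned patterns once more with learning switched off and keep
# the result for inspection
test_out <- net$test_inputs(so_inputs, n_cycles = 50, show_progress = FALSE)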
mad_per_epoch(outs_per_epoch, inputs_plus, layer)
Calculates the mean absolute distance between two lists of activations for a specific layer. This can be used to check whether the network has learned what it was supposed to learn.
  outs_per_epoch: Output activations of the entire network for each trial in every epoch. This is what the network produced on its own.
  inputs_plus: Original inputs for the plus phase. This is what the network was supposed to learn.
  layer: Single numeric value; for which layer to calculate the mean absolute distance. Usually this is the "output" layer.
set_weights(weights)
Sets new weights for the entire network; useful for loading networks that have already learned and thus have very specific weights. (A save-and-restore sketch follows the get_layer_and_unit_vars() entry below.)
  weights: Matrix of matrices (like a cell array in 'MATLAB') with the new weight values.
get_weights()
Returns the complete weight matrix; w[i, j] contains the weight matrix for the projections from layer j to layer i. Note that this is a matrix of matrices (equivalent to a 'MATLAB' cell array).

get_layer_and_unit_vars(show_dynamics = T, show_constants = F)
Returns a data frame with the current state of all layer and unit variables. Every row is a unit. You can choose whether you want dynamic values and/or constant values. This might be useful if you want to analyze what happens in the network overall, which would otherwise not be possible, because most of the variables (fields) are private in the layer and unit classes.
  show_dynamics: Should dynamic values be shown? Default is TRUE.
  show_constants: Should constant values be shown? Default is FALSE.
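A brief save-and-restore sketch for get_weights()/set_weights(), assuming net has already learned something worth keeping and that dim_lays and cxn are the objects from the construction sketch above; saveRDS()/readRDS() are ordinary base R serialization, not part of the package:

# store the learned weights to disk and load them into a fresh network
w <- net$get_weights()
saveRDS(w, "leabra_weights.rds")
net2 <- network$new(dim_lays, cxn)
net2$set_weights(readRDS("leabra_weights.rds"))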
get_network_vars(show_dynamics = T, show_constants = F)
Returns a data frame with one row containing the current state of the network variables. You can choose whether you want dynamic values and/or constant values. This might be useful if you want to analyze what happens in a network, which would otherwise not be possible, because some of the variables (fields) are private in the network class. Some additional variables in the network class cannot be extracted this way because they are matrices; if you need to extract them, look at the source code.
  show_dynamics: Should dynamic values be shown? Default is TRUE.
  show_constants: Should constant values be shown? Default is FALSE.
References
O'Reilly, R. C., Munakata, Y., Frank, M. J., Hazy, T. E., and Contributors (2016). Computational Cognitive Neuroscience. Wiki Book, 3rd (partial) Edition. URL: http://ccnbook.colorado.edu
Also have a look at https://grey.colorado.edu/emergent/index.php/Leabra (especially the link to the 'MATLAB' code) and https://en.wikipedia.org/wiki/Leabra.
Examples
# create a small network with 3 layers
dim_lays <- list(c(2, 5), c(2, 10), c(2, 5))
cxn <- matrix(c(0, 0, 0,
1, 0, 0.2,
0, 1, 0), nrow = 3, byrow = TRUE)
net <- network$new(dim_lays, cxn)
net$m_in_s # private values cannot be accessed
# if you want to see all variables, you need to use this method
net$get_network_vars(show_dynamics = TRUE, show_constants = TRUE)
# if you want to see a summary of all units (with layer information) without
# constant values
net$get_layer_and_unit_vars(show_dynamics = TRUE, show_constants = FALSE)
# let us create 10 random inputs for layers 1 and 3
inputs <- net$create_inputs(c(1, 3), 10)
inputs # a list of lists
# the input in layer 1 should be associated with the output in layer 3; we
# can use error driven learning to achieve this
# first we will need the input for the minus phase (where no correct output
# is presented; layer 3 is NULL)
inputs_minus <- lapply(inputs, function(x) replace(x, 3, list(NULL)))
inputs_minus # layer 3 is indeed NULL
# now we can learn with default parameters; we will run 10 epochs,
# inputs_plus is equivalent to inputs; the output will be activations after
# each trial for the whole network; this might take a while depending on your
# system
n_epochs <- 10
## Not run:
output <- lapply(seq(n_epochs),
function(x) net$learn_error_driven(inputs_minus,
inputs,
lrate = 0.5))
# let's compare the actual output with what should have been learned; we can
# use the method mad_per_epoch for this; it will calculate the mean absolute
# distance for each epoch; we are interested in layer 3
mad <- net$mad_per_epoch(output, inputs, 3)
# the error should decrease with increasing epoch number
plot(mad)
## End(Not run)