slpEXIT {catlearn}	R Documentation

EXIT Category Learning Model

Description

EXemplar-based attention to distinctive InpuT model (Kruschke, 2001)

Usage

  slpEXIT(st, tr, xtdo = FALSE)

Arguments

st

List of model parameters

tr

R-by-C matrix of training items

xtdo

if TRUE extended output is returned

Details

The contents of this help file are relatively brief; a more extensive tutorial on using slpEXIT can be found in Spicer et al. (n.d.).

The function works as a stateful list processor. Specifically, it takes a data frame as an argument, where each row is one trial for the network, and the columns specify the input representation, teaching signals, and other control signals. It returns a matrix where each row is a trial and the columns are the response probabilities at the output units. It also returns the final state of the network (cue -> exemplar, and cue -> outcome weights), hence its description as a 'stateful' list processor.

References to Equations refer to the equation numbers used in the Appendix of Kruschke (2001).

Argument tr must be a data frame, where each row is one trial presented to the network, in the order of their occurrence. tr requires the following columns:

x1, x2, ... - columns for each cue (1 = cue present, 0 = cue absent). These columns must be named x1, x2, x3, ..., in ascending order, and must occupy adjacent columns. See Notes 1, 2.

t1, t2, ... - columns for the teaching values indicating the category feedback on the current trial. Each category takes a single, dummy-coded teaching signal, e.g. if the first category is the correct category for that trial, then t1 is set to 1 and the other teaching columns to 0. These columns must be named t1, t2, t3, ..., in ascending order, and must occupy adjacent columns.

ctrl - vector of control codes. Available codes are: 0 = normal trial, 1 = reset network (i.e. reset connection weights to the values specified in st), 2 = freeze learning. Control codes are actioned before the trial is processed.

opt1, opt2, ... - optional columns, which may have any name you wish. These optional columns are ignored by this function, but you may wish to use them for readability. For example, you might include columns for block number, trial number, and stimulus ID.
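As a sketch of the column scheme above, here is a hypothetical two-trial tr for a design with two cues plus a bias cue (x3) and two categories; the design and values are illustrative only:

```r
# Hypothetical training set: two cues plus a bias cue (x3), two
# categories with dummy-coded teaching signals, network reset on trial 1.
tr <- data.frame(
  block = c(1, 1),                           # optional column, ignored
  x1 = c(1, 0), x2 = c(0, 1), x3 = c(1, 1),  # x3 is the bias cue
  t1 = c(1, 0), t2 = c(0, 1),                # category feedback
  ctrl = c(1, 0)                             # 1 = reset, 0 = normal trial
)
```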

Argument st must be a list containing the following items:

nFeat - integer indicating the total number of possible stimulus features, i.e. the number of x1, x2, ... columns in tr.

nCat - integer indicating the total number of possible categories, i.e. the number of t1, t2, ... columns in tr.

phi - response scaling constant - Equation (2)

c - specificity parameter. Defines the narrowness of receptive field in exemplar node activation - Equation (3).

P - Attentional normalization power (attentional capacity) - Equation (5). If P equals 1 then the attention weights will satisfy the constraint that attention strength for currently present features will sum to one. The sum of attention strengths for present features grows as a function of P.

l_gain - attentional shift rate - Equation (7)

l_weight - learning rate for feature to category associations. - Equation (8)

l_ex - learning rate for exemplar_node to gain_node associations - Equation (9)

iterations - number of iterations of shifting attention on each trial (see Kruschke, 2001, p. 1400). If you're not sure what to use here, set it to 10.

sigma - Vector of cue saliences, one for each cue. If you're not sure what to put here, use 1 for all cues except the bias cue. For the bias cue, use some value between 0 and 1.

w_in_out - matrix with nFeat columns and nCat rows, defining the input-to-category association weights, i.e. how much each feature is associated to a category (see Equation 1). The nFeat columns follow the same order as x1, x2, ... in tr, and likewise, the nCat rows follow the order of t1, t2, ....

exemplars - matrix with nFeat columns and n rows, where n is the number of exemplars, such that each row represents a single exemplar in memory, and their corresponding feature values. The nFeat columns follow the same order as x1, x2, ... in tr. The n-rows follow the same order as in the w_exemplars matrix defined below. See Note 3.

w_exemplars - matrix which is structurally equivalent to exemplars. However, the matrix represents the associative weight from the exemplar nodes to the gain nodes, as given in Equation 4. The nFeat columns follow the same order as x1, x2, ... in tr. The n-rows follow the same order as in the exemplars matrix.
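A hedged sketch of an st list matching the two-cue design used above (nFeat = 3 including the bias cue, nCat = 2, two exemplars). Parameter values here are illustrative placeholders, not fitted values:

```r
# Illustrative st list for a 3-feature (incl. bias), 2-category design.
nFeat <- 3; nCat <- 2
exemplars <- rbind(c(1, 0, 1),   # stimulus A, bias cue included
                   c(0, 1, 1))   # stimulus B, bias cue included
st <- list(
  nFeat = nFeat, nCat = nCat,
  phi = 2, c = 1, P = 1,                     # placeholder values
  l_gain = 1, l_weight = 0.1, l_ex = 0.1,    # placeholder learning rates
  iterations = 10,
  sigma = c(1, 1, 0.1),                      # bias cue given low salience
  w_in_out = matrix(0, nrow = nCat, ncol = nFeat),  # nCat x nFeat
  exemplars = exemplars,
  w_exemplars = matrix(0, nrow = nrow(exemplars), ncol = nFeat)
)
```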

Value

Returns a list containing three components (if xtdo = FALSE) or four components (if xtdo = TRUE, g is also returned):

p

Matrix of response probabilities for each outcome on each trial

w_in_out

Matrix of final cue -> outcome associative strengths

w_exemplars

Matrix of final cue -> exemplar associative strengths

g

Vector of gains at the end of the final trial
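Putting the pieces together, an end-to-end sketch (st and tr rebuilt here so the example is self-contained; values are illustrative, and the run is guarded so it only executes if catlearn is installed):

```r
# Illustrative end-to-end run of slpEXIT; requires the catlearn package.
tr <- data.frame(x1 = c(1, 0), x2 = c(0, 1), x3 = c(1, 1),
                 t1 = c(1, 0), t2 = c(0, 1), ctrl = c(1, 0))
exemplars <- rbind(c(1, 0, 1), c(0, 1, 1))
st <- list(nFeat = 3, nCat = 2, phi = 2, c = 1, P = 1,
           l_gain = 1, l_weight = 0.1, l_ex = 0.1, iterations = 10,
           sigma = c(1, 1, 0.1),
           w_in_out = matrix(0, nrow = 2, ncol = 3),
           exemplars = exemplars,
           w_exemplars = matrix(0, nrow = 2, ncol = 3))
if (requireNamespace("catlearn", quietly = TRUE)) {
  out <- catlearn::slpEXIT(st, tr)
  out$p          # one row of response probabilities per trial
  out$w_in_out   # final cue -> outcome weights
}
```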

Note

1. Code optimization in slpEXIT means it is essential that every cue is set to either 1 or 0. The function will not work properly with other values. If you wish to represent cues of unequal salience, use sigma.

2. EXIT simulations normally include a 'bias' cue, i.e. a cue that is present on all trials. You will need to explicitly include this in your input representation in tr. For an example, see the output of krus96train.

3. The bias cue should be included in these exemplar representations, i.e. they should be the same as the representation of the stimuli in tr. For an example, see the output of krus96train.

Author(s)

René Schlegelmilch, Andy Wills, Angus Inkster

References

Kruschke, J. K. (1996). Base rates in category learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(1), 3-26.

Kruschke, J. K. (2001). The inverse base rate effect is not explained by eliminative inference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27, 1385-1400.

Spicer, S.G., Schlegelmilch, R., Jones, P.M., Inkster, A.B., Edmunds, C.E.R. & Wills, A.J. (n.d.). Progress in learning theory through distributed collaboration: Concepts, tools, and examples. Manuscript in preparation.


[Package catlearn version 1.0 Index]