explanation {ExplainPrediction} (R Documentation)
Explanation of predictions on instance and model level
Description
Using the general explanation methodologies EXPLAIN or IME, the function explainVis
explains the predictions of a given model and visualizes the explanations.
An explanation of a prediction is given for individual instances; an aggregation of instance explanations
gives a model explanation. The details are given in the Details section and the references.
Usage
explainVis(model, trainData, testData,
method=c("EXPLAIN", "IME"), classValue=1,
fileType=c("none","pdf","eps","emf","jpg","png","bmp","tif","tiff"),
dirName=getwd(), fileName="explainVis", visLevel=c("both","model","instance"),
explainType=c("WE","infGain","predDiff"), naMode=c("avg", "na"),
nLaplace=nrow(trainData), estimator=NULL,
pError=0.05, err=0.05, batchSize=40, maxIter=1000,
genType=c("rf", "rbf", "indAttr"), noAvgBins=20,
displayAttributes=NULL, modelVisCompact=FALSE,
displayThreshold=0.0, normalizeTo=0,
colors=c("navyblue", "darkred", "blue", "red", "lightblue", "orange"),
noDecimalsInValueName=2,
modelTitle=ifelse(model$noClasses==0,"Explaining %R\nmodel: %M",
"Explaining %R=%V\nmodel: %M"),
modelSubtitle="Method: %E, type: %X",
instanceTitle=ifelse(model$noClasses==0,
"Explaining %R\ninstance: %I, model: %M",
"Explaining %R=%V\ninstance: %I, model: %M"),
instanceSubtitle=ifelse(model$noClasses==0,
"Method: %E\nf(%I)=%P, true %R=%T",
"Method: %E, type: %X\nP(%R=%V)=%P, true %R=%T"),
recall=NULL)
Arguments
model: The model as returned by CoreModel.
trainData: Data frame which is used to extract average explanations, discretization,
and other information needed for the explanation of instances and the model. Typically this is the data set
which was used to train the model.
testData: Data frame with the instances which will be explained.
method: The explanation method; two methods are available, EXPLAIN and IME. EXPLAIN is much faster and works for any number of attributes in the model, but cannot explain dependencies expressed disjunctively in the model (for details see the references). IME can in principle explain any type of dependency in the model. It uses a sampling-based method to avoid an exhaustive search for dependencies and works reasonably fast for up to a few dozen attributes in the model.
classValue: For classification models this parameter determines the class value for which the explanations will be generated.
fileType: Determines the graphical format of the visualization file(s).
If fileType="none" (the default), the visualizations are output to a graphical device rather than saved to files.
dirName: The name of the folder where the resulting visualization files are saved when fileType requests file output.
fileName: The file name of the resulting visualization files when fileType requests file output.
visLevel: The level of explanations desired. With "model" only the model-level explanation is generated, with "instance" only instance-level explanations, and with "both" (the default) both levels are produced.
explainType: For method EXPLAIN this parameter determines how the prediction with knowledge about
a given feature and the prediction without knowledge of this feature are combined into the final explanation.
The possible values are "WE" (weight of evidence), "infGain" (information gain), and "predDiff" (difference of predictions).
naMode: For method EXPLAIN this parameter determines how the impact of missing information about a certain feature value is
estimated; the possible modes are "avg" (the default) and "na".
nLaplace: For the EXPLAIN method and classification problems the predicted probabilities are corrected with a Laplace correction,
pushing them away from 0 and 1 and towards the uniform distribution. Larger values imply a smaller effect. The default value is equal
to the number of instances in trainData.
estimator: The name of the feature evaluation method used to greedily discretize attributes
when averaging explanations over intervals. The default value is NULL.
pError: For method IME, the estimated probability of an error in explanations. Together with the
parameter err it determines the number of samples needed for each explanation.
err: For method IME, this parameter controls the size of the tolerable error.
Together with the parameter pError it determines the number of samples needed for each explanation.
batchSize: For method IME, the number of samples processed in batch mode for each explanation. Larger sizes cause less processing overhead but may process more samples than required.
maxIter: The maximum number of iterations allowed in the IME method for a single explanation.
genType: The type of data generator used to generate the random parts of instances in method IME.
The generators from the package semiArtificial are used.
noAvgBins: For the IME method, the number of discretization bins used to present model-level explanations and average explanations.
displayAttributes: The vector of attribute names to be visualized; the default NULL displays all attributes.
modelVisCompact: A logical value controlling whether attribute values are displayed
in the model-level visualization; the default is FALSE.
displayThreshold: The threshold on the absolute values of explanations
below which feature contributions are not displayed in instance and model explanation graphs.
The threshold applies after the values are normalized; see the explanation of the parameter normalizeTo.
normalizeTo: For instance-level visualization, the absolute values of feature contributions are
summed and normalized to this value; the default 0 means no normalization.
colors: A vector of 6 color names used in the visualization (average positive impact of an attribute, average negative impact of an attribute, positive instance explanation, negative instance explanation, average positive impact of an attribute value, average negative impact of an attribute value). If set to NULL, sensible grayscale defaults are used, i.e., c("gray30","gray30","gray60","gray60","gray90","gray90").
noDecimalsInValueName: How many decimal places numeric feature values use in visualizations. The default value is 2.
modelTitle: A character string with the title template of the model visualization; see Details for the template syntax.
modelSubtitle: A character string with the subtitle template of the model visualization; see Details.
instanceTitle: A character string with the title template of the instance visualization; see Details.
instanceSubtitle: A character string with the subtitle template of the instance visualization; see Details.
recall: If this parameter is different from NULL, it shall contain the list invisibly returned by a previous call to explainVis, so that already computed explanations can be reused.
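As a sketch of how the IME-specific parameters above (pError, err, batchSize, maxIter) fit together, the following hypothetical call trains a small model on iris, in the spirit of the Examples section below; the exact parameter values are illustrative assumptions, not recommendations:

```r
library(CORElearn)
library(ExplainPrediction)

# Train a small random forest (as in the Examples section).
trainIdxs <- sample(nrow(iris), size = 0.7 * nrow(iris))
modelRF <- CoreModel(Species ~ ., iris[trainIdxs, ], model = "rf", maxThreads = 1)

# IME explanation: pError and err set the sampling stopping criterion,
# batchSize and maxIter bound the work done per explanation.
explainVis(modelRF, iris[trainIdxs, ], iris[-trainIdxs, ][1:3, ],
           method = "IME", visLevel = "instance", fileType = "none",
           pError = 0.05, err = 0.05, batchSize = 40, maxIter = 1000)
destroyModels(modelRF)  # clean up
```

Tighter err and pError values increase the number of samples drawn per explanation and hence the runtime.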
Details
The function explainVis
generates explanations and their visualizations given a trained model,
its training data, and the data whose predictions we want explained. This is the front-end explanation function which takes
care of everything, internally calling other functions.
The produced visualizations are output to a graphical device or saved to a file.
If internal information about the explanations is required, it is returned invisibly.
Separate calls to the internal functions (explain, ime, prepareForExplanations, and explanationAverages) are also possible.
In the model explanation, all feature values of nominal attributes and intervals of numeric attributes are visualized, as
well as a weighted summary over all these values.
In the instance-level visualizations the contributions of each feature are presented (thick bars), as well as the average contributions of these
feature values in trainData (thin bars above them). For details see the references below.
The titles and subtitles of model and instance explanation graphs use templates which allow insertion of the following values:
%R: response variable
%V: selected class value for explanation (in case of classification)
%M: type of model
%E: explanation method (see parameter method)
%X: explanation type (only for method EXPLAIN)
The title and subtitle of instance explanation graphs can additionally use the following information:
%I: instance name (extracted from row.names in testData)
%P: predicted value/probability of the response
%T: true (class) value of the response
Default templates for regression and classification models are provided. For example, the default template for title of model explanation is "Explaining %R=%V\nmodel: %M", meaning that information about response variable, selected class value, and model are displayed in the title.
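As a sketch of overriding the default templates, the following hypothetical call trains a small model on iris (as in the Examples section) and substitutes %R, %V, %M, and %I as described above; the template strings themselves are illustrative:

```r
library(CORElearn)
library(ExplainPrediction)

trainIdxs <- sample(nrow(iris), size = 0.7 * nrow(iris))
modelRF <- CoreModel(Species ~ ., iris[trainIdxs, ], model = "rf", maxThreads = 1)

# Custom title templates; the %-placeholders are filled in by explainVis.
explainVis(modelRF, iris[trainIdxs, ], iris[-trainIdxs, ],
           method = "EXPLAIN", fileType = "none",
           modelTitle = "Impact on %R=%V\nmodel: %M",
           instanceTitle = "Instance %I explained by %M")
destroyModels(modelRF)  # clean up
```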
Value
The function explainVis
generates explanations and, depending on the value of fileType, saves their visualizations to a file or
outputs them to a graphical device. It invisibly returns a list with three components containing the
explanations, the average explanations, and additional data such as the discretization used and the data generator.
The main ingredients of these three components are:
- expl, a matrix of generated explanations (of size dim(testData)),
- pCXA, a vector of predictions,
- stddev (for method IME only), a matrix with the standard deviations of the explanations,
- noIter (for method IME only), a matrix with the number of iterations executed for each explanation,
- discPoints (for method EXPLAIN only), a list containing the values of discrete features or the centers of discretization intervals for numeric features,
- pAV (for method EXPLAIN only), a list with probabilities of discrete values or discretization intervals in the case of numeric features,
- discretization, a list with the discretization intervals output by the discretize function, used in estimating averages and model-based explanations,
- avNames, a list containing the names of discrete values/intervals,
- generator (for method IME only), the generator used to generate the random parts of instances,
- explAvg, a list with several components giving average explanations on trainData. Averages are given for attributes, their values (for discrete attributes), and discretization intervals (for numeric features). These average explanations are used in the visualization to give an impression of how the model works on average; this can be contrasted with the explanation of a specific instance.
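The invisibly returned list can be captured for programmatic use; a minimal sketch, again assuming an iris model as in the Examples section (only the top-level structure is inspected, since the exact component names beyond those documented above are not reproduced here):

```r
library(CORElearn)
library(ExplainPrediction)

trainIdxs <- sample(nrow(iris), size = 0.7 * nrow(iris))
modelRF <- CoreModel(Species ~ ., iris[trainIdxs, ], model = "rf", maxThreads = 1)

# Capture the invisible return value instead of discarding it.
res <- explainVis(modelRF, iris[trainIdxs, ], iris[-trainIdxs, ],
                  method = "EXPLAIN", fileType = "none")
str(res, max.level = 1)  # three components: explanations, averages, extra data
destroyModels(modelRF)   # clean up
```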
Author(s)
Marko Robnik-Sikonja
References
Marko Robnik-Sikonja, Igor Kononenko: Explaining Classifications For Individual Instances. IEEE Transactions on Knowledge and Data Engineering, 20:589-600, 2008
Erik Strumbelj, Igor Kononenko, Marko Robnik-Sikonja: Explaining Instance Classifications with Interactions of Subsets of Feature Values. Data and Knowledge Engineering, 68(10):886-904, Oct. 2009
Erik Strumbelj, Igor Kononenko: An Efficient Explanation of Individual Classifications using Game Theory, Journal of Machine Learning Research, 11(1):1-18, 2010.
Marko Robnik-Sikonja, Igor Kononenko: Discretization of continuous attributes using ReliefF. Proceedings of ERK'95, B149-152, Ljubljana, 1995
Some references are available from http://lkm.fri.uni-lj.si/rmarko/papers/
See Also
CORElearn
,
predict.CoreModel
,
attrEval
,
discretize
,
semiArtificial-package
Examples
require(CORElearn)
# use iris data set, split it randomly into a training and testing set
trainIdxs <- sample(x=nrow(iris), size=0.7*nrow(iris), replace=FALSE)
testIdxs <- c(1:nrow(iris))[-trainIdxs]
# build random forests model with certain parameters
modelRF <- CoreModel(Species ~ ., iris[trainIdxs,], model="rf",
                     selectionEstimator="MDL", minNodeWeightRF=5,
                     rfNoTrees=100, maxThreads=1)
# generate model explanation and visualization
# turn on history in the visualization window to see all graphs
explainVis(modelRF, iris[trainIdxs,], iris[testIdxs,], method="EXPLAIN", visLevel="both",
           fileType="none", naMode="avg", explainType="WE", classValue=1)
## Not run:
# store instance explanations in grayscale to a file in PDF format
explainVis(modelRF, iris[trainIdxs,], iris[testIdxs,], method="EXPLAIN", visLevel="instance",
fileType="pdf", naMode="avg", explainType="WE", classValue=1, colors=NULL)
destroyModels(modelRF) # clean up
# build a regression tree
trainReg <- regDataGen(100)
testReg <- regDataGen(20)
modelRT <- CoreModel(response~., trainReg, model="regTree", modelTypeReg=1)
# generate both model and instance level explanation using defaults
explainVis(modelRT, trainReg, testReg)
destroyModels(modelRT) # clean up
## End(Not run)