evaluate.tamat {causalDisco} | R Documentation
Applies several different metrics to evaluate the difference between estimated and true adjacency matrices. Intended for evaluating the performance of causal discovery algorithms.
## S3 method for class 'tamat'
evaluate(est, true, metrics, ...)
est
    Estimated adjacency matrix/matrices.
true
    True adjacency matrix/matrices.
metrics
    List of metrics; see Details.
...
    Further arguments that depend on the input type. Currently only list.out is used (see Value).
Two options for input are available: est and true can either be two adjacency matrices, or two arrays of adjacency matrices. The arrays should have shape n * p * p, where n is the number of matrices and p is the number of nodes/variables.
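The shapes above can be sketched in base R as follows (a hypothetical illustration; the variable names are ours, not the package's):

```r
# Two 3-node adjacency matrices (p = 3)
p <- 3
true_amat <- matrix(c(0, 1, 0,
                      0, 0, 1,
                      0, 0, 0), nrow = p, byrow = TRUE)
est_amat <- matrix(c(0, 1, 0,
                     0, 0, 0,
                     0, 1, 0), nrow = p, byrow = TRUE)

# Alternatively, stack n such matrices into an n * p * p array,
# so that arr[i, , ] is the i-th adjacency matrix
n <- 2
true_arr <- array(0, dim = c(n, p, p))
true_arr[1, , ] <- true_amat
true_arr[2, , ] <- true_amat
```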
The metrics should be given as a list with slots $adj, $dir and $other. Metrics under $adj are applied to the adjacency confusion matrix, while metrics under $dir are applied to the conditional orientation confusion matrix (see confusion). Metrics under $other are applied without computing confusion matrices first.
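Such a metrics list might be assembled like this (a sketch only; whether built-in metrics are referenced by character name, as here, or as function objects is an assumption, so consult the package for the exact convention):

```r
# Hypothetical metrics specification with the three slots described above
metrics <- list(
  adj   = c("precision", "recall"),  # applied to the adjacency confusion matrix
  dir   = c("precision"),            # applied to the orientation confusion matrix
  other = c("shd")                   # applied directly to the matrix pair
)
```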
Available metrics for use with confusion matrices are precision, recall, specificity, FOR, FDR, NPV, F1 and G1. The user can also supply custom metrics: they must take the confusion matrix as their first argument and return a numeric.
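A custom confusion-matrix metric could look like the sketch below. The 2x2 layout assumed in the comment is our assumption for illustration; the actual layout is defined by confusion():

```r
# Custom metric: accuracy. Assumes `cm` is a 2x2 confusion matrix laid out
# as rbind(c(TP, FP), c(FN, TN)) -- an assumption; see confusion() for the
# layout the package actually produces.
accuracy <- function(cm, ...) {
  (cm[1, 1] + cm[2, 2]) / sum(cm)
}

cm <- rbind(c(3, 1),
            c(2, 4))
accuracy(cm)  # (3 + 4) / 10 = 0.7
```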
The only available metric for use as "other" is shd. The user can also supply custom metrics: they must have arguments est_amat and true_amat, where the former is the estimated adjacency matrix and the latter is the true adjacency matrix, and they should return a numeric.
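A custom "other" metric following that signature might look like this (a hypothetical example, not part of the package):

```r
# Custom "other" metric: proportion of entries where the estimated and
# true adjacency matrices disagree. Takes the two required arguments
# est_amat and true_amat and returns a numeric.
edge_disagreement <- function(est_amat, true_amat) {
  mean(est_amat != true_amat)
}

est_m  <- matrix(c(0, 1, 0, 0), nrow = 2)
true_m <- matrix(c(0, 1, 1, 0), nrow = 2)
edge_disagreement(est_m, true_m)  # 1 disagreeing entry out of 4 = 0.25
```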
A data.frame with one column for each computed metric and one row per evaluated matrix pair. Adjacency metrics are prefixed with "adj_", orientation metrics with "dir_"; other metrics get no prefix. If the first argument is a matrix, list.out = TRUE can be used to change the return value to a list instead. This list contains three lists reporting the adjacency, orientation and other metrics, respectively.