CoOL_5_layerwise_relevance_propagation {CoOL} | R Documentation |
Layer-wise relevance propagation of the fitted non-negative neural network
Description
Calculates risk contributions for each exposure and a baseline using layer-wise relevance propagation of the fitted non-negative neural network and data.
Usage
CoOL_5_layerwise_relevance_propagation(X, model)
Arguments
X |
The exposure data. |
model |
The fitted non-negative neural network. |
Details
For each individual:
P(Y=1|X^+)=R^b+\sum_iR^X_i
The procedure below is applied to each individual in turn. The baseline risk, $R^b$, is a parameter of the fitted model. Decomposing the risk contributions for the exposures, $R^X_i$, takes 3 steps:
Step 1 - Subtract the baseline risk, $R^b$:
R^X_k = P(Y=1|X^+)-R^b
Step 2 - Decompose to the hidden layer:
R^{X}_j = \frac{H_j w_{j,k}}{\sum_j(H_j w_{j,k})} R^X_k
Where $H_j$ is the value taken by each $ReLU_j()$ function (i.e. the activation of hidden node $j$) for the specific individual.
Step 3 - Hidden layer to exposures:
R^{X}_i = \sum_j \Big(\frac{X_i^+ w_{i,j}}{\sum_i( X_i^+ w_{i,j})}R^X_j\Big)
This creates a dataset with dimensions [number of individuals, number of exposures + 1 baseline risk value], which can be termed a risk contribution matrix. Instead of exposure values, each individual is assigned risk contributions, $R^X_i$, and the baseline risk, $R^b$.
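The three steps above can be sketched in R for a single individual. This is a minimal illustration, not the package's implementation: the one-hidden-layer network, its weights, biases, and baseline risk below are hypothetical values standing in for a fitted model.

```r
# Hypothetical fitted one-hidden-layer non-negative network
# (all values made up for illustration).
relu <- function(z) pmax(z, 0)

X  <- c(1, 0, 1)                      # exposure vector X^+ for one individual
W1 <- matrix(c(0.2, 0.0,
               0.0, 0.3,
               0.1, 0.2),
             nrow = 3, byrow = TRUE)  # exposure-to-hidden weights w_{i,j}
b1 <- c(-0.05, -0.05)                 # hidden-layer biases
W2 <- c(0.5, 0.4)                     # hidden-to-output weights w_{j,k}
Rb <- 0.05                            # baseline risk R^b

H <- relu(as.vector(X %*% W1) + b1)   # hidden activations H_j
P <- Rb + sum(H * W2)                 # predicted risk P(Y=1|X^+)

# Step 1 - Subtract the baseline risk
RXk <- P - Rb

# Step 2 - Decompose to the hidden layer
RXj <- (H * W2) / sum(H * W2) * RXk

# Step 3 - Hidden layer to exposures
RXi <- sapply(seq_along(X), function(i) {
  sum(sapply(seq_along(RXj), function(j) {
    denom <- sum(X * W1[, j])
    if (denom == 0) 0 else X[i] * W1[i, j] / denom * RXj[j]
  }))
})

# The baseline risk plus the risk contributions reconstruct the
# predicted risk, as in P(Y=1|X^+) = R^b + sum_i R^X_i.
all.equal(Rb + sum(RXi), P)
```

Running this over all individuals and binding the rows of `c(Rb, RXi)` yields the risk contribution matrix described above.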
Value
A data frame with the risk contribution matrix [number of individuals, risk contributors + the baseline risk].
References
Rieckmann, Dworzynski, Arras, Lapuschkin, Samek, Arah, Rod, Ekstrom. 2022. Causes of outcome learning: A causal inference-inspired machine learning approach to disentangling common combinations of potential causes of a health outcome. International Journal of Epidemiology <https://doi.org/10.1093/ije/dyac078>
Examples
#See the example under CoOL_0_working_example