The Conditional Predictive Impact (CPI) is a general test for conditional independence in supervised learning algorithms. It implements a conditional variable importance measure that can be applied to any supervised learning algorithm and loss function.
As a first example, we calculate the CPI for a random forest on the wine data with 5-fold cross-validation:
library(mlr3)
library(mlr3learners)
library(cpi)
cpi(task = tsk("wine"),
learner = lrn("classif.ranger", predict_type = "prob", num.trees = 10),
resampling = rsmp("cv", folds = 5))
#> Variable CPI SE test statistic estimate p.value ci.lo
#> 1 alcalinity 0.00106 0.00346 t 0.31 0.00106 0.3798 -0.00466
#> 2 alcohol 0.02759 0.01088 t 2.54 0.02759 0.0060 0.00961
#> 3 ash 0.00019 0.00019 t 1.00 0.00019 0.1593 -0.00012
#> 4 color 0.21308 0.18515 t 1.15 0.21308 0.1257 -0.09306
#> 5 dilution 0.00046 0.00771 t 0.06 0.00046 0.4761 -0.01229
#> 6 flavanoids 0.00000 0.00000 t 0.00 0.00000 1.0000 0.00000
#> 7 hue 0.00151 0.00705 t 0.21 0.00151 0.4155 -0.01015
#> 8 magnesium 0.00826 0.00494 t 1.67 0.00826 0.0480 0.00010
#> 9 malic 0.00047 0.00412 t 0.11 0.00047 0.4551 -0.00635
#> 10 nonflavanoids 0.00073 0.00205 t 0.36 0.00073 0.3612 -0.00266
#> 11 phenols -0.00351 0.00346 t -1.01 -0.00351 0.8441 -0.00922
#> 12 proanthocyanins 0.00162 0.00389 t 0.42 0.00162 0.3389 -0.00481
#> 13 proline 0.08475 0.03003 t 2.82 0.08475 0.0027 0.03509
The result is a CPI value for each feature, i.e. how much the loss changed when the feature was replaced with its knockoff version, together with the corresponding standard errors, test statistics, p-values and lower confidence bounds.
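To illustrate where these numbers come from, here is a minimal, self-contained sketch of the idea (not the package internals), assuming squared-error loss, a linear model and simulated data; for brevity the losses are computed in-sample, whereas cpi() evaluates them on held-out data via the resampling strategy:
set.seed(42)
n <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)
y <- 2 * x1 + rnorm(n)  # x2 carries no information about y
dat <- data.frame(x1, x2, y)
fit <- lm(y ~ x1 + x2, data = dat)
# An independent N(0, 1) draw is a valid knockoff here, since x1 is
# standard normal and independent of x2
x1_tilde <- rnorm(n)
loss_orig <- (y - predict(fit, dat))^2
loss_ko <- (y - predict(fit, transform(dat, x1 = x1_tilde)))^2
delta <- loss_ko - loss_orig  # per-observation change in loss
mean(delta)  # the CPI estimate for x1
t.test(delta, alternative = "greater")  # one-sided t-test as in the output above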
The task, learner and resampling strategy are specified with the mlr3 package, which provides a unified interface for machine learning tasks and makes it easy to swap these components. For example, we can switch to regularized logistic regression and a simple holdout as the resampling strategy:
cpi(task = tsk("wine"),
learner = lrn("classif.glmnet", predict_type = "prob", lambda = 0.01),
resampling = rsmp("holdout"))
#> Variable CPI SE test statistic estimate p.value ci.lo
#> 1 alcalinity 8.6e-03 1.4e-02 t 0.62 8.6e-03 0.269 -1.5e-02
#> 2 alcohol 2.0e-02 1.4e-02 t 1.46 2.0e-02 0.074 -2.8e-03
#> 3 ash 3.1e-04 2.8e-04 t 1.11 3.1e-04 0.136 -1.6e-04
#> 4 color 3.2e-02 2.1e-02 t 1.57 3.2e-02 0.061 -2.2e-03
#> 5 dilution 8.2e-03 7.2e-03 t 1.15 8.2e-03 0.128 -3.8e-03
#> 6 flavanoids 4.0e-06 4.0e-06 t 1.01 4.0e-06 0.157 -2.6e-06
#> 7 hue 6.4e-03 8.1e-03 t 0.79 6.4e-03 0.217 -7.1e-03
#> 8 magnesium 0.0e+00 0.0e+00 t 0.00 0.0e+00 1.000 0.0e+00
#> 9 malic -8.0e-03 9.3e-03 t -0.85 -8.0e-03 0.802 -2.4e-02
#> 10 nonflavanoids -9.6e-04 2.9e-03 t -0.33 -9.6e-04 0.627 -5.9e-03
#> 11 phenols 0.0e+00 0.0e+00 t 0.00 0.0e+00 1.000 0.0e+00
#> 12 proanthocyanins 0.0e+00 0.0e+00 t 0.00 0.0e+00 1.000 0.0e+00
#> 13 proline 3.5e-03 1.8e-02 t 0.19 3.5e-03 0.424 -2.7e-02
We refer to the mlr3 book for a full introduction and reference.
The loss function used by the cpi() function is specified with the measure argument. By default, the mean squared error (MSE) is used for regression and the log-loss for classification. In mlr3, these correspond to the measures "regr.mse" and "classif.logloss". We re-run the example above with the simple classification error (ce):
cpi(task = tsk("wine"),
learner = lrn("classif.glmnet", lambda = 0.01),
resampling = rsmp("holdout"),
measure = msr("classif.ce"))
#> Variable CPI SE test statistic estimate p.value ci.lo
#> 1 alcalinity 0.000 0.000 t 0.00 0.000 1.00 0.000
#> 2 alcohol 0.017 0.030 t 0.57 0.017 0.28 -0.032
#> 3 ash 0.000 0.000 t 0.00 0.000 1.00 0.000
#> 4 color 0.000 0.000 t 0.00 0.000 1.00 0.000
#> 5 dilution -0.017 0.017 t -1.00 -0.017 0.84 -0.045
#> 6 flavanoids 0.000 0.000 t 0.00 0.000 1.00 0.000
#> 7 hue 0.000 0.000 t 0.00 0.000 1.00 0.000
#> 8 magnesium -0.017 0.017 t -1.00 -0.017 0.84 -0.045
#> 9 malic -0.017 0.017 t -1.00 -0.017 0.84 -0.045
#> 10 nonflavanoids -0.017 0.017 t -1.00 -0.017 0.84 -0.045
#> 11 phenols 0.000 0.000 t 0.00 0.000 1.00 0.000
#> 12 proanthocyanins 0.000 0.000 t 0.00 0.000 1.00 0.000
#> 13 proline 0.017 0.045 t 0.38 0.017 0.35 -0.059
Here we see more zero CPI values because the classification error is less sensitive to small changes in the predictions and hence results in lower power.
The CPI offers several statistical tests: the t-test ("t", default), the Wilcoxon signed-rank test ("wilcox"), the binomial test ("binom"), the Fisher permutation test ("fisher") and Bayesian testing ("bayes") with the BEST package. For example, we re-run the first example with Fisher's permutation test:
cpi(task = tsk("wine"),
learner = lrn("classif.ranger", predict_type = "prob", num.trees = 10),
resampling = rsmp("cv", folds = 5),
test = "fisher")
#> Variable CPI SE test p.value ci.lo
#> 1 alcalinity 0.00864 0.00441 fisher 0.0255 0.00131
#> 2 alcohol 0.02225 0.01082 fisher 0.0180 0.00459
#> 3 ash 0.00000 0.00000 fisher 1.0000 0.00000
#> 4 color 0.03918 0.01555 fisher 0.0040 0.01390
#> 5 dilution 0.00864 0.00780 fisher 0.1395 -0.00435
#> 6 flavanoids -0.00039 0.00039 fisher 1.0000 -0.00078
#> 7 hue 0.00720 0.00738 fisher 0.1700 -0.00473
#> 8 magnesium 0.00344 0.00366 fisher 0.1635 -0.00249
#> 9 malic 0.01378 0.00394 fisher 0.0005 0.00700
#> 10 nonflavanoids -0.00118 0.00289 fisher 0.6405 -0.00614
#> 11 phenols -0.00192 0.00431 fisher 0.6695 -0.00938
#> 12 proanthocyanins 0.00926 0.00476 fisher 0.0220 0.00128
#> 13 proline 0.05066 0.01421 fisher 0.0005 0.02737
The CPI relies on a valid knockoff sampler for the data to be analyzed. By default, second-order Gaussian knockoffs from the knockoff package are used. However, any other knockoff sampler can be used by changing the knockoff_fun or the x_tilde argument of the cpi() function. Here, knockoff_fun expects a function that takes a data.frame with the original data as input and returns a data.frame with the knockoffs. For example, we use sequential knockoffs from the seqknockoff package¹:
mytask <- as_task_regr(iris, target = "Petal.Length")
cpi(task = mytask, learner = lrn("regr.ranger", num.trees = 10),
    resampling = rsmp("cv", folds = 5),
    knockoff_fun = seqknockoff::knockoffs_seq)
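Any function with this interface can be plugged in. As an illustration, a custom sampler (the name my_knockoffs is ours, not part of the package) could simply wrap the second-order Gaussian knockoffs from the knockoff package; this sketch assumes all features are numeric:
my_knockoffs <- function(data) {
  # knockoff::create.second_order() takes a numeric matrix and returns
  # second-order Gaussian knockoffs matching the first two moments
  as.data.frame(knockoff::create.second_order(as.matrix(data)))
}
cpi(task = tsk("wine"),
    learner = lrn("classif.ranger", predict_type = "prob", num.trees = 10),
    resampling = rsmp("cv", folds = 5),
    knockoff_fun = my_knockoffs)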
Alternatively, the x_tilde argument directly takes the knockoff data:
library(seqknockoff)
x_tilde <- knockoffs_seq(iris[, -3])
mytask <- as_task_regr(iris, target = "Petal.Length")
cpi(task = mytask, learner = lrn("regr.ranger", num.trees = 10),
    resampling = rsmp("cv", folds = 5),
    x_tilde = x_tilde)
Instead of calculating the CPI for each feature separately, we can also calculate it for groups of features by replacing the data of whole groups with the respective knockoff data. In cpi() this can be done with the groups argument:
cpi(task = tsk("iris"),
learner = lrn("classif.glmnet", predict_type = "prob", lambda = 0.01),
resampling = rsmp("holdout"),
groups = list(Sepal = 1:2, Petal = 3:4))
#> Group CPI SE test statistic estimate p.value ci.lo
#> 1 Sepal 0.0033 0.009 t 0.37 0.0033 0.358 -0.0118
#> 2 Petal 0.0192 0.011 t 1.82 0.0192 0.037 0.0015
For parallel execution, we need to register a parallel backend. Parallelization is performed over the features, i.e. the CPI of each feature is calculated in parallel. For example:
doParallel::registerDoParallel(4)
cpi(task = tsk("wine"),
    learner = lrn("classif.ranger", predict_type = "prob", num.trees = 10),
    resampling = rsmp("cv", folds = 5))
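Afterwards, the implicit cluster registered by registerDoParallel() can be stopped again:
doParallel::stopImplicitCluster()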
¹ seqknockoff is not on CRAN yet; it is available at https://github.com/kormama1/seqknockoff