Mixgb offers a scalable solution for imputing large datasets using XGBoost, subsampling and predictive mean matching. Our method uses XGBoost, a highly efficient implementation of gradient boosted trees, to capture interactions and non-linear relationships automatically. Moreover, we integrate subsampling and predictive mean matching to reduce bias and to reflect appropriate imputation variability. Our package supports various types of variables and offers flexible settings for subsampling and predictive mean matching. We also include diagnostic tools for evaluating the quality of the imputed values.
We first load the mixgb package and the nhanes3_newborn dataset, which contains 16 variables of various types (integer/numeric/factor/ordinal factor). There are 9 variables with missing values.
library(mixgb)
str(nhanes3_newborn)
#> tibble [2,107 × 16] (S3: tbl_df/tbl/data.frame)
#> $ HSHSIZER: int [1:2107] 4 3 5 4 4 3 5 3 3 3 ...
#> $ HSAGEIR : int [1:2107] 2 5 10 10 8 3 10 7 2 7 ...
#> $ HSSEX : Factor w/ 2 levels "1","2": 2 1 2 2 1 1 2 2 2 1 ...
#> $ DMARACER: Factor w/ 3 levels "1","2","3": 1 1 2 1 1 1 2 1 2 2 ...
#> $ DMAETHNR: Factor w/ 3 levels "1","2","3": 3 1 3 3 3 3 3 3 3 3 ...
#> $ DMARETHN: Factor w/ 4 levels "1","2","3","4": 1 3 2 1 1 1 2 1 2 2 ...
#> $ BMPHEAD : num [1:2107] 39.3 45.4 43.9 45.8 44.9 42.2 45.8 NA 40.2 44.5 ...
#> ..- attr(*, "label")= chr "Head circumference (cm)"
#> $ BMPRECUM: num [1:2107] 59.5 69.2 69.8 73.8 69 61.7 74.8 NA 64.5 70.2 ...
#> ..- attr(*, "label")= chr "Recumbent length (cm)"
#> $ BMPSB1 : num [1:2107] 8.2 13 6 8 8.2 9.4 5.2 NA 7 5.9 ...
#> ..- attr(*, "label")= chr "First subscapular skinfold (mm)"
#> $ BMPSB2 : num [1:2107] 8 13 5.6 10 7.8 8.4 5.2 NA 7 5.4 ...
#> ..- attr(*, "label")= chr "Second subscapular skinfold (mm)"
#> $ BMPTR1 : num [1:2107] 9 15.6 7 16.4 9.8 9.6 5.8 NA 11 6.8 ...
#> ..- attr(*, "label")= chr "First triceps skinfold (mm)"
#> $ BMPTR2 : num [1:2107] 9.4 14 8.2 12 8.8 8.2 6.6 NA 10.9 7.6 ...
#> ..- attr(*, "label")= chr "Second triceps skinfold (mm)"
#> $ BMPWT : num [1:2107] 6.35 9.45 7.15 10.7 9.35 7.15 8.35 NA 7.35 8.65 ...
#> ..- attr(*, "label")= chr "Weight (kg)"
#> $ DMPPIR : num [1:2107] 3.186 1.269 0.416 2.063 1.464 ...
#> ..- attr(*, "label")= chr "Poverty income ratio"
#> $ HFF1 : Factor w/ 2 levels "1","2": 2 2 1 1 1 2 2 1 2 1 ...
#> $ HYD1 : Ord.factor w/ 5 levels "1"<"2"<"3"<"4"<..: 1 3 1 1 1 1 1 1 2 1 ...
colSums(is.na(nhanes3_newborn))
#> HSHSIZER HSAGEIR HSSEX DMARACER DMAETHNR DMARETHN BMPHEAD BMPRECUM
#> 0 0 0 0 0 0 124 114
#> BMPSB1 BMPSB2 BMPTR1 BMPTR2 BMPWT DMPPIR HFF1 HYD1
#> 161 169 124 167 117 192 7 0
To impute this dataset, we can use the default settings. The default number of imputed datasets is m = 5. Note that we do not need to convert our data into dgCMatrix or one-hot coding format; the package will convert it automatically. Variables should be of the following types: numeric, integer, factor or ordinal factor.
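For example, imputing with all defaults is a one-liner. Consistent with the customized call shown later, mixgb() returns a list of m completed datasets:

# Impute with the default settings (m = 5 imputed datasets)
imputed.data <- mixgb(data = nhanes3_newborn)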
We can also customize imputation settings:

- The number of imputed datasets m
- The number of imputation iterations maxit
- XGBoost hyperparameters and verbose settings: xgb.params, nrounds, early_stopping_rounds, print_every_n and verbose
- Subsampling ratio. By default, subsample = 0.7. Users can change this value within the xgb.params argument
- Predictive mean matching settings: pmm.type, pmm.k and pmm.link
- Whether ordinal factors should be converted to integers, which may speed up the imputation process: ordinalAsInteger
- Whether or not to use bootstrapping: bootstrap
- Initial imputation methods for different types of variables: initial.num, initial.int and initial.fac
- Whether to save models for imputing new data: save.models and save.vars
# Use mixgb with chosen settings
params <- list(
  max_depth = 5,
  subsample = 0.9,
  nthread = 2,
  tree_method = "hist"
)

imputed.data <- mixgb(
  data = nhanes3_newborn, m = 10, maxit = 2,
  ordinalAsInteger = FALSE, bootstrap = FALSE,
  pmm.type = "auto", pmm.k = 5, pmm.link = "prob",
  initial.num = "normal", initial.int = "mode", initial.fac = "mode",
  save.models = FALSE, save.vars = NULL,
  xgb.params = params, nrounds = 200,
  early_stopping_rounds = 10, print_every_n = 10L, verbose = 0
)
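Each element of the returned list is one completed dataset. A quick sanity check (assuming the call above) is to confirm that no missing values remain in an imputed dataset:

# Inspect the first imputed dataset; all counts should now be zero
colSums(is.na(imputed.data[[1]]))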
Imputation performance can be affected by the hyperparameter settings. Although tuning a large set of hyperparameters may appear intimidating, it is often possible to narrow down the search space because many hyperparameters are correlated. In our package, the function mixgb_cv() can be used to tune the number of boosting rounds, nrounds. There is no default nrounds value in XGBoost, so users are required to specify it themselves. The default nrounds in mixgb() is 100. However, we recommend using mixgb_cv() to find the optimal nrounds first.
params <- list(max_depth = 3, subsample = 0.7, nthread = 2)
cv.results <- mixgb_cv(data = nhanes3_newborn, nrounds = 100, xgb.params = params, verbose = FALSE)
cv.results$evaluation.log
#> iter train_rmse_mean train_rmse_std test_rmse_mean test_rmse_std
#> <num> <num> <num> <num> <num>
#> 1: 1 5.6264775 0.012654152 5.6232637 0.06990575
#> 2: 2 3.9907647 0.005318002 3.9932861 0.06224862
#> 3: 3 2.8464760 0.006524204 2.8525462 0.05902746
#> 4: 4 2.0574403 0.006270467 2.0662760 0.06732408
#> 5: 5 1.5162559 0.007352323 1.5294432 0.07411480
#> 6: 6 1.1549950 0.011001226 1.1775540 0.07831956
#> 7: 7 0.9211032 0.017621583 0.9557387 0.08728562
#> 8: 8 0.7762264 0.020489332 0.8226546 0.09638312
#> 9: 9 0.6872187 0.020336335 0.7405964 0.10173064
#> 10: 10 0.6327497 0.022435161 0.6959031 0.10413192
#> 11: 11 0.5997786 0.021871348 0.6718497 0.10540036
#> 12: 12 0.5802680 0.021192239 0.6623342 0.10318676
#> 13: 13 0.5692138 0.021683524 0.6547936 0.10517984
#> 14: 14 0.5605627 0.022707172 0.6498806 0.10645453
#> 15: 15 0.5533794 0.022272710 0.6470849 0.10711865
#> 16: 16 0.5487505 0.022556864 0.6456341 0.10695166
#> 17: 17 0.5429231 0.022675312 0.6459804 0.10622881
#> 18: 18 0.5394071 0.022504066 0.6455456 0.10644238
#> 19: 19 0.5332706 0.019513758 0.6467061 0.10552916
#> 20: 20 0.5293077 0.019003029 0.6461716 0.10458823
#> 21: 21 0.5264754 0.019843989 0.6449325 0.10593313
#> 22: 22 0.5242713 0.019797220 0.6456301 0.10578064
#> 23: 23 0.5207213 0.018940991 0.6470836 0.10516188
#> 24: 24 0.5181564 0.017855978 0.6478329 0.10496112
#> 25: 25 0.5162895 0.018038007 0.6480913 0.10369702
#> 26: 26 0.5127118 0.017277472 0.6485172 0.10317041
#> 27: 27 0.5098374 0.017414876 0.6494183 0.10139164
#> 28: 28 0.5038374 0.014997959 0.6516332 0.10140255
#> 29: 29 0.5007207 0.014485638 0.6523519 0.10034947
#> 30: 30 0.4977845 0.014088024 0.6524572 0.09967298
#> 31: 31 0.4957854 0.013787408 0.6523202 0.10004892
#> iter train_rmse_mean train_rmse_std test_rmse_mean test_rmse_std
cv.results$response
#> [1] "BMPWT"
cv.results$best.nrounds
#> [1] 21
By default, mixgb_cv() will randomly choose an incomplete variable as the response and build an XGBoost model with the other variables as explanatory variables, using the complete cases of the dataset. Therefore, each run of mixgb_cv() is likely to return different results. Users can also specify the response and covariates in the arguments response and select_features, respectively.
cv.results <- mixgb_cv(
data = nhanes3_newborn, nfold = 10, nrounds = 100, early_stopping_rounds = 1,
response = "BMPHEAD", select_features = c("HSAGEIR", "HSSEX", "DMARETHN", "BMPRECUM", "BMPSB1", "BMPSB2", "BMPTR1", "BMPTR2", "BMPWT"), xgb.params = params, verbose = FALSE
)
cv.results$best.nrounds
#> [1] 19
Let us just try setting nrounds = cv.results$best.nrounds in mixgb() to obtain 5 imputed datasets.
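Based on the call signature used earlier, this looks like:

# Impute using the tuned number of boosting rounds from cross-validation
imputed.data <- mixgb(data = nhanes3_newborn, m = 5, nrounds = cv.results$best.nrounds)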
The mixgb package used to provide a few visual diagnostic functions. However, we have moved these functions to the vismi package, which provides a wide range of visualisation tools for multiple imputation. For more details, please check the vismi package on GitHub: Visualisation Tools for Multiple Imputation.
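If you would like to try these tools, vismi can be installed from GitHub with devtools; note that the repository path below is an assumption based on the mixgb author's GitHub account, not stated in this document:

# Install vismi from GitHub (repository path assumed, not verified)
# install.packages("devtools")
devtools::install_github("agnesdeng/vismi")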