-
If the underlying learner supports it, it should be possible. We don't add or strip features from algorithms in mlr3. If you're facing problems with particular learners, you can post them here.
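For illustration, a quick way to confirm that the xgboost-specific GPU settings are simply passed through is to inspect the wrapped learner's parameter set. This is a sketch; the exact parameter names depend on the installed xgboost/mlr3learners versions:

```r
library(mlr3)
library(mlr3learners)

# The mlr3 learner exposes xgboost's own parameters unchanged, so GPU-related
# settings such as tree_method should appear here (version-dependent).
learner <- lrn("classif.xgboost")
"tree_method" %in% learner$param_set$ids()
```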
-
According to the example in the XGBoost R package (ref), GPU acceleration can be achieved by adding one parameter:
param <- list(objective = 'reg:logistic', eval_metric = 'auc', subsample = 0.5,
              nthread = 4, max_bin = 64, tree_method = 'gpu_hist')
However, to change the computing strategy in mlr3, I have read that we have to use the …
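To complete that thought with a hedged sketch: since mlr3 forwards learner parameters to xgboost unchanged, the same tree_method setting can be set directly on the learner, assuming an xgboost build compiled with GPU support:

```r
library(mlr3)
library(mlr3learners)

# Sketch: pass the xgboost GPU tree method through the mlr3 learner.
# Requires an xgboost installation compiled with CUDA support.
learner <- lrn("classif.xgboost",
               tree_method = "gpu_hist",
               nrounds = 100,
               predict_type = "prob")
learner$train(tsk("sonar"))
```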
-
To compute on GPUs, you need to compile xgboost yourself and link against CUDA. https://xgboost.readthedocs.io/en/stable/build.html#building-with-gpu-support Nothing we can do on our side, but I'll mention this in the docs.
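For reference, one quick way to check whether the installed xgboost binary was actually built with GPU support is to try a tiny gpu_hist fit directly. This is a sketch; a CPU-only build should raise an error here:

```r
library(xgboost)

# Minimal GPU smoke test: fails with an informative error on a CPU-only build.
X <- matrix(rnorm(1000 * 5), ncol = 5)
y <- sample(0:1, 1000, replace = TRUE)
ok <- tryCatch({
  xgb.train(
    params = list(objective = "binary:logistic", tree_method = "gpu_hist"),
    data = xgb.DMatrix(X, label = y),
    nrounds = 2
  )
  TRUE
}, error = function(e) FALSE)
ok
```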
-
Here is a properly reproducible example:
library(data.table)
library(mlr3)
library(mlr3learners)
library(mlr3tuning)
library(paradox)
library(lubridate)
# Generate dummy data (the target must be a factor for TaskClassif)
n <- 1e5
d <- data.table(target = factor(sample(2, n, replace = TRUE)))
d[, paste0("x", 1:10) := lapply(1:10, function(i) rnorm(n))]
# Set learner
xgb_learner <- lrn("classif.xgboost",
                   eval_metric = "error",
                   predict_type = "prob")
traintask <- TaskClassif$new(id = "training_data",
                             backend = d,
                             target = "target")
# Set parameters for training
tuner <- tnr("random_search")
rc <- rsmp("subsampling")
mult_measure <- list(msr("classif.specificity", na_value = 0),
                     msr("classif.sensitivity", na_value = 0),
                     msr("classif.fbeta", na_value = 0),
                     msr("classif.bacc")
)
XGB_parameters <- ps(
  # GPU
  tree_method = p_fct(default = "gpu_hist", levels = c("gpu_hist")),
  eta = p_dbl(default = 0.05, lower = 0.001, upper = 0.5),
  max_depth = p_int(default = 6L, lower = 2L, upper = 18L),
  nrounds = p_int(default = 150L, lower = 5L, upper = 1000L),
  gamma = p_dbl(default = 7, lower = 2, upper = 20),
  colsample_bytree = p_dbl(default = 0.15, lower = 0.05, upper = 0.5),
  subsample = p_dbl(default = 0.15, lower = 0.01, upper = 0.5),
  min_child_weight = p_dbl(default = 1, lower = 0, upper = 3),
  booster = p_fct(levels = c("dart")),
  scale_pos_weight = p_dbl(default = 1, lower = 0.75, upper = 1.1),
  # Parameters specific to DART
  rate_drop = p_dbl(default = 0.1, lower = 0.1, upper = 1, tags = "train"),
  skip_drop = p_dbl(default = 0.1, lower = 0.1, upper = 1, tags = "train")
)
XGB_parameters$add_dep("skip_drop", "booster", CondEqual$new("dart"))
XGB_parameters$add_dep("rate_drop", "booster", CondEqual$new("dart"))
XGB_parameters
# Stop after 1000 evaluations or 4 hours, whichever comes first
stop_time <- now() + hours(4)
term_combo <- trm("combo",
                  list(
                    trm("evals", n_evals = 1000),
                    trm("clock_time", stop_time = stop_time))
)
# Setup instance
instance <- TuningInstanceMultiCrit$new(
  task = traintask,
  learner = xgb_learner,
  resampling = rc,
  measures = mult_measure,
  search_space = XGB_parameters,
  terminator = term_combo)
tuner$optimize(instance)
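One possible follow-up once the tuning finishes, sketched under the assumption that the tuning instance above was created with the current mlr3tuning API:

```r
# Inspect the Pareto-optimal configurations and the full evaluation log.
instance$result                   # non-dominated configurations and their scores
as.data.table(instance$archive)   # every evaluated configuration
```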
-
Does it compute on the GPU if you call xgboost yourself?
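For anyone following along, one hedged way to answer that question is to time the same fit with and without the GPU tree method in plain xgboost (a sketch, assuming a GPU-enabled build):

```r
library(xgboost)

# Compare wall-clock time of CPU ('hist') vs GPU ('gpu_hist') on the same data;
# a clear speed-up (and visible GPU load) confirms the build uses the GPU.
X <- matrix(rnorm(1e5 * 20), ncol = 20)
y <- sample(0:1, 1e5, replace = TRUE)
dtrain <- xgb.DMatrix(X, label = y)
for (tm in c("hist", "gpu_hist")) {
  t <- system.time(
    xgb.train(
      params = list(objective = "binary:logistic", tree_method = tm),
      data = dtrain, nrounds = 50, verbose = 0
    )
  )
  cat(tm, ":", round(t["elapsed"], 2), "seconds\n")
}
```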
-
Here is a screenshot of the GPU usage: the peak between 6.28 and 6.30 is when I ran the little sample script from the previous comment.
-
I would like to know if it is possible to train a model using GPU acceleration. I have seen that it is possible to do it with LightGBM here, but the build is failing and I am not sure it translates to XGBoost.
Thanks