
Merge pull request #648 from automl/development
Release 0.12.1
KEggensperger authored May 6, 2020
2 parents 8e9b336 + 6e37b08 commit 2cd6c9e
Showing 61 changed files with 2,147 additions and 597 deletions.
2 changes: 2 additions & 0 deletions .travis.yml
@@ -18,6 +18,8 @@ matrix:
     env: TESTSUITE=run_unittests.sh PYTHON_VERSION="3.6" MINICONDA_URL="https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh"
   - os: linux
     env: TESTSUITE=run_unittests.sh PYTHON_VERSION="3.7" COVERAGE="true" DOCPUSH="true" MINICONDA_URL="https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh"
+  - os: linux
+    env: TESTSUITE=run_unittests.sh PYTHON_VERSION="3.8" MINICONDA_URL="https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh"
   # Other tests (mypy, examples, flake8...)
   - os: linux
     env: TESTSUITE=run_flake8.sh PYTHON_VERSION="3.6" MINICONDA_URL="https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh"
18 changes: 18 additions & 0 deletions changelog.md
@@ -1,3 +1,21 @@
+# 0.12.1
+
+## Major Changes
+
+## Minor Changes
+
+* Upgrade the minimal scikit-learn dependency to 0.22.X.
+* Make GP predictions faster (#638)
+* Allow passing `tae_runner_kwargs` to `ROAR` (see the sketch below).
+* Add a new StatusType `DONOTADVANCE` for runs that would not benefit from a higher budget. Such runs are always used
+  to build a model for SH/HB (#632)
+* Add facades/examples for HB/SH (#610)
+* Compute acquisition function only if necessary (#627,#629)
+
+## Bug Fixes
+* Fixed a bug which caused SH/HB to consider runs with status TIMEOUT on all budgets for model building (#632)
+* Fixed a bug in adaptive capping for SH (#619,#622)
+
# 0.12.0

## Major Changes
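To make the `tae_runner_kwargs` changelog entry concrete, here is a minimal sketch of forwarding such kwargs through the `ROAR` facade. The facade and scenario usage mirror the examples in this release; the `use_pynisher` keyword is a hypothetical stand-in chosen only to illustrate the plumbing, not a documented option.

```python
import numpy as np

from ConfigSpace.hyperparameters import UniformFloatHyperparameter
from smac.configspace import ConfigurationSpace
from smac.facade.roar_facade import ROAR
from smac.scenario.scenario import Scenario


def quadratic(cfg, seed=0):
    # Toy objective: minimize x^2.
    return cfg["x"] ** 2


cs = ConfigurationSpace()
cs.add_hyperparameter(UniformFloatHyperparameter("x", -5.0, 5.0, default_value=1.0))

scenario = Scenario({"run_obj": "quality", "runcount-limit": 20,
                     "cs": cs, "deterministic": "true"})

# `tae_runner_kwargs` is forwarded to the target-algorithm executor that wraps
# `quadratic`; `use_pynisher` is a hypothetical kwarg used only for illustration.
smac = ROAR(scenario=scenario, rng=np.random.RandomState(42),
            tae_runner=quadratic,
            tae_runner_kwargs={'use_pynisher': False})
incumbent = smac.optimize()
```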
43 changes: 41 additions & 2 deletions ci_scripts/run_examples.sh
@@ -2,6 +2,10 @@ cd examples

for script in *.py
do
+    echo '###############################################################################'
+    echo '###############################################################################'
+    echo "Starting to test $script"
+    echo '###############################################################################'
    python $script
    rval=$?
    if [ "$rval" != 0 ]; then
@@ -10,23 +14,58 @@ do
    fi
done

+echo '###############################################################################'
+echo '###############################################################################'
+echo "Starting to test Spear QCP SMAC"
+echo '###############################################################################'
cd spear_qcp
-bash run.sh
+bash run_SMAC.sh
rval=$?
if [ "$rval" != 0 ]; then
    echo "Error running example QCP"
    exit $rval
fi

+echo '###############################################################################'
+echo '###############################################################################'
+echo "Starting to test Spear QCP ROAR"
+echo '###############################################################################'
+bash run_ROAR.sh
+rval=$?
+if [ "$rval" != 0 ]; then
+    echo "Error running example QCP"
+    exit $rval
+fi
+
+echo '###############################################################################'
+echo '###############################################################################'
+echo "Starting to test Spear QCP Successive halving"
+echo '###############################################################################'
+python SMAC4AC_SH_spear_qcp.py
+rval=$?
+if [ "$rval" != 0 ]; then
+    echo "Error running python example QCP"
+    exit $rval
+fi
+
cd ..

+echo '###############################################################################'
+echo '###############################################################################'
+echo "Starting to test branin_fmin.py"
+echo '###############################################################################'
cd branin
python branin_fmin.py
rval=$?
if [ "$rval" != 0 ]; then
-    echo "Error running example QCP"
+    echo "Error running example branin_fmin.py"
    exit $rval
fi

+echo '###############################################################################'
+echo '###############################################################################'
+echo "Starting to test branin from the command line"
+echo '###############################################################################'
python ../../scripts/smac --scenario scenario.txt
rval=$?
if [ "$rval" != 0 ]; then
15 changes: 6 additions & 9 deletions examples/SMAC4HPO_mlp_hyperband.py → examples/BOHB4HPO_mlp.py
@@ -1,7 +1,7 @@
"""
================================
Optimizing an MLP with HyperBand
================================
===========================
Optimizing an MLP with BOHB
===========================
An example for the usage of Hyperband intensifier in SMAC.
We optimize a simple MLP on the digits dataset using "Hyperband" intensification.
@@ -23,8 +23,7 @@
from sklearn.neural_network import MLPClassifier

from smac.configspace import ConfigurationSpace
-from smac.facade.smac_hpo_facade import SMAC4HPO
-from smac.intensification.hyperband import Hyperband
+from smac.facade.smac_bohb_facade import BOHB4HPO
from smac.scenario.scenario import Scenario

digits = load_digits()
@@ -74,7 +73,7 @@ def mlp_from_cfg(cfg, seed, instance, budget, **kwargs):
            random_state=seed)

    # returns the cross validation accuracy
-    cv = StratifiedKFold(n_splits=5, random_state=seed)  # to make CV splits consistent
+    cv = StratifiedKFold(n_splits=5, random_state=seed, shuffle=True)  # to make CV splits consistent
    score = cross_val_score(mlp, digits.data, digits.target, cv=cv, error_score='raise')

    return 1 - np.mean(score)  # Because minimize!
@@ -127,10 +126,8 @@ def mlp_from_cfg(cfg, seed, instance, budget, **kwargs):
# intensifier parameters
intensifier_kwargs = {'initial_budget': 5, 'max_budget': max_iters, 'eta': 3}
# To optimize, we pass the function to the SMAC-object
-smac = SMAC4HPO(scenario=scenario, rng=np.random.RandomState(42),
+smac = BOHB4HPO(scenario=scenario, rng=np.random.RandomState(42),
                tae_runner=mlp_from_cfg,
-                intensifier=Hyperband,  # you can also change the intensifier to use like this!
-                # This example currently uses Hyperband intensification,
                intensifier_kwargs=intensifier_kwargs)  # all arguments related to intensifier can be passed like this

# Example call of the function with default values
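For readers who want the renamed example in a nutshell, a self-contained sketch of the `BOHB4HPO` facade as used in the diff above; the toy objective standing in for MLP training is an assumption for illustration.

```python
import numpy as np

from ConfigSpace.hyperparameters import UniformFloatHyperparameter
from smac.configspace import ConfigurationSpace
from smac.facade.smac_bohb_facade import BOHB4HPO
from smac.scenario.scenario import Scenario


def toy_from_cfg(cfg, seed, instance, budget, **kwargs):
    # Stand-in for "train an MLP for `budget` epochs": a noisy quadratic
    # whose noise shrinks as the budget grows.
    rng = np.random.RandomState(seed)
    return cfg["x"] ** 2 + rng.normal(scale=1.0 / budget)


cs = ConfigurationSpace()
cs.add_hyperparameter(UniformFloatHyperparameter("x", -5.0, 5.0, default_value=1.0))

scenario = Scenario({"run_obj": "quality", "runcount-limit": 30,
                     "cs": cs, "deterministic": "true"})

# Same pattern as the diff above: no explicit intensifier is needed,
# since the BOHB4HPO facade already wires in Hyperband.
smac = BOHB4HPO(scenario=scenario, rng=np.random.RandomState(42),
                tae_runner=toy_from_cfg,
                intensifier_kwargs={'initial_budget': 1, 'max_budget': 27, 'eta': 3})
incumbent = smac.optimize()
```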
@@ -1,7 +1,7 @@
"""
-==============================================================
-Optimizing average cross-validation performance with HyperBand
-==============================================================
+=========================================================
+Optimizing average cross-validation performance with BOHB
+=========================================================
An example for the usage of Hyperband intensifier in SMAC with multiple instances.
We optimize an SGD classifier on the digits dataset as multiple binary classification problems
@@ -26,8 +26,7 @@

# Import ConfigSpace and different types of parameters
from smac.configspace import ConfigurationSpace
-from smac.facade.smac_hpo_facade import SMAC4HPO
-from smac.intensification.hyperband import Hyperband
+from smac.facade.smac_bohb_facade import BOHB4HPO
# Import SMAC-utilities
from smac.scenario.scenario import Scenario
@@ -81,7 +80,7 @@ def sgd_from_cfg(cfg, seed, instance):
    # get instance
    data, target = generate_instances(int(instance[0]), int(instance[1]))

-    cv = StratifiedKFold(n_splits=4, random_state=seed)  # to make CV splits consistent
+    cv = StratifiedKFold(n_splits=4, random_state=seed, shuffle=True)  # to make CV splits consistent
    scores = cross_val_score(clf, data, target, cv=cv)
    return 1 - np.mean(scores)  # Minimize!
@@ -117,16 +116,13 @@ def sgd_from_cfg(cfg, seed, instance):
intensifier_kwargs = {'initial_budget': 1, 'max_budget': 45, 'eta': 3,
                      'instance_order': None,  # You can also shuffle the order of using instances by this parameter.
                                               # 'shuffle' will shuffle instances before each SH run and
-                                              # 'shuffle_once' will shuffle instances once before the 1st
-                                              # SH iteration begins
+                                              # 'shuffle_once' will shuffle instances once before the 1st SH iteration begins
                      }

# To optimize, we pass the function to the SMAC-object
-smac = SMAC4HPO(scenario=scenario, rng=np.random.RandomState(42),
+smac = BOHB4HPO(scenario=scenario, rng=np.random.RandomState(42),
                tae_runner=sgd_from_cfg,
-                intensifier=Hyperband,  # you can also change the intensifier to use like this!
-                # This example currently uses Hyperband intensification,
-                intensifier_kwargs=intensifier_kwargs)  # all parameters related to intensifier can be passed like this
+                intensifier_kwargs=intensifier_kwargs)  # all arguments related to intensifier can be passed like this

# Example call of the function
# It returns: Status, Cost, Runtime, Additional Infos
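The `instance_order` values mentioned in the comment above take three settings. A short sketch of the kwargs dict, grounded in this diff; only the chosen value differs from the example:

```python
# 'instance_order' controls how Successive Halving walks through instances:
#   None           - keep the instance order given by the scenario
#   'shuffle'      - reshuffle the instances before every SH run
#   'shuffle_once' - shuffle once before the first SH iteration begins
intensifier_kwargs = {
    'initial_budget': 1,
    'max_budget': 45,
    'eta': 3,
    'instance_order': 'shuffle_once',
}
```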
126 changes: 126 additions & 0 deletions examples/hyperband_mlp.py
@@ -0,0 +1,126 @@
"""
================================
Optimizing an MLP with Hyperband
================================
An example for the usage of a model-free Hyperband intensifier in SMAC.
The configurations are sampled at random.
In this example, we use a real-valued budget in hyperband (number of epochs to train the MLP) and
optimize the average accuracy on a 5-fold cross validation.
"""

import logging
import warnings

import numpy as np
from ConfigSpace.hyperparameters import CategoricalHyperparameter, \
UniformFloatHyperparameter, UniformIntegerHyperparameter
from sklearn.datasets import load_digits
from sklearn.exceptions import ConvergenceWarning
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neural_network import MLPClassifier

from smac.configspace import ConfigurationSpace
from smac.facade.hyperband_facade import HB4AC
from smac.scenario.scenario import Scenario

digits = load_digits()


# Target Algorithm
# The signature of the function determines what arguments are passed to it
# i.e., budget is passed to the target algorithm if it is present in the signature
def mlp_from_cfg(cfg, seed, instance, budget, **kwargs):
"""
Creates an MLP classifier from sklearn and fits the given data on it.
This is the function-call we try to optimize. Chosen values are stored in
the configuration (cfg).
Parameters
----------
cfg: Configuration
configuration chosen by smac
seed: int or RandomState
used to initialize the MLP's random generator
instance: str
used to represent the instance to use (just a placeholder for this example)
budget: float
used to set max iterations for the MLP
Returns
-------
float
"""

with warnings.catch_warnings():
warnings.filterwarnings('ignore', category=ConvergenceWarning)

mlp = MLPClassifier(
hidden_layer_sizes=[cfg["n_neurons"]] * cfg["n_layer"],
batch_size=cfg['batch_size'],
activation=cfg['activation'],
learning_rate_init=cfg['learning_rate_init'],
max_iter=int(np.ceil(budget)),
random_state=seed)

# returns the cross validation accuracy
cv = StratifiedKFold(n_splits=5, random_state=seed, shuffle=True) # to make CV splits consistent
score = cross_val_score(mlp, digits.data, digits.target, cv=cv, error_score='raise')

return 1 - np.mean(score) # Because minimize!


logger = logging.getLogger("MLP-example")
logging.basicConfig(level=logging.INFO)

# Build Configuration Space which defines all parameters and their ranges.
# To illustrate different parameter types,
# we use continuous, integer and categorical parameters.
cs = ConfigurationSpace()

# We can add multiple hyperparameters at once:
n_layer = UniformIntegerHyperparameter("n_layer", 1, 5, default_value=1)
n_neurons = UniformIntegerHyperparameter("n_neurons", 8, 1024, log=True, default_value=10)
activation = CategoricalHyperparameter("activation", ['logistic', 'tanh', 'relu'],
default_value='tanh')
batch_size = UniformIntegerHyperparameter('batch_size', 30, 300, default_value=200)
learning_rate_init = UniformFloatHyperparameter('learning_rate_init', 0.0001, 1.0, default_value=0.001, log=True)
cs.add_hyperparameters([n_layer, n_neurons, activation, batch_size, learning_rate_init])

# SMAC scenario object
scenario = Scenario({"run_obj": "quality", # we optimize quality (alternative to runtime)
"wallclock-limit": 100, # max duration to run the optimization (in seconds)
"cs": cs, # configuration space
"deterministic": "true",
"limit_resources": True, # Uses pynisher to limit memory and runtime
# Alternatively, you can also disable this.
# Then you should handle runtime and memory yourself in the TA
"cutoff": 30, # runtime limit for target algorithm
"memory_limit": 3072, # adapt this to reasonable value for your hardware
})

# The maximum budget for Hyperband can be anything. Here, we set it to the maximum number of epochs to train the MLP.
max_iters = 50
# intensifier parameters
intensifier_kwargs = {'initial_budget': 5, 'max_budget': max_iters, 'eta': 3}
# To optimize, we pass the function to the SMAC-object
smac = HB4AC(scenario=scenario, rng=np.random.RandomState(42),
tae_runner=mlp_from_cfg,
intensifier_kwargs=intensifier_kwargs) # all arguments related to intensifier can be passed like this

# Example call of the function with default values
# It returns: Status, Cost, Runtime, Additional Infos
def_value = smac.get_tae_runner().run(config=cs.get_default_configuration(),
instance='1', budget=max_iters, seed=0)[1]
print("Value for default configuration: %.4f" % def_value)

# Start optimization
try:
incumbent = smac.optimize()
finally:
incumbent = smac.solver.incumbent

inc_value = smac.get_tae_runner().run(config=incumbent, instance='1',
budget=max_iters, seed=0)[1]
print("Optimized Value: %.4f" % inc_value)
@@ -23,6 +23,7 @@
    # non-deterministic target algorithm
    'initial_budget': 1,
    'eta': 3,
+    'min_chall': 1  # because successive halving cannot handle min_chall > 1
}

smac = SMAC4AC(scenario=scenario,  # scenario object
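A hedged sketch of the surrounding setup for the `min_chall` line added above, assuming the hunk belongs to an SMAC4AC + SuccessiveHalving example; the scenario values and toy target function are placeholders, not the example's actual contents.

```python
import numpy as np

from ConfigSpace.hyperparameters import UniformFloatHyperparameter
from smac.configspace import ConfigurationSpace
from smac.facade.smac_ac_facade import SMAC4AC
from smac.intensification.successive_halving import SuccessiveHalving
from smac.scenario.scenario import Scenario


def toy(cfg, seed=0, budget=1, **kwargs):
    # Placeholder target algorithm: minimize x^2, ignoring the budget.
    return cfg["x"] ** 2


cs = ConfigurationSpace()
cs.add_hyperparameter(UniformFloatHyperparameter("x", -5.0, 5.0, default_value=1.0))

scenario = Scenario({"run_obj": "quality", "runcount-limit": 30,
                     "cs": cs, "deterministic": "false"})

intensifier_kwargs = {
    'initial_budget': 1,
    'max_budget': 9,
    'eta': 3,
    'min_chall': 1,  # successive halving cannot handle min_chall > 1
}

smac = SMAC4AC(scenario=scenario, rng=np.random.RandomState(42),
               tae_runner=toy,
               intensifier=SuccessiveHalving,
               intensifier_kwargs=intensifier_kwargs)
incumbent = smac.optimize()
```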
@@ -1 +1 @@
-python3 ../../scripts/smac --scenario scenario.txt --verbose DEBUG
+python3 ../../scripts/smac --scenario scenario.txt --verbose DEBUG --mode ROAR
1 change: 1 addition & 0 deletions examples/spear_qcp/run_SMAC.sh
@@ -0,0 +1 @@
+python3 ../../scripts/smac --scenario scenario.txt --verbose DEBUG --mode SMAC4AC
2 changes: 1 addition & 1 deletion extras_require.json
@@ -9,6 +9,6 @@
  "documentation": [
    "sphinx",
    "sphinx_rtd_theme",
-    "sphinx-gallery"
+    "sphinx-gallery==0.5.0"
  ]
}
2 changes: 1 addition & 1 deletion requirements.txt
@@ -3,7 +3,7 @@ scipy>=0.18.1
psutil
pynisher>=0.4.1
ConfigSpace>=0.4.9,<0.5
-scikit-learn>=0.18.0,<0.22
+scikit-learn>=0.22.0
pyrfr>=0.8.0
sobol_seq
joblib
2 changes: 1 addition & 1 deletion smac/__init__.py
@@ -5,7 +5,7 @@
import lazy_import
from smac.utils import dependencies

-__version__ = '0.12.0'
+__version__ = '0.12.1'
__author__ = 'Marius Lindauer, Matthias Feurer, Katharina Eggensperger, Joshua Marben, André Biedenkapp, Aaron Klein,'\
             'Stefan Falkner and Frank Hutter'
Expand Down