add hpobench wrapper #993
base: develop
Conversation
This could be a potential way to re-use the singularity container. But this may not work in a multiple-worker scenario, as we could create the Orion benchmark task at the local client node and run the task at a different node.
docs/src/user/benchmark.rst (outdated)
A local benchmark could require different extra dependencies. With ``pip install orion[hpobench]``,
you will be able to run the local benchmark ``benchmarks.ml.tabular_benchmark``.
If you want to run other benchmarks locally, refer to HPOBench's `Run a Benchmark Locally`_. To run
containerized benchmarks, you will need to install `singularity`_.
Maybe there should be a quick word on how to use it with singularity once singularity is installed. Even if the usage is exactly the same, it should be mentioned.
@@ -185,6 +187,13 @@ def _from_categorical(dim: CategoricalHyperparameter) -> Categorical:
    return Categorical(dim.name, choices)


@to_oriondim.register(OrdinalHyperparameter)
def _from_ordinal(dim: OrdinalHyperparameter) -> Categorical:
Is ordinal best mapped to categorical or integer? Categorical is losing the importance of the ordering.
Ordinal values do not have to be int. See https://automl.github.io/ConfigSpace/main/api/hyperparameters.html#ConfigSpace.hyperparameters.OrdinalHyperparameter
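One option being weighed here, sketched below as a hypothetical alternative (not the PR's code; `_from_ordinal_as_integer` is an illustrative name, and it assumes evaluation code would translate indices back to values): optimize over the positions of `OrdinalHyperparameter.sequence` with an `Integer` dimension, which preserves the ordering even when the values themselves are not ints.

```python
from ConfigSpace.hyperparameters import OrdinalHyperparameter
from orion.algo.space import Integer


def _from_ordinal_as_integer(dim: OrdinalHyperparameter) -> Integer:
    # Optimize over indices 0 .. len(sequence)-1; Orion's 'uniform' prior
    # follows scipy's (loc, scale) convention, so this spans [0, n-1].
    n = len(dim.sequence)
    return Integer(dim.name, "uniform", 0, n - 1)


# At evaluation time the index would be mapped back to the actual value:
#     value = dim.sequence[index]
```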
src/orion/benchmark/task/hpobench.py (outdated)
def __init__(
    self,
    max_trials: int,
    hpo_benchmark_class: Union[str, None] = None,
Should it support `None` as default? Or maybe `''` by default to simplify typing.
wow, I forgot what I was thinking ^^
src/orion/benchmark/task/hpobench.py (outdated)
benchmark_kwargs: dict = dict(),
objective_function_kwargs: dict = dict(),
Having mutable objects as defaults is a bit risky.
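A minimal sketch of the usual fix, reusing the parameter names from the diff above (the surrounding class is omitted): default to `None` and build a fresh dict per instance, so instances never share the single default object evaluated at function-definition time.

```python
def __init__(
    self,
    max_trials: int,
    benchmark_kwargs: dict = None,
    objective_function_kwargs: dict = None,
):
    # A new dict is created for every instance instead of reusing one
    # shared dict object across all instances.
    self.benchmark_kwargs = benchmark_kwargs or {}
    self.objective_function_kwargs = objective_function_kwargs or {}
```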
self.objective_function_kwargs = objective_function_kwargs


def call(self, **kwargs) -> List[Dict]:
    hpo_benchmark = self.hpo_benchmark_cls(**self.benchmark_kwargs)
@donglinjy Did you figure out why the singularity container is destroyed at the end of the subprocess call? If we could avoid this, then we would only pay the price of building it during task instantiation.
I think this is caused by how we now run the trials. If we create the containerized benchmark during task init, we serialize the task instance and then deserialize it as a new task instance to run; after the run completes, this new instance is destroyed, which shuts down the singularity container too. I mentioned another possible solution at #993 (comment), but it would require some additional changes to work in the remote-workers scenario.
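A runnable toy illustration of that lifecycle (prints stand in for container start/teardown; none of this is the PR's code): `pickle.loads` does not call `__init__`, but every instance, including a deserialized copy, runs `__del__` when destroyed, so a container tied to the task's lifetime would be torn down at the end of each worker's run.

```python
import pickle


class Task:
    def __init__(self):
        print("container started")      # stand-in for building the container

    def __del__(self):
        print("container shut down")    # stand-in for singularity teardown


task = Task()                            # container built once, at task init
copy = pickle.loads(pickle.dumps(task))  # worker deserializes a new instance
del copy                                 # worker finishes: teardown fires here
del task                                 # ...and again for the original task
```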
from orion.algo.space import Space
from orion.benchmark.task import Branin, CarromTable, EggHolder, HPOBench, RosenBrock


try:
And HPOBench will likely require additional tests for singularity, and so would best be in a separate test module.
Sure, let me see if I can change it to something similar to `ImportOptional`. I actually saw that `ImportOptional` code and was just trying to keep the test cases simple, as with `profet` and `configspace`. I have not figured out an easy way to install `singularity` in our test env; maybe I can try it a little bit.
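For reference, a generic sketch of the optional-import guard pattern being discussed (a plain try/except variant, not necessarily `ImportOptional`'s exact API):

```python
import pytest

try:
    import hpobench  # noqa: F401

    HAS_HPOBENCH = True
except ImportError:
    HAS_HPOBENCH = False

# Skip the whole test module when the optional dependency is absent.
pytestmark = pytest.mark.skipif(not HAS_HPOBENCH, reason="hpobench is not installed")
```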
def test_create_with_container_benchmark(self):
    """Test to create HPOBench container benchmark"""
    with pytest.raises(AttributeError) as ex:
With message 'Can not run conterized benchmark without Singularity'?
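What the reviewer suggests could look like the sketch below, inside `TestHPOBench` (the `match` pattern is shortened for robustness, and the `hpo_benchmark_class` value is a hypothetical container benchmark path, not taken from the PR):

```python
import pytest


def test_create_with_container_benchmark(self):
    """Test to create HPOBench container benchmark"""
    # match= asserts the error message, not just the exception type.
    with pytest.raises(AttributeError, match="without Singularity"):
        HPOBench(
            max_trials=1,
            hpo_benchmark_class="hpobench.container.benchmarks.ml.tabular_benchmark.TabularBenchmark",
        )
```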
src/orion/benchmark/task/hpobench.py (outdated)
code, message = subprocess.getstatusoutput("singularity -h")
if code != 0:
    raise AttributeError(
        "Can not run conterized benchmark without Singularity: {}".format(
"Can not run conterized benchmark without Singularity: {}".format( | |
"Can not run containerized benchmark without Singularity: {}".format( |
)


class TestHPOBench:
    """Test benchmark task HPOBenchWrapper"""
If we would like to test invalid benchmark names, we would need to support singularity in the test suite, so that `_verify_benchmark` does not fail and the container fails during the call. Am I wrong?
This can be true if we have `singularity` installed. But we can also test invalid benchmark names without `singularity`, because an invalid name will cause the benchmark creation to fail before it reaches the point of using `singularity`.
@bouthilx I already added the work-around for the runner test case to succeed in 3afad9a. But the build will fail in the Python 3.10 test added recently, and
Description

Add an Orion benchmark task wrapper over HPOBench.
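A hedged usage sketch of what the wrapper enables (the benchmark class path and the hyperparameter name are illustrative, and `get_search_space` is assumed from Orion's `BenchmarkTask` interface, not confirmed by this PR):

```python
from orion.benchmark.task import HPOBench

# Hypothetical benchmark class path; any HPOBench benchmark importable in
# the local environment could be named here.
task = HPOBench(
    max_trials=10,
    hpo_benchmark_class="hpobench.benchmarks.ml.tabular_benchmark.TabularBenchmark",
)

print(task.get_search_space())  # space translated from the ConfigSpace definition
results = task(depth=3)         # call() evaluates one config, returns List[Dict]
```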
Tests

Tests pass locally (`$ tox -e py38`; replace `38` by your Python version if necessary).

Documentation

Quality

Lint passes (`$ tox -e lint`).