Support Lexicographic optimization in Blendsearch #972
base: main
Conversation
Is it better to create a new subclass than to modify the original BlendSearch? It's an open question.
I think we do not need to do that for now, because LexiFlow is also not independent of flow2 in the current version.
This PR involves many big changes, and there are many I don't understand. Could you summarize the changes first? Please also justify why each change should be made in that particular place. We do not want to add unnecessary changes or break the original functionality.
Later, I'll also request comments for all the non-obvious changes. Let's get a high-level understanding first.
flaml/tune/searcher/blendsearch.py
Outdated
metric += self.lagrange
if self.lexico_objectives:
    self._metric_constraints = None
    logger.info("Do not support providing metric_constraints in lexicographic optimization for now.")
Raise an error? Metric constraints should be provided via a target.
resolved
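A minimal sketch of the suggested fix, raising instead of logging. The function wrapper, its name, and the error message are illustrative, not FLAML's actual API:

```python
def check_metric_constraints(lexico_objectives, metric_constraints):
    """Fail fast when metric_constraints is combined with lexicographic
    optimization, instead of silently dropping it (hypothetical helper)."""
    if lexico_objectives and metric_constraints:
        raise ValueError(
            "Providing metric_constraints is not supported in lexicographic "
            "optimization; express the constraint as a target in lexico_objectives."
        )
```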
assert "lexico_algorithm" in lexico_objectives and lexico_objectives["lexico_algorithm"] in [
    "CFO",
    "BlendSearch",
], 'When performing lexicographic optimization, "lexico_algorithm" should be provided in lexicographic objectives (CFO or BlendSearch).'
SearchAlgorithm = CFO if lexico_objectives["lexico_algorithm"] == "CFO" else BlendSearch
This is strange. It's redundant with search_alg.
I think it is simpler for users to specify the algorithm in lexico_objectives, because search_alg in tune represents an algorithm instance rather than a string. We have also not tried using tune by passing an instance of CFO before.
You can support a str-type search_alg in tune to make it easier, but don't create a new argument, which is confusing. AutoML already uses tune by passing an instance of CFO.
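A sketch of how a string-valued `search_alg` could be accepted alongside instances, as suggested. The registry and the stub classes are hypothetical stand-ins, not FLAML's real searcher classes:

```python
class CFO:  # stand-in for flaml.tune.searcher.blendsearch.CFO
    pass

class BlendSearch:  # stand-in for flaml.tune.searcher.blendsearch.BlendSearch
    pass

_SEARCHERS = {"CFO": CFO, "BlendSearch": BlendSearch}

def resolve_search_alg(search_alg):
    """Accept either a searcher instance or its name as a string."""
    if isinstance(search_alg, str):
        if search_alg not in _SEARCHERS:
            raise ValueError(
                f"Unknown search_alg {search_alg!r}; expected one of {sorted(_SEARCHERS)}"
            )
        return _SEARCHERS[search_alg]()
    return search_alg  # already an instance; use as-is
```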
flaml/tune/searcher/suggestion.py
Outdated
@@ -739,3 +921,100 @@ def resolve_value(domain: Domain) -> ot.distributions.BaseDistribution:
    values = {"/".join(path): resolve_value(domain) for path, domain in domain_vars}

    return values


class LexiGlobalSearch(OptunaSearch):
Why add this class? None of the methods looks necessary. The update of fbest etc. can happen in SearchThread.
resolved
flaml/tune/searcher/blendsearch.py
Outdated
if obj_1 > obj_2:
    return True
else:
    return False
simplify
resolved
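The four-line if/else above collapses to a single comparison; as a standalone function:

```python
def lexico_gt(obj_1, obj_2):
    # Equivalent to: if obj_1 > obj_2: return True / else: return False
    return obj_1 > obj_2
```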
flaml/tune/searcher/blendsearch.py
Outdated
n = len(arr)
for i in range(n):
    for j in range(0, n - i - 1):
        if self._lexico_(arr[j], arr[j + 1]):
            arr[j], arr[j + 1] = arr[j + 1], arr[j]
There should be a more standard way to sort in Python.
resolved
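The standard replacement for the hand-written bubble sort is `sorted` with `functools.cmp_to_key`; here `lexico_gt` stands in for the `self._lexico_` comparison above:

```python
from functools import cmp_to_key

def lexico_sort(arr, lexico_gt):
    """Sort with the pairwise comparison used above:
    lexico_gt(a, b) is True when a should come after b."""
    def cmp(a, b):
        if lexico_gt(a, b):
            return 1
        if lexico_gt(b, a):
            return -1
        return 0  # tie: keep original order (sorted is stable)
    return sorted(arr, key=cmp_to_key(cmp))
```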
flaml/tune/searcher/blendsearch.py
Outdated
@@ -748,6 +910,10 @@ def suggest(self, trial_id: str) -> Optional[Dict]:
    self._subspace[trial_id] = space
else:  # use init config
    if self._candidate_start_points is not None and self._points_to_evaluate:
        if self.lexico_objectives:
            raise NotImplementedError(
                "It doesn't support providing points_to_evaluate in lexicographic optimization for now."
revise English.
resolved
flaml/tune/searcher/blendsearch.py
Outdated
self._result[config_signature][k_metric] = {
    self._metric: np.inf * -1 if k_mode == "max" else np.inf
}
self._result[config_signature]["time_total_s"] = 1
move out of the loop
resolved
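A sketch of the requested fix: the per-metric sentinel values stay in the loop, while the loop-invariant `time_total_s` assignment moves after it. The function wrapper and names are illustrative:

```python
def init_sentinel_result(metrics, modes):
    """Initialize worst-case objective values for an abandoned config
    (hypothetical helper mirroring the snippet above)."""
    result = {}
    for k_metric, k_mode in zip(metrics, modes):
        # worst possible value for the given optimization direction
        result[k_metric] = float("-inf") if k_mode == "max" else float("inf")
    result["time_total_s"] = 1  # moved out of the loop: it does not depend on k_metric
    return result
```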
flaml/tune/searcher/flow2.py
Outdated
@@ -312,6 +314,24 @@ def denormalize(self, config):
    """denormalize each dimension in config from [0,1]."""
    return denormalize(config, self._space, self.best_config, self.incumbent, self._random)

def _get_lexico_bound(self, metric, mode):
The same function appears in multiple places, which is not a good design.
resolved
flaml/tune/searcher/search_thread.py
Outdated
    )
else:
    self.speed = 0
elif self._search_alg._histories:
don't access private fields
resolved
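One way to honor this, sketched with hypothetical names: expose the histories through a read-only property, so `SearchThread` never touches the private `_histories` attribute directly:

```python
class LexiSearcher:  # hypothetical stand-in for the search algorithm
    def __init__(self):
        self._histories = {}

    @property
    def histories(self):
        """Read-only view of the recorded objective histories."""
        return self._histories

# Caller side becomes `if searcher.histories:` instead of
# `if searcher._histories:`.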
Do you have a test for BlendSearch?
Resolved. I added a simple test function for now.
test/tune/test_lexiflow.py
Outdated
@@ -103,6 +103,7 @@ def evaluate_function(configuration):

lexico_objectives = {}
lexico_objectives["metrics"] = ["error_rate", "flops"]
lexico_objectives["algorithm"] = ["CFO"]
Add a test for BlendSearch as well.
this functionality.

Tune automatically converts search spaces to Optuna's format:
Why such a significant change to the docstring?
points_to_evaluate: Optional[List[Dict]] = None,
sampler: Optional["BaseSampler"] = None,
seed: Optional[int] = None,
evaluated_rewards: Optional[List] = None,
):
    assert ot is not None, "Optuna must be installed! Run `pip install optuna`."
    super(OptunaSearch, self).__init__(metric=metric, mode=mode, max_concurrent=None, use_early_stopped_trials=None)
    super(OptunaSearch, self).__init__(metric=metric, mode=mode)
Why remove max_concurrent=None, use_early_stopped_trials=None?
@@ -500,7 +676,7 @@ def _setup_study(self, mode: str):
    for point in self._points_to_evaluate:
        self._ot_study.enqueue_trial(point)

def set_search_properties(self, metric: Optional[str], mode: Optional[str], config: Dict) -> bool:
def set_search_properties(self, metric: Optional[str], mode: Optional[str], config: Dict, **spec) -> bool:
How is spec used in this function?
@@ -553,21 +729,8 @@ def suggest(self, trial_id: str) -> Optional[Dict]:
    raise RuntimeError(
        UNDEFINED_METRIC_MODE.format(cls=self.__class__.__name__, metric=self._metric, mode=self._mode)
    )

if isinstance(self._space, list):
Why remove this case?
# Optuna doesn't support incremental results
# for multi-objective optimization
return
if trial_id in self._completed_trials:
Why/when does this happen?
metric = result[self.metric]
step = result[TRAINING_ITERATION]
ot_trial = self._ot_trials[trial_id]
ot_trial.report(metric, step)

def on_trial_complete(self, trial_id: str, result: Optional[Dict] = None, error: bool = False):
    if trial_id in self._completed_trials:
Same question as in on_trial_result.
flaml/tune/searcher/suggestion.py
Outdated
@@ -641,34 +834,23 @@ def add_evaluated_point(
    self._ot_study.add_trial(trial)

def save(self, checkpoint_path: str):
Why is this change needed?
flaml/tune/searcher/suggestion.py
Outdated
    self._points_to_evaluate,
    self._evaluated_rewards,
)
save_object = self.__dict__.copy()
with open(checkpoint_path, "wb") as outputFile:
    pickle.dump(save_object, outputFile)

def restore(self, checkpoint_path: str):
Why is this change needed?
@@ -26,7 +28,7 @@
from .flow2 import FLOW2
from ..space import add_cost_to_space, indexof, normalize, define_by_run_func
from ..result import TIME_TOTAL_S

from ..utils import get_lexico_bound
get_lexico_bound could be included in this file as a staticmethod of BlendSearch.
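A sketch of that refactor. The bound formula shown (best-so-far value plus an absolute tolerance, floored at the user target) is illustrative of how lexicographic bounds are typically computed; the real helper in the PR may differ, e.g. by also handling percentage tolerances:

```python
class BlendSearch:  # sketch only; all other methods elided
    @staticmethod
    def get_lexico_bound(metric, mode, lexico_objectives, f_best):
        """Worst value a config may take on `metric` while still being
        treated as tied with the best so far (illustrative formula)."""
        k_target = (
            lexico_objectives["targets"][metric]
            if mode == "min"
            else -lexico_objectives["targets"][metric]
        )
        tolerance = lexico_objectives["tolerances"][metric]
        return max(f_best[metric] + tolerance, k_target)
```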
Why are these changes needed?
We plan to support lexicographic optimization in BlendSearch. This pull request is primarily intended to track the implementation progress. We encourage everyone to provide feedback in this pull request regarding both implementation suggestions and application scenarios.
TODO:
Related issue number
N/A
Checks