Optimization history plot can't seem to converge or plateau #5327
Unanswered
adnan-umich asked this question in Q&A
-
If your objective function returns these three values, it is a multi-objective problem, so the optimal solutions involve trade-offs as described in https://en.wikipedia.org/wiki/Pareto_front; there is no single winner. I think you need to return a single value, e.g. the mean dice score (on the validation set?), instead of three values to make the optimisation converge.
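One way to follow this advice, if you want the other two metrics to still have some influence, is to scalarize the three values into one number before returning it. A minimal sketch (the weights and the sign flip on the dice score are illustrative assumptions, not from the original code):

```python
def scalarize(mean_dice, train_loss, val_loss,
              w_dice=1.0, w_train=0.1, w_val=0.5):
    """Combine three metrics into a single value to *minimize*.

    The weights are illustrative assumptions; the dice score is negated
    because higher dice is better while this combined value is minimized.
    """
    return -w_dice * mean_dice + w_train * train_loss + w_val * val_loss

# Example: a trial with mean dice 0.9, train loss 0.2, val loss 0.3
value = scalarize(0.9, 0.2, 0.3)  # -0.9 + 0.02 + 0.15 = -0.73
```

The simpler alternative suggested above is to drop the weighting entirely and return only the mean validation dice (with `direction="maximize"`).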
-
Hello everyone, I'm running an Optuna-based hyperparameter search for a PyTorch segmentation model (U-Net), using the TPESampler. The objective function returns three values: the mean dice score (to maximize) and the average training and validation losses (both to minimize). Below is what my Optimization History plot looks like. Even after 200+ trials, the optimization still seems to have a hard time converging to a set of parameters.
Please let me know if you need me to post any more information. Any advice is greatly appreciated. Thank you.
My hyper parameters are the following:
Optuna_settings.hyperparam.batch = discrete integer between [1, 5]
Optuna_settings.hyperparam.beta_1 = continuous float between [0.85, 0.999]
Optuna_settings.hyperparam.beta_2 = continuous float between [0.85, 0.999]
Optuna_settings.hyperparam.epoch = discrete integer between [200, 800]
Optuna_settings.hyperparam.learning_rate = continuous float between [0.00001, 0.0001]
Optuna_settings.hyperparam.loss = categorical variable ["DiceLoss", "DiceCELoss", "MaskedDiceLoss", "GeneralizedDiceLoss", "FocalLoss", "TverskyLoss"]
Optuna_settings.hyperparam.optimizer = categorical variable ["Adam", "AdamW"]
Optuna_settings.hyperparam.weight_decay = continuous float between [0, 0.000001]