optuna | A hyperparameter optimization framework | Machine Learning library

by optuna | Python | Version: 3.6.1 | License: Non-SPDX

kandi X-RAY | optuna Summary

optuna is a Python library typically used in Artificial Intelligence, Machine Learning, Deep Learning, and PyTorch applications. optuna has no bugs, no vulnerabilities, and high support. However, its build file is not available, and it has a Non-SPDX license. You can install it using 'pip install optuna' or download it from GitHub or PyPI.

Website | Docs | Install Guide | Tutorial. Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to the define-by-run API, code written with Optuna is highly modular, and users can dynamically construct hyperparameter search spaces.
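As a minimal illustration of define-by-run (the dataset and models below are our own example, not taken from this page), a conditional search space can be built imperatively inside the objective:

import optuna
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def objective(trial):
    # define-by-run: the search space is built imperatively inside the
    # objective, so conditional parameters exist only on the branch taken
    classifier = trial.suggest_categorical("classifier", ["svc", "random_forest"])
    if classifier == "svc":
        model = SVC(C=trial.suggest_float("svc_c", 1e-5, 1e2, log=True))
    else:
        model = RandomForestClassifier(
            max_depth=trial.suggest_int("rf_max_depth", 2, 32)
        )
    return cross_val_score(model, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)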

            kandi-support Support

optuna has a highly active ecosystem.
It has 8216 stars and 835 forks. There are 123 watchers for this library.
There were 3 major releases in the last 12 months.
There are 120 open issues, and 1316 issues have been closed. On average, issues are closed in 58 days. There are 7 open pull requests and 0 closed requests.
It has a positive sentiment in the developer community.
The latest version of optuna is 3.6.1.

            kandi-Quality Quality

              optuna has 0 bugs and 0 code smells.

            kandi-Security Security

              optuna has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              optuna code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              optuna has a Non-SPDX License.
A Non-SPDX license can be an open-source license that is not SPDX-compliant, or a non-open-source license; review it closely before use.

            kandi-Reuse Reuse

optuna releases are available to install and integrate.
A deployable package is available on PyPI.
optuna has no build file, so you will need to create the build yourself to build the component from source.
              Installation instructions, examples and code snippets are available.
optuna saves you 17050 person-hours of effort in developing the same functionality from scratch.
It has 37925 lines of code, 2496 functions and 288 files.
It has high code complexity, which directly impacts the maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi has reviewed optuna and discovered the following top functions. This is intended to give you an instant insight into the functionality optuna implements, and to help you decide if it suits your requirements.
            • Optimize the study using the given function
            • Optimizes an objective function
            • Run a single trial
            • Optimizes the objective function
            • Plot the empirical distribution function
• Return True if the direction is multi-objective
            • Check that the arguments passed to the plot
            • Helper function for _EDFInfo
            • Mark a function as deprecated
            • Adds a trial to the session
            • Plot intermediate values
            • Run tool
            • Return trials data as a pandas DataFrame
            • Plot parameter importances
            • Plot the optimization history
            • Plot a slice plot
            • Plot a parallel coordinate
            • Transform a search space into a numpy array
            • Enqueues a trial
            • Plot a study contour
            • Create a study ask
            • Plot the pareto front
            • Upgrade the database
            • Mark a class as deprecated
            • Return set of requirements required to download
            • Sample relative to study

            optuna Key Features

            No Key Features are available at this moment for optuna.

            optuna Examples and Code Snippets

Class Resolver, Writing Extensible Machine Learning Models with
Python · 140 lines of code · License: Permissive (MIT)
from itertools import chain

from more_itertools import pairwise
from torch import nn

class MLP(nn.Sequential):
    def __init__(self, dims: list[int]):
        # tail of the truncated snippet reconstructed (assumed): a Linear
        # plus ReLU for each consecutive pair of layer widths, unpacked
        # into nn.Sequential
        super().__init__(*chain.from_iterable(
            (nn.Linear(in_features, out_features), nn.ReLU())
            for in_features, out_features in pairwise(dims)
        ))
4. Usage, 4.3. Tuning
Python · 86 lines of code · License: Permissive (MIT)
            !TF_CPP_MIN_LOG_LEVEL=3 xagents tune ppo --env PongNoFrameskip-v4 --study ppo-carnival --storage sqlite:///ppo-carnival.db --trial-steps 500000 --n-trials 100 --warmup-trials 3 --preprocess --n-envs 16 32 --lr 1e-5 1e-2 --opt-epsilon 1e-7 1e-4 --gamm  
tianshou - lunarlander dqn
Python · 123 lines of code · License: Permissive (MIT)
            import argparse
            import os
            import pprint
            
            import gym
            import numpy as np
            import torch
            from torch.utils.tensorboard import SummaryWriter
            
            from tianshou.data import Collector, VectorReplayBuffer
            from tianshou.env import DummyVectorEnv, SubprocVectorEnv
            f  
When using the optuna plugin for hydra, can I import the search space from another config file?
Python · 40 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
            defaults:
              - datasets: data
              - models: Ets
              - search_spaces@hydra.sweeper.search_space: Ets
              - override hydra/sweeper: optuna
              - override hydra/sweeper/sampler: tpe
            
            hydra:
             run:
              dir: data/outputs/${now:%Y-%m-%d}/${user.user}/${now:
The pytorch training model cannot be created successfully
Python · 7 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
                def forward(self, x):
                    x = self.linear1(x)
                    x = self.bn1(x)
                    x = self.activation(x)
                    x = self.linear2(x)
                    return x
            
how to properly initialize a child class of XGBRegressor?
Python · 7 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
            class XGBoostQuantileRegressor(XGBRegressor):
                def __init__(self, quant_alpha, max_depth=3, **kwargs):
                    self.quant_alpha = quant_alpha
                    super().__init__(max_depth=max_depth, **kwargs)
            
    # other methods unchanged and omitted
Method to choose variable for model
Python · 10 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
            1. Create model with features n.
            2. Measure model's objective or accuracy for example.
            3. Save accuracy, and features used.
            4. If number of features is only 30 goto step 8.
            5. Get feature importance.
            6. Drop the feature with lowest value
            7. Goto
Optimization of Wind Turbine plant in Scipy
Python · 175 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
            """
            References:
                https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
                https://github.com/DTUWindEnergy/PyWake
            """
            
            
            import time
            
            from py_wake.examples.data.hornsrev1 import V80 
            from py_wake.examples.dat
Suppressing optuna's cv_agg binary_logloss output
Python · 4 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
import optuna.integration.lightgbm as lgb  # assumed import for LightGBMTunerCV

tuner = lgb.LightGBMTunerCV({"objective": "binary", "verbose": -1},
       train_set=test_dataset, num_boost_round=10,
       nfold=5, stratified=True, shuffle=True, verbose_eval=None)
            

            Community Discussions

            QUESTION

            When using the optuna plugin for hydra, can I import the search space from another config file?
            Asked 2022-Mar-07 at 19:54

            I want to hyper-parameter optimize multiple time series forecasting models on the same data. I'm using the Optuna Sweeper plugin for Hydra. The different models have different hyper-parameters and therefore different search spaces. At the moment my config file looks like this:

            ...

            ANSWER

            Answered 2022-Mar-07 at 19:54

            Here are two options:

1. Use an @package directive
2. Use variable interpolation

            In detail:

            Using an @package directive

            An @package directive can be used to place Ets.yaml in the hydra.sweeper.search_space package:
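As a hedged sketch (the parameter block is illustrative and uses the plugin's legacy search_space format; the @package line is the point), Ets.yaml would begin:

# search_spaces/Ets.yaml
# @package hydra.sweeper.search_space
smoothing_level:
  type: float
  low: 0.01
  high: 0.99

With that directive in place, the defaults list can reference the file as a plain entry such as - search_spaces: Ets, instead of renaming it via search_spaces@hydra.sweeper.search_space: Ets.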

            Source https://stackoverflow.com/questions/71381726

            QUESTION

            The pytorch training model cannot be created successfully
            Asked 2022-Mar-06 at 12:00

            I would like to do a neural network for regression analysis using optuna based on this site. I would like to create a model with two 1D data as input and one 1D data as output in batch learning.

            x is the training data and y is the teacher data.

            ...

            ANSWER

            Answered 2022-Mar-06 at 12:00

With PyTorch, when you call y_pred = model(x), that calls the forward function defined in the Model class.

So y_pred receives the result of the forward function. In your case it returns nothing, which is why you get a None value. You can change the forward function as below:
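A sketch of the corrected forward method (the layer names come from the snippet earlier on this page; the only substantive change is the return statement):

def forward(self, x):
    x = self.linear1(x)
    x = self.bn1(x)
    x = self.activation(x)
    x = self.linear2(x)
    return x  # returning the output is what was missing; without it, model(x) yields None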

            Source https://stackoverflow.com/questions/71369132

            QUESTION

            how to properly initialize a child class of XGBRegressor?
            Asked 2021-Dec-26 at 11:58

            I want to build a quantile regressor based on XGBRegressor, the scikit-learn wrapper class for XGBoost. I have the following two versions: the second version is simply trimmed from the first one, but it no longer works.

I am wondering why I need to put every parameter of XGBRegressor in its child class's initialization. What if I just want to take all the default parameter values except for max_depth?

            (My XGBoost is of version 1.4.2.)

            No.1 the full version that works as expected:

            ...

            ANSWER

            Answered 2021-Dec-26 at 11:58

I am not an expert with scikit-learn, but it seems that one requirement of the various objects used by this framework is that they can be cloned by calling the sklearn.base.clone method. This appears to be something the existing XGBRegressor class does, so it is something your subclass of XGBRegressor must also do.

What may help is to accept any other unexpected keyword arguments as a **kwargs parameter. In your constructor, kwargs will contain a dict of all the keyword parameters that weren't assigned to other constructor parameters. You can pass this dict of parameters on to the call to the superclass constructor by referring to it as **kwargs again, which causes Python to expand it out:
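A minimal sketch of that pattern, mirroring the snippet earlier on this page (the added import and comment are for self-containment):

from xgboost import XGBRegressor

class XGBoostQuantileRegressor(XGBRegressor):
    def __init__(self, quant_alpha, max_depth=3, **kwargs):
        self.quant_alpha = quant_alpha
        # forward all remaining keyword arguments to the parent so that
        # sklearn.base.clone can reconstruct the estimator from get_params()
        super().__init__(max_depth=max_depth, **kwargs)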

            Source https://stackoverflow.com/questions/70473831

            QUESTION

Suppressing optuna's cv_agg binary_logloss output
            Asked 2021-Nov-29 at 23:38

If I tune a model with the LightGBMTunerCV, I always get this massive output of cv_agg's binary_logloss. If I do this with a bigger dataset, this (unnecessary) I/O slows down the performance of the optimization process.

            Here is the code:

            ...

            ANSWER

            Answered 2021-Nov-29 at 23:38

You can pass the verbose_eval parameter with value None in LightGBMTunerCV().

            Example:
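A sketch of the call, matching the snippet earlier on this page (train_set is whatever lgb.Dataset you are tuning on; the lgb alias is assumed to be optuna.integration.lightgbm):

import optuna.integration.lightgbm as lgb

tuner = lgb.LightGBMTunerCV(
    {"objective": "binary", "verbose": -1},
    train_set=train_set,
    num_boost_round=10,
    nfold=5,
    stratified=True,
    shuffle=True,
    verbose_eval=None,  # suppresses the per-iteration cv_agg output
)
tuner.run()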

            Source https://stackoverflow.com/questions/70093026

            QUESTION

            disable logging for specific lines of code
            Asked 2021-Nov-20 at 11:45

I am tuning the word2vec model's hyper-parameters. Word2Vec prints so many logs to the console that I cannot read Optuna's or my own custom logs. Is there any trick to suppress the logs generated by Word2Vec?

            ...

            ANSWER

            Answered 2021-Nov-19 at 22:09

Gensim's classes generally only log if you specifically turn logging on in your code, by setting either a global or a module/class-specific logging level.

So, are you sure you didn't turn on more logging than you want?

Search your code for anything that sets an INFO or DEBUG logging level, and either delete that line or adjust/narrow it so that it doesn't enable logging, or sets a more restrictive level, on the word2vec module or the Word2Vec class.
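If a broad logging setup is the culprit, a hedged sketch of narrowing it (the module paths are gensim's; the ERROR level is one reasonable choice, not a requirement):

import logging

# silence gensim's word2vec chatter without touching your own loggers
logging.getLogger("gensim.models.word2vec").setLevel(logging.ERROR)
# or, more broadly, the whole gensim package:
logging.getLogger("gensim").setLevel(logging.ERROR)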

            Source https://stackoverflow.com/questions/70039495

            QUESTION

            Understanding Intermediate Values and Pruning in Optuna
            Asked 2021-Nov-17 at 03:18

I am just curious for more information on what an intermediate step actually is, and how to use pruning if you're using a different ML library that isn't in the tutorial section, e.g. XGBoost, PyTorch, etc.

            For example:

            ...

            ANSWER

            Answered 2021-Nov-17 at 03:18

Basic model creation can be done by passing the complete training dataset once. But some models can still be improved (gain accuracy) by re-training on the same training dataset.

To make sure we are not wasting resources, we check the accuracy after every step on the validation dataset via the intermediate score: if accuracy improves, we continue; if not, we prune the whole trial, skipping the remaining steps. Then we go for the next trial, asking for another value of alpha, the hyperparameter we are trying to tune for the greatest accuracy on the validation dataset.

For other libraries, it is just a matter of asking ourselves what we want from our model; accuracy is certainly a good criterion for measuring a model's competency, but there can be others.

Example of Optuna pruning: I want the model to continue re-training, but only under my specific conditions. If the intermediate value cannot beat my best_accuracy, and the step count is already more than half of my maximum iterations, then prune this trial.
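A hedged sketch of that rule (train_one_step is a hypothetical stand-in for whatever per-step training and validation you run):

import optuna

MAX_ITER = 100

def objective(trial):
    best_accuracy = 0.0
    for step in range(MAX_ITER):
        accuracy = train_one_step()  # hypothetical: one re-training step + validation score
        trial.report(accuracy, step)
        # prune when the intermediate value cannot beat the best so far
        # and we are already past half of the maximum iterations
        if accuracy <= best_accuracy and step > MAX_ITER // 2:
            raise optuna.TrialPruned()
        best_accuracy = max(best_accuracy, accuracy)
    return best_accuracy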

            Source https://stackoverflow.com/questions/69990009

            QUESTION

            Why optuna stuck at trial 2(trial_id=3) after it has calculated all hyperparameters?
            Asked 2021-Nov-16 at 20:09

I am using optuna to tune an xgboost model's hyperparameters. I find it stuck at trial 2 (trial_id=3) for a long time (244 minutes). When I look at the SQLite database which records the trial data, I find that all the trial 2 (trial_id=3) hyperparameters have been calculated except the mean squared error value of trial 2, and the optuna trial 2 (trial_id=3) seems stuck at that step. I want to know why this happened, and how to fix the issue.

            Here is the code

            ...

            ANSWER

            Answered 2021-Nov-16 at 20:09

            Although I am not 100% sure, I think I know what happened.

This issue happens because some parameters are not suitable for a certain booster type; such a trial returns nan as its result and gets stuck at the step of calculating the MSE score.

To solve the problem, you just need to delete "dart" from the "booster" choices.

In other words, using "booster": trial.suggest_categorical("booster", ["gbtree", "gblinear"]) rather than "booster": trial.suggest_categorical("booster", ["gbtree", "gblinear", "dart"]) solves the problem.

I got the idea when I tuned my LightGBMRegressor model. I found many trials failed because they returned nan, and they all used the same "boosting_type"="rf". So I deleted the rf, and all 100 trials completed without any error. Then I looked at the XGBRegressor issue I posted above: all the trials that were stuck had the same "booster":"dart" as well. So I deleted the dart, and the XGBRegressor ran normally.

            Source https://stackoverflow.com/questions/69984504

            QUESTION

            How to search a set of normally distributed parameters using optuna?
            Asked 2021-Nov-12 at 01:19

            I'm trying to optimize a custom model (no fancy ML whatsoever) that has 13 parameters, 12 of which I know to be normally distributed. I've gotten decent results using the hyperopt library:

            ...

            ANSWER

            Answered 2021-Nov-11 at 22:46

You can cheat Optuna by sampling from a uniform distribution and transforming it into a normal distribution. One way to do that is the inverse error function implemented in SciPy.

The function takes a uniform distribution on the range (-1, 1) and converts it to a standard normal distribution.
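A hedged sketch of that transform (MU, SIGMA, and the objective are placeholders; the range is kept strictly inside (-1, 1) so erfinv never returns infinity):

import math

import optuna
from scipy.special import erfinv

MU, SIGMA = 0.0, 1.0  # assumed mean and standard deviation of the parameter

def objective(trial):
    u = trial.suggest_float("u", -0.9999, 0.9999)  # uniform, strictly inside (-1, 1)
    x = MU + SIGMA * math.sqrt(2.0) * erfinv(u)    # sqrt(2)*erfinv(u) is standard normal
    return (x - 0.5) ** 2  # placeholder objective

study = optuna.create_study()
study.optimize(objective, n_trials=100)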

            Source https://stackoverflow.com/questions/69935219

            QUESTION

            How to set hidden_layer_sizes in sklearn MLPRegressor using optuna trial
            Asked 2021-Nov-11 at 17:05

I would like to use Optuna with the sklearn MLPRegressor model.

For almost all hyperparameters, it is quite straightforward how to set Optuna up for them. For example, to set the learning rate: learning_rate_init = trial.suggest_float('learning_rate_init', 0.0001, 0.1001, step=0.005)

My problem is how to set it for hidden_layer_sizes, since it is a tuple. So let's say I would like to have two hidden layers, where the first will have 100 neurons and the second will have 50 neurons. Without Optuna I would do:

MLPRegressor(hidden_layer_sizes=(100, 50))

But what if I want Optuna to try a different number of neurons in each layer, e.g. from 100 to 500? How can I set that, given that MLPRegressor expects a tuple?

            ...

            ANSWER

            Answered 2021-Nov-11 at 17:05

            You could set up your objective function as follows:
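A hedged sketch of such an objective (the data and scoring are illustrative; the per-layer parameter names follow a common Optuna idiom and are not fixed by the original answer):

import optuna
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=200, n_features=8, random_state=0)

def objective(trial):
    # choose a depth first, then one width per layer, and pack into a tuple
    n_layers = trial.suggest_int("n_layers", 1, 3)
    hidden_layer_sizes = tuple(
        trial.suggest_int(f"n_units_l{i}", 100, 500) for i in range(n_layers)
    )
    model = MLPRegressor(hidden_layer_sizes=hidden_layer_sizes, max_iter=300)
    return cross_val_score(model, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)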

            Source https://stackoverflow.com/questions/69931757

            QUESTION

Perfect scores in multiclass classification?
            Asked 2021-Nov-05 at 17:38

I am working on a multiclass classification problem with 3 classes (1, 2, 3) being perfectly distributed (70 instances of each class, resulting in a (210, 8) dataframe).

Now my data has all 3 classes distributed in order, i.e. the first 70 instances are class 1, the next 70 instances are class 2, and the last 70 instances are class 3. I know that this kind of distribution will lead to a good score on the train set but a poor score on the test set, as the test set has classes that the model has not seen. So I used the stratify parameter in train_test_split. Below is my code:

            ...

            ANSWER

            Answered 2021-Nov-05 at 17:38

Finally got the answer. The dataset I was using was the issue. The dataset was tailor-made for the kNN algorithm, and that was why I was getting perfect scores, as I was using the same algorithm.

I came to this conclusion after I performed a clustering exercise on this dataset and the K-Means algorithm perfectly predicted the clusters.

            Source https://stackoverflow.com/questions/68699087

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install optuna

Optuna is available at the Python Package Index and on Anaconda Cloud. Optuna supports Python 3.6 or newer. We also provide Optuna Docker images on DockerHub.

            Support

• GitHub Issues for bug reports, feature requests and questions.
• Gitter for interactive chat with developers.
• Stack Overflow for questions.
            Install
          • PyPI

            pip install optuna

          • CLONE
          • HTTPS

            https://github.com/optuna/optuna.git

          • CLI

            gh repo clone optuna/optuna

• SSH URL

            git@github.com:optuna/optuna.git
