hyperas | simple wrapper for convenient hyperparameter optimization | Machine Learning library

by maxpumperla | Python Version: 0.4.1 | License: MIT

kandi X-RAY | hyperas Summary

hyperas is a Python library typically used in Artificial Intelligence, Machine Learning, Deep Learning, TensorFlow, and Keras applications. hyperas has no reported bugs or vulnerabilities, a permissive MIT license, an available build file, and high community support. You can install it with 'pip install hyperas' or download it from GitHub or PyPI.

A very simple convenience wrapper around hyperopt for fast prototyping with Keras models. Hyperas lets you use the power of hyperopt without having to learn its syntax. Instead, define your Keras model as you are used to, but use a simple template notation to define the hyperparameter ranges to tune.
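In practice, the template notation looks like ordinary Keras code with the search ranges written in double curly braces. The sketch below is illustrative only (the layer sizes, dataset shape, and training settings are assumptions, not taken from this page), but it follows the usage pattern the project documents:

```python
# Illustrative hyperas sketch; the double-brace placeholders are hyperas
# template notation, rewritten by hyperas before execution (so this is a
# template, not plain runnable Python).
from hyperopt import STATUS_OK
from keras.models import Sequential
from keras.layers import Dense, Dropout

def create_model(x_train, y_train, x_test, y_test):
    model = Sequential()
    # the number of units and the dropout rate become search dimensions
    model.add(Dense({{choice([64, 128, 256])}}, activation='relu',
                    input_shape=(784,)))
    model.add(Dropout({{uniform(0, 1)}}))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=1, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    # hyperopt minimizes 'loss', so return the negative accuracy
    return {'loss': -acc, 'status': STATUS_OK, 'model': model}
```

hyperas parses this function, replaces each placeholder with a hyperopt search-space node, and runs the optimization over the resulting space.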

Support

              hyperas has a highly active ecosystem.
              It has 2154 star(s) with 320 fork(s). There are 66 watchers for this library.
              It had no major release in the last 12 months.
              There are 96 open issues and 160 have been closed. On average issues are closed in 125 days. There are no pull requests.
              It has a negative sentiment in the developer community.
The latest version of hyperas is 0.4.1.

Quality

              hyperas has 0 bugs and 0 code smells.

Security

              hyperas has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              hyperas code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              hyperas is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

hyperas GitHub releases are not available, so you will need to build from source code and install.
A deployable package is available on PyPI.
A build file is available, so you can build the component from source.
Installation instructions, examples and code snippets are available.
              hyperas saves you 624 person hours of effort in developing the same functionality from scratch.
              It has 1451 lines of code, 85 functions and 23 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi has reviewed hyperas and discovered the below as its top functions. This is intended to give you an instant insight into the functionality hyperas implements, and to help you decide if it suits your requirements.
• Minimize over the hyperparameters
• Base method for hyperparameter optimization
• Return a list of the names of the parts
• Return the string representation of the given model
• Load the cifar10 dataset
• Visualize the results
• Generate random pairs
• Create training and test pairs
• Create a model
• Create a base network
• Predict using the model
• Return a voting model for the given models
• Return a list of models with the best fit
• Compute predictions with the model
• Visit an ImportFrom node
• Insert an import node

            hyperas Key Features

            No Key Features are available at this moment for hyperas.

            hyperas Examples and Code Snippets

            default
Python · 17 lines · License: Permissive (MIT)
             [2.039406419881185, 0.13577102455894152], u'epoch': [0, 1], u'time': 20.692888021469116, u'accuracy': 0.984}
            (worker available gpu0) launching new model to worker...
            1475215229.75 Model returned results: gpu1 392261103527195121 1174066377352204012 {  
Speech and Music Detection, Installation
Python · 3 lines · License: Permissive (MIT)
            brew install lame
            brew reinstall sox --with-lame  # for mp3 compatibility
            
            pip install -r requirements.txt
              
            Setting up an optimization solver on top of a neural network model
Python · 47 lines · License: Strong Copyleft (CC BY-SA 4.0)
            def score_trained_model(params, args):
                # Get the model from the fixed args.
                model = args[0]
            
                # Run the model on the params, return the output.
                return model_predict(model, params)
            
            # Nelder-Mead is my
            How to use hyperopt for hyperparameter optimization of Keras deep learning network?
Python · 98 lines · License: Strong Copyleft (CC BY-SA 4.0)
            from __future__ import print_function
            
            from hyperopt import Trials, STATUS_OK, tpe
            from keras.datasets import mnist
            from keras.layers.core import Dense, Dropout, Activation
            from keras.models import Sequential
            from keras.utils import np_uti
            What does max eval parameter in hyperas optim minimize function returns?
Python · 3 lines · License: Strong Copyleft (CC BY-SA 4.0)
            print('Best performing model chosen hyper-parameters:')
            print(best_run)
            
            Param Tuning with Keras and Hyperas
Python · 6 lines · License: Strong Copyleft (CC BY-SA 4.0)
             model.add(LSTM({{choice([32,64,128,256,512])}},W_regularizer=l2({{uniform(0, 1)}})))
            
            model.add(LSTM(units={{choice([32,64,128,256,512])}},...)
            
            model.add(Dropout(rate=..))
            
            TypeError: hp_choice() takes 2 positional arguments but 7 were given
Python · 4 lines · License: Strong Copyleft (CC BY-SA 4.0)
            choice(32, 64, 128, 256, 512, 1024)
            
            choice([32, 64, 128, 256, 512, 1024])
            
            How to fix package resolution warnings in conda?
Python · 2 lines · License: Strong Copyleft (CC BY-SA 4.0)
            conda update --strict-channel-priority --all
            
            Hyperas grid search with a network with multiple inputs
Python · 7 lines · License: Strong Copyleft (CC BY-SA 4.0)
            datagen, X_train, Y_train, X_test, Y_test = data()
            
            datagen = ImageDataGenerator()
            train_list = []
            for input in X_train:
                train_list.append(datagen.fit(input))
            
            Can't use intermediate function in hyperas
Python · 29 lines · License: Strong Copyleft (CC BY-SA 4.0)
best_run, best_model = optim.minimize(model=create_model,
                                      data=data,
                                      functions=[processing],  # << pass helpers used inside the model
                                      algo=tpe.suggest,
                                      max_evals=10,
                                      trials=Trials())

            Community Discussions

            QUESTION

            Building a dense residual network with keras
            Asked 2021-Jan-30 at 10:14

I am trying to build a classifier based on a dense network with Keras. My inputs are (26,1) vectors and I want to get a binary classification, 1 or 0, as output.

Using a dense network and some optimisation with hyperas I manage to reach 80% accuracy, which is not bad, but I am trying to improve the accuracy of the network using residual networks.

I found a lot of examples of residual networks for convolutional networks on various forums, but I did not find examples of residual networks for dense networks.

            I tried the following code to generate the residual net :

            ...

            ANSWER

            Answered 2021-Jan-30 at 10:14

Firstly, resnets are intended to be used in much deeper networks. Your network seems too shallow to get the maximum benefit from adding them.

More generally, resnets are a kind of evolved version of simple architectures like VGGNet, with the intent of being able to "go deeper". That does not mean that residual layers will always increase your accuracy if the network is too shallow.

Adding more layers should help. The key idea is to avoid vanishing gradients in the deeper layers of the network by adding skip connections.
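The skip-connection idea can be illustrated without Keras at all: a residual block computes f(x) + x, so the identity path survives even when the learned transformation contributes nothing. A minimal NumPy sketch (the layer widths match the question's 26-dimensional inputs; the weights are random placeholders):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Two dense layers with an identity shortcut: relu(f(x) + x)."""
    h = relu(x @ w1)       # first dense layer
    f = h @ w2             # second dense layer (pre-activation output)
    return relu(f + x)     # skip connection: add the input back

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 26))            # a batch of 4 samples, width 26
w1 = 0.1 * rng.normal(size=(26, 26))    # placeholder weights
w2 = 0.1 * rng.normal(size=(26, 26))
y = residual_block(x, w1, w2)
```

With all-zero weights the block reduces to relu(x): the shortcut alone carries the signal, which is exactly why gradients can still flow through very deep stacks of such blocks.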

            Source https://stackoverflow.com/questions/65966300

            QUESTION

            Usage of LSTM/GRU and Flatten throws dimensional incompatibility error
            Asked 2020-Sep-15 at 20:26

            I want to make use of a promising NN I found at towardsdatascience for my case study.

            The data shapes I have are:

            ...

            ANSWER

            Answered 2020-Aug-17 at 18:14

            I cannot reproduce your error, check if the following code works for you:

            Source https://stackoverflow.com/questions/63455257

            QUESTION

            Hyperparameter-tuning (Hyperas) and Cross-Validation with Pipeline-Preprocessing
            Asked 2020-May-19 at 21:25

            tl;dr I try to optimize and cross-validate my hyperparameters with Hyperas but can't make a preprocessing (scaling, over/undersampling) pipeline with KerasClassifier work

I use Hyperas (a wrapper for hyperopt) to tune the hyperparameters of my neural network (built with Keras/TensorFlow) and try to implement a k-fold cross-validation for the optimal parameters. However, I also do preprocessing on the data (StandardScaler & MinMaxScaler) and then over/undersampling using SMOTETomek.

I read that one should not do the feature scaling and resampling on the whole dataset, but only on the parts that are used for training, to avoid spillovers. Trying to implement this inside hyperopt only for the train folds of the cross-validation is somewhat difficult, because when using a pipeline like imblearn's, the pipeline only works with KerasClassifier, which only takes a model function. I can't give it that model function because the whole validation process in hyperopt takes place in one function.
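The fold-wise scaling described here can also be done by hand, without a pipeline object: fit the scaler's statistics on the training fold only, then apply them to both folds. A minimal NumPy sketch (the function and array names are illustrative):

```python
import numpy as np

def scale_fold(train, valid):
    """Fit mean/std on the training fold ONLY, then apply to both folds,
    so no statistic from the validation fold leaks into training."""
    mean = train.mean(axis=0)
    std = train.std(axis=0) + 1e-8   # guard against zero variance
    return (train - mean) / std, (valid - mean) / std

rng = np.random.default_rng(42)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 3))  # toy feature matrix
train, valid = X[:80], X[80:]                      # one CV split
train_s, valid_s = scale_fold(train, valid)
```

The same pattern applies to resampling: run SMOTETomek on the training fold only and leave the validation fold untouched.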

            Do you have any suggestions on how to make something like this work? Can I do all the preprocessing in def data() and optimize/cross validate the parameters on the whole dataset or does this hurt the correct parameter finding process? (I do have an additional test dataset for the final model)

            Is there a way to make it work manually?

            ...

            ANSWER

            Answered 2020-May-19 at 21:25

Solved it. This is the solution, if anybody is interested:

            Source https://stackoverflow.com/questions/61879535

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install hyperas

Assume you have generated your data and defined an existing Keras model. To do hyperparameter optimization on this model, just wrap the parameters you want to optimize in double curly brackets and choose a distribution over which to run the algorithm. For example, to optimize the dropout probability in both dropout layers, choose a uniform distribution over the interval [0, 1].

Note that before returning the model to optimize, you also have to define which evaluation metric of the model is important to you, for example accuracy. Note: the examples use 'loss': -accuracy, i.e. the negative of accuracy. That's because under the hood hyperopt will always minimize whatever metric you provide. If instead you actually want to minimize a metric, say MSE or another loss function, keep a positive sign (e.g. 'loss': mse).
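The sign convention is worth internalizing: hyperopt only ever minimizes the returned 'loss', so maximizing accuracy means returning its negative. The toy random search below mimics that behaviour in plain Python (the accuracy function is a made-up stand-in, not a real model):

```python
import random

def accuracy(dropout):
    # Made-up stand-in for "train a model with this dropout rate and
    # measure accuracy"; here accuracy peaks at dropout = 0.3.
    return 1.0 - (dropout - 0.3) ** 2

def minimize(objective, n_trials=200, seed=0):
    """Tiny random-search minimizer, mimicking what hyperopt does under
    the hood: it always minimizes whatever 'loss' the objective returns."""
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        dropout = rng.uniform(0.0, 1.0)
        loss = objective(dropout)
        if loss < best_loss:
            best_params, best_loss = dropout, loss
    return best_params, best_loss

# Returning the NEGATIVE of accuracy turns maximization into minimization.
best_dropout, best_loss = minimize(lambda d: -accuracy(d))
```

Because the search minimizes -accuracy, the best 'loss' it reports is the negative of the best accuracy found, and the recovered dropout rate lands near the (made-up) optimum of 0.3.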

            Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Install
          • PyPI

            pip install hyperas

          • CLONE
          • HTTPS

            https://github.com/maxpumperla/hyperas.git

          • CLI

            gh repo clone maxpumperla/hyperas

          • sshUrl

            git@github.com:maxpumperla/hyperas.git
