autotuner | automatic pitch correction for vocal performances | Machine Learning library

by sannawag | Python | Version: Current | License: No License

kandi X-RAY | autotuner Summary


autotuner is a Python library typically used in Artificial Intelligence, Machine Learning, Deep Learning, and Tensorflow applications. autotuner has no bugs, no reported vulnerabilities, and a build file available, but it has low support. You can download it from GitHub.

This program computes automatic pitch correction for vocal performances. It outputs note-wise constant pitch-shift values of up to 100 cents (one semitone) and can also apply the shifts to the audio. The program is trained on examples of in-tune singing and applies corrections along a continuous frequency scale. A pre-trained model is available.

Support

              autotuner has a low active ecosystem.
It has 69 stars and 17 forks. There are 5 watchers for this library.
              It had no major release in the last 6 months.
There are 10 open issues and 1 has been closed. On average, issues are closed in 4 days. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of autotuner is current.

Quality

              autotuner has 0 bugs and 0 code smells.

Security

              autotuner has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              autotuner code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              autotuner does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

              autotuner releases are not available. You will need to build from source code and install.
              Build file is available. You can build the component from source.
              autotuner saves you 1059 person hours of effort in developing the same functionality from scratch.
              It has 2400 lines of code, 65 functions and 13 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.


            autotuner Key Features

            No Key Features are available at this moment for autotuner.

            autotuner Examples and Code Snippets

            No Code Snippets are available at this moment for autotuner.

            Community Discussions

            QUESTION

            mlr3, benchmarking and nested resampling: how to extract a tuned model from a benchmark object to calculate feature importance
            Asked 2021-Nov-04 at 13:46

I am using the benchmark() function in mlr3 to compare several ML algorithms. One of them is XGB with hyperparameter tuning. Thus, I have an outer resampling to evaluate the overall performance (hold-out sample) and an inner resampling for the hyperparameter tuning (5-fold cross-validation). Besides having an estimate of the accuracy for all ML algorithms, I would like to see the feature importance of the tuned XGB. For that, I would have to access the tuned model (within the benchmark object). I do not know how to do that. The object returned by benchmark() is a deeply nested list and I do not understand its structure.

            This answer on stackoverflow did not help me, because it uses a different setup (a learner in a pipeline rather than a benchmark object).

            This answer on github did not help me, because it shows how to extract all the information about the benchmarking at once but not how to extract one (tuned) model of one of the learners in the benchmark.

            Below is the code I am using to carry out the nested resampling. Following the benchmarking, I would like to estimate the feature importance as described here, which requires accessing the tuned XGB model.

            ...

            ANSWER

            Answered 2021-Nov-03 at 16:54
library(mlr3tuning)
library(mlr3learners)
library(mlr3misc)

# xgboost learner with nrounds marked for tuning
learner = lrn("classif.xgboost", nrounds = to_tune(100, 500), eval_metric = "logloss")

# AutoTuner: inner 3-fold CV, random search with 5 evaluations
at = AutoTuner$new(
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 5),
  tuner = tnr("random_search"),
  store_models = TRUE
)

# outer 5-fold CV; store_models = TRUE keeps the fitted AutoTuners in the result
design = benchmark_grid(task = tsk("pima"), learner = at, resampling = rsmp("cv", folds = 5))
bmr = benchmark(design, store_models = TRUE)
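
From the stored models, one way to reach the tuned xgboost and its feature importance could look like the following sketch. It assumes the single-row design above with store_models = TRUE; the accessors used ($resample_result(), $learners, $tuning_result, $learner, $importance()) are standard mlr3/mlr3tuning fields, but the object names are illustrative.

rr = bmr$resample_result(1)        # resample result of the (only) design row
at_fold1 = rr$learners[[1]]        # trained AutoTuner of the first outer fold

at_fold1$tuning_result             # nrounds value selected by the inner CV
at_fold1$learner$importance()      # importance of the tuned xgboost refit on the outer training set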
            

            Source https://stackoverflow.com/questions/69827716

            QUESTION

            mlr3: using benchmark() with tuned models (i.e. AutoTuner objects)
            Asked 2021-Aug-17 at 15:26

I would like to compare the performance of several machine learning algorithms (e.g., decision trees from rpart, xgb, ...) including their hyperparameter tuning using mlr3. In other words, I would like to compare already tuned instances of different algorithms rather than the algorithms with their default hyperparameter values.

            mlr3 provides AutoTuner-Objects to carry out nested resampling and hyperparameter tuning. There is also a benchmark() function to conduct comparisons of several learners. The benchmark() function in turn uses benchmark_grid() to set up the benchmarking. According to this manual one can pass "an AutoTuner object to mlr3::resample() or mlr3::benchmark()". I do not understand how I can pass an AutoTuner object to benchmark_grid().

            The following code (benchmarking a tuned decision tree with the default version; based on the code in this book) is not working. It returns an error message: "Error: Element with key 'rpart_tuned' not found in DictionaryLearner!"

            ...

            ANSWER

            Answered 2021-Aug-17 at 15:26

The problem in your code is that you're trying to create a new learner (looking up a key like 'rpart_tuned' in the learner dictionary) instead of reusing the AutoTuner object you already constructed in the benchmark design.
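
The original code is elided above, but a minimal sketch of the intended pattern follows: pass the AutoTuner object itself to benchmark_grid() rather than a dictionary key. The task, tuning range, and budget below are illustrative assumptions, not taken from the question.

library(mlr3)
library(mlr3tuning)

at = AutoTuner$new(
  learner = lrn("classif.rpart", cp = to_tune(0.001, 0.1)),
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 10),
  tuner = tnr("random_search")
)

# the AutoTuner object goes directly into the design, next to the untuned learner
design = benchmark_grid(
  tasks = tsk("pima"),
  learners = list(at, lrn("classif.rpart")),
  resamplings = rsmp("cv", folds = 5)
)
bmr = benchmark(design)
bmr$aggregate(msr("classif.ce"))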

            Source https://stackoverflow.com/questions/68818828

            QUESTION

            How to get mlr3 importance scores from learner?
            Asked 2021-Aug-06 at 18:25

            How do I get importance scores? I tried this:

            ...

            ANSWER

            Answered 2021-Aug-06 at 18:25

            You need to get the importance from the learner, not the tuner:
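
The answer's code is not reproduced above; a minimal sketch of the idea, assuming a trained AutoTuner named at that wraps a learner supporting importance (e.g. classif.ranger constructed with importance = "impurity") on a task named task:

at$train(task)
at$learner$importance()    # importance of the final learner fitted with the tuned hyperparameters

# the same holds per fold inside a resample/benchmark result (requires store_models = TRUE):
# rr$learners[[1]]$learner$importance()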

            Source https://stackoverflow.com/questions/68685292

            QUESTION

            How to extract mlr3 tuned graph step by step?
            Asked 2021-Jun-09 at 07:49

My code is as follows

            ...

            ANSWER

            Answered 2021-Jun-08 at 09:22

To be able to fiddle with the models after resampling, it's best to call resample() with store_models = TRUE

            Using your example
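
The asker's pipeline is elided above, so the sketch below only illustrates the pattern with a placeholder graph_learner; with store_models = TRUE each outer iteration keeps its fitted learner, so the individual steps can be inspected afterwards.

rr = resample(task, graph_learner, rsmp("cv", folds = 5), store_models = TRUE)

rr$learners[[1]]        # fitted learner (here: the tuned graph) of the first fold
rr$learners[[1]]$model  # its stored model, step by step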

            Source https://stackoverflow.com/questions/67869401

            QUESTION

            How to transform '2 levels ParamUty' class in nested cross-validation of mlr3proba?
            Asked 2021-Apr-19 at 04:08

For survival analysis, I am using the mlr3proba package of R.
My dataset consists of 39 features (both continuous and factor, which I converted all to integer and numeric) and the target (time & status).
I want to tune the hyperparameter num_nodes in the param_set.
This is a ParamUty-class parameter with default value 32,32,
so I decided to transform it.
I wrote the code as follows for hyperparameter optimization of the surv.deephit learner using nested cross-validation (with 10 inner and 3 outer folds).

            ...

            ANSWER

            Answered 2021-Apr-17 at 08:46

Hi, thanks for using mlr3proba. I have actually just finished writing a tutorial that answers exactly this question! It covers training, tuning, and evaluating neural networks in mlr3proba. For your specific question, the relevant part of the tutorial is this:
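
The tutorial itself is not reproduced here. A common way to handle a vector-valued ParamUty parameter such as num_nodes is to tune integer surrogates and expand them in a trafo; the sketch below assumes the paradox ps()/p_int() helpers, and the surrogate names nodes and k as well as their ranges are illustrative.

library(paradox)

search_space = ps(
  nodes = p_int(lower = 4, upper = 64),   # nodes per layer
  k     = p_int(lower = 1, upper = 4),    # number of layers
  .extra_trafo = function(x, param_set) {
    # expand the two integers into the vector expected by num_nodes
    x$num_nodes = rep(x$nodes, x$k)
    x$nodes = NULL
    x$k = NULL
    x
  }
)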

            Source https://stackoverflow.com/questions/67132598

            QUESTION

            Benchmarking multiple AutoTuning instances
            Asked 2021-Mar-24 at 09:04

            I have been trying to use mlr3 to do some hyperparameter tuning for xgboost. I want to compare three different models:

            1. xgboost tuned over just the alpha hyperparameter
            2. xgboost tuned over alpha and lambda hyperparameters
            3. xgboost tuned over alpha, lambda, and maxdepth hyperparameters.

            After reading the mlr3 book, I thought that using AutoTuner for the nested resampling and benchmarking would be the best way to go about doing this. Here is what I have tried:

            ...

            ANSWER

            Answered 2021-Mar-24 at 09:04

            To see whether tuning has an effect, you can just add an untuned learner to the benchmark. Otherwise, the conclusion could be that tuning alpha is sufficient for your example.

            I adapted the code so that it runs with an example task.
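
The adapted code is not shown above, but a sketch of the resulting design could look like this, assuming the three AutoTuner objects from the question are named at_alpha, at_alpha_lambda, and at_full (illustrative names) and a pima-style classification task is used:

design = benchmark_grid(
  tasks = tsk("pima"),
  learners = list(
    at_alpha,               # xgboost tuned over alpha only
    at_alpha_lambda,        # xgboost tuned over alpha and lambda
    at_full,                # xgboost tuned over alpha, lambda, and max_depth
    lrn("classif.xgboost")  # untuned baseline to see whether tuning has an effect
  ),
  resamplings = rsmp("cv", folds = 5)
)
bmr = benchmark(design, store_models = TRUE)
bmr$aggregate(msr("classif.ce"))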

            Source https://stackoverflow.com/questions/66774423

            QUESTION

            how to repeat hyperparameter tuning (alpha and/or lambda) of glmnet in mlr3
            Asked 2021-Mar-22 at 09:34

            I would like to repeat the hyperparameter tuning (alpha and/or lambda) of glmnet in mlr3 to avoid variability in smaller data sets

            In caret, I could do this with "repeatedcv"

Since I really like the mlr3 family of packages, I would like to use them for my analysis. However, I am not sure about the correct way to do this step in mlr3.

            Example data

            ...

            ANSWER

            Answered 2021-Mar-21 at 22:36

Repeated hyperparameter tuning (alpha and lambda) of glmnet can be done using the SECOND mlr3 approach as stated above. The coefficients can be extracted with stats::coef and the values stored in the AutoTuner.
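
A sketch of both ideas, repeated inner resampling plus coefficient extraction, follows. The task, the search-space parameters (alpha and the prediction-time penalty s of classif.glmnet), and their ranges are assumptions, not taken from the original question.

library(mlr3tuning)
library(mlr3learners)
library(paradox)

task = tsk("sonar")

at = AutoTuner$new(
  learner = lrn("classif.glmnet"),
  resampling = rsmp("repeated_cv", folds = 5, repeats = 10),  # repeats reduce tuning variability
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 20),
  tuner = tnr("random_search"),
  search_space = ps(
    alpha = p_dbl(0, 1),         # elastic-net mixing parameter
    s     = p_dbl(0.001, 0.5)    # penalty used by classif.glmnet at prediction time
  )
)
at$train(task)

at$tuning_result                       # values stored in the AutoTuner
stats::coef(at$learner$model,          # glmnet fit held by the tuned learner
            s = at$tuning_result$s)    # coefficients at the tuned penalty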

            Source https://stackoverflow.com/questions/66696405

            QUESTION

            How to interpret the aggregated performance result of nested resampling in mlr3?
            Asked 2021-Feb-26 at 12:50

Recently I have been learning about nested resampling in the mlr3 package. According to the mlr3 book, the goal of nested resampling is to get unbiased performance estimates for learners. I ran a test as follows:

            ...

            ANSWER

            Answered 2021-Feb-26 at 12:50

The result shows that the 3 hyperparameter configurations chosen by the 3 inner resamplings are not guaranteed to be the same.

            It sounds like you want to fit a final model with the hyperparameters selected in the inner resamplings. Nested resampling is not used to select hyperparameter values for a final model. Only check the inner tuning results for stable hyperparameters. This means that the selected hyperparameters should not vary too much.

            1. Yes, you are comparing the aggregated performance of all outer resampling test sets (rr$aggregate()) with the performances estimated on the inner resampling test sets (lapply(rr$learners, function(x) x$tuning_result)).

            2. The aggregated performance of all outer resampling iterations is the unbiased performance of a ranger model with optimal hyperparameters found by grid search. You can run at$train(task) to get a final model and report the performance estimated with nested resampling as the unbiased performance of this model.
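
A compact sketch of this workflow, reusing the accessors mentioned above (the object names at, task, and the measure are illustrative):

# unbiased performance estimate of the tuned ranger via nested resampling
rr = resample(task, at, rsmp("cv", folds = 3), store_models = TRUE)
rr$aggregate(msr("classif.ce"))

# check the inner tuning results for stable hyperparameters
lapply(rr$learners, function(x) x$tuning_result)

# final model: tune once on the full task; report the nested-resampling estimate above as its performance
at$train(task)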

            Source https://stackoverflow.com/questions/66293002

            QUESTION

            Importance based variable reduction
            Asked 2021-Feb-19 at 23:21

            I am facing a difficulty with filtering out the least important variables in my model. I received a set of data with more than 4,000 variables, and I have been asked to reduce the number of variables getting into the model.

I have already tried two approaches, but I failed twice.

The first thing I tried was to manually check variable importance after modelling and, based on that, remove non-significant variables.

            ...

            ANSWER

            Answered 2021-Feb-19 at 23:21

            The reason why you can't access $importance of the at variable is that it is an AutoTuner, which does not directly offer variable importance and only "wraps" around the actual Learner being tuned.

            The trained GraphLearner is saved inside your AutoTuner under $learner:
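
A sketch of that access, assuming the GraphLearner ends in a ranger pipeop named "classif.ranger" that was constructed with importance = "impurity" (both details about the original pipeline are assumptions):

at$train(task)

gl = at$learner   # the trained GraphLearner wrapped by the AutoTuner
gl$graph_model$pipeops$classif.ranger$learner_model$importance()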

            Source https://stackoverflow.com/questions/66267945

            QUESTION

            How to access parameters fitted by benchmark?
            Asked 2021-Jan-22 at 16:04

            This is a really basic question, but I haven't found the answer on other sites, so I am kind of forced to ask about it here.

I fitted my "classif.ranger" learner using the benchmark(design, store_models) function from the mlr3 library and I need to access the fitted parameters. I found nothing about it in the benchmark documentation, so I tried to do it the hard way: I set store_models to TRUE, then I tried to access the model using fitted(), but it returned NULL.

I know the question is basic and that I am probably doing something stupid (for example, misreading the documentation), but I just have no idea how to actually access the parameters... please help.

If it is needed in such a (probably) trivial situation, here is the code:

            ...

            ANSWER

            Answered 2021-Jan-21 at 21:28

            You can use getBMRModels() to get the models, which will tell you what hyperparameters were used to fit them. See the benchmark section of the documentation.
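
Note that getBMRModels() belongs to the older mlr package; with mlr3's benchmark() (as used in the question), the stored models can instead be pulled straight from the BenchmarkResult, provided store_models = TRUE was set. A sketch with illustrative object names:

bmr = benchmark(design, store_models = TRUE)

rr = bmr$resample_result(1)       # resample result of the first design row
fold1 = rr$learners[[1]]          # trained classif.ranger learner of fold 1
fold1$model                       # the fitted ranger object
fold1$param_set$values            # hyperparameters it was fitted with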

            Source https://stackoverflow.com/questions/65835419

Community Discussions and Code Snippets contain sources from the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install autotuner

            You can download it from GitHub.
            You can use autotuner like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/sannawag/autotuner.git

          • CLI

            gh repo clone sannawag/autotuner

          • sshUrl

            git@github.com:sannawag/autotuner.git
