IMS-Toucan | Speech Toolkit of the Speech and Language Technologies Group | Machine Learning library

by DigitalPhonetics | Python | Version: v2.5 | License: Apache-2.0

kandi X-RAY | IMS-Toucan Summary

IMS-Toucan is a Python library typically used in Artificial Intelligence and Machine Learning applications. It has no reported bugs or vulnerabilities, an available build file, a permissive license, and low support. You can download it from GitHub.

IMS Toucan is a toolkit for teaching, training, and using state-of-the-art speech synthesis models, developed at the Institute for Natural Language Processing (IMS), University of Stuttgart, Germany. Everything is pure Python and PyTorch based, to keep it as simple and beginner-friendly as possible while remaining powerful. The basic PyTorch modules of FastSpeech 2 are taken from ESPnet, and the PyTorch modules of HiFiGAN are taken from the ParallelWaveGAN repository, both authored by the brilliant Tomoki Hayashi. For a version of the toolkit that includes TransformerTTS instead of FastSpeech 2 and MelGAN instead of HiFiGAN, check out the TransformerTTS and MelGAN branch. They are kept separate to keep the code clean, simple, and minimal.

Support

IMS-Toucan has a low active ecosystem.
It has 339 stars, 66 forks, and 14 watchers.
It had no major release in the last 12 months.
There are 26 open issues, and 97 issues have been closed. On average, issues are closed in 18 days. There are no open pull requests.
It has a neutral sentiment in the developer community.
The latest version of IMS-Toucan is v2.5.

Quality

              IMS-Toucan has no bugs reported.

Security

              IMS-Toucan has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              IMS-Toucan is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              IMS-Toucan releases are available to install and integrate.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.


            IMS-Toucan Key Features

            No Key Features are available at this moment for IMS-Toucan.

            IMS-Toucan Examples and Code Snippets

            Training a Model 🦜
Python | Lines of Code: 11 | License: Permissive (Apache-2.0)
python run_training_pipeline.py <shorthand of the pipeline you want to run>

--gpu_id <id of the GPU you wish to use; defaults to CPU if omitted>

--resume_checkpoint <path to a checkpoint to load>

--resume (if this is present, the furthest checkpoint available will be loaded automatically)

--finetune (if this is present, the provided checkpoint will be fine-tuned on the data from this pipeline)
            Citation
Python | Lines of Code: 8 | License: Permissive (Apache-2.0)
@inproceedings{lux2021toucan,
  title={{The IMS Toucan system for the Blizzard Challenge 2021}},
  author={Florian Lux and Julia Koch and Antje Schweitzer and Ngoc Thang Vu},
  year={2021},
  booktitle={Proc. Blizzard Challenge Workshop},
  volume={2021}
}
            Installation
Python | Lines of Code: 6 | License: Permissive (Apache-2.0)
            conda create --prefix ./toucan_conda_venv --no-default-packages python=3.8
            
            pip install --no-cache-dir -r requirements.txt
            
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html

            Community Discussions

            QUESTION

            Using RNN Trained Model without pytorch installed
            Asked 2022-Feb-28 at 20:17

            I have trained an RNN model with pytorch. I need to use the model for prediction in an environment where I'm unable to install pytorch because of some strange dependency issue with glibc. However, I can install numpy and scipy and other libraries. So, I want to use the trained model, with the network definition, without pytorch.

            I have the weights of the model as I save the model with its state dict and weights in the standard way, but I can also save it using just json/pickle files or similar.

            I also have the network definition, which depends on pytorch in a number of ways. This is my RNN network definition.

            ...

            ANSWER

            Answered 2022-Feb-17 at 10:47

            You should try to export the model using torch.onnx. The page gives you an example that you can start with.

            An alternative is to use TorchScript, but that requires torch libraries.

Both of these can be run without Python. You can load a TorchScript model in a C++ application: https://pytorch.org/tutorials/advanced/cpp_export.html

ONNX is much more portable, and you can use it from languages such as C#, Java, or JavaScript via https://onnxruntime.ai/ (even in the browser).

            A running example

Below, your example is modified slightly to get past the errors I found.

Note that with tracing, any if/elif/else, for, and while constructs will be unrolled.
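
The modified example itself is not reproduced on this page. As a minimal sketch of the export route the answer recommends (the TinyRNN model, checkpoint path, and tensor shapes below are illustrative, not from the original question):

import torch

class TinyRNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.RNN(input_size=8, hidden_size=16, batch_first=True)
        self.fc = torch.nn.Linear(16, 2)

    def forward(self, x):
        out, _ = self.rnn(x)          # out: (batch, seq_len, hidden)
        return self.fc(out[:, -1])    # classify from the last time step

model = TinyRNN()
# model.load_state_dict(torch.load("rnn_weights.pt"))  # hypothetical checkpoint path
model.eval()

dummy_input = torch.randn(1, 10, 8)   # (batch, seq_len, features)
torch.onnx.export(model, dummy_input, "rnn.onnx",
                  input_names=["input"], output_names=["logits"])

# Inference without PyTorch, using onnxruntime (pip install onnxruntime):
# import onnxruntime as ort
# session = ort.InferenceSession("rnn.onnx")
# logits = session.run(None, {"input": dummy_input.numpy()})[0]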

            Source https://stackoverflow.com/questions/71146140

            QUESTION

            Flux.jl : Customizing optimizer
            Asked 2022-Jan-25 at 07:58

I'm trying to implement a gradient-free optimizer function to train convolutional neural networks with Julia using Flux.jl. The reference paper is this: https://arxiv.org/abs/2005.05955. This paper proposes RSO, a gradient-free optimization algorithm that updates a single weight at a time on a sampling basis. The pseudocode of this algorithm is depicted in the picture below.

            optimizer_pseudocode

I'm using the MNIST dataset.

            ...

            ANSWER

            Answered 2022-Jan-14 at 23:47

Based on the paper you shared, it looks like you need to change the weight arrays per each output neuron per each layer. Unfortunately, this means that the implementation of your optimization routine is going to depend on the layer type, since an "output neuron" for a convolution layer is quite different from one for a fully-connected layer. In other words, just looping over Flux.params(model) is not going to be sufficient, since this is just a set of all the weight arrays in the model, and each weight array is treated differently depending on which layer it comes from.

            Fortunately, Julia's multiple dispatch does make this easier to write if you use separate functions instead of a giant loop. I'll summarize the algorithm using the pseudo-code below:
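
The answer's pseudo-code is not reproduced on this page. As a rough illustration of the sampling idea only, written in Python/PyTorch rather than the original Julia/Flux.jl (since the rest of this page uses Python; all names are illustrative):

import torch

def rso_sweep(model, loss_fn, x, y, sigma=0.1):
    """One sweep of the RSO idea: perturb one weight at a time, keep improvements."""
    with torch.no_grad():
        for param in model.parameters():
            flat = param.view(-1)  # the view shares storage, so writes hit the model
            for i in range(flat.numel()):
                current_loss = loss_fn(model(x), y).item()
                original = flat[i].item()
                # Sample a candidate value for this single weight.
                flat[i] = original + sigma * torch.randn(()).item()
                if loss_fn(model(x), y).item() >= current_loss:
                    flat[i] = original  # revert if the loss did not improve

This is extremely slow for real networks (two forward passes per weight), but it makes the per-weight sampling structure of the algorithm concrete.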

            Source https://stackoverflow.com/questions/70641453

            QUESTION

            How can I check a confusion_matrix after fine-tuning with custom datasets?
            Asked 2021-Nov-24 at 13:26

This question is the same as How can I check a confusion_matrix after fine-tuning with custom datasets?, on Data Science Stack Exchange.

            Background

I would like to check a confusion_matrix, including precision, recall, and f1-score, like below, after fine-tuning with custom datasets.

The fine-tuning process and task are Sequence Classification with IMDb Reviews, following the "Fine-tuning with custom datasets" tutorial on Hugging Face.

After finishing the fine-tuning with Trainer, how can I check a confusion_matrix in this case?

(An example image of the desired output, a confusion matrix including precision, recall, and f1-score, was attached here.)

            ...

            ANSWER

            Answered 2021-Nov-24 at 13:26

What you could do in this situation is to iterate over the validation set (or the test set, for that matter) and manually create lists of y_true and y_pred.
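
A minimal sketch of that approach, assuming a fine-tuned Hugging Face Trainer instance named trainer and a tokenized evaluation set val_dataset (both names are placeholders for the asker's objects):

import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# `trainer` and `val_dataset` come from the fine-tuning setup described above.
predictions = trainer.predict(val_dataset)            # runs inference over the set
y_pred = np.argmax(predictions.predictions, axis=-1)  # logits -> class ids
y_true = predictions.label_ids

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))          # precision, recall, f1-score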

            Source https://stackoverflow.com/questions/68691450

            QUESTION

CUDA OOM - But the numbers don't add up?
            Asked 2021-Nov-23 at 06:13

            I am trying to train a model using PyTorch. When beginning model training I get the following error message:

            RuntimeError: CUDA out of memory. Tried to allocate 5.37 GiB (GPU 0; 7.79 GiB total capacity; 742.54 MiB already allocated; 5.13 GiB free; 792.00 MiB reserved in total by PyTorch)

I am wondering why this error is occurring. From the way I see it, I have 7.79 GiB total capacity. The numbers it is stating (742 MiB + 5.13 GiB + 792 MiB) do not add up to more than 7.79 GiB. When I check nvidia-smi, I see these processes running:

            ...

            ANSWER

            Answered 2021-Nov-23 at 06:13

            This is more of a comment, but worth pointing out.

            The reason in general is indeed what talonmies commented, but you are summing up the numbers incorrectly. Let's see what happens when tensors are moved to GPU (I tried this on my PC with RTX2060 with 5.8G usable GPU memory in total):

            Let's run the following python commands interactively:
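
The interactive session itself is not reproduced on this page. A hedged sketch of the kind of commands the answer describes (exact numbers will differ per GPU):

import torch

print(torch.cuda.memory_allocated() / 1024**2)  # MiB actually occupied by tensors
print(torch.cuda.memory_reserved() / 1024**2)   # MiB held by PyTorch's caching allocator

x = torch.randn(1024, 1024, 256, device="cuda")  # ~1 GiB of float32 values
print(torch.cuda.memory_allocated() / 1024**2)   # grows by roughly 1024 MiB
print(torch.cuda.memory_reserved() / 1024**2)    # may grow more, in larger blocks

The gap between these two numbers, plus memory used by other processes and by the CUDA context itself, is why the figures in the error message do not simply sum to the total capacity.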

            Source https://stackoverflow.com/questions/70074789

            QUESTION

            How to compare baseline and GridSearchCV results fair?
            Asked 2021-Nov-04 at 21:17

I am a bit confused about comparing the best GridSearchCV model with a baseline.
For example, say we have a classification problem.
As a baseline, we'll fit a model with default settings (let it be logistic regression):

            ...

            ANSWER

            Answered 2021-Nov-04 at 21:17

            No, they aren't comparable.

Your baseline model used X_train to fit the model. Then you're using the fitted model to score the X_train sample. This is like cheating, because the model is going to perform at its best on data that it has already seen.

            The grid searched model is at a disadvantage because:

            1. It's working with less data since you have split the X_train sample.
            2. Compound that with the fact that it's getting trained with even less data due to the 5 folds (it's training with only 4/5 of X_val per fold).

            So your score for the grid search is going to be worse than your baseline.

Now you might ask, "So what's the point of best_model.best_score_?" Well, that score is used to compare all the models tried while searching for the optimal hyperparameters in your search space, but it should in no way be used to compare against a model that was trained outside of the grid search context.

            So how should one go about conducting a fair comparison?

1. Split your training data for both models, as sketched below.
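
A minimal sketch of a fair comparison, using synthetic data and hypothetical variable names:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: default settings, scored on held-out data it has never seen.
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("baseline:", baseline.score(X_test, y_test))

# Grid search: tuned on the same training split, scored on the same test split.
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    {"C": [0.01, 0.1, 1, 10]}, cv=5).fit(X_train, y_train)
print("tuned:   ", grid.score(X_test, y_test))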

            Source https://stackoverflow.com/questions/69844028

            QUESTION

            Getting Error 524 while running jupyter lab in google cloud platform
            Asked 2021-Oct-15 at 02:14

I am not able to access a JupyterLab instance created on Google Cloud.

I created one notebook using Google AI Platform. I was able to start it and work, but suddenly it stopped, and I am not able to start it now. I tried rebuilding and restarting JupyterLab, but to no avail. I have checked my disk usage as well, which is only 12%.

I tried the diagnostic tool, which gave a result (screenshot omitted), but that didn't fix it.

            Thanks in advance.

            ...

            ANSWER

            Answered 2021-Aug-20 at 14:00

            QUESTION

            TypeError: brain.NeuralNetwork is not a constructor
            Asked 2021-Sep-29 at 22:47

            I am new to Machine Learning.

Having followed the steps in this simple machine learning tutorial using the Brain.js library, I cannot understand why I keep getting the error message below:

            I have double-checked my code multiple times. This is particularly frustrating as this is the very first exercise!

            Kindly point out what I am missing here!

            Find below my code:

            ...

            ANSWER

            Answered 2021-Sep-29 at 22:47

Turns out it's just documented incorrectly.

            In reality the export from brain.js is this:

            Source https://stackoverflow.com/questions/69348213

            QUESTION

            Ordinal Encoding or One-Hot-Encoding
            Asked 2021-Sep-04 at 06:43

If we are not sure about the nature of categorical features, i.e., whether they are nominal or ordinal, which encoding should we use: Ordinal-Encoding or One-Hot-Encoding? Is there a clearly defined rule on this topic?

I see a lot of people using Ordinal-Encoding on categorical data that doesn't have a direction. Suppose a frequency table:

            ...

            ANSWER

            Answered 2021-Sep-04 at 06:43

You're right. The one thing to consider when choosing between OrdinalEncoder and OneHotEncoder is: does the order of the data matter?

            Most ML algorithms will assume that two nearby values are more similar than two distant values. This may be fine in some cases e.g., for ordered categories such as:

            • quality = ["bad", "average", "good", "excellent"] or
            • shirt_size = ["large", "medium", "small"]

            but it is obviously not the case for the:

            • color = ["white","orange","black","green"]

column (except for cases where you need to consider a spectrum, say from white to black; note that in this case the white category should be encoded as 0 and black as the highest number in your categories), or if you have cases where, say, categories 0 and 4 are more similar than categories 0 and 1. To fix this issue, a common solution is to create one binary attribute per category (One-Hot encoding).
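
A small sketch contrasting the two encoders on the examples above (scikit-learn; the toy DataFrame is illustrative):

import pandas as pd
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

df = pd.DataFrame({"quality": ["bad", "good", "average", "excellent"],
                   "color": ["white", "orange", "black", "green"]})

# Ordered category: give OrdinalEncoder the order explicitly.
ord_enc = OrdinalEncoder(categories=[["bad", "average", "good", "excellent"]])
print(ord_enc.fit_transform(df[["quality"]]))  # [[0], [2], [1], [3]]

# Unordered category: one binary column per value.
# (On scikit-learn < 1.2, use sparse=False instead of sparse_output=False.)
oh_enc = OneHotEncoder(sparse_output=False)
print(oh_enc.fit_transform(df[["color"]]))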

            Source https://stackoverflow.com/questions/69052776

            QUESTION

            How to increase dimension-vector size of BERT sentence-transformers embedding
            Asked 2021-Aug-15 at 13:35

I am using sentence-transformers for semantic search, but sometimes it does not understand the contextual meaning and returns a wrong result, e.g., BERT problem with context/semantic search in Italian language.

By default, the vector size of the sentence embedding is 768 columns, so how do I increase that dimension so that it can understand contextual meaning more deeply?

            code:

            ...

            ANSWER

            Answered 2021-Aug-10 at 07:39

            Increasing the dimension of a trained model is not possible (without many difficulties and re-training the model). The model you are using was pre-trained with dimension 768, i.e., all weight matrices of the model have a corresponding number of trained parameters. Increasing the dimensionality would mean adding parameters which however need to be learned.

            Also, the dimension of the model does not reflect the amount of semantic or context information in the sentence representation. The choice of the model dimension reflects more a trade-off between model capacity, the amount of training data, and reasonable inference speed.

            If the model that you are using does not provide representation that is semantically rich enough, you might want to search for better models, such as RoBERTa or T5.
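
A quick way to see that the dimension is fixed by the pre-trained architecture (the model name below is illustrative; any sentence-transformers model behaves the same way):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
embedding = model.encode("una frase di esempio")
print(embedding.shape)  # (768,) -- determined by the model, not configurable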

            Source https://stackoverflow.com/questions/68686272

            QUESTION

            How to identify what features affect predictions result?
            Asked 2021-Aug-11 at 15:55

I have a table with features that were used to build some model to predict whether a user will buy a new insurance or not. In the same table, I have the probability of belonging to class 1 (will buy) and class 0 (will not buy), as predicted by this model. I don't know what kind of algorithm was used to build this model; I only have its predicted probabilities.

Question: how can I identify which features affect these prediction results? Do I need to build a correlation matrix or conduct statistical tests?

            Table example:

            ...

            ANSWER

            Answered 2021-Aug-11 at 15:55

You could build a model like this:

X = the features you have; y = the true label.

From that you can extract feature importances. Also, if you want to go the extra mile, you can use bootstrapping, so that the feature importances are more statistically stable.
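
A sketch of that surrogate-model idea, with a synthetic stand-in for the feature table (all names are illustrative):

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the user's feature table and labels.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(5)])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, score in sorted(zip(X.columns, clf.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")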

            Source https://stackoverflow.com/questions/68744565

Community Discussions and Code Snippets include sources from the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install IMS-Toucan

To install this toolkit, clone it onto the machine you want to use it on (the machine should have at least one GPU if you intend to train models on it; for inference, you can get by without a GPU). Navigate to the directory you have cloned.

We are going to create and activate a conda virtual environment to install the basic requirements into. After creating the environment, the command you need to activate the virtual environment is displayed. The commands in the Installation snippet above show everything you need to do.

We use an ensemble of Speechbrain's ECAPA-TDNN and Speechbrain's x-vector as the speaker conditioning. In the current version of the toolkit, no further action should be required; however, when you use multispeaker for the first time, an internet connection is required to (automatically) download the pretrained models.

Finally, you need to have espeak-ng installed on your system, because it is used as the backend for the phonemizer. If you replace the phonemizer, you don't need it. On most Linux environments it will already be installed, and if it is not and you have sufficient rights, you can install it with a single command.

You don't need to use pretrained models, but they can speed things up tremendously. Go to the release section and download the aligner model, the HiFiGAN model, and the multi-lingual multi-speaker FastSpeech 2 model. Place them in Models/Aligner/aligner.pt, Models/HiFiGAN_combined/best.pt, and Models/FastSpeech2_Meta/best.pt.
To train a HiFiGAN vocoder on your own data: In the directory called Utility there is a file called file_lists.py. In this file, write a function that returns a list of the absolute paths to each of the audio files in your dataset as strings (a minimal sketch follows below).

Then go to the directory TrainingInterfaces/TrainingPipelines. In there, make a copy of any existing pipeline that has HiFiGAN in its name. We will use this copy as reference and only make the necessary changes to use the new dataset. Import the function you have just written as get_file_list. Now look out for a variable called model_save_dir. This is the default directory that checkpoints will be saved into, unless you specify another one when calling the training script. Change it to whatever you like.

Now you need to add your newly created pipeline to the pipeline dictionary in the file run_training_pipeline.py in the top level of the toolkit. In this file, import the run function from the pipeline you just created and give it a speaking name. Now, in the pipeline_dict, add your imported function as the value and use as the key a shorthand that makes sense. And just like that, you're done.
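
A minimal sketch of such a file-list function (the dataset path and audio format below are illustrative, not prescribed by the toolkit):

from pathlib import Path

def get_file_list_mydataset():
    # Return the absolute path of every audio file in the dataset as a string.
    return [str(path) for path in sorted(Path("/data/mydataset").glob("**/*.wav"))]
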
To train a FastSpeech 2 model on your own data: In the directory called Utility there is a file called path_to_transcript_dicts.py. In this file, write a function that returns a dictionary with the absolute paths to each of the audio files in your dataset (as strings) as the keys and the textual transcriptions of the corresponding audios as the values (a minimal sketch follows below).

Then go to the directory TrainingInterfaces/TrainingPipelines. In there, make a copy of any existing pipeline that has FastSpeech 2 in its name. We will use this copy as reference and only make the necessary changes to use the new dataset. Import the function you have just written as build_path_to_transcript_dict. Since the data will be processed a considerable amount, a cache will be built and saved as a file for quick and easy restarts. So find the variable cache_dir and adapt it to your needs. The same goes for the variable save_dir, which is where the checkpoints will be saved to. This is a default value; you can overwrite it with a command line argument when calling the pipeline later, in case you want to fine-tune from a checkpoint and thus save into a different directory.

In your new pipeline file, look out for the line in which the acoustic_model is loaded. Change the path to the checkpoint of an Aligner model. It can either be the one that is supplied with the toolkit on the release page, or one that you trained yourself. In the example pipelines, the one that we provide is fine-tuned to the dataset it is applied to before it is used to extract durations.

Since we are using text here, we have to make sure that the text processing is adequate for the language. So check in Preprocessing/TextFrontend whether the TextFrontend already has a language ID (e.g. 'en' and 'de') for the language of your dataset. If not, you'll have to implement handling for it, but that should be pretty simple, done analogously to what is already there. Now, back in the pipeline, change the lang argument in the creation of the dataset and in the call to the train loop function to the language ID that matches your data.

Now navigate to the implementation of the train_loop that is called in the pipeline. In this file, find the function called plot_progress_spec. This function produces spectrogram plots during training, which are the most important way to monitor the progress of the training. In there, you may need to add an example sentence for the language of the data you are using. It should all be pretty clear from looking at it.

Once this is done, we are almost finished; now we just need to make the pipeline available to the run_training_pipeline.py file in the top level. In said file, import the run function from the pipeline you just created and give it a speaking name. Now, in the pipeline_dict, add your imported function as the value and use as the key a shorthand that makes sense. And that's it.
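
A minimal sketch of such a path-to-transcript function, assuming a hypothetical LJSpeech-style layout with a metadata.csv of "file|transcription" lines and audio under wavs/ (neither is prescribed by the toolkit):

from pathlib import Path

def build_path_to_transcript_dict_mydataset():
    root = Path("/data/mydataset")
    path_to_transcript = dict()
    with open(root / "metadata.csv", encoding="utf8") as metadata:
        for line in metadata:
            wav_name, transcript = line.strip().split("|", 1)
            path_to_transcript[str(root / "wavs" / f"{wav_name}.wav")] = transcript
    return path_to_transcript
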

            Support

For new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
Consider Popular Machine Learning Libraries

tensorflow by tensorflow
youtube-dl by ytdl-org
models by tensorflow
pytorch by pytorch
keras by keras-team

Try Top Libraries by DigitalPhonetics

adviser (Python)
speaker-anonymization (Python)
reading-comprehension (Python)
cyclegan-emotion-transfer (Python)
nlg-eval (Python)