waveglow | A Flow-based Generative Network for Speech Synthesis

 by NVIDIA | Python | Version: Current | License: BSD-3-Clause

kandi X-RAY | waveglow Summary

waveglow is a Python library implementing a flow-based generative network for speech synthesis. It has no reported bugs or vulnerabilities, includes a build file, carries a permissive BSD-3-Clause license, and has medium community support. You can download it from GitHub.

A Flow-based Generative Network for Speech Synthesis

            Support

              waveglow has a medium active ecosystem.
              It has 2110 stars, 517 forks, and 79 watchers.
              It had no major release in the last 6 months.
              There are 69 open issues and 184 closed issues; on average, issues are closed in 42 days. There are 5 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of waveglow is current.

            Quality

              waveglow has 0 bugs and 0 code smells.

            Security

              waveglow has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              waveglow code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              waveglow is licensed under the BSD-3-Clause License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              waveglow releases are not available. You will need to build from source code and install.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.

            Top functions reviewed by kandi - BETA

            kandi has reviewed waveglow and discovered the below as its top functions. This is intended to give you an instant insight into the functionality waveglow implements, and to help you decide if it suits your requirements.
            • Train the WaveGlow model
            • Unflatten a list of tensors
            • Flatten a list of tensors
            • Load a checkpoint
            • Save model and optimizer state
            • Reduce a tensor across num_gpus
            • Apply gradients to modules
            • Recursively update the model
            • Update res_skip
            • Check if a model has old version
            • Update the model conditional weight
            • Perform the forward pass on the input
            • Concatenate tanh
            • Remove weights from the WaveGlow model
            • Remove weights from conv_norm
            • Return a list of files
            • Compute the mel spectrogram from the input audio
            • Load a WAV file into a torch Tensor

            waveglow Key Features

            No Key Features are available at this moment for waveglow.

            waveglow Examples and Code Snippets

            WaveGlow Vocoder, Usage
            Python | Lines of Code: 10 | License: Permissive (BSD-3-Clause)
            import torch
            import librosa
            
            y,sr = librosa.load(librosa.util.example_audio_file(), sr=22050, mono=True, duration=10, offset=30)
            y_tensor = torch.from_numpy(y).to(device='cuda', dtype=torch.float32)
            
            from waveglow_vocoder import WaveGlowVocoder
            
            WV = WaveGlowVocoder()

            # The snippet is truncated here; the lines below are assumed from the
            # waveglow_vocoder package's documented usage, not part of the original.
            mel = WV.wav2mel(y_tensor)
            wav = WV.mel2wav(mel)
            WAVEGLOW, Dataset Preparation
            Python | Lines of Code: 8 | License: Permissive (MIT)
            cd src
            python3 dataset/procaudio.py
            
            Audio Name without extension|Text only for notation|True Text
            
            LJ001-0008|has never been surpassed.|has never been surpassed.
            LJ001-0009|Printing, then, for our purpose, may be considered as the art of making book  
            Usage, training
            Python | Lines of Code: 6 | License: No License
            (without GPU)
            python train.py
            
            (with GPU #n)
            python train.py -g n
            
            python train.py -r snapshot_iter_100000
              

            Community Discussions

            QUESTION

            About the usage of vocoders
            Asked 2022-Feb-01 at 23:05

            I'm quite new to AI and I'm currently developing a model for non-parallel voice conversion. One confusing problem that I have is the use of vocoders.

            So my model needs Mel spectrograms as the input, and the current model that I'm working on uses the MelGAN vocoder (Github link), which can generate 22050Hz Mel spectrograms from raw wav files (which is what I need) and back. I recently tried the WaveGlow Vocoder (PyPI link), which can also generate Mel spectrograms from raw wav files and back.

            But in other models such as WaveRNN, VocGAN, and WaveGrad, there is no clear explanation about wav-to-Mel-spectrogram generation. Do most of these models not require the wav-to-Mel-spectrogram feature because they largely cater to TTS models like Tacotron? Or is it possible that all of these have that feature and I'm just not aware of it?

            A clarification would be highly appreciated.

            ...

            ANSWER

            Answered 2022-Feb-01 at 23:05
            How neural vocoders handle audio -> mel

            Check e.g. this part of the MelGAN code: https://github.com/descriptinc/melgan-neurips/blob/master/mel2wav/modules.py#L26

            Specifically, the Audio2Mel module simply uses standard methods to create log-magnitude mel spectrograms like this (a minimal sketch follows the list):

            • Compute the STFT by applying the Fourier transform to windows of the input audio,
            • Take the magnitude of the resulting complex spectrogram,
            • Multiply the magnitude spectrogram by a mel filter matrix. Note that they actually get this matrix from librosa!
            • Take the logarithm of the resulting mel spectrogram.
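
            For illustration, here is a minimal sketch of those four steps using librosa. The parameter values (sample rate, n_fft, hop length, number of mel bands) and the example audio file are assumptions chosen for the sketch, not the exact settings MelGAN uses.

            import numpy as np
            import librosa

            # Example parameters; chosen for illustration, not MelGAN's exact settings.
            sr, n_fft, hop_length, n_mels = 22050, 1024, 256, 80

            y, _ = librosa.load(librosa.ex("trumpet"), sr=sr)

            # 1. STFT over windows of the input audio
            stft = librosa.stft(y, n_fft=n_fft, hop_length=hop_length)

            # 2. Magnitude of the resulting complex spectrogram
            magnitude = np.abs(stft)

            # 3. Multiply by a mel filter matrix (built with the same librosa helper)
            mel_basis = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
            mel = mel_basis @ magnitude

            # 4. Take the logarithm (with a small floor to avoid log(0))
            log_mel = np.log(np.clip(mel, a_min=1e-5, a_max=None))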
            Regarding the confusion

            Your confusion might stem from the fact that, usually, authors of Deep Learning papers only mean their mel-to-audio "decoder" when they talk about "vocoders" -- the audio-to-mel part is always more or less the same. I say this might be confusing since, to my understanding, the classical meaning of the term "vocoder" includes both an encoder and a decoder.

            Unfortunately, these methods will not always work exactly in the same manner as there are e.g. different methods to create the mel filter matrix, different padding conventions etc.

            For example, librosa.stft has a center argument that will pad the audio before applying the STFT, while tensorflow.signal.stft does not have this (it would require manual padding beforehand).
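
            A hedged sketch of that padding difference (the signal and n_fft below are placeholder example values):

            import numpy as np
            import librosa

            y = np.random.randn(22050).astype(np.float32)  # placeholder one-second signal
            n_fft = 1024

            # librosa pads the signal so frames are centered on their timestamps...
            S_centered = librosa.stft(y, n_fft=n_fft, center=True)
            # ...while center=False matches frameworks that do no implicit padding, like tf.signal.stft.
            S_plain = librosa.stft(y, n_fft=n_fft, center=False)

            # The two results even have different numbers of frames.
            print(S_centered.shape, S_plain.shape)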

            An example for the different methods to create mel filters would be the htk argument in librosa.filters.mel, which switches between the "HTK" method and "Slaney". Again taking Tensorflow as an example, tf.signal.linear_to_mel_weight_matrix does not support this argument and always uses the HTK method. Unfortunately, I am not familiar with torchaudio, so I don't know if you need to be careful there, as well.
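
            As a quick illustration of the filter-matrix point, you can build both variants with librosa and compare them; the sample rate, FFT size, and mel-band count below are arbitrary example values.

            import librosa

            sr, n_fft, n_mels = 22050, 1024, 80

            # librosa's default uses the Slaney formula...
            mel_slaney = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels, htk=False)
            # ...while htk=True switches to the HTK formula (the one tf.signal always uses).
            mel_htk = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels, htk=True)

            # The matrices differ, so mel spectrograms built from them will differ too.
            print(abs(mel_slaney - mel_htk).max())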

            Finally, there are of course many parameters such as the STFT window size, hop length, the frequencies covered by the mel filters, etc., and changing these relative to what a reference implementation used may impact your results. Since different code repositories likely use slightly different parameters, I suppose the answer to your question "will every method do the operation (to create a mel spectrogram) in the same manner?" is "not really". At the end of the day, you will have to settle for one set of parameters either way...

            Bonus: Why are these all only decoders and the encoder is always the same?

            The direction Mel -> Audio is hard. Not even Mel -> ("normal") spectrogram is well-defined since the conversion to mel spectrum is lossy and cannot be inverted. Finally, converting a spectrogram to audio is difficult since the phase needs to be estimated. You may be familiar with methods like Griffin-Lim (again, librosa has it so you can try it out). These produce noisy, low-quality audio. So the research focuses on improving this process using powerful models.
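
            If you want to hear this for yourself, a minimal Griffin-Lim round trip with librosa might look like the sketch below; the example file and STFT parameters are assumptions, and the reconstruction will sound noticeably degraded, as described above.

            import numpy as np
            import librosa

            y, sr = librosa.load(librosa.ex("trumpet"), sr=22050)

            # Keep only the magnitude spectrogram; the phase is discarded.
            S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))

            # Griffin-Lim iteratively estimates the missing phase and reconstructs audio.
            y_rec = librosa.griffinlim(S, n_iter=32, hop_length=256)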

            On the other hand, Audio -> Mel is simple, well-defined and fast. There is no need to define "custom encoders".

            Now, a whole different question is whether mel spectrograms are a "good" encoding. Using methods like variational autoencoders, you could perhaps find better (e.g. more compact, less lossy) audio encodings. These would include custom encoders and decoders and you would not get away with standard librosa functions...

            Source https://stackoverflow.com/questions/70942123

            QUESTION

            "errorMessage": "[Errno 28] No space left on device" AWS-Lambda
            Asked 2021-Jun-30 at 13:56

            I am executing my test configuration and this is the error I am facing. I have a trained model of size 327 MB and layers of 250 MB required for inference of my Text To Speech model. Could the size of the model and layers be the reason? Please help me clarify and provide a solution. I am importing the trained model from an S3 bucket and then loading it for further processing. Here is the code and error.

            ...

            ANSWER

            Answered 2021-Jun-30 at 13:56

            AWS Lambda's local storage in /tmp is only 512 MB. You are apparently exceeding this limit.

            There are five solutions I can think of:

            1. Mount an EFS volume (which already contains your trained model) to the Lambda.
            2. Reduce the size of your model.
            3. Stream the model in chunks to your Lambda (might be hard).
            4. Don't use Lambda (maybe just a plain EC2 instance or EKS).
            5. Use a Docker container image that already contains your model as the Lambda.

            It is hard to tell what the best solution for you is, since so much information is missing, but those solutions should give you a good starting point. A sketch of the first option follows below.
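
            As a hedged illustration of option 1, a minimal Lambda handler might look like this; the EFS mount path and the checkpoint filename are hypothetical and depend on how the EFS access point is configured for the function.

            import torch

            # Hypothetical path: an EFS access point mounted at /mnt/model in the Lambda configuration.
            MODEL_PATH = "/mnt/model/trained_model.pt"

            model = None  # cached across warm invocations

            def handler(event, context):
                global model
                if model is None:
                    # Loading from the EFS mount avoids the 512 MB /tmp limit entirely.
                    model = torch.load(MODEL_PATH, map_location="cpu")
                    model.eval()
                # ... run Text To Speech inference with `model` here ...
                return {"statusCode": 200}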

            Source https://stackoverflow.com/questions/68195577

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install waveglow

            Clone the repo and initialize the submodule:
            git clone https://github.com/NVIDIA/waveglow.git
            cd waveglow
            git submodule init
            git submodule update
            Install the requirements:
            pip3 install -r requirements.txt
            Install Apex

            Support

            For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/NVIDIA/waveglow.git

          • CLI

            gh repo clone NVIDIA/waveglow

          • SSH

            git@github.com:NVIDIA/waveglow.git
