Speech-Transformer | PyTorch re-implementation of Speech-Transformer | Speech library

by foamliu | Python | Version: v1.0 | License: MIT

kandi X-RAY | Speech-Transformer Summary

Speech-Transformer is a Python library typically used in Artificial Intelligence, Speech, Deep Learning, PyTorch, and Transformer applications. Speech-Transformer has no reported bugs or vulnerabilities, has a build file available, carries a permissive license, and has low support. You can download it from GitHub.

This is a PyTorch re-implementation of Speech-Transformer: A No-Recurrence Sequence-to-Sequence Model for Speech Recognition.

            Support

              Speech-Transformer has a low-activity ecosystem.
              It has 67 stars, 17 forks, and 3 watchers.
              It had no major release in the last 12 months.
              There is 1 open issue and 12 have been closed. On average, issues are closed in 39 days. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of Speech-Transformer is v1.0.

            Quality

              Speech-Transformer has 0 bugs and 21 code smells.

            Security

              Speech-Transformer has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              Speech-Transformer code analysis shows 0 unresolved vulnerabilities.
              There are 3 security hotspots that need review.

            License

              Speech-Transformer is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              Speech-Transformer releases are available to install and integrate.
              A build file is available; you can build the component from source.
              Installation instructions are not available. Examples and code snippets are available.
              Speech-Transformer saves you 507 person hours of effort in developing the same functionality from scratch.
              It has 1191 lines of code, 69 functions and 22 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed Speech-Transformer and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality Speech-Transformer implements, and to help you decide whether it suits your requirements. A rough sketch of one of these functions, the low frame rate (LFR) feature builder, follows the list.
            • Train the network
            • Calculate loss
            • Calculate the loss value
            • Update statistics
            • Train the model
            • Augment the spectrogram using a time warping
            • Adds zero flow control points at boundary points
            • Warp a timeseries
            • Warp an image
            • Perform decoder
            • Pad a list
            • Compute the key padding mask for a query
            • Prepare padded input
            • Get data for given split
            • Ensure folder exists
            • Builds the vocabulary
            • Adds results to json
            • Parse the hypothesis
            • Update the sum
            • Process a dictionary of sos and eos IDs
            • Recognize beam
            • Parse command line arguments
            • Calculate the CER function
            • Build LFR features
            • Forward the layer
            • Extract a feature
            • Extract data from a tar file
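
One of the listed functions builds low frame rate (LFR) features: it stacks m consecutive acoustic frames and keeps only every n-th stack, shortening the sequence the encoder must attend over. The sketch below only illustrates that idea with assumed defaults (m=4, n=3); it is not the repository's exact implementation.

import numpy as np

def build_lfr_features(frames, m=4, n=3):
    """Stack every m consecutive frames and keep one stacked frame every n frames.

    frames: (T, D) array of acoustic features.
    Returns an array of shape (ceil(T / n), m * D).
    """
    T, _ = frames.shape
    lfr = []
    for i in range(0, T, n):
        chunk = frames[i:i + m]
        if chunk.shape[0] < m:
            # Pad the tail by repeating the last frame so every chunk has m frames.
            pad = np.tile(frames[-1], (m - chunk.shape[0], 1))
            chunk = np.concatenate([chunk, pad], axis=0)
        lfr.append(chunk.reshape(-1))
    return np.stack(lfr)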

            Speech-Transformer Key Features

            No Key Features are available at this moment for Speech-Transformer.

            Speech-Transformer Examples and Code Snippets

            Speech Transformer, Usage: Data Pre-processing
            Python | Lines of Code: 14 | License: Permissive (MIT)
            $ python extract.py
            
            $ cd data/data_aishell/wav
            $ find . -name '*.tar.gz' -execdir tar -xzvf '{}' \;
            
            $ python pre_process.py
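
As a rough illustration of what this pre-processing step produces, acoustic features such as log-mel filterbanks can be extracted from the AISHELL wav files with librosa. The parameters below (80 mel bins, 25 ms windows, 10 ms hops) are assumptions for illustration only; pre_process.py may use different features or libraries.

import numpy as np
import librosa

def extract_fbank(wav_path, sr=16000, n_mels=80):
    """Return log-mel filterbank features of shape (num_frames, n_mels)."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=400, hop_length=160, n_mels=n_mels)  # 25 ms window, 10 ms hop
    return np.log(mel.T + 1e-8)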
              
            Speech Transformer: Dataset
            Python | Lines of Code: 8 | License: Permissive (MIT)
            @inproceedings{aishell_2017,
              title={AIShell-1: An Open-Source Mandarin Speech Corpus and A Speech Recognition Baseline},
              author={Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, Hao Zheng},
              booktitle={Oriental COCOSDA 2017},
              pages={Submitted},
              year={2017}
            }
            Speech Transformer, Usage: Train
            Python | Lines of Code: 2 | License: Permissive (MIT)
            $ python train.py
            
            $ tensorboard --logdir runs
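
Transformer ASR training commonly uses the warmup learning-rate schedule from "Attention Is All You Need": the rate grows during the first warmup steps and then decays with the inverse square root of the step count. Whether train.py uses exactly this schedule and these constants is an assumption; the sketch below only shows the formula.

def transformer_lr(step, d_model=512, warmup_steps=4000, k=1.0):
    """lr = k * d_model**-0.5 * min(step**-0.5, step * warmup_steps**-1.5)"""
    step = max(step, 1)  # avoid division by zero at step 0
    return k * d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)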
              

            Community Discussions

            QUESTION

            What is attention penalty in speech transformer paper? (updated)
            Asked 2020-Jan-13 at 15:13

            github: https://github.com/sephiroce/tfsr/tree/exprimental

            I'm trying to reproduce recognition accuracies described in the speech transformer paper [1]. The attention penalty is a technique I could not fully understand. This is the description of the attention penalty in the paper.

            "In addition, we encouraged the model attending to closer positions by adding bigger penalty on the attention weights of more distant position-pairs."

            I understood it as adding negative values to the scaled attention logits (before masking) that grow in magnitude the farther a position pair is from the diagonal, except for the first multi-head attention in the decoders.

            This is a code snippet for computing attention weights.

            ...

            ANSWER

            Answered 2020-Jan-13 at 10:33

            I think you understand it well. They probably applied the penalty as a stripe around the diagonal, something like:
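
The answer's original snippet is not included on this page. As a hedged sketch of the idea, one can subtract a bias from the scaled attention logits that grows with the distance |i - j| between query and key positions before applying the softmax; the linear penalty and the alpha scale below are assumptions, since the paper does not spell out the exact penalty function.

import torch

def attention_with_distance_penalty(q, k, v, alpha=1.0):
    """Scaled dot-product attention that penalizes distant position pairs.

    q, k, v: tensors of shape (batch, heads, seq_len, d_k).
    alpha: assumed strength of the distance penalty.
    """
    d_k = q.size(-1)
    logits = torch.matmul(q, k.transpose(-2, -1)) / d_k ** 0.5
    len_q, len_k = logits.size(-2), logits.size(-1)
    pos_q = torch.arange(len_q, device=logits.device).unsqueeze(1)
    pos_k = torch.arange(len_k, device=logits.device).unsqueeze(0)
    penalty = alpha * (pos_q - pos_k).abs().float()  # larger penalty farther from the diagonal
    weights = torch.softmax(logits - penalty, dim=-1)
    return torch.matmul(weights, v)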

            Source https://stackoverflow.com/questions/59646954

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install Speech-Transformer

            You can download it from GitHub.
            You can use Speech-Transformer like any standard Python library. You will need a development environment consisting of a Python distribution (including header files), a compiler, pip, and git. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

            For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask questions on the community page Stack Overflow.
            CLONE
          • HTTPS

            https://github.com/foamliu/Speech-Transformer.git

          • CLI

            gh repo clone foamliu/Speech-Transformer

          • SSH URL

            git@github.com:foamliu/Speech-Transformer.git
