AutoSub | A CLI script to generate subtitle files (SRT/VTT/TXT) for any video using either DeepSpeech or Coqui | Speech library

 by abhirooptalasila | Python | Version: v1.1.0 | License: MIT

kandi X-RAY | AutoSub Summary

AutoSub is a Python library typically used in Artificial Intelligence, Speech, and Deep Learning applications. AutoSub has no reported bugs or vulnerabilities, has a build file available, carries a permissive license, and has low support. You can download it from GitHub.

AutoSub is a CLI application to generate a subtitle file (.srt) for any video file using Mozilla DeepSpeech. I use the DeepSpeech Python API to run inference on audio segments and pyAudioAnalysis to split the initial audio on silence, producing multiple small files. Featured in DeepSpeech Examples by Mozilla.
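The split-then-transcribe flow described above can be sketched roughly as follows. This is only an illustrative sketch, not AutoSub's actual code: it assumes a 16 kHz mono WAV named audio.wav, the pyAudioAnalysis silence_removal API (older releases spell it silenceRemoval), and the DeepSpeech 0.9.x Python API.

import wave
import numpy as np
from deepspeech import Model
from pyAudioAnalysis import audioBasicIO
from pyAudioAnalysis import audioSegmentation as aS

# Load the acoustic model (file name assumed; see the install section below)
ds = Model("deepspeech-0.9.3-models.pbmm")

# Find non-silent regions: a list of [start_sec, end_sec] pairs
sampling_rate, signal = audioBasicIO.read_audio_file("audio.wav")
segments = aS.silence_removal(signal, sampling_rate, 0.05, 0.05,
                              smooth_window=0.5, weight=0.5, plot=False)

# Re-read the file as raw 16-bit PCM, which is what DeepSpeech expects
with wave.open("audio.wav", "rb") as w:
    pcm = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

# Transcribe each non-silent chunk; the timings become the subtitle cues
for start, end in segments:
    chunk = pcm[int(start * sampling_rate):int(end * sampling_rate)]
    print(f"{start:.2f}-{end:.2f}: {ds.stt(chunk)}")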

            Support

              AutoSub has a low active ecosystem.
              It has 470 stars and 93 forks. There are 12 watchers for this library.
              It had no major release in the last 12 months.
              There are 8 open issues and 48 have been closed. On average, issues are closed in 45 days. There are 2 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of AutoSub is v1.1.0.

            Quality

              AutoSub has 0 bugs and 14 code smells.

            Security

              AutoSub has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              AutoSub code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              AutoSub is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              AutoSub releases are available to install and integrate.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.
              It has 505 lines of code, 26 functions and 8 files.
              It has low code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed AutoSub and identified the functions below as its top functions. This is intended to give you an instant insight into how AutoSub is implemented and to help you decide whether it suits your requirements. A hedged sketch of two of these helpers follows the list.
            • Removes silence from an audio file
            • Read an audio file
            • Performs silence removal
            • Feature extraction function
            • Process an audio file
            • Write text to file
            • Format a number of seconds
            • Get a model from local directory
            • Download a model with wget
            • Create and return a model instance
            • Delete all files in the given folder
            • Extract audio with ffmpeg
            • Setup a logger
            • Sort a list of numbers
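
            Two of the helpers above are simple enough to sketch. The code below is hypothetical (names and paths are illustrative, not AutoSub's own): formatting a number of seconds as an SRT timestamp, and extracting 16 kHz mono audio from a video by calling ffmpeg through subprocess.

            import subprocess

            def format_srt_time(seconds):
                """Format a number of seconds as an SRT timestamp, e.g. 75.5 -> 00:01:15,500."""
                millis = int(round(seconds * 1000))
                hours, millis = divmod(millis, 3_600_000)
                minutes, millis = divmod(millis, 60_000)
                secs, millis = divmod(millis, 1_000)
                return f"{hours:02d}:{minutes:02d}:{secs:02d},{millis:03d}"

            def extract_audio(video_path, wav_path="audio/output.wav"):
                """Extract 16 kHz mono WAV audio (the format DeepSpeech expects) using ffmpeg."""
                subprocess.run(
                    ["ffmpeg", "-y", "-i", video_path, "-vn", "-ac", "1", "-ar", "16000", wav_path],
                    check=True,
                )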

            AutoSub Key Features

            No Key Features are available at this moment for AutoSub.

            AutoSub Examples and Code Snippets

            No Code Snippets are available at this moment for AutoSub.

            Community Discussions

            Trending Discussions on AutoSub

            QUESTION

            Discord.js ForEach function not working in V12
            Asked 2020-Nov-20 at 10:58

            I had a V11 code function to add a role if members were assigned a role from an integration, in this case giving them a role when they get their automatic Twitch sub role.

            In V12 I cannot get it to work. Thanks for the help in advance.

            PS: This is only meant for one server, hence why I don't clarify which guild.

            V11 code:

            ...

            ANSWER

            Answered 2020-Nov-19 at 19:01

            client.guild.members.cache.forEach(...

            Could it be because you called 'guild' instead of 'guilds'?

            [edit] By the way, it seems to me that 'guilds' in v12 returns a GuildManager and not a Collection as in v11, so maybe there is additional unwrapping that should occur before you get to use 'members'. That leads me to think that you should probably call 'cache' before 'members'.

            [edit] ...and since 'cache' returns a Collection, you'll probably have to use an appropriate 'get(...)' to obtain the Guild.

            discord.js

            Source https://stackoverflow.com/questions/64905510

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install AutoSub

            1. Clone the repo. All further steps should be performed while in the AutoSub/ directory.
               $ git clone https://github.com/abhirooptalasila/AutoSub
               $ cd AutoSub
            2. Create a pip virtual environment to install the required packages.
               $ python3 -m venv sub
               $ source sub/bin/activate
               $ pip3 install -r requirements.txt
            3. Download the model and scorer files from the DeepSpeech repo. The scorer file is optional, but it greatly improves inference results.
               # Model file (~190 MB)
               $ wget https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.pbmm
               # Scorer file (~950 MB)
               $ wget https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.scorer
            4. Create two folders, audio/ and output/, to store the audio segments and the final SRT file.
               $ mkdir audio output
            5. Install FFmpeg. If you're running Ubuntu, this should work fine.
               $ sudo apt-get install ffmpeg
               $ ffmpeg -version   # I'm running 4.1.4
            6. [OPTIONAL] To generate subtitles faster, you can use the GPU package instead. Make sure to install the appropriate CUDA version.
               $ source sub/bin/activate
               $ pip3 install deepspeech-gpu
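
            Once the model and scorer are downloaded, a quick sanity check is to load them with the DeepSpeech Python API and transcribe a short 16 kHz mono WAV. This is an assumed snippet, not part of AutoSub; test.wav is a placeholder file name.

            import wave
            import numpy as np
            from deepspeech import Model

            ds = Model("deepspeech-0.9.3-models.pbmm")                 # model from step 3
            ds.enableExternalScorer("deepspeech-0.9.3-models.scorer")  # optional, improves results

            with wave.open("test.wav", "rb") as w:                     # any 16 kHz mono 16-bit WAV
                audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

            print(ds.stt(audio))                                       # prints the transcription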

            Support

            I would love to follow up on any suggestions/issues you find :).

            CLONE
          • HTTPS

            https://github.com/abhirooptalasila/AutoSub.git

          • CLI

            gh repo clone abhirooptalasila/AutoSub

          • SSH

            git@github.com:abhirooptalasila/AutoSub.git
