FFmpeg | Mirror of https://git.ffmpeg.org/ffmpeg.git | Video Utils library

 by FFmpeg | C | Version: n3.4.13 | License: Non-SPDX

kandi X-RAY | FFmpeg Summary

FFmpeg is a C library typically used in Video, Video Utils applications. FFmpeg has no reported bugs and medium support. However, it has 20 reported vulnerabilities and a Non-SPDX license. You can download it from GitHub.

FFmpeg is a collection of libraries and tools to process multimedia content such as audio, video, subtitles and related metadata.

            kandi-support Support

              FFmpeg has a medium active ecosystem.
              It has 36733 star(s) with 10987 fork(s). There are 1372 watchers for this library.
              It had no major release in the last 6 months.
              FFmpeg has no issues reported. There is 1 open pull request and 0 closed requests.
              It has a neutral sentiment in the developer community.
              The latest version of FFmpeg is n3.4.13.

            kandi-Quality Quality

              FFmpeg has 0 bugs and 0 code smells.

            kandi-Security Security

              FFmpeg has 20 vulnerability issues reported (0 critical, 12 high, 8 medium, 0 low).
              FFmpeg code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              FFmpeg has a Non-SPDX License.
              A Non-SPDX license may be an open-source license that is not SPDX-compliant, or a non-open-source license; you need to review it closely before use.

            kandi-Reuse Reuse

              FFmpeg releases are not available. You will need to build from source code and install.
              It has 542 lines of code, 2 functions and 6 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify library functionality and avoid rework. It currently covers the most popular Java, JavaScript and Python libraries.

            FFmpeg Key Features

            No Key Features are available at this moment for FFmpeg.

            FFmpeg Examples and Code Snippets

            No Code Snippets are available at this moment for FFmpeg.

            Community Discussions


            How to generate video preview thumbnails using nodejs and ffmpeg?
            Asked 2022-Mar-27 at 18:50

            I am creating a custom video player, and I would like to add a video preview when the user hovers over the progress bar.

            I am able to generate thumbnails using FFmpeg as follows.



            Answered 2022-Mar-27 at 18:50

            You will have to make your own tool for creating the WEBVTT file. It's a simple process: you just need to get the information you need and fill it into the following format:
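The answer's original snippet is not shown above. As a hedged illustration (the thumbnail file names and the interval are assumptions, not from the answer), a small Python sketch that writes such a WEBVTT file, one cue per thumbnail image:

```python
def fmt_ts(seconds):
    """Format seconds as a WEBVTT timestamp (HH:MM:SS.mmm)."""
    ms = int(round((seconds - int(seconds)) * 1000))
    s = int(seconds)
    h, rem = divmod(s, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def build_webvtt(duration, interval, url_pattern="thumb_{i}.jpg"):
    """Build a WEBVTT string mapping time ranges to thumbnail URLs."""
    lines = ["WEBVTT", ""]
    start, i = 0.0, 0
    while start < duration:
        end = min(start + interval, duration)
        lines.append(f"{fmt_ts(start)} --> {fmt_ts(end)}")
        lines.append(url_pattern.format(i=i))
        lines.append("")
        start, i = end, i + 1
    return "\n".join(lines)

vtt = build_webvtt(duration=10, interval=5)
print(vtt)
```

The player then loads the .vtt as a metadata text track and, on hover, shows the thumbnail of the cue whose time range contains the hovered position.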

            Source https://stackoverflow.com/questions/69846515


            Convert .mp4 to gif using ffmpeg in golang
            Asked 2022-Jan-06 at 16:09

            I want to convert my mp4 file to GIF format. I used a command that works in the command prompt, i.e., it converts my .mp4 into a GIF, but from Go it does not do anything. Here is my command:



            Answered 2022-Jan-06 at 16:09

            This is not exactly what you are looking for, but it is possible to do it like this:
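The answer's command is missing above. As an illustrative sketch (not the answer's original code, and shown in Python rather than Go), one common mp4-to-GIF approach builds an ffmpeg filtergraph that generates and applies a palette; in Go the same argument list would be passed to exec.Command:

```python
import subprocess

def mp4_to_gif_cmd(src, dst, fps=10, width=480):
    """Build an ffmpeg command converting an MP4 to a GIF, using
    palettegen/paletteuse in one filtergraph for better colors."""
    filtergraph = (
        f"fps={fps},scale={width}:-1:flags=lanczos,"
        "split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse"
    )
    return ["ffmpeg", "-y", "-i", src, "-filter_complex", filtergraph, dst]

cmd = mp4_to_gif_cmd("input.mp4", "output.gif")
# subprocess.run(cmd, check=True)  # uncomment when ffmpeg is on PATH
print(" ".join(cmd))
```

Passing the arguments as a list (rather than one shell string) avoids the most common reason such a command "does nothing" when moved from the command prompt into a program: shell-specific quoting of the filtergraph.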

            Source https://stackoverflow.com/questions/70607895


            Win10 Electron Error: Passthrough is not supported, GL is disabled, ANGLE is
            Asked 2022-Jan-03 at 01:54

            I have an Electron repo (https://github.com/MartinBarker/RenderTune) which used to work fine on Windows 10 when run from the command prompt. After a couple of months I came back on a fresh Windows 10 machine with an Nvidia GPU, and the Electron app prints an error in the window on startup:



            Answered 2022-Jan-03 at 01:54

            You can try disabling hardware acceleration using app.disableHardwareAcceleration() (see the docs). I don't think this is a fix, though; it just makes the message go away for me.

            Example Usage


            Source https://stackoverflow.com/questions/70267992


            Conversion from BGR to YUYV with OpenCV Python
            Asked 2021-Dec-27 at 16:07

            I have been trying to convert a BGR captured frame into the YUYV format.

            In OpenCV Python I can convert YUYV to BGR with the COLOR_YUV2BGR_YUY2 conversion code, but I cannot do the reverse (there is no conversion code for this operation; I have tried COLOR_BGR2YUV but it does not convert correctly). I am curious how to convert a 3-channel BGR frame into a 2-channel YUYV frame.

            Here you can see the code I am using to switch the camera mode to capture YUYV and convert it into BGR. I am looking for a replacement for cap.set(cv2.CAP_PROP_CONVERT_RGB, 0), so I can capture BGR and convert it into YUYV without that call (it is an optional capture setting, and Windows DirectShow ignores the flag).



            Answered 2021-Dec-27 at 14:34

            You can use the following code to convert your image to YUV and after that create YUYV from YUV. In this example an image is given as input to the program:
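The answer's code isn't included above. As a hedged sketch of the packing step it describes: given a (h, w, 3) YUV frame (e.g. from cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)), the 2-channel YUYV layout keeps every Y and horizontally subsamples U and V 2:1, which can be done with NumPy alone:

```python
import numpy as np

def yuv_to_yuyv(yuv):
    """Pack a (h, w, 3) YUV image into a (h, w, 2) YUYV (YUY2) buffer.
    Byte order per pixel pair is Y0 U Y1 V; the width must be even.
    U/V are taken from the even columns (averaging each pair is an
    alternative subsampling choice)."""
    h, w, _ = yuv.shape
    assert w % 2 == 0, "YUYV requires an even frame width"
    yuyv = np.empty((h, w, 2), dtype=yuv.dtype)
    yuyv[:, :, 0] = yuv[:, :, 0]        # full-resolution Y
    yuyv[:, 0::2, 1] = yuv[:, 0::2, 1]  # U for each pixel pair
    yuyv[:, 1::2, 1] = yuv[:, 0::2, 2]  # V for each pixel pair
    return yuyv

frame = np.arange(2 * 4 * 3, dtype=np.uint8).reshape(2, 4, 3)
packed = yuv_to_yuyv(frame)
```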

            Source https://stackoverflow.com/questions/70496578


            How to list the symbols in this WASM module?
            Asked 2021-Dec-25 at 14:45

            I'm looking to do some in-browser video work using good ol' FFmpeg and Rust. Simple examples, where the caller interacts with the ffmpeg command line, abound. More complex examples are harder to find. In my case I wish to extract, process and rotate discrete frames.

            Clipchamp makes impressive use of WASM and FFmpeg, however the downloaded WASM file (there's only one) will not reveal itself to wasm-nm nor wasm-decompile, both complaining about the same opcode:

            Has anyone wisdom to share on how I can (1) introspect the WASM module in use or (2) more generally advise on how I can (using WASM and Rust, most likely) work with video files?



            Answered 2021-Dec-25 at 14:45

            The WASM module uses SIMD instructions (prefixed with 0xfd, and also known as vector instructions), which were merged into the spec just last month. The latest release of wasm-decompile therefore doesn't have these enabled by default yet, but will in the next release. Meanwhile, you can enable them manually with the --enable-simd command line option. This invocation works for me with the latest release:

            Source https://stackoverflow.com/questions/70454530


            FFMPEG's xstack command results in out of sync sound, is it possible to mix the audio in a single encoding?
            Asked 2021-Dec-16 at 21:11

            I wrote a Python script that generates an xstack complex filter command. The video inputs are a mixture of several formats described here:

            I have 2 commands generated, one for the xstack filter, and one for the audio mixing.

            Here is the stack command: (sorry the text doesn't wrap!)



            Answered 2021-Dec-16 at 21:11

            I'm a bit confused as how FFMPEG handles diverse framerates

            It doesn't, which would cause a misalignment in your case. The vast majority of filters (essentially, any that deal with multiple sources and make use of frames), including the Concatenate filter, require that the sources have the same framerate.

            For the concat filter to work, the inputs have to be of the same frame dimensions (e.g., 1920⨉1080 pixels) and should have the same framerate.

            (emphasis added)

            The documentation also adds:

            Therefore, you may at least have to add a scale or scale2ref filter before concatenating videos. A handful of other attributes have to match as well, like the stream aspect ratio. Refer to the documentation of the filter for more info.

            You should convert your sources to the same framerate first.
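As a hedged sketch (not from the original answer; file names and the target rate are assumptions), each source could first be re-timed to a common framerate with ffmpeg's fps filter, and only the normalized outputs fed to the xstack command:

```python
import subprocess

def normalize_fps_cmd(src, dst, fps=30):
    """Build an ffmpeg command that re-times one source to a common
    framerate, so all xstack/concat inputs match."""
    return ["ffmpeg", "-y", "-i", src, "-vf", f"fps={fps}", dst]

cmds = [normalize_fps_cmd(f"in_{i}.mp4", f"norm_{i}.mp4") for i in range(3)]
# for cmd in cmds:
#     subprocess.run(cmd, check=True)  # requires ffmpeg on PATH
```

Normalizing in a separate pass keeps the (already large) xstack command unchanged, at the cost of one intermediate encode per input.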

            Source https://stackoverflow.com/questions/70020874


            How to capture messages written to stderr by OpenCV?
            Asked 2021-Dec-07 at 13:29

            In case of invalid parameters, cv2.VideoWriter writes messages to stderr. Here is a minimal example:



            Answered 2021-Dec-07 at 13:29

            I've found the wurlitzer library, which can do exactly that, i.e., capture the streams written to by a C library:
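The answer's snippet is missing above. wurlitzer works at the file-descriptor level; a stdlib-only sketch of the same idea (temporarily redirecting fd 2, so writes from C libraries are captured too) might look like this:

```python
import os
import sys
import tempfile

def capture_fd2(func):
    """Run func while everything written to the process-level stderr
    (file descriptor 2), including C-library output, is captured."""
    saved = os.dup(2)                     # keep the real stderr
    with tempfile.TemporaryFile(mode="w+b") as tmp:
        os.dup2(tmp.fileno(), 2)          # point fd 2 at the temp file
        try:
            func()
        finally:
            sys.stderr.flush()
            os.dup2(saved, 2)             # restore the real stderr
            os.close(saved)
        tmp.seek(0)
        return tmp.read().decode()

# Simulate a C library writing directly to fd 2
out = capture_fd2(lambda: os.write(2, b"low-level warning\n"))
```

Plain Python-level tricks like contextlib.redirect_stderr do not catch these messages, because OpenCV's native code bypasses sys.stderr and writes straight to the descriptor.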

            Source https://stackoverflow.com/questions/70259880


            How to animate this optimization model correctly
            Asked 2021-Nov-29 at 00:57

            I have implemented a simple randomized, population-based optimization method, the Grey Wolf Optimizer. I am having some trouble properly capturing the Matplotlib plots at each iteration using the camera package.

            I am running GWO for the objective function f(x,y) = x^2 + y^2. I can only see the candidate solutions converging to the minima, but the contour plot doesn't show up.

            Do you have any suggestions, how can I display the contour plot in the background?

            GWO Algorithm implementation



            Answered 2021-Nov-29 at 00:57

            Is it possible that the line x = np.linspace(LB[0],LB[1],1000) should be x = np.linspace(LB[0],UB[1],1000) instead? With your current definition, x is an array filled only with the value -10, which means you are unlikely to find a contour. Another thing you might want to do is move the cont = plt.contour(X1,X2,Z,20,linewidths=0.75) line inside your plot_search_agent_positions function to ensure that the contour is plotted at each iteration of the animation. Once you make those changes, the code looks like this:
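The answer's full code is not reproduced above; as a sketch of just the two fixes (the bounds and variable names are assumptions based on the question's f(x, y) = x^2 + y^2 setup):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

# Assumed search bounds for f(x, y) = x^2 + y^2
LB, UB = np.array([-10.0, -10.0]), np.array([10.0, 10.0])

# Fix 1: span from the lower to the UPPER bound
x = np.linspace(LB[0], UB[0], 200)
y = np.linspace(LB[1], UB[1], 200)
X1, X2 = np.meshgrid(x, y)
Z = X1**2 + X2**2

def plot_search_agent_positions(positions):
    # Fix 2: redraw the contour inside the per-iteration plot call,
    # so it appears in every captured animation frame
    plt.contour(X1, X2, Z, 20, linewidths=0.75)
    plt.scatter(positions[:, 0], positions[:, 1], c="red", s=10)

plot_search_agent_positions(np.random.uniform(-10, 10, size=(5, 2)))
plt.savefig("frame.png")
```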

            Source https://stackoverflow.com/questions/70145946


            Docker is pushing all layers instead of the last one
            Asked 2021-Nov-26 at 13:41

            Yesterday I pushed the base image layer for my app, which contained the environment needed to run my_app.

            That push was massive but it is done and up in my repo.

            This is currently the image situation in my local machine:



            Answered 2021-Nov-26 at 13:41

            docker push pushes all layers of the image that are not identical to layers already in the repository (i.e., it skips the layers that did not change, uploading up to 5 at a time by default), not a single layer, resulting in the end in a new image in your repository.

            You can see it as if Docker made a diff between the local and the remote image and pushed only the differences between the two. The result is still a complete image, equal to the one on your machine, but reached with "less work", since it doesn't need to push literally all the layers.

            In your case it's taking a lot of time because the 4 GB layer changed (the content of what you are copying is different now), making Docker push a large part of your image's size.

            Link for the docker push documentation, if needed: https://docs.docker.com/engine/reference/commandline/push/

            Source https://stackoverflow.com/questions/70123223


            How to detect the presence of a PAL or NTSC signal using DirectShow?
            Asked 2021-Nov-16 at 15:50

            In order to record the composite-video signal from a variety of analog cameras, I use a basic USB video capture device produced by AverMedia (C039).

            I have two analog cameras, one produces a PAL signal, the other produces an NTSC signal:

            1. PAL B, 625 lines, 25 fps
            2. NTSC M, 525 lines, 29.97 fps (i.e. 30/1.001)

            Unfortunately, the driver for the AverMedia C039 capture card does not automatically set the correct video standard based on which camera is connected.


            I would like the capture driver to be configured automatically for the correct video standard, either PAL or NTSC, based on the camera that is connected.


            The basic idea is to set one video standard, e.g. PAL, check for signal, and switch to the other standard if no signal is detected.

            By cobbling together some examples from the DirectShow documentation, I am able to set the correct video standard manually, from the command line.

            So, all I need to do is figure out how to detect whether a signal is present, after switching to PAL or NTSC.

            I know it must be possible to auto-detect the type of signal, as described e.g. in the book "Video Demystified". Moreover, the (commercial) AMCap viewer software actually proves it can be done.

            However, despite my best efforts, I have not been able to make this work.

            Could someone explain how to detect whether a PAL or NTSC signal is present, using DirectShow in C++?

            The world of Windows/COM/DirectShow programming is still new to me, so any help is welcome.

            What I tried

            Using the IAMAnalogVideoDecoder interface, I can read the current standard (get_TVFormat()), write the standard (put_TVFormat()), read the number of lines, and so on.

            The steps I took can be summarized as follows:



            Answered 2021-Nov-16 at 15:35

            The mentioned property page likely pulls the data using IAMAnalogVideoDecoder, in particular the get_HorizontalLocked method. Note that you might be limited in receiving a valid status by the requirement to have the filter graph in a paused or running state, which in turn might require connecting a renderer to complete the data path (Video Renderer, Null Renderer, or another renderer of your choice).

            See also this question on Null Renderer deprecation and source code for the worst case scenario replacement.

            Source https://stackoverflow.com/questions/69980501

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network



            Install FFmpeg

            You can download it from GitHub.


            The offline documentation is available in the doc/ directory. The online documentation is available on the main website (https://ffmpeg.org) and in the wiki (https://trac.ffmpeg.org).

          • CLI

            gh repo clone FFmpeg/FFmpeg

