python-ffmpeg | A python binding for FFmpeg which provides sync and async APIs | Video Utils library
kandi X-RAY | python-ffmpeg Summary
A python interface for FFmpeg using asyncio
Top functions reviewed by kandi - BETA
- Execute ffmpeg
- Build command line arguments
- Read lines from stream
- Build a list of command line options
- Build option list
- Write stream to stdin
- Creates a new subprocess
- Drain the process
- Read stderr
- Add input file to ffmpeg
- Add output to ffmpeg
- Set global options
- Print progress
- Parse a progress line
- Check whether progress exceeds 100 seconds
- Terminate the ffmpeg process
python-ffmpeg Key Features
python-ffmpeg Examples and Code Snippets
import asyncio
from ffmpeg import FFmpeg
ffmpeg = (
    FFmpeg()
    .option('y')
    .input(
        'rtsp://example.com/cam',
        # Specify file options using kwargs
        rtsp_transport='tcp',
        rtsp_flags='prefer_tcp',
    )
    .output(
        'output.ts',
        # Use a dictionary for options whose names contain special characters
        {'codec:v': 'copy'},
    )
)
Community Discussions
Trending Discussions on python-ffmpeg
QUESTION
In a Python script I want to capture a webcam stream and save it to a video file on the local hard disk, but I want the script to determine the lifecycle of the video. For this I am using the python-ffmpeg library, which is essentially a simple Python SDK around ffmpeg in a subprocess.
This is my snippet currently:
...ANSWER
Answered 2022-Mar-13 at 19:24
Instead of trying to control the buffer size, we have to close FFmpeg gracefully.
To close FFmpeg gracefully, we may write 'q' to the stdin pipe, as described in this post.
When we start recording, the following message appears: Press [q] to stop, [?] for help.
Writing 'q' to stdin simulates pressing the q key.
Open the FFmpeg sub-process with pipe_stdin=True:
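As a minimal sketch of this pattern (the Python child process below is a stand-in for the real FFmpeg command, which would be something like ffmpeg -i <input> output.mp4; writing b'q' works the same way on FFmpeg's stdin):

```python
import subprocess
import sys

# Stand-in for FFmpeg: a child that exits cleanly once it reads 'q' on stdin,
# mimicking FFmpeg's "Press [q] to stop" behaviour.
child_code = "import sys\nwhile sys.stdin.read(1) != 'q':\n    pass\n"

# Open the sub-process with a stdin pipe (the equivalent of pipe_stdin=True)
process = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE,
)

# ... recording would run here for as long as the script decides ...

# Graceful shutdown: write 'q' to stdin, simulating a press of the q key
process.stdin.write(b"q")
process.stdin.flush()
process.communicate()      # closes stdin and waits for a clean exit
print(process.returncode)  # 0 on a clean exit
```

The same write/flush/wait sequence applies to the real FFmpeg process once it is opened with a stdin pipe.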
QUESTION
I'm trying to apply a custom python function on every frame of a video, and create the video with modified frames as output. My input video is a mkv file, with variable framerate, and I'd like to get the same thing as output, so one frame in the input matches one in the output at the exact same time.
I tried to use this example of ffmpeg-python. However, it seems that the timestamp info is lost in the pipes. The output video has 689 frames when the input has only 300 (the durations also don't match: 27 s for the output vs 11 s for the input).
I also tried first processing each frame in my video and saving the transformed versions as PNGs. Then I "masked" the input video with the processed frames. This seems better because the output video has the same 11 s duration as the input, but the frame counts don't match (313 vs 300).
Code for the python-ffmpeg solution:
...ANSWER
Answered 2022-Feb-18 at 11:30
I'll answer my own question, as I've been able to solve the issue with the help of kesh in the comments.
There are basically two things:
- vsync passthrough is required for the input video, to keep the number of frames
- another external tool (MKVToolNix) has to be used twice: once to extract timestamps from the initial video and once to apply them to the output
Below is the relevant code to perform the whole operation using python and subprocess. You can use the following line on both input and output video to check that the timestamps are indeed the same for each frame: ffprobe -show_entries packet=pts_time,duration_time,stream_index video.mkv
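A sketch of how those two MKVToolNix steps might look from Python (the file names are hypothetical; mkvextract's timestamps_v2 mode and mkvmerge's --timestamps option are the relevant pieces):

```python
import subprocess

def extract_timestamps_cmd(src, ts_file, track=0):
    # mkvextract dumps the per-frame timestamps of one track to a text file
    return ["mkvextract", src, "timestamps_v2", f"{track}:{ts_file}"]

def apply_timestamps_cmd(processed, ts_file, dst, track=0):
    # mkvmerge re-muxes the processed video, forcing the original timestamps
    return ["mkvmerge", "--output", dst,
            "--timestamps", f"{track}:{ts_file}", processed]

# Usage (requires the MKVToolNix binaries on PATH):
# subprocess.run(extract_timestamps_cmd("input.mkv", "ts.txt"), check=True)
# subprocess.run(apply_timestamps_cmd("processed.mkv", "ts.txt", "output.mkv"), check=True)
print(extract_timestamps_cmd("input.mkv", "ts.txt"))
```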
QUESTION
I'm developing a Python FFmpeg wrapper called ffmpegio, and one feature I want to implement is block-wise avfiltering of raw video and audio data. A block of data is piped to FFmpeg, and Python waits for FFmpeg to process and pipe back the available output data, rinse and repeat. I've got this working for a video feed but am having trouble with PCM audio I/O. Either the PCM encoder or decoder appears to block until stdin is closed. Is there any way around this behavior?
This question is related to another question, "FFmpeg blocking pipe until done?", but none of its answers applies (I think).
Edit #1: (deleted a lot of original text for clarity)
Here are minimal Python examples. First, this is the common script with load_setup() to load video and audio data:
ANSWER
Answered 2022-Jan-27 at 16:09
If anybody else is curious, I was able to answer my own question by running a longer experiment.
(Presumably) the PCM encoder/decoder (I used pcm_f32le for both) initially buffers its input excessively, and the maximum buffer size appears to depend on the sampling rate. It maxes out somewhere between 51200 and 52224 samples.
Once the configuration log is posted on stderr, the output floodgate opens and eventually stabilizes to the expected number of output samples per input sample.
Here is a log of repeatedly writing 1024 samples at a time. The filter is afade, so we expect the same number of output samples. The stdout is read 1024 bytes at a time into a queue on a reader thread, and the main thread retrieves blocks with the timeout set to 10 ms.
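The reader-thread arrangement described above can be sketched as follows (a small Python child stands in for the FFmpeg subprocess; the 1024-byte chunk size and 10 ms timeout match the description):

```python
import queue
import subprocess
import sys
import threading

# Stand-in for the FFmpeg subprocess: a child that writes 4096 bytes to stdout
child = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write('x' * 4096)"],
    stdout=subprocess.PIPE,
)

chunks = queue.Queue()

def reader(pipe, q, chunk_size=1024):
    # Read the pipe in fixed-size chunks until EOF, pushing each onto the queue
    while True:
        data = pipe.read(chunk_size)
        if not data:
            q.put(None)  # EOF sentinel
            break
        q.put(data)

threading.Thread(target=reader, args=(child.stdout, chunks), daemon=True).start()

# Main thread: poll the queue with a short timeout instead of blocking on the pipe
received = b""
while True:
    try:
        data = chunks.get(timeout=0.01)  # 10 ms timeout, as in the log
    except queue.Empty:
        continue
    if data is None:
        break
    received += data

child.wait()
print(len(received))  # 4096
```

Decoupling the blocking read from the main thread this way is what lets the main thread keep writing input blocks while FFmpeg is still buffering.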
QUESTION
I have a video (test.mkv) that I have converted into a 4D NumPy array - (frame, height, width, color_channel). I have even managed to convert that array back into the same video (test_2.mkv) without altering anything. However, after reading this new test_2.mkv back into a new NumPy array, the array of the first video differs from the second video's array, i.e. their hashes don't match and the numpy.array_equal() function returns False. I have tried using both python-ffmpeg and scikit-video but cannot get the arrays to match.
ANSWER
Answered 2021-Mar-29 at 21:05
Getting the same hash when writing and reading a video file requires careful attention.
Before comparing the hash, try to look at the video first.
Executing your code gave me the following output (first frame of video_2):
When the input (first frame of video) is:
I suggest the following modifications:
- Use an AVI container (instead of MKV) for storing the test_2 video in raw video format. The AVI container was originally designed for storing raw video. There may be a way to store raw or lossless RGB video in an MKV container, but I am not aware of such an option.
- Set the input pixel format of the test_2 video. Add the argument pixel_format='rgb24'. Note: I changed it to pixel_format='bgr24', because AVI supports bgr24 and not rgb24.
- Select a lossless video codec for the test_2 video. You may select vcodec='rawvideo' (the rawvideo codec is supported by AVI but not by MKV).
Note:
For getting an equal hash, you need to look for a lossless video codec that supports the rgb24 (or bgr24) pixel format.
Most lossless codecs convert the pixel format from RGB to YUV.
The RGB-to-YUV conversion has rounding errors that prevent an equal hash.
(I suppose there are ways to get around it, but it's a bit complicated.)
Here is your complete code with few modifications:
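Since the complete code depends on the asker's setup, here is a hedged sketch of just the FFmpeg arguments that make the round trip lossless (rawvideo + bgr24 in an AVI container, on both the write and the read side; the resolution and file names are placeholders):

```python
def write_raw_avi_cmd(width, height, fps, out_path):
    # Encode raw bgr24 frames arriving on stdin into an AVI with no compression
    return ["ffmpeg", "-y",
            "-f", "rawvideo", "-pix_fmt", "bgr24",
            "-s", f"{width}x{height}", "-r", str(fps),
            "-i", "pipe:",
            "-vcodec", "rawvideo", "-pix_fmt", "bgr24",
            out_path]

def read_raw_avi_cmd(in_path):
    # Decode the AVI back to raw bgr24 frames on stdout
    return ["ffmpeg", "-i", in_path,
            "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:"]

print(write_raw_avi_cmd(320, 240, 25, "test_2.avi"))
```

Because no encode or decode step ever leaves bgr24, the bytes read back are bit-identical to the bytes written, so the array hashes match.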
QUESTION
I have 2 files:
- video.webm - contains ONLY video
- audio.webm - contains ONLY audio
I am trying to merge them into one file without re-encoding, using python-ffmpeg
...ANSWER
Answered 2020-May-19 at 14:57
“Concatenate” means making one stream run after the other, but you want to merge both streams at the same time. So, remove the ffmpeg.concat step, and just pass both streams into one call to ffmpeg.output:
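A minimal sketch of the resulting FFmpeg invocation (file names are placeholders; -c copy keeps both streams unencoded, which is what passing the two streams to a single output call produces):

```python
import subprocess

# One ffmpeg call, two inputs, stream copy: merges without re-encoding
cmd = ["ffmpeg", "-y",
       "-i", "video.webm",  # video-only input
       "-i", "audio.webm",  # audio-only input
       "-c", "copy",        # copy both streams, no re-encoding
       "merged.webm"]
# subprocess.run(cmd, check=True)  # requires the ffmpeg binary and input files
print(cmd)
```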
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install python-ffmpeg
You can use python-ffmpeg like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
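A typical sequence following that advice might look like this (package name python-ffmpeg on PyPI; the virtual-environment path is a placeholder):

```shell
# Create and activate an isolated virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Keep the packaging tools current
pip install --upgrade pip setuptools wheel

# Install python-ffmpeg itself
pip install python-ffmpeg
```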