chromaprint | C library for generating audio fingerprints used by AcoustID | Audio Utils library
kandi X-RAY | chromaprint Summary
Chromaprint is an audio fingerprint library developed for the AcoustID project. It is designed to identify near-identical audio, and the fingerprints it generates are as compact as possible to achieve that. It is not a general-purpose audio fingerprinting solution: it trades precision and robustness for search performance. The target use cases are full audio file identification, duplicate audio file detection, and long audio stream monitoring.
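For a quick feel for the output, the chromaprint distribution includes the fpcalc command-line tool; a minimal sketch (the file name is a placeholder):
fpcalc -json song.mp3
This prints the track duration and the compact fingerprint string that an AcoustID lookup expects.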
Community Discussions
Trending Discussions on chromaprint
QUESTION
I have the following scenario that is driving me crazy:
I have a capture device. Here is the ffprobe output for it:
...ANSWER
Answered 2022-Apr-08 at 00:16
*.mjpeg is a raw stream format. FFmpeg documentation states of raw muxers: "They do not store timestamps or metadata." So, instead try storing the data in an mp4 container:
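A minimal sketch of that suggestion (the Linux device name and input flags are assumptions, since the original capture setup is elided):
ffmpeg -f v4l2 -input_format mjpeg -i /dev/video0 -c:v copy capture.mp4
The -c:v copy keeps the MJPEG frames as-is, while the MP4 container adds the timestamps the raw stream lacked.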
QUESTION
I have a video with some background music in it.
I wish to add a piece of spoken dialogue at a particular location in the video, such that the background music is lowered for the entire duration of the dialogue audio.
I found a similar solution using sidechaincompress, which just works for mp3. I made some changes to it so that it includes the video too (-map 0:v). However, now the audio is cut short as soon as the dialogue ends.
ANSWER
Answered 2022-Mar-09 at 16:36
Try this (the short clip inserted at the 3-second mark):
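The original command is elided above; a hedged reconstruction of the approach (file names assumed) delays the dialogue by 3 seconds with adelay, ducks the music via sidechaincompress, and mixes with duration=first so the output keeps the full music length instead of stopping when the dialogue ends:
ffmpeg -i video.mp4 -i speech.mp3 -filter_complex "[1:a]adelay=3000|3000,asplit=2[sc][dlg];[0:a][sc]sidechaincompress=threshold=0.05:ratio=20:release=500[bg];[bg][dlg]amix=inputs=2:duration=first[aout]" -map 0:v -map "[aout]" -c:v copy output.mp4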
QUESTION
I'm trying to convert all the songs in a folder from flac to alac. All the files in the folder are flac.
What I'm writing:
...ANSWER
Answered 2022-Feb-03 at 22:07
The forfiles command is a nasty beast, because there are several caveats:
- it is slow (particularly because it cannot run internal commands of the hosting command prompt);
- it handles wildcards differently than most other commands, hence /M *.* does not match all files but only those with an extension; to really match all files, use /M * or skip it, since it is the default anyway;
- it applies backslash-escaping, which is particularly annoying with paths ending in \, like the root directory of a drive /P "D:\", which causes a syntax error since the closing quotation mark is considered as escaped; to work around that, preferably append a . like /P "D:\.", or remove the quotation marks like /P D:\, though this exposes potential whitespaces or special characters to the parser;
- all of the special @-variables that return the path and/or name of iterated items provide the values in quoted manner, which is particularly frustrating when it comes to concatenation;
- it iterates over both files and directories that match the given mask; to distinguish between them you could use the special @isdir variable, but you will need an if statement for this (like if @isdir==FALSE or if @isdir==TRUE), which is an internal cmd.exe command, requiring its explicit instantiation even when you would not otherwise need it;
- handling of the command behind /C and its arguments is terribly implemented, leading to the problem that directly running external commands (so without cmd /C) may fail, unless you are aware of the mostly working fix of stating the command name twice (like /C "command.exe command.exe --parameter argument");
- even its basically nice /D option (which is the only reason why forfiles might suit better than for) to filter for the relative last-modification date (but not time) is badly implemented when a positive number (like /D +1) is used, because this uselessly points to the future.
All of these issues lead me to the point that I suggest not to use forfiles and to use a standard for loop instead, like this (note also the changed mask *.flac):
In a batch-file:
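The batch file itself is elided above; a plausible reconstruction of the loop the answer describes (output naming is an assumption) is:
for %%F in (*.flac) do ffmpeg -i "%%~F" -vn -c:a alac "%%~nF.m4a"
Here %%~nF expands to the file name without its extension, and -vn skips any embedded cover-art stream.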
QUESTION
I'm regularly having an issue with hvc1 videos getting an inconsistent number of frames between ffprobe info and FFmpeg info, and I would like to know what could be the reason for this issue and whether it's possible to solve it without re-encoding the video.
I wrote the following sample script with a test video I have
I split the video into 5-second segments; ffprobe gives the expected video length, but FFmpeg gives 3 frames fewer than expected on every segment but the first one.
The issue is exactly the same if I split by 10 seconds or any other interval; I always lose 3 frames.
I noted that the first segment is always 3 frames smaller (on ffprobe) than the other ones, and it's the only consistent one.
Here is an example script I wrote to test this issue :
...ANSWER
Answered 2022-Jan-11 at 22:08
The source of the differences is that ffprobe counts the discarded packets, while FFmpeg does not count them as frames.
Your results are consistent with video stream that is created with 3 B-Frames (3 consecutive B-Frames for every P-Frame or I-Frame).
According to Wikipedia:
I‑frames are the least compressible but don't require other video frames to decode.
P‑frames can use data from previous frames to decompress and are more compressible than I‑frames.
B‑frames can use both previous and forward frames for data reference to get the highest amount of data compression.
When splitting a video with P-Frame and B-Frame into segments without re-encoding, the dependency chain breaks.
- There are (almost) always frames that depend upon frames from the previous segment or the next segment.
- Those frames are kept, but the matching packets are marked as "discarded" (flagged with AV_PKT_FLAG_DISCARD), as the probe sketch below illustrates.
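One way to observe this (segment name assumed) is to let ffprobe count both packets and decoded frames on a single segment; the packet count should exceed the frame count by the number of discarded packets:
ffprobe -v error -select_streams v:0 -count_frames -count_packets -show_entries stream=nb_read_frames,nb_read_packets segment_001.mp4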
For the purpose of working on the same dataset, we may build a synthetic video (to be used as input) with the following command:
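The command itself is elided above; a sketch that builds a comparable input (a test pattern encoded with three B-frames, names assumed) might be:
ffmpeg -y -f lavfi -i testsrc2=size=320x240:rate=30:duration=30 -c:v libx264 -bf 3 -g 30 -pix_fmt yuv420p synthetic.mp4
ffmpeg -i synthetic.mp4 -c copy -f segment -segment_time 5 seg_%03d.mp4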
QUESTION
I'm trying to write a shell script to analyse some videos by using the first frame of each second of video as a basis. I'm using ffmpeg to extract the frames. I thought I'd hit gold when this thread came up in my searches, but it doesn't handle the deinterlacing.
I'm using a time-based approach that works well for various formats as long as the video is progressive. It doesn't work so well for interlaced video (only the first field is output in that case, creating a half-height image). I've tried various combinations of deinterlacing (yadif/bwdif) with select, but the filter chains I create either cause errors or still return a half-sized image.
Here is my call with a filter that works correctly for a progressive video source:
ffmpeg -i $infile -vf "select='if(eq(n\,0),1,floor(t)-floor(prev_selected_t))'" -vsync 0 $outfile
The following still return only half-height images for interlaced source:
... -vf "bwdif=0,select='if(eq(n\,0),1,floor(t)-floor(prev_selected_t))'"
... -vf "bwdif=0,select='between(mod(n\,$ips)\,1\,2)'"
-- $ips is images per second
... -vf "select='between(mod(n\,$ips)\,1\,2)',bwdif=0"
I've also tried various permutations of the above with explicit frame references (-r, -vframes, ...), still no joy.
Can someone help me with the syntax?
--EDIT--
Here is the complete output of running the command with the first filter:
ANSWER
Answered 2021-Dec-22 at 04:45
Your input is detected as interlaced but with half-height (288) and double the framerate (50). This may be due to missing or unrecognized boundary markers in the JPEG2000 packets. I assume this is meant to be PAL --> 720x576@25i.
Try using the tinterlace filter first to interleave the input "frames" to double-height and half fps, and then continue the original sequence of filters.
"tinterlace=mode=merge,bwdif=0,select='if(eq(n\,0),1,floor(t)-floor(prev_selected_t))'"
QUESTION
I kinda figured out the problem has to do with apache mpm event... When I send my request from the first client, my script executes linearly. When I send my request from the second client, it literally starts at the point in the code where the last request currently is. So that might be a problem with shared memory between those threads, but I'm not an apache professional; maybe somebody has an idea?
I used apache mpm prefork before, so every request got its own process and own memory etc., but there was a problem reading jpg files with that one, and it worked after I changed apache to mpm event; see the following: https://github.com/python-pillow/Pillow/issues/5834#issue-comment-box
Inside my VM (running same apache version with mpm event) everything is working fine.
I've got a python script with flask running via wsgi on an apache2 webserver. Inside that script I have the following lines (458-461):
...ANSWER
Answered 2021-Dec-04 at 15:39
Okay, for the love of God I'm done... needed 4 days for that **** to find out I had an apache mod activated (fcgid) which literally assigns every single line of your code to a single thread... so I deactivated that one and now it's working without any issues...
So in short - what I did:
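The steps are elided above; on a Debian-style Apache they would roughly be:
sudo a2dismod fcgid
sudo systemctl restart apache2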
QUESTION
I have a HEVC encoded 4k Video captured from a Reolink IP-Cam with VLC Player 3.0.14. I want to extract each frame of the video but the encoding stops after only two frames. This also happens when I try to convert into another video format. This happens on Windows and Ubuntu with different ffmpeg versions.
My command is:
...ANSWER
Answered 2021-Dec-09 at 04:08
MP4s may have an edit list which tells the player to construct a virtual playback timeline. Occasionally, this timeline can leave out frames needed to decode the stream correctly, or the sync samples table may be wrong.
Add -ignore_editlist to demux the stream without any edits.
ffmpeg.exe -ignore_editlist 1 -i vlc-record-2021-06-07-08h01m03s-rtsp___192.168.178.92_554_h265Preview_01_main-.mp4 output/%05d.jpg
QUESTION
Got this Dahua VTO stream link that works with omxplayer, but VLC won't play it:
...ANSWER
Answered 2021-Nov-10 at 05:29
So what happened is that the library in Debian providing support for live555 was removed in February of this year; this affects all downstream distros, including but not limited to RPi OS and Ubuntu:
https://askubuntu.com/a/1363113
The 2 active versions were 2020.01.19-1 and 2018.11.26-1.1. Live555 has since added GPL license headers to the offending files; however, the RFC issue remains.
Now you may be tempted to just download the latest Live555 source code and compile it... it does not work. There have been changes to function names and structures referenced by VLC, and as such VLC will not compile against the source. You need to get an older version; I specifically used this one, which is a tweaked snapshot from 2020 prior to the modifications that prevent VLC compilation:
https://github.com/rgaufman/live555
The configuration you want is ./genMakefiles linux-with-shared-libraries. I do not know if it is required, but since my system is x86-64, I added -m64 to the compiler options first.
After compilation and install, I went on to compile VLC, adding '--enable-live555' and '--with-live555-tree=extras/live555-master' after placing the root Live555 folder in the VLC extras folder. However, VLC failed to compile; it turns out Live555's make install does not copy all the header files to where VLC is looking. They were dropped as 4 subfolders into /usr/local/include/, and the actual libs into /usr/local/lib/. Adding the correct CXX/CPP flags will make it look where they were put; however, I put them all in a single folder and used 1 flag.
I also had to '--disable-mod' to work around a dependency version issue that I had no interest in fixing, since I do not use modplug or any mod files.
50 minutes later... VLC successfully compiled! However, it was expecting the libraries for Live555 to be in /usr/lib/, not /usr/local/lib/. Since it took so long to compile, I was just fine with linking or copying the libraries into the expected folder, and after that VLC works with RTSP when linked to the new file. Or you can choose to maintain the original VLC and run the new file directly if you need to load the camera feeds.
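Condensed into a rough sketch (paths are placeholders and details will vary per system), the build described above is:
cd live555
./genMakefiles linux-with-shared-libraries
make && sudo make install
cd /path/to/vlc
./configure --enable-live555 --with-live555-tree=extras/live555-master --disable-mod
make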
QUESTION
I'm trying to publish a video using ffmpeg. For publishing, I'm using Python frame images as the input source. But when it streams, the video colours are different.
ANSWER
Answered 2021-Oct-28 at 06:58
If you are reading JPEGs, PNGs, or video into OpenCV, it will hold them in memory in BGR channel ordering. If you are feeding such frames into ffmpeg, you must either:
- convert the frames to RGB first inside OpenCV with cv2.cvtColor(... cv2.COLOR_BGR2RGB) before sending to ffmpeg, or
- tell ffmpeg that the frames are in BGR order by putting -pix_fmt bgr24 before the input specifier, i.e. before -i - (sketched below).
QUESTION
I'm trying to record video from my IP camera stream with ffmpeg on the command line.
...ANSWER
Answered 2021-Oct-11 at 16:24
Stream #0:1 is unknown data, and the MP4 muxer does not know what to do with it.
- Map only the video by changing -map 0 to -map 0:v.
- Or keep -map 0 and omit the data stream by adding -map -0:d.
See FFmpeg Wiki: Map.
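Both variants as full commands (input name assumed):
ffmpeg -i input.mp4 -map 0:v -c copy output.mp4
ffmpeg -i input.mp4 -map 0 -map -0:d -c copy output.mp4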
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.