Demux | A gateway to facilitate a decentralised streaming ecosystem | REST library
kandi X-RAY | Demux Summary
A gateway to facilitate a decentralised streaming ecosystem. For authentication credentials, please reach out to us at saumay@buidllabs.io.
Top functions reviewed by kandi - BETA
- FileUploadHandler handles a file upload.
- PinFolder creates a Pinata pin for a given folder.
- CreateStream creates a stream for Demux.
- Price estimate request.
- run is the main entry point for a new stage.
- CalculateTranscodingCost returns the approximate transcoding cost for the given fileName.
- AssetHandler serves an asset from the database.
- AssetStatusHandler returns the status of an asset.
- runSetup runs the PowergateSetup command.
- pollStorageDealProgress watches for changes to the given storage deal.
Demux Key Features
Demux Examples and Code Snippets
Community Discussions
Trending Discussions on Demux
QUESTION
I'm using ffmpeg filtergraphs to extract and concatenate chunks of videos. As a simple example, consider this, which takes an input file with a video stream and an audio stream and produces a 20-second output that includes timestamps 00:10-00:20 and 00:30-00:40 of the input:
...ANSWER
Answered 2022-Mar-21 at 20:18

No, you cannot. An FFmpeg filtergraph can only render subtitle text onto video (e.g., the subtitles and ass filters). It has no way of manipulating subtitle streams.

Your best bet is the concat demuxer. You can list the same file multiple times with different start and end times. In your batch file, you can create the concat list in memory and pipe it into FFmpeg.
[edit]
Assuming that in.mkv has it all (video, audio, and subtitle streams), you can prepare a concat demuxer file like:
listing.txt
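A listing.txt along these lines would do it (the inpoint/outpoint values follow the 00:10-00:20 and 00:30-00:40 ranges from the question; the output file name below is an assumption):

```
file 'in.mkv'
inpoint 00:00:10
outpoint 00:00:20
file 'in.mkv'
inpoint 00:00:30
outpoint 00:00:40
```

It would then be consumed with something like ffmpeg -f concat -safe 0 -i listing.txt -c copy out.mkv, which stream-copies video, audio, and subtitles alike. Note that with -c copy the cuts land on keyframe boundaries, so the ranges are approximate.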
QUESTION
I cannot seem to link more than 3 elements in a GStreamer pipeline in Python. For example, I am trying to implement the following CLI command in Python.
...ANSWER
Answered 2022-Mar-15 at 12:23

Your issue stems from an improper understanding of delayed linking.

Pads in GStreamer can be linked if and only if they agree on their format. For some elements, all possible pad formats are known ahead of time. This means they can be linked ahead of time, with functions like link_many and link. For example, from gst-inspect-1.0 oggdemux, in the pad section we see this:
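For elements like oggdemux, however, the source pad is a "Sometimes" pad that only exists once data is flowing, so it must be linked from a pad-added callback rather than up front. A minimal Python sketch of that pattern (element names and the file path are illustrative assumptions, borrowed from the oggdemux example; the gi import is deferred inside the function so the sketch does not require PyGObject just to be loaded):

```python
def build_pipeline(location="/tmp/in.ogg"):
    """Link a demuxer's dynamic ("Sometimes") pad when it appears."""
    # Deferred import: PyGObject is only needed when this actually runs.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    Gst.init(None)

    pipeline = Gst.Pipeline.new("demo")
    src = Gst.ElementFactory.make("filesrc", "src")
    demux = Gst.ElementFactory.make("oggdemux", "demux")
    dec = Gst.ElementFactory.make("vorbisdec", "dec")
    sink = Gst.ElementFactory.make("autoaudiosink", "sink")
    for element in (src, demux, dec, sink):
        pipeline.add(element)
    src.set_property("location", location)

    # Static ("Always") pads agree on format up front: link them now.
    src.link(demux)
    dec.link(sink)

    # oggdemux's src pad only exists after the stream has been parsed,
    # so link it from the pad-added callback instead.
    def on_pad_added(_demux, pad):
        pad.link(dec.get_static_pad("sink"))

    demux.connect("pad-added", on_pad_added)
    return pipeline
```

The same pattern applies to any demuxer or decodebin-style element: link everything with static pads immediately, and defer only the dynamic links.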
QUESTION
I have set this duration for playlists on Windows via the 'demuxer' setting; however, the Android version does not contain this option.
I'm actually creating the list dynamically with a PHP file on a server, so it looks like this:
#EXTM3U
#EXTINF:-1,
image1.png
#EXTINF:-1,
video1.m4v
So I wonder whether there is an option in Android to make images (not videos) display for over a minute, and if there is no such option, maybe I can add something to the list to make images display longer?
I have seen a tutorial where the list uses 1000, but it is in a tag-based language (HTML?), which does not seem to be the format of my list. (Link to tutorial: http://chris-reilly.org/blog/vlc-slideshow-duration/)
It is worth mentioning that the Android version has a small input field for libVLC options, but I was unable to find anything related to what I'm looking for.
Any help will be appreciated.
...ANSWER
Answered 2022-Feb-28 at 13:03

The answer is to add #EXTVLCOPT:image-duration=100 for each image file (change 100 to the desired number of seconds).
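Applied to the playlist from the question, and using 60 seconds since the goal is about a minute per image, the generated list might look like this (the option line goes with the entry it precedes; videos need no option, as they keep their natural duration):

```
#EXTM3U
#EXTINF:-1,
#EXTVLCOPT:image-duration=60
image1.png
#EXTINF:-1,
video1.m4v
```

Since the list is generated by PHP, the #EXTVLCOPT line can simply be emitted for every image entry and skipped for video entries.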
QUESTION
Assume there are two kinds of streams: one is video only (without audio), the other is video with audio. We know that playbin with a URI can play both even if we don't know which kind we get. But is there a pipeline using xvimagesink or nv3dsink (not autovideosink etc.) that can handle both cases, given that we don't know in advance whether the stream has audio?
For instance, if the stream has audio, we play video with audio; otherwise, we play video only.
I've tried
...ANSWER
Answered 2022-Feb-17 at 11:39

You may just use the video-sink property of playbin:
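For example (the URI is a placeholder): playbin builds its audio branch only when the stream actually contains audio, while the forced sink handles the video either way, so one command covers both cases:

```
gst-launch-1.0 playbin uri=rtsp://camera.example/stream video-sink=xvimagesink
```

The equivalent in code is setting the "video-sink" property on the playbin element to an xvimagesink (or nv3dsink) instance before setting the pipeline to PLAYING.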
QUESTION
I am using the following GStreamer pipeline to grab an RTMP source and transcode it with the opusenc encoder, sending it as RTP packets to Mediasoup (a WebRTC library).
...ANSWER
Answered 2021-Nov-22 at 20:15

Sounds like a stereo audio interleaving problem, where every other sample is being skipped. Your provided output sample is a stereo MP3, yet both channels are identical.

Try using channels=1, or playing with or removing the demux processing.
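Since the original pipeline isn't shown, here is a hypothetical shape of the channels=1 fix: force mono with a caps filter between audioconvert and opusenc (the RTMP URL and the sink address are placeholders):

```
gst-launch-1.0 rtmpsrc location=rtmp://example.com/live/stream ! decodebin ! \
    audioconvert ! audioresample ! 'audio/x-raw,channels=1' ! \
    opusenc ! rtpopuspay ! udpsink host=127.0.0.1 port=5004
```

audioconvert downmixes the stereo input to a single channel, which sidesteps any channel-interleaving mismatch further down the chain.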
QUESTION
I'm trying to use the FFmpeg libraries to process video files, and I need to get the video's fps. I'm loading mp4 h264 videos with constant fps.
My code is a slightly modified copy of the demuxing and decoding example. I removed the code for outputting video and audio, as well as all the audio-related stuff altogether. The open_codec_context function is unchanged besides removing the audio stuff.

After calling open_codec_context, my code does the following:
...ANSWER
Answered 2021-Dec-28 at 14:57

Okay, so I've figured out where the problem was.

I had built FFmpeg from source (master), and I must have messed up the config, because something was missing. The linker very helpfully found the missing library in my regular FFmpeg from my package manager (stable 4.4.1). So my program was linking against half master and half 4.4.1, resulting in undefined behaviour.

It turns out the random values I was getting came from that undefined behaviour. Linking against the stable 4.4.1 build directly fixed the issue, and now get_fps returns the expected values.

I'm surprised I got this far without noticing. Not even a segfault.
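One quick sanity check for this kind of mixed linkage on Linux (the binary name demo is a placeholder) is to inspect which FFmpeg libraries the program actually resolves to:

```
ldd ./demo | grep -E 'libav(format|codec|util)'
```

All of them should point into the same prefix (e.g. all under /usr/local/lib or all under /usr/lib); a mix of prefixes indicates the half-master, half-4.4.1 situation described above.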
QUESTION
I'm working on a project that takes individual images from an RTSP stream and manipulates them (drawing bounding boxes). Those images should be restreamed (H.264 encoded) on a separate RTSP stream at another address and shouldn't be saved to the local disk.
My current code so far is:
...ANSWER
Answered 2021-Aug-31 at 13:23

Because you are creating a new Media for each frame, you won't be able to stream it as a single stream.

What you could do is create an MJPEG stream: put .jpg images one after another in a single stream, and use that stream with LibVLCSharp to stream it.

However, if LibVLCSharp is faster at reading data from your memory stream than you are at writing data to it, it will detect the end of the file and stop the playback/streaming (a Read() call that returns no data is considered the end of the file). To avoid that, the key is to "block" the Read() call until there is actually data to read. Blocking the call is not a problem, as this happens on the VLC thread.
The default MemoryStream/StreamMediaInput won't let you block the Read() call; you would need to write your own Stream implementation or your own MediaInput implementation.
Here are a few ideas to block the Read call:
- Use a BlockingCollection to push Mat instances to the stream input. BlockingCollection has a Take() method that blocks until there is actually some data to read
- Use a ManualResetEvent to signal when data is available to be read (it has a Wait() method)
It would be easier to talk about this on the LibVLC Discord; feel free to join!
If you manage to do that, please share your code as a new libvlcsharp-sample project!
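The blocking-read idea itself is language-agnostic. As a sketch only (in Python rather than the C#/LibVLCSharp code the project actually uses, and with illustrative names, not any real LibVLCSharp API), the producer/consumer handoff could look like this:

```python
import queue
import threading

class BlockingFrameStream:
    """Sketch of the blocking-Read idea: read() waits until a producer
    has pushed data, so the consumer never mistakes a momentarily empty
    buffer for end-of-stream. A None sentinel marks the real end."""

    def __init__(self):
        self._chunks = queue.Queue()

    def write(self, data):
        """Producer side: push one encoded frame/chunk."""
        self._chunks.put(data)

    def close(self):
        """Signal the real end of the stream."""
        self._chunks.put(None)

    def read(self):
        """Consumer side: block until data (or end-of-stream) arrives."""
        chunk = self._chunks.get()  # blocks instead of returning 0 bytes
        return b"" if chunk is None else chunk
```

queue.Queue here plays the role the answer assigns to BlockingCollection.Take() in C#: the empty-buffer case blocks, and only the explicit close() lets read() return an empty result, which the player then correctly interprets as end-of-stream.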
QUESTION
We are using LibVLCSharp to play a live MP3 network stream in our Xamarin.iOS app, using the following code snippet:
...ANSWER
Answered 2021-Nov-26 at 13:13

Don't forget that Play() is not a synchronous method, as you might expect. It posts a message to a background thread, and only then starts to play the media.
When you're executing your IsStartingOrPlaying() method right after, chances are that the state is not the one you might have expected, thus calling the second Play()
Got this Dahua vto stream link: that works with omxplayer, but vlc won't play it:
...ANSWER
Answered 2021-Nov-10 at 05:29

So what happened is that the library in Debian providing support for Live555 was removed in February of this year; this affects all downstream distros, including but not limited to RPi OS and Ubuntu:
https://askubuntu.com/a/1363113
The two active versions were 2020.01.19-1 and 2018.11.26-1.1. Live555 has since added GPL license headers to the offending files; however, the RFC issue remains.
Now you may be tempted to just download the latest Live555 source code and compile it... it does not work. There have been changes to function names and structures referenced by VLC, and as such VLC will not compile against that source. You need to get an older version; I specifically used this one, a tweaked snapshot from 2020 prior to the modifications that prevent VLC compilation:
https://github.com/rgaufman/live555
The configuration you want is ./genMakefiles linux-with-shared-libraries. I do not know if it is required, but since my system is x86 64-bit, I added -m64 to the compiler options first.
After compilation and install, I went on to compile VLC, adding '--enable-live555' and '--with-live555-tree=extras/live555-master' after placing the root Live555 folder in the VLC extras folder. However, VLC failed to compile: it turns out Live555's make install does not copy all the header files to where VLC is looking. They were dropped as 4 subfolders into /usr/local/include/, and the actual libs into /usr/local/lib/. Adding the correct CXX/CPP flags will make the build look where they were put; however, I put them all in a single folder and used one flag.
I also had to '--disable-mod' to work around a dependency version issue that I had no interest in fixing, since I do not use modplug or any mod files.
50 minutes later... VLC successfully compiled! However, it expected the Live555 libraries to be in /usr/lib/, not /usr/local/lib/. Since it took so long to compile, I was just fine with linking or copying the libraries into the expected folder, and after that VLC works with RTSP when linked to the new file. Or you can keep the original VLC and run the new binary directly if you need to load the camera feeds.
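Condensed, the steps described above look roughly like this (directory layout and sudo usage are assumptions; the live555 tree is the 2020 snapshot linked earlier, also copied into VLC's extras folder):

```
cd live555
./genMakefiles linux-with-shared-libraries
make
sudo make install
cd ../vlc
./configure --enable-live555 --with-live555-tree=extras/live555-master --disable-mod
make
```

The remaining manual steps (consolidating the installed headers for the compiler flags, and linking or copying the Live555 libraries from /usr/local/lib/ into /usr/lib/) still apply as described above.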
QUESTION
ffmpeg.js uses a custom build of FFmpeg to keep its size low. I am trying to convert a .ts into a .mp4, which has always been an easy task on my desktop (especially since they even use the same codecs, AAC and H.264), but on the custom build I get the error sample1.ts: Invalid data found when processing input.
The command being run is ffmpeg -i sample1.ts -report -c copy out.mp4.
Other questions I see on this topic get past the initial reading of the input file and I cannot find good resources on what my problem is or how to fix it.
This is a rather nondescript error, so I am not sure exactly what the problem is. I assume it means that this build does not have support for .ts files, but I am not even sure what that means in terms of codecs and muxers.
From the custom build's makefile, the enabled demuxers and decoders are
...ANSWER
Answered 2021-Nov-04 at 17:44

This is very similar to How to compile ffmpeg to get only mp3 and mp4 support, but with a few different compilation options:
./configure --disable-everything --disable-network --disable-autodetect --enable-small --enable-demuxer=mpegts --enable-muxer=mp4 --enable-parser=aac,h264 --enable-decoder=aac,h264 --enable-protocol=file
See the link above for more details.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install Demux