sipsorcery | VoIP library for C# and .NET | Video Utils library
kandi X-RAY | sipsorcery Summary
The diagram below is a high level overview of a Real-time audio and video call between Alice and Bob. It illustrates where the SIPSorcery and associated libraries can help.
Community Discussions
Trending Discussions on sipsorcery
QUESTION
When using the Source Reader I can use it to get decoded YUV samples from an mp4 file source (example code).
How can I do the opposite with a webcam source, i.e. use the Source Reader to provide encoded H264 samples? My webcam supports the RGB24 and I420 pixel formats, and I can get H264 samples if I manually wire up the H264 MFT transform. But it seems as if the Source Reader should be able to take care of the transform for me. I get an error whenever I attempt to set an MF_MT_SUBTYPE of MFVideoFormat_H264 on the Source Reader.
The full example is here.
ANSWER
Answered 2020-Jan-23 at 13:31
The Source Reader does not look like a suitable API here. It is an API that implements "half of the pipeline", which includes the necessary decoding but not encoding. The other half is the Sink Writer API, which is capable of handling encoding and which can encode H.264.
Another option, unless you are developing a UWP project, is the Media Session API, which implements a pipeline end to end.
Even though technically (in theory) you could have an encoding MFT as part of a Source Reader pipeline, the Source Reader API itself is insufficiently flexible to add encoding-style transforms based on requested media types.
So, one solution could be to have the Source Reader read with the necessary decoding (such as down to RGB32 or NV12 video frames), then a Sink Writer manage the encoding with an appropriate media sink on its end (or a Sample Grabber as the media sink). Another solution is to put Media Foundation primitives into a Media Session pipeline, which can manage both the decoding and encoding parts, connected together.
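The decode-with-Source-Reader, encode-with-Sink-Writer split can be sketched in code. The following is a minimal, hedged C++ sketch of that idea (Windows-only; the file names, the NV12 intermediate format, and the omission of error handling and media-type configuration are all illustrative assumptions, not the answerer's actual code):

```cpp
// Sketch: decode with the Source Reader, re-encode with the Sink Writer.
// Windows-only; link mfplat.lib, mfreadwrite.lib, mfuuid.lib.
// Error handling omitted for brevity.
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

void TranscodeSketch()
{
    MFStartup(MF_VERSION);

    IMFSourceReader* reader = nullptr;
    MFCreateSourceReaderFromURL(L"input.mp4", nullptr, &reader); // hypothetical input

    // Ask the reader to decode video down to NV12; the reader inserts the decoder MFT.
    IMFMediaType* decoded = nullptr;
    MFCreateMediaType(&decoded);
    decoded->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    decoded->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_NV12);
    reader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM, nullptr, decoded);

    // The Sink Writer owns the encoding half of the pipeline.
    IMFSinkWriter* writer = nullptr;
    MFCreateSinkWriterFromURL(L"output.mp4", nullptr, nullptr, &writer);
    // ... AddStream with an MFVideoFormat_H264 output type, SetInputMediaType
    //     with the reader's NV12 type, then BeginWriting ...

    // Pump decoded samples from the reader into the writer, which encodes them.
    while (true)
    {
        DWORD streamIndex = 0, flags = 0;
        LONGLONG timestamp = 0;
        IMFSample* sample = nullptr;
        reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                           &streamIndex, &flags, &timestamp, &sample);
        if (flags & MF_SOURCE_READERF_ENDOFSTREAM) break;
        if (sample) { writer->WriteSample(0, sample); sample->Release(); }
    }
    writer->Finalize();
}
```

The point of the split is that each half stays in the role its API was designed for: the reader decodes, the writer encodes.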
QUESTION
I'm using the example code from the Sample Grabber Sink reference page, except that I'm processing an mp4 file to get both audio and video samples (my sample code). To process the samples in the callback I need to know which ones are audio and which are video. The problem is that the REFGUID guidMajorMediaType never seems to get set.
Printing out the properties of each callback sample shows that the smaller samples (less than 750 bytes) are audio and the larger ones video, but the guidMajorMediaType is always empty. Do I perhaps need to set an additional property on the IMFTopologyNodes? I couldn't spot anything obvious.
ANSWER
Answered 2020-Jan-22 at 21:34
From Using the Sample Grabber Sink:
"The Source Reader is an alternative to the Sample Grabber Sink and has a simpler programming model."
Do you really need the Sample Grabber Sink? The Source Reader is the modern way to do this. I would say the Sample Grabber Sink is deprecated.
If so, the documentation is not clear. The Remarks for the MFCreateSampleGrabberSinkActivate function say:
"To create the sample grabber sink, call IMFActivate::ActivateObject on the pointer received in the ppIActivate parameter."
But in their example, "Using the Sample Grabber Sink", they don't.
Perhaps by calling ActivateObject on the IMFActivate and using the resulting IMFMediaSink, you will get the correct guidMajorMediaType inside OnProcessSample. That's an optimistic way of looking at it, though, and I have doubts.
This seems to be a bug. I can confirm that OnProcessSample passes GUID_NULL for the REFGUID guidMajorMediaType parameter. It should not, because all the other parameters appear to be valid.
I just think the Sample Grabber Sink is deprecated, and you should not use it.
Explain why you really need to use the Sample Grabber Sink when other solutions exist without this bug.
To me the Sample Grabber Sink is just a sort of DirectShow-era approach, and now, with the Media Session, Source Reader, tee node, and so on, it no longer holds much interest.
EDIT
I want the simplest way to get audio and video samples to a byte * buffer from either a file or device source and ideally be able to include at least one transform (such as the H264 encoder/decoder) in between.
The Source Reader does exactly this. True, you don't get a byte * buffer directly, you get an IMFSample; but from an IMFSample you can get a byte * buffer.
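That IMFSample-to-buffer step is short. A hedged C++ sketch (Windows-only; error handling omitted):

```cpp
// Sketch: copy an IMFSample's payload into a contiguous byte buffer.
// Windows-only; link mfplat.lib, mfuuid.lib. Error handling omitted.
#include <mfapi.h>
#include <mfobjects.h>

void ReadSampleBytes(IMFSample* sample)
{
    // Merge the sample's buffers into one contiguous buffer if necessary.
    IMFMediaBuffer* buffer = nullptr;
    sample->ConvertToContiguousBuffer(&buffer);

    BYTE* data = nullptr;
    DWORD length = 0;
    buffer->Lock(&data, nullptr, &length);
    // ... 'data' now points at 'length' bytes of the sample's payload ...
    buffer->Unlock();
    buffer->Release();
}
```

The same pattern applies whether the sample came from a Source Reader or from a Sample Grabber callback.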
But the Source Reader documentation also states: "The source reader does not send the data to a destination; it is up to the application to consume the data."
Using the Sample Grabber Sink, it's up to you to consume the data. Same situation.
the source reader can read a video file, but it will not render the video to the screen.
The Sample Grabber Sink will not render the video to the screen. Same situation.
Also, the source reader does not manage a presentation clock, handle timing issues, or synchronize video with audio.
That's a difference, yes, but I don't really see any advantage. See MF_SAMPLEGRABBERSINK_SAMPLE_TIME_OFFSET
Offset between the time stamp on each sample received by the sample grabber, and the time when the sample grabber presents the sample.
Do you know which offset to apply? That's extra work. Same situation.
The SampleGrabber returns the audio and video samples in the correct order with the correct timestamps
The source reader also returns the audio and video samples in the correct order with the correct timestamps. Same situation.
Won't that be extra work for the Source Reader case?
No, for me it will be the same extra work in both cases.
Also :
Using Source Reader : Source Reader -> your application
Using Sample Grabber Sink : MediaSession (with Sample Grabber Sink) -> your application
In terms of performance (CPU/memory/thread usage), I'm pretty sure the Source Reader is better than the Media Session.
But it's up to you whether to choose the Sample Grabber Sink. I would just suggest you report to Microsoft that there is a bug with the REFGUID guidMajorMediaType parameter.
QUESTION
I am able to display a video stream from an mp4 video by writing byte samples directly to the Enhanced Video Renderer (EVR) sink (thanks to the answer on Media Foundation EVR no video displaying).
I'd like to do the same thing but for a webcam source. The current problem is that my webcam only supports the RGB24 and I420 formats and, as far as I can tell, the EVR only supports RGB32. In some Media Foundation scenarios I believe the conversion will happen automatically provided a CColorConvertDMO class is registered in the process. I've done that, but I suspect that because of the way I'm writing samples to the EVR the colour conversion is not being invoked.
My question is: what sort of approach should I take to allow RGB24 samples read from my webcam IMFSourceReader to be written to the EVR IMFStreamSink?
My full sample program is here and is unfortunately rather long due to the Media Foundation plumbing required. The problem is in the block that attempts to match the EVR sink media type to the webcam source media type: specifically, the setting of the MF_MT_SUBTYPE attribute. From what I can tell it has to be MFVideoFormat_RGB32 for the EVR, but my webcam will only accept MFVideoFormat_RGB24.
ANSWER
Answered 2020-Jan-07 at 20:19
I needed to manually wire up a colour conversion MFT (I'm pretty sure some Media Foundation scenarios wire it in automatically, but probably only when using a topology) AND adjust the clock set on the Direct3D IMFSample provided to the EVR.
Working example.
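The manual wiring described in this answer can be sketched roughly as follows. This is a hedged C++ sketch, not the linked working example's actual code: it assumes the converter is created directly from CLSID_CColorConvertDMO, that the caller has already built the RGB24/RGB32 media types, and it omits error handling and the clock adjustment mentioned above.

```cpp
// Sketch: manually wire up the colour conversion MFT (RGB24 -> RGB32).
// Windows-only; link mfplat.lib, mfuuid.lib, wmcodecdspuuid.lib.
// Error handling omitted for brevity.
#include <mfapi.h>
#include <mfidl.h>
#include <mftransform.h>
#include <wmcodecdsp.h>   // CLSID_CColorConvertDMO

IMFSample* ConvertRgb24ToRgb32(IMFSample* rgb24Sample,
                               IMFMediaType* rgb24Type,  // caller-built RGB24 type
                               IMFMediaType* rgb32Type,  // caller-built RGB32 type
                               DWORD rgb32FrameSize)     // width * height * 4
{
    IMFTransform* converter = nullptr;
    CoCreateInstance(CLSID_CColorConvertDMO, nullptr, CLSCTX_INPROC_SERVER,
                     IID_PPV_ARGS(&converter));

    converter->SetInputType(0, rgb24Type, 0);
    converter->SetOutputType(0, rgb32Type, 0);

    converter->ProcessInput(0, rgb24Sample, 0);

    // The colour converter does not allocate output samples itself, so the
    // caller must supply one via MFT_OUTPUT_DATA_BUFFER.pSample.
    IMFSample* outSample = nullptr;
    IMFMediaBuffer* outBuffer = nullptr;
    MFCreateSample(&outSample);
    MFCreateMemoryBuffer(rgb32FrameSize, &outBuffer);
    outSample->AddBuffer(outBuffer);

    MFT_OUTPUT_DATA_BUFFER outData = {};
    outData.pSample = outSample;
    DWORD status = 0;
    converter->ProcessOutput(0, 1, &outData, &status);

    // outSample now carries the RGB32 frame, ready to be timestamped and
    // handed to the EVR stream sink.
    converter->Release();
    outBuffer->Release();
    return outSample;
}
```

This is the per-frame version of what a Media Session topology would do for you automatically when it inserts the converter between the source and the EVR.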
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Install sipsorcery
The GetStarted example demonstrates the simplest possible way to place an audio-only SIP call, and contains the full source and project file. It relies on the Windows-specific SIPSorceryMedia.Windows library to play the received audio and only works on Windows (due to the lack of .NET audio device support on non-Windows platforms).
The key classes involved are SIPTransport, SIPUserAgent and RTPSession.
The WebRTC specifications do not include directions about how signaling should be done (for VoIP the signaling protocol is SIP; WebRTC has no equivalent). The Getting Started WebRTC example uses a simple JSON message exchange over web sockets for signaling. Part of the reason it is longer than the Getting Started VoIP example is the need for custom signaling. To run it:
Run the dotnet console application,
Open an HTML page in a browser on the same machine.