media-source | Media Source Extensions | Media library

by w3c | HTML | Version: Current | License: Non-SPDX

kandi X-RAY | media-source Summary

media-source is an HTML library typically used in Media applications. media-source has no bugs and no reported vulnerabilities, but it has low support. However, media-source has a Non-SPDX license. You can download it from GitHub.

This is the repository for the Media Source Extensions (MSE) specification. You're welcome to contribute! Let's make the Web rock our socks off!

Support

media-source has a low active ecosystem.
It has 195 stars, 62 forks, and 81 watchers.
It had no major release in the last 6 months.
There are 80 open issues and 97 closed issues. On average, issues are closed in 543 days. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of media-source is current.

Quality

              media-source has no bugs reported.

Security

              media-source has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              media-source has a Non-SPDX License.
A Non-SPDX license may be an open-source license that is not SPDX-compliant, or a non-open-source license; you need to review it closely before use.

Reuse

              media-source releases are not available. You will need to build from source code and install.


            media-source Key Features

            No Key Features are available at this moment for media-source.

            media-source Examples and Code Snippets

            No Code Snippets are available at this moment for media-source.

            Community Discussions

            QUESTION

            How to use a custom heap with IMFSourceReader
            Asked 2021-Feb-10 at 08:03

            In our application we are using an IMFSourceReader to handle the decode of a .mp4 file for us to play.

What we would like to do is reserve an amount of memory in the application and then configure the IMFSourceReader to use this reserved memory as its heap when it allocates IMFSample objects.

I am wondering what might be the best way to try and achieve this. I believe that we will need to implement a custom media source as suggested in this documentation https://docs.microsoft.com/en-us/windows/win32/medfound/writing-a-custom-media-source#generating-source-data and use the MFCreateSourceReaderFromMediaSource method. Is that correct?

            Additionally I am still unclear on exactly where we would do the memory allocations. Will we need to create a new IMFMediaBuffer object as well?

            ...

            ANSWER

            Answered 2021-Feb-10 at 08:03

I do not think it is realistic to supply a custom memory heap without re-implementing the Media Foundation primitives behind your source reader's media pipeline (also, in the context of the question, it would be worth mentioning its details).

More importantly though, I suppose there is no real need or advantage in doing things this way. If you see increased memory pressure, it is highly unlikely that the potentially enormous effort of customizing the memory allocator for the primitives inside the source reader would improve the situation. This is one of the reasons the feature does not exist in the first place.

            Source https://stackoverflow.com/questions/66131690

            QUESTION

Exception thrown at ... Access violation reading location
            Asked 2020-Aug-15 at 00:47

            Solved: stupid redeclaration issue I failed to notice on my side.

            Full exception message below:
            Exception thrown at 0x00007FF73EB618D7 in metadata_modifier.exe: 0xC0000005: Access violation reading location 0xFFFFFFFFFFFFFFFF. occurred
            The line in question: SafeRelease(&pIByteStream);


I am trying to create an application in C++ which uses the Win32 API to grab a media file's metadata (properties of the file including "Name", "#", "Title", "Contributing artists", etc.).

            The page at: https://docs.microsoft.com/en-us/windows/win32/medfound/shell-metadata-providers lists 3 steps to achieve this:

            1. Get a pointer to the IMFMediaSource interface of the media source. You can use the IMFSourceResolver interface to get an IMFMediaSource pointer.
            2. Call MFGetService on the media source to get a pointer to the IPropertyStore interface. In the guidService parameter of MFGetService, specify the value MF_PROPERTY_HANDLER_SERVICE. If the source does not support the IPropertyStore interface, MFGetService returns MF_E_UNSUPPORTED_SERVICE.
            3. Call IPropertyStore methods to enumerate the metadata properties.

As it provides a code example for steps 2 and 3 (EnumerateMetadata), my code (full code can be found at the bottom) is focused on trying to achieve step 1.

            This is an outline of what I am currently doing:

            ...

            ANSWER

            Answered 2020-Aug-15 at 00:45
void CreateMediaSource(IMFMediaSource **ppIMediaSource) {
    ....

    IMFByteStream *pIByteStream; // HERE: outer declaration, never initialized

    ....

    if (SUCCEEDED(hr2)) {
        IMFByteStream* pIByteStream; // HERE: redeclaration that shadows the outer pointer

        HRESULT hr3 = MFCreateFile(MF_ACCESSMODE_READ, MF_OPENMODE_FAIL_IF_NOT_EXIST, MF_FILEFLAGS_NONE, FileName, &pIByteStream);

        ....
    }
    SafeRelease(&pIByteStream); // releases the outer, still-uninitialized pointer -> access violation
}
            

            Source https://stackoverflow.com/questions/63421458

            QUESTION

            Can't retrieve RTCVideoSourceStats from PeerConnection in firefox
            Asked 2020-Jul-09 at 10:29

I am trying to get the height and width of a localStream that I'm sending over a PeerConnection, using the media-source stats retrieved from PeerConnection.getStats or RTCRtpSender.getStats, but they do not get returned in the stats report. Does Firefox not support these stats?

            ...

            ANSWER

            Answered 2020-Jul-09 at 10:29

It's not implemented yet. To see a full dump of stats, check this sample.

While not very up-to-date, https://webrtc-stats.callstats.io/verify gives a high-level overview of what statistics are implemented by different browsers.
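
In browsers that do implement it, the per-track source stats appear as reports with type "media-source". A minimal sketch of reading them (inside an async function), assuming `sender` is the RTCRtpSender for the local video track:

const stats = await sender.getStats();
stats.forEach((report) => {
  // RTCVideoSourceStats entries describe the captured source, not the encoded track
  if (report.type === 'media-source' && report.kind === 'video') {
    console.log('source size:', report.width, 'x', report.height, '@', report.framesPerSecond, 'fps');
  }
});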

            Source https://stackoverflow.com/questions/62807509

            QUESTION

            Is there a html video player that allows to set authentication headers?
            Asked 2020-May-24 at 16:08

I am trying to play a WebM or an mp4 video file using HTML5 video from a server that needs token-based authentication. I cannot find any player that supports setting HTTP request headers for the requests that fetch media.

            There is support for setting headers only for HLS and DASH media.

            Already tried video.js: (https://github.com/videojs/video.js/issues/6348), react-player, video-react with no luck.

I have implemented the desired solution from scratch, fetching the media by XMLHttpRequest using MediaSource and reading the file as an ArrayBuffer (similar to https://html5-demos.appspot.com/static/media-source.html), but I would rather use some existing, more robust solution.
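
For reference, a minimal sketch of that "from scratch" approach using fetch and MediaSource; the URL, the `token` variable, and the codec string are placeholder assumptions:

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  // '/protected/video.webm', `token`, and the codec string are assumptions for illustration
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp9,opus"');
  const response = await fetch('/protected/video.webm', {
    headers: { Authorization: 'Bearer ' + token },
  });
  const data = await response.arrayBuffer();
  sourceBuffer.addEventListener('updateend', () => {
    if (!sourceBuffer.updating && mediaSource.readyState === 'open') {
      mediaSource.endOfStream(); // signal that no more data will be appended
    }
  });
  sourceBuffer.appendBuffer(data);
});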

            ...

            ANSWER

            Answered 2020-May-24 at 16:08

            It's a hack, but you could try using a ServiceWorker.

It would be possible for your ServiceWorker to attach the appropriate authentication headers, and then you don't have to do anything special or weird in your video player at all. You can continue to use a standard <video> tag. Additionally, the browser gets to keep its own behavior for what ranges to request, and you won't have to go through the headaches and incompatibilities of Media Source Extensions.
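
A minimal sketch of such a Service Worker fetch handler; the '/media/' path prefix and the `token` variable are placeholder assumptions:

// sw.js — sketch only
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (!url.pathname.startsWith('/media/')) return; // leave non-media requests alone

  const headers = new Headers();
  const range = event.request.headers.get('Range');
  if (range) headers.set('Range', range); // preserve the browser's byte-range request
  headers.set('Authorization', 'Bearer ' + token); // `token` obtained elsewhere (assumption)

  event.respondWith(fetch(url.href, { headers }));
});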

            See also: https://serviceworke.rs/strategy-cache-and-update.html

            Note though that this isn't always going to work... there are times when that Service Worker isn't loaded (such as when the user does a shift+refresh).

            Source https://stackoverflow.com/questions/59177432

            QUESTION

            Media Source Extension Javascript API vis-a-vis WebRTC. Some questions
            Asked 2020-Apr-09 at 17:32

The closest I came across is this question on SO, but that is just for basic understanding. My question is: when Media Source Extensions (MSE) is used and the media source is fetched from a remote endpoint, for example through AJAX, the fetch API, or even a WebSocket, the media is sent over TCP.

1. That will handle packet loss and sequencing, so a protocol like RTP with RTCP is not used. Is that correct?
2. But this will result in delay, so it cannot be truly used for real-time communication. Yes?
3. There is no security/encryption requirement for MSE like in WebRTC (DTLS/SRTP). Yes?
4. One cannot, for example, mix a remote audio source from MSE with an audio MediaStreamTrack from an RTCPeerConnection, as they do not have any common param like CNAME (RTCP) or are part of the same MediaStream. In other words, the worlds of MSE and WebRTC cannot mix unless synchronization is not important. Correct?
            ...

            ANSWER

            Answered 2020-Apr-09 at 17:32
1. That will handle packet loss and sequencing, so a protocol like RTP with RTCP is not used. Is that correct?

            AJAX and Fetch are just JavaScript APIs for making HTTP requests. Web Socket is just an API and protocol extended from an initial HTTP request. HTTP uses TCP. TCP takes care of ensuring packets arrive and arrive in-order. So, yes, you won't need to worry about packet loss and such, but not because of MSE.

2. But this will result in delay, so it cannot be truly used for real-time communication. Yes?

That depends entirely on your goals. It's a myth that TCP isn't fast, or that TCP increases general latency for every packet. What is true is that the initial 3-way handshake takes a few round trips. It's also true that if a packet does actually get dropped, the application sees a sudden sharp increase in latency until the packet is requested again and re-sent.

If your goals are something like a telephony application where the loss of a packet or two is meaningless overall, then UDP is more appropriate. (In voice communications, we talk slowly enough that if a few milliseconds of sound go missing, we can still decipher what was being said. Our spoken language is robust enough that if entire words get garbled or are silent, we can figure out the gist of what was being said from context.) It's also important that immediate continuity be kept for voice communications. The tradeoff is that real-time-ness matters more than accuracy at any particular instant/packet.

However, if you're doing something like a one-way stream, you might choose a protocol carried over TCP. In this case, it may be important to be as real-time as possible, but more important that the audio/video don't glitch out. Consider the Super Bowl, or some other large sporting event. It's a live event and important that it stays real-time. However, if the time reference for the viewer is only 3-5 seconds delayed from live, it's still "live" enough for the viewer. The viewer would be far angrier if the video glitched out and they missed something happening in the game than if they were just a few seconds behind. Since it's one-way streaming and there is no communication feedback loop, the tradeoff of reliability and quality over extreme low latency makes sense.

3. There is no security/encryption requirement for MSE like in WebRTC (DTLS/SRTP). Yes?

            MSE doesn't know or care how you get your data.

4. One cannot, for example, mix a remote audio source from MSE with an audio MediaStreamTrack from an RTCPeerConnection, as they do not have any common param like CNAME (RTCP) or are part of the same MediaStream. In other words, the worlds of MSE and WebRTC cannot mix unless synchronization is not important. Correct?

            Mix, where? Synchronization, where? No matter what you do, if you have streams coming from different places... or even different devices without sync/gen lock, they're out of sync. However, if you can define a point of reference where you consider things "synchronized", then it's all good. You could, for example, have independent streams going into a server and the server uses its current timestamps to set everything up and distribute together via WebRTC.

            How you do this, or what you do, depends on the specifics of your application.

            Source https://stackoverflow.com/questions/61100202

            QUESTION

            Is there a way to append to an object URL or otherwise create a stream as a URL?
            Asked 2020-Feb-02 at 20:06

            There was an interesting discussion over here on StackOverflow, and in some ways this question is a followup. I've also asked a similar question in the past, but I feel this is more generally a question about object URLs.

            There have been a number of times where I would like to implement a streaming version of a ".src" for image or video elements in JS, perhaps from a stream of bytes. Unfortunately, I only see two main options that are more controllable by JS:

            1. Create a Blob and then use URL.createObjectURL(). Unfortunately, this seems to be static - but perhaps there is a way to mutate the contents?
            2. Create a MediaSource. However, this only works for video and is much pickier than just using a video element, which is really the level of support I need.

            Any thoughts on how I can create some type of streaming object URL? And/or if not, does anybody know why JS hasn't implemented this type of streaming long, long ago?

            ...

            ANSWER

            Answered 2020-Feb-01 at 04:31

            There have been a number of times where I would like to implement a streaming version of a ".src" for image or video elements in JS, perhaps from a stream of bytes.

            Use a Service Worker to respond with a Response with a ReadableStream as the body.
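
A minimal sketch of that idea; the '/stream/video' path and the getNextChunk() helper are hypothetical and only stand in for your real byte source:

// sw.js — sketch only
self.addEventListener('fetch', (event) => {
  if (!new URL(event.request.url).pathname.endsWith('/stream/video')) return;

  const stream = new ReadableStream({
    async pull(controller) {
      const chunk = await getNextChunk(); // hypothetical: resolves to a Uint8Array, or null when done
      if (chunk) controller.enqueue(chunk);
      else controller.close();
    },
  });

  event.respondWith(new Response(stream, {
    headers: { 'Content-Type': 'video/webm' }, // assumed content type
  }));
});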

            but I feel this is more generally a question about object URLs.

Object URLs really only represent immutable Blobs. The MediaStream object URL is a special case, not really applicable here, and a deprecated API, since srcObject exists for media elements these days.
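
For reference, a tiny sketch of that deprecation; `video` is assumed to be an HTMLVideoElement and `mediaStream` a MediaStream (e.g. from getUserMedia()):

video.srcObject = mediaStream;                    // preferred today
// video.src = URL.createObjectURL(mediaStream); // deprecated MediaStream object URL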

            Create a Blob and then use URL.createObjectURL(). Unfortunately, this seems to be static - but perhaps there is a way to mutate the contents?

            No, Blobs are immutable.

            Create a MediaSource. However, this only works for video...

            ... or audio.

            Source https://stackoverflow.com/questions/60008344

            QUESTION

            Difference between "update" and "updateend" events in Media Source Extensions
            Asked 2019-Dec-07 at 23:18

            Could anyone explain the difference between the two SourceBuffer events and when to use one over the other? The W3C spec is confusing to me because it reads like the same thing ("completed" vs "ended"):

            1. update - The append or remove has successfully completed
            2. updateend - The append or remove has ended.

Testing with the code below; Chrome fires the error event, Firefox does not:

            ...

            ANSWER

            Answered 2019-Dec-07 at 09:31

            The difference lies in the success of the completion.

The update event is fired only when the append or remove operation actually succeeded, while updateend is also fired when there has been an error while appending, or when the operation has been aborted.
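
A minimal sketch of wiring up both events; `video`, the segment URL, and the codec string are assumptions for illustration:

const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp9"');

  sourceBuffer.addEventListener('update', () => {
    // fires only when the append/remove completed successfully
  });
  sourceBuffer.addEventListener('updateend', () => {
    // fires after every append/remove, including errored or aborted ones
  });
  sourceBuffer.addEventListener('error', () => {
    // the append failed; updateend will still follow, update will not
  });

  const segment = await (await fetch('/segments/init.webm')).arrayBuffer(); // assumed URL
  sourceBuffer.appendBuffer(segment);
});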

            Source https://stackoverflow.com/questions/59222837

            QUESTION

            How to Access HTML5 Video Decoding Status?
            Asked 2019-Nov-19 at 16:42

I have been working with the HTML Video Media Source Extensions (MSE), and here is an overview of the graph from w3.org showing how MSE interacts with the HTML Video Element:

If I understand correctly, MSE only feeds the source. The video decoding job is still done by the HTML Video Element, and it is the only entry point to hardware decoding support, according to this StackOverflow post on JS video decoding.

            I have two questions:

1. When accessing the video's buffered attribute, does it refer to the decoded buffer in the HTML element or to the downloaded/parsed buffer in MSE? If it refers to the downloaded buffer, as the MDN docs say, is it possible to get the decoded buffer range?
2. Some low-performance computers cannot decode high-resolution videos fast enough because they lack GPU hardware support. At the same time, with really good internet bandwidth, an Adaptive Bit Rate (ABR) algorithm will always try to feed high resolutions to those computers, leading to a choppy playback experience. Is there any solution to this?

            Thank you so much for any advice!

            ...

            ANSWER

            Answered 2019-Nov-19 at 16:42

            Looking at your questions in turn:

1. It is the downloaded buffer - the decoded content is typically not available at the JavaScript app level, and is not even available to the OS if it is an encrypted media stream and the device supports a secure media path. Assuming the video is not encrypted, there is nothing in theory to stop you decoding it yourself in JavaScript, but it would obviously be slow. There are some ffmpeg ports to JavaScript available (e.g. https://github.com/Kagami/ffmpeg.js/), but these will still be relatively slow.

2. Most HTML5 players include a way to manually or programmatically set or limit the maximum resolution that the player requests from the resolutions available in the manifest. Different players may also have different ABR algorithms, and some will include CPU as a factor in the algorithm. Some players may even support multiple or custom ABR algorithms so you can add your own criteria. If you want to see an example of an algorithm that allows for CPU, look at the 'DroppedFramesRule' in DASH.js: https://github.com/Dash-Industry-Forum/dash.js/wiki/ABR-Logic (a short sketch of reading the relevant element-level signals follows below).
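
A minimal sketch of reading the two element-level signals mentioned above: the (downloaded) buffered time ranges and the decode-quality counters that ABR rules such as DroppedFramesRule build on. `video` is assumed to be the playing HTMLVideoElement:

const video = document.querySelector('video');

// buffered reports the downloaded/parsed ranges held by the element, not what has been decoded
for (let i = 0; i < video.buffered.length; i++) {
  console.log('buffered:', video.buffered.start(i), '-', video.buffered.end(i));
}

// Dropped frames are the usual signal that decoding cannot keep up with the selected resolution
const quality = video.getVideoPlaybackQuality();
console.log(quality.droppedVideoFrames, 'of', quality.totalVideoFrames, 'frames dropped');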

            Source https://stackoverflow.com/questions/58341106

            QUESTION

            Custom ExoPlayer MediaSource -- where to start?
            Asked 2019-May-14 at 09:23

            I'm working on creating a custom media player using ExoPlayer (I've previously opened several questions on the same topic, because I'm very new to Android development and it seems like I run into a wall with every line of code I write).

            As part of this custom player I would like to download, parse, and handle an XML file that our business produces to define our content. This XML file gives a URL for a network ID (a 4-6 second video advertising the owner of the content), a URL for the content, and an ad tag URL for playing pre-roll and mid-roll ads.

            My goal is to pass this XML file as a video source to prepare(), call setPlayWhenReady(true), then have everything play as expected (network ID, content, and ads)

To do this I believe I need to create a custom MediaSource -- however I cannot find any good documentation or tutorials on doing so. The ExoPlayer documentation on MediaSources is practically useless in this case, only describing how to utilize ConcatenatingMediaSource, MergingMediaSource, and LoopingMediaSource to customize the playback of media.

            Update

Continuing to research on my own, it's possible what I want may be accomplished with a custom Extractor. When I pass the content to an ExtractorMediaSource I receive the error com.google.android.exoplayer2.source.UnrecognizedInputFormatException: None of the available extractors (MatroskaExtractor, FragmentedMp4Extractor, Mp4Extractor, Mp3Extractor, AdtsExtractor, Ac3Extractor, TsExtractor, FlvExtractor, OggExtractor, PsExtractor, WavExtractor, AmrExtractor) could read the stream. This makes me wonder if it's better to have an Extractor parse the XML, pull the content, and pass the data back. I'm not sure yet what the difference is between these two components or which is a better fit, and the documentation is lacking.

            ...

            ANSWER

            Answered 2019-May-14 at 09:23

            Therefore the parsing of this XML file should be the responsibility of the video player, not of the client's app.

            So you are essentially trying to create a new schema for distributing video for an underlying player (whatever that might be, to deal with). This seems like client side logic. However, you want an answer so I'll try and give you one.

            First, Extractor in ExoPlayer should not be used for parsing your XML, as per the docs:

            Extracts media data from a container format.

            This would be used for extracting the video data from a video container e.g. MP4.

In your scenario you probably want to look at something similar to the DashManifestParser, which uses the ParsingLoadable.Parser, whose responsibility is to parse your input model. This ParsingLoadable.Parser is then used by a MediaSource to get the information it needs for playback.

However, I would not recommend doing any of that. The best option for you in this scenario would be to create a parser to grab the content URL and just pass that to an underlying player. Your content URL will link to an MP4 container, perhaps DRM'd content, etc., but all of that can be handled by the player just fine without adding all of this other complexity.

            As for creating adverts, this can be done in a number of ways:

• Have a single player instance, swapping between content and adverts. Easy, but then you need to keep track of position information, and you will also have buffering when you switch.
• Have a single player instance but use ConcatenatingMediaSource; for this you would parse the XML, create a MediaSource for the content and for each advert, and then add these to the ConcatenatingMediaSource.
• Have a single player instance but use the AdsLoader provided by ExoPlayer. This is the best solution but the documentation is sadly lacking. For this you can provide a link to load adverts and a separate link to load content.

            So. To summarise.

            • Create a parser that can get the info you need from the XML i.e. content link and advert links.
            • Create a player that creates a media source for the content and then media sources for the adverts, add these to the concatenating media source.

If you'd like to see how to do certain aspects, I would suggest taking a look at our open-source library that uses ExoPlayer under the hood. We've even recently started to use an AdsLoader. https://github.com/novoda/no-player

            Source https://stackoverflow.com/questions/56115056

            QUESTION

            iOS - How to get params in One-Link of AppsFlyer. App was installed and launched
            Asked 2019-Feb-17 at 09:28

I can get the params (campaign, media-source, etc...) of a One-Link if my app is not installed. I use the method below to do it.

            ...

            ANSWER

            Answered 2019-Feb-17 at 09:28

onAppOpenAttribution is triggered every time you open the app from a deep link (in your case, the One-Link).

I can get the params (campaign, media-source, etc...) of a One-Link if my app is not installed.

Right, the first time both callbacks are triggered: onAppOpenAttribution and onConversionDataReceived.

But if my app was installed, launched and then I click on another One-Link to open my app

That can happen if you try to open the app from a One-Link that does not belong to AppsFlyer.

For example, this link (Universal Link) https://rndemotest.onelink.me/7y5s/f78c46d5 will give you the media source, campaign, etc. through onAppOpenAttribution, where 7y5s is your One-Link ID defined in the "ONELINK CONFIGURATION" section of the dashboard.

            [EDIT]

Be sure you run the latest AppsFlyer SDK version:

Deep linking with short links for iOS Universal Links or Android App Links is only supported from SDK version 4.8.0

Generally, you should get a response such as {"link": ""} for a full link, e.g. {"link":"https://abc.onelink.me/2347196006?pid=User%20invite&c=CMTT2019einvite&af_dp=abc%3A%2F%2F"}

For a One-Link, you should get all the information, including media source, ... .

BTW, a code snippet example of how to handle the onAppOpenAttribution response can be found at the source link below.

            Source https://stackoverflow.com/questions/54724412

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install media-source

            You can download it from GitHub.

            Support

            The issue is pending further clarification from the assignee, likely the original bug filer or another who reported aspects of the issue in the bug’s history. The feedback request needs to be in a comment associated with the addition of this label, along with a request for reassignment back to an editor once feedback is provided.

            CLONE
          • HTTPS

            https://github.com/w3c/media-source.git

          • CLI

            gh repo clone w3c/media-source

• SSH

            git@github.com:w3c/media-source.git
