mediadevices | Go implementation of the MediaDevices API | Video Utils library
kandi X-RAY | mediadevices Summary
Go implementation of the MediaDevices API.
Community Discussions
Trending Discussions on mediadevices
QUESTION
Here is my component code (it is called on a page with );
ANSWER
Answered 2021-Jun-08 at 19:26
The issue seems to be that the `update` function, which is called periodically, does not have access to the latest `detect` state from the `useState()` hook.
Some changes in functionality compared to the original code:
- AudioContext has its own state, one of 'suspended', 'running', 'closed' or 'interrupted', so mirroring has to be set up to update the `detect` React state so React can re-render every time the AudioContext state changes.
- The click handler was changed according to React's event handling.
- setTimeout was replaced with setInterval for convenience.
- Cleanup was added to close the AudioContext when the component is unmounted.
- A loading state is displayed until the user grants access to the microphone.
For the `update` function to get the latest `detect` value, I'm calling `setDetect` with a callback. This looks hacky to me, but it works; maybe a class component implementation is better (see below).
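The answer's code was not captured in this summary; a minimal sketch of the setDetect-with-a-callback pattern it describes (the interval length and logging are illustrative) might look like:

```js
import { useEffect, useState } from 'react';

function MicrophoneDetector() {
  const [detect, setDetect] = useState(false);

  useEffect(() => {
    const id = setInterval(() => {
      // Passing a callback to the setter sidesteps the stale-closure
      // problem: `prev` is always the latest state, even though this
      // closure was created only once when the interval was registered.
      setDetect((prev) => {
        console.log('latest detect value:', prev); // hypothetical use
        return prev; // returning prev unchanged just reads the value
      });
    }, 250); // hypothetical polling interval
    return () => clearInterval(id); // cleanup on unmount
  }, []);

  return detect ? 'Sound detected' : 'Listening...';
}
```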
QUESTION
I am capturing user screen and audio using getDisplayMedia and getUserMedia and able to record the complete screen capture. But this works only on Chrome and not on Firefox. When I run my application on Firefox it throws error 'DOMException: AudioContext.createMediaStreamSource: No audio tracks in MediaStream'. Below is my code snippet. I have latest version of both browsers installed. Any help would be appreciated. Thanks in advance.
Note: It's throwing the error on the line `context.createMediaStreamSource(desktopStream)`.
ANSWER
Answered 2021-Jun-08 at 10:59
Firefox doesn't currently support capturing audio using `getDisplayMedia`. There's a feature request for it.
What you could do is check whether your streams have any audio tracks before creating the audio node, like this:
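The snippet that followed was stripped in extraction; a minimal sketch of that guard, assuming the stream variable is named `desktopStream` as in the question, could be:

```js
const context = new AudioContext();

// Only create the source node when the captured stream actually carries
// audio; Firefox's getDisplayMedia currently returns video-only streams.
if (desktopStream.getAudioTracks().length > 0) {
  const desktopSource = context.createMediaStreamSource(desktopStream);
  desktopSource.connect(context.destination); // or into a recording graph
} else {
  console.warn('No audio tracks in the desktop stream; skipping audio node.');
}
```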
QUESTION
Problem:
In my React application, I have set up screen recording using `navigator.mediaDevices.getDisplayMedia`. When I set the stream on a video tag and play it, it plays successfully. But when I try to save it to my local machine, it saves only 0 minutes of video.
This is my code.
ANSWER
Answered 2021-Jun-07 at 19:14
WebRTC (`getDisplayMedia`) typically records as WebM.
I have a sample app running at record.a.video, and the code (which looks similar to yours, except for the MIME type) is at https://github.com/dougsillars/recordavideo/blob/main/public/index.js#L652
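The answer's snippet was not captured in this summary; a minimal sketch of recording a `getDisplayMedia` stream as WebM and saving it locally, with all names and the capture length chosen for illustration, might look like:

```js
async function recordScreen() {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    // Assemble the recorded chunks into one WebM file and download it.
    const blob = new Blob(chunks, { type: 'video/webm' });
    const a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'recording.webm';
    a.click();
  };

  recorder.start();
  setTimeout(() => recorder.stop(), 5000); // hypothetical 5-second capture
}
```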
QUESTION
I am recording and sending audio via a website. For that purpose I use the MediaRecorder API.
There are no issues when using the site on desktop or Android devices and according to the MediaRecorder documentation, since a release in September 2020, iOS 14 should be supported as well.
The MediaRecorder is instantiated like this:
ANSWER
Answered 2021-Jun-07 at 17:33
It turns out `video/mp4` works with iOS. It can be used for audio-only recording as well, even though it says video.
Since other browsers don't support `video/mp4`, a try/catch with `video/mp4` as the fallback can be used, which results in the following solution:
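The solution code itself was stripped in extraction; a minimal sketch of that fallback, assuming `audio/webm` as the preferred type, could be:

```js
function createRecorder(stream) {
  try {
    // Most browsers support WebM audio recording...
    return new MediaRecorder(stream, { mimeType: 'audio/webm' });
  } catch (err) {
    // ...but iOS Safari throws NotSupportedError, so fall back to
    // video/mp4, which works for audio-only recording as well.
    return new MediaRecorder(stream, { mimeType: 'video/mp4' });
  }
}
```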
QUESTION
I am trying to access the webcam (more specifically the phone camera) using navigator.mediaDevices.getUserMedia(). However, no matter what I try, I am not getting the prompt to 'allow use of the camera'. I am primarily using Chrome. However, I have tried Brave & Edge, with the same result. Firefox does provide the prompt, although only for desktop. I have not been successful at all on any mobile browser.
To rule out some of the common answers I have found on Stack Overflow and the web:
- Yes, permission is granted on my Mac (I have also tested on Windows)
- No, I have not denied permission in the past
- I have tried both localhost and our https:// production server
I have tried several iterations of code with no success. Here is the current code block doing the work:
ANSWER
Answered 2021-Jun-07 at 15:35
Check the headers of your application. The feature policy can block hardware access on the client side.
Make sure `Feature-Policy: camera 'none'` doesn't exist in your response headers.
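As an illustration only (the question never names a server framework, so Express here is an assumption), the header could be set or corrected like this:

```js
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // A header like this would silently suppress the camera prompt:
  //   res.set('Feature-Policy', "camera 'none'");
  // Explicitly allowing the current origin avoids that:
  res.set('Feature-Policy', "camera 'self'");
  next();
});

app.listen(3000);
```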
QUESTION
I am trying to run this HTML example https://codepen.io/mediapipe/details/KKgVaPJ from https://google.github.io/mediapipe/solutions/face_mesh#javascript-solution-api in a create-react-app application. I have already done the following:
- npm install of all the facemesh mediapipe packages.
- Replaced the jsdelivr tags with node imports, and I got the definitions and functions.
- Replaced the video element with react-cam.
I don't know how to replace this jsdelivr; maybe it is affecting things:
ANSWER
Answered 2021-Jun-07 at 14:59
You don't have to replace the jsdelivr; that piece of code is fine. I also think you need to reorder your code a little bit:
- You should put the faceMesh initialization inside the useEffect, with [] as the dependency array; that way, the algorithm will start when the page is rendered for the first time.
- Also, you don't need to get videoElement and canvasElement with doc.*, because you already have refs defined.
An example of code:
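The original example was not captured here; a minimal sketch along those two points, assuming the `@mediapipe/face_mesh` and `react-webcam` npm packages (component name, sizes, and polling interval are illustrative), might be:

```jsx
import React, { useEffect, useRef } from 'react';
import Webcam from 'react-webcam';
import { FaceMesh } from '@mediapipe/face_mesh';

export default function FaceMeshView() {
  const webcamRef = useRef(null);
  const canvasRef = useRef(null);

  useEffect(() => {
    const faceMesh = new FaceMesh({
      // Keeping the jsdelivr locateFile is fine; it only fetches wasm assets.
      locateFile: (file) =>
        `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`,
    });
    faceMesh.setOptions({ maxNumFaces: 1 });
    faceMesh.onResults((results) => {
      const canvas = canvasRef.current;
      const ctx = canvas.getContext('2d');
      ctx.drawImage(results.image, 0, 0, canvas.width, canvas.height);
    });

    // Feed frames to FaceMesh once the webcam video is ready.
    const interval = setInterval(() => {
      const video = webcamRef.current && webcamRef.current.video;
      if (video && video.readyState === 4) {
        faceMesh.send({ image: video });
      }
    }, 100);
    return () => clearInterval(interval);
  }, []); // [] => initialize once, on first render

  return (
    <div>
      <Webcam ref={webcamRef} />
      <canvas ref={canvasRef} width={640} height={480} />
    </div>
  );
}
```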
QUESTION
I'm trying to establish a peer connection between two clients via WebRTC and then stream video from the camera through the connection. The problem is, there's no video shown on the remote side, although I can clearly see that the `remotePc.ontrack` event was fired. Also, no error was thrown. I do NOT want to use the ICE candidate mechanism (and it should NOT be needed), because the resulting application will only be used on a local network (the signaling server will only exchange the SDPs for the clients). Why is my example not working?
ANSWER
Answered 2021-Jun-06 at 16:49
ICE candidates are needed, as they tell you the local addresses where the clients will connect to each other.
You won't need STUN servers, though.
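A minimal sketch of that candidate exchange between two in-page connections, with variable and element names chosen to mirror the question, might be:

```js
// No STUN/TURN servers are listed, but candidates must still be exchanged.
const localPc = new RTCPeerConnection({ iceServers: [] });
const remotePc = new RTCPeerConnection({ iceServers: [] });

// Host candidates (local-network addresses) are handed straight across.
localPc.onicecandidate = (e) => {
  if (e.candidate) remotePc.addIceCandidate(e.candidate);
};
remotePc.onicecandidate = (e) => {
  if (e.candidate) localPc.addIceCandidate(e.candidate);
};

remotePc.ontrack = (e) => {
  // Attach the remote stream to a (hypothetical) video element.
  document.querySelector('#remoteVideo').srcObject = e.streams[0];
};
```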
QUESTION
Note: Minimal working example here. Click on the change text button, and you can see the text appear for one frame and then disappear.
I wanted to add an overlay to my video, so I stacked two canvases and filled text into the transparent top canvas.
However, the text does not stick. It disappears.
To test whether I was filling the text correctly, I tried using a black canvas (no video) underneath.
I just needed to fillText once, and the text stayed.
However, with the video canvas underneath, the text won't stick unless I draw it in requestAnimationFrame, which I believe keeps drawing the same text every frame, which is unnecessary.
The text canvas is supposed to be separate from the video canvas. Why is it getting wiped without requestAnimationFrame?
How can I fix it?
Minimal working example here, but the code is also shown below.
ANSWER
Answered 2021-Jun-04 at 11:25
When you update the canvas from the video, all of the pixels in the canvas are updated with pixels from the video. Think of it like a wall:
you painted "here's my text" on the wall.
Then you painted the whole wall blue (your text is gone, and the wall is just blue).
If you paint the whole wall blue and then repaint "here's my text", you'd still see your text on the blue wall.
When you redraw the canvas, you have to redraw everything you want to appear on it. If the video "covers" your text, you've got to redraw the text.
There is a working example at https://github.com/dougsillars/chyronavideo which draws a chyron (the bar at the bottom of a news story) on each frame of the canvas.
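A minimal sketch of that per-frame redraw (element IDs, font, and text position are assumptions):

```js
const canvas = document.querySelector('#videoCanvas');
const ctx = canvas.getContext('2d');
const video = document.querySelector('#sourceVideo');

function draw() {
  // drawImage repaints every pixel, wiping whatever was drawn before...
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  // ...so the overlay text has to be redrawn on every frame too.
  ctx.font = '24px sans-serif';
  ctx.fillStyle = 'white';
  ctx.fillText("here's my text", 20, canvas.height - 20);
  requestAnimationFrame(draw);
}
requestAnimationFrame(draw);
```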
QUESTION
At the highest level, I'm trying to pass a Blob to a function that will transcribe the data and return the transcript. I'm struggling to get the async parts of the process lined up correctly. Any insight would be appreciated.
The two files I'm working with are below. In the record.jsx file, I'm calling the googleTranscribe function (located in the second file) to do the transcription work and, hopefully, return the transcript. This is where I'm running into the problem: I can get the transcript but cannot return it as a value. I know I'm doing something wrong with async/await/promises; I just can't quite figure out what.
record.jsx
ANSWER
Answered 2021-Jun-06 at 01:48
The problem is in the second file. The line with your Axios call should be modified as such:
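The modified line itself was not captured; a sketch of the shape this fix usually takes (the endpoint, form field, and response shape are assumptions) is:

```js
import axios from 'axios';

async function googleTranscribe(audioBlob) {
  const formData = new FormData();
  formData.append('audio', audioBlob);

  // Awaiting the Axios call and returning its result is what lets the
  // caller in record.jsx write: const transcript = await googleTranscribe(blob);
  const response = await axios.post('/api/transcribe', formData);
  return response.data.transcript;
}
```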
QUESTION
I am using a media stream recorder together with ffmpeg in Electron.js. When stopping and then starting the recording again, I get this error.
I am starting the recording with timeslice = 0.
ANSWER
Answered 2021-Jun-01 at 20:09
Apparently I had an old Chrome version, so it was the wrong version. Thanks to everyone who helped.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.