getUserMedia | browser getUserMedia shim with a node.js style error | Runtime Environment library
kandi X-RAY | getUserMedia Summary
A tiny browser module that gives us a simple API for getting access to a user's camera or microphone by wrapping the navigator.getUserMedia API in modern browsers.
getUserMedia Examples and Code Snippets
// overrides getUserMedia so it applies an invert filter on the videoTrack
{
  const mediaDevices = navigator.mediaDevices;
  const original_gUM = mediaDevices.getUserMedia.bind(mediaDevices);
  mediaDevices.getUserMedia = async (...args) => {
    const stream = await original_gUM(...args);
    // the invert-filter processing of the video track is elided in this excerpt
    return stream;
  };
}
import 'package:flutter/material.dart';
import 'package:flutter_webrtc/webrtc.dart';
import 'dart:core';

/**
 * getUserMedia sample
 */
class GetUserMediaSample extends StatefulWidget {
  static String tag = 'get_usermedia_sample';
  @override
  _GetUserMediaSampleState createState() => _GetUserMediaSampleState();
  // the State implementation is truncated in this excerpt
}
// supposing we have the getUserMedia stream and a canvas,
// we want to stream the canvas content and the
// amplified audio from the user's microphone
var s = canvas.captureStream();
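The snippet above is cut off on this page. A minimal sketch of how the rest could look, assuming the getUserMedia stream is in a variable named gumStream (a name introduced here for illustration): the microphone audio is routed through a GainNode for amplification, and the resulting track is added to the canvas stream.

var actx = new AudioContext();
var source = actx.createMediaStreamSource(gumStream); // mic input
var gain = actx.createGain();
gain.gain.value = 2; // amplify the microphone audio
var dest = actx.createMediaStreamDestination();
source.connect(gain).connect(dest);
// add the processed audio track to the canvas video stream
s.addTrack(dest.stream.getAudioTracks()[0]);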
Community Discussions
Trending Discussions on getUserMedia
QUESTION
I am trying to take a snapshot of a video feed from a webcam. The preview works fine, but when I try to capture it and turn it into a picture only a very small part of it is captured. A 320x150 part of the right top corner.
Already tried:
- Changing CSS display property
- Setting canvas width and height to the video size (which shows 1200x720, so that is correct)
- Changing the location of the canvas.
CSS:
ANSWER
Answered 2022-Mar-11 at 17:57
You need to set the size of the actual canvas, like this:
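The code from the answer is not included on this page; a minimal sketch of the fix it describes, assuming a video element fed by getUserMedia and a canvas used for the snapshot:

// set the canvas's drawing-buffer size to the video's intrinsic size,
// then draw the full frame
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);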
QUESTION
I'm trying to create a sound using Fourier coefficients.
First of all please let me show how I got Fourier coefficients.
(1) I took a snapshot of a waveform from a microphone sound.
- Getting microphone: getUserMedia()
- Getting microphone sound: MediaStreamAudioSourceNode
- Getting waveform data: AnalyserNode.getByteTimeDomainData()
The data looks like the below (I stringified the Uint8Array, which is the return value of getByteTimeDomainData(), and added a length property in order to change this object to an Array later):
ANSWER
Answered 2022-Feb-04 at 23:39
In Go I have taken an array ARR1 which represents a time series (it could be audio, or in my case an image), where each element is a floating-point value representing the height of the raw audio curve as it wobbles. I then fed this floating-point array into an FFT call, which returned a new array ARR2 that is by definition in the frequency domain, where each element is a single complex number with floating-point real and imaginary parts. When I then fed this array into an inverse FFT call (IFFT), it gave back a floating-point array ARR3 in the time domain; to a first approximation, ARR3 matched ARR1. Needless to say, if I then took ARR3 and fed it into an FFT call, its output ARR4 would match ARR2. Essentially you have: time_domain_array -> FFT call -> frequency_domain_array -> inverse FFT call -> time_domain_array, rinse and repeat.
I know the Web Audio API has an FFT call; I do not know whether it has an IFFT call, but if there is no IFFT (inverse FFT) you can write your own. Here is how: iterate across ARR2 and for each element calculate the magnitude of that frequency. Each element of ARR2 represents one frequency; in the literature you will see ARR2 referred to as the frequency bins, which simply means each element of the array holds one complex number, and as you iterate across the array each successive element represents a distinct frequency, starting from element 0 holding frequency 0, with each subsequent element representing a frequency found by adding incr_freq to the frequency of the prior element.
Each index of ARR2 represents a frequency, where element 0 is the DC bias, i.e. the zero-offset bias of your input ARR1 curve; if that curve is centered about the zero-crossing point this value is zero, and element 0 can normally be ignored. The difference in frequency between successive elements of ARR2 is a constant frequency increment, which can be calculated using
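The answer is cut off here on this page; the standard relation it is presumably about to state is the FFT bin width, sketched below (sampleRate and numberOfSamples are names introduced here for illustration):

// frequency resolution (bin width) of an N-point FFT:
// each successive element of ARR2 is this many Hz apart
const incr_freq = sampleRate / numberOfSamples;
// so bin k corresponds to frequency k * incr_freq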
QUESTION
ANSWER
Answered 2021-Dec-23 at 20:02
Note that, at the time I'm writing, this is still experimental territory.
You can query permissions using Permissions.query():
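The answer's code is not included on this page; a minimal sketch of such a query for the camera permission (the "camera" permission name is not supported in every browser, hence the catch):

navigator.permissions.query({ name: 'camera' })
  .then((status) => {
    console.log(status.state); // "granted", "denied", or "prompt"
    status.onchange = () => console.log('changed to', status.state);
  })
  .catch((err) => console.log('camera permission is not queryable here:', err));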
QUESTION
ANSWER
Answered 2021-Dec-21 at 09:48
You should be using navigator.mediaDevices.getUserMedia, the one in your first example snippet. The second example snippet is deprecated.
The issue here (from what I can see) is that getUserMedia is supposed to request permissions every time it is called, so the prompt is expected.
You need to store the stream in a variable, and that variable needs to be declared outside of the event handler.
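A minimal sketch of that suggestion, with hypothetical button and video elements standing in for the asker's markup:

// one shared stream, so getUserMedia (and its prompt) runs only once
let stream = null;

async function ensureStream() {
  if (!stream) {
    stream = await navigator.mediaDevices.getUserMedia({ video: true });
  }
  return stream;
}

document.querySelector('button').addEventListener('click', async () => {
  document.querySelector('video').srcObject = await ensureStream();
});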
QUESTION
I have a task, but I can't seem to get it done. I've created a very simple WebRTC stream on a Raspberry Pi which will function as a videochat-camera. With ionic I made a simple mobile application which can display my WebRTC stream when the phone is connected to the same network. This all works.
So right now I have my own local stream which shows on my app. I now want to be able to broadcast this stream from my phone to a live server, so other people can spectate it.
I know how to create a NodeJS server which serves my webcam with the 'getUserMedia' function. But I want to 'push' my WebRTC stream to a live server so I can retrieve a public URL for it.
Is there a way to push my local WebSocket to a live environment? I'm using a local RTCPeerConnection to create a MediaStream object.
ANSWER
Answered 2021-Dec-10 at 16:54
"Is there a way to push my local WebSocket to a live environment?"
It's not straightforward because you need more than vanilla webrtc (which is peer-to-peer). What you want is an SFU. Take a look at mediasoup.
To realize why this is needed, think about how the WebRTC connection is established in your current app: it's a negotiation between two parties (facilitated by a signaling server). In order to turn this into a multicast setup, you will need a proxy of sorts that establishes separate peer-to-peer connections to all senders and receivers.
QUESTION
First, I want to mention that I am very new to WebRTC, so any advice would be very helpful.
Currently I am using aiortc
library to build my own WebRTC app.
Here is what I am trying to do.
I have 2 peers: one is a web browser, written in JavaScript, and the other is a Python script that works as the signaling server and as a peer at the same time. So if you access my web page, you will send video frames to the server, and the server will modify them and send them back.
So I finished testing my app in a LAN environment and everything worked as I expected. But once I deployed my app to a remote server (Google Cloud Run), I encountered an ICE connection state failure, and got this log on the remote server.
(I think it is due to a disconnection between peers, not a low-memory problem. I tried with 16GB RAM and 4 CPUs and it still didn't work.)
Then I dug into more information, and found that a TURN/STUN server is necessary to build a WebRTC app that works over the Internet. So I added Google STUN servers to my RTCPeerConnection like this: [{'urls': 'stun:stun.l.google.com:19302'}, {'urls': 'stun:stun1.l.google.com:19302'}, {'urls': 'stun:stun2.l.google.com:19302'}]
(I added them on both the JavaScript and the Python side, because both sides work as peers.) Unfortunately, it still didn't work.
Now I am planning to build my own TURN server, but I am afraid a TURN server might not solve this problem. So I would like any advice from you, since I am quite stuck in my situation.
P.S. I have set up SSL encryption, so getUserMedia is working fine.
SDP details (Offer/Answer):
ANSWER
Answered 2021-Dec-10 at 15:13
If everything works locally and these ICE servers are set, verify that your Google Cloud server has the correct firewall rules for the WebRTC ports (not only your signaling port; check the SDP/ICE you exchange). Also, this WebRTC page lets you check whether a STUN/TURN server works from your client.
You will not need STUN on your Python side: as it's a server, its IP may be public (unless you don't want it to be). STUN lets a peer find its public IP and keeps the port open.
On your server you need to open your signaling port (likely the WebSocket where you exchange the SDP) and the peer-to-peer ports (the candidate lines in the SDP); the media/data will go through those. For each media (SDP m-line) there is usually one port used.
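For reference, a sketch of the ICE server configuration discussed above; the TURN entry and its credentials are placeholders, not values from the question:

const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      // hypothetical self-hosted TURN server for when direct P2P fails
      urls: 'turn:turn.example.com:3478',
      username: 'user',
      credential: 'secret',
    },
  ],
});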
QUESTION
I have a webpage where I want the user to take a picture with his laptop/phone camera. Once he clicks on a button, a modal is shown and the following JS will start the camera stream to take the picture:
ANSWER
Answered 2021-Nov-17 at 11:12
If any track has not been stopped then your camera will still be active. In your stopStreaming function you only stop the first track in the returned array.
If you instead iterate through the tracks, you may catch the ones you aren't currently stopping:
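The answer's code is missing from this page; a minimal sketch of the suggested fix, assuming a stopStreaming function that receives the active stream:

function stopStreaming(stream) {
  // stop every track on the stream, not just the first one
  stream.getTracks().forEach((track) => track.stop());
}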
QUESTION
Building a recording app with React + TypeScript. I tried to set state with the stream I get, and it seems to be retrieved successfully according to the console, but it could not be set on the recordTargetStream state.
ANSWER
Answered 2021-Sep-28 at 04:28
You should use the recordTargetStream state in the useCallback hook's dependencies. useCallback will return a memoized version of the callback that only changes if one of the dependencies has changed. Every value referenced inside the callback should also appear in the dependencies array.
index.tsx:
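The answer's index.tsx is not reproduced on this page; a minimal sketch of the pattern it describes, in plain JavaScript, with a hypothetical startRecording callback:

import { useCallback, useState } from 'react';

function Recorder() {
  const [recordTargetStream, setRecordTargetStream] = useState(null);

  // memoized callback; re-created whenever recordTargetStream changes,
  // so it never closes over a stale stream value
  const startRecording = useCallback(() => {
    if (!recordTargetStream) return;
    new MediaRecorder(recordTargetStream).start();
  }, [recordTargetStream]);

  return null; // wire setRecordTargetStream to getUserMedia and render UI here
}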
QUESTION
I'm currently developing an API which takes a videoTrack as input and returns a processed videoTrack. I've managed to draw the processed video on a canvas and was hoping to use canvas.captureStream() to capture the video stream from it.
As it turns out, I can capture a non-blank stream only if I load the canvas into the DOM (document.body.appendChild(myTempCanvas)) and keep it displayed; the stream turns blank if I hide the canvas (myTempCanvas.style.display = "none").
So is there any way to capture the stream and keep the canvas hidden as well?
Example: if I uncomment the line canvas.style.display = "none";, then the output_video turns blank as well.
ANSWER
Answered 2021-Sep-26 at 10:54
In your drawCanvas function you are setting the canvas width and height to its computed CSS width and height. When you hide the canvas through display: none, or when it's not appended to the document, these values will be "0px", because the canvas is not in the CSSOM. A canvas with zero width or height can't produce any output.
Setting the canvas width or height is a very slow operation; in Chrome it allocates a brand-new image bitmap every time you do so, on top of resetting all of the canvas context's default values.
So never set the canvas width or height on every frame, only when it actually changed; and don't set it to the computed size of your element, set it to the size of your input image, then set the CSS size accordingly.
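A sketch of what that advice could look like in the asker's (hypothetical) drawCanvas function, sizing the canvas from the source video rather than from CSS:

function drawCanvas(canvas, video) {
  // resize the drawing buffer only when the source size actually changed
  if (canvas.width !== video.videoWidth || canvas.height !== video.videoHeight) {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
  }
  canvas.getContext('2d').drawImage(video, 0, 0);
}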
QUESTION
How do I stream real-time audio from one client to possibly multiple clients with socket.io?
I got to the point where I can record audio and play it back in the same tab.
That's my current code for this:
ANSWER
Answered 2021-Sep-24 at 12:40
After a bunch of trial and error I got to a solution I'm satisfied with. Here is the client-side JavaScript; the server-side socket.io server just forwards the data to the correct clients, which should be trivial.
There is some frontend stuff in there as well. Just ignore it.
main.js
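The main.js from the answer is not reproduced on this page. As a stand-in, here is a sketch of the server-side relay the answer calls trivial; the room and event names ('join', 'audio-chunk') are assumptions for illustration:

const { Server } = require('socket.io');
const io = new Server(3000);

io.on('connection', (socket) => {
  socket.on('join', (roomId) => socket.join(roomId));
  // forward each raw audio chunk to every other client in the room
  socket.on('audio-chunk', (roomId, chunk) => {
    socket.to(roomId).emit('audio-chunk', chunk);
  });
});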
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install getUserMedia
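No install instructions survive on this page. Assuming this module is the shim published to npm as getusermedia (an assumption; check the repository for the exact package name), installation and basic use would look like:

// npm install getusermedia   (assumed package name)
var getUserMedia = require('getusermedia');

getUserMedia({ video: true, audio: false }, function (err, stream) {
  if (err) {
    // the shim's node.js-style, error-first callback
    console.log('failed:', err.name);
    return;
  }
  document.querySelector('video').srcObject = stream;
});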