videoInput | A video capture library for Windows | Video Utils library
kandi X-RAY | videoInput Summary
A video capture library for Windows.
Community Discussions
Trending Discussions on videoInput
QUESTION
I'm looking to make a barcode scanner app. As I'm new to Swift and Xcode, I've managed to build, with help from other Stack Overflow answers, a page where I can scan a barcode. My issue is that I don't want it full screen: I want a label on top, a button at the bottom, and the QR-code scanning view in the center. I have uploaded the screen design as well.
But I'm getting a full-screen scanner. How can I achieve this layout? Here's my code:
...ANSWER
Answered 2022-Feb-01 at 08:46

You can lay out the screen to match your design in the storyboard, like this: the gray area is a UIView to which we will add the previewLayer for the scanner. The ViewController code should look like this:
QUESTION
ANSWER
Answered 2022-Jan-27 at 21:56

The information you are looking for is down in the getDevices function. The following runs on my system and shows the device name and ID in the console window. It also creates global arrays for audio and video devices that you can access in setup(). Note that the device list is obtained in preload(), which runs only once before the rest of your code.
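The answer's exact sketch isn't reproduced above, but the core of it is splitting the enumerateDevices()-style list into the two global arrays by device kind. A minimal standalone sketch (function and variable names here are illustrative, not the answer's):

```javascript
// Split an enumerateDevices()-style list into audio and video inputs,
// like the global arrays the answer describes building in preload().
function splitDevices(deviceList) {
  const audioDevices = deviceList.filter((d) => d.kind === "audioinput");
  const videoDevices = deviceList.filter((d) => d.kind === "videoinput");
  return { audioDevices, videoDevices };
}

// In a browser this would be fed from the real device list:
// navigator.mediaDevices.enumerateDevices().then((list) => splitDevices(list));
```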
QUESTION
I'm writing an application that takes input from webcams and does some image processing on it. Currently I use Qt for video capture and display, and I get the list of available cameras using QMediaDevices::videoInputs().
However, this function does not seem to support the OBS virtual camera. The following code should dump the entire list of cameras on the system, yet I can only find my laptop's internal camera and Snap's virtual camera. (I have both OBS and Snap installed.)
...ANSWER
Answered 2022-Jan-17 at 03:09

Over the weekend I read the Qt 6 change logs and found that Qt dropped DirectShow support, while the OBS Virtual Camera is DirectShow-only. The OBS Virtual Camera can only work in Qt once it supports Media Foundation.
QUESTION
Code to detect available cameras and mic
...ANSWER
Answered 2021-Dec-27 at 11:04

I made the following changes in the getMediaStream function and it worked fine.
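The changed getMediaStream code isn't shown above, but the usual shape of camera/mic detection is to enumerate devices and check for the relevant kinds before requesting a stream. A rough sketch under that assumption (names are illustrative):

```javascript
// Illustrative sketch: report whether any camera or microphone is present,
// given a list shaped like navigator.mediaDevices.enumerateDevices() results.
function detectAvailable(devices) {
  return {
    hasCamera: devices.some((d) => d.kind === "videoinput"),
    hasMicrophone: devices.some((d) => d.kind === "audioinput"),
  };
}

// In a browser:
// navigator.mediaDevices.enumerateDevices()
//   .then((list) => console.log(detectAvailable(list)));
```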
QUESTION
I have an AVAssetWriter to record a video with an applied filter, to then play back via AVQueuePlayer.
My issue is that, on playback, the recorded video displays a black/blank screen for the first frame. To my understanding, this is due to the writer capturing audio before capturing the first actual video frame.
To attempt to resolve this, I placed a boolean check when appending to the audio writer input for whether the first video frame had been appended to the adapter. That said, I still saw a black frame on playback despite having printed out the timestamps, which showed video preceding audio... I also tried a check to start the write session only when output == video, but ended up with the same result.
Any guidance or other workaround would be appreciated.
...ANSWER
Answered 2021-Nov-04 at 16:13

It's true that in the .capturing state you make sure the first sample buffer written is a video sample buffer by discarding preceding audio sample buffers - however you are still allowing an audio sample buffer's presentation timestamp to start the timeline with writer.startSession(atSourceTime:). This means your video starts with nothing, so not only do you briefly hear nothing (which is hard to notice), you also see nothing, which your video player happens to represent with a black frame.
From this point of view, there are no black frames to remove; there is only a void to fill. You can fill this void by starting the session from the first video timestamp. This can be achieved by guarding against non-video sample buffers in the .start state, or, less cleanly, by moving writer.startSession(atSourceTime:) into if !hasWrittenFirstVideoFrame {}, I guess.
P.S. Why do you convert back and forth between CMTime and seconds? Why not stick with CMTime?
QUESTION
I have an AVAssetWriter to record a video with an applied filter, to then play back via AVQueuePlayer.
My issue is that the audio output appends to the audio input, but no sound plays in the playback. I have not come across any existing solutions and would appreciate any guidance available.
Secondarily, my .AVPlayerItemDidPlayToEndTime notification observer, which I use to loop the playback, does not fire either.
AVCaptureSession Setup
...ANSWER
Answered 2021-Oct-28 at 04:51

Start your timeline at the presentation timestamp of the first audio or video sample buffer that you encounter:
writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
Previously you started the timeline at zero, but the captured sample buffers have timestamps that usually seem to be relative to the amount of time passed since system boot, so there's a big, undesired duration between when your file "starts" (sourceTime for AVAssetWriter) and when video and audio appear.
Your question doesn't say that you don't see video, and I'd half expect some video players to skip over a big bunch of nothing to the point in the timeline where your samples begin, but in any case the file is wrong.
QUESTION
I have converted my class-based component to the function-based component below, but I am not sure whether my variables are defined correctly, and my function component runs in an infinite loop. Can someone guide me in the right direction?
...ANSWER
Answered 2021-Sep-01 at 11:02

import React, { useEffect, useState, useRef } from "react";
import {
SafeAreaView,
StyleSheet,
ScrollView,
View,
Text,
StatusBar,
TouchableOpacity,
Dimensions,
} from "react-native";
import {
RTCPeerConnection,
RTCIceCandidate,
RTCSessionDescription,
RTCView,
MediaStream,
MediaStreamTrack,
mediaDevices,
registerGlobals,
} from "react-native-webrtc";
import io from "socket.io-client";
const dimensions = Dimensions.get("window");
const pc_config = {
iceServers: [
// {
// urls: 'stun:[STUN_IP]:[PORT]',
// 'credentials': '[YOR CREDENTIALS]',
// 'username': '[USERNAME]'
// },
{
urls: "stun:stun.l.google.com:19302",
},
],
};
function App(props) {
const [localStream, SetlocalStream] = useState(null);
const [remoteStream, SetremoteStream] = useState(null);
const socket = useRef(
io.connect("https://daae-171-61-.ngrok.io/webrtcPeer", {
path: "/io/webrtc",
query: {},
})
);
const sdp = useRef(null);
const pc = useRef(new RTCPeerConnection(pc_config));
const candidates = useRef([]);
useEffect(() => {
socket.current.on("connection-success", (success) => {
console.log(success);
});
socket.current.on("offerOrAnswer", (remoteSdp) => {
// store and set the received sdp as remote description
// (the parameter must not shadow the `sdp` ref)
sdp.current = JSON.stringify(remoteSdp);
pc.current.setRemoteDescription(new RTCSessionDescription(remoteSdp));
});
socket.current.on("candidate", (candidate) => {
// console.log('From Peer... ', JSON.stringify(candidate))
// candidates.current = [...candidates.current, candidate]
pc.current.addIceCandidate(new RTCIceCandidate(candidate));
});
pc.current = new RTCPeerConnection(pc_config);
pc.current.onicecandidate = (e) => {
// send the candidates to the remote peer
// see addCandidate below to be triggered on the remote peer
if (e.candidate) {
// console.log(JSON.stringify(e.candidate))
sendToPeer("candidate", e.candidate);
}
};
// triggered when there is a change in connection state
pc.current.oniceconnectionstatechange = (e) => {
console.log(e);
};
pc.current.onaddstream = (e) => {
debugger;
// this.remoteVideoref.current.srcObject = e.streams[0]
SetremoteStream(e.stream);
};
const success = (stream) => {
console.log(stream.toURL());
SetlocalStream(stream);
pc.current.addStream(stream);
};
const failure = (e) => {
console.log("getUserMedia Error: ", e);
};
let isFront = true;
mediaDevices.enumerateDevices().then((sourceInfos) => {
console.log(sourceInfos);
let videoSourceId;
for (let i = 0; i < sourceInfos.length; i++) {
const sourceInfo = sourceInfos[i];
if (
sourceInfo.kind == "videoinput" &&
sourceInfo.facing == (isFront ? "front" : "environment")
) {
videoSourceId = sourceInfo.deviceId;
}
}
const constraints = {
audio: true,
video: {
mandatory: {
minWidth: 500, // Provide your own width, height and frame rate here
minHeight: 300,
minFrameRate: 30,
},
facingMode: isFront ? "user" : "environment",
optional: videoSourceId ? [{ sourceId: videoSourceId }] : [],
},
};
mediaDevices.getUserMedia(constraints).then(success).catch(failure);
});
}, []);
const sendToPeer = (messageType, payload) => {
socket.current.emit(messageType, {
socketID: socket.current.id,
payload,
});
};
const createOffer = () => {
console.log("Offer");
// https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/createOffer
// initiates the creation of SDP
pc.current.createOffer({ offerToReceiveVideo: 1 }).then((sdp) => {
// console.log(JSON.stringify(sdp))
// set offer sdp as local description
pc.current.setLocalDescription(sdp);
sendToPeer("offerOrAnswer", sdp);
});
};
const createAnswer = () => {
console.log("Answer");
pc.current.createAnswer({ offerToReceiveVideo: 1 }).then((sdp) => {
// console.log(JSON.stringify(sdp))
// set answer sdp as local description
pc.current.setLocalDescription(sdp);
sendToPeer("offerOrAnswer", sdp);
});
};
const setRemoteDescription = () => {
// retrieve and parse the SDP copied from the remote peer
const desc = JSON.parse(sdp.current);
// set sdp as remote description
pc.current.setRemoteDescription(new RTCSessionDescription(desc));
};
const addCandidate = () => {
// retrieve and parse the Candidate copied from the remote peer
// const candidate = JSON.parse(this.textref.value)
// console.log('Adding candidate:', candidate)
// add the candidate to the peer connection
// pc.current.addIceCandidate(new RTCIceCandidate(candidate))
candidates.current.forEach((candidate) => {
console.log(JSON.stringify(candidate));
pc.current.addIceCandidate(new RTCIceCandidate(candidate));
});
};
const remoteVideo = remoteStream ? (
<RTCView key={2} style={styles.rtcViewRemote} objectFit="contain" streamURL={remoteStream.toURL()} />
) : (
<Text style={styles.textContent}>Waiting for Peer connection ...</Text>
);
return (
<SafeAreaView style={{ flex: 1 }}>
<View style={styles.buttonsContainer}>
<TouchableOpacity style={styles.button} onPress={createOffer}>
<Text style={styles.textContent}>Call</Text>
</TouchableOpacity>
<TouchableOpacity style={styles.button} onPress={createAnswer}>
<Text style={styles.textContent}>Answer</Text>
</TouchableOpacity>
</View>
<ScrollView style={styles.scrollView}>
<View style={styles.videosContainer}>
<TouchableOpacity onPress={() => localStream._tracks[1]._switchCamera()}>
<RTCView key={1} style={styles.rtcView} streamURL={localStream && localStream.toURL()} />
</TouchableOpacity>
{remoteVideo}
</View>
</ScrollView>
</SafeAreaView>
);
}
export default App;
const styles = StyleSheet.create({
buttonsContainer: {
flexDirection: "row",
},
button: {
margin: 5,
paddingVertical: 10,
backgroundColor: "lightgrey",
borderRadius: 5,
},
textContent: {
fontFamily: "Avenir",
fontSize: 20,
textAlign: "center",
},
videosContainer: {
flex: 1,
flexDirection: "row",
justifyContent: "center",
},
rtcView: {
width: 100, //dimensions.width,
height: 200, //dimensions.height / 2,
backgroundColor: "black",
},
scrollView: {
flex: 1,
// flexDirection: 'row',
backgroundColor: "teal",
padding: 15,
},
rtcViewRemote: {
width: dimensions.width - 30,
height: 200, //dimensions.height / 2,
backgroundColor: "black",
},
});
QUESTION
I am using Vue.js with the Element UI library, and I need to show a dialog component that displays the user's camera and audio, but I get the following error in the console:
TypeError: Cannot set property 'srcObject' of undefined
As you can see, I first use the mounted hook to collect the user's video and audio information, and in the show-dialog function I retrieve that data.
Here is the code:
...ANSWER
Answered 2021-Aug-30 at 05:19

You need to use an arrow function as the callback for your $nextTick; otherwise the this variable inside the callback will not be the component object.
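A minimal standalone sketch of that difference follows; the component object and the synchronous callback runner here stand in for Vue's component instance and $nextTick, and are illustrative, not Vue's actual internals:

```javascript
// Stand-in for $nextTick: just runs the callback.
const runCallback = (cb) => cb();

const component = {
  videoElement: { srcObject: null },
  attachWrong(stream) {
    // Regular function: `this` inside the callback is NOT the component,
    // so `this.videoElement` is undefined and setting `.srcObject` throws,
    // matching "Cannot set property 'srcObject' of undefined".
    let failed = false;
    runCallback(function () {
      try {
        this.videoElement.srcObject = stream;
      } catch (e) {
        failed = true;
      }
    });
    return failed;
  },
  attachRight(stream) {
    // Arrow function: `this` is inherited from attachRight, i.e. the component.
    runCallback(() => {
      this.videoElement.srcObject = stream;
    });
    return this.videoElement.srcObject === stream;
  },
};
```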
QUESTION
I use Google Meet in the Chrome browser for online video meetings. In Google Meet I can select my webcam among the video devices; I can select any real hardware or virtual webcam and it works well.
I am sure Chrome detects all real and virtual webcams - see the content of chrome://media-internals/
But MediaDevices.enumerateDevices() is only showing real hardware webcams and not the virtual ones.
ANSWER
Answered 2021-Aug-25 at 09:49

An empty label and only the default devices being shown by enumerateDevices after a successful getUserMedia is an edge case that can only happen when testing on file:/// URLs. It should work normally on https:// URLs (and on localhost), where a successful getUserMedia call grants permission to the extended list of devices (see this PSA for details).
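As a rough illustration of the symptom the answer describes (not code from the answer), empty labels in the enumerateDevices result are the usual sign that device access has not been granted yet:

```javascript
// Heuristic sketch: before a getUserMedia permission grant, browsers return
// devices with empty label strings (and often only the default devices).
function labelsAvailable(devices) {
  return devices.length > 0 && devices.every((d) => d.label !== "");
}

// In a browser, call getUserMedia first, then re-run enumerateDevices and
// check labelsAvailable() to see whether the full device list is exposed.
```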
QUESTION
I am trying to implement a QR scanner to scan QR codes using angular2-qrscanner. After doing what the documentation says, I am getting the error
"Cannot read property 'getMediaDevices' of undefined"
in the console. These are my codes.
AppModule.ts
...ANSWER
Answered 2021-May-26 at 12:33

Try with ngAfterViewInit(). It is called after the view is initially rendered; this is why @ViewChild() depends on it.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported