ontrack | :money_with_wings: A simple self-hosted budgeting app | Dashboard library
kandi X-RAY | ontrack Summary
In a nutshell: a private budgeting tool that can be self-hosted. This project is an attempt to understand and control my own spending better without giving my banking/financial info to a third party. The app is meant to be used with a single login, and you can easily host your own instance. The app was designed by Iana Noda.
Community Discussions
Trending Discussions on ontrack
QUESTION
I'm trying to add filter effects to an audio stream playing on my website. I'm able to connect the Tone.js library to the audio stream, but I'm not hearing any changes in the audio, and I'm not seeing any errors in the console. I've tried adjusting the filter from 50 to 5000, but nothing seems to have any effect. Do I need to set up a new Tone.Player() to actually hear the audio? If so, how do you set up the Player when there is no src for the existing audio element?
ANSWER
Answered 2022-Feb-28 at 07:50
Working solution: removing the audioStream.play() call from where the JsSIP call is answered solves the issue.
I don't know the exact reason why this works (it might even be a workaround), but after much trial and error, removing it makes the audio available to Tone.js for processing.
Any other solutions are welcome.
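A minimal Web Audio sketch of what such a routing chain can look like, assuming the remote stream is attached to an audio element (as JsSIP does); Tone.js wraps this same underlying API. The function name and parameters here are illustrative, not from the original code.

```javascript
// Sketch: route an existing <audio> element through a low-pass filter.
// Assumes a browser AudioContext. Once createMediaElementSource takes over
// the element, the element must not also be play()ed into the speakers
// directly elsewhere -- which matches the fix described above.
function routeThroughFilter(audioCtx, audioEl, frequencyHz) {
  const source = audioCtx.createMediaElementSource(audioEl);
  const filter = audioCtx.createBiquadFilter();
  filter.type = "lowpass";
  filter.frequency.value = frequencyHz; // e.g. the 50-5000 range tried above
  source.connect(filter);
  filter.connect(audioCtx.destination);
  return filter; // keep a handle so the cutoff can be changed later
}
```

Changing `filter.frequency.value` on the returned node should then be audible immediately.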
QUESTION
I have a task, but I can't seem to get it done. I've created a very simple WebRTC stream on a Raspberry Pi which will function as a videochat camera. With Ionic I made a simple mobile application which can display my WebRTC stream when the phone is connected to the same network. This all works.
So right now I have my own local stream which shows in my app. I now want to broadcast this stream from my phone to a live server, so other people can spectate it.
I know how to create a NodeJS server which serves my webcam with the getUserMedia function. But I want to push my WebRTC stream to a live server so I can retrieve a public URL for it.
Is there a way to push my local Websocket to a live environment? I'm using a local RTCPeerConnection to create a MediaStream object.
...ANSWER
Answered 2021-Dec-10 at 16:54
"Is there a way to push my local Websocket to a live environment?"
It's not straightforward, because you need more than vanilla WebRTC (which is peer-to-peer). What you want is an SFU. Take a look at mediasoup.
To see why this is needed, think about how the WebRTC connection is established in your current app. It's a negotiation between two parties (facilitated by a signaling server). To turn this into a multicast setup, you will need a proxy of sorts that establishes separate peer-to-peer connections to all senders and receivers.
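The fan-out an SFU performs can be modelled in a few lines. This toy sketch is not mediasoup's actual API; it only shows the shape: one publisher, a set of per-peer connections, and every incoming media packet forwarded to each subscriber.

```javascript
// Toy model of an SFU's job: receive the publisher's media once and
// forward it over a separate connection to every subscriber. A real SFU
// (e.g. mediasoup) does this with RTP packets and real peer connections.
class TinySfu {
  constructor() {
    this.subscribers = new Set();
  }
  subscribe(peer) {
    this.subscribers.add(peer); // a peer only needs a send() method here
  }
  unsubscribe(peer) {
    this.subscribers.delete(peer);
  }
  // Called for each media "packet" arriving from the publisher.
  onPublisherPacket(packet) {
    for (const peer of this.subscribers) peer.send(packet);
  }
}
```

In the scenario above, the Raspberry Pi (or the phone) plays the publisher role, and spectators connect to the SFU instead of directly to the publisher.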
QUESTION
I'm trying to get a sort of voice chat working using WebRTC and a WebSocket for exchanging offers.
First I create my RTCPeerConnection.
...ANSWER
Answered 2021-Nov-07 at 18:13
So it turns out I completely missed a step. Both caller and callee need to exchange their ICE candidates. So I added the following code:
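A common pitfall when exchanging candidates over a WebSocket is that they can arrive before setRemoteDescription has run, in which case addIceCandidate fails. A small buffer like the sketch below (assuming only the standard addIceCandidate/remoteDescription members of RTCPeerConnection) handles that ordering:

```javascript
// Buffers remote ICE candidates that arrive before the remote description
// is set, then flushes them once it is.
class CandidateBuffer {
  constructor(pc) {
    this.pc = pc;
    this.pending = [];
  }
  async add(candidate) {
    if (this.pc.remoteDescription) {
      await this.pc.addIceCandidate(candidate);
    } else {
      this.pending.push(candidate); // too early; hold on to it
    }
  }
  // Call once, right after pc.setRemoteDescription(...) resolves.
  async flush() {
    for (const c of this.pending) await this.pc.addIceCandidate(c);
    this.pending = [];
  }
}
```

On the sending side the usual `pc.onicecandidate = (e) => { if (e.candidate) ws.send(...) }` remains unchanged.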
QUESTION
I have converted my class-based component to a function-based component as below, but I am not sure whether my variables are defined correctly, and my function component runs in an infinite loop. Can someone guide me in the right direction?
...ANSWER
Answered 2021-Sep-01 at 11:02
import React, { useEffect, useRef, useState } from "react";
import {
  SafeAreaView,
  StyleSheet,
  ScrollView,
  View,
  Text,
  StatusBar,
  TouchableOpacity,
  Dimensions,
} from "react-native";
import {
  RTCPeerConnection,
  RTCIceCandidate,
  RTCSessionDescription,
  RTCView,
  MediaStream,
  MediaStreamTrack,
  mediaDevices,
  registerGlobals,
} from "react-native-webrtc";
import io from "socket.io-client";

const dimensions = Dimensions.get("window");

const pc_config = {
  iceServers: [
    // {
    //   urls: 'stun:[STUN_IP]:[PORT]',
    //   'credentials': '[YOUR CREDENTIALS]',
    //   'username': '[USERNAME]'
    // },
    {
      urls: "stun:stun.l.google.com:19302",
    },
  ],
};

function App(props) {
  const [localStream, setLocalStream] = useState(null);
  const [remoteStream, setRemoteStream] = useState(null);
  const socket = useRef(
    io.connect("https://daae-171-61-.ngrok.io/webrtcPeer", {
      path: "/io/webrtc",
      query: {},
    })
  );
  const sdp = useRef(null);
  const pc = useRef(new RTCPeerConnection(pc_config));
  const candidates = useRef([]);

  useEffect(() => {
    socket.current.on("connection-success", (success) => {
      console.log(success);
    });
    socket.current.on("offerOrAnswer", (remoteSdp) => {
      // the callback parameter must not shadow the `sdp` ref
      sdp.current = JSON.stringify(remoteSdp);
      // set sdp as remote description
      pc.current.setRemoteDescription(new RTCSessionDescription(remoteSdp));
    });
    socket.current.on("candidate", (candidate) => {
      // console.log('From Peer... ', JSON.stringify(candidate))
      // candidates.current = [...candidates.current, candidate]
      pc.current.addIceCandidate(new RTCIceCandidate(candidate));
    });
    // pc.current was already created by useRef above; do not recreate it here
    pc.current.onicecandidate = (e) => {
      // send the candidates to the remote peer
      // see addCandidate below to be triggered on the remote peer
      if (e.candidate) {
        // console.log(JSON.stringify(e.candidate))
        sendToPeer("candidate", e.candidate);
      }
    };
    // triggered when there is a change in connection state
    pc.current.oniceconnectionstatechange = (e) => {
      console.log(e);
    };
    pc.current.onaddstream = (e) => {
      // this.remoteVideoref.current.srcObject = e.streams[0]
      setRemoteStream(e.stream);
    };
    const success = (stream) => {
      console.log(stream.toURL());
      setLocalStream(stream);
      pc.current.addStream(stream);
    };
    const failure = (e) => {
      console.log("getUserMedia Error: ", e);
    };
    let isFront = true;
    mediaDevices.enumerateDevices().then((sourceInfos) => {
      console.log(sourceInfos);
      let videoSourceId;
      for (let i = 0; i < sourceInfos.length; i++) {
        const sourceInfo = sourceInfos[i];
        if (
          sourceInfo.kind == "videoinput" &&
          sourceInfo.facing == (isFront ? "front" : "environment")
        ) {
          videoSourceId = sourceInfo.deviceId;
        }
      }
      const constraints = {
        audio: true,
        video: {
          mandatory: {
            minWidth: 500, // Provide your own width, height and frame rate here
            minHeight: 300,
            minFrameRate: 30,
          },
          facingMode: isFront ? "user" : "environment",
          optional: videoSourceId ? [{ sourceId: videoSourceId }] : [],
        },
      };
      mediaDevices.getUserMedia(constraints).then(success).catch(failure);
    });
  }, []);

  const sendToPeer = (messageType, payload) => {
    socket.current.emit(messageType, {
      socketID: socket.current.id,
      payload,
    });
  };
  const createOffer = () => {
    console.log("Offer");
    // https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/createOffer
    // initiates the creation of SDP
    pc.current.createOffer({ offerToReceiveVideo: 1 }).then((sdpOffer) => {
      // set offer sdp as local description
      pc.current.setLocalDescription(sdpOffer);
      sendToPeer("offerOrAnswer", sdpOffer);
    });
  };
  const createAnswer = () => {
    console.log("Answer");
    pc.current.createAnswer({ offerToReceiveVideo: 1 }).then((sdpAnswer) => {
      // set answer sdp as local description
      pc.current.setLocalDescription(sdpAnswer);
      sendToPeer("offerOrAnswer", sdpAnswer);
    });
  };
  const setRemoteDescription = () => {
    // retrieve and parse the SDP copied from the remote peer
    const desc = JSON.parse(sdp.current);
    // set sdp as remote description
    pc.current.setRemoteDescription(new RTCSessionDescription(desc));
  };
  const addCandidate = () => {
    // add the candidates received from the remote peer
    candidates.current.forEach((candidate) => {
      console.log(JSON.stringify(candidate));
      pc.current.addIceCandidate(new RTCIceCandidate(candidate));
    });
  };
  const remoteVideo = remoteStream ? (
    <RTCView
      key={2}
      style={styles.rtcViewRemote}
      objectFit="contain"
      streamURL={remoteStream.toURL()}
    />
  ) : (
    <Text style={styles.textContent}>Waiting for Peer connection ...</Text>
  );
  return (
    <SafeAreaView style={{ flex: 1 }}>
      <StatusBar barStyle="dark-content" />
      <View style={styles.buttonsContainer}>
        <TouchableOpacity style={styles.button} onPress={createOffer}>
          <Text style={styles.textContent}>Call</Text>
        </TouchableOpacity>
        <TouchableOpacity style={styles.button} onPress={createAnswer}>
          <Text style={styles.textContent}>Answer</Text>
        </TouchableOpacity>
      </View>
      <ScrollView style={styles.scrollView}>
        <View style={styles.videosContainer}>
          {localStream && (
            <TouchableOpacity
              onPress={() => localStream._tracks[1]._switchCamera()}
            >
              <RTCView
                key={1}
                style={styles.rtcView}
                objectFit="cover"
                streamURL={localStream.toURL()}
              />
            </TouchableOpacity>
          )}
          {remoteVideo}
        </View>
      </ScrollView>
    </SafeAreaView>
  );
}
export default App;

const styles = StyleSheet.create({
  buttonsContainer: {
    flexDirection: "row",
  },
  button: {
    margin: 5,
    paddingVertical: 10,
    backgroundColor: "lightgrey",
    borderRadius: 5,
  },
  textContent: {
    fontFamily: "Avenir",
    fontSize: 20,
    textAlign: "center",
  },
  videosContainer: {
    flex: 1,
    flexDirection: "row",
    justifyContent: "center",
  },
  rtcView: {
    width: 100, //dimensions.width,
    height: 200, //dimensions.height / 2,
    backgroundColor: "black",
  },
  scrollView: {
    flex: 1,
    // flexDirection: 'row',
    backgroundColor: "teal",
    padding: 15,
  },
  rtcViewRemote: {
    width: dimensions.width - 30,
    height: 200, //dimensions.height / 2,
    backgroundColor: "black",
  },
});
const styles = StyleSheet.create({
buttonsContainer: {
flexDirection: "row",
},
button: {
margin: 5,
paddingVertical: 10,
backgroundColor: "lightgrey",
borderRadius: 5,
},
textContent: {
fontFamily: "Avenir",
fontSize: 20,
textAlign: "center",
},
videosContainer: {
flex: 1,
flexDirection: "row",
justifyContent: "center",
},
rtcView: {
width: 100, //dimensions.width,
height: 200, //dimensions.height / 2,
backgroundColor: "black",
},
scrollView: {
flex: 1,
// flexDirection: 'row',
backgroundColor: "teal",
padding: 15,
},
rtcViewRemote: {
width: dimensions.width - 30,
height: 200, //dimensions.height / 2,
backgroundColor: "black",
},
});
QUESTION
I followed this example: https://github.com/pion/example-webrtc-applications/tree/master/sfu-ws
On local it is working.
I made a Linux build and put it on a server; it is working.
I put it inside a Docker container, and it's not working anymore.
In Docker I opened the port range:
50000-50200:50000-50200/udp
...
ANSWER
Answered 2021-Aug-28 at 00:23
The issue is that Pion (or any WebRTC implementation) is only aware of the IP address it is listening on. It can't be aware of all the addresses that map/forward to it. People also call this the Public IP or NAT Mapping. So when Pion emits its candidates, they will probably look like 10.10.0.* and the remote peer will be unable to contact them.
What you should do is use the SettingEngine and set SetNat1To1IPs. If you know the public IP of the host, it will rewrite the candidates with the public IP.
ICE is a tricky process. To understand it conceptually, WebRTC for the Curious#Networking may be helpful. I will make sure to answer any follow-up questions on SO quickly!
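What SetNat1To1IPs effectively does can be illustrated with plain string handling (a JavaScript sketch of the idea, not Pion's Go code): the private connection address in each host candidate is swapped for the server's public IP before the candidate is signalled.

```javascript
// Replace the private connection address of a host candidate with the
// public IP. A candidate line has the shape:
// candidate:<foundation> <component> <proto> <priority> <ip> <port> typ <type> ...
function rewriteHostCandidate(candidateLine, publicIp) {
  const parts = candidateLine.split(" ");
  if (parts[6] === "typ" && parts[7] === "host") {
    parts[4] = publicIp; // the address the remote peer will try to reach
  }
  return parts.join(" ");
}
```

Non-host candidates (srflx, relay) already carry a reachable address and are left alone.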
QUESTION
Is there any way to differentiate between screen share track and camera track in a webrtc video call?
I am able to add both video tracks (the camera track as well as the screen share track) using the proper negotiation event. But I cannot differentiate these two tracks, since both have kind "video" and their ids seem to be randomly generated, different from the id on the actual owner of the track.
I also went through a couple of similar questions that suggested the following:
1. Differentiating using their ID. This did not work for me because as soon as I re-share my screen (after stopping sharing and then sharing again), a new id is assigned to the track coming from the re-share.
2. Differentiating using the transceiver.mid property. This too did not work because when the camera is turned off, the camera track is removed from the peer instance (to save bandwidth) and added back when the camera is turned on. This fires the ontrack event on the remote side with a track whose transceiver.mid property differs from the mid the previous camera track had.
In addition, I cannot assign any extra property to the stream obtained from the getUserMedia API; the track object seems to be immutable.
Please suggest a method I can use to differentiate these two tracks.
Thanks
...ANSWER
Answered 2021-Aug-17 at 17:57
As far as I know, mid and rid are the only properties of a track that are preserved end-to-end (the id is not preserved). Thus, your approach of using the mid is probably the correct one.
As you justly note, mids might be recomputed whenever a track is removed from the peer connection. You have two solutions to the issue:
- maintain a mapping between ids and mids, and recompute the mapping whenever you renegotiate;
- never remove a track, and use the enabled property of the track to stop sending video data.
The latter solution is simpler, and avoids the need to perform a round of signalling when the camera is disabled. (When one side sets enabled, the other side should notice and set muted on the corresponding remote track.)
QUESTION
Hi, I have an issue: basically, I'm sending stereo audio with WebRTC this way:
...ANSWER
Answered 2021-Aug-13 at 17:18
Check out chrome://webrtc-internals/.
When I run your code, I get undefined for stream.getAudioTracks()[0].getSettings().channelCount when I call it in your GotRemoteStream function.
However, in chrome://webrtc-internals, in the RTCInboundRTPAudioStream stats, I see 48 samples per second arriving (that must mean 48 kHz stereo for Opus), and "codec" says "stereo=1", which indicates you are really receiving stereo sound.
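The same stereo=1 check can be done in code by inspecting the negotiated SDP (e.g. pc.currentRemoteDescription.sdp) rather than chrome://webrtc-internals. A sketch, with the parsing logic as an assumption rather than a library API:

```javascript
// Returns true when the SDP advertises stereo Opus (stereo=1 on the fmtp
// line). getSettings().channelCount is often undefined on remote tracks,
// so the SDP is the more reliable place to look.
function opusIsStereo(sdp) {
  const rtpmap = sdp.match(/a=rtpmap:(\d+) opus\/48000\/2/i);
  if (!rtpmap) return false; // no Opus in this description
  const fmtp = sdp.match(new RegExp("a=fmtp:" + rtpmap[1] + " ([^\\r\\n]*)"));
  return fmtp !== null && /(^|;)\s*stereo=1/.test(fmtp[1]);
}
```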
QUESTION
I'm trying to establish a peer connection between two clients via WebRTC and then stream video from the camera through the connection. The problem is that no video is shown on the remote side, although I can clearly see that the remotePc.ontrack event was fired. Also, no error was thrown. I do NOT want to use the ICE candidate mechanism (and it should NOT be needed), because the resulting application will only be used on a local network (the signaling server will only exchange the SDPs for the clients). Why is my example not working?
ANSWER
Answered 2021-Jun-06 at 16:49
ICE candidates are needed, as they tell you the local addresses where the clients will connect to each other.
You won't need STUN servers, though.
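Even without STUN, both sides still need the plumbing below (a sketch; sendToRemote stands in for whatever your signaling server uses): the host candidates carry the LAN addresses the peers actually connect on.

```javascript
// Forward every locally gathered candidate to the other peer and apply
// the ones coming back. On a LAN the host candidates alone are enough;
// no STUN/TURN servers are involved.
function wireCandidates(pc, sendToRemote) {
  pc.onicecandidate = (event) => {
    // a null candidate just means gathering is finished
    if (event.candidate) sendToRemote(event.candidate);
  };
  // Returns the handler to call when a remote candidate arrives.
  return (remoteCandidate) => pc.addIceCandidate(remoteCandidate);
}
```

Both clients call this once, right after creating their RTCPeerConnection.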
QUESTION
I have successfully managed to establish a WebRTC connection between Node (server) and a browser. The server gets the video track in the onTrack callback inside the RTCPeerConnection. Is there any way I can convert the video track and make it work with ffmpeg so I can output it to RTMP?
Thanks in advance.
...ANSWER
Answered 2021-May-14 at 22:35
The way I have done this is to use a socket to the Node server, and then use ffmpeg to convert to RTMP. I spawn FFMPEG:
QUESTION
I'm building a video conference website. The use case: a user is showing their camera and everyone can already see it, which means the connection is stable. Then the user wants to share their screen. After I get the screen stream, I add the track to the peerConnection, but the remote computer does not fire the ontrack event.
Here is my code after I got the screen stream:
...ANSWER
Answered 2021-May-14 at 19:22
You need to renegotiate after addTrack. You can either do so manually by calling createOffer, setLocalDescription and setRemoteDescription, or rely on the onnegotiationneeded callback, as described in https://blog.mozilla.org/webrtc/perfect-negotiation-in-webrtc/
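The onnegotiationneeded route can be sketched as below (signaling.send is a placeholder for your own channel). After pc.addTrack(screenTrack, stream), the browser fires the event, the new m-line reaches the remote side in the fresh offer, and only once the answer is applied does the remote ontrack fire for the screen-share track.

```javascript
// Re-offer automatically whenever a track is added or removed on this
// connection. The remote side applies the offer and answers as usual.
function setupRenegotiation(pc, signaling) {
  pc.onnegotiationneeded = async () => {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signaling.send({ type: "offer", sdp: pc.localDescription });
  };
}
```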
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Install ontrack
Install on Ubuntu 16.04+.
Spin up an instance (for free) using the Heroku deploy button below. A Heroku account is required.