ontrack | :money_with_wings: A simple self-hosted budgeting app | Dashboard library

 by inoda | JavaScript | Version: Current | License: MIT

kandi X-RAY | ontrack Summary

ontrack is a JavaScript library typically used in Analytics and Dashboard applications. ontrack has no bugs, it has a Permissive License, and it has low support. However, ontrack has 3 reported vulnerabilities. You can download it from GitHub.

In a nutshell: a private budgeting tool that can be self-hosted. This project is an attempt to understand and control my own spending better without giving my banking/financial info to a third party. The app is meant to be used with a single login, and you can easily host your own instance. The app was designed by Iana Noda.

            kandi-support Support

              ontrack has a low active ecosystem.
              It has 695 stars and 60 forks. There are 10 watchers for this library.
              It had no major release in the last 6 months.
              There are 14 open issues and 13 closed issues. On average, issues are closed in 30 days. There are 6 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of ontrack is current.

            kandi-Quality Quality

              ontrack has 0 bugs and 0 code smells.

            kandi-Security Security

              ontrack has 3 vulnerability issues reported (2 critical, 0 high, 1 medium, 0 low).
              ontrack code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              ontrack is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              ontrack releases are not available. You will need to build from source code and install.
              Installation instructions are available. Examples and code snippets are not available.


            ontrack Key Features

            No Key Features are available at this moment for ontrack.

            ontrack Examples and Code Snippets

            No Code Snippets are available at this moment for ontrack.

            Community Discussions

            QUESTION

            Tone.js audio filters not being heard
            Asked 2022-Feb-28 at 07:50

            I'm trying to add filter effects to an audio stream I have playing on my website. I'm able to connect the Tone.js library to the audio stream, but I'm not hearing any changes in the audio playing on the website. I'm not seeing any errors in the console, and I've tried adjusting the filter from 50 to 5000, but nothing seems to have any effect on the audio. Do I need to set up a new Tone.Player() to actually hear the audio? If so, how do you go about setting up the Player if there is no src for the existing audio element?

            ...

            ANSWER

            Answered 2022-Feb-28 at 07:50

            Working solution:
            Removing the audioStream.play() call from where the JsSIP call is answered solves the issue.
            I don't know the exact reason why this works (it might even be a workaround), but after much trial and error this approach makes the audio available to Tone.js for effects processing.

            Any other solutions are welcome.
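
            For context, a minimal sketch of how the JsSIP audio can then be pulled into the Tone.js graph, assuming a Tone.js v14-style API and that audioStream is the <audio> element fed by the JsSIP session; the filter settings are illustrative only:

            // Sketch only: route the remote MediaStream through a Tone.js filter
            // instead of playing the element directly.
            const remoteStream = audioStream.srcObject; // MediaStream attached by JsSIP

            // A low-pass filter; sweep the frequency to hear the effect.
            const filter = new Tone.Filter(800, "lowpass").toDestination();

            // Wrap the stream as a native source node and connect it into the Tone graph.
            const source = Tone.getContext().rawContext.createMediaStreamSource(remoteStream);
            Tone.connect(source, filter);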

            Source https://stackoverflow.com/questions/71159483

            QUESTION

            Push local WebRTC stream to a NodeJS server in the cloud
            Asked 2021-Dec-16 at 06:33

            I have a task, but I can't seem to get it done. I've created a very simple WebRTC stream on a Raspberry Pi which will function as a videochat camera. With Ionic I made a simple mobile application which can display my WebRTC stream when the phone is connected to the same network. This all works.

            So right now I have my own local stream which shows on my app. I now want to be able to broadcast this stream from my phone to a live server, so other people can spectate it.

            I know how to create a NodeJS server which deploys my webcam with the 'getUserMedia' function. But I want to 'push' my WebRTC stream to a live server so I can retrieve a public URL for it.

            Is there a way to push my local WebSocket to a live environment? I'm using a local RTCPeerConnection to create a MediaStream object.

            ...

            ANSWER

            Answered 2021-Dec-10 at 16:54

            Is there a way to push my local WebSocket to a live environment?

            It's not straightforward, because you need more than vanilla WebRTC (which is peer-to-peer). What you want is an SFU (Selective Forwarding Unit). Take a look at mediasoup.

            To realize why this is needed, think about how the WebRTC connection is established in your current app. It's a negotiation between two parties (facilitated by a signaling server). In order to turn this into a multicast setup, you will need a proxy of sorts that then establishes separate peer-to-peer connections to all senders and receivers.
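
            As a rough, conceptual sketch of that idea in plain WebRTC terms (this is not mediasoup, and all signaling is omitted): the relay keeps one upstream connection from the broadcaster and opens a separate downstream connection per viewer, forwarding the received tracks.

            // Conceptual sketch only; a real SFU such as mediasoup does far more.
            const upstream = new RTCPeerConnection();    // connection from the broadcaster
            const broadcastTracks = [];

            upstream.ontrack = (event) => {
              broadcastTracks.push(event.track);          // remember the broadcaster's tracks
            };

            function addViewer() {
              const downstream = new RTCPeerConnection(); // one connection per viewer
              broadcastTracks.forEach((track) => downstream.addTrack(track));
              // ...run the usual offer/answer exchange with the viewer via your signaling server...
              return downstream;
            }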

            Source https://stackoverflow.com/questions/70260437

            QUESTION

            Can't hear MediaStream through WebRTC
            Asked 2021-Nov-07 at 18:13

            I'm trying to get a sort of voice chat working using WebRTC and a WebSocket for exchanging offers.

            First I create my RTCPeerConnection

            ...

            ANSWER

            Answered 2021-Nov-07 at 18:13

            So it turns out I completely missed a step. Both Caller and Callee need to exchange their ice candidates. So I added the following code:
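
            The code block from the original answer did not survive scraping; the snippet below is a minimal sketch of what the added candidate exchange typically looks like, where peerConnection and sendToServer are placeholder names for the existing connection and WebSocket helper.

            // Forward locally gathered ICE candidates to the other peer over the signaling WebSocket.
            peerConnection.onicecandidate = (event) => {
              if (event.candidate) {
                sendToServer({ type: "ice-candidate", candidate: event.candidate });
              }
            };

            // Apply candidates received from the other peer.
            async function handleCandidateMessage(msg) {
              await peerConnection.addIceCandidate(new RTCIceCandidate(msg.candidate));
            }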

            Source https://stackoverflow.com/questions/69873099

            QUESTION

            problems when converting class based to function based component in react native
            Asked 2021-Sep-01 at 14:32

            I have converted my class-based component to a function-based component as shown below, but I am not sure whether my variables are defined correctly, and my function component seems to run in an infinite loop. Can someone guide me in the right direction?

            ...

            ANSWER

            Answered 2021-Sep-01 at 11:02
            import React, { useEffect, useState, useRef } from "react";
            import {
              SafeAreaView,
              StyleSheet,
              ScrollView,
              View,
              Text,
              StatusBar,
              TouchableOpacity,
              Dimensions,
            } from "react-native";
            
            import {
              RTCPeerConnection,
              RTCIceCandidate,
              RTCSessionDescription,
              RTCView,
              MediaStream,
              MediaStreamTrack,
              mediaDevices,
              registerGlobals,
            } from "react-native-webrtc";
            
            import io from "socket.io-client";
            
            const dimensions = Dimensions.get("window");
            
            const pc_config = {
              iceServers: [
                // {
                //   urls: 'stun:[STUN_IP]:[PORT]',
                //   'credentials': '[YOR CREDENTIALS]',
                //   'username': '[USERNAME]'
                // },
                {
                  urls: "stun:stun.l.google.com:19302",
                },
              ],
            };
            
            function App(props) {
              const [localStream, SetlocalStream] = useState(null);
              const [remoteStream, SetremoteStream] = useState(null);
              const socket = useRef(
                io.connect("https://daae-171-61-.ngrok.io/webrtcPeer", {
                  path: "/io/webrtc",
                  query: {},
                })
              );
              const sdp = useRef(null);
              const pc = useRef(new RTCPeerConnection(pc_config));
              const candidates = useRef([]);
            
              useEffect(() => {
                socket.current.on("connection-success", (success) => {
                  console.log(success);
                });
            
                socket.current.on("offerOrAnswer", (sdp) => {
                  sdp.current = JSON.stringify(sdp);
            
                  // set sdp as remote description
                  pc.current.setRemoteDescription(new RTCSessionDescription(sdp));
                });
            
                socket.current.on("candidate", (candidate) => {
                  // console.log('From Peer... ', JSON.stringify(candidate))
                  // candidates.current = [...candidates.current, candidate]
                  pc.current.addIceCandidate(new RTCIceCandidate(candidate));
                });
            
                pc.current = new RTCPeerConnection(pc_config);
            
                pc.current.onicecandidate = (e) => {
                  // send the candidates to the remote peer
                  // see addCandidate below to be triggered on the remote peer
                  if (e.candidate) {
                    // console.log(JSON.stringify(e.candidate))
                    sendToPeer("candidate", e.candidate);
                  }
                };
            
                // triggered when there is a change in connection state
                pc.current.oniceconnectionstatechange = (e) => {
                  console.log(e);
                };
            
                pc.current.onaddstream = (e) => {
                  debugger;
                  // this.remoteVideoref.current.srcObject = e.streams[0]
                  SetremoteStream(e.stream);
                };
            
                const success = (stream) => {
                  console.log(stream.toURL());
                  SetlocalStream(stream);
                  pc.current.addStream(stream);
                };
            
                const failure = (e) => {
                  console.log("getUserMedia Error: ", e);
                };
            
                let isFront = true;
                mediaDevices.enumerateDevices().then((sourceInfos) => {
                  console.log(sourceInfos);
                  let videoSourceId;
                  for (let i = 0; i < sourceInfos.length; i++) {
                    const sourceInfo = sourceInfos[i];
                    if (
                      sourceInfo.kind == "videoinput" &&
                      sourceInfo.facing == (isFront ? "front" : "environment")
                    ) {
                      videoSourceId = sourceInfo.deviceId;
                    }
                  }
            
                  const constraints = {
                    audio: true,
                    video: {
                      mandatory: {
                        minWidth: 500, // Provide your own width, height and frame rate here
                        minHeight: 300,
                        minFrameRate: 30,
                      },
                      facingMode: isFront ? "user" : "environment",
                      optional: videoSourceId ? [{ sourceId: videoSourceId }] : [],
                    },
                  };
            
                  mediaDevices.getUserMedia(constraints).then(success).catch(failure);
                });
              }, []);
            
              const sendToPeer = (messageType, payload) => {
                socket.current.emit(messageType, {
                  socketID: socket.current.id,
                  payload,
                });
              };
            
              const createOffer = () => {
                console.log("Offer");
            
                // https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/createOffer
                // initiates the creation of SDP
                pc.current.createOffer({ offerToReceiveVideo: 1 }).then((sdp) => {
                  // console.log(JSON.stringify(sdp))
            
                  // set offer sdp as local description
                  pc.current.setLocalDescription(sdp);
            
                  sendToPeer("offerOrAnswer", sdp);
                });
              };
            
              const createAnswer = () => {
                console.log("Answer");
                pc.current.createAnswer({ offerToReceiveVideo: 1 }).then((sdp) => {
                  // console.log(JSON.stringify(sdp))
            
                  // set answer sdp as local description
                  pc.current.setLocalDescription(sdp);
            
                  sendToPeer("offerOrAnswer", sdp);
                });
              };
            
              const setRemoteDescription = () => {
                // retrieve and parse the SDP copied from the remote peer
                const desc = JSON.parse(sdp.current);
            
                // set sdp as remote description
                pc.current.setRemoteDescription(new RTCSessionDescription(desc));
              };
            
              const addCandidate = () => {
                // retrieve and parse the Candidate copied from the remote peer
                // const candidate = JSON.parse(this.textref.value)
                // console.log('Adding candidate:', candidate)
            
                // add the candidate to the peer connection
                // pc.current.addIceCandidate(new RTCIceCandidate(candidate))
            
                candidates.current.forEach((candidate) => {
                  console.log(JSON.stringify(candidate));
                  pc.current.addIceCandidate(new RTCIceCandidate(candidate));
                });
              };
            
              // NOTE: the JSX markup below was stripped from the scraped answer; this is a
              // plausible reconstruction based on the imports, styles, and text remnants,
              // so the exact props and nesting are assumptions.
              const remoteVideo = remoteStream ? (
                <RTCView
                  key={2}
                  style={styles.rtcViewRemote}
                  objectFit="contain"
                  streamURL={remoteStream && remoteStream.toURL()}
                />
              ) : (
                <View style={{ padding: 15 }}>
                  <Text style={styles.textContent}>
                    Waiting for Peer connection ...
                  </Text>
                </View>
              );

              return (
                <SafeAreaView style={{ flex: 1 }}>
                  <StatusBar barStyle="dark-content" />
                  <View style={styles.buttonsContainer}>
                    <View style={{ flex: 1 }}>
                      <TouchableOpacity onPress={createOffer} style={styles.button}>
                        <Text style={styles.textContent}>Call</Text>
                      </TouchableOpacity>
                    </View>
                    <View style={{ flex: 1 }}>
                      <TouchableOpacity onPress={createAnswer} style={styles.button}>
                        <Text style={styles.textContent}>Answer</Text>
                      </TouchableOpacity>
                    </View>
                  </View>
                  <View style={styles.videosContainer}>
                    <ScrollView style={styles.scrollView}>
                      <View style={{ justifyContent: "center", alignItems: "center" }}>
                        <TouchableOpacity onPress={() => localStream._tracks[1]._switchCamera()}>
                          <RTCView
                            key={1}
                            zOrder={0}
                            style={styles.rtcView}
                            objectFit="cover"
                            streamURL={localStream && localStream.toURL()}
                          />
                        </TouchableOpacity>
                      </View>
                    </ScrollView>
                    <View style={{ flex: 1 }}>{remoteVideo}</View>
                  </View>
                </SafeAreaView>
              );
            }
            
            export default App;
            
            const styles = StyleSheet.create({
              buttonsContainer: {
                flexDirection: "row",
              },
              button: {
                margin: 5,
                paddingVertical: 10,
                backgroundColor: "lightgrey",
                borderRadius: 5,
              },
              textContent: {
                fontFamily: "Avenir",
                fontSize: 20,
                textAlign: "center",
              },
              videosContainer: {
                flex: 1,
                flexDirection: "row",
                justifyContent: "center",
              },
              rtcView: {
                width: 100, //dimensions.width,
                height: 200, //dimensions.height / 2,
                backgroundColor: "black",
              },
              scrollView: {
                flex: 1,
                // flexDirection: 'row',
                backgroundColor: "teal",
                padding: 15,
              },
              rtcViewRemote: {
                width: dimensions.width - 30,
                height: 200, //dimensions.height / 2,
                backgroundColor: "black",
              },
            });
            

            Source https://stackoverflow.com/questions/69012065

            QUESTION

            Pion custom SFU server not working inside docker
            Asked 2021-Aug-28 at 00:23

            I followed this example: https://github.com/pion/example-webrtc-applications/tree/master/sfu-ws

            • On my local machine it is working

            • I made a Linux build, put it on a server, and it is working

            • I put it inside a Docker container, and it is not working anymore

            On Docker I opened the port range:

            • 50000-50200:50000-50200/udp

              ...

            ANSWER

            Answered 2021-Aug-28 at 00:23

            The issue is that Pion (or any WebRTC implementation) is only aware of the IP address it is listening on. It can't be aware of all the addresses that map/forward to it. People will also call this the Public IP or NAT mapping. So when Pion emits its candidates, they will probably look like 10.10.0.* and the remote peer will be unable to contact that address.

            What you should do is use the SettingEngine and set SetNat1To1IPs. If you know the public IP of the host it will rewrite the candidates with the public IP.

            ICE is a tricky process. To understand it conceptually, the Networking chapter of WebRTC for the Curious may be helpful. I will make sure to answer any follow-up questions on SO quickly!

            Source https://stackoverflow.com/questions/68959096

            QUESTION

            Differentiate between screen share track and camera track in a normal webrtc
            Asked 2021-Aug-20 at 08:38

            Is there any way to differentiate between a screen share track and a camera track in a WebRTC video call?

            I am able to add both video tracks (camera as well as screen share) using the proper negotiation event. But I cannot differentiate the two tracks, since they both have kind "video" and their IDs seem to be randomly generated and differ from the ID of the actual owner of the track.

            I also went through a couple of similar questions that suggested the following things:

            1. Differentiating using their IDs.

            This solution did not work for me because as soon as I re-share my screen (after stopping sharing and then sharing again), a new ID is assigned to the track coming from the re-share.

            2. Differentiating using the transceiver.mid property.

            This did not seem to work either, because when the camera is turned off, the camera track is removed from the peer connection (to save bandwidth) and added back when the camera is turned on. This fires the ontrack event on the remote side, where the track has a different transceiver.mid property (not the same mid the previous camera track had).

            In addition, I cannot assign any extra property to the stream obtained from the getUserMedia API; the track object seems to be immutable.

            Please suggest a method I can use to differentiate these two tracks.

            Thanks

            ...

            ANSWER

            Answered 2021-Aug-17 at 17:57

            As far as I know, mid and rid are the only properties of a track that are preserved end-to-end (the id is not preserved). Thus, your approach of using the mid is probably the correct one.

            As you rightly note, mids might be recomputed whenever a track is removed from the peer connection. You have two solutions to the issue:

            • maintain a mapping between ids and mids, and recompute the mapping whenever you renegotiate;
            • never remove a track, and use the enabled property of the track to stop sending video data.

            The latter solution is simpler, and avoids the need to perform a round of signalling when the camera is disabled. (When one side sets enabled, the other side should notice and set muted on the corresponding remote track.)
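
            A short sketch of both options using only standard WebRTC APIs; pc is an existing RTCPeerConnection, and the signaling channel and midMap names are placeholders, not from the answer.

            // Option 1: after negotiation, record which mid carries which source and share the mapping.
            const cameraSender = pc.addTrack(cameraTrack, cameraStream);
            const screenSender = pc.addTrack(screenTrack, screenStream);

            const midMap = {
              camera: pc.getTransceivers().find((t) => t.sender === cameraSender).mid,
              screen: pc.getTransceivers().find((t) => t.sender === screenSender).mid,
            };
            signaling.send(JSON.stringify({ type: "mid-map", midMap })); // hypothetical signaling channel

            // Option 2: never remove the track; just disable it (no renegotiation, mids stay stable).
            cameraTrack.enabled = false;

            // Remote side: classify incoming tracks by the transceiver's mid.
            pc.ontrack = (event) => {
              const label = event.transceiver.mid === remoteMidMap.camera ? "camera" : "screen";
              console.log(`Received ${label} track`, event.track.id);
            };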

            Source https://stackoverflow.com/questions/68689543

            QUESTION

            WebRTC Sending Stereo Audio Stream, Receiving Mono Audio Stream
            Asked 2021-Aug-13 at 17:18

            Hi, I have an issue: basically I'm sending stereo audio with WebRTC this way

            ...

            ANSWER

            Answered 2021-Aug-13 at 17:18

            Check out chrome://webrtc-internals/

            When I run your code, I get "undefined" for stream.getAudioTracks()[0].getSettings().channelCount when I call it in your GotRemoteStream function.

            However, in chrome://webrtc-internals, in RTCInboundRTPAudioStream stats, I see 48 samples per second arriving (that must mean 48kHz stereo for Opus) and also "codec" says "stereo=1" which indicates you are really receiving stereo sound.
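
            For reference, the check referred to above looks roughly like this inside the handler that receives the remote stream (the value may well be undefined in Chrome, as noted):

            const [audioTrack] = remoteStream.getAudioTracks();
            // May be undefined for remote tracks; chrome://webrtc-internals is more reliable.
            console.log(audioTrack.getSettings().channelCount);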

            Source https://stackoverflow.com/questions/68761933

            QUESTION

            How to make WebRTC video streaming on local network working?
            Asked 2021-Jun-06 at 16:49

            I'm trying to establish a peer connection between two clients via WebRTC and then stream video from a camera through the connection. The problem is, there's no video shown on the remote side, although I can clearly see the remotePc.ontrack event was fired. Also, no error was thrown. I do NOT want to use the ICE candidates mechanism (and it should NOT be needed), because the resulting application will only be used on a local network (the signaling server will only exchange the SDPs for the clients). Why is my example not working?

            ...

            ANSWER

            Answered 2021-Jun-06 at 16:49

            ICE candidates are needed, as they tell you the local addresses where the clients will connect to each other.

            You won't need STUN servers though.
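
            A minimal sketch of what that means in practice, shown here with two connections in the same page for brevity (over a real LAN the candidates would travel via the signaling server); the ICE server list can stay empty:

            // No STUN/TURN servers; host candidates are enough on a local network.
            const localPc = new RTCPeerConnection({ iceServers: [] });
            const remotePc = new RTCPeerConnection({ iceServers: [] });

            // The candidates must still be exchanged, even on a LAN.
            localPc.onicecandidate = (e) => e.candidate && remotePc.addIceCandidate(e.candidate);
            remotePc.onicecandidate = (e) => e.candidate && localPc.addIceCandidate(e.candidate);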

            Source https://stackoverflow.com/questions/67859207

            QUESTION

            WebRTC Video Track to ffmpeg in Node
            Asked 2021-May-14 at 22:35

            I have successfully managed to establish a WebRTC connection between Node (server) and a browser. The server gets the video track in the onTrack callback inside the RTCPeerConnection. Is there any way I can convert the video track and make it work with ffmpeg so I can output it to RTMP?

            Thanks in advance.

            ...

            ANSWER

            Answered 2021-May-14 at 22:35

            The way I have done this is to use a socket to the node server, and then use ffmpeg to convert to RTMP:

            I spawn FFMPEG
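
            A hedged sketch of that "spawn FFMPEG" step in Node; the ffmpeg arguments, the mediaSocket name, and the RTMP URL are illustrative, not taken from the answer:

            // Pipe media bytes arriving over a socket into ffmpeg's stdin and push FLV to an RTMP server.
            const { spawn } = require("child_process");

            const ffmpeg = spawn("ffmpeg", [
              "-i", "pipe:0",            // read the incoming stream from stdin
              "-c:v", "libx264",
              "-preset", "veryfast",
              "-c:a", "aac",
              "-f", "flv",
              "rtmp://example.com/live/streamKey", // placeholder RTMP endpoint
            ]);

            ffmpeg.stderr.on("data", (d) => console.log(d.toString()));

            // `mediaSocket` is a hypothetical socket delivering media data from the browser.
            mediaSocket.on("data", (chunk) => ffmpeg.stdin.write(chunk));
            mediaSocket.on("end", () => ffmpeg.stdin.end());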

            Source https://stackoverflow.com/questions/67508284

            QUESTION

            WebRTC PeerConnection addTrack after connection established
            Asked 2021-May-14 at 19:22

            I'm building a video conferencing website. The use case: a user is showing their camera and everyone can already see it, meaning the connection is stable. Then the user wants to share their screen. After I get the screen stream, I add its track to the peer connection, but the remote computer does not fire the ontrack event.

            Here is my code after I got screen stream:

            ...

            ANSWER

            Answered 2021-May-14 at 19:22

            You need to renegotiate after addTrack. You can either do so manually by calling createOffer, setLocalDescription and setRemoteDescription or rely on the onnegotiationneeded callback to happen as described in https://blog.mozilla.org/webrtc/perfect-negotiation-in-webrtc/
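
            A minimal sketch of the onnegotiationneeded route; signaling stands in for whatever channel the app already uses to exchange descriptions:

            // Renegotiate automatically whenever a track is added after the initial connection.
            pc.onnegotiationneeded = async () => {
              const offer = await pc.createOffer();
              await pc.setLocalDescription(offer);
              signaling.send({ type: "offer", sdp: pc.localDescription }); // hypothetical channel
            };

            // Adding the screen-share track later triggers onnegotiationneeded.
            const [screenTrack] = screenStream.getVideoTracks();
            pc.addTrack(screenTrack, screenStream);

            // The remote side applies the new offer, answers, and then its ontrack fires.
            signaling.on("offer", async ({ sdp }) => {
              await pc.setRemoteDescription(sdp);
              const answer = await pc.createAnswer();
              await pc.setLocalDescription(answer);
              signaling.send({ type: "answer", sdp: pc.localDescription });
            });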

            Source https://stackoverflow.com/questions/67537807

            Community Discussions and Code Snippets contain sources from the Stack Exchange Network.

            Vulnerabilities

            A sandbox bypass vulnerability in Jenkins ontrack Plugin 3.4 and earlier allowed attackers with control over ontrack DSL definitions to execute arbitrary code on the Jenkins master JVM.

            Install ontrack

            Install with Homebrew
            Install on Ubuntu 16.04+
            Spin up an instance (for free) using the Heroku deploy button below. A Heroku account is required.

            Support

            Feel free to use this however you'd like! If you use this, credit would be nice but I don't really care that much. I'm primarily maintaining this for my own use cases. But...if you have features you'd like to see built, or changes that you think should be made, please open issues on this repo and tag me in them! I'd love to improve the tool from your feedback.
            CLONE
          • HTTPS

            https://github.com/inoda/ontrack.git

          • CLI

            gh repo clone inoda/ontrack

          • SSH

            git@github.com:inoda/ontrack.git


            Consider Popular Dashboard Libraries

          • grafana by grafana
          • AdminLTE by ColorlibHQ
          • ngx-admin by akveo
          • kibana by elastic
          • appsmith by appsmithorg

            Try Top Libraries by inoda

          • journal by inoda (Ruby)
          • heroku-pinger by inoda (Ruby)
          • unbeatable_game by inoda (JavaScript)
          • inoda.github.io by inoda (HTML)
          • budgeting-app by inoda (Ruby)