ws-audio-api | WebSocket Audio API: library to broadcast the sound | Audio Utils library
kandi X-RAY | ws-audio-api Summary
WebSocket Audio API: library to broadcast the sound from the microphone through a WebSocket
Community Discussions
Trending Discussions on ws-audio-api
QUESTION
I am trying to make an as-simple-as-possible JavaScript frontend that will allow me to receive audio from a user's mic on a mouse click within a web browser using getUserMedia, modify it to a custom sample rate and mono channel, and stream it over a websocket to my server, where it will be relayed to the Watson Speech API.

I have already built the websocket server using autobahn. I have been trying to make an updated client library drawing on whisper and ws-audio-api, but both libraries seem outdated and include much functionality I don't need, which I am trying to filter out. I am using XAudioJS to resample the audio.

My current progress is in this Codepen. I am stuck and having trouble finding clearer examples.

- Both whisper and ws-audio-api initialize the AudioContext on page load, resulting in an error in at least Chrome and iOS, as an audio context must now be initialized in response to user interaction. I have tried to move the AudioContext into the onClick event, but this results in my having to click twice to begin streaming. I am currently using audio_context.resume() within the onClick event, but this seems like a roundabout solution and results in the page showing it is always recording, even when it's not, which may make my users uneasy. How can I properly initiate the recording on click and terminate it on click?
- I have updated from the deprecated Navigator.getUserMedia() to MediaDevices.getUserMedia(), but I am not sure if I need to alter the legacy vendor prefixes on lines 83-86 to match the new function.
- Most importantly, once I get a stream from getUserMedia, how can I properly resample it and forward it to the open websocket? I am a bit confused by the structure of bouncing the audio from node to node, and I need help with lines 93-108.
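For the first concern, a common pattern is to create the AudioContext lazily inside the click handler and close() it on the stop click, rather than resuming a context created at page load. A minimal sketch of that toggle, with illustrative names (not the Codepen's actual code):

```javascript
// Sketch: lazily create the AudioContext on the first click, close it on the
// stop click. Constructing inside the user gesture satisfies the Chrome/iOS
// autoplay policy, so no second click or resume() workaround is needed.
let recording = false;
let audioContext = null;

function onRecordClick() {
  if (!recording) {
    audioContext = new (window.AudioContext || window.webkitAudioContext)();
    recording = true;
    // ...call getUserMedia and wire up the audio graph here...
  } else {
    // close() (rather than suspend()) tears the context down entirely,
    // so the page no longer appears to be recording.
    audioContext.close();
    audioContext = null;
    recording = false;
  }
  return recording;
}
```

Using close() instead of suspend() also addresses the "always recording" indicator, since the context (and its media graph) is fully released between sessions.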
ANSWER
Answered 2021-Apr-25 at 20:26

I found help here and was able to build a more modern JavaScript frontend based on the code from vin-ni's Google-Cloud-Speech-Node-Socket-Playground, which I tweaked a bit. A lot of the existing audio streaming demos out there in 2021 are either outdated and/or have a ton of "extra" features which raise the barrier to getting started with websockets and audio streaming. I created this "bare bones" script which reduces the audio streaming down to only four key functions:
- Open websocket
- Start Streaming
- Resample audio
- Stop streaming
Hopefully this KISS (Keep It Simple, Stupid) demo can help somebody else get started with streaming audio a little faster than it took me.
Here is my JavaScript frontend
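A bare-bones frontend along these lines can be sketched as follows; the server URL, buffer size, target rate, and function names here are illustrative assumptions rather than the answerer's exact script:

```javascript
// Bare-bones mic-to-websocket streaming sketch (browser).
// All names and the URL are illustrative assumptions.
const SERVER_URL = "wss://example.com/audio"; // assumption: your autobahn server
const TARGET_RATE = 16000;                    // mono 16 kHz PCM for the speech API

let socket = null;
let audioContext = null;
let input = null;
let processor = null;

// 1. Open websocket
function openSocket() {
  socket = new WebSocket(SERVER_URL);
  socket.binaryType = "arraybuffer";
}

// 2. Start streaming: create the AudioContext inside the click handler,
// then wire mic -> processor -> socket.
async function startStreaming() {
  openSocket();
  audioContext = new (window.AudioContext || window.webkitAudioContext)();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  input = audioContext.createMediaStreamSource(stream);
  // ScriptProcessorNode is deprecated but remains the simplest demo path;
  // an AudioWorklet is the modern replacement.
  processor = audioContext.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (e) => {
    const pcm = resample(e.inputBuffer.getChannelData(0), audioContext.sampleRate);
    if (socket && socket.readyState === WebSocket.OPEN) socket.send(pcm.buffer);
  };
  input.connect(processor);
  processor.connect(audioContext.destination);
}

// 3. Resample: naive decimation from the context rate (44.1/48 kHz) down to
// TARGET_RATE, converting Float32 samples to Int16 PCM. A real implementation
// may want to low-pass filter before decimating.
function resample(float32Samples, inputRate) {
  const ratio = inputRate / TARGET_RATE;
  const out = new Int16Array(Math.floor(float32Samples.length / ratio));
  for (let i = 0; i < out.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Samples[Math.floor(i * ratio)]));
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}

// 4. Stop streaming: disconnect and close everything so the tab
// stops showing the recording indicator.
function stopStreaming() {
  if (processor) { processor.disconnect(); processor = null; }
  if (input) { input.disconnect(); input = null; }
  if (audioContext) { audioContext.close(); audioContext = null; }
  if (socket) { socket.close(); socket = null; }
}
```

The decimation step is the simplest possible resampler; libraries like XAudioJS (mentioned in the question) do proper interpolation if audio quality matters.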
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported