web-audio-api | Node.js implementation of Web audio API | Audio Utils library
kandi X-RAY | web-audio-api Summary
Node.js implementation of Web audio API
Top functions reviewed by kandi - BETA
- Decode audio data from a buffer
- Buffer a Buffer.
- Validate given format
- Load the WebAssembly
Community Discussions
Trending Discussions on web-audio-api
QUESTION
I'm trying to create a pulsing tone. I found some help from another posting here (web audio api plays beep, beep,... beep at different rate) and adapted the script, coming up with a satisfactory result that plays and pulses a tone. But I cannot figure out a function to stop the tone: .stop() used with oscillators doesn't work.
Any help appreciated.
Here's my code:
...ANSWER
Answered 2021-Apr-07 at 19:34
Moving comment to answer:
The Audio API is not the problem; it's a simple variable-scoping problem. Variables created in a function are not accessible outside that function, so the stopper() function can't access the abc/xyz variables.
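A minimal sketch of that fix, with the nodes and the timer declared in the enclosing scope so both functions can reach them (the names osc, gain, and pulseTimer are illustrative, since the original snippet isn't shown on this page):

```javascript
// Declared in the enclosing scope so both startTone() and stopper() see them.
let ctx, osc, gain, pulseTimer;

function startTone() {
  ctx = new AudioContext();
  osc = ctx.createOscillator();
  gain = ctx.createGain();
  osc.frequency.value = 440;
  osc.connect(gain).connect(ctx.destination);
  osc.start();

  // Pulse the tone by toggling the gain on an interval.
  let on = true;
  pulseTimer = setInterval(() => {
    gain.gain.setTargetAtTime(on ? 0 : 1, ctx.currentTime, 0.02);
    on = !on;
  }, 500);
}

function stopper() {
  // Works now: osc, ctx, and pulseTimer are visible from here.
  clearInterval(pulseTimer);
  osc.stop();
  ctx.close();
}
```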
QUESTION
My use case is roughly equivalent to adding a 15-second mp3 file to a ~1 min video. All the transcoding/merging will be done by FFmpeg-android, so that's not the concern right now.
The flow is as follows:
- User can select any 15 seconds (ExoPlayer streaming) of an mp3 (considering 192Kbps/44.1KHz over 3 mins = up to 7MB)
- Then download ONLY the 15-second part and add it to the video's audio stream (using FFmpeg)
- Use the obtained output
Extracting fragment of audio from a url
RANGE_REQUEST - I have replicated the exact same algorithm/formula in Kotlin using the exact sample file provided, but the output is not accurate: it is off by (±1.5 secs * c), where c is proportional to startTime.
How to crop a mp3 from x to x+n using ffmpeg?
FFMPEG_SS - This works flawlessly with remote URLs as input, but there are two downsides:
- as startTime increases, the size of the downloaded bytes gets closer to the actual size of the mp3
- ffmpeg-android does not support the network-requests module (at least the way we compiled it)
So the above two solutions have not been fruitful, and currently I am downloading the whole file and trimming it locally, which is definitely bad UX. I wonder how Instagram's music-addition-to-story feature works, because that's close to what I wanted to implement.
...ANSWER
Answered 2020-Aug-20 at 10:31
It is not possible the way you want to do it. MP3 files do not have timestamps. If you just jump to the middle of an mp3 (and look for the frame start marker) and start decoding, you have no idea what time that frame is for, because frames are variable size. The only way to know is to count the number of frames before the current position, which means you need the whole file.
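To make "count the number of frames" concrete, here is an illustrative sketch for MPEG-1 Layer III only; it ignores ID3 tags, VBR/Xing headers, and other MPEG versions, so treat it as a demonstration of the principle rather than a robust parser:

```javascript
// Illustrative only: the time at a byte offset is the sum of the durations
// of every frame before it, which is why accurate seeking needs the whole file.
function secondsUpTo(buf, endOffset) {
  const BITRATES = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320]; // kbps
  const SAMPLE_RATES = [44100, 48000, 32000];
  let pos = 0;
  let seconds = 0;
  while (pos < endOffset - 4) {
    const sync = buf[pos] === 0xff && (buf[pos + 1] & 0xe0) === 0xe0;
    const mpeg1Layer3 = (buf[pos + 1] & 0x18) === 0x18 && (buf[pos + 1] & 0x06) === 0x02;
    if (!sync || !mpeg1Layer3) { pos++; continue; }
    const bitrateIndex = buf[pos + 2] >> 4;
    const srIndex = (buf[pos + 2] >> 2) & 0x03;
    if (bitrateIndex === 0 || bitrateIndex === 15 || srIndex === 3) { pos++; continue; }
    const bitrate = BITRATES[bitrateIndex] * 1000;
    const sampleRate = SAMPLE_RATES[srIndex];
    const padding = (buf[pos + 2] >> 1) & 0x01;
    seconds += 1152 / sampleRate; // 1152 samples per MPEG-1 Layer III frame
    pos += Math.floor((144 * bitrate) / sampleRate) + padding; // frame length in bytes
  }
  return seconds;
}
```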
QUESTION
Context: I'm trying to create an audio visualizer using the Web Audio API with createMediaElementSource() very similarly to the model explained in this tutorial. The hosting service my client is using for their audio inserts a 302 redirect before the actual media, to track listening data.
Problem: In Safari, when I attach an AudioContext to an audio element that is linked to a source with a 302 redirect in front of it, it outputs silence instead of normal audio without any errors in the log. By contrast I've tested Chrome and Firefox, and they both work fine with no issues.
In the demo above, all three buttons attach and play the same audio source, but in the second and third it goes through the redirect first. The second attaches an AudioContext as well, while the third just plays the audio normally with no visual.
I posted about this issue last month and it was suggested that the problem was some missing CORS headers on the 302 redirect. However, I am now testing my own redirect server instead of using the hosting service, so that I can test my own CORS rules (see below). The issue remains even with these headers set, which makes me think it's a bug in Safari's handling of 302 redirects. What I'd like to know is: A) are there any other cross-origin headers I can try adding that may resolve the issue, and B) if it is indeed a Safari bug, where do I go to report it, and how long from that point until someone addresses it?
Headers I've set for my 302 redirect:
...ANSWER
Answered 2020-Jul-30 at 18:53
Update: I've now reported this as a bug, and the WebKit devs have isolated the check causing the issue.
QUESTION
I'm trying to create an audio visualization for a podcast network, using the Web Audio API with createMediaElementSource() very similarly to the model explained in this tutorial. So far I've gotten it to work fine in Chrome, and you can see it here (note: click on the red box to start it).
Update: Based on discussion in the comments, it’s now become clear that the problem happens because the request gets redirected to another URL, by way of a 302 redirect.
However, Safari refuses to work, outputting no sound and producing no visualization although it shows the track playing. I believe it has to do with the CORS policy of the server I'm requesting the audio from, because I've alternatively tried using this audio source and it works great in all browsers. My suspicion is it's an issue arising due to this standard of the web audio API.
The fact that it only happens in Safari makes me pray that there's some easy syntactic solution, either on my end or on the server host's end in their CORS policy, to get this to work. I'm hoping someone can point out exactly what's going wrong in the request/response headers that's causing this problem. Let me know if there's any more information I need to provide. I've left a simplified version of my AudioContext code below in case a problem surfaces there.
...ANSWER
Answered 2020-Jul-08 at 01:46
Short answer: the maintainers of the service sending the 302 response to your request should update their backend config so that it adds the Access-Control-Allow-Origin header to 302 responses (and any other 3xx redirect responses), not just to 200 OK responses.
If you can't get them to do that, then you have only two other options:
- Change your frontend code to make the request through a CORS proxy; or else
- Don't make the request from your frontend code at all, but instead do it completely from your backend server-side code, where the same-origin policy doesn't apply (a minimal sketch of this option follows).
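A minimal sketch of that second option, assuming Express and Node 18+'s built-in fetch (the route name and port are illustrative):

```javascript
const express = require('express');
const app = express();

// The browser talks only to *your* server; your server fetches the audio,
// following the 302 server-side, where no same-origin policy applies.
app.get('/proxy-audio', async (req, res) => {
  const upstream = await fetch(req.query.url); // follows redirects by default
  res.set('Access-Control-Allow-Origin', '*');
  res.set('Content-Type', upstream.headers.get('content-type') || 'audio/mpeg');
  res.send(Buffer.from(await upstream.arrayBuffer()));
});

app.listen(3000);
```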
Explanation
Here’s what happens:
1. Your frontend code makes a request to a https://rss.art19.com/episodes/….mp3 URL.
2. The https://rss.art19.com server replies with a 302 redirect response that has a Location: https://content.production.cdn.art19.com/…episodes/….mp3 header.
3. The browser receives that 302 response and checks the response headers to see if there's an Access-Control-Allow-Origin header. If there isn't, the browser blocks your code from accessing the response from the https://content.production.cdn.art19.com/….mp3 redirect URL; instead, the browser stops and throws an exception.
You can sometimes fix this problem by taking the redirect URL and using it as the request URL in your frontend code. For example, rather than using https://rss.art19.com/episodes/….mp3 in your code, use https://content.production.cdn.art19.com/…episodes/….mp3, since the 200 OK response from that URL does include the Access-Control-Allow-Origin header.
But in many or most cases in practice, that strategy won't work, because it's not feasible to preemptively identify what the redirect URL will be. Note that, by design, browsers intentionally don't expose redirect URLs to frontend code, so it's impossible for frontend code to programmatically get a redirect URL and make another request with it.
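For completeness, this is roughly what the fix looks like on the redirecting service's side, i.e. sending the CORS header on the 302 itself. Express is an assumption here (art19's actual stack is unknown), and the CDN URL is a placeholder:

```javascript
const express = require('express');
const app = express();

app.get('/episodes/:id.mp3', (req, res) => {
  // The crucial part: the CORS header must be present on the 3xx response
  // itself, not only on the final 200 OK served by the CDN.
  res.set('Access-Control-Allow-Origin', '*');
  res.redirect(302, `https://cdn.example.com/episodes/${req.params.id}.mp3`);
});

app.listen(8080);
```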
QUESTION
I'm recording audio from Node.js using node-microphone (which is just a JavaScript interface for arecord), and I want to store the stream chunks in an AudioBuffer using web-audio-api (which is a Node.js implementation of the Web Audio API).
My audio source has two channels while my AudioBuffer has only one (on purpose).
This is my working configuration for recording audio with arecord through my USB sound card (I'm using a Raspberry Pi 3 running Raspbian Buster):
...ANSWER
Answered 2020-May-15 at 19:43
So the input stereo signal comes in as 16-bit signed integers interleaving the left and right channels, meaning that the corresponding buffers (8-bit unsigned integers) have this format for a single stereo sample:
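The byte-layout diagram from the original answer was not captured on this page. A hedged sketch of the conversion it describes, assuming little-endian 16-bit signed PCM with interleaved left/right samples (the function name is illustrative):

```javascript
// Each stereo sample is 4 bytes in the incoming Buffer: [L lo, L hi, R lo, R hi].
function stereoInt16ToMonoFloat32(chunk) {
  const sampleCount = chunk.length / 4;
  const mono = new Float32Array(sampleCount);
  for (let i = 0; i < sampleCount; i++) {
    const left = chunk.readInt16LE(i * 4);
    const right = chunk.readInt16LE(i * 4 + 2);
    // Average the two channels and normalize to [-1, 1] for the AudioBuffer.
    mono[i] = (left + right) / 2 / 32768;
  }
  return mono;
}
```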
QUESTION
I have an AudioContext that gets its media from createMediaElementSource. I want to parse this audio on the fly into AudioBuffers, or something similar that I can send to another client over WebSockets.
...ANSWER
Answered 2020-May-04 at 23:44
So, in the end this problem was solved by using an AudioWorkletNode. When creating an AudioWorkletNode, it is possible to pass options to it; one of those options is numberOfOutputs. By doing this, my question is completely answered.
Main file
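The original code was not captured on this page. A hedged sketch of the approach the answer describes, tapping audio with an AudioWorkletNode and forwarding it over a WebSocket; the processor name 'capture-processor', the module path, and the WebSocket URL are all illustrative:

```javascript
async function startCapture() {
  const ctx = new AudioContext();
  const source = ctx.createMediaElementSource(document.querySelector('audio'));
  const ws = new WebSocket('wss://example.com/audio');

  await ctx.audioWorklet.addModule('capture-processor.js');
  const tap = new AudioWorkletNode(ctx, 'capture-processor', {
    numberOfInputs: 1,
    numberOfOutputs: 1, // the option the answer refers to
  });
  // The worklet processor is assumed to post each 128-frame block
  // back to the main thread through its MessagePort.
  tap.port.onmessage = (event) => ws.send(event.data);

  source.connect(tap).connect(ctx.destination);
}
```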
QUESTION
I am struggling to understand how to create an array of WebAudio oscillators, such as osc[i]. I have been able to construct single oscillators such as
...ANSWER
Answered 2020-Apr-11 at 21:50
You can use a loop:
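The original snippet wasn't captured on this page; a minimal sketch of the loop-based approach (count and frequencies are illustrative):

```javascript
const ctx = new AudioContext();
const osc = [];
for (let i = 0; i < 4; i++) {
  osc[i] = ctx.createOscillator();
  osc[i].frequency.value = 220 * (i + 1); // each oscillator gets its own pitch
  osc[i].connect(ctx.destination);
}
osc.forEach((o) => o.start());
```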
QUESTION
I'm trying to make a basic online video editor with nodeJS and ffmpeg.
To do this I need 2 steps:
set the in-and-out times of the videos from the client, which requires the client to view the video at specific times and switch the position of the video. Meaning, if a single video is used as an input and split into smaller parts, it needs to replay from the starting time of the next edited segment, if that makes sense.
send the input-output data to nodejs and export it with ffmpeg as a finished video.
At first I wanted to do 1. purely on the client, then upload the source video(s) to nodeJS, and generate the same result with ffmpeg, and send back the result.
But there are many problems with video processing on the client side in HTML at the moment, so now I have a change of plans: to do all of the processing on the nodeJS server, including the video playing.
This is the part I am stuck at now. I'm aware that ffmpeg can be used in many different ways from nodeJS, but I have not found a way to play an mp4/webm video in realtime with ffmpeg, at a specific timestamp, and send the streaming video (again, at a certain timestamp) to the client.
I've seen the pipe:1 attribute from ffmpeg, but I couldn't find any tutorials that get it working with an mp4/webm video, or that parse the stdout data with nodejs and send it to the client. And even if I could get that part to work, I still have no idea how to play the video, in realtime, at a certain timestamp.
I've also seen ffplay, but that's only for testing as far as I know; I haven't seen any way of getting the video data from it in realtime with nodejs.
So:
how can I play a video, in nodeJS, at a specific time (preferably with ffmpeg), and send it back to the client in realtime?
What I have already seen:
Best approach to real time http streaming to HTML5 video client
Live streaming using FFMPEG to web audio api
Ffmpeg - How to force MJPEG output of whole frames?
ffmpeg: Render webm from stdin using NodeJS
No data written to stdin or stderr from ffmpeg
node.js live streaming ffmpeg stdout to res
Realtime video conversion using nodejs and ffmpeg
Pipe output of ffmpeg using nodejs stdout
can't re-stream using FFMPEG to MP4 HTML5 video
FFmpeg live streaming webm video to multiple http clients over Nodejs
http://www.mobiuso.com/blog/2018/04/18/video-processing-with-node-ffmpeg-and-gearman/
stream mp4 video with node fluent-ffmpeg
How to get specific start & end time in ffmpeg by Node JS?
Live streaming: node-media-server + Dash.js configured for real-time low latency
Low Latency (50ms) Video Streaming with NODE.JS and html5
Server node.js for livestreaming
Stream part of the video to the client
Video streaming with HTML 5 via node.js
How to (pseudo) stream H.264 video - in a cross browser and html5 way?
How to stream video data to a video element?
How do I convert an h.264 stream to MP4 using ffmpeg and pipe the result to the client?
https://medium.com/@brianshaler/on-the-fly-video-rendering-with-node-js-and-ffmpeg-165590314f2
...ANSWER
Answered 2020-Mar-11 at 23:15
This question is a bit broad, but I've built similar things and will try to answer this in pieces for you:
- set the in-and-out times of the videos from the client, which requires the client to view the video at specific times, and switch the position of the video. Meaning, if a single video is used as an input, and split it into smaller parts, it needs to replay from the starting time of the next edited segment, if that makes sense.
Client-side, when you play back, you can simply use multiple HTMLVideoElement instances that reference the same URL.
For the timing, you can manage this yourself using the .currentTime property. However, you'll find that your JavaScript timing isn't going to be perfect. If you know your start/end points at the time of instantiation, you can use Media Fragment URIs:
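A brief sketch of both techniques (URL and times are illustrative):

```javascript
// Media Fragment URI: the browser itself restricts playback to 12s..27s.
const clip = document.createElement('video');
clip.src = 'https://example.com/source.mp4#t=12,27';
document.body.appendChild(clip);
clip.play();

// Manual alternative with currentTime (subject to JavaScript timing jitter):
const video = document.querySelector('video');
video.currentTime = 12; // seek to the segment start
video.addEventListener('timeupdate', () => {
  if (video.currentTime >= 27) video.pause(); // stop at the segment end
});
```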
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install web-audio-api
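The install command itself was stripped from this page. Assuming the package is published on npm under the same name, installation and minimal usage would look roughly like this (the AudioContext export follows the project's documented API):

```javascript
// npm install web-audio-api   (assumed npm package name)
const { AudioContext } = require('web-audio-api');

const context = new AudioContext();
console.log(context.sampleRate); // e.g. 44100
```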