NAudio | Audio and MIDI library for .NET | Audio Utils library
kandi X-RAY | NAudio Summary
NAudio is an open source .NET audio library written by Mark Heath.
Community Discussions
Trending Discussions on NAudio
QUESTION
Can NAudio play audio CDs? If yes, which NAudio API will work? My preference is WaveOut if it is able. Thanks for any replies.
...ANSWER
Answered 2022-Jan-03 at 21:02 No, NAudio does not play audio CDs. You need to look into the MCI_PLAY command to do that in Windows. It's not something very commonly needed these days.
QUESTION
I'm trying to install a package that's on nuget.
This one: https://www.nuget.org/packages/NAudio/
When I follow this guide: https://docs.microsoft.com/en-us/nuget/quickstart/install-and-use-a-package-in-visual-studio about how you add packages to your project I'm stuck at 2) since I only have "Microsoft Visual Studio Offline Packages" available as a source.
This is a new WPF project on a fresh install of VS.net 2019 community edition.
Any ideas what I'm doing wrong?
...ANSWER
Answered 2021-Sep-03 at 13:08 As vernou already mentioned, nuget.org must also be in the list of package sources. If it's not there, you can easily add it through the green plus in the upper right corner.
The name is nuget.org (but of course you can name it whatever you want) and the source must be https://api.nuget.org/v3/index.json. After adding this entry you should be able to find the package on nuget.org.
QUESTION
I am trying to convert .vox to .mp3 or .wav with NAudio with the code below:
ANSWER
Answered 2021-Jun-02 at 09:07 I was able to convert .vox files to .wav with the following:
QUESTION
Is there any way to play audio directly into a capture device in C#? In my project I will later have to feed a virtual capture driver with audio so I can use it in other programs and play the wanted audio anywhere else, but I'm not sure it is possible in C#. I tried to do this with NAudio (which is truly amazing):
...ANSWER
Answered 2021-May-31 at 07:07 You cannot push audio to a "capture device"; a capture device generates audio on its own.
Loopback mode means that you can have a copy of audio stream from a rendering device, but this does not work the other way.
The way things can work more or less as you assumed is when you have a special (and custom or third-party, since no stock implementation of the kind exists) audio capture device designed to generate audio supplied by an external application such as yours, which pushes the payload audio data via an API.
Switching to C++ will be of no help with this challenge.
QUESTION
I am using a WebSocket for call receive, accept and so on. I am getting a WebSocket event when the call is accepted.
WebSocket accept event output:
...ANSWER
Answered 2021-May-19 at 09:34 Here are the valid types for JSON:
A Foundation object that may be converted to JSON must have the following properties:
- The top level object is an NSArray or NSDictionary.
- All objects are instances of NSString, NSNumber, NSArray, NSDictionary, or NSNull.
- All dictionary keys are instances of NSString.
- Numbers are not NaN or infinity.
Your error is saying that at some point there is an object which is not of the allowed types; it is an RTCIceCandidate.
Seeing "RTC_OBJC_TYPE(RTCIceCandidate):\naudio\n0\ncandidate:1211696075 1 udp 41885439 3.8.66.208 62545 typ relay raddr 0.0.0.0 rport 0 generation 0 ufrag DT/j network-id 1 network-cost 10\nturn:3.8.66.208:3478?transport=udp"
make you think that's it's indeed a NSString
BUT, if we see the code of RTCIceCandidate
(since it's the culprit class), we see an override a description
:
QUESTION
I want to change windows 10 default audio output with NAudio.
NAudio has an API to get the default audio endpoint:
...ANSWER
Answered 2021-May-08 at 02:36 Finally I couldn't find any solution with NAudio. I do it with PowerShell:
- Add the AudioDeviceCmdlets nuget package to your project from here.
- Then we should use the Set-AudioDevice command to set the default audio device. It takes either a device id or an index. In the C# code we need a PowerShell nuget package; it has already been added as a dependency of the AudioDeviceCmdlets nuget package, so take no action and go to the next step.
- Use this code to set the default device:
QUESTION
I would like to know how can I split the channels of a WAV file into two byte arrays with the PCM data.
I've been trying to do this with NAudio, but I can't get it.
...ANSWER
Answered 2021-Apr-06 at 02:52 You can try the following to split a wav file into two byte arrays.
QUESTION
So I've written a program in C# that gets the currently available audio output devices. When I run the process I get the names of the devices inside the DataReceived event. When it receives "DONE" it kills the process and adds the saved names to the TMP_Dropdown options. The problem is that when it gets to dropdown.ClearOptions() the program just stops without any error messages. When I add a breakpoint and keep stepping through that function, the yellow bar just disappears and the function just stops. But when I just add some random strings to devices and don't run GetDevices(), it works like a charm.
Here is the code I was referencing above:
...ANSWER
Answered 2021-Mar-17 at 13:38 Your issue is most probably multi-threading!
Most of the Unity API (anything immediately dependent on or influencing the Scene) can only be used on the Unity main thread, not from any background thread/task.
Your callback to process.OutputDataReceived most probably happens on a separate thread.
You would rather need to "dispatch" the received data back into the Unity main thread.
QUESTION
I'm having an issue with a BufferedWaveProvider from the NAudio library. I'm recording 2 audio devices (a microphone and a speaker), merging them into 1 stream and sending it to an encoder (for a video).
To do this, I do the following:
- Create a thread where I'll record the microphone using WasapiCapture.
- Create a thread where I'll record the speakers' audio using WasapiLoopbackCapture. (I also use a SilenceProvider so I don't have gaps in what I record.)
- I'll want to mix these 2 audio streams, so I have to make sure they have the same format; I detect the best WaveFormat among these audio devices. In my scenario it's the speaker's, so I decide that the microphone audio will pass through a MediaFoundationResampler to adapt its format so it matches the speaker's.
- Each audio chunk from the Wasapi(Loopback)Capture is sent to a BufferedWaveProvider.
- Then I also made a MixingSampleProvider where I pass the ISampleProvider from each recording thread: the MediaFoundationResampler for the microphone, and the BufferedWaveProvider for the speakers.
- In a loop in a third thread, I read the data from the MixingSampleProvider, which is supposed to asynchronously empty the BufferedWaveProvider(s) while they are getting filled.
- Because each buffer may not get filled at exactly the same time, I look at the minimal common duration available in these 2 buffers, and I read that amount out of the mixing sample provider.
- Then I enqueue what I read so my encoder, in a 4th thread, can treat it in parallel too.
Please see the flowchart below that illustrates my description above.
My problem is the following:
- It works GREAT when recording the microphone and speaker for more than 1h while playing a video game that uses the microphone too (for online multiplayer). No crash. The buffers stay quite empty all the time. It's awesome.
- But for some reason, every time I try my app during a Discord, Skype or Teams audio conversation, I immediately (within 5 sec) crash on BufferedWaveProvider.AddSamples because the buffer gets full.
Looking at it in debug mode, I can see that:
- The buffer corresponding to the speaker is almost empty. It holds on average at most 100 ms of audio.
- The buffer corresponding to the microphone (the one I resample) is full (5 seconds).
From what I read on NAudio's author's blog, the documentation and on StackOverflow, I think I'm doing the best practice (but I can be wrong), which is writing in the buffer from a thread, and reading it in parallel from another one. There is of course a risk that it's getting filled faster than I read it, and it's basically what's happening right now. But I'm not understanding why.
Help needed
I'd like some help to understand what I'm missing here, please. The following points are confusing me:
- Why does this issue happen only with Discord/Skype/Teams meetings? The video games I'm using use the microphone too, so I can't imagine it's something like another app preventing the microphone/speakers from working correctly.
- I synchronize the startup of both audio recorders. To do this, I use a signal to ask the recorders to start, and when they have all started to generate data (through the DataAvailable event), I send a signal to tell them to fill the buffers with what they receive in the next events. It's probably not perfect because both audio devices raise their DataAvailable at different times, but we're talking about 60 ms of difference maximum (on my machine), not 5 seconds. So I don't understand why the buffer is getting filled.
- To bounce on what I said in #2, my telemetry shows that the buffer is getting filled this way (values are dummy):
ANSWER
Answered 2021-Mar-08 at 05:45 Following more investigations and a post on GitHub: https://github.com/naudio/NAudio/issues/742
I found out that I should listen to the MixingSampleProvider.MixerInputEnded event and re-add the SampleProvider to the MixingSampleProvider when it happens.
The reason why it happens is that I'm treating the audio while capturing it, and there are moments where I may treat it faster than I record it, so the MixingSampleProvider considers it has nothing more to read and stops. I should tell it that no, it's not over and it should expect more.
QUESTION
I'm creating a chat client that uses UDP for voice chat, but when I send the audio (in bytes) to the other clients and a client plays it, everything is clear except for a random clicking sound in the background. I thought it might be because UDP doesn't check whether the data is correct, but no: even when I send through TCP I can still hear the clicking sound in the background.
CODE to recreate:
...ANSWER
Answered 2021-Feb-27 at 17:25 You should create a single output device and a single BufferedWaveProvider, and start playing before you receive any audio. Then in the receive function, the only thing you need to do is add the received audio to the BufferedWaveProvider. However, you will still get clicks if the audio is not being received fast enough over the network, which will mean you have dropouts in the received audio.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported