audiograph | Windows AudioGraph native C++ with cppwinrt headers | File Utils library
kandi X-RAY | audiograph Summary
Windows AudioGraph native C++ with cppwinrt headers.
Community Discussions
Trending Discussions on audiograph
QUESTION
I am trying to save device information in my application. I managed to save it as a string to ApplicationData.Current.RoamingSettings, but I am unable to save it as a DeviceInformation object, which I need in order to enumerate my device when the app starts.
I am not sure what the right approach is. Can somebody advise? Thanks.
...ANSWER
Answered 2021-Jan-11 at 07:29
Please check the document here. For both LocalSettings and RoamingSettings, the name of each setting can be at most 255 characters in length. Each setting can be up to 8K bytes in size and each composite setting can be up to 64K bytes in size.
The Windows Runtime data types are supported for app settings, but DeviceInformation is not on the supported list. For your scenario, we suggest you save some key values from the DeviceInformation, such as the device id and device kind, and then get the device back using the device id.
Update
DeviceInformation contains a CreateFromIdAsync method, so you can store the DeviceInformation's id in a local setting and retrieve the DeviceInformation with the following code.
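That snippet is not captured in this excerpt; a rough C++/WinRT equivalent of the idea (the setting key "SelectedDeviceId" is just a placeholder) looks like this:

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Devices.Enumeration.h>
#include <winrt/Windows.Storage.h>

using namespace winrt;
using namespace winrt::Windows::Devices::Enumeration;
using namespace winrt::Windows::Storage;

// Save only the device id string under a placeholder key.
void SaveDeviceId(DeviceInformation const& device)
{
    ApplicationData::Current().RoamingSettings()
        .Values().Insert(L"SelectedDeviceId", box_value(device.Id()));
}

// On startup, read the id back and rebuild the DeviceInformation from it.
Windows::Foundation::IAsyncOperation<DeviceInformation> LoadDeviceAsync()
{
    hstring id = unbox_value<hstring>(
        ApplicationData::Current().RoamingSettings().Values().Lookup(L"SelectedDeviceId"));
    co_return co_await DeviceInformation::CreateFromIdAsync(id);
}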
QUESTION
A native C++ application is using C++/WinRT classes to instantiate and use winrt::Windows::Media::Audio::AudioGraph.
Inside an AudioGraph there is the possibility to add effects to graph nodes. There are some ready-made effects (like the echo effect), but there is also the possibility to create a custom audio effect.
A custom audio effect class must be a Windows Runtime Component. There is a way to create a custom audio effect in a Windows Runtime Component C++/WinRT project by declaring a class with the Windows.Media.Effects.IBasicAudioEffect interface in an IDL file (and providing the implementation). This generates the winmd, lib and winrt header files.
Up to this point everything is fine and working. But to instantiate the audio effect it needs to be registered, and this is the step I am missing. At runtime the application throws an exception with the message "Class not registered" when I try to instantiate the audio effect class, and an exception "Failed to activate audio effect" when I try to instantiate it inside an AudioGraph node.
I do not know how to register a Windows Runtime Component from a native C++ application.
Steps to create and use a custom audio effect are described here: https://docs.microsoft.com/en-us/windows/uwp/audio-video-camera/custom-audio-effects. The code is in C# and used in a UWP application, but it can be converted to C++/WinRT almost 1:1.
...ANSWER
Answered 2020-Sep-04 at 03:25
This article solves the problem: it is possible to use registration-free WinRT (starting from Windows 10 1903) by modifying the application manifest file (and not the Windows Runtime Component's package manifest, as suggested in the documentation), like this:
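A minimal fragment of such an application (fusion) manifest, with placeholder names AudioEffects.dll and AudioEffects.EchoEffect standing in for the actual component DLL and runtime class:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity version="1.0.0.0" name="MyApp.app"/>
  <!-- Registration-free activation: declare which DLL hosts which runtime class. -->
  <file name="AudioEffects.dll">
    <activatableClass
        name="AudioEffects.EchoEffect"
        threadingModel="both"
        xmlns="urn:schemas-microsoft-com:winrt.v1"/>
  </file>
</assembly>

With this in place, the class can be activated at runtime without any system-wide registration, as long as the DLL sits next to the executable.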
QUESTION
I have a problem with the FrameOutputNode of the UWP AudioGraph API. I have a very simple graph that reads audio from a WAV file (PCM, 16000 Hz, 16-bit mono) and sends it to the frame output node for processing. When processing, I need the audio as shorts (as it is in the raw bytes of the file), but as I read here, the data can only be read as floats.
Here is my code:
...ANSWER
Answered 2020-May-01 at 11:15
My question was answered here. Basically, the floats are the shorts' range of -32768 to 32767 mapped to the float range of -1 to 1.
So given a float x in the buffer (use (float*)dataInFloats = (float*)dataInBytes to convert), you can calculate the corresponding short with:
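A small C++ sketch of that conversion (scale back to the 16-bit range and clamp to avoid overflow):

#include <cstdint>

// Convert one normalized float sample in [-1.0, 1.0] back to a 16-bit PCM value.
int16_t FloatToPcm16(float x)
{
    float scaled = x * 32768.0f;
    if (scaled > 32767.0f)  scaled = 32767.0f;    // clamp positive overflow
    if (scaled < -32768.0f) scaled = -32768.0f;   // clamp negative overflow
    return static_cast<int16_t>(scaled);
}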
QUESTION
I'm developing an audio application in C# and UWP using the AudioGraph API. My AudioGraph setup is the following: AudioFileInputNode --> AudioSubmixNode --> AudioDeviceOutputNode.
I attached a custom echo effect to the AudioSubmixNode. If I play the AudioFileInputNode I can hear some echo, but when the AudioFileInputNode playback finishes, the echo sound stops abruptly; I would like it to fade out gradually over a few seconds instead. If I use the EchoEffectDefinition from the AudioGraph API, the echo sound is not cut off after the sample playback has finished.
I don't know if the problem comes from my effect implementation or if it's a strange behavior of the AudioGraph API. The behavior is the same in the "AudioCreation" sample in the SDK, scenario 6.
Here is my custom effect implementation:
...ANSWER
Answered 2020-Feb-11 at 06:32
By looping the file input node infinitely, it will always provide an input frame until the audio graph stops. Of course we do not want to hear the file loop, so we can listen for the FileCompleted event of the AudioFileInputNode. When the file finishes playing, the event fires and we just set the OutgoingGain of the AudioFileInputNode to zero. So the file plays back once, but the node keeps looping silently, passing input frames with no audio content to which the echo can still be added.
Still using scenario 4 in the AudioCreation sample as an example: in scenario 4 there is a field named fileInputNode1. As mentioned above, please add the following code for fileInputNode1 and test again with your custom echo effect.
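The sample's code is C#; a minimal C++/WinRT sketch of the same idea (assuming fileInputNode1 is a winrt::Windows::Media::Audio::AudioFileInputNode) would be:

// Loop the file forever; a null LoopCount means "loop indefinitely".
fileInputNode1.LoopCount(nullptr);

// Once the file has played through, mute it so the remaining loops only feed
// silent frames into the graph and the echo tail can decay naturally.
fileInputNode1.FileCompleted(
    [](winrt::Windows::Media::Audio::AudioFileInputNode const& sender,
       winrt::Windows::Foundation::IInspectable const&)
    {
        sender.OutgoingGain(0.0);
    });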
QUESTION
Trying to test the state of an AUGraph in a Swift 4.0 project.
...ANSWER
Answered 2017-Nov-30 at 11:44
Yes, you're right. There's no output. Seemingly it's already muted in Swift 4.
QUESTION
I'm creating a function using the AudioKit API in which the user presses music notes on a screen and a sound comes out based on the SoundFont they chose. I then allow them to collect a set of notes and let them play it back in the order they chose.
The problem is that I am using an AKSequencer to play the notes back, and when the AKSequencer plays them back it never sounds like the SoundFont; it makes a beep sound.
Is there code that lets me change what sound comes out of the AKSequencer?
I'm using AudioKit to do this. Sample is an NSObject that contains the midisampler, player, etc. Here's the code:
ANSWER
Answered 2019-Aug-15 at 21:57
At minimum, you need to connect an AKSequencer to some kind of output to get it to make sounds. With the older version (now called AKAppleSequencer), if you don't explicitly set the output, you will hear the default (beepy) sampler.
For example, on AKAppleSequencer (in AudioKit 4.8, or AKSequencer in earlier versions):
QUESTION
I am setting up an AudioGraph in App.xaml.cs because if I try to do it on MainPage, the app hangs, never returning an AudioGraph.
Then I want to set a variable frequency that will be controlled by a slider on MainPage.xaml.cs.
Then when a key 'a' is pressed, the frequency will be played through the audio graph.
To get this to work, I need to start the AudioGraph on MainPage.xaml.cs.
How do I take the AudioGraph that I can only get from App.xaml.cs and put it into an AudioGraph object on MainPage.xaml.cs?
I've tried initializing the AudioGraph on MainPage.xaml.cs, but it never returns and hangs.
I've tried setting the variable frequency on App.xaml.cs and couldn't because the class is sealed.
In fact, both classes are sealed, so I'm not sure how to get the two to communicate variables with each other. Even when I make them public, it won't work.
Here is MainPage.xaml.cs
...ANSWER
Answered 2019-Mar-26 at 03:47
I got it to work with the following code.
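The poster's own snippet is not captured in this excerpt. Purely as an illustration (in C++/WinRT rather than the question's C#; PageAudioState, m_graph, m_outputNode and InitAudioGraphAsync are hypothetical names), the usual way to avoid the "never returns" hang is to await AudioGraph.CreateAsync in a coroutine on the page itself instead of blocking the UI thread, so the page can own its graph directly:

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Media.Audio.h>
#include <winrt/Windows.Media.Render.h>

using namespace winrt;
using namespace winrt::Windows::Media::Audio;
using namespace winrt::Windows::Media::Render;

// Hypothetical holder standing in for the page's members.
struct PageAudioState
{
    AudioGraph m_graph{ nullptr };
    AudioDeviceOutputNode m_outputNode{ nullptr };

    fire_and_forget InitAudioGraphAsync()
    {
        AudioGraphSettings settings(AudioRenderCategory::Media);

        // Await instead of blocking; the UI thread stays free, so CreateAsync can complete.
        CreateAudioGraphResult graphResult = co_await AudioGraph::CreateAsync(settings);
        if (graphResult.Status() != AudioGraphCreationStatus::Success)
            co_return;
        m_graph = graphResult.Graph();

        CreateAudioDeviceOutputNodeResult outputResult =
            co_await m_graph.CreateDeviceOutputNodeAsync();
        if (outputResult.Status() != AudioDeviceNodeCreationStatus::Success)
            co_return;
        m_outputNode = outputResult.DeviceOutputNode();

        // Frequency source, slider wiring and key handling would be added here.
        m_graph.Start();
    }
};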
QUESTION
I have a C# UWP application that uses the AudioGraph API. I use a custom effect on a MediaSourceAudioInputNode.
I followed the sample on this page: https://docs.microsoft.com/en-us/windows/uwp/audio-video-camera/custom-audio-effects
It works, but I can hear multiple clicks per second in the speakers when the custom effect is running.
Here is the code for my ProcessFrame method:
ANSWER
Answered 2019-Feb-20 at 13:56
The problem is not specific to custom effects; it is a general problem with AudioGraph (the current SDK is 1809). Garbage collections can pause the AudioGraph thread for too long (more than 10 ms, which is the default size of the audio buffers). The result is that clicks can be heard in the audio output. The use of custom effects puts a lot of pressure on the garbage collector.
I found a good workaround: it uses the GC.TryStartNoGCRegion method. After this method is called, the clicks completely disappear, but the app keeps growing in memory until the GC.EndNoGCRegion method is called.
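For reference, since this repository targets native C++/WinRT effects: a ProcessFrame implemented in C++/WinRT runs outside the .NET garbage collector entirely. A minimal pass-through sketch (not the poster's code) looks roughly like this:

#include <unknwn.h>   // must precede C++/WinRT headers so as<> works with classic COM interfaces
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Media.h>
#include <winrt/Windows.Media.Effects.h>
#include <MemoryBuffer.h>   // ::Windows::Foundation::IMemoryBufferByteAccess

using namespace winrt;
using namespace winrt::Windows::Media;
using namespace winrt::Windows::Media::Effects;

// In a real effect this is a member of the runtime class implementing IBasicAudioEffect.
void ProcessFrame(ProcessAudioFrameContext const& context)
{
    AudioFrame inputFrame = context.InputFrame();
    AudioFrame outputFrame = context.OutputFrame();

    AudioBuffer inputBuffer = inputFrame.LockBuffer(AudioBufferAccessMode::Read);
    AudioBuffer outputBuffer = outputFrame.LockBuffer(AudioBufferAccessMode::Write);
    auto inputReference = inputBuffer.CreateReference();
    auto outputReference = outputBuffer.CreateReference();

    uint8_t* inputBytes{ nullptr };
    uint8_t* outputBytes{ nullptr };
    uint32_t inputCapacity{ 0 };
    uint32_t outputCapacity{ 0 };
    inputReference.as<::Windows::Foundation::IMemoryBufferByteAccess>()->GetBuffer(&inputBytes, &inputCapacity);
    outputReference.as<::Windows::Foundation::IMemoryBufferByteAccess>()->GetBuffer(&outputBytes, &outputCapacity);

    auto input = reinterpret_cast<float*>(inputBytes);
    auto output = reinterpret_cast<float*>(outputBytes);
    uint32_t sampleCount = inputCapacity / sizeof(float);

    // Pass-through: a real effect would transform the samples here,
    // ideally without allocating, so nothing can stall the audio thread.
    for (uint32_t i = 0; i < sampleCount; ++i)
    {
        output[i] = input[i];
    }
}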
QUESTION
I am developing a new UWP app which should monitor sound and fire an event for each sudden sound blow (something like a gunshot or clap).
- It needs to enable the default audio input and monitor live audio.
- It should allow setting an audio sensitivity for identifying environment noise and recognizing a clap/gunshot.
- When there is a high-frequency sound like a clap/gunshot (ideally within a configured frequency range, e.g. +/-40, it counts as a gunshot/clap), it should raise an event.
There is no need to save audio. I tried to implement this.
SoundMonitoringPage:
...ANSWER
Answered 2019-Jan-09 at 07:07
Answering the "is this the right approach" question: no, the AudioStateMonitor will not help with the problem.
AudioStateMonitor.SoundLevelChanged tells you if the system is ducking your sound so it doesn't interfere with something else. For example, it may mute music in favour of the telephone ringer. SoundLevelChanged doesn't tell you anything about the volume or frequency of recorded sound, which is what you'll need to detect your handclap.
The right approach will be along the lines of using an AudioGraph (or WASAPI, but not from C#) to capture the raw audio into an AudioFrameOutputNode to process the signal and then run that through an FFT to detect sounds in your target frequencies and volumes. The AudioCreation sample demonstrates using an AudioGraph, but not specifically AudioFrameOutputNode.
Per https://home.howstuffworks.com/clapper1.htm clapping will be in a frequency range of 2200Hz to 2800Hz.
Recognizing gunshots looks like it's significantly more complicated, with different guns having very different signatures. A quick search found several research papers on this rather than trivial algorithms. I suspect you'll want some sort of Machine Learning to classify these. Here's a previous thread discussing using ML to differ between gunshots and non-gunshots: SVM for one Vs all acoustic signal classification
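A rough C++/WinRT sketch of the capture side of that approach, with the FFT/detection step left as a hypothetical AnalyzeFrame stub:

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Media.h>
#include <winrt/Windows.Media.Audio.h>
#include <winrt/Windows.Media.Capture.h>
#include <winrt/Windows.Media.Render.h>

using namespace winrt;
using namespace winrt::Windows::Foundation;
using namespace winrt::Windows::Media;
using namespace winrt::Windows::Media::Audio;
using namespace winrt::Windows::Media::Capture;
using namespace winrt::Windows::Media::Render;

// Keeps the graph and nodes alive for as long as monitoring should run.
// The app also needs the Microphone capability declared in its manifest.
struct SoundMonitor
{
    AudioGraph graph{ nullptr };
    AudioFrameOutputNode frameOutput{ nullptr };

    IAsyncAction StartAsync()
    {
        AudioGraphSettings settings(AudioRenderCategory::Media);
        CreateAudioGraphResult graphResult = co_await AudioGraph::CreateAsync(settings);
        if (graphResult.Status() != AudioGraphCreationStatus::Success)
            co_return;
        graph = graphResult.Graph();

        // Default capture device (microphone) feeds a frame output node.
        CreateAudioDeviceInputNodeResult inputResult =
            co_await graph.CreateDeviceInputNodeAsync(MediaCategory::Other);
        if (inputResult.Status() != AudioDeviceNodeCreationStatus::Success)
            co_return;

        frameOutput = graph.CreateFrameOutputNode();
        inputResult.DeviceInputNode().AddOutgoingConnection(frameOutput);

        // Pull one frame after each processed quantum; detection itself is stubbed out.
        graph.QuantumProcessed([this](AudioGraph const&, IInspectable const&)
        {
            AudioFrame frame = frameOutput.GetFrame();
            // AnalyzeFrame(frame);  // hypothetical: FFT + threshold around 2200-2800 Hz
        });

        graph.Start();
    }
};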
QUESTION
I'm developing a UWP audio application against the latest Windows 10 SDK, version 1803.
I would like to increase the SamplesPerQuantum used by the AudioGraph in my application. According to the docs, I should specify the DesiredSamplesPerQuantum and QuantumSizeSelectionMode properties before creating the AudioGraph.
I'm creating the AudioGraph like this:
...ANSWER
Answered 2018-Nov-21 at 03:10
I tried everything, the SamplesPerQuantum property is always 480...
By default, the quantum size is 10 ms at the default sample rate. The system will choose a quantum size as close as possible to the one you specify. If the sample rate of your speaker device is limited to 48000 Hz, the SamplesPerQuantum will be limited to 480. For your requirement, you could set the sample rate to 96000 Hz; then your setting could take effect.
I have discussed this with the media team, and they gave the following reply. The general idea is that DesiredSamplesPerQuantum is related to your hardware.
Update
The behavior the customer is seeing is dependent on the underlying audio hardware. The DesiredSamplesPerQuantum property is only a suggestion to the underlying hardware. If the hardware/driver does not support the requested quantum, it will not be set.
When the GC runs there may be clicks or pops in the audio. This is because managed languages are nondeterministic.
And this is James Dailey's blog that you could refer to.
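A minimal C++/WinRT sketch of requesting a larger quantum, under the assumption above that the hardware ultimately decides what is granted (the target of 960 samples is hypothetical):

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Media.Audio.h>
#include <winrt/Windows.Media.Render.h>

using namespace winrt;
using namespace winrt::Windows::Foundation;
using namespace winrt::Windows::Media::Audio;
using namespace winrt::Windows::Media::Render;

IAsyncAction CreateGraphWithLargerQuantumAsync()
{
    AudioGraphSettings settings(AudioRenderCategory::Media);
    settings.QuantumSizeSelectionMode(QuantumSizeSelectionMode::ClosestToDesired);
    settings.DesiredSamplesPerQuantum(960);   // only a suggestion; the driver may still clamp it

    CreateAudioGraphResult result = co_await AudioGraph::CreateAsync(settings);
    if (result.Status() == AudioGraphCreationStatus::Success)
    {
        AudioGraph graph = result.Graph();
        // SamplesPerQuantum reports what the hardware actually granted,
        // e.g. still 480 on a device limited to 48000 Hz.
        int32_t granted = graph.SamplesPerQuantum();
        (void)granted;
    }
}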
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported