Plugin.AudioRecorder | Audio Recorder plugin for Xamarin and Windows
kandi X-RAY | Plugin.AudioRecorder Summary
The required permissions/capabilities must be configured on each platform. Additionally, on Android Marshmallow (API 23) and above, you may need to perform a runtime check to ask the user for access to their microphone.
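For that runtime check, a minimal sketch using Xamarin.Essentials (an assumption; the plugin itself does not request permissions for you) could look like this:

using System.Threading.Tasks;
using Xamarin.Essentials;

public static class MicrophonePermission
{
    // Returns true once the user has granted microphone access.
    public static async Task<bool> EnsureGrantedAsync()
    {
        var status = await Permissions.CheckStatusAsync<Permissions.Microphone>();

        if (status != PermissionStatus.Granted)
        {
            // Prompts the user on Android 6.0 (Marshmallow) and above.
            status = await Permissions.RequestAsync<Permissions.Microphone>();
        }

        return status == PermissionStatus.Granted;
    }
}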
Plugin.AudioRecorder Key Features
Plugin.AudioRecorder Examples and Code Snippets
Community Discussions
Trending Discussions on Plugin.AudioRecorder
QUESTION
I am trying to implement audio streaming from my Xamarin.Forms app to my backend (.NET Core 2.2). My backend will then call the Azure Cognitive Services API to transcribe the voice in the audio and return a string with the transcribed text.
The transcription has to be performed and shown in a text box component while the user is speaking (not only after they finish speaking).
To record the audio and put it in a stream, I am using Plugin.AudioRecorder from Nate Rickard (https://github.com/NateRickard/Plugin.AudioRecorder) and it works well. Basically, it fills a stream with the audio while the user is speaking and saves it to a file.
Nate Rickard also has another plugin that uses the Azure Cognitive Services speech-to-text API (https://github.com/NateRickard/Xamarin.Cognitive.Speech). It uses Plugin.AudioRecorder to capture the voice and then an HttpClient to request the transcription from Azure, receiving the text as the response. That solution does all the work in the Xamarin.Forms app, whereas I would like the following:
- Send the request stream to my backend instead of sending it directly to Azure.
- From my backend, send the request to Azure.
- Obtain the Azure response and send it back to my Xamarin.Forms app.
The second and third steps are identical to what is implemented in the Xamarin.Cognitive.Speech plugin. I am stuck on the first step, where I have to handle the HTTP request in my backend. I send an HttpRequestMessage with a PushStreamContent in it, as implemented in the Xamarin.Cognitive.Speech plugin, only with the URL changed to point to my backend instead of Azure.
When I run the app, I get a 415 status code (Unsupported Media Type).
Here is sample code showing how the PushStreamContent is built (code from the Xamarin.Cognitive.Speech plugin):
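A minimal sketch of that pattern (not the plugin's verbatim code; the httpClient, recorder, and backendUrl names are placeholder assumptions):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Plugin.AudioRecorder;

public static class AudioUploader
{
    // Streams the recording to the backend while it is still being captured.
    // PushStreamContent comes from the Microsoft.AspNet.WebApi.Client package.
    public static async Task<HttpResponseMessage> SendAudioAsync(
        HttpClient httpClient, AudioRecorderService recorder, string backendUrl)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, backendUrl)
        {
            Content = new PushStreamContent(async (outputStream, httpContent, transportContext) =>
            {
                using (outputStream)                                    // disposing signals the end of the content
                using (var audioStream = recorder.GetAudioFileStream())
                {
                    await audioStream.CopyToAsync(outputStream);
                }
            }, new MediaTypeHeaderValue("audio/wav"))
        };

        return await httpClient.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
    }
}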
...

ANSWER
Answered 2020-Feb-18 at 03:46

Use the following code to get the file in the request:
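The answer's original snippet is not reproduced on this page; a rough sketch of the idea in an ASP.NET Core controller (route and names are assumptions) is to read the raw request body instead of relying on model binding, since ASP.NET Core returns 415 when no input formatter matches the audio/wav content type:

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
[ApiController]
public class TranscriptionController : ControllerBase
{
    [HttpPost]
    public async Task<IActionResult> Post()
    {
        // Copy the incoming audio stream without going through the input formatters.
        using (var buffer = new MemoryStream())
        {
            await Request.Body.CopyToAsync(buffer);
            buffer.Position = 0;

            // Forward 'buffer' to the Azure Speech endpoint here and return the transcribed text.
            return Ok();
        }
    }
}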
QUESTION
ANSWER
Answered 2020-Feb-06 at 12:59

You cannot record sound in the emulator because the Android emulator does not support it yet. This code will only work on a real phone.
Note: The Android Emulator cannot record audio. Be sure to test your code on a real device that can record.
This is the official documentation:
https://developer.android.com/guide/topics/media/mediarecorder?hl=en
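To guard against this in code, one possible sketch (assuming Xamarin.Essentials is available; the AudioRecorderService usage follows the Plugin.AudioRecorder README) is to skip recording on virtual devices:

using System.Threading.Tasks;
using Plugin.AudioRecorder;
using Xamarin.Essentials;

public class RecordingGuard
{
    readonly AudioRecorderService recorder = new AudioRecorderService();

    public async Task RecordIfSupportedAsync()
    {
        // Audio capture is not supported on the emulator, so only record on real hardware.
        if (DeviceInfo.DeviceType == DeviceType.Virtual)
            return;

        await recorder.StartRecording();
    }
}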
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install Plugin.AudioRecorder
Install into your platform-specific projects (iOS/Android/UWP), and any PCL/.NET Standard 2.0 projects required for your app.
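The package is available on NuGet (Install-Package Plugin.AudioRecorder). A minimal usage sketch, with member names taken from the project README and the property values chosen purely for illustration:

using System;
using System.Threading.Tasks;
using Plugin.AudioRecorder;

public class RecordingExample
{
    readonly AudioRecorderService recorder = new AudioRecorderService
    {
        StopRecordingOnSilence = true,                 // stop automatically after a stretch of silence
        StopRecordingAfterTimeout = true,
        TotalAudioTimeout = TimeSpan.FromSeconds(15)   // hard cap on the recording length
    };

    public async Task<string> RecordAsync()
    {
        var recordTask = await recorder.StartRecording();  // completes once recording has started
        return await recordTask;                           // completes when recording stops; returns the audio file path
    }
}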