Xamarin.Cognitive.Speech | a client library that makes it easy to work with the Speech to Text API
kandi X-RAY | Xamarin.Cognitive.Speech Summary
A client library that makes it easy to work with the Microsoft Cognitive Services Speech Services Speech to Text API on Xamarin.iOS, Xamarin.Android, UWP, and the Xamarin.Forms/.NET Standard libraries used by those platforms.
Xamarin.Cognitive.Speech Examples and Code Snippets
var audioFile = "/a/path/to/my/audio/file/in/WAV/format.wav";
var detailedResult = await speechClient.SpeechToTextDetailed (audioFile);
/// <summary>
/// Recognition result.
/// </summary>
public class RecognitionResult
{
	/// <summary>
	/// A string indicating the result status.
	/// </summary>
	public string RecognitionStatus { get; set; } // property name assumed; the original snippet was truncated here

	// ... remaining members elided in the original snippet
}
var audioFile = "/a/path/to/my/audio/file/in/WAV/format.wav";
var simpleResult = await speechClient.SpeechToTextSimple (audioFile);
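The snippets above assume an initialized speechClient. Based on the library's naming, construction looks roughly like the sketch below; the class and enum names should be checked against the library's README, and the key is a placeholder:

```csharp
using Xamarin.Cognitive.Speech;

// Create a client for your Speech Service subscription.
// Region value and constructor shape are assumptions, not confirmed API.
var speechClient = new SpeechApiClient ("<your Speech Service subscription key>", SpeechRegion.WestUs);
```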
/// <summary>
/// A single speech result combining Recognition result with Speech result. This is used for Simple result mode.
/// </summary>
// start recording audio (recorder is a Plugin.AudioRecorder AudioRecorderService)
var audioRecordTask = await recorder.StartRecording ();

// the second argument was truncated in the original snippet;
// passing the in-progress recording task here is an assumption
// simple output mode
var simpleResult = await speechClient.SpeechToTextSimple (stream, audioRecordTask);
// ... or detailed output mode
var detailedResult = await speechClient.SpeechToTextDetailed (stream, audioRecordTask);
Community Discussions
Trending Discussions on Xamarin.Cognitive.Speech
QUESTION
I am trying to implement audio streaming from my Xamarin.Forms app to my backend (.net core 2.2). Then my backend will call the Azure cognitive API to transcribe the voice in the audio and return back a string with the transcribed text.
The transcription has to be done and shown in a text box component while the user is speaking (not when he finishes speaking).
To record the audio and put it in a stream I am using Plugin.AudioRecorder from Nate Rickard (https://github.com/NateRickard/Plugin.AudioRecorder) and it works well. Basically, it fills a stream with the audio while the user is speaking and saves it to a file.
Nate Rickard also has another plugin using the Azure Cognitive Services SpeechToText API (https://github.com/NateRickard/Xamarin.Cognitive.Speech). This one uses Plugin.AudioRecorder to capture the voice and then an HttpClient to request the transcription from Azure, getting the text as the response. This solution does all the work in the Xamarin.Forms app, and I would like the following:
- Send the request stream to my backend instead of sending it directly to Azure.
- From my backend send the request to Azure.
- Obtain the Azure response and send it back to my Xamarin.Forms app.
Steps 2 and 3 are identical to what the Xamarin.Cognitive.Speech plugin already implements. I am stuck on the first step, where I have to handle the HTTP request in my backend. I send an HttpRequestMessage with a PushStreamContent in it, as implemented in the Xamarin.Cognitive.Speech plugin, but with the URL changed to point at my backend instead of Azure.
When I run the app I get a 415 status code (Unsupported Media Type error).
Here is the sample code of how the PushStreamContent is built (code from Xamarin.Cognitive.Speech plugin):
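The plugin's sample code was not captured on this page; what follows is an illustrative sketch of building a PushStreamContent over an audio stream, not the plugin's actual code (the stream and URL names are assumptions):

```csharp
using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;

// Wrap the (still-growing) audio stream in a PushStreamContent so the audio
// is written to the request as data becomes available during recording.
var content = new PushStreamContent (async (outputStream, httpContent, transportContext) =>
{
	using (outputStream)
	{
		// audioStream: the recorder's audio file stream (assumed)
		await audioStream.CopyToAsync (outputStream);
	}
});
content.Headers.ContentType = new MediaTypeHeaderValue ("audio/wav");

// URL points at the backend instead of Azure, per the question
var request = new HttpRequestMessage (HttpMethod.Post, "https://my-backend.example.com/api/speech")
{
	Content = content
};
var response = await httpClient.SendAsync (request, HttpCompletionOption.ResponseHeadersRead);
```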
...ANSWER
Answered 2020-Feb-18 at 03:46
Use the following code to get the file in the request:
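The answer's code block was not captured on this page. A minimal sketch of an ASP.NET Core 2.2 action that reads the raw request body instead of relying on model binding (the controller and route names are assumptions) would look roughly like this — binding a non-JSON body to a model is what typically triggers the 415:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[Route ("api/[controller]")]
[ApiController]
public class SpeechController : ControllerBase
{
	// No [FromBody]/[FromForm] parameter: read the streamed audio directly
	// from Request.Body to avoid content-type negotiation failures.
	[HttpPost]
	public async Task<IActionResult> Post ()
	{
		using (var ms = new MemoryStream ())
		{
			await Request.Body.CopyToAsync (ms);
			// forward the audio (ms, or Request.Body directly) to the
			// Azure Speech endpoint here, then return the transcription
			return Ok ();
		}
	}
}
```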
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install Xamarin.Cognitive.Speech
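The installation steps were not captured on this page. The library is distributed via NuGet; the package ID below is assumed from the project name, so verify it on nuget.org:

```
# NuGet Package Manager console (package ID assumed)
Install-Package Xamarin.Cognitive.Speech
```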