kandi X-RAY | oboe Summary
Oboe is a C++ library which makes it easy to build high-performance audio apps on Android. It was created primarily to allow developers to target a simplified API that works across multiple API levels back to API level 16 (Jelly Bean).
Trending Discussions on oboe
I have a library that uses some c++ compiled code and I would like to use this and other functions to try it out but they all use Pointer and PointerByReference as arguments instead of normal types....
Answered 2021-Oct-21 at 15:21
Since you don't include more of the API for a specific call I can't directly answer your implementation question, but I will address the portion of the question that asks about the types.
In C (and C++), arrays are stored in contiguous native memory. If you know the type (float in this case) you can simply offset from the pointer to get the appropriate element: foo[0] is at the pointer location of the array, foo[1] is at that pointer location plus an offset equal to the type's byte size (4 bytes for a float), and so on.
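That offset arithmetic can be sketched in plain C++ (illustrative only; JNA's Pointer ultimately reads the same native memory layout):

```cpp
#include <cassert>

// A contiguous float array: element i lives i * sizeof(float) bytes
// past the start of the array.
float data[] = {1.5f, 2.5f, 3.5f};
float* p = data;  // p holds the address of data[0]

// *(p + i) and p[i] are equivalent ways to read element i.
```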
PointerByReference is a JNA type: a pointer that points to a Pointer. You can call the getValue() function on the PointerByReference to retrieve this pointer.
Based on the way I read this API, that Pointer is actually the beginning of the float array, so you'd just use that Pointer and retrieve the array from its location.
So this is likely what you need to do:
My Android Studio project is failing to build with this error when linking C++ to Java using JNI...
Answered 2021-Jun-18 at 00:17
You probably need to define the actual variable separately in a .cpp file for the static class member, SoundGenerator::_Frequency, as follows.
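A minimal sketch of that declaration/definition split (the class shape and the 440 Hz value are assumptions; only the split matters):

```cpp
#include <cassert>

// SoundGenerator.h -- the declaration inside the class allocates no storage.
class SoundGenerator {
public:
    static float _Frequency;  // declaration only
};

// SoundGenerator.cpp -- exactly one translation unit must define it.
float SoundGenerator::_Frequency = 440.0f;
```

Since C++17, writing `inline static float _Frequency = 440.0f;` directly in the header is an alternative that avoids the separate definition.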
I am using the oboe library to make a music app. There I produce music by writing PCM float values to the given pointer. I rarely have underruns, which I can hear. I also verify this with the following oboe APIs:...
Answered 2021-Apr-14 at 23:08
I dived deep and found out that it returns the number of XRuns for the stream's whole lifetime. I thought it would return the value relative to the previous call. So apparently I was getting 12, but there were no new underruns, because the count was carried over from the session which had a previous underrun. However, I would still love to see an explanation of what it means to have a non-full buffer in the trace.
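Because the counter is cumulative over the stream's lifetime, new underruns have to be detected by differencing successive readings. A sketch of that bookkeeping (plain C++, not an Oboe type):

```cpp
#include <cassert>

// Tracks a cumulative xrun counter and reports only the xruns that
// occurred since the previous check.
class XRunTracker {
public:
    int newXRunsSince(int cumulativeCount) {
        int delta = cumulativeCount - lastCount_;
        lastCount_ = cumulativeCount;
        return delta;
    }
private:
    int lastCount_ = 0;
};
```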
I have a moleculer-based microservice that has an endpoint which outputs a large JSON object (around tens of thousands of objects)
This is a structured JSON object and I know beforehand what it is going to look like....
Answered 2021-May-15 at 05:57
Are you asking for a general approach recommendation, or for support with the particular solution you have?
If it's for the first, then I think your best bet for communicating between the server and the client is through websockets, perhaps with something like Socket.io. A long lived connection will serve you well here, since it will take a long time to transmit all your data across.
Then you can send data from the server to the client any time you like. At that point you can read your data on the server as a node.js stream and emit the data one at a time.
The problem with using Oboe and writing to the response on every node is that it requires a long running response, and there's a high likelihood the connection could get interrupted before you've sent all the data across.
I understand that in R it is best to avoid loops where possible. In that regard, I would like to perform the function of the code below but without using the nested loops. The loops check whether the f'th element of the vector things_I_want_to_find is present in the i'th row of thing_to_be_searched. For example, when both i and f are 1, the code checks whether "vocals" is present in john's row. Because "vocals" is present in john's row, the name and instrument are appended to the output vectors. When both loops are complete these two vectors can be combined into the result.
I know that there is the apply() family of functions in R but I don't know if they can be used in this case. Has anyone got any helpful hints or suggestions?
Answered 2021-May-21 at 13:00
library(tidyverse)

thing_to_be_searched %>%
  # Melt wide data to long
  pivot_longer(-1) %>%
  # Drop unwanted column
  select(-name) %>%
  # Filter wanted values only
  filter(value %in% things_I_want_to_find) %>%
  # Only keep unique rows
  unique()
I'm trying to implement the Oboe library in my application so I can get low-latency audio playback. I can perform panning, playback manipulation, sound scaling, etc. I've been asking a few questions about this topic because I'm completely new to the audio world.
Now I can do the basic things that an internal Android audio class such as SoundPool provides: I can play multiple sounds simultaneously without noticeable delays.
But now I've encountered another problem. I made a very simple example app: there is a button on screen, and when the user taps it, it plays a simple piano sound. However fast the user taps this button, the app must be able to mix those same piano sounds, just like SoundPool does. My code can do this very well, until I tap the button too many times, so that there are many audio queues to be mixed....
Answered 2021-May-06 at 10:52
Does Oboe stop rendering audio if it takes too much time to render audio like situation above?
Yes. If you block onAudioReady for longer than the time represented by numFrames you will get an audio glitch. I bet if you ran systrace.py --time=5 -o trace.html -a your.app.packagename audio sched freq you'd see that you're spending too much time inside that method.
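The deadline is easy to quantify: each callback must finish within numFrames / sampleRate seconds. A small helper to illustrate (not part of Oboe):

```cpp
#include <cassert>

// Time budget, in milliseconds, for one audio callback delivering
// numFrames frames at the given sample rate.
double callbackBudgetMs(int numFrames, int sampleRate) {
    return 1000.0 * numFrames / sampleRate;
}
// e.g. a 192-frame burst at 48000 Hz must render in under 4 ms.
```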
Did I reach the limit of rendering audio, meaning that limiting the number of queues is the only solution? Or are there ways to get better performance?
Looks like it. The problem is you're trying to do too much work inside the audio callback. Things I'd try immediately:
- Different compiler optimisation: try a higher optimisation level.
- Profile the code inside the callback and identify where you're spending most time.
- It looks like you're making a lot of calls to cos. There may be faster versions of these functions.
I spoke about some of these debugging/optimisation techniques in this talk
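One common replacement for cos inside a callback is a lookup table with linear interpolation; a sketch (the table size and accuracy trade-off are arbitrary choices here, not Oboe recommendations):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Precomputed cosine over one cycle; operator() takes a normalized
// phase in [0, 1) instead of radians, avoiding std::cos per sample.
class FastCos {
public:
    explicit FastCos(int size = 4096) : table_(size + 1) {
        const double kTwoPi = 6.283185307179586;
        for (int i = 0; i <= size; ++i)
            table_[i] = std::cos(kTwoPi * i / size);
    }
    double operator()(double phase) const {
        double pos = phase * (table_.size() - 1);
        int i = static_cast<int>(pos);
        double frac = pos - i;
        return table_[i] + frac * (table_[i + 1] - table_[i]);  // lerp
    }
private:
    std::vector<double> table_;
};
```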
One other quick tip: try to avoid raw pointers unless you really have no choice. For example, AudioStream* stream; would be better held through a smart pointer, and std::vector* players; can be refactored to avoid raw pointers as well.
I'm trying to implement the Oboe library in my project to get the lowest latency when playing sounds. I can perform playback manipulation, panning, mixing: the basic things that an internal Android class such as SoundPool can provide.
I wanted to test the panning effect, so I connected headphones to my device, but then I noticed that no sound played through my headphones. The device speaker didn't play any sound either. So I disconnected my headphones to check if something went wrong. I pressed the sound-playing button, but I couldn't hear any sound even though the headphones were completely disconnected from the device.
When I first open the app, it can play sound through the device speaker without any problems, but as soon as I connect headphones, no sound plays through either the device speaker or the headphones.
Same for the opposite: if I start the app with headphones connected to the device, it plays sound through the headphones just fine. It can even perform the panning effect, but if I disconnect the headphones from the device, it stops playing any sound.
- Do I have to re-open the audio stream whenever the preferred audio device changes?
- If yes to 1, is there any way to notify the app that the audio device has changed, so I can manually close the stream and re-open it? Or can I make Oboe handle audio device changes automatically?
Answered 2021-Apr-17 at 16:14
You need to open a new audio stream when the primary audio device changes, as stated in the Oboe full guide. Oboe will handle the detection of the device change for you automatically if you configure it to.
Disconnected audio stream
An audio stream can become disconnected at any time if one of these events happens:
- The associated audio device is no longer connected (for example when headphones are unplugged).
- An error occurs internally.
- An audio device is no longer the primary audio device.
When a stream is disconnected, it has the state "Disconnected" and calls to write() or other functions will return Result::ErrorDisconnected. When a stream is disconnected, all you can do is close it.
Here is an example
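The example itself is not reproduced on this page. As a rough stand-in, the close-and-reopen flow can be modeled like this (FakeStream and StreamState are placeholders, not Oboe types; in a real app this logic would live in Oboe's error callback):

```cpp
#include <cassert>

enum class StreamState { Open, Disconnected, Closed };

struct FakeStream {
    StreamState state = StreamState::Open;
    void close() { state = StreamState::Closed; }
};

// A disconnected stream can only be closed; playback resumes on a
// freshly opened stream bound to the new primary device.
FakeStream recoverFromDisconnect(FakeStream& dead) {
    dead.close();
    return FakeStream{};
}
```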
I'm developing a Flutter plugin which is targeting only Android for now. It's a kind of synthesis thing: users can load an audio file into memory, adjust the pitch (not pitch shift), and play multiple sounds with minimal delay using an audio library called Oboe.
I managed to get PCM data from audio files which the MediaCodec class supports, and also succeeded in handling pitch by manipulating playback via accessing the PCM array manually.
This PCM array is stored as a float array, ranging from -1.0 to 1.0. I now want to support a panning feature, just like internal Android classes such as SoundPool provide. I'm planning to follow how SoundPool handles panning. There are 2 values I have to pass to SoundPool when performing the panning effect: left and right. These 2 values are floats, and must range from 0.0 to 1.0.
For example, if I pass (1.0F, 0.0F), then users can hear sound only by left ear. (1.0F, 1.0F) will be normal (center). Panning wasn't problem... until I encountered handling stereo sounds. I know what to do to perform panning with stereo PCM data, but I don't know how to perform natural panning.
If I try to shift all sound to the left side, then the right channel of the sound must be played on the left side. Conversely, if I try to shift all sound to the right side, then the left channel must be played on the right side. I also noticed that there is a thing called a Panning Rule, which means that the sound must be a little bit louder when it's shifted to a side (about +3dB). I tried to find a way to perform a natural panning effect, but I really couldn't find an algorithm or reference for it.
Below is the structure of the float stereo PCM array. I actually didn't modify the array when decoding the audio files, so it should be a common structure...
Answered 2021-Apr-13 at 08:37
Pan rules or pan laws are implemented a bit differently from manufacturer to manufacturer.
One implementation that is frequently used is that when a sound is panned fully to one side, that side is played at full volume, whereas the other side is attenuated fully. If the sound is played at center, both sides are attenuated by roughly 3 decibels.
To do this you can multiply the sound source by the calculated amplitude, e.g. (untested pseudo code)
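A constant-power pan (the common "-3 dB at center" law described above) can be sketched like this; the [-1, 1] pan range and the cos/sin mapping are conventional choices, not something prescribed by SoundPool or Oboe:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// pan: -1 = hard left, 0 = center, +1 = hard right.
// Returns {leftGain, rightGain}; at center both are ~0.707 (-3 dB),
// so the perceived loudness stays roughly constant across the arc.
std::pair<float, float> panGains(float pan) {
    const double kPiOver4 = 0.785398163397448;
    double theta = (pan + 1.0) * kPiOver4;  // maps pan to 0 .. pi/2
    return {static_cast<float>(std::cos(theta)),
            static_cast<float>(std::sin(theta))};
}
// Per interleaved stereo frame: outL = inL * left; outR = inR * right.
```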
I'm trying to add protobuf to my flutter project. Here's the tree
Answered 2021-Jan-05 at 19:08
First off, I don't see your configuration for the protobuf compiler; if it is not set, Gradle will search for it in the system path (is it available there?). Usually, to avoid issues, I use a distributed version of the compiler by adding a protobuf configuration block in the app's Gradle build file. I also use the lite runtime, as it is the recommended way for Android.
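A sketch of that configuration for the protobuf-gradle-plugin (the version number is a placeholder, not a recommendation):

```groovy
// app/build.gradle fragment
protobuf {
    protoc {
        // Use a downloaded compiler instead of whatever is on the PATH.
        artifact = 'com.google.protobuf:protoc:3.14.0'
    }
    generateProtoTasks {
        all().each { task ->
            task.builtins {
                java {
                    option 'lite'   // lite runtime recommended for Android
                }
            }
        }
    }
}
```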
Answered 2021-Jan-02 at 22:23
What is the second argument ...?
(foo(args), ...) is a fold expression using the comma operator, equivalent to (foo(arg0), foo(arg1), ..., foo(argn)).
What does args.template mean?
template is used to disambiguate the meaning of < for a dependent type: without it, the expression would be parsed as (args.buildDefaultEffect < decltype(stack.getType())) > ().