resampler | A Simple and Efficient Audio Resampler Implementation in C | Speech library

by cpuimage | Language: C | Version: Current | License: MIT

kandi X-RAY | resampler Summary


resampler is a C library typically used in Artificial Intelligence and Speech applications. It has no reported bugs or vulnerabilities, carries a permissive MIT license, and has low support activity. You can download it from GitHub.

A Simple and Efficient Audio Resampler Implementation in C
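This page does not show the library's own API, so as a rough, conceptual illustration of what an audio resampler does, here is a minimal Python/NumPy sketch of sample-rate conversion by linear interpolation. It is not the C implementation or its interface; production resamplers usually use filtered (e.g. windowed-sinc or polyphase) interpolation rather than plain linear interpolation for better quality.

import numpy as np

def resample_linear(samples, src_rate, dst_rate):
    # Resample a mono signal from src_rate to dst_rate by linear interpolation.
    n_out = int(round(len(samples) * dst_rate / src_rate))
    t_in = np.arange(len(samples)) / src_rate    # timestamps of the input samples
    t_out = np.arange(n_out) / dst_rate          # timestamps of the desired output samples
    return np.interp(t_out, t_in, samples)

# Example: convert one second of a 1 kHz sine from 44100 Hz to 48000 Hz.
t = np.arange(44100) / 44100.0
sine = np.sin(2 * np.pi * 1000.0 * t).astype(np.float32)
out = resample_linear(sine, 44100, 48000)        # len(out) == 48000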

Support

resampler has a low-activity ecosystem.
It has 109 stars, 50 forks, and 6 watchers.
It had no major release in the last 6 months.
There are 0 open issues and 3 closed issues. On average, issues are closed in 104 days. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of resampler is current.

Quality

              resampler has no bugs reported.

Security

              resampler has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              resampler is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              resampler releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.


            resampler Key Features

            No Key Features are available at this moment for resampler.

            resampler Examples and Code Snippets

            No Code Snippets are available at this moment for resampler.

            Community Discussions

            QUESTION

When decoding a JPEG, how would I reverse chrominance downsampling in MATLAB?
            Asked 2021-Apr-27 at 22:21

Hi, I'm trying to make a simple JPEG compressor which also decompresses the image. I use the following code to downsample the chrominance of an image as the first step of JPEG compression.

            ...

            ANSWER

            Answered 2021-Apr-27 at 22:21

            Using imresize for up-sampling is "almost correct".

Instead of using imresize, you are better off using vision.ChromaResampler for up-sampling:
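The answer's MATLAB snippet is not reproduced on this page. As a rough illustration of the general idea only (reversing 2x chroma subsampling by replicating each chroma sample over a 2x2 block), here is a NumPy sketch of the simplest, nearest-neighbour version; it is not the MATLAB answer's code.

import numpy as np

def upsample_chroma_2x(chroma):
    # Nearest-neighbour 2x up-sampling: repeat every chroma sample over a 2x2 block.
    return np.repeat(np.repeat(chroma, 2, axis=0), 2, axis=1)

cb_small = np.array([[100, 110],
                     [120, 130]], dtype=np.uint8)
cb_full = upsample_chroma_2x(cb_small)   # shape (4, 4), each value duplicated over a 2x2 block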

            Source https://stackoverflow.com/questions/67288940

            QUESTION

            Aggregating performance measures in mlr3 ResampleResult when some iterations have NaN values
            Asked 2021-Apr-14 at 11:38

            I would like to calculate an aggregated performance measure (precision) for all iterations of a leave-one-out resampling.

For a single iteration, the result for this measure can only be 0 or 1 (if the positive class is predicted) or NaN (if the negative class is predicted).

            I want to aggregate this over the existing values of the whole resampling, but the aggregation result is always NaN (naturally, it will be NaN for many iterations). I could not figure out (from the help page for ResampleResult$aggregate()) how to do this:

            ...

            ANSWER

            Answered 2021-Apr-14 at 11:38

            I have doubts if this is a statistically sound approach, but technically you can set the aggregating function for a measure by overwriting the aggregator slot:
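The mlr3-specific snippet is not shown on this page. Conceptually, overriding the aggregator amounts to aggregating the per-iteration scores with a NaN-aware function; a hedged Python analogue of that idea (not the R/mlr3 code) looks like this:

import numpy as np

# Hypothetical per-iteration precision scores from a leave-one-out resampling:
# 0 or 1 when the positive class was predicted, NaN when the negative class was.
fold_precision = np.array([1.0, 0.0, np.nan, 1.0, np.nan])

aggregated = np.nanmean(fold_precision)   # mean over the non-NaN iterations -> 0.666...
print(aggregated)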

            Source https://stackoverflow.com/questions/67090862

            QUESTION

            Error when running C++ libraries in iOS Xcode Project
            Asked 2021-Apr-14 at 07:59

I have C++ files in my iOS Xcode project. Those files use the following libraries, which I'm calling via Homebrew:

            • mpg123/1.26.5
            • libgcrypt
            • ffmpeg
            • libgpg-error
            • fftw
            • libsndfile

The way I'm including them in the project is by setting the Header Search Paths:

And the Library Search Paths:

That is all I'm doing to use those libraries. The error that I get when I compile the project is the following:

            ...

            ANSWER

            Answered 2021-Apr-14 at 07:59

Since you've installed these using brew, you are trying to link against libraries built for the Mac. You need to build those libraries for iOS. Note that this will typically involve making a fat binary of the different architectures you'll need per library. You can easily test this for fftw and see if the linker errors disappear. Here are some references to build or download a pre-built version.

            https://github.com/godock/fftw-build

In theory, once you link against the iOS version, you should see errors like ...

            Source https://stackoverflow.com/questions/66979282

            QUESTION

            The error "KeyError: date" insists if the CSV file is UTF-8-encoded and its date is totally correct, and the dataframe code also is correct
            Asked 2021-Apr-11 at 20:48

            ANSWER

            Answered 2021-Apr-10 at 22:25

            Based on your posted code I made some changes:

1. I have put in a print statement for the DataFrame after reading it. This shows the datatype of each column in the DataFrame; for the date field it should be "datetime64[ns]".

2. Afterwards, you don't have to parse it again as a date.

3. Some code changes for the "cases" field and to visualize it (see the sketch below).
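A minimal pandas sketch of those steps follows; the file name and the "date"/"cases" column names are assumptions for illustration, not taken from the original post.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv", parse_dates=["date"])
print(df.dtypes)            # the 'date' column should show as datetime64[ns]

df = df.set_index("date")   # no second date-parsing step is needed
df["cases"].plot()          # visualize the 'cases' field
plt.show()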

            Source https://stackoverflow.com/questions/67039337

            QUESTION

            NAudio's BufferedWaveProvider gets full when recording and mixing an audio
            Asked 2021-Mar-08 at 05:45

I'm having an issue with a BufferedWaveProvider from the NAudio library. I'm recording 2 audio devices (a microphone and a speaker), merging them into 1 stream and sending it to an encoder (for a video).

            To do this, I do the following:

1. Create a thread where I record the microphone using WasapiCapture.
2. Create a thread where I record the speaker audio using WasapiLoopbackCapture. (I also use a SilenceProvider so I don't have gaps in what I record.)
3. I want to mix these 2 audio streams, so I have to make sure they have the same format. I detect the best WaveFormat among these audio devices; in my scenario, it's the speaker's. So I decide that the microphone audio will pass through a MediaFoundationResampler to adapt its format to match the speaker's.
4. Each audio chunk from the Wasapi(Loopback)Capture is sent to a BufferedWaveProvider.
5. Then I also create a MixingSampleProvider to which I pass the ISampleProvider from each recording thread: the MediaFoundationResampler for the microphone, and the BufferedWaveProvider for the speakers.
6. In a loop in a third thread, I read the data from the MixingSampleProvider, which is supposed to asynchronously empty the BufferedWaveProvider(s) while they are being filled.
7. Because each buffer may not get filled at exactly the same time, I look at the minimal common duration between these 2 buffers and read that amount out of the mixing sample provider.
8. Then I enqueue what I read so that my encoder, in a 4th thread, can process it in parallel too.

Please see the flowchart below that illustrates my description above.

            My problem is the following:
• It works GREAT when recording the microphone and speaker for more than 1 hour while playing a video game that uses the microphone too (for online multiplayer). No crash. The buffers stay nearly empty the whole time. It's awesome.
• But for some reason, every time I try my app during a Discord, Skype or Teams audio conversation, I immediately (within 5 seconds) crash in BufferedWaveProvider.AddSamples because the buffer gets full.

            Looking at it in debug mode, I can see that:

• The buffer corresponding to the speaker is almost empty. It holds, on average, at most 100 ms of audio.
            • The buffer corresponding to the microphone (the one I resample) is full (5 seconds).

From what I read on the NAudio author's blog, the documentation and Stack Overflow, I think I'm following best practice (but I could be wrong), which is writing into the buffer from one thread and reading it in parallel from another. There is of course a risk that it gets filled faster than I read it, and that's basically what's happening right now. But I don't understand why.

            Help needed

            I'd like some help to understand what I'm missing here, please. The following points are confusing me:

1. Why does this issue happen only with Discord/Skype/Teams meetings? The video games I play use the microphone too, so I can't imagine it's something like another app preventing the microphone/speakers from working correctly.

2. I synchronize the startup of both audio recorders. To do this, I use a signal to ask the recorders to start, and once they have all started to generate data (through the DataAvailable event), I send a signal to tell them to fill the buffers with what they receive in the next event. It's probably not perfect because both audio devices raise DataAvailable at different times, but we're talking about 60 ms of difference at most (on my machine), not 5 seconds. So I don't understand why the buffer fills up.

3. Following up on what I said in #2, my telemetry shows that the buffer fills up this way (values are dummy):

            ...

            ANSWER

            Answered 2021-Mar-08 at 05:45

            Following more investigations and a post on GitHub: https://github.com/naudio/NAudio/issues/742

I found out that I should listen to the MixingSampleProvider.MixerInputEnded event and re-add the SampleProvider to the MixingSampleProvider when it happens.

The reason this happens is that I'm processing the audio while capturing it, and at some moments I may process it faster than I record it, so the MixingSampleProvider considers it has nothing more to read and stops. So I have to tell it that it's not over and that it should expect more.

            Source https://stackoverflow.com/questions/66114718

            QUESTION

            Resample a dataframe, interpolate NaNs and return a dataframe
            Asked 2021-Feb-18 at 17:07

            I have a dataframe df that contains data in periods of 3 hours:

            ...

            ANSWER

            Answered 2021-Feb-18 at 17:07

I found out that I could just append .interpolate() to my initial approach and it would work:
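A hedged sketch of that approach follows; the original DataFrame and the target frequency are not shown in the post, so the hourly upsampling and column name below are assumptions.

import numpy as np
import pandas as pd

idx = pd.date_range("2021-01-01", periods=8, freq="3H")
df = pd.DataFrame({"value": np.arange(8.0)}, index=idx)

# Upsample the 3-hourly data to hourly and fill the resulting NaNs by interpolation.
hourly = df.resample("1H").interpolate()
print(hourly.head())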

            Source https://stackoverflow.com/questions/66264046

            QUESTION

            How to set sample format when using sox with ffmpeg?
            Asked 2020-Dec-18 at 13:15

I am trying to convert a 44.1 kHz, 16-bit FLAC file into a 48 kHz, 32-bit (float) WAV file.

            This is the command I use:

            'ffmpeg -i in.flac -af aresample=resampler=soxr:precision=28:out_sample_fmt=fltp:out_sample_rate=48000 out.wav'

No matter which value I use for out_sample_fmt (s32, flt, fltp), the output out.wav is only 16-bit.

What am I doing wrong here? How do I get the highest-quality (in terms of resampling) 32-bit floating-point WAV file with ffmpeg using soxr?

            ...

            ANSWER

            Answered 2020-Dec-18 at 13:15

The issue isn't with soxr or aresample. Typically, after media data is filtered, it is encoded before being written to the output. For each output format, there is a default encoder designated for each type of stream (audio, video, ...). In the case of WAV, it's pcm_s16le for audio.

Add -c:a pcm_f32le for 32-bit floating-point PCM in little-endian order; change le to be (pcm_f32be) for big-endian.
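Applied to the command from the question, that would look something like:

'ffmpeg -i in.flac -af aresample=resampler=soxr:precision=28:out_sample_fmt=fltp:out_sample_rate=48000 -c:a pcm_f32le out.wav'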

            Source https://stackoverflow.com/questions/65357379

            QUESTION

            Does Windows Media Foundation on UWP have a resampler? If so how do I use it?
            Asked 2020-Nov-18 at 14:46

Using Win32, I have access to CLSID_CResamplerMediaObject, which means I can reduce my channel count from, say, 6 to 2.

On UWP this is no longer defined, and the only reference to a resampler I can find is CLSID_AudioResamplerMediaObject. When I create an instance of this class, however, and pass it my MFMediaType_Float or MFMediaType_PCM type, it says that the provided types aren't supported...

            ...

            ANSWER

            Answered 2020-Nov-18 at 14:46

            CLSID_AudioResamplerMediaObject and CLSID_CResamplerMediaObject are the same thing, same GUID of {f447b69e-1884-4a7e-8055-346f74d6edb3}.

The error you mentioned is not coming from the audio resampler; you are getting it from the Source Reader API. The error is presumably reported correctly, indicating that for the given media source the reader cannot provide a conversion to the supplied media type. There can be multiple reasons for this to happen.

            Source https://stackoverflow.com/questions/64895141

            QUESTION

How to get NaN when downsampling with last
            Asked 2020-Nov-17 at 03:19

            I want to downsample some time series data, giving me the last value of each quarter:

            ...

            ANSWER

            Answered 2020-Nov-17 at 03:19

An enhancement that will address this is likely: GitHub issue #37768.

            In the meantime, here is a workaround:
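The answer's snippet is not reproduced on this page. One possible workaround along those lines (my own hedged sketch, not necessarily the answer's code) is to aggregate with a function that takes the positionally last value, so a trailing NaN is kept instead of being skipped by .last():

import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0, np.nan],
              index=pd.date_range("2020-01-31", periods=6, freq="M"))

# .last() skips NaN; a custom aggregation keeps it (empty bins still become NaN).
quarterly = s.resample("Q").agg(lambda x: x.iloc[-1] if len(x) else np.nan)
print(quarterly)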

            Source https://stackoverflow.com/questions/64793972

            QUESTION

            IMFTransform::ProcessOutput Efficiency
            Asked 2020-Nov-11 at 10:46

I've noticed that, as apparently documented, IMFTransform::ProcessOutput() for a resampler can only output one sample per call! I guess it's more oriented towards large-frame-size video coding. Given that all the reference code I've looked at for related audio playback allocates one IMFMediaBuffer per call to ProcessOutput, this seems a little insane and like terrible architecture, unless I am missing something?

It is especially bad from the point of view of media buffer usage. For example, a SourceReader decoding my test MP3 gives me chunks of about 64 KB in one sample with one buffer, which is sensible. But GetOutputStreamInfo() requests a media buffer of just 24 bytes per call to ProcessOutput().

64 KB chunks => chopped into many 24-byte chunks => passed on for further processing seems like very wasteful overhead (the resampler would incur overhead for every 24 bytes, and enforce that overhead further down the pipeline if it's not consolidated).

            From https://docs.microsoft.com/en-us/windows/win32/api/mftransform/nf-mftransform-imftransform-processoutput

It says:

            1. The MFT cannot return more than one sample per stream in a single call to ProcessOutput
            2. The MFT writes the output data to the start of the buffer, overwriting any data that already exists in the buffer

So it can't even append to the end of a partially full buffer attached to the sample.

I could create my own pooling object that supports the media buffer interface but just bumps a pointer into a plain locked media buffer, I guess. The only other option seems to be to lock/copy those 24 bytes into another, larger buffer for processing. But this all seems excessive, and at the wrong granularity.

            What is the best way to deal with this?

            Here is a simplified sketch of my test so far:

            ...

            ANSWER

            Answered 2020-Nov-11 at 10:46

I wrote some code for you to prove that the audio resampler is capable of processing large audio blocks at once. It is a good, efficient processing style:

            Source https://stackoverflow.com/questions/64784248

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install resampler

            You can download it from GitHub.

            Support

For any new features, suggestions or bugs, create an issue on GitHub. If you have any questions, check and ask on Stack Overflow.
CLONE

• HTTPS: https://github.com/cpuimage/resampler.git

• GitHub CLI: gh repo clone cpuimage/resampler

• SSH: git@github.com:cpuimage/resampler.git
