python-sounddevice | Play and Record Sound with Python | Audio Utils library
kandi X-RAY | python-sounddevice Summary
:sound: Play and Record Sound with Python :snake:
Top functions reviewed by kandi - BETA
- Query the list of devices
- Return a device id
- Raise PortAudioError
- Split value
- Generate a stream of data
- Check the output
- Set the status of the callback
- Signal handler
- Play and record simultaneously
- Query the list of supported devices
- Play data
- Write data to the stream
- Read data from the stream
- Check the input parameters
- Play a buffer
- Record a buffer
- Wait for the last playback/recording to finish
- True if the stream is active
- Create the body
- True if the stream is stopped
- True if there is an input overflow
- True if there is an input underflow
- Exit event handler
- Abort the stream
- True if the pin is enabled
- True if there is an output overflow
- Check the output format
python-sounddevice Key Features
python-sounddevice Examples and Code Snippets
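As a minimal illustration of the library's convenience API, the following sketch records a few seconds from the default input device and plays them back; the sample rate and duration are arbitrary assumptions, not values taken from the project docs.

```python
import sounddevice as sd

samplerate = 44100  # Hz (assumed; use your device's native rate)
duration = 3        # seconds

# Record from the default input device into a NumPy array.
recording = sd.rec(int(duration * samplerate), samplerate=samplerate, channels=2)
sd.wait()  # block until the recording is finished

# Play the recording back on the default output device.
sd.play(recording, samplerate)
sd.wait()  # block until playback is finished
```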
Community Discussions
Trending Discussions on python-sounddevice
QUESTION
I'm working on a script that sends data from a microphone to the Google Cloud Speech-to-Text API. I need to access the gRPC API to produce live readings during recording. Once the recording is completed, I need to access the REST API for more precise asynchronous recognition.
The live streaming part is working. It is based on the quickstart sample, but with python-sounddevice instead of PyAudio. The stream below records cffi_backend_buffer objects into a queue; a separate thread collects these objects, converts them to bytes, and feeds them to the API.
ANSWER
Answered 2022-Jan-01 at 11:10
Turns out, there are two things wrong with this code.
- It looks like the cffi_backend_buffer objects that I put into the queue behave like pointers to a certain area of memory. If I access them right away, as I do in streaming recognition, it works fine. However, if I collect them in a queue for later use, the buffers they point to become overwritten. The solution is to put byte strings into the queue instead.
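A minimal sketch of that fix, assuming a RawInputStream at 16 kHz mono; the stream parameters and queue handling are illustrative, not the asker's exact code:

```python
import queue
import sounddevice as sd

audio_queue = queue.Queue()

def callback(indata, frames, time, status):
    if status:
        print(status)
    # bytes(indata) copies the buffer contents, so the queued data stays
    # valid even after PortAudio reuses the underlying memory.
    audio_queue.put(bytes(indata))

# RawInputStream delivers raw buffers (not NumPy arrays), matching the
# cffi_backend_buffer objects mentioned in the question.
with sd.RawInputStream(samplerate=16000, channels=1, dtype='int16',
                       callback=callback):
    sd.sleep(5000)  # record for five seconds
```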
QUESTION
I am trying to repurpose the example that records microphone audio of arbitrary duration to capture audio output instead: https://python-sounddevice.readthedocs.io/en/0.4.1/examples.html#recording-with-arbitrary-duration
I am not sure how to set up the device correctly, or whether I can actually capture the output sound with python-sounddevice.
ANSWER
Answered 2021-Jul-13 at 20:58
Looking into it more, it seems I need to use a virtual audio driver to capture the output, such as https://github.com/ExistentialAudio/BlackHole on macOS (on Ubuntu, PulseAudio can be set up to do this). I can then connect to the virtual audio driver, which captures the output, and write it out.
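A sketch of that approach, assuming a virtual loopback driver such as BlackHole is installed and its device name contains "BlackHole"; the device lookup, duration, and file name are illustrative assumptions:

```python
import sounddevice as sd
import soundfile as sf

# Inspect the device list to confirm the virtual driver's name.
print(sd.query_devices())
device = 'BlackHole'  # sounddevice accepts a substring of the device name

samplerate = 44100
duration = 10  # seconds

# Record whatever the system routes to the virtual output.
recording = sd.rec(int(duration * samplerate), samplerate=samplerate,
                   channels=2, device=device)
sd.wait()
sf.write('output_capture.wav', recording, samplerate)
```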
QUESTION
I'm trying to get started with programming a small synthesizer (wave generator) on my Raspberry Pi 3A+. To start off, I tried to use Python's sounddevice module to play a stream from a NumPy array. However, my Raspberry Pi doesn't output any sound, which is weird, since the exact same code works perfectly fine on my laptop and produces a nice, steady sine-wave tone, like you'd expect.
The code I used is basically just a copied example from the sounddevice documentation; it can be found here: https://python-sounddevice.readthedocs.io/en/0.4.1/examples.html#play-a-sine-signal
I think I have installed all required modules on my Pi (PortAudio etc.), as I have installed the same ones on my laptop, where the code works.
Could it be that sounddevice just can't handle some part of the Pi's hardware, or that I messed up somewhere in the ALSA settings (although I checked several times)?
Interestingly, the Pi plays sound perfectly fine with the simpleaudio module, which is sadly not versatile enough for what I'm planning to do, which is why I need sounddevice or something similar. I'd be very thankful if someone could help me here.
ANSWER
Answered 2020-Oct-28 at 14:53
You need to set the samplerate in /etc/asound.conf to whatever samplerate you plan to use.
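For reference, a minimal sine-playback sketch along the lines of the linked documentation example; the sample rate below is an assumption and must match the one configured in /etc/asound.conf:

```python
import numpy as np
import sounddevice as sd

samplerate = 44100   # must match the rate set in /etc/asound.conf
frequency = 440.0    # Hz
start_idx = 0

def callback(outdata, frames, time, status):
    global start_idx
    if status:
        print(status)
    t = (start_idx + np.arange(frames)) / samplerate
    # Fill the output block with a 440 Hz sine at moderate amplitude.
    outdata[:] = 0.2 * np.sin(2 * np.pi * frequency * t).reshape(-1, 1)
    start_idx += frames

with sd.OutputStream(channels=1, callback=callback, samplerate=samplerate):
    sd.sleep(3000)  # play for three seconds
```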
QUESTION
I have bought a sound card, a Focusrite Scarlett 4i4 3rd Gen, with 4 output channels. I also have 4 speakers, and I will connect each speaker to the sound card. I would like to be able to set the volume of each speaker separately, maybe with a tkinter interface (ultimately, but that is not the point).
I have seen that there are plenty of different libraries (I'm using Windows 10 for this project); the ones that seem to be interesting are sounddevice and soundcard.
I would like to select the sound card as my output device and to specify which channel(s) must play sound at a given moment. A good usage would be to play a mono .wav file on 1, 2, 3 or 4 speakers; or a stereo .wav file in the same way, but with the first stereo channel on 2 speakers and the second stereo channel on the 2 other speakers. The perfect usage would be to create a surround 4.0 effect, arranging the speakers in a square and being able to "turn around" with the sound: you can imagine that I play a sound of a train, and that this sound turns around you as if the train were circling you.
sounddevice.AsioSettings() seems to allow us to control which output to use to play something, right? (https://python-sounddevice.readthedocs.io/en/0.3.15/api/platform-specific-settings.html) But when I look at the docs in detail, I also note that sounddevice.play() allows us to specify the mapping argument, which I don't really understand. (https://python-sounddevice.readthedocs.io/en/0.3.15/api/convenience-functions.html#sounddevice.play) I suppose that I will have to install ASIO in all cases, which is not a problem (I hope!).
As my purpose is to control each speaker, what could I specify, and how could I achieve that using the sounddevice library or another one? Also, is it possible to control the volume of each speaker using those libraries or other ones (e.g. pycaw)?
Thank you very much!
Elyurn
PS: If no solution exists with python, it would be a pleasure if you have ideas to achieve this goal another way (like a software able to do that for example).
ANSWER
Answered 2020-Jul-05 at 17:25
Both the AsioSettings and the mapping argument are for statically selecting channels. You cannot use either to mix signals or change their volume.
If you want to use the first few channels of your sound card in ascending order (e.g. channels 1, 2, 3 and 4), you don't need them at all. For example, you can simply use channels=4, which will select the first 4 channels. Even simpler, if you use sounddevice.play(), the number of channels will be determined by the given NumPy array, and you don't have to explicitly specify the channels parameter.
If you know the desired movement (of the train in your example) in advance, you can pre-compute the 2- or 4-channel signal. Then you can simply play the multi-channel signal with sounddevice.play() (using AsioSettings or the mapping argument if needed).
If you don't know the movement in advance (e.g. if it's computed in real time), you can use a sounddevice.OutputStream and implement a custom callback function which does the weighting of signals.
As for how exactly to mix the signal into the output channels, this doesn't really have anything to do with the sounddevice or soundcard modules. You can probably find signal processing libraries to do that, or you can implement it on your own. The appropriate search term for this is "panning". For two channels you can use "stereo panning"; for more channels there are other methods, like "vector base amplitude panning (VBAP)", "Ambisonics amplitude panning", ...
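A short sketch of the static channel selection described above: play a mono file on one chosen output of a multi-channel device, with a simple gain applied in NumPy. The file name, device name, and gain value are illustrative assumptions; note that mapping uses 1-based channel numbers.

```python
import sounddevice as sd
import soundfile as sf

data, samplerate = sf.read('train.wav')  # mono signal
gain = 0.5                               # per-speaker volume, 0.0..1.0

# mapping=[3] sends the mono signal to the third output channel only;
# the device can be given as a substring of its name.
sd.play(gain * data, samplerate, mapping=[3], device='Focusrite')
sd.wait()
```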
QUESTION
I'm trying to create a low-level Stream that will allow me to output a WAVE file while simultaneously recording an input on the same audio device. My audio device is set up so that the output WAVE file is played through the output, and this runs through a system that then goes to an input on the device. Using the convenience function playrec() from python-sounddevice gives me a full recording of what's seen at the input; however, with my code using the lower-level Stream() function, the recording starts late and the last tiny bit of the audio isn't recorded. The reason I want to use the lower-level Stream() function is to test whether I can decrease the overall delay in that system compared to playrec(). I tried changing the blocksize and buffersize to no avail.
ANSWER
Answered 2020-Jun-21 at 09:42
If you don't mind having the whole input and output signals in memory at once, you should feel free to use sd.playrec(). You will not be able to decrease the latency with your own code using sd.Stream. sd.playrec() internally uses sd.Stream and it adds no latency.
If you want to reduce the latency, you should try to use lower values for the blocksize and/or latency parameters. Note, however, that low values will be more unstable and might lead to glitches in the playback/recording.
If you don't want to have all the data at once in memory, you cannot use sd.playrec() and you can try it with sd.Stream, like in your example above.
Note, however, that the queue in the two adjacent lines of your example is useless at best.
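Along those lines, a hedged sketch of a duplex sd.Stream with small blocksize and latency values; the file name and parameter values are assumptions, and lower values may cause glitches as noted above.

```python
import threading

import numpy as np
import sounddevice as sd
import soundfile as sf

playback, samplerate = sf.read('stimulus.wav', dtype='float32', always_2d=True)
recorded = np.zeros_like(playback)
position = 0
finished = threading.Event()

def callback(indata, outdata, frames, time, status):
    global position
    if status:
        print(status)
    n = min(frames, len(playback) - position)
    outdata[:n] = playback[position:position + n]
    outdata[n:] = 0                          # zero-pad the final block
    recorded[position:position + n] = indata[:n]
    position += n
    if position >= len(playback):
        raise sd.CallbackStop  # stop the stream once the file is played

with sd.Stream(samplerate=samplerate, channels=playback.shape[1],
               blocksize=256, latency='low', callback=callback,
               finished_callback=finished.set):
    finished.wait()  # block until the stream has finished
```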
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install python-sounddevice
You can use python-sounddevice like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
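A typical setup might look like this (assuming Python 3 and pip are already installed):

```
python3 -m venv venv
source venv/bin/activate            # on Windows: venv\Scripts\activate
python -m pip install --upgrade pip setuptools wheel
python -m pip install sounddevice
```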