# GPU Audio



## Pier (Mar 2, 2021)

Not sure where to put this, but I thought virtual synth users would be interested in this:









> About | GPU Audio (www.braingines.com)
> GPU Audio is the world’s first company to unlock your GPU to power your music and audio production workflows. Download the world’s first GPU Audio powered plugins, join our early access and beta testing communities, and find out about new products from third parties.

It's a company that has developed some tech that can use GPUs to do the DSP computations. That is, GPUs on the same machine, or even remote ones on a server on the local network, much like using VEP.

The dev is answering questions in a thread on Reddit.


----------



## rgames (Mar 2, 2021)

That's pretty interesting. But I'm curious to see how well they deal with the latency issue. Frankly, I haven't needed more processing power for about a decade. I hit realtime bottlenecks long before I hit CPU bottlenecks, so offloading processing from the CPU won't help me much. In my experience, that's the much more common bottleneck these days.

Of course, CPU usage goes up as latency goes down. So if they do somehow manage to deal with the latency issue then maybe the advantage appears at that point.

rgames


----------



## Pier (Mar 2, 2021)

rgames said:


> That's pretty interesting. But I'm curious to see how well they deal with the latency issue. Frankly, I haven't needed more processing power for about a decade. I hit realtime bottlenecks long before I hit CPU bottlenecks, so offloading processing from the CPU won't help me much. In my experience, that's the much more common bottleneck these days.
> 
> Of course, CPU usage goes up as latency goes down. So if they do somehow manage to deal with the latency issue then maybe the advantage appears at that point.
> 
> rgames


The dev mentioned on Reddit latency is about 1ms.

Yeah, personally I haven't hit a CPU limit either on a desktop in years. It's another story on laptops though.


----------



## companyofquail (Mar 2, 2021)

Thanks for the heads up. I signed up for curiosity and I like playing with technology.


----------



## rgames (Mar 2, 2021)

Pier said:


> The dev mentioned on Reddit latency is about 1ms.
> 
> Yeah, personally I haven't hit a CPU limit either on a desktop in years. It's another story on laptops though.


Yeah I saw the 1 ms number in there but I was assuming that's just a reference for a starting point. 

Is 1 ms really low enough to make a difference? 2-3 ms is do-able for most projects that aren't huge orchestral templates. That's already low enough from an instrument response standpoint - you can't tell the difference between 1 ms and 3 ms.

So that leaves realtime monitoring of audio effects (e.g. for live streams). But I don't think 1 ms is low enough for that (is it?). I think you'd still have some weird delay artifacts at 1 ms. Granted, they're worse at 2-3 ms, but I'm not sure the difference matters from a practical standpoint, so I was thinking they'd be pushing well below that. If you can get down low enough to make those effects inaudible, then I think that's a big advantage from a live sound standpoint. But I don't know. Maybe 1 ms is a big jump from 2-3 ms.

I've never run at latencies that low but I bet you would hit some CPU bottlenecks there. So the GPU approach might help a bunch.
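
The buffer-size arithmetic behind these latency numbers can be sketched quickly (an illustrative calculation, assuming a 48 kHz sample rate and counting only the audio buffer itself, not driver or converter overhead):

```python
# One-way latency contributed by a single audio buffer:
# latency (ms) = buffer_size / sample_rate * 1000.
SAMPLE_RATE = 48_000  # Hz, assumed for illustration

def buffer_latency_ms(buffer_size: int, sample_rate: int = SAMPLE_RATE) -> float:
    """Latency in milliseconds added by one buffer of `buffer_size` frames."""
    return buffer_size / sample_rate * 1000

# Common ASIO buffer sizes and the latency each one contributes.
for frames in (32, 64, 128, 256):
    print(f"{frames:>4} frames -> {buffer_latency_ms(frames):.2f} ms")
```

By this arithmetic, 1 ms corresponds to roughly a 48-frame buffer at 48 kHz, while a typical 128-frame buffer already contributes over 2.5 ms each way.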

rgames


----------



## chimuelo (Mar 2, 2021)

The technology first came from MIT. It was a beta that allowed ATI 8X cards to use the GPU for reverb, which sucked as badly as Cubase VST reverb; it was ’98 IIRC.

Next thing I knew, NVidia bought it and it was referred to as CUDA, but I'm glad to see that it’s evolved into a full-on audio option.


----------



## EvilDragon (Mar 3, 2021)

The thing is, GPU audio processing is never going to be good for serial processing (which is most of the audio processing your DAW does). As soon as you have plugins one after another, you have created a serial dependency that is not easily parallelizable and has to be processed in successive chunks.
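
A toy sketch of that dependency (illustrative Python, not anything from GPU Audio): within one buffer, each stage of a plugin chain consumes the previous stage's output, so the only parallelism readily available is across independent tracks, not along one track's chain.

```python
# Not a real DSP engine - just illustrating the serial-dependency point.
from concurrent.futures import ThreadPoolExecutor

def eq(buf):     return [s * 0.9 for s in buf]      # stand-in "EQ" plugin
def comp(buf):   return [min(s, 0.5) for s in buf]  # stand-in "compressor"
def reverb(buf): return [s + 0.01 for s in buf]     # stand-in "reverb"

def process_chain(buf, chain):
    # Serial: stage N+1 cannot start until stage N has finished.
    for plugin in chain:
        buf = plugin(buf)
    return buf

tracks = [[0.2] * 64, [0.8] * 64, [0.4] * 64]  # three independent tracks
chain = [eq, comp, reverb]

# Parallelism exists *across* tracks; each track's chain still runs in order.
with ThreadPoolExecutor() as pool:
    mixed = list(pool.map(lambda t: process_chain(t, chain), tracks))
```

A GPU can throw thousands of threads at the per-track work, but the chain ordering itself stays sequential per buffer.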


----------



## chrisr (Mar 3, 2021)

companyofquail said:


> Thanks for the heads up. I signed up for curiosity and I like playing with technology.


Could you let us know _when_ they propose to send you the alpha/beta (I get the impression they might be some time away from that, for some reason...), and please report back on your experiences once you've had hands-on time. Thanks!


----------



## Markrs (Mar 3, 2021)

Pier said:


> Not sure where to put this, but I thought virtual synth users would be interested in this:
> 
> 
> 
> ...



Looks like very exciting stuff to reduce the CPU intensity of plugins and distribute that to the GPU.


----------



## chrisr (Mar 3, 2021)

Markrs said:


> Looks like very exciting stuff to reduce the CPU intensity of plugins and distribute that to the GPU.


The idea has been floated several times by various people over the past 15 or so years, and it has always failed to become a viable, well-adopted product. I'd love for that to change, but I still hear Scotty's voice echoing round the back of my head.


----------



## ReleaseCandidate (Mar 3, 2021)

chimuelo said:


> Technology first came from MIT. It was a BETA but allowed ATI 8X cards to use the GPU for Reverb which sucked as bad as Cubase VST Reverb as it was ‘98 IIRC.
> 
> Next thing I knew NVidia bought it and it was referred to as CUDA but glad to see that it’s evolved into a full on audio option.


You are confusing quite a few things. Shader languages evolved into (more) general-purpose processing languages, but audio has never been a real factor, although there has been (and is) quite a lot of research into using GPGPU for audio.

AMD has shipped GPUs with TrueAudio ASICs, which accelerate convolution and positional audio calculations, since 2013.

Nowadays NVidia talks about audio rays too when they speak about their realtime raytracing engine, so they have been/are working on a positional audio solution as well. I highly suspect that we won't see many uses of that either (again, first in games), because the CPU still handles audio sufficiently fast in general.

I'm curious how Braingines actually implemented their audio engine.


----------



## chrisr (Mar 17, 2021)

Hey @companyofquail - just wondering if you've had any more info from them since you signed up a couple of weeks back? Have they sent a beta yet - or given any more info etc?

best,
Chris


----------



## companyofquail (Mar 17, 2021)

chrisr said:


> Hey @companyofquail - just wondering if you've had any more info from them since you signed up a couple of weeks back? Have they sent a beta yet - or given any more info etc?
> 
> best,
> Chris


Nothing at all


----------



## chrisr (Mar 17, 2021)

Ok, thanks - maybe still a bit soon. I'm hoping it's legit and that they're bringing a new approach to the table. Time will tell, I guess.


----------



## Technostica (Mar 17, 2021)

Apple's M1 chip might open the door for audio processing on the GPU, as it uses a shared memory model.
Both the CPU and GPU can access the same data in RAM without having to make a copy, which the traditional model requires.
The AI cores can do the same.
This is separate from the fact that they also use integrated (on-package) memory, which is just a packaging choice that helps keep the size down.


----------



## Daniel Vrangsinn (Apr 10, 2021)

rgames said:


> Yeah I saw the 1 ms number in there but I was assuming that's just a reference for a starting point.
> 
> Is 1 ms really low enough to make a difference? 2-3 ms is do-able for most projects that aren't huge orchestral templates. That's already low enough from an instrument response standpoint - you can't tell the difference between 1 ms and 3 ms.
> 
> ...


Actually, that's more than good enough for realtime monitoring, live performance and recording. Even the super expensive DSP systems have latency. The human ear can't notice latency below 5 milliseconds, and most people won't notice anything below 10 milliseconds. I've been using plugins in realtime in the studio for a couple of decades, more or less since drivers managed to get latency below 10 ms.

Most keyboards in the old days had more than 10 ms of latency out of the factory, and people still played them just fine. If they manage to get this GPU tech working, I'd be able to squeeze a little bit more out of my DAW, so I could record a full band live with realtime effects on every track. I can do that already, but with this extra processing power I'm pretty sure I could include the heavy mastering processors on the main bus as well, and also stream the recording live on the internet.

I see enormous possibilities here.


----------



## Tatiana Gordeeva (Apr 26, 2021)

Will this GPU-based tech have an impact on granular synths, especially when one runs many of them concurrently?


----------



## Pier (Apr 27, 2021)

Tatiana Gordeeva said:


> Will this GPU-based tech have an impact on granular synths, especially when one runs many of them concurrently?


Yes, it should help if it works as promised.
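
For context on why granular synthesis is a natural fit: each grain is an independent windowed snippet, so thousands of grains can in principle be rendered in parallel and then summed. A minimal sketch (my illustration using NumPy; a GPU implementation would map the per-grain work onto GPU threads):

```python
# Each grain depends only on its own parameters - embarrassingly parallel.
import numpy as np

SR = 48_000  # sample rate, assumed for illustration

def render_grain(freq, start, dur, sr=SR):
    """One Hann-windowed sine grain; independent of every other grain."""
    t = np.arange(int(dur * sr)) / sr
    window = np.hanning(t.size)
    return start, np.sin(2 * np.pi * freq * t) * window

def mix_grains(grains, total_len):
    """Sum grains into the output buffer - the only serial step."""
    out = np.zeros(total_len)
    for start, g in grains:
        out[start:start + g.size] += g
    return out

# 200 overlapping 20 ms grains; each render_grain call could run concurrently.
grains = [render_grain(220 * (1 + i % 5), start=i * 240, dur=0.02)
          for i in range(200)]
audio = mix_grains(grains, total_len=200 * 240 + int(0.02 * SR))
```

Running many granular synths at once multiplies the grain count, which is exactly the kind of wide, uniform workload GPUs are built for.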


----------



## Milkman (Apr 29, 2021)

This is quite interesting. While I haven't hit an actual CPU bottleneck for audio DSP in many years, real CPU processing overloads aren't the main issue affecting many people across a number of platforms today. *cough* Cubase *cough*.

ASIO internal overloads due to hyperthreading (virtual cores), scheduling, and CPU power-saving states are fairly common on a variety of susceptible hardware platforms, and this is of course 100% the fault of the CPU, mainboard, and operating system vendors, who have elevated some types of compute priorities over others. Some DAWs are more or less susceptible to this, but having the ENTIRE DSP chain run on a GPU instead of the CPU could very well alleviate some or all of that, if implemented well.

This is exciting!

(Or, further down the road, it might introduce MORE issues in terms of GPU vendors writing new drivers, supporting new tech, new chipsets, etc., but if the implementation is done right and the drivers allow exclusive access without too many interrupts, this could work.)


----------



## timbit2006 (Nov 13, 2021)

Has anyone received or heard anything about this technology yet?


----------



## Tatiana Gordeeva (Nov 13, 2021)

timbit2006 said:


> Has anyone received or heard anything about this technology yet?


Not much. Maybe something new from the companies and universities mentioned in this paper?


https://arxiv.org/pdf/2104.12922.pdf

One Billion Audio Sounds from GPU-Enabled Modular Synthesis
Joseph Turian (Spooky Audio, Berlin, Germany); Jordie Shier, George Tzanetakis, Kirk McNally (Computer Science and Music Technology, University of Victoria, Canada); Max Henry (Music Technology Area, McGill University, Montreal, Canada)

----------



## Tatiana Gordeeva (Nov 13, 2021)

Also, this thread is alive now:


----------

