# Synths and GPUs



## thereus (Nov 24, 2021)

VSTs are all still restricted to pushing as much CPU power as they can through a real-time buffer. That's the model we use, but it's monstrously out of date, which is why soft synths haven't really evolved very much for a good while. A GPU can be used to make a much more powerful signal processor and, therefore, much more powerful synths. Look at the latest NVIDIA GPUs, for instance. If the issue is that using a GPU involves buses that slow down the output, then why not have a lo-fi/hi-fi option: track using a "live", less detailed mode, then do a more powerful render that isn't real-time once the MIDI is laid down?
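The real-time constraint being described here can be made concrete: at a given sample rate and buffer size, a plugin has a fixed deadline per callback, and everything it computes must fit inside it. A minimal sketch (figures are illustrative, not tied to any particular host):

```python
# Sketch of the real-time budget a VST-style audio callback must meet.
# Numbers are illustrative, not tied to any particular host or plugin.

def callback_deadline_ms(sample_rate_hz: float, buffer_frames: int) -> float:
    """Time available to fill one audio buffer before the output underruns."""
    return 1000.0 * buffer_frames / sample_rate_hz

# Typical low-latency setting: 48 kHz, 128-frame buffers.
budget = callback_deadline_ms(48_000, 128)
print(f"per-callback budget: {budget:.2f} ms")  # ~2.67 ms

# Every oscillator, filter, and effect must finish inside that window,
# every callback, or the audio glitches. Offline rendering removes the
# deadline entirely, which is the trade the post is proposing.
```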


----------



## doctoremmet (Nov 24, 2021)

thereus said:


> which is why soft synths haven't really evolved very much for a good while


I am not sure if I agree with the implications here:

A) who says they haven’t? Is this some sort of _opinio communis_?

B) and if they haven’t, this is due to a lack of processing power?


----------



## thereus (Nov 24, 2021)

A) Purely my opinion, and in comparison with other advances in signal processing.

B) It's more due to how the processing power has developed and how it has been used. We have seen relatively little development in CPUs but massive development in GPUs. That is why I am wondering why we seem to be so bound to the CPU, and assuming it is because we need to be in order to maintain real-time processing.


----------



## d.healey (Nov 24, 2021)

You might be interested in this - https://soul.dev/

GPUs are very good at certain types of computations but I'm not sure they are good for DSP.


----------



## doctoremmet (Nov 24, 2021)

thereus said:


> A Purely my opinion and is in comparison with other advances in signal processing.
> B It's more due to how the processing power has developed and how it has been used. We have seen relatively little development in CPUs but massive development in GPUs. That is why I am wondering about why we seem to be so bound to the CPU and assuming it is because we need to be to maintain realtime processing.


Gotcha! I was thinking about your post and more broadly about what progress would actually need more processing power and I guess all kinds of (physical) modelling and sample mangling / timestretching could benefit. I’m looking at you IRCAM Stretch algorithms in Falcon!


----------



## thereus (Nov 25, 2021)

d.healey said:


> You might be interested in this - https://soul.dev/
> 
> GPUs are very good at certain types of computations but I'm not sure they are good for DSP.


GPUs are great for DSP. The issue, as I suggested in the OP, is not the GPU per se but the latency that is introduced when using it in a PC. That latency is not because of the GPU itself but because of the way the DMA is set up, which introduces latency that doesn't matter for graphics but kills audio. Nobody is going to redesign PC architecture for us, and dedicated audio DSP architectures are not going to arrive with the power we need at a sensible price and a sensible scale, because everyone other than us just needs to drive some speakers.

My question is: why are we not making our workflow match the tech? Why do we not record with a rough cut that can be handled in real time, but then render outside real time for a full-quality result? We already have to do this to some degree anyway, since as soon as our CPUs are maxed out we create audio files to replace the MIDI processing on some of our tracks. Better to keep the MIDI in rough form throughout and allow the GPU to run a massive offline render to generate the final result. Doing so would let the GPU provide processing that could do musical things that would dwarf what we can do in real time on a CPU.
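The "rough cut now, full render later" workflow could be sketched as one engine with a quality knob: the same note data gets a cheap real-time preview while tracking, then an unconstrained offline pass at bounce time. A hypothetical sketch, where oversampling stands in for "more DSP work per output sample" (the `render` function and its parameters are invented for illustration):

```python
# Hypothetical sketch of the two-pass workflow: cheap real-time preview
# while tracking, heavyweight offline render at bounce time.

import numpy as np

SR = 48_000  # output sample rate

def render(freq_hz: float, seconds: float, oversample: int = 1) -> np.ndarray:
    """Render a sine at `oversample` x the sample rate, then decimate.

    Oversampling is a stand-in for 'more DSP work per output sample';
    an offline/GPU render could afford a far higher factor than a
    real-time callback can."""
    n = int(SR * oversample * seconds)
    t = np.arange(n) / (SR * oversample)
    hi = np.sin(2 * np.pi * freq_hz * t)
    return hi[::oversample]  # naive decimation; real code would filter first

# While tracking: cheap preview that fits the real-time budget.
preview = render(440.0, 0.1, oversample=1)

# At bounce time: the same note data, rendered with no deadline.
final = render(440.0, 0.1, oversample=8)

assert preview.shape == final.shape  # same timeline, different fidelity
```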


----------



## Collywobbles (Nov 25, 2021)

I've also always wondered why GPUs haven't been leveraged for some additional processing power. Even if it's a case of only being able to run certain types of effects, it would still help imo.

At the absolute least, they could probably be incorporated into the audio rendering pipeline to speed up bounces, exports, stems etc.


----------



## d.healey (Nov 25, 2021)

thereus said:


> GPUs are great for DSP


I don't disagree, because I don't know, but have you got any examples?


----------



## thereus (Nov 25, 2021)

d.healey said:


> I don't disagree, because I don't know, but have you got any examples?


Here's one. These people process sonar just as we process sound but they have no need to restrict themselves to realtime output.











www.militaryaerospace.com


----------



## d.healey (Nov 25, 2021)

thereus said:


> Here's one. These people process sonar just as we process sound but they have no need to restrict themselves to realtime output.
> 
> 
> 
> ...


Thanks. It says they achieved "near realtime", and that was in 2013. With PCIe 4/5 and today's cards, what is the main bottleneck?


----------



## thereus (Nov 25, 2021)

Near real time for those guys and what we would think of as near real time are not the same thing...


----------



## thereus (Nov 25, 2021)

Urs says something similar to what I am asserting, here...



Where I am asking a different question, though, is: why do we care so much about real time? Let's find a more creative way to use the off-the-shelf hardware we already have to do more. To stick with u-he for a moment, there is already a "quality" setting in Diva that allows you to balance fidelity against CPU usage. Why not extend that concept into a "super hi-fi", high-latency option that would give us access to our GPUs for offline rendering? It would enable much more complexity in synths and effects, even if the real-time performance sounded less than magnificent. It wouldn't suit live performers but, mostly, those people are not on VI-Control...
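Extending a quality switch that way is easy to picture as a table of settings: each tier buys more DSP work per output sample, and the top tier simply drops the real-time guarantee. A sketch of the idea (the tier names loosely echo Diva's quality modes, but the "super hi-fi" tier and all multipliers are invented here):

```python
# Sketch of a Diva-style quality switch extended with an offline tier.
# Tier names loosely echo Diva's modes; the "super hi-fi" tier and the
# oversample multipliers are invented for illustration.

QUALITY = {
    "draft":       {"oversample": 1,  "realtime": True},
    "great":       {"oversample": 2,  "realtime": True},
    "divine":      {"oversample": 4,  "realtime": True},
    "super hi-fi": {"oversample": 16, "realtime": False},  # bounce-only / GPU
}

def relative_cost(setting: str) -> int:
    """DSP work per output sample, relative to draft quality."""
    return QUALITY[setting]["oversample"]

for name, cfg in QUALITY.items():
    mode = "live" if cfg["realtime"] else "bounce only"
    print(f"{name:12s} {relative_cost(name):3d}x  ({mode})")
```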


----------



## thereus (Nov 25, 2021)

Why do you have the view that GPUs are not appropriate for audio DSP other than because of latency introduced between the various parts of the PC?

[Let's leave aside that they need a level of cooling that makes your studio sound like a wind tunnel...]


----------



## thereus (Nov 25, 2021)

https://arxiv.org/pdf/2104.12922.pdf



Woah!!!!!

Maybe we are closer than I thought. It makes you wonder why the people we all buy synths from aren't further ahead with this... 

It looks a lot as though the DMA latency problem has long been solved in the architecture, so latency shouldn't be an issue, which raises the question...

Why are the people we buy synths from so far behind the curve?

That's a gap in the market for somebody...


----------



## thereus (Nov 25, 2021)

Sorry for all this, but I am finding it fascinating...










Making Music with Shaders: Practical Additive GPU Audio Synthesis [pdf] | Hacker News







news.ycombinator.com
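The reason additive synthesis maps so well onto a shader, as in the linked paper, is that every (partial, sample) pair is an independent sine evaluation: embarrassingly parallel work, exactly what GPUs are built for. A NumPy sketch of that data layout, with NumPy's vectorisation standing in for the GPU (the function and its parameters are illustrative):

```python
# Additive synthesis laid out as a (partials x samples) grid -- in the
# shader version, each cell is one independent GPU thread. NumPy here
# stands in for the GPU; the function is an illustrative sketch.

import numpy as np

SR = 48_000

def additive(f0: float, n_partials: int, seconds: float) -> np.ndarray:
    t = np.arange(int(SR * seconds)) / SR   # (samples,)
    k = np.arange(1, n_partials + 1)        # partial numbers (partials,)
    freqs = f0 * k                          # harmonic series
    amps = 1.0 / k                          # sawtooth-like 1/k rolloff
    # Broadcast to a (partials, samples) grid, then mix down.
    table = amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t[None, :])
    return table.sum(axis=0)

note = additive(110.0, n_partials=64, seconds=0.25)
print(note.shape)  # (12000,)
```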


----------



## Pier (Nov 26, 2021)

DSP in GPUs is definitely possible.

See this thread:










GPU Audio


Not sure where to put this, but I thought virtual synth users would be interested in this: https://www.braingines.com/ It's a company that has developed some tech that could use GPUs to do the DSP computations. This is, GPUs on the same machine, or even remote ones on a server in the local...




vi-control.net





Personally, I'd be happy if plugins actually leveraged the GPU for rendering the UI. I think most don't, which is why UI performance is typically really bad.


----------



## thereus (Nov 27, 2021)

Pier said:


> DSP in GPUs is definitely possible.
> 
> See this thread:
> 
> ...


Yes. I suppose, since the UI is so last-gen, I must be in some kind of dream world to imagine that the engines will catch up with the tech.


----------

