# 20 instances of impulse response with 0% CPU hit



## Frederick Russ (Sep 9, 2009)

Just scoping this out with the VI folks to see if the PC crowd here has tried this:

Ingo Leif Software GPU Impulse Reverb
http://www.hitsquad.com/smm/programs/GPUImpulseReverb/

It's basically a Windows-based impulse reverb plug-in that relies on a regular NVIDIA graphics adapter to do most of its computation, with next to 0% CPU consumption. It's freeware, so I'd like to see if anyone has tried this yet or heard of it. I was talking to someone the other day who said he was able to get 20 instances of these plug-ins with nearly 0% CPU hit. It's currently Windows-only - not sure if it will be developed for Mac OS X, but it sounds like promising technology.
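For context on what the graphics card is being asked to do: a convolution reverb multiply-accumulates the dry signal against a sampled impulse response. A toy direct-form sketch (illustrative only - real plug-ins use partitioned FFT convolution, which is exactly the kind of parallel workload a GPU is good at; this is not the plug-in's actual code):

```python
def convolve(dry, ir):
    """Direct-form convolution: y[n] = sum_k ir[k] * dry[n - k].

    This multiply-accumulate loop is the workload a GPU convolution
    reverb offloads (in practice via partitioned FFT convolution).
    """
    out = [0.0] * (len(dry) + len(ir) - 1)
    for n, x in enumerate(dry):
        for k, h in enumerate(ir):
            out[n + k] += x * h
    return out

# A single-sample impulse comes back as the IR itself:
wet = convolve([1.0, 0.0, 0.0], [0.5, 0.25])
```

The cost grows with the product of signal length and IR length, which is why long hall tails are expensive on the CPU and attractive to offload.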


----------



## Peter Emanuel Roos (Sep 10, 2009)

Wow, that's very interesting! Will check it out this weekend!


----------



## Justus (Sep 10, 2009)

Frederick Russ @ Thu Sep 10 said:


> relies on regular NVIDIA graphics adaptor to do most of its computation with next to 0% CPU consumption



Cool idea!!!


----------



## Peter Emanuel Roos (Sep 10, 2009)

Haha! When I checked the site, my own Google AdWords ad was there, lol :D


----------



## MNovy (Sep 10, 2009)

Very nice idea, but unfortunately it has a lot of settings missing, like shaping the waveform as in any other IR plug-in.

But I like this concept.


----------



## R. Soul (Sep 10, 2009)

But I suspect you need a top of the range graphics card to run 20 instances? 
My graphics card is quite basic as I don't do video editing or play games. Still, anything that lowers your CPU hit is very welcome. Looking forward to a more complete version.


----------



## NYC Composer (Sep 10, 2009)

Does anyone really need 20 instances of verb? or 15? or 10?


----------



## Angel (Sep 10, 2009)

That software is a work in progress... more of a technical study... I've been watching the project for the past few years at KVR Audio.

What a pity I have ATI cards in all of my computers.


----------



## Stevie (Sep 10, 2009)

Nebula is also CUDA-based, although not all of the processing can be done on the graphics card, since it's more complex.


----------



## Aaron Dirk (Sep 10, 2009)

I just so happen to have a Nvidia GeForce 9800GT in my PC
(because COD4 still kicks major butt!)

The VSTi doesn't look like anything major compared to Altiverb and such.
I was going to download Red Wire guitar speaker IRs today, so something like this would be fine for auditioning a bunch of them.



So.... I guess I could give this a try


----------



## JPRmusic (Sep 10, 2009)

LiquidSonics has a beta called Reverberate LE which does the same thing. They've got a regular version to buy, but the beta CUDA version is still free. I've been running several instances of it to save CPU power for LASS. No glitches so far. I use the same computer for some video editing, so it's nice to put the card to use.

Jeff


----------



## Frederick Russ (Sep 10, 2009)

Thanks Jeff. Here's the link for anyone who is interested:

http://www.liquidsonics.com/software_reverberate_le.htm

It's still Windows-only for now. Hopefully some brave soul will begin development for Mac OS X, since we use NVIDIA cards too. But I did find this little note in their support section:



> Q) Do you support AU or Mac VST?
> A) Currently, no. In future, perhaps. If you are interested in a Mac version of any of our plug-ins, please contact us. If enough people want a Mac version of a specific product, we'll write one.



If you're on Mac OS X, we really need to let them know, as a group, that we'd like to be on board too. Contact them here:

http://www.liquidsonics.com/support.htm


----------



## NYC Composer (Sep 10, 2009)

Seems like a good thing (though I maintain that 20 instances of 'verb = overkill). I followed up with the company re: a Mac version. C'mon everybody.


----------



## janila (Sep 10, 2009)

VSL MIR should run on a graphics card. (o)


----------



## Aaron Dirk (Sep 10, 2009)

I probably wouldn't hold your breath on a GPU Mac version, as it looks like they dropped the GPU in the full release for PC.


----------



## schatzus (Sep 10, 2009)

I really don't want to hijack this thread because I find the possibilities of running convo off-processor very cool... but,



> but definitely yes if you're doing orchestral stuff
> my minimum is 8 Altiverbs
> my preferred is 12, a little risky with my current setup
> my optimal, and what I want to achieve with my new system "still in the box", is 16



Perhaps we are overthinking reverb?

Really... this is not a jab... I always keep an open mind and I want to better understand this approach. I just can't think of how I would use that many reverbs in an orchestral setting. Isn't the idea to give the illusion that all of the instruments are in the same room, playing at the same time? How is this better accomplished with that many reverbs as opposed to a few? A handful max?


----------



## Hal (Sep 10, 2009)

You have two ways:
either use a lot of digital reverbs plus delays, EQ and panning to emulate orchestral placement and depth, or use Altiverb's orchestral positioning feature.
In that case you will need one verb for first violins, one for second violins, then violas, cellos, basses, horns, trombones, flutes, oboe, piano, choir, timpani/marimba, solo instruments and on and on... keep counting.
You might also be using synths or guitars that need more, and different, verb and effect settings.

So using reverbs is not just:
play your orchestra and put one reverb on the final bus.


----------



## Frederick Russ (Sep 10, 2009)

Some mockup guys do use a fairly extensive chain. This was especially true when VSL was released. Because it was close-miked and recorded on the Silent Stage, people started looking at ways to add early, loose and long reflections into their instrument groups (strings, brass, percussion and woodwind), which all excite the room in different ways based on their positioning in relation to the stage and hall. Some people do this by adjusting delays, while others work off of a fairly extensive impulse response chain.

This has changed significantly since the early days of VSL, as developers began to rethink recorded ambience in the samples (EWQLSO, for instance, with its three mic positions to forgo the need for added reverb), although some still added it for a different sound, especially in conjunction with close-mic samples. Sonivox also recorded their symphonic samples on a sound stage that is very friendly to added ambience of all sizes, so early and loose reflections were handled - just add a concert hall impulse across your master bus (or four - one per instrument group).

People wanted to blend different libraries for a more custom sound. More and more projects were combining libraries like Project SAM, Sonivox, East West, Westgate, Vienna Instruments, Ivory Piano, etc. The beauty is finding the strengths of each library so you get the best of each one. The bad news is that you're now faced with all the different ambient choices each developer made when recording, so it's up to the mockup guy turned producer to combine these until it sounds like all the players are in the same room. That's done with IR chains on the dryer libraries - some libraries get none at all, and some can get as many as three (early & loose reflections + hall). This is fine-tuned with the goal of getting everybody in the same room playing together, instead of mismatched libraries essentially being disembodied spirits who visit sessions but clearly don't sound like they belong.
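A side note on those IR chains: convolution is associative, so running a signal through an early-reflections IR and then a hall IR gives the same result as convolving once with the two IRs pre-combined. A small sketch (the IR values here are made up purely for illustration):

```python
def convolve(x, h):
    # direct-form convolution, the operation behind every IR reverb
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

early = [1.0, 0.0, 0.3]   # hypothetical "early reflections" IR
hall = [1.0, 0.5]         # hypothetical "hall tail" IR
dry = [1.0, 0.0, 0.0, 0.0]

chained = convolve(convolve(dry, early), hall)    # two serial sends
combined = convolve(dry, convolve(early, hall))   # one pre-merged IR
# the two results are identical
```

This is why a chain of two or three IRs behaves like one longer, composite room: the serial stages fold together into a single effective impulse response.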


----------



## schatzus (Sep 10, 2009)

Thanks for the additional opinions... I completely see the point. 
I never really was in favor of the completely dry approach with VSL. I like the recording techniques which capture the room but there can be the "disembodied spirits who visit sessions" sound if the mismatch is too diverse. 
I find matching different libraries is easier when those libraries give you the close, stage, far samples and then I use EQ and reverb to push and pull the instruments into correct placement within the room.
(Panning obviously for left and right placement if the samples were not recorded in their proper or easily matched spaces.)
My thoughts have been more around reverb usage per section to bring cohesiveness and then more instances for bringing them all into one room.
Thanks again...more to consider...


----------



## chimuelo (Sep 10, 2009)

http://www.nforcershq.com/bionicfx-nvidia-gpu-audio-effect-processor/

I beta tested this way back when, and they did have ATI compatibility. I believe the card was called the Radeon 9000 series, but it was designed using NVIDIA as the reference.
It did work pretty well, and I had a PIII when this first came out, with its massive 1MB L2 cache.......... >8o 

My idea was to run all audio using the Scope DSP cards - Gigastudio 2.54 was still a single-core app that hardly used any CPU juice, mostly the memory subsystem - and the graphics card could do a few verbs. Back then this was exciting, but the verbs were static and couldn't be altered without zippering, etc. Still, you could have 2 AUXes with static verbs, and the hardware could be MIDI-controlled. I was really happy, but all of a sudden NVIDIA stepped in, and that's all she wrote.

NVIDIA bought out the company before ATI could make a move, and only recently, with the CUDA stuff, have I heard about it again.
I was getting 2 or 3 impulses back then, but this was 2002-2003 I believe.
Too bad nobody developed top-shelf algorithms; just think how far along they would be by now, and the power seems to double every couple of years.


----------



## Hannes_F (Sep 10, 2009)

> Does anyone really need 20 instances of verb? or 15? or 10?
> 
> probably not if you're mixing a pop song
> 
> ...



And all that because the samples are wrongly recorded to begin with. It's a shame actually. Nuff said ...


----------



## chimuelo (Sep 10, 2009)

:mrgreen: :lol: 
I raised holy hell with a chamber orchestra I went and saw a few months back.
They played in Hamm Hall, which is acoustically treated and really well designed.
Here in Vegas the casinos spare no expense when it comes to entertainment or free alcohol.
But I couldn't believe they weren't bathed in IRs or using the mic models and GigaPulse...... /\~O 
They sounded so realistic and dry........ Yeecchhh..
I often work with female "vocalists" who always demand to be bathed in reverb. I think it gives them the Breast Enhancement Syndrome or something.
They aren't satisfied until they get that "sewers of Paris" sound.
Many performers using VSTi pianos, and even hardware stage models, overuse the effect as well.
They always ask how the mix is and I always reply "Muddy."
I can't understand why people who want that massive amount of reverb don't at least clip the tails with sidechaining or a key-filtered gate.


----------



## Hannes_F (Sep 10, 2009)

chimuelo @ Fri Sep 11 said:


> They sounded so realistic and dry........



That is what I am going for lately. There will be some demos soon and you will understand in a second.


----------



## chimuelo (Sep 11, 2009)

I like dry and wet. Large to small, close to far.
But that's why I use hardware or a really well designed DSP algo with loads of memory attached to it.
Using MIDI CCs with 3 parameters will get you most of the sounds you will need.
Actually I have heard guys use IRs with a MIDI-controlled gate and make them sound pretty sweet.
With IRs it's tempting to use a long tail, since it better hides their static nature. But try the threshold of a gate tied to velocity or note number (lower notes get less, because bass notes add mud), or frequency-controlled key filtering, and sidechaining.
That will take a generic IR and add life to it, especially with sidechained drums. The Phil Collins tom sound comes to mind.
Adding depth to IRs is a breeze with a good M/S encoding plug too. You will find you don't need much reverb, but rather distance. That way you can use the reverb to make the distance more realistic and create the effect of being 40 feet away on a wooden stage floor, or even add dampening, like the sound you get when the curtain closes.
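On the M/S point: mid/side encoding splits a stereo pair into a sum (mid) and difference (side) channel; scaling the side before decoding narrows or widens the image, which is one cheap way to push a reverb return back in the soundstage. A minimal sketch of the math:

```python
def ms_encode(left, right):
    # mid = sum, side = difference (both halved so decoding is symmetric)
    mid = [(l + r) * 0.5 for l, r in zip(left, right)]
    side = [(l - r) * 0.5 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side, width=1.0):
    # width < 1.0 narrows the image (reads as more distant),
    # width > 1.0 widens it; 1.0 is a transparent round trip
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right
```

With `width=0.0` both channels collapse to the mid signal (mono); intermediate values let you pull a wide IR return toward the center without touching the reverb itself.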
Has anyone ever noticed that concrete with a broom finish deadens the sound?
And concrete with a smooth, glistening finish has the most beautiful early reflections.
Yep, that's really sick, but recording vocals of someone lying down in your garage is pretty cool... much better than the bathtub reflections. Basements are always pretty good too, as they have the early reflections of the floor, and the walls give you your meter sizes.... :shock: 
Yes, as a child I did this shit. I even took a Pignose and my guitar down into those giant square 20' sewers in South St. Louis.
My sickness for ambience goes really deep.


----------



## Ian Livingstone (Sep 11, 2009)

Love the concept and had a quick play just now, but the latency is a dealbreaker - 8192 samples, whatever that is - feels like half a second!
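For scale, the buffer latency in milliseconds is just buffer size over sample rate (assuming a 44.1 kHz session here; higher rates make it proportionally shorter, and any extra buffering in the host adds on top):

```python
# hypothetical numbers: 8192-sample buffer at an assumed 44.1 kHz session
buffer_samples = 8192
sample_rate = 44100
latency_ms = 1000.0 * buffer_samples / sample_rate  # roughly 186 ms one way
```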

The UAD-1 was based on old video card DSP, so this approach has been around for a while. Maybe two years ago we'd have needed this, but I can't help thinking that with current CPUs even things like the UAD-2s are just used as expensive dongles - the processing could easily be done natively....

Ian


----------



## NYC Composer (Sep 12, 2009)

I think everyone should work in whatever way they feel best serves their own creative vision. It's true, I mostly do pop music, some of it big sounding orchestral pop, but I think my max verb usage even when doing orchestral mockups has been 6 or 7 instances.

I will admit to the world that I can no longer tell one decent reverb from another in a mix. I'd bet that in a blind test, many people would have a pretty difficult time identifying, say, the right 'verb out of five possibles. I recently heard that a few well-known engineers couldn't consistently identify which was a .wav and which was a 320 .mp3. Maybe it's just an anecdote, but I think we sometimes get fussy in ways we ourselves couldn't distinguish within our own compositions. Remember the speaker cable wars?


----------



## re-peat (Sep 13, 2009)

NYC Composer @ Sat Sep 12 said:


> (...) I think my max verb usage even when doing orchestral mockups has been 6 or 7 instances. (...)



Same here. I usually get by with even less: 3 or 4 instances maybe (and always at least one delay unit as well).
Thing is, in my experience, every time it feels like a piece needs more than, say, 4 reverbs, it always turns out that there is something fundamentally wrong with the arrangement. The better the arrangement/orchestration - choice of sounds, attention to the programming and balance, etc. - the less reverb (number of instances, I mean) it needs.



----------



## MaraschinoMusic (Sep 16, 2009)

Jumped in late on this one... but 20 reverbs???

I just don't get that concept at all. I don't even understand the need for 10, or 8... :roll:

I use two, and often only one (but usually two), although one will sometimes do... (OK - we get the point!)

Pan for left/right, level for large/small, reverb send level for close/far - works for me.

What am I doing wrong?


----------



## Stevie (Sep 17, 2009)

Agreed, David.
I can only think of the saying:
Too many cooks spoil the broth.
Or to put it differently: this must get muddy as shit!


----------



## LiquidSonics (Apr 30, 2010)

Aaron Dirk @ Thu Sep 10 said:


> I probably wouldn't hold your breath on a GPU mac version, as it looks like they dropped the GPU in the full release for PC


This is only because there are a lot of things the full Reverberate does that aren't very easy to do on the GPU (like a modulating EQ, chorus, etc.), so I felt it was better to do all-or-nothing, rather than have people find the CPU being consumed by all the other processing and get a little turned off by the concept. Plus, I feel the CPU version is so efficient at low latencies (which I assume is what people tend to want) that until the GPU version is better at low latencies there is less of a need for it. This is kinda backed up by the web stats, which show a 25/75 GPU/CPU download split (well, some of this is down to people not having NVIDIA cards, I suppose). When you push the CPU version up to high latencies it's even more efficient, though at 8192 samples the GPU version does pull ahead in my performance tests. Open to any disagreement or discussion here.


----------

