My reverb adventure reaches an end....

ProfoundSilence

Senior Member
Jeez, my reverb scheme at the end was absurd, yeah.
to back that up, I posted an example on Scoreclub of why I'm working this hard on making my sampled instruments respond musically. Here is a celli legato patch, played in real time - a little rushed at one point, but with no reverb and no crazy amounts of EQ (the kind mainly used to make things sound more "Hollywood"). It goes from a somber, almost weeping cello line; I pause for a moment, arm the contrabass using the same setup but an octave below - and get a dangerous, menacing, lurking sound.


I can pretty much track anything with either a longs patch or a shorts patch and swap individual notes out for the fine-tuned articulation in no time... Adventurous line? Sure - load up some sort of portato, espressivo, or decrescendo patch, play it in... then swap out the shorter notes for spiccato, staccato, martelé, or a sfz - heck, even a single trill. Now even adding in runs is easy, with very little fuss.

In the future it'll be a little more reasonable, I'm sure, but at the moment samples can be used in ways that are plenty musical enough to save me the hassle of creating a fake space and trying to achieve any real sense of punch. It might take me some more RAM, but my CPU thanks me that it isn't running multiple chains of convos, reverbs, MB, delays, and EQs.
 

Attachments

I like music

Senior Member
Thanks @Mike T and @ProfoundSilence. I ask because of the Sample Modeling strings.

Due to my computer being very old and low on CPU power (and the fact that I am not going to spend much more money on music stuff in the coming year) I am trying to get them to sit nicely with my infinite stuff, using nothing but their own spatialisation features + some stock convo verb.

I feel every day that I've got this right, and then I A/B them against Hollywood Strings and CSS, and there's a 5% missing - but that 5% may as well be 50%, because that final sheen/air which seems to be missing suddenly feels amplified.

I still love the SM strings but this is currently the biggest bug in my otherwise happy life. So close yet so far with that feeling of the strings being in space.
 

I like music

Senior Member
I still do, although I have replaced the one convo with EAReverb2 nowadays (but have just purchased DearVR as well). But didn't you get the memo over in the Aaron Venture thread? The new new thing is the Hallelujah effect! We may need to purchase expensive hardware for our 19" racks. Sorry, it's the only way ;)
Haha, I've been Googling this effect! I really hope it doesn't lead to more money being spent.
 

ProfoundSilence

Senior Member
Thanks @Mike T and @ProfoundSilence. I ask because of the Sample Modeling strings.

Due to my computer being very old and low on CPU power (and the fact that I am not going to spend much more money on music stuff in the coming year) I am trying to get them to sit nicely with my infinite stuff, using nothing but their own spatialisation features + some stock convo verb.

I feel every day that I've got this right, and then I A/B them against Hollywood Strings and CSS, and there's a 5% missing - but that 5% may as well be 50%, because that final sheen/air which seems to be missing suddenly feels amplified.

I still love the SM strings but this is currently the biggest bug in my otherwise happy life. So close yet so far with that feeling of the strings being in space.
FWIW, before I gave up on adding any real girth to SM, I realized that you really cannot amplify something that isn't already there. I tried all sorts of subharmonic boosters and whatnot, but my trail ended at this:

creating pitched down versions of the instrument, with the drop rounded off.

trying to get it to add a hint of bottom end without messing up the whole harmonic structure (the strength of overtones is actually what creates a recognizable timbre) before going into a reverb.

Of course, in this example I had about -0.5 dB in the 3.2k area and a bit of a low shelf past 7k, with a dash of Altiverb on its default setting - but maybe this is part of the key to getting it to work. Try having a super quiet +12 or +24 double of SM strings to add that top-end "sheen": just start at 0 and blend it in until it's on the cusp of being noticeable.
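That quiet high double is easy to sketch in code. A minimal illustration (assuming numpy, a crude resample-based +12 shift, and a made-up -24 dB blend level - not ProfoundSilence's actual settings):

```python
import numpy as np

def octave_up_naive(x: np.ndarray) -> np.ndarray:
    """Crude +12 semitone shift: read the signal back at double speed.
    Halves the length and ignores formants -- fine for a subtle layer."""
    idx = np.arange(0, len(x) - 1, 2.0)
    lo = idx.astype(int)
    frac = idx - lo
    # linear interpolation between neighboring samples
    return (1 - frac) * x[lo] + frac * x[lo + 1]

def add_sheen(dry: np.ndarray, blend_db: float = -24.0) -> np.ndarray:
    """Blend a very quiet octave-up double under the dry signal."""
    double = octave_up_naive(dry)
    gain = 10 ** (blend_db / 20)      # dB -> linear
    out = dry.copy()
    out[:len(double)] += gain * double
    return out

# a 1 kHz test tone at 48 kHz
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
wet = add_sheen(tone, blend_db=-24.0)
```

Starting `blend_db` far down and raising it until the layer is barely audible mirrors the "start at 0 and blend it in" advice above.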
 

Attachments

OP
Mike T

boring member
I feel every day that I've got this right, and then I A/B them against Hollywood Strings and CSS, and there's a 5% missing - but that 5% may as well be 50%, because that final sheen/air which seems to be missing suddenly feels amplified.
It's a real wormhole to go down, definitely.

I know this has been discussed ad nauseam, but this is another reason why I am skeptical about some of these libraries. Yes, sometimes traditional samples end up sounding a little awkward as far as performance/phrasing, but the way the brain (especially the average brain that isn't so attuned to the real thing) reacts to that is different from how it reacts to something that doesn't have any/enough subconsciously perceived and understood spatial information in it. That's something which all of our brains are wired for at an absolutely fundamental level, and to which we are physiologically extremely sensitive. When it isn't there, or it's faked, we know it, and it feels not quite right.
 

ProfoundSilence

Senior Member
at one time, I literally had every individual instrument from Sample Modeling running its own instance of MIR Pro, then another instance per section, which wasn't used by the section itself.

Then I had each individual instrument send a small amount to each other instrument (i.e. trumpet 1 got a little more of trumpet 2, and trumpet 3 had less on the send and was panned further to the right - while trumpet 2 had an equal amount of trumpets 1 and 3, with the panning being different).

then each one was sent to a "delay" bus - where I measured the feet in MIR Pro with a ruler, calculated the ms delay it would take the sound to travel that far, then sent trumpets 1, 2, and 3 to this "bleed delay" bus. This would then branch off into 2 separate sub-buses, one with the exact delay before hitting the horn section verb, and the other before hitting the trombone section verb (which was much closer).

The result? A very creative way to create very realistic phasing, and a not-so-cost-efficient way to heat my room.
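The feet-to-milliseconds arithmetic behind that "bleed delay" bus is simple. A sketch (assuming air at roughly room temperature; the 22 ft distance is a hypothetical example, not a figure from the post):

```python
# Speed of sound in air at ~20 degrees C, in feet per second.
SPEED_OF_SOUND_FT_S = 1125.0

def delay_ms(distance_ft: float) -> float:
    """Milliseconds sound needs to travel distance_ft feet."""
    return distance_ft / SPEED_OF_SOUND_FT_S * 1000.0

# e.g. bleed reaching a mic 22 ft away:
print(round(delay_ms(22.0), 2))
```

A handy rule of thumb falls out of this: sound covers roughly 1.1 ft per millisecond, so the ruler reading in feet is close to the delay time in ms.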
 

I like music

Senior Member
at one time, I literally had every individual instrument from Sample Modeling running its own instance of MIR Pro, then another instance per section, which wasn't used by the section itself.

Then I had each individual instrument send a small amount to each other instrument (i.e. trumpet 1 got a little more of trumpet 2, and trumpet 3 had less on the send and was panned further to the right - while trumpet 2 had an equal amount of trumpets 1 and 3, with the panning being different).

then each one was sent to a "delay" bus - where I measured the feet in MIR Pro with a ruler, calculated the ms delay it would take the sound to travel that far, then sent trumpets 1, 2, and 3 to this "bleed delay" bus. This would then branch off into 2 separate sub-buses, one with the exact delay before hitting the horn section verb, and the other before hitting the trombone section verb (which was much closer).

The result? A very creative way to create very realistic phasing, and a not-so-cost-efficient way to heat my room.
'Fuck that!' My brain, family, my wallet and my CPU say in unison.

I have a lot of thinking to do on this. Luckily I have CSB, CSS, the Infinites and Hollywood Orchestra. Perhaps I'll just make 2 separate templates (one more modelling-focused and one more traditional) and just do it piece by piece.

Your way sounds creative but right now I couldn't hope to match that.
 

ProfoundSilence

Senior Member
'Fuck that!' My brain, family, my wallet and my CPU say in unison.

I have a lot of thinking to do on this. Luckily I have CSB, CSS, the Infinites and Hollywood Orchestra. Perhaps I'll just make 2 separate templates (one more modelling-focused and one more traditional) and just do it piece by piece.

Your way sounds creative but right now I couldn't hope to match that.
Well, Aaron's libraries are a step in the more palatable direction for sure.

I'd certainly say that reverbs need to be tiered to mimic microphones.

i.e. one mid-sized studio send, one larger room, and lastly a hall. A single large hall on an anechoic recording is too easy to spot.
 

I like music

Senior Member
Oh man, yeah, I used to do "bleed" stuff too... and the sound reflecting off the wall opposite the player (maybe just brass).
I am assuming based on your other posts that you don't do all this now. Is it because the tradeoff in effort and resource was not worth it, or because it didn't actually have any impact at all?
 
OP
Mike T

boring member
I am assuming based on your other posts that you don't do all this now. Is it because the tradeoff in effort and resource was not worth it, or because it didn't actually have any impact at all?
I kind of got sick of it. It appealed to the physicist in me but the musical result wasn't worth it. I decided I'd rather have to work at refining a fake performance instead of asymptotically approach a kind of slightly but not really ok impression of an orchestra in a room. Sometimes I entertain the thought of going back, but I know it's a greener grass thing.

I'd certainly say that reverbs need to be tiered to mimic microphones.
Yeah. Just dumping it onto something doesn't work. Faking mic positions is the way to approach it. First something simple to add a little body, then send that to something more roomy - but first maybe narrow the signal a bit and start taking away some frequencies - then do another send with more "distant" settings, etc., then a tail over everything... oy vey.
 

tomosane

Member
I love ValhallaRoom and especially VVV, but I gotta say having recently bought PhoenixVerb and R4 for I think less than 50€ total, Valhalla's status as the canonical bang-for-buck reverbs might be over for now

Though I also have to praise Valhalla for having the most no-nonsense kind of activation scheme, where you can do it completely offline and without any kind of machine-specific code. Keeping those two reverbs backed up on several different physical locations is thus quite reassuring for a tech doomer like myself -- no telling when you can no longer activate those other plugins on a new laptop! I think I could get by fine with just VR and VVV, even if all else goes to shit.
 

ProfoundSilence

Senior Member
That's the best way to look at it: sick of it.

I didn't unlearn anything, but I did learn to appreciate good sampling.

the issue is that when you're competing with high sonic quality, it's an uncanny valley problem: you could spend thousands more, experiment further, and get closer - but never all the way there.

Some people are happy with good enough and move on. Even master SM wizard Samy Cheboub made a tutorial on SM because it kept changing. Before he disappeared back into the aether, he was singing high praise of Century Brass, so I suspect he surrendered back to samples himself.

It was a fun adventure, no regrets. But now I'm the polar opposite: I have no reverb on my template, I've only EQ'd my strings here and there, and I feel like that's cost me in RAM but saved my soul.
 

Akarin

nicolas-schuele.com
I read all the reverb threads. I buy many reverb plugins... ...I always go back to Spaces 2 for convo, VSS3 for tails and Valhalla for sound design 🤷‍♂️
 
OP
Mike T

boring member
VSS3 was one of the most surprising letdowns for me! Again, as far as what my goal was... it seems squarely to be an "effect" reverb added for gloss, and not nearly as useful for any kind of real space.
 

I like music

Senior Member
I kind of got sick of it. It appealed to the physicist in me but the musical result wasn't worth it. I decided I'd rather have to work at refining a fake performance instead of asymptotically approach a kind of slightly but not really ok impression of an orchestra in a room. Sometimes I entertain the thought of going back, but I know it's a greener grass thing.



Yeah. Just dumping it onto something doesn't work. Fake mic positions is the way to approach it. First something simple to add a little body, then send that to something more roomy but first maybe narrow the signal a bit and start taking away some frequencies, then do another send with more "distant" settings, etc., then a tail over everything... oy vey.
OK this sounds super interesting but I'm not sure exactly what this approach would look like. Specifically, are you saying to effectively mimic three mics by staging 3 verbs, with each verb creating a different depth? Can you just throw three of them on the instrument? And is the objective to get just the ER portion from the different layers and then finally a tail verb? Sorry for the questions but this is the final piece of the puzzle for me and my many years of stumbling around not actually making music. Any help appreciated!

Please do let me know if off topic.
 

ProfoundSilence

Senior Member
OK this sounds super interesting but I'm not sure exactly what this approach would look like. Specifically, are you saying to effectively mimic three mics by staging 3 verbs, with each verb creating a different depth? Can you just throw three of them on the instrument? And is the objective to get just the ER portion from the different layers and then finally a tail verb? Sorry for the questions but this is the final piece of the puzzle for me and my many years of stumbling around not actually making music. Any help appreciated!

Please do let me know if off topic.
You send them separately.

i.e.

first send: basic room sound - something to create ambience and a short "tail"
then you send to a medium-sized room - this adds some more body and resonance.

then you send a 3rd "tail" which is a longer verb.

these can certainly have different length pre-delays if you'd like - and different pre-EQ.

the idea is that this overlap gives body and fades out, while also creating a logical stepping stone from ultra-close to far away (or in a big room).
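A rough sketch of that parallel routing, assuming numpy - here each "reverb" is stood in for by a simple delayed, attenuated copy of the dry signal, and the pre-delay/level numbers are invented for illustration, not anyone's actual settings:

```python
import numpy as np

SR = 48000  # sample rate

def send(dry: np.ndarray, predelay_ms: float, gain_db: float) -> np.ndarray:
    """Stand-in for one reverb send: delay the signal by the tier's
    pre-delay and scale it (a real send would feed a reverb plugin)."""
    offset = int(SR * predelay_ms / 1000)
    out = np.zeros(len(dry) + offset)
    out[offset:] = dry * 10 ** (gain_db / 20)
    return out

def tiered_mix(dry: np.ndarray) -> np.ndarray:
    """Sum the dry signal with three parallel sends: small room,
    medium room, long tail -- each farther away and quieter."""
    tiers = [(5.0, -12.0), (15.0, -18.0), (30.0, -24.0)]  # (ms, dB)
    sends = [send(dry, pd, g) for pd, g in tiers]
    mix = np.zeros(max(len(s) for s in sends))
    mix[:len(dry)] += dry        # dry signal stays in the mix
    for s in sends:
        mix[:len(s)] += s        # parallel sends sum together
    return mix

dry = np.random.default_rng(0).standard_normal(SR // 10)
wet = tiered_mix(dry)
```

The design point is that the sends run in parallel off the same source rather than chained in series, so each tier's pre-delay and level can be set independently - which is what makes per-tier pre-EQ easy to bolt on.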