# Virtual Soundstage



## shapeshifter00 (Aug 9, 2014)

Hi peeps, 

Anyone here got VSS from Parallax Audio? I was thinking of buying it and was wondering if the backup CD option comes in a box, or is it just the CD? I like boxed things


----------



## re-peat (Aug 9, 2014)

shapeshifter00 @ Sat Aug 09 said:


> I like boxed things


Then you will definitely love the sound of VSS.

_


----------



## José Herring (Aug 9, 2014)

As long as the box is treated and tuned to reduce standing waves then it can be of great benefit. If not, I don't care what plugin is in it, it just won't sound right.


----------



## shapeshifter00 (Aug 9, 2014)

I know it doesn't matter, but 20 dollars extra for a backup is pointless for me, so I was just asking.


----------



## Michael K. Bain (Aug 10, 2014)

I don't know about a box, but I do know I love VSS.


----------



## rayinstirling (Aug 10, 2014)

Well, I'm not so happy with VSS, so now I'm thinking I should have bought the box instead of the download.


----------



## vrocko (Aug 13, 2014)

I had been using VSS for a while but recently stopped because a mastering engineer complained about my mixes having slight phasing issues. He was able to work with it and ended up making the master sound really good regardless.

I had emailed Gabriel about it and he responded today. He acknowledges that it can definitely have phasing issues, caused mostly by the mid microphone, which he informed me uses a combination of the left and right input channels as its input. He also said that using a Decca Tree setup in VSS probably wasn't the best idea. He has been working really hard on VSS2, which he said is really close to being done; he confidently assured me that the phasing issues have been completely fixed, along with other upgrades he didn't get into, and suggested that in his testing VSS2 trumps VSS1 in all aspects.
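For the curious: a mid channel derived that way is just the average of the left and right channels, so any content that is out of phase between them cancels in the mid. A quick pure-Python sketch of the idea (illustrative only, not Parallax's actual processing):

```python
import math

# 440 Hz sine at 44.1 kHz; the right channel is fully phase-inverted
# relative to the left (the worst case for L/R correlation).
n = 4410
left = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(n)]
right = [-s for s in left]

# A derived mid channel: mid = (L + R) / 2.
mid = [(l + r) / 2 for l, r in zip(left, right)]

peak_left = max(abs(s) for s in left)
peak_mid = max(abs(s) for s in mid)
print(round(peak_left, 3))  # close to 1.0
print(peak_mid)             # 0.0 -- the anti-phase content cancels completely
```

The same cancellation is what makes such a mix collapse in level when a broadcaster sums it to mono.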


----------



## constaneum (Aug 13, 2014)

VSS2 !! Will it be a free upgrade for VSS1 owners?? I wonder. =D


----------



## shapeshifter00 (Aug 13, 2014)

I just bought it, so I hope it's a free update if it comes out soon.


----------



## Sid Francis (Aug 13, 2014)

Funny to read about this because I stopped using it when I became aware of the phasing. Hoping for a remedy.


----------



## shapeshifter00 (Aug 13, 2014)

I notice a bit of phasing as well, but I only use it on a couple of instruments from really dry libraries like GPO4. I only use the distance control, then I pan manually rather than with VSS; that felt a bit better.


----------



## Kejero (Aug 14, 2014)

I tried it in a few projects but always ended up disabling it. The phasing was a huge problem on many instruments, although it's far less problematic with very dry ones. I've got to say I've kind of given up on trying to "place instruments in a 3D room" anyway. I find the results much more pleasing when I use good old-fashioned panning. It may not sound like an actual orchestral recording, but then, it never will.
Not to say I don't encourage the effort.


----------



## Blackster (Aug 14, 2014)

@Kejero: I used to use it on everything, but step by step I disabled it again too. So I see where you are coming from.

I'd highly recommend having a look at the Ocean Way plugin from UA. At the moment this thing takes care of my placement very well!


----------



## Sid Francis (Aug 14, 2014)

Blackster: since I have been looking for 3D placement for a long time now and just can't afford SPAT, would you be willing to make us a short demo file with a dry (perhaps woodwind) instrument, and then the same instrument panned 20 feet to the back and to one side (if that is possible with Ocean Way)?

I've heard about that plug but don't have much info...


----------



## Blackster (Aug 14, 2014)

@Sid: I don't have time to make a technical demo right now, but on the demo songs for my upcoming free E-Ukulele, Ocean Way takes care of all the placement. It is not exactly an orchestral setting, but still, I've used violas, cellos and basses in "Walking on Tiptoes".

https://soundcloud.com/audiowiesel/sets/e_ukulele

I know these aren't the distances you are looking for, but maybe it gives you an idea!?


----------



## Michael K. Bain (Aug 14, 2014)

Can anyone tell me how to listen for phasing? I don't quite understand it. I use VSS, but with dry instruments like VSL, WIVI and Kirk Hunter. I also use some Miroslav with reverb turned off, but it's still a little wet. I put one overall reverb on the main bus. Will phasing be noticeable on such a setup?

If you hear phasing on this song of mine, would you please point out the time you hear it most strongly to serve as an example:

https://soundcloud.com/michael-k-bain/sunflower-waltz

Thanks,
Mike


----------



## Sid Francis (Aug 14, 2014)

Wonderful, Frank, thank you. Sounds really good. I will investigate the plug a bit.

edit: Oooh: UA, that's why I had forgotten about it :-(


----------



## Dom (Aug 15, 2014)

Percy Faith Fan @ Thu Aug 14 said:


> Can anyone tell me how to listen for phasing? I don't quite understand it.


I wouldn't call the problem with VSS 'phasing'; the problem is that there can be way too much out-of-phase content between left and right.
You can hear this on speakers with good stereo imaging: it's as if the sound is inside your head, sort of upside down. Once you've heard it, it's easy to recognise. The main problem is that when summed to mono it cancels so much of the signal that the level dips a lot. Also, if you use it for TV it will be beyond 'legal'.
You can also use a phase correlation meter. Most DAWs have one built in.
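A correlation meter of that kind boils down to the normalized cross-correlation of the two channels: +1 means fully in phase (mono-safe), 0 means decorrelated, and -1 means fully out of phase. A rough pure-Python sketch (a generic illustration, not any particular DAW's implementation):

```python
import math

def phase_correlation(left, right):
    """Normalized L/R correlation: +1 fully in phase, 0 decorrelated, -1 anti-phase."""
    denom = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    if denom == 0.0:
        return 0.0  # silence on either channel: report neutral
    return sum(l * r for l, r in zip(left, right)) / denom

# A 440 Hz test tone, 0.1 s at 44.1 kHz.
sig = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(4410)]

print(phase_correlation(sig, sig))                # ~1.0  (identical channels)
print(phase_correlation(sig, [-s for s in sig]))  # ~-1.0 (cancels when summed to mono)
```

Readings that hover near or below zero on real program material are the warning sign described above.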

I have also stopped using VSS because of this problem, but I await the new version eagerly.


----------



## brett (Aug 15, 2014)

I agree with others - unless you are careful there can be significant phase issues, but on balance VSS is a valuable tool that has its place.

You guys should check out the screenshot of VSS2 on the Parallax site. Looking forward to this.

B

Edit: Dom is right. I meant to say phase issues rather than phasing per se.


----------



## Carbs (Aug 15, 2014)

vrocko @ Wed Aug 13 said:


> I had been using VSS for a while but recently stopped because a mastering engineer complained about my mixes having slight phasing issues. He was able to work with it and ended up making the master sound really good regardless.
> 
> I had emailed Gabriel about it and he responded today. He acknowledges that it can definitely have phasing issues, caused mostly by the mid microphone, which he informed me uses a combination of the left and right input channels as its input. He also said that using a Decca Tree setup in VSS probably wasn't the best idea. He has been working really hard on VSS2, which he said is really close to being done; he confidently assured me that the phasing issues have been completely fixed, along with other upgrades he didn't get into, and suggested that in his testing VSS2 trumps VSS1 in all aspects.



Well now, that would be neat if he has improved the sound.

I never got past demo mode of VSS.


----------



## Michael K. Bain (Aug 15, 2014)

Dom @ Fri Aug 15 said:


> Percy Faith Fan @ Thu Aug 14 said:
> 
> 
> > Can anyone tell me how to listen for phasing? I don't quite understand it.
> ...



Thanks very much for the info. I'm going to sum one of my tracks to mono and see if it dips way down in level.


----------



## Michael K. Bain (Aug 15, 2014)

brett @ Fri Aug 15 said:


> You guys should checkout the screenshot of VSS2 on the parallax site. Looking forward to this


That looks awesome, thanks for directing me to it.


----------



## Per Lichtman (Aug 16, 2014)

I haven't been cleared to talk about VSS2... but since I wasn't asked not to hint, I hope Gabriel won't mind if I say that the blurred out parts of the image are "very interesting". The discussion will look very different after it comes out.


----------



## playz123 (Aug 16, 2014)

While some have decided to 'throw the baby out with the bath water', I think quite highly of this plug-in and have always found it to be a welcome addition to my collection of tools. There's no question that when used in certain ways it's not perfect yet, but on the other hand there are many times when it can be used without encountering the problem described. One only needs to read the high praise for VSS in other threads to realize that some people obviously don't feel it's as bad as suggested in this one. In any case, there are hints that version 2 will alleviate some concerns, so like everyone else, I look forward to seeing what Gabriel has done. Still think the current version is worth every penny though. Just my two cents!


----------



## benmrx (Aug 16, 2014)

Man, that GUI for v2 looks NICE!!


----------



## re-peat (Aug 16, 2014)

playz123 @ Sat Aug 16 said:


> (...) One only needs to read the high praise for VSS that has been offered in other threads to realize that obviously some people don't feel it's as bad as suggested in this one. (...)


That’s the real tragedy, Frank. (And one of the main reasons for my VSS-aversion). People buy this, totally convinced that all their spatialization worries will be over, use it willy-nilly and all over the place and then refuse (or are too stupid) to open their ears to the simple fact that things simply sound awful. Of course they’ll praise it. Anyone who’s stupid enough to buy it and use it without really listening, is also going to be stupid enough to praise it, it seems to me. 

(I always think of VSS as a sort of Trojan Horse. Not the bug, but the danger-instead-of-gift-bearing contraption. You open the doors to your mix and welcome this thing inside and, hop, out jump all these problems, instantly infesting your production with sonic disaster from top to bottom and from left to right.)

Come to think of it, I’m actually more annoyed with the average VSS-user ― the one who’s too lazy to really listen, too lazy to really learn to understand spatialization, and who believes that ‘realism’ in his mock-ups can only be guaranteed if its space is supposedly ‘real’ as well ― than I am with VSS itself.

It’s like all those MIR-users who, once they’ve purchased it, suddenly stop hearing what reverb is supposed to do in a mix, and now drown their instruments in swamps of Tonmeister-approved mud ― thinking that if it comes out of MIR it has to sound good (which is total nonsense of course) ― keen as they are to make absolutely sure that we hear that they own this expensive piece of software.

It’s a painful delusion. Doubly so, because, when used well, VSS and certainly MIR have indeed something genuinely useful and valuable to offer. 
(In the case of VSS: with dry, or at least dry-ish, sources that have a strong ‘mid’ (as opposed to ‘side’) gravitation.)

But a plugin that encourages people to re-position and re-spatialize patches from libraries like ProjectSAM’s, Spitfire’s or Cinesamples’ (to name three distinctly spacious ones), insisting that this can be done without any damage to the sound, should be quarantined in a quintuple-locked box labelled “keep away - highly destructive content”.
(More proof for my case that a large chunk of the crowd of VSS-users is mentally challenged: first they buy Spitfire or Cinesamples, partly because of the “awesome sound” of AirLyndhurst and the Sony Soundstage we may assume, and then they go ruin that sound completely by running these libraries through VSS.)

I hope that Per is right and that VSS2 will be everything its developer (and its users) hope it is, but unless it approaches spatialization more intelligently and musically than its predecessor does and unless a firm recommendation is included not to mess with the spatialization of libraries like the ones mentioned in the previous paragraph, I’ll stay where I stand today: strongly cautioning against it. And not thinking too highly of its developer nor of the cranial stuffing of its average user.

_


----------



## germancomponist (Aug 16, 2014)

re-peat @ Sun Aug 10 said:


> shapeshifter00 @ Sat Aug 09 said:
> 
> 
> > I like boxed things
> ...



:-D

Love this comment! o/~



> But a plugin that encourages people to re-position and re-spatialize patches from libraries like ProjectSAM’s, Spitfire’s or Cinesamples’ (to name three distinctly spacious ones), insisting that this can be done without any damage to the sound, should be quarantined in a quintuple-locked box labelled “keep away - highly destructive content”.
> (More proof for my case that a large chunk of the crowd of VSS-users is mentally challenged: first they buy Spitfire or Cinesamples, partly because of the “awesome sound” of AirLyndhurst and the Sony Soundstage we may assume, and then they go ruin that sound completely by running these libraries through VSS.)



Nothing to add..... .


----------



## shapeshifter00 (Aug 16, 2014)

I only use it on the Garritan Personal Orchestra woodwinds so they won't be so upfront. Spitfire stuff with the Decca mic is recorded in a space, so I would never use VSS on that, and I doubt many do. I like VSS for a couple of things, but I avoid it if I can. I am no professional and just make music as a hobby.


----------



## benmrx (Aug 16, 2014)

Some pretty harsh words towards end users there who might like what this plugin (in its current form) adds. It's pretty subjective stuff, and calling people mentally challenged if they happen to like something is kinda.... just plain mean-spirited. 

The same could be said for people who enjoy certain tape or console emulations. Some like it, some don't, but none of them are mentally challenged for having an opinion on the tone it imparts..., because there is no right answer to this stuff. 

I would imagine a large percentage of people using 'wet' libraries with VSS are doing so because they're ALSO using fairly dry libraries, and using VSS to help put everyone in the same room. I doubt many people are using pure Spitfire or Cinesamples templates (i.e. no other developer mixed in) with VSS. Could be wrong, but in any case, it doesn't make them mentally challenged. They just have a different opinion and/or aesthetic.


----------



## germancomponist (Aug 16, 2014)

benmrx @ Sat Aug 16 said:


> ...They just have a different opinion and/or aesthetic.



Different opinions are fine, no doubt about that. But "verschlimmbessern" never was and is a good solution... .


----------



## benmrx (Aug 16, 2014)

germancomponist @ Sat Aug 16 said:


> benmrx @ Sat Aug 16 said:
> 
> 
> > ...They just have a different opinion and/or aesthetic.
> ...



I'm a dumb yank, so I have no idea what "verschlimmbessern" means, but if it "never was AND is a good solution" ..., then I'm lost in translation.


----------



## dryano (Aug 16, 2014)

benmrx @ Sat Aug 16 said:


> I would imagine a large percentage of people using 'wet' libraries with VSS are doing so because they're ALSO using fairly dry libraries, and using VSS to help put everyone in the same room. I doubt many people are using pure Spitfire or Cinesamples templates (i.e. no other developer mixed in) with VSS. Could be wrong, but in any case, it doesn't make them mentally challenged. They just have a different opinion and/or aesthetic.




Yes, they are mentally challenged. VSS doesn't create a room that Spitfire, Cinesamples or other stage-recorded instruments could be brought into. It does panning, phase delays, some ER-like delay stuff and a filter for distance. Those elements could be considered the ingredients of a room simulation, but a very simple one. It can be helpful with dry instruments, but it simply has to ruin every sound that was already recorded in a room, even if you only use the close mics. Maybe an inexperienced user will not hear it immediately. What he/she will hear is a stereo-widening effect caused by the phasing, and that effect might sound desirable at first, again to inexperienced ears. For people who have trained their ears for analytical listening, the alarm bells start to ring immediately.

So what's the reason people like this plugin? I would say it's 20% "oh, in that demo it sounds wider and deeper than the dry version" - due to the phasing - and 80% the praise it gets from forum discussions and marketing claims. Now, the difference with MIR, which for me is the true evil of audio processing, is that VSS doesn't have the Tonmeister approval (what a wonderful word, re-peat) and the big scientific-theoretical backing that VSL provides, and of course the interface of MIR is even nicer and it offers different rooms... You can, for example, place your main mic at the "famous 7th row of the Vienna Hall", if I remember the video commentary right... I never laughed that much while watching a product video.


----------



## benmrx (Aug 16, 2014)

dryano @ Sat Aug 16 said:


> benmrx @ Sat Aug 16 said:
> 
> 
> > I would imagine a large percentage of people using 'wet' libraries with VSS are doing so because they're ALSO using fairly dry libraries, and using VSS to help put everyone in the same room. I doubt many people are using pure Spitfire or Cinesamples templates (i.e. no other developer mixed in) with VSS. Could be wrong, but in any case, it doesn't make them mentally challenged. They just have a different opinion and/or aesthetic.
> ...



Trade room for space. Seriously, why does anyone care enough to call someone names if they happen to like a plugin that you don't? Whether it's helping someone place different instruments together in a 'room', 'space', 'hall', 'environment', etc., it's semantics.

If it doesn't help YOU, then no problem, move on. But don't bag on someone because it helps THEM.


----------



## marclawsonmusic (Aug 16, 2014)

benmrx @ Sat Aug 16 said:


> If it doesn't help YOU, then no problem, move on. But don't bag on someone because it helps THEM.


Hi benmrx,

FWIW, I would not take these comments so personally... I do not believe they are intended that way.

I have found that people on this board are very passionate about _craft_. So, when a product like VSS shows up and claims to be a shortcut to good craft... AND doesn't quite deliver... the more experienced members will chime in and call it for what it is - sometimes in a very passionate way.

Piet makes some very good points about samples with embedded ambience - there are much easier ways to get those libraries to blend with drier libraries than trying to strip them of their built-in ambience (which can be very destructive audio-wise).

Truthfully, the whole idea of trying to remove _natural_ ambience only to add _artificial_ ambience and then send all that through a final _artificial_ ambience (tail) really IS a bit preposterous. So, I can understand why Piet would say it is crazy.

Again, FWIW, I have learned a lot from Piet's contributions, and sometimes the most passionate and strongly-worded posts have taught me the most.

All the best,
Marc


----------



## Michael K. Bain (Aug 16, 2014)

marclawsonmusic @ Sat Aug 16 said:


> benmrx @ Sat Aug 16 said:
> 
> 
> > If it doesn't help YOU, then no problem, move on. But don't bag on someone because it helps THEM.
> ...


Then those people should comment on the product; they shouldn't call the users of that product "stupid" or "mentally challenged". Benmrx is correct; that's mean-spirited... and judgmental... and rather snobbish and elitist.


----------



## Per Lichtman (Aug 16, 2014)

Just a few more thoughts on my end.

First, I don't have a vested interest in any of this: I've never been paid in relation to VSS, SPAT or MIR, so I'm speaking just as a user and reviewer (with occasional advance access) here.

Last year when I reviewed VSS1, I gave it a positive review for many reasons, and at the same time suggested that it was best used with the ER turned all the way down (though I would have preferred a mute button). To those frustrated by phase differences, my first question would be whether those impressions were primarily based on the default settings, or whether the opinion remained the same with the ER disabled and/or the center channel brought all the way down?

As to comments about moving sounds around and mixing libraries and things of that ilk, I would offer the following. If you are mixing libraries, there will almost always be one whose spacing/staging/acoustics you like best. That becomes the reference point for how you handle the others, because if it does not, the different spaces and placements in each library can become more and more distracting. If you just plop a bunch of string libraries recorded in situ in different locations on top of each other without making any adjustments, they tend to fight each other. There are lots of ways to address that (and I've used a wide gamut since I got serious about digital audio in 1998), and pretty much every one of them can be abused. Who here hasn't heard a mix made using more "traditional" methods over the years that had serious panning and/or balance issues, like orchestral instruments at extreme panning positions that bear no resemblance to the seating they are ostensibly emulating? There has never been a fool-proof way for everyone to get good placement results, so it seems unfair (to me) to attack the tools that try something different, or the people who use them. Some people will get good at traditional panning and some won't, but the idea that people who would have been capable in that area stifled their development because they started using a tool with an alternate approach is (as far as I can tell) unsupported. To support it, we wouldn't simply have to hear poor mixes made with a given tool - we would have to hear better mixes made by the same users before they started using the tool, and even then we'd need a representative sample.

Here are a few of the reasons why I (personally) found VSS to be a product with several merits. Back in late 2002 and early 2003 I was not yet experienced as an audio engineer (I'd only gotten serious about digital audio in 1998) and spent way too much time constructing orchestral templates vs. composing. During that time, one of the things I kept wishing for was a product that would make it possible to visually place the players in my orchestra with the reference point of an orchestral seating chart. It would have saved me so much time and effort compared to what I was doing at the time: looking at orchestral seating chart after seating chart, downloading every description I could find of users' preferred panning positions, using tons of trial and error, and doing my best from the perspective of a guy who, in terms of his writing at the time, had really only worked with small ensembles, never a full orchestra. It was hard, it was time-consuming and it was very frustrating in relation to what came out of it (and if I'm ever particularly brave, I may share some of it). As you guys remember, there weren't really "in-situ" libraries as we know them now, and there generally weren't even suggested preset panning or verb positions in a given library. Some of the only multi-mic libraries out there were the Project SAM series.

I wasn't dealing with mixing in my orchestral work at the time because I enjoyed it, or because I had an aptitude for it or wanted more control - it was simply a necessity for me to express myself creatively. That's why I was intrigued by the prospect of GigaPulse and by the "out of the box" in-situ ambience approach in EWQLSO when it came out: it looked like I was going to have an alternative to thinking like an audio engineer.

So what's my point? Well, over a decade later, when I had done a lot of audio tech consulting and audio engineering for various artists and companies, I took a look at VSS not only from the perspective of how I worked in the present, but also in light of the challenges I had faced in my earlier period. I found that tools like VSS allow the user to think like a composer instead of an engineer, without having to worry as much about tweaking and analyzing in order to get started (especially with libraries that weren't recorded in situ). From my perspective, that alone is a big deal. Would the results of hiring an experienced mixing engineer be better than a novice using trial and error with a tool they don't fully understand yet? Well, yeah, but that's going to be true pretty much regardless of the tool, and if you've never done any audio engineering in your life, a visual placement tool is a heck of a lot easier to learn than establishing and memorizing pan relationships for the 5 string sections, 8 woodwind players and variable brass and percussion players in a classical-size orchestra - and that's keeping it pretty simple.

Now getting to VSS1 specifically, here were some of the things I really liked (despite not being partial to the ERs).

1) CPU: It was one of the lightest CPU usage plug-ins on my system.

2) Seating Chart Graphic and Library Specific Presets: The user didn't have to start from square one placing things because there were some very helpful reference points.

3) Price: The product came in just under $100 USD and didn't entail a dedicated system, or an additional DSP card or use massive CPU or do anything else that might lead me to have to spend more money on it.

Now there were (and are) of course competing tools, but the idea that VSS is a crutch that keeps would-be audio engineers from developing their craft is just not consistent with my experience before VSS1 came out or after. It presents another approach to working with instrument placement that is (to my eyes) completely valid, whatever its strengths and weaknesses may be.

To insist that everyone be a good audio engineer in order to be a good composer is a little like insisting that it's wrong for people to use sample libraries to develop their musical ideas interactively, as opposed to hearing it all internally and then notating the ideas for live performance. From my perspective, one approach is no more valid than the other: in the end it's the music that gets created that matters.

That ended up being longer than I'd intended, and I hope it doesn't get taken out of context. I'm definitely not saying "don't listen carefully to what you're doing to your audio" or "don't pay attention to the effect of a given plug-in on your work" or even "the details of the end product don't matter". I am saying that creating a work environment (including tools, support staff or both) that is as transparent as possible to the work you want to do, one that allows you to stay focused and play to your strengths, is a very good thing. And I would be very surprised if there weren't a lot of users for whom a visual placement plug-in could help create that environment.

Okay, end rant - I'm guessing other people typed a lot of things while I wrote that.


----------



## Per Lichtman (Aug 16, 2014)

Okay, I read the comments that were written between the time I started and finished my previous post. Just to be completely clear, I was not trying to attack any previous poster, just trying to address the subtext I perceived in some of them. That subtext being "what's the point of using plug-ins like this" or "do plug-ins like this serve a positive function when compared to other methods?" I felt that a lot of context was helpful when addressing those questions, so I apologize if I got a little verbose.


----------



## jensos (Aug 17, 2014)

Hi,
so I'm outing myself as being one of the lazy, delusional and mentally challenged people who bought VSS to obtain "easy" spatialisation. And I will admit that I lack the knowledge, experience and probably the analytic skills needed to set up an "ideal" room simulation effect chain.
Suffice it to say that my focus is on the writing and orchestration part, but nonetheless I would like to have a template whose mix and spatialisation does not cause nausea in listeners.

Therefore, I am really in need of some (constructive!) input. Could anyone please suggest a setup that would be at least acceptable to the experts in this forum? The total cost should not exceed that of MIR Pro.

I have libraries of a broad wetness range, the driest being the woodwind and brass libraries by VSL.

Btw, I had also envisaged buying MIR Pro, but after reading Piet's comments I'm almost glad that I didn't. 

I cannot say that I particularly enjoyed the wording Piet chose, but on the other hand: if he's right, there might be something to be learned here - which I always welcome. Sorry if I repeat myself, but it would be great if there were a passable solution for users like me who just don't have enough time to really learn all the details of spatialisation. The promise of some products to provide a perfect solution right out of the box is of course not realistic, but maybe there is an approximation to that?

Any suggestions would be highly welcome. Thanks!!
Best,
Jens


----------



## re-peat (Aug 17, 2014)

benmrx @ Sat Aug 16 said:


> (...) it's semantics. (...)


If only it were, Ben, if only it were. Alas, it isn’t. It’s real-life, day-to-day frustration, confusion and demotivation for large numbers of people desperately searching for ways to bring their samples and modeled sounds together in a coherent and convincing space, but … never succeeding. As is amply illustrated by hundreds of threads dealing with questions about reverb and spatialization, and hundreds more by people either too deaf or too shameless, or both, to refrain from sharing their dreadful-sounding mock-ups with the rest of us.

The reason I always get a wee bit irritated whenever the acronyms MIR or VSS appear on the horizon is all the confusion and delusion these tools seem to cause. The developers of these tools promise their users spatial nirvana, and make it appear as if you can get spatial realism, positioning and depth from these tools like you can get water from a tap. And that is just plain wrong and terribly misleading, I find. And for several reasons.

First of all, the concept of spatial realism is meaningless and absurd in a mock-up. It shouldn’t even be considered, let alone sold. Because it can never exist. Unfortunately, people seem all too willing to buy into this fable and then somehow try to make it happen in their mixes, no matter what, all the while believing that “it’s gotta enhance the realism of things, surely, because it uses real recordings of real spaces”, overlooking the most basic requirements for a decent, solid sound and neglecting all the fundamentals that need to be seen to first in order to make a mock-up a listenable thing. 
That’s what’s so painful: the assumed easy-to-implement realism of convolution-based spatialization seems to divert people’s attention from where it should be: on the careful selection of the right samples, and on the writing and programming. Because those are the things which, before and beyond anything else, make all the difference between a good and a bad mock-up. Not the fact that it was processed with IRs from some illustrious place rather than with some humble anonymous reverb. 

Which brings me to a second point: there’s nothing more ridiculous, I find, than the pathetic make-believe of samples processed with a Konzerthaus or Schubertsaal IR or whatever. (And again, yes: anyone who fails to understand the complete and preposterous absurdity of this is, in my view, if not entirely mentally, then certainly completely musically challenged.)
I’m not a snob, PercyFaithFan. (Well, not in this discussion anyway.) I’m simply exposing what I consider to be some deep-rooted delusions. If there’s any snobbery in here, and snobbery of the most idiotic kind, it’s nested in the ludicrous idea that a mock-up might sound any less like a mock-up when processed with IR’s from the Mozartsaal. Or Teldex. Or ToddAO. Or Ocean Way. Or Altiverb’s Disney Hall or Concertgebouw. Or whatever. 
Write some famous names on the box of your IR’s, tell people their samples are going to sound as if recorded in those legendary places (like so many famous and great-sounding music was) and that their mixes will therefore attain some of the patina of those places and thus come out sounding infinitely better than before, and people simply seem to switch off their capacity for intelligent thought and musical acumen completely. It’s amazing. Brain-dead deaf rabbits in front of the lantern of convoluted realism. A most fascinating, but very sad thing.

Thirdly, this IR-based “myth of realism” is a totally unnecessary complication of what is already a fairly delicate exercise: trying to create a solid, decent-sounding mock-up. And the funny thing is: of all the challenges involved in producing a decent-sounding mock-up, bringing space, depth and placement into your mix is actually the easiest and simplest by far. It really is. If only you understand (and are prepared to accept) what a mock-up is and what it really requires, AND if you can keep your head free from all the nonsense that has emerged about this subject in recent years. Nonsense which says that, for example, you need different reverbs for ER’s and tails. No, you don’t. Or that you need convolution reverbs for realistic reflections, and algorithmic reverbs for lush tails. Bollocks. Or that your mock-up can never sound really real without VSL’s “innovative multi-parameter holistic approach to spatial modeling for virtual orchestras”. Or that you can safely re-arrange Spitfire’s chairs by cunning use of VSS’s trickery. Complete rubbish, that’s what that is. And disastrously dangerous rubbish it is too. 

Certainly not “just semantics”.

But again: MIR itself is not bad software. Absolutely not. It’s the misconceptions and delusions which surround it (some of them sadly very much kept alive by its developer and its most-visible users), and the analytical laziness which it seems to instill, that make it such a potentially hazardous affair.
Same thing with VSS: use it carefully and there’s good things to be done with it, sure. But use it carelessly, without listening or thinking, and you end up with bad sound deluxe: completely disintegrated audio.

In my view, there are only three ways to spatialize sensibly (in the context of mock-ups, that is). But before you can begin with any of those three, and this is the most important line of this entire post, you first have to learn to select your sounds wisely, learn how to use them, know how to write, and be committed to program your music with musical insight, passion and care. And you have to be willing to accept that a mock-up is a completely artificial concoction, in no way related or similar to a recording/production of a real orchestra. If you skip any of these steps or ignore any of their implications, forget about spatialization as well. Or use MIR and keep fooling yourself how fantastically real everything sounds.

Anyway, my three suggestions to spatialize sensibly:
(1) Don’t do it, if it doesn’t need to be done. This applies to all libraries which have a distinct spatial presence baked into their samples: leave them alone. (At most, use only a touch of additional reverb for extra glue & gloss. If needed. But always sparingly.)
(2) Do it the old-fashioned way. Using stereo-width control, delay, EQ, reverb and panning (not necessarily all five all the time). Worked extremely well for decades, still does today. It takes a bit more effort of course, and you have to know what you’re doing (and you have to have the ears to hear what you’re doing), but everything a mock-up could possibly ever need, spatializationwise, can be done this way.
(3) Use SPAT. If there is one thing I’m totally convinced of (and more convinced with every day I use it), it’s this: if everyone here had SPAT and knew how to use it, we would never see another reverb- or spatialization-related thread on V.I. ever. Seriously. And everyone would immediately understand, epiphanically so, what I mean when I say that MIR and VSS and UAD’s Ocean Way and the Cholakis IR’s for LASS are a deplorable waste of money. Not in the sense that they’re intrinsically bad or completely without use ― although at least VSS1 and the Cholakis set are, to my ears, deeply flawed ―, but in the sense that they, in stark contrast with SPAT, have absolutely nothing to offer, and now comes the important bit, in the way of solid solutions for mock-up specific problems.
(And just for the record: I have no connections with Flux at all, have not received any free software from them, didn’t participate in any beta-testing or anything … I’m simply an ordinary, full-price-paying customer who quickly became a rapturously enthusiastic user.)

Even so, you don’t need SPAT any more than you need MIR. SPAT makes a few things extraordinarily easy to accomplish, yes, and invariably with better-sounding results than when accomplished in any other way ― always keep in mind that we’re talking mock-up productions here ―, but that still doesn’t make it an absolutely essential requirement, I believe. More a sort of luxurious high-quality convenience, I’d say. No, the only things you really need, in my view, are a good understanding of what a mock-up is (and thus a healthy distrust of anything that flirts with the idea of orchestral mock-realism), a decent reverb (doesn’t matter which type or brand), a good knowledge of conventional production tools & techniques, and the concentration which makes you focus, first and foremost, on the musical integrity of the track itself, rather than on matters which are completely irrelevant and of minor importance at best.

Trust me, if you can make your music, and the way it is written & programmed, strong and solid enough on its own, the relative unimportance of reverb and spatialization will become immediately apparent. Amazingly so. And the bloated pretensions of some of the software mentioned above will be revealed for what they truly are: the emperor’s nakedness. Be the child who sees that.
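For the curious: the panning half of point (2) above is plain arithmetic. Here is a minimal sketch of the standard constant-power pan law in Python (hypothetical helper name, not any plugin's API):

```python
import math

def constant_power_pan(sample, pos):
    """Pan a mono sample value with the constant-power (sin/cos) law.
    pos: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    angle = (pos + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# At centre each channel sits at ~0.707 (-3 dB), so the summed power,
# and hence the perceived loudness, stays constant across the field.
left, right = constant_power_pan(1.0, 0.0)
```

Delay and EQ for depth are just as prosaic: a few milliseconds of pre-delay and a gentle high roll-off push a source back, no named hall required.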

_


----------



## The Darris (Aug 17, 2014)

jensos @ Sun Aug 17 said:


> Therefore, I am really in need of some (constructive!) input. Could anyone please suggest a setup that would be at least acceptable to the experts in this forum? The total cost should not exceed that of MIR Pro.



Firstly, I do not consider myself a professional when it comes to spatialization. However, I used Peter Alexander's training videos from the Visual Orchestration course to help get a better understanding of what is involved, and so far I feel my mixes have become increasingly better. With that said, I use 3 tools: panning, VerbSessionV3 (algorithmic), and Spaces (convolution). 

Since 95% of the libraries I use are recorded in position (players in their performance seating), I don't have to deal with much panning. Some instruments get slight panning just to give them some clarity and definition if they don't sit well left or right against the other instruments. 

Next, I apply VerbSessionV3 to all my instruments that have the shortest reverb tails. In this case, all of my non-Spitfire instruments get that reverb. I have played with the settings in VerbSessionV3 to match the sound of Air Lyndhurst as best I can. This takes time, and it will vary from section to section and library to library. There isn't one setting that works for all. 

Finally, I push everything into separate section buses, each with an instance of Spaces, to give a final hall sound to my mix. This step takes a lot of tweaking in your final mix to make sure it doesn't come out muddy. 

Anyway, that is my overall technique, but like I said, it takes time and patience, and the end result is well worth it.

Cheers,

Chris


----------



## jensos (Aug 17, 2014)

Chris, thank you very much for your help. I guess there can never be a single "clean" (meaning: theoretically valid and provably optimal) solution to the problem of combining different libraries recorded in different spaces. But your approach looks systematic and seems to make a lot of sense to me. Your pointer to Scoring Stages #3 is very helpful, I will take a good look at that one.

So far I have used VSS to place my dry (VSL) libraries with the ER enabled and have never heard a phasing problem. But it could well be that Piet is right and more critical listening is necessary.

Thanks again,
Jens
--


----------



## The Darris (Aug 17, 2014)

jensos @ Sun Aug 17 said:


> Your pointer to Scoring Stages #3 is very helpful, I will take a good look at that one.



Just to be clear, this is the training series I was talking about: http://www.alexanderpublishing.com/Products/The-Visual-Orchestration-Trilogy__Spec-VizOrch-Bundle-Dwnld.aspx

It has a hefty price tag for all three. I actually only got 2 and 3, as those were really the ones I needed to help explain this whole placement thing. The biggest point Peter makes in his series is that all of the placement work is ear training. Our ears are very sensitive to sound, so doing as many A/B comparisons as possible will only sharpen your sense of how things sound. I promise that you will notice a huge difference in your mixes once you spend a few days playing around and setting stuff up. 

Best,

Chris


----------



## Per Lichtman (Aug 17, 2014)

@re-peat All I can say is, I disagree with a lot of what you said, but respect that your own approach may also yield results that are very useful to you.

At the same time, several of the tools you have derided have helped me in my own work, including mixes that were acclaimed by Grammy-winning performers, a sound designer and audio tech teacher who taught several of the THX/Skywalker Sound guys, and one of the most prolific soap-opera producers on network television. Am I implying that any of those people have better ears than you? Not at all, but it seems just as unfair to imply that the reverse is true.

There really is a lot of room for personal taste on this.

@jensos While it's great to always be on the lookout for helpful tools, don't let the discussion here fool you into the idea that you've bought something sub-par. The tool you already bought has one of the best price to performance ratios out there, and frankly every single tool is going to have pros and cons.


----------



## Per Lichtman (Aug 17, 2014)

@Jensos But in case you don't already, I would suggest using your tools in conjunction with a convolution reverb library. And despite the disparaging wording some others have used: in my own very close listening (with at times blind, repeatable preferences) I found that the "best" convolution reverb libraries sounded better to me than algorithmic ones. But don't break the bank, regardless of the approach you take.

My favorite reverb solutions (so far) have either been Acustica Audio Nebula libraries based on real spaces/real plates/real springs, or Numerical Sound libraries based on real spaces. I've been egging Numerical Sound on to release another collection for a while, but unless I ever succeed in that, the least expensive ones are from their Hollywood Series. I'm also curious about Peter Roos' Teldex plug-in whenever that comes out.

And to be clear, I'm not saying (at all) that re-peat doesn't make great music using the methods and preferences they have expressed; I haven't listened to enough of their music to have any sort of opinion on the subject. My own opinion on the best workflow simply diverges, it has gotten me work with a lot of people I really respect and look up to over the years, and I feel free to present it as what it is: opinion.


----------



## germancomponist (Aug 17, 2014)

Opinions are good/great/not so great..., but results are much more important!

I would like to listen to an example that shows how well this plugin works. 

Feel free to post an example!


----------



## Per Lichtman (Aug 17, 2014)

@germancomponist I'll try to whip something up a little later if I have time - but I'm away from my studio computer right now, downgrading a laptop from Mavericks to Mountain Lion. 

@jensos By the way, a helpful approach for learning placement is to find a favorite recording of a live orchestra where a single instrument or section plays a solo right before a pause. The simpler the excerpt is to reproduce in your MIDI programming, the better.

If you take that audio clip and bring it into your DAW, you can then program your own part on top of that (so as to match phrasing as closely as possible).

After you've done so, move the audio clip and your MIDI clip on the timeline so that they play consecutively instead of concurrently. Then set your loop points to include both and listen to looped playback.

Now begin to modify the settings for your programmed sequence to try and get closer to the original recording. Believe me, it can be a lot easier when you're getting immediate feedback on the accuracy like this. What follows is a short simplified workflow that I use when I'm not adding a lot of other stages.

Normally, I set placement (whether via panning or a tool like VSS) first. Then I modify volume and sometimes make EQ changes. At that point, I start to bring up the reverb send or sends (depending on whether I'm using a combined ER and Tail or separated ones) until it gets similar.

That's pretty much it for a simplified workflow and you can get great results with that alone.
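If it helps to see the "placement, then volume" part of that workflow in numbers: the direct-sound cues for distance are simple physics. A rough sketch (plain Python, hypothetical helper name, assuming free-field behavior at 343 m/s; real rooms are messier than this):

```python
import math

def distance_cues(distance_m, ref_distance_m=1.0, speed_of_sound=343.0):
    """Direct-sound cues for placing a dry source at a distance:
    arrival delay in milliseconds, and level drop in dB via the
    inverse-distance law relative to a reference distance."""
    delay_ms = distance_m / speed_of_sound * 1000.0
    gain_db = -20.0 * math.log10(distance_m / ref_distance_m)
    return delay_ms, gain_db

# A source placed 10 m back arrives ~29 ms later and ~20 dB quieter
# (direct sound only) than the same source at 1 m.
delay, gain = distance_cues(10.0)
```

Numbers like these are only a starting point; the reverb send and your ears do the rest.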

There's a lot of good information in Alexander University's Visual Orchestration 3 about hall sizes and reverb times, in case you happen to know the space used for recording, that can help pick an appropriate reverb.

http://www.alexanderpublishing.com/Products/Visual-Orchestration-3--DOING-The-Basic-Virtual-Orchestral-Mix__Spec-VizOrch-03-Dwnld.aspx (http://www.alexanderpublishing.com/Prod ... Dwnld.aspx)

Anyway, I hope that helps!


----------



## José Herring (Aug 17, 2014)

germancomponist @ Sun Aug 17 said:


> Opinions are good/great/ not so great....., but results are much more important!
> 
> I would like it to listen to an example what shows how good this plugin works.
> 
> Feel free to post an example!



Yes. As with anything, I think it's in the hands of the practitioner. The best examples of creating an artificial space I've heard were with MIR, which has also produced some of the most horrible examples in the hands of less skilled people.

SPAT is great at what it does. But I've only had real success with it using close-miked live recordings. Using it on samples didn't work out so well for me if the sample had any of its own space already. Though it did work well with The Trumpet, which has no ambience built in that I can hear.

VSS, I tried the demo and was not impressed with the first version of it. It sounded phasey and really fake to my ears. I'm looking forward to version 2 to see if any of that has been fixed. 

Like anything, these tools have their uses. The worst use of them is trying to make an ambient sample more ambient, or to put a sample recorded in one place into another place. The best use is placing close-miked solo recordings in a space.

That being said, I've had even more success with spatial placement just using EQ and pan.
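Incidentally, "phasey" doesn't have to stay subjective: a correlation reading on the stereo bounce tells you quickly whether a placement tool is fighting mono compatibility. A minimal sketch in plain Python (a hand-rolled Pearson correlation over two channel lists, no DAW metering assumed):

```python
import math

def stereo_correlation(left, right):
    """Pearson-style correlation between two equal-length channels.
    +1 = identical (mono-safe), 0 = decorrelated, -1 = polarity-flipped."""
    n = len(left)
    ml = sum(left) / n
    mr = sum(right) / n
    cov = sum((l - ml) * (r - mr) for l, r in zip(left, right))
    var_l = sum((l - ml) ** 2 for l in left)
    var_r = sum((r - mr) ** 2 for r in right)
    return cov / math.sqrt(var_l * var_r)

# A polarity-flipped pair sits at -1: summed to mono, it cancels.
sig = [math.sin(2 * math.pi * 440 * t / 48000) for t in range(480)]
corr = stereo_correlation(sig, [-s for s in sig])
```

Readings hovering near zero or dipping negative on sustained material are the numeric face of that "phasey" sound.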


----------



## AR (Aug 18, 2014)

Have some of you tried the cool plugin by SPL called Mo-Verb? Great tool to achieve more ambience. I sometimes use it on percussion when I want it way, way in the back


----------



## re-peat (Aug 18, 2014)

josejherring @ Mon Aug 18 said:


> (...) Using it on samples didn't work out so well for me if the sample had any of its own space already.(...)


SPAT was never designed, and certainly doesn’t encourage its users to explore re-spatializing samples that already have a distinct baked-in reverb in them.
In essence, SPAT is simply a virtual space, with full control over the space and over the source(s) inside it. You don’t add it to your source sound, _you put your source sound in it._ Big and fundamental difference. Which is why SPAT doesn’t have a dry/wet parameter. Chambers, rooms, halls, studios don’t have a dry/wet slider either, do they?

And having put your source sound inside SPAT, you then have complete control ― down to the tiniest detail if you like ― over the position, rotation, dispersion and projection of this source as well as over the way in which the space will respond to it and interact with it. A space, if it still needs mentioning, of any shape, size or character you care to imagine.

What SPAT doesn’t do, intelligently designed software as it is, is tell you that your samples (or modeled sounds) are going to sound as if recorded in ToddAO or OceanWay or Teldex or the Mozartsaal or whatever. See, that’s not the sort of silliness that SPAT deals in. SPAT (or at least: the people behind it, and me agreeing) rightly feels that people who want that sort of thing need to use different software.
What SPAT *does* do however, is offer you all the options to create a spatial environment for your samples (or modeled instruments) that is fully compatible with any of those spaces. And it also allows you to place your samples anywhere in that space. And always totally believably so.

It’s all a mock-up musician needs, really. It is, in fact, the best thing that could have come along for anyone working with virtual instruments. And this is not just me raving irrationally. This is me being dead serious and expressing it as my deepest conviction, based on quite a few years of intense experience with this truly remarkable software.

Ask me to place The Trumpet in amongst Sable, and I’ll do it. No problem. (Leaving Sable untouched, of course.) I can even make it sound as if the trumpet has turned its bell away from the audience and pointed it towards the back of the hall. Or make it sound as if the trumpet is off-stage (as is sometimes prescribed in certain scores). All without any difficulty at all, and always sounding pretty convincing, and sonically (and stereophonically) totally solid and intact. (A not unimportant aspect.)
The VSL harp to the right side of the Cinebrass trombones (again: leaving these unprocessed), somewhere in the back of the mix? Piece of cake. 
The Galaxy VintageD in some not-too-wet-sounding chamber, slightly to the left, together with the StraightAhead bass, deep center, and the Mixosaurus drumkit slightly more to the right? Ready when you are.
The VSL woodwind section nicely distributed in the middle row of a virtual orchestra dominated by the Berlin Strings, and subtly panned from middle left to middle right? And some SampleModeling horns thrown in behind them for good measure? Won’t take more than a few minutes.
LASS in a studio? LASS in a small hall? LASS in a big hall? LASS in an airport hangar? LASS in a rehearsal room with its doors closed? All perfectly possible and always sounding completely satisfying in its spatial definition.

The one thing you shouldn’t do with SPAT, is send Albion or Symphobia in, and hope to be able to reposition their low strings or brass or something. Or use it to try and pan the BML flute differently than it was recorded. No, no, no, no. Again: SPAT doesn’t do this kind of bad-sounding circus trick. You can actually watch it lower its brow disapprovingly at you, and hear it utter a displeased snort, when asked to do such a stupid thing.

SPAT really is ultra-professional software (and I don’t use the word lightly, but in this instance to the fullest of its meaning) and it assumes ― and thank god for this refreshing attitude in a piece of modern software ― that its users are intelligent, professional people too, who know what they want, who know what to do and who also know what not to do.

_


----------



## AR (Aug 18, 2014)

As for Spitfire... I recently noticed that when using just the overheads and ambient mics, I end up with harder L/R-panned instruments than with the Tree mic on. But sometimes I wish I could pan the Spitfire brass more to the left or right without losing stereo width.


----------



## dedersen (Aug 18, 2014)

Seems relevant to mention that SPAT is quite heavily discounted right now, at 30% off.


----------



## playz123 (Aug 18, 2014)

AR @ Mon Aug 18 said:


> As for Spitfire... I recently noticed that when using just the overheads and ambient mics, I end up with harder L/R-panned instruments than with the Tree mic on. But sometimes I wish I could pan the Spitfire brass more to the left or right without losing stereo width.



Since this thread is supposed to be about VSS and not Spitfire, I can only suggest that most Spitfire libraries really don't require much panning at all, if any, and certainly don't require VSS.

Re. the opinions expressed in this thread, there does seem to be a tendency by some to insist that a) VSS is basically unusable or shouldn't be used, and b) everyone who likes it must have faulty hearing or is incapable of knowing how and when to use it.  To be honest, while I respect everyone's right to share their opinion, I still believe that VSS is a useful tool, and when used properly in some situations it can be most effective. Much depends on when one uses it and how...just like many other 'tools' that are available. Certainly if one doesn't like it or feels it's not for them, that's fine, simply move on. But please also consider the fact that some of us are as aware of the merits of panning and/or spatial placement, and of what works and what doesn't, as you might be. Personally, I think Piet summed up VSS nicely when he wrote "...use it carefully and there’s good things to be done with it, sure. But use it carelessly, without listening or thinking, and you end up with bad sound". Not sure anyone can disagree with that.


----------



## José Herring (Aug 18, 2014)

playz123 @ Mon Aug 18 said:


> AR @ Mon Aug 18 said:
> 
> 
> > As for Spitfire... I recently noticed that when using just the overheads and ambient mics, I end up with harder L/R-panned instruments than with the Tree mic on. But sometimes I wish I could pan the Spitfire brass more to the left or right without losing stereo width.
> ...



I don't think that anybody is trying to suggest that VSS is unusable. People just stress their opinion of a particular product. Some people, erhmm, express their opinion as absolute fact. But it's an opinion nonetheless.

Personally, I don't care how "highbrow" or uberprofessional any of these products are or claim to be. They all sound pretty bad, imo. In a pinch you can squeeze some pretty convincing things out of products like SPAT, and I'm sure the same holds true for VSS. Just imo, the first version of VSS didn't sound all that professional.

In the end, I'm heading in a different enough direction these days with my music that it's probably better for me not to get involved in these discussions. I honestly haven't used positioning software in over a year, just because I'd rather record the real thing in a good room than monkey around with plugins, which is getting rather tedious for me. Also, when I use The Trumpet or even VSL, EQ and reverb seem to do just fine.

As for samples, recording them in a good room and keeping the dynamics intact is far better than using any of these tools. So I'm leaning towards that.

In the end, use whatever gets you there. Just take care not to lie to yourself. It's easy to fool yourself with this stuff because the basics of it are a deception anyway: trying to fool somebody else into believing you have the real thing. 

And though I seldom see things from Piet's point of view, I will say that he's right about one thing. You can chase the dream of emulating the real thing so hard, trying plugin after plugin after plugin. In the end, it's not the real thing, and what one should be chasing is how to make the fake thing more musical, because no sample will ever sound like this and it's foolish to try: https://www.youtube.com/watch?v=Egnbhf3aivQ#t=20


----------



## benmrx (Aug 18, 2014)

re-peat @ Sun Aug 17 said:


> benmrx @ Sat Aug 16 said:
> 
> 
> > (...) it's semantics. (...)
> ...



Ok, for the record, when I said "semantics" I meant that saying 'I want to place my instruments in the same room' means the same thing as 'I want to place my instruments in the same environment' or 'space', etc. IMO, that's semantics. 

To me, your issues with VSS or MIR have more to do with what's being talked about in the 'hype' thread. Most of what you're saying could be applied to anything: any new string library that says 'this will revolutionize string writing', or tape simulation that says 'this sounds exactly like tape', etc. Most people know it's just marketing. If someone buys a $100 plugin (let's not forget that we have people from all walks of life here, at all different levels of experience) and gets bummed because it wasn't a magic bullet, that might be considered naive on their part, but it doesn't make them dumb. Maybe they only have $100 to spend. Also, can you even compare VSS to SPAT, which is around 10x the price? Just like it would be unfair to compare the strings in GPO to the Sable line. But should the strings in GPO be marketed as 'not as good as Sable'? Never gonna happen. 

It's up to us as end users to either believe the hype (whether it's for a plugin, strings library, etc.) or not. No developer is going to downplay their own product. They're going to market it as the coolest thing since sliced bread, like EVERY other product that has ever been released, ever. Same again could be said for a masking plugin for Photoshop. 'Now you can mask out frizzy hair with one click!' No you can't. And most people buying it already know that, but MAYBE it helps them at SOME point in the process..., or (more importantly) helps them to understand or look at the process from a different angle. "I never thought to make a grey scale first"..., or in the case of VSS, "I didn't think to roll off so much bottom end from the trumpets, but that aspect is really working".

I came to the world of composing and virtual instruments from being a record producer for 15 years. I've got decent, well-trained ears. I've been mixing long enough to know what phase issues sound like. It took me a while to figure out how to get a good mix in my orchestral mockups because the approach is extremely different from mixing a pop/rock/singer-songwriter piece. VSS REALLY helped me to hear things from a different angle, and was just as much a learning tool as anything else. I might not use it as much anymore because I now prefer my own personally developed methods (and yes, better writing simply leads to better mixes, so that DOES make a huge difference), which offer me more flexibility. However, I'll be stoked to try out VSS2 when it arrives.


----------



## jensos (Aug 18, 2014)

@Per Lichtman and Chris, thank you very much for your replies. They are indeed very helpful. This whole debate here is actually quite useful for me, as it is making me focus more on the mixing and spatialisation issue. Having set up a halfway workable template many months ago, I had totally neglected this aspect. (My main focus is really on learning good orchestration right now.) And I do admit that reading the comments by the true experts here is at times intimidating :o 

Btw, I'm indeed running everything through a convolution reverb. I have also done my best to get the relative loudness of the individual sections right, looking at all kinds of information sources (such as the natural volume table by VSL, the Rimsky book and various recordings). But none of this has been systematic.

I'm sure it is a very good idea to go over Peter Alexander's tutorials and tackle it all systematically. 

Thanks again for your help. Much appreciated!! And all the best,
Jens
--


----------



## Per Lichtman (Aug 18, 2014)

@jensos I'm glad the discussion's helpful to you and I'm wishing you lots of luck with the journey.


----------



## Mystic (Aug 18, 2014)

Same here. I actually found VSS the other day for the first time and saw it had settings for Hollywood Strings in it and thought to myself "why the hell would I need to position something that is already positioned in the samples?" so I started digging a little more because I was interested in exactly what it did. Then this thread came up and I learned quite a bit from it.


----------



## re-peat (Aug 19, 2014)

benmrx @ Mon Aug 18 said:


> (...) Most of what you're saying could be applied to anything. (...)


I don’t think so, Ben. One doesn’t see the same delusions and misconceptions which infest reverb- and spatialization-related discussions when people exchange viewpoints on other aspects of virtual orchestration. Or not nearly as much anyway.
The idea that convolution brings more realism to a mock-up than algorithmic reverbs could, is of an absurdity which you never encounter when the subject turns to sampled strings or brass. The belief that the Appassionatas will sound more ‘real’ (I cringe even having to type the word) when run through MIR instead of through the Phoenix is the sort of patent imbecility that you never encounter when people are discussing virtual compressors or limiters. 
Poor old Jens being led astray in the direction of Peter Alexander’s lair ― the last place on earth where one should seek intelligent and useful advice on the subject of mock-ups, audio production and spatialization ― is something you just don’t see happening when we’re discussing virtual pianos.
Measuring rooms? Meticulously calculating tail lengths and predelays? It’s all pointless, futile and time-wasting nonsense that is of no use whatsoever to anyone hoping to increase the quality and believability of a mock-up.

I don’t know why it happened the way it did, I don’t know when it started, I don’t understand why it could spread like a highly contagious disease the way it did, but the idea that you need a complex multi-reverb system, preferably convolution based, and that you need numbers, measurements and charts to guarantee some degree of realism in a mock-up, is the single biggest fallacy currently debilitating the mock-up community.
And I’ll tell you why I don’t understand its success: because there is not a single nano-second of audible proof for it. I still have to hear the first snippet of audio which validates the idea. And going straight to the heart of the matter: I still have to hear the first mix using this approach which proves, beyond any doubt, that the assumed realism of convolution-based spatialization is capable of reducing, let alone annihilating, the intrinsic artificiality of mock-ups.

The moment anyone comes forward with such a demo, I’ll shut up. But until then, I’m afraid I’ll have to keep saying what I have been saying for the past 10 years: it’s utter madness and a complete waste of effort, focus and resources.
Reverb is unimportant. Or, let me rephrase that: you have to make it unimportant. That’s when you know, and only then, that you have a healthy-sounding, well-written, well-programmed and well-produced mock-up to share with the world.

_


----------



## Per Lichtman (Aug 19, 2014)

@re-peat I'm not sure what you mean by "no proof". There are very definitely demos that compare the results of convolution with various impulses vs. algorithmic reverbs (both internal and external) so that people can hear what they like best. There's no snake-oil: people are pushing what sounds best to them and they are pushing the methods they used to get there. It sounds more like you're saying "I don't like the sound" than looking at whether things can be proved or not. Do you have tests that disprove what people have been saying or are you simply unsatisfied with the acoustic results that people are achieving with the methods?

The acoustic properties of a space are measurable and quantifiable as long as you define values for certain variables. The idea that those things aren't important is frankly bizarre: if you know what the values are, you have a good starting point for emulating it (and that's true for both algorithmic and convolution). After that, you need to use your ears, obviously. But if something was recorded in a room with a 4 second RT60 and you start off trying to emulate it using an RT60 of 0.5 seconds in your verb... are you trying to say that will sound at all similar?
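To make the "measurable and quantifiable" point concrete: RT60 really is a number you can pull straight out of an impulse response. A minimal sketch of the standard Schroeder backward-integration estimate (plain Python, verified here on a synthetic exponential IR; real IRs need noise-floor handling that this omits):

```python
import math

def rt60_from_ir(ir, sample_rate):
    """Estimate RT60 via Schroeder backward integration: build the
    energy decay curve from the squared IR, then extrapolate the
    -5 dB to -35 dB slope (a T30 measurement) out to -60 dB."""
    energy = [x * x for x in ir]
    total = sum(energy)
    edc, running = [], total
    for e in energy:            # backward integral, computed forward
        edc.append(running)
        running -= e
    edc_db = [10.0 * math.log10(v / total) for v in edc if v > 0]
    t5 = next(i for i, v in enumerate(edc_db) if v <= -5.0)
    t35 = next(i for i, v in enumerate(edc_db) if v <= -35.0)
    slope_db_per_sample = -30.0 / (t35 - t5)
    return (60.0 / -slope_db_per_sample) / sample_rate

# Synthetic IR that decays exactly 60 dB in 2 seconds:
sr = 8000
decay_db_per_sample = 60.0 / (2.0 * sr)
ir = [10 ** (-decay_db_per_sample * n / 20.0) for n in range(4 * sr)]
estimate = rt60_from_ir(ir, sr)   # ~2.0 seconds
```

Whether you then dial that number into an algorithmic verb or pick a matching IR is, of course, exactly the taste question being argued here.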

So if you're emulating the sound of, say, the tree mics in the OT Berlin or Spitfire Audio BML series, wouldn't it be worth a shot trying a placement plug-in that supports a tree approach (like VSS) as opposed to the many that don't? If your original sample material was not recorded using a tree approach, couldn't that potentially be helpful? To invert your original discussion, can you prove there's no way that could be helpful, as opposed to simply addressing one consideration with regard to phase?

However, your point about working so that the reverb becomes unimportant is one I almost agree with. If you mix your material to sound good without reverb before you start adding reverb it can be easier to hear flaws that might get masked and you're less likely to use reverb as a crutch. And certainly it makes sense to avoid spending "too much" time on verb versus other things.

But I still hear a difference between different reverbs and the preferences I've espoused have been consistent in blind testing. If I didn't take the time to verify it thusly, then I would feel like I wasn't doing my job as an audio engineer properly. Like I said earlier, I haven't used MIR so I can't comment on that part of the equation. But convolution vs. algorithmic is a little like sampled libraries vs. analog synths: both can sound great, but one is better designed to capture a snapshot of something real whereas the other is better designed to create sounds that do not directly emulate a "real acoustic" sound. And of course, in both cases a tremendous amount of skill and effort goes into creating a good sound and it's possible to create awful results either way.

We have veered pretty far from focusing on VSS, though.


----------



## re-peat (Aug 19, 2014)

Per Lichtman @ Tue Aug 19 said:


> (...) people are pushing what sounds best to them (...)


I'm not so sure about that, Per. Browsing the reverb-related threads on this forum, you mostly see people simply not having a clue as to what to do. They don't seem to know what sounds best to them or, more likely, are hesitant and afraid to take on the responsibility to decide for themselves, and are thus completely willing to surrender themselves to the first person — with enough justified or assumed authority — who tells them _what they're supposed to do_. And it is this "you're supposed to do this and this"-thing which I often find, in this particular context anyway, highly questionable if not completely absurd. Because I simply don't believe in the reality-emulating concept from which it springs.

See, if I read _"The acoustic properties of a space are measurable and quantifiable as long as you define values for certain variables. If you know what the values are, you have a good starting point for emulating it."_ my first reaction is, after gasping for some much needed oxygen: why would anyone ever wanna bother with any of that? Because it accomplishes absolutely nothing (in the way of improving a mock-up), other than giving you the meaningless satisfaction that you got the numbers right. But what true value do these numbers have, I ask, in the make-believe bric-à-brac that is a mock-up? None whatsoever. There are simply too many things going on in a mock-up which render this numerical accuracy completely irrelevant and pointless.
You may spatialize the Appassionatas with the utmost in numerical accuracy and meticulously measured acoustic properties all you want, it won't change the simple fact that we will be listening to blatantly artificial strings, going through clumsy artificial motions in a hopefully somewhat sympathetic but always artificial space.

Same thing with your "emulating the sound of tree mics". What does it achieve if you are able to do so? Because in the sum total of a mock-up mix, the sound (and the impression it makes) will always be much more determined — and unforgivingly so I might add — by the crippledness of the sources rather than by the accuracy of their spatialization.

I don't do numbers. I simply load up a reverb or spatialization plugin -- ReLab, Phoenix, SPAT, UAD, ... doesn't really matter much --, tweak a few parameters until I have a sound that I feel is nicely compatible with the space I have in mind for a particular mix (or with the space dictated by the most dominant library in the mix) and move on to much more important matters. Never given even the slightest thought to things like acoustic properties, tree mics, numerical accuracy or anything else of that sort. Never given a yoctogram of my attention to the "cubic meters"-parameter in SPAT and I never will. I simply move that fader until it sounds more or less right with everything else in the mix.

_


----------



## markwind (Aug 19, 2014)

re-peat @ Tue Aug 19 said:


> Per Lichtman @ Tue Aug 19 said:
> 
> 
> > (...) people are pushing what sounds best to them (...)
> ...



I couldn't agree, and this could apply to a great many other domains too. It's a very common attitude.


----------



## brett (Aug 19, 2014)

I think we're splitting hairs to some degree here. Sure, the idea of 'placing' your sampled instruments in a world famous concert hall or scoring stage, while potentially appealing, is somewhat flawed as a one-size-fits-all approach for samples from different manufacturers. But if you are sensible and use the reverb/spatialisation tool cleverly and get a great result, who cares if the philosophy of one dev differs from another? Who cares if it's in said concert hall or an algorithmic space. The listener certainly won't if you can get your mockup realistic and transparent. 

These are all just tools and while they do have different approaches in design and marketing it's all down to a combination of personal preference, user skill and budget. In the right hands each of the tools discussed above can get great results, just as they can produce really terrible results if care is not taken. 

A related analogy is the discussion surrounding 'all-in-one' plugins such as the Waves Signature series. One may argue that you are better off learning how to get the sound you want from first principles, thus deepening your understanding of sound and mixing. There's truth in that. However, if the tool of your choice gets you a great result quickly at a budget you can afford, I reckon you've had a win. As it is with reverb/spatialisation plugs.

That's the way I see it, but really what matters is what works for you and your level of experience.


----------



## re-peat (Aug 19, 2014)

markwind @ Tue Aug 19 said:


> (...) I couldn't agree, and this could apply to a great many other domains too. It's a very common attitude.


Common attitude or not, Mark, what's that got to do with anything, if I may ask? It's not because something is 'common attitude' that it should be relieved from (critical) observation and questioning, is it?
And yes, perhaps some of my remarks could apply to several other domains too. But again, so what? 

I'm afraid I don't really get what you're trying to say here.

_


----------



## markwind (Aug 19, 2014)

re-peat @ Tue Aug 19 said:


> markwind @ Tue Aug 19 said:
> 
> 
> > (...) I couldn't agree, and this could apply to a great many other domains too. It's a very common attitude.
> ...



Haha, I'm sorry. I totally meant to say: I couldn't agree more.


----------



## davidgary73 (Aug 19, 2014)

I totally agree with re-peat. Back in the day, there was nothing like VSS or MIR for us to use, and yet many made great records. 

It was all just simple use of reverb, delay, EQ and compressor hardware units. These tools made millions of records, and many are still making hits, but most importantly, music that speaks to us. 

Andy B's mockup for the Spitfire demo is truly beautifully crafted music, and he doesn't even need anything else other than maybe adding some reverb, EQ or compression. 

Master your playing and programming skills and learn how to use your reverb, delay, EQ and compressor plugins well.


----------



## Per Lichtman (Aug 19, 2014)

@re-peat @markwind Except of course that you're both using expensive sample libraries designed to sound more realistic as opposed to just analog synthesis which could also "sound good". Are you claiming that has nothing to do with the fact that it sounds more realistic? Because it sounds like you are saying that one aspect of emulation is "okay" and another is not. 

I have never at any point said not to use your ears - in fact, comparing different tools and methods by using my ears is how I came to prefer my current working method. And when I listen back to my old mixes, I can hear the point at which the reverb got better, long before I go diving into the project notes to see what I used. I think we've all had tools that we tried that seemed great on paper but were underwhelming when we actually listened to them and I agree with you guys that you shouldn't use something that doesn't sound right to you.

@davidgary73 Well of course Andy B's demo sounds good without needing additional tools - that's the way the library's been recorded and programmed to work. Trying the same thing with a close-miked library not recorded in-situ would be a completely different ballgame. 

And what orchestral mock-ups from a long time ago that used commercially available libraries sound as realistic as the best modern ones? There are lots of tracks that I love from that period - but for musical reasons, not craft ones and that includes some of my own early mock-ups. 

But the big thing that confused me in your post was the suggestion to someone that's trying to learn about placement, reverb and spatialization in a traditional orchestral context that they should learn to master compression. EDIT: Except of course you said to learn to use it well and the comment about mastering applied to something else, so mea culpa - now back to my original post.  Here's a short list of reasons why I think that should be one of the last FX to master in that context.

1) Compression is extremely genre-specific in its application. You don't hear it at all in most traditional classical music and you do hear it in some film music.

2) Compression has one of the longest learning curves of any mixing effect and some of the greatest potential for ruining a mix beyond the point of repair. Within normal tolerances, it's much easier for a mixing engineer to repair a poorly EQed stem than a poorly compressed one.

3) The coloration of many compressors in many contexts serves to bring a sound forward in a mix - which isn't typically the first thing you need if you're trying to move a close-miked library further back in your mix.

Of course compression can be useful (especially in terms of using synths and expanded percussion in a film score), but telling someone to focus on learning that when they are trying to learn placement seems like a strange order of operations.

And in terms of "great records", here's one way to look at it: every stage between the creative idea and the time it reaches the listener's ear is like a math problem. Here's a simplified example:

creative idea X performance X instruments used X recording environment X recording engineer's skill X recording equipment = quality of the track

In that example, the maximum numerical value for each element goes in descending order from left to right. If you have a poor creative idea, it's difficult to save even if you get every single other variable in that equation right. If you marry a great idea and a great performance, then the listener is likely to be more forgiving of the other elements, etc., etc. But no stage is unimportant - if you nail every stage, you can communicate the creative idea more directly to the listener than if any one of them is off. Some are just weighted more heavily than others.
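As a toy sketch of that weighted multiplicative chain (every weight and score below is invented purely for illustration, not measured anywhere):

```python
# Toy model of the multiplicative "quality chain" above. Each stage is
# scored 0..1; the weight controls how far a weak stage can drag the
# final product down: effective factor = 1 - weight * (1 - score).

WEIGHTS = {                  # descending importance, left to right
    "creative idea": 1.0,
    "performance": 0.8,
    "instruments": 0.6,
    "environment": 0.4,
    "engineer": 0.3,
    "equipment": 0.2,
}

def track_quality(scores: dict) -> float:
    q = 1.0
    for stage, weight in WEIGHTS.items():
        q *= 1.0 - weight * (1.0 - scores[stage])
    return q

perfect = {stage: 1.0 for stage in WEIGHTS}
bad_idea = dict(perfect, **{"creative idea": 0.2})
bad_gear = dict(perfect, **{"equipment": 0.2})

# A weak idea hurts far more than weak equipment:
print(round(track_quality(bad_idea), 3))  # 0.2
print(round(track_quality(bad_gear), 3))  # 0.84
```

The exact curves don't matter; the point is simply that the heavily weighted stages dominate the product.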

When I go back and listen to old orchestral mock-ups, my favorites are the ones that start with great source material. Those maintain my interest in a way that lets me tune out the technical deficiencies more. And what some people did with limited tech. is impressive (some of Jeremy Soule's work with the original GPO, for example) but it doesn't sound as good on a craft level as the best mock-ups you'll hear today. The fact that we can enjoy old mock-ups on a musical level shouldn't stop us from trying to improve the technical quality of the ones we make today.


----------



## davidgary73 (Aug 19, 2014)

@Per

A small correction if you don't mind  

I wrote "and learn how to use your reverb, delay, EQ and compressor plugins well" and did not mention the word master. Master was used in this context "Master your playing and programming skills". 

Other than that, I agree with everything you said about compression having a steep learning curve hahaha but I love using Nebula 3rd party SSL, Manley and 33609 compressors on drums and percussion. 

Well, we use whatever fits the music as long as we continue to channel out good music. 

Cheers


----------



## Per Lichtman (Aug 19, 2014)

@davidgary73 A legitimate distinction - I apologize for getting it wrong and will fix it right now. I should really get in the habit of using the quote button if I'm going to keep writing long posts on my phone like that. 

And I definitely understand what you're saying about drums and as for the 33609, it is one of the most beautiful compressors I've ever worked with.


----------



## re-peat (Aug 19, 2014)

Per Lichtman @ Tue Aug 19 said:


> (...) Except of course that you're both using expensive sample libraries designed to sound more realistic as opposed to just analog synthesis which could also "sound good". Are you claiming that has nothing to do with the fact that it sounds more realistic? Because it sounds like you are saying that one aspect of emulation is "okay" and another is not. (...)


No, I distinguish between emulation which matters and emulation which doesn’t. A mix is only as convincing as its most convincing ingredient, and since the most prominent ingredient in a mock-up (its timbres) isn’t very convincing to begin with ― at least, to my ears it isn’t ― I fail to see much value in trying to elevate the believability of ingredients of less prominent importance beyond making them simply functional and non-distracting. “Realistic” never even enters into it, as far as I’m concerned.

If I decide to use, say, The Trumpet with LASS, I have two serious problems to begin with: neither of them sound very realistic. Decent, oh yes, and better than much else in the virtual world, but still: nowhere near to what I consider realistic. And no matter which type of spatialization I throw at these problems ― good, bad, cheap, expensive, convolution, algorithmic, whatever … ― they won’t go away. Because the problem is 100% intrinsic to the source sound. I have to accept that. And I do. But to me, that also means that there is absolutely no point in trying to create a “realistic space” around these instruments because the attention-grabbing sounds of my mix will always be the problematic ones : not the space, but the samplemodeled trumpet and the sampled strings.

But you’re right, I do prefer to work with quality libraries rather than with kirkhunteralia, but even with the best libraries I still start every mock-up project with the core problem that my sounds don’t sound all that realistic. So why, I keep wondering, would the space around those sounds have to be?

_Functional_ and _non-distracting_, those are the fellas for me. 

_


----------



## germancomponist (Aug 19, 2014)

re-peat @ Tue Aug 19 said:


> So why, I keep wondering, would the space around those sounds have to be?



o/~ :mrgreen:


----------



## re-peat (Aug 19, 2014)

For those interested, here are two short videos of SPAT in action. The first one using an anechoic flute recording (so as not to be distracted by the limitations of a sampled instrument), and the second one presenting two 'spatted' SampleModeling instruments, trumpet and sax, against a simple orchestral backdrop. (That orchestral backdrop is very rough and sketchy, I'm sorry about that, but that video wasn't made to display my mock-up skills or lack thereof, but only to illustrate some possibilities of SPAT.)

*1. SPAT / Quick glance at some parameters*
*2. SPAT / "Kijé's Wedding (Prokofiev)" / SampleModeling Trumpet & Saxophone*

_


----------



## germancomponist (Aug 19, 2014)

re-peat @ Tue Aug 19 said:


> For those interested, here are two short videos of SPAT in action. The first one using an anechoic flute recording (so as not to be distracted by the limitations of a sampled instrument), and the second one presenting two 'spatted' SampleModeling instruments, trumpet and sax, against a simple orchestral backdrop. (That orchestral backdrop is very rough and sketchy, I'm sorry about that, but that video wasn't made to display my mock-up skills or lack thereof, but only to illustrate some possibilities of SPAT.)
> 
> *1. SPAT / Quick glance at some parameters*
> *2. SPAT / "Kijé's Wedding (Prokofiev)" / SampleModeling Trumpet & Saxophone*
> ...



Thanks for sharing, Sir! Very enlightening!


----------



## Per Lichtman (Aug 19, 2014)

re-peat @ Tue Aug 19 said:


> A mix is only as convincing as its most convincing ingredient, and since the most prominent ingredient in a mock-up (its timbres) isn’t very convincing to begin with ― at least, to my ears it isn’t ― I fail to see much value in trying to elevate the believability of ingredients of less prominent importance beyond making them simply functional and non-distracting. “Realistic” never even enters into it, as far as I’m concerned.



While I agree that "functional and non-distracting" can be good and helpful targets, I disagree that a mix is only as convincing as its most convincing element. I would say a mix is as convincing as its least convincing element. That weakest element can torpedo one of your primary criteria: non-distracting. I mean, I think many of us heard it in some low-budget scores from early 90s projects where brass was sampled and programmed less convincingly than the strings and was very loud in the mix. We might be able to focus on the dialogue without getting distracted by the strings playing simple parts at a low volume, but when that brass blasted through... man did it get distracting and unconvincing.



re-peat @ Tue Aug 19 said:


> ... Decent, oh yes, and better than much else in the virtual world, but still: nowhere near to what I consider realistic. And no matter which type of spatialization I throw at these problems ― good, bad, cheap, expensive, convolution, algorithmic, whatever … ― they won’t go away. Because the problem is 100% intrinsic to the source sound. I have to accept that. And I do.



I agree, live performance can go so much further than samples have (and I don't see that changing because samples are recorded from live performances outside the context of the specific piece). No arguments so far.



re-peat @ Tue Aug 19 said:


> But to me, that also means that there is absolutely no point in trying to create a “realistic space” around these instruments because the attention-grabbing sounds of my mix will always be the problematic ones : not the space, but the samplemodeled trumpet and the sampled strings.



In my experience, even the best close-miked sounds tended to sound unconvincing when placed in a mediocre space, and a wide range of sounds benefitted from being placed in a more convincing space. Even sample libraries I purchased more than a decade ago sound better in mixes using the tools I have now than they did with the tools I started out with (like the NFX reverb in GigaStudio). I think of the space as a modifier for the sound, not a sound in and of itself (unless you're talking about hum or some other sort of room tone).



re-peat @ Tue Aug 19 said:


> But you’re right, I do prefer to work with quality libraries...



Always nice to be on common ground. 



re-peat @ Tue Aug 19 said:


> ...but even with the best libraries I still start every mock-up project with the core problem that my sounds don’t sound all that realistic. So why, I keep wondering, would the space around those sounds have to be?
> 
> _Functional_ and _non-distracting_, those are the fellas for me.
> 
> _



I respect the viewpoint but I strive to make it sound as realistic as I can (with the caveat of also being aesthetically pleasing to me) because to my ears an unrealistic space detracts from my efforts to sound like a performance. Once again, it goes back to our differing viewpoints on whether a mix is as convincing as its strongest element or its weakest one.

But you bring up a great, great, *great* point that I agree with: non-distracting is a very important consideration in a mix. If an element starts to distract you in an unintended way, it needs work. If a mix scored to picture starts to distract from the action in an unintended way, then it needs work. And "functional" can be a helpful guide in certain contexts because (as we both agree) you're pretty much just fighting a losing battle if you won't be satisfied until you hit a 1:1 reproduction of a live performance. The live performance really is the "unreachable upper limit".


----------



## Per Lichtman (Aug 19, 2014)

re-peat @ Tue Aug 19 said:


> For those interested, here are two short videos of SPAT in action. The first one using an anechoic flute recording (so as not to be distracted by the limitations of a sampled instrument), and the second one presenting two 'spatted' SampleModeling instruments, trumpet and sax, against a simple orchestral backdrop. (That orchestral backdrop is very rough and sketchy, I'm sorry about that, but that video wasn't made to display my mock-up skills or lack thereof, but only to illustrate some possibilities of SPAT.)
> 
> *1. SPAT / Quick glance at some parameters*
> *2. SPAT / "Kijé's Wedding (Prokofiev)" / SampleModeling Trumpet & Saxophone*
> ...



I look forward to checking these out. Thanks!

I'll see if I can post some VSS examples next time I'm by the studio computer.


----------



## germancomponist (Aug 19, 2014)

Per Lichtman @ Tue Aug 19 said:


> I'll see if I can post some VSS examples next time I'm by the studio computer.



Cool, I am interested.


----------



## re-peat (Aug 19, 2014)

Per Lichtman @ Tue Aug 19 said:


> I would say a mix is as convincing as its least convincing element.



You're entirely right, Per. My mistake. I phrased it badly. What I actually meant, and what I should have written, is: _a mix can only be as convincing to the degree in which its most convincing ingredient convinces_. Even if that begins to sound alarmingly like some ancient Macedonian proverb or something.

_


----------



## Echoes in the Attic (Aug 19, 2014)

On the subject of spatialization and a "virtual stage", I was just wondering if anyone had any experience with the HOFA convolution reverb which I just noticed has a stage positioning parameter, the graphic of which looks rather similar to the 3D grid in Heavyocity's Damage. I haven't had a chance to try it and no doubt it does not do anything like the specialized plug-ins like SPAT, but it seems to be more than just wetness and panning.

cheers


----------



## Per Lichtman (Aug 19, 2014)

@re-peat Proverb or not, I can agree with that.


----------



## Per Lichtman (Aug 19, 2014)

@Echoes in the Attic I haven't tried it, but since SoundOnSound quoted (and properly attributed) one of the articles from SoundBytesMag.net recently when we had content they didn't, it seems only fair that I extend the courtesy of linking to their article when the situation's reversed. 

http://www.soundonsound.com/sos/jan14/a ... reverb.htm

The most relevant section has the title "HOFA, So Good?"


----------



## Conor (Aug 19, 2014)

With apologies for continuing to not talk about VSS in this thread...

Re-peat, when using SPAT with LASS, how do you compensate for LASS's built-in stage placement? (Or do you?)


----------



## re-peat (Aug 20, 2014)

Cobra,

If I use SPAT with LASS (which is not often, as I'm not the world's most committed LASS-user), it’s mostly for a different purpose: not so much for repositioning but for something which I haven’t mentioned yet and which might be called: *dynamic spatialization*. Something I can only really do the way I feel it needs doing, with SPAT.

See, I often find, when you simply put reverb on an instrument (or section), that there’s no real musical response between the dynamic expression of the source and the way the room reacts to it ― reverb being a very dumb device that will only generate a static response ― and somehow that doesn’t always sound right to me. It's most noticeable, I find, at the beginnings and the diminuendo endings of phrases, or during longer sustains that have some expressive curve: all places where I often want the sound to dissolve a little bit more in its surrounding space. And you can’t do that with just CC11 alone, of course.

If I don’t use SPAT, I simply raise the send of the reverb a little bit at those points, which works quite well, but with SPAT, I can do one better. Well, more than one. By automating YAW, APERTURE and DISTANCE (see the first video above for an explanation of these parameters), I can effectively create a sort of dynamic interaction between the source and its space, increasing the blend at beginnings and endings of phrases, and reducing it during the phrase.
It’s not a phenomenon which occurs in real life quite in the same way of course, but it can be quite musical nonetheless, I believe, because if applied well, it definitely does increase the expression of a performance.

Here’s *a little example*, a simple line in octaves for LASS celli and basses. (Sorry for the quick and rather rough programming of the strings). Now, watch how the source moves, rotates and changes its directional projection a little bit at those points where I want it to dissolve more in the space. You have to be quite careful with this ― I overdid it a bit in the example, but that is only for demonstration purposes ― and you certainly do not want to create the illusion that the instruments are moving back and forth all the time, no, simply a subtle increase of the blurring between source and space. (I like the effect best of all on wind instruments though, especially the SM instruments.)

And here’s *the dry version* (LASS straight out of the box, no processing whatsoever) for comparison purposes. Note the big difference with the spatted version at the beginnings and the endings of the phrases.
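For the curious, the shape of that automation could be sketched like this (the floor/ceiling values are arbitrary placeholders, nothing here is a real SPAT API, and you'd map the output to whatever parameter you automate: APERTURE, a reverb send, DISTANCE):

```python
import math

# Sketch of the "dynamic spatialization" envelope described above:
# high source/space blend at the start and end of a phrase, lower
# in the middle, so the sound dissolves more at the phrase edges.

def blend_envelope(n_points: int, floor: float = 0.2, ceil: float = 0.8) -> list:
    env = []
    for i in range(n_points):
        x = i / (n_points - 1)            # position in phrase, 0..1
        mid_dip = math.sin(math.pi * x)   # 0 at the edges, 1 mid-phrase
        env.append(ceil - (ceil - floor) * mid_dip)
    return env

# Blend is highest at the phrase boundaries, lowest in the middle:
print([round(v, 2) for v in blend_envelope(5)])  # [0.8, 0.38, 0.2, 0.38, 0.8]
```

Applied subtly (far more subtly than the curve above), this gives exactly the "blurring at the edges" effect rather than any audible movement.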

_


----------



## marclawsonmusic (Aug 20, 2014)

Just wanted to say thanks for these videos, Piet. I appreciate you taking the time to share your experience.


----------



## Peter Alexander (Aug 20, 2014)

CobraTrumpet @ Tue Aug 19 said:


> With apologies for continuing to not talk about VSS in this thread...
> 
> Re-peat, when using SPAT with LASS, how do you compensate for LASS's built-in stage placement? (Or do you?)



I'm not re-peat but I'll give you a starting answer. All of the libs we use were recorded in rooms, each with its own RT60 and set of dimensions. Spat enables you to emulate that, or you can use one of the preset rooms as Piet did in his demos. So if you have a very dry sound like LASS, you can insert the strings into a room you created, emulating anything from Air Lyndhurst to Sony, which is a way of matching libs recorded in different rooms.

It all depends on your recording strategy.

You can set Spat to match the stereo width of each section, or using surround, position all the strings at once.

BTW - Piet's vids are terrific.


----------



## Rv5 (Aug 21, 2014)

re-peat @ Tue Aug 19 said:


> For those interested, here are two short videos of SPAT in action. The first one using an anechoic flute recording (so as not to be distracted by the limitations of a sampled instrument), and the second one presenting two 'spatted' SampleModeling instruments, trumpet and sax, against a simple orchestral backdrop. (That orchestral backdrop is very rough and sketchy, I'm sorry about that, but that video wasn't made to display my mock-up skills or lack thereof, but only to illustrate some possibilities of SPAT.)
> 
> *1. SPAT / Quick glance at some parameters*
> *2. SPAT / "Kijé's Wedding (Prokofiev)" / SampleModeling Trumpet & Saxophone*
> ...



Great videos, thanks for posting!


----------



## tmm (Aug 23, 2014)

Piet, that was a great vid! I'm not a SPAT or VSS user, but seeing how you shifted the perspective on the strings to accent the phrases was really inspirational. I control perspective by balancing between 3 busses (dry, ER, and algo tail), and after seeing your vid, I tried automating the level of the dry signal on my strings to pull slightly forward at the beginning of phrases, and away a little at the end of phrases. It works really well to add some extra ebb and flow to the string phrasing. Thanks for that Piet!
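That three-bus balance could even be driven from a single "distance" control; here's a rough sketch with invented gain curves (my own illustration, not taken from any plugin):

```python
# Sketch of a three-bus (dry / early reflections / algorithmic tail)
# perspective control driven by one "distance" knob:
# 0.0 = right up front, 1.0 = back of the hall.

def bus_gains(distance: float):
    d = max(0.0, min(1.0, distance))
    dry = (1.0 - d) ** 2        # dry signal falls off fastest
    er = 4.0 * d * (1.0 - d)    # early reflections peak mid-distance
    tail = d ** 2               # tail dominates at the back
    total = dry + er + tail
    return dry / total, er / total, tail / total

print(bus_gains(0.0))   # (1.0, 0.0, 0.0) -- fully dry up close
print(bus_gains(1.0))   # (0.0, 0.0, 1.0) -- all tail at the back
```

Automating that one control at phrase boundaries gives the same ebb-and-flow effect without riding three faders by hand.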


----------



## gaz (Aug 23, 2014)

Hi Piet. I tried to DL the videos but it looks like they've been removed. Any chance of reposting them?

Cheers,
Gari


----------



## re-peat (Aug 24, 2014)

Gari,

I put all the SPAT-videos together *here*.

_


----------



## gaz (Aug 24, 2014)

Thanks! You're a star!


----------



## Heath (Aug 24, 2014)

I couldn't figure out how to use SPAT in Cubase with more than one channel feeding it, so I gave up and let the demo lapse. But at least I got to hear the reverb in it. It's the best I've heard, and I own the Lexicon PCM Native Reverb, so that's saying a lot!

BTW, a happy VSS user here. With respect, the people who don't like it may not be using it correctly (and yes, you do have to be very careful with positioning - a tiny shift can make the difference between great and, well, not great depending on the timbre of the instrument in use) or are simply finding that it's not suited to their particular music/ears. I'm looking forward to V2.


----------



## Marius Masalar (Aug 24, 2014)

Accidentally reported a post in this thread...that's what I get for tablet browsing. Sorry, mods! Ignore the report and delete this message as needed.


----------



## Daryl (Aug 25, 2014)

Heath @ Mon Aug 25 said:


> I couldn't figure out how to use SPAT in Cubase with more than one channel feeding it, so I gave up and let the demo lapse. But at least I got to hear the reverb in it. It's the best I've heard, and I own the Lexicon PCM Native Reverb, so that's saying a lot!


You have to insert it onto a multi-channel group. However, because you can't create your own multi-channel preset in Cubase, you have to find one that most closely matches what you want to do. I found that more than 4 stereo instruments required a huge workaround, and TBH wasn't worth the effort.

There is a very good post in the Nuendo forum that I could link to, if you are interested in giving it another go.

D


----------



## Heath (Aug 25, 2014)

Daryl @ Mon Aug 25 said:


> You have to insert it onto a multi-channel group. However, because you can't create your own multi-channel preset in Cubase, you have to find one that most closely matches what you want to do. I found that more than 4 stereo instruments required a huge workaround, and TBH wasn't worth the effort.
> 
> There is a very good post in the Nuendo forum that I could link to, if you are interested in giving it another go.
> 
> D



Please do share the link. Thanks. Sounds quite complex though. I'm not familiar with the method you're talking about. To be honest, I'm the kind of idiot that loves a good step-by-step tutorial video about these things. Also, although I might like to give SPAT another try, I'm at the end of my demo period - one quick bite of the cherry and goodnight, yet they didn't sell it to me!


----------



## melonioustonk (Oct 25, 2014)

Hello!

First time post, long time reader here. I'm wondering if anyone here has been able to successfully upgrade their VSS 1 to version 2.0. I requested an upgrade code via the VSS webpage a few days back, and I also tried emailing Gabriel directly three days ago, with no response; in my previous experience he had been very quick in answering any queries. Hopefully everything is well with him, as I understand that he's doing all this by himself... anyone here had any contact with him recently? Thanks for reading 


----------



## playz123 (Oct 25, 2014)

melonioustonk @ Sat Oct 25 said:


> Hello!
> 
> First-time post, long-time reader here. I'm just wondering if anyone here was able to successfully upgrade their VSS1 to version 2.0. I ask because I requested an upgrade code via the VSS webpage a few days back, and I also tried emailing Gabriel directly three days ago, with no response. In my previous experience he has been very quick in answering any queries, so hopefully everything is well with him, as I understand he's doing all this by himself... Anyone here had any contact with him recently? Thanks for reading



I had no problems doing what you want to do, but that was a while ago. Yes, Gabriel usually is quite prompt, so that's a mystery. Any chance your ISP could be blocking a reply, or it's being identified as spam? Anyway, maybe try again, and hopefully your message will get through. He's usually notified as soon as an order is placed too, so let's hope the online purchasing service did their job.


----------



## melonioustonk (Oct 25, 2014)

> I had no problems doing what you want to do, but that was a while ago. Yes, Gabriel usually is quite prompt, so that's a mystery. Any chance your ISP could be blocking a reply, or it's being identified as spam? Anyway, maybe try again, and hopefully your message will get through. He's usually notified as soon as an order is placed too, so let's hope the online purchasing service did their job.




Thanks Frank! I emailed Gabriel the day after VSS 2 came out, just a general question, and he replied back the next day. Unfortunately I was in the middle of a scoring project and was unable to commit to VSS 2 at the time. I just hope he's well. I'll wait a few more days, or maybe I'll try the support link on the web page instead.


----------



## Sid Francis (Oct 25, 2014)

No need to wait, just email him and he will answer in no time: a very nice guy, and Version 2 is too good to waste a few days. :-)


----------



## melonioustonk (Oct 25, 2014)

> No need to wait, just email him and he will answer in no time: a very nice guy, and Version 2 is too good to waste



Thanks Sid. I emailed him three days ago at his direct address, as I had not received my upgrade code (requested via the webpage days before). Any email from Gabriel has previously gone directly to my inbox, and I've also been checking my spam folder; no replies there either. I'll try my luck with the email from his support link.

Thanks for reading!


----------

