# What makes a mix sound wide if it's bad practice to pan orchestral samples?



## thevisi0nary (Jun 6, 2019)

I know that orchestral samples are usually pre-panned, but when I reference my tracks against a soundtrack that I like, the soundtrack is usually still much wider.

I am wondering what kinds of techniques people are using with their sample libraries to get them to really fill out the space while still having presence and separation.


----------



## jneebz (Jun 6, 2019)

Cubase Stereo Enhancer. Just gotta be careful to check your mix in mono.


----------



## David Chappell (Jun 6, 2019)

Nothing wrong with panning orchestral samples further, in my opinion. Just be sure your DAW is using balanced panning (reducing the volume of one channel relative to the other) rather than stereo panning (where one channel gets gradually bled into the other - this will cause issues). Also be sure your arrangement is well balanced from left to right, otherwise it can sound a bit weird if you go a bit mad with it.
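To illustrate the distinction, here is a minimal per-sample sketch in Python. The function names and the simple linear pan law are mine for illustration, not any DAW's actual implementation:

```python
# Samples are (left, right) float pairs; pan runs from -1.0 (hard left)
# to +1.0 (hard right).

def balance_pan(left, right, pan):
    """Balance-style pan: only attenuates the opposing channel,
    preserving any inter-channel time differences in the recording."""
    if pan > 0:          # panning right: attenuate the left channel
        left = left * (1.0 - pan)
    elif pan < 0:        # panning left: attenuate the right channel
        right = right * (1.0 + pan)
    return left, right

def stereo_pan(left, right, pan):
    """True-stereo pan: bleeds signal from one channel into the other,
    which can smear the time-difference cues of spaced-pair recordings."""
    if pan > 0:          # panning right: mix some of L into R
        return left * (1.0 - pan), right + left * pan
    elif pan < 0:        # panning left: mix some of R into L
        return left + right * -pan, right * (1.0 + pan)
    return left, right
```

With `balance_pan`, a signal present only in the left channel never leaks into the right; with `stereo_pan` it does, which is exactly the "bleed" described above.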


----------



## shawnsingh (Jun 6, 2019)

What soundtrack is your reference? In a traditional orchestral style, the feeling of width often comes from recording and orchestration more than mixing techniques.

For example, spaced mic pairs like "outriggers" or a Decca tree will give a more 3D spatial image than coincident mic techniques. And capturing a bit of the room in those mic positions also adds more feeling of width to the recording. So if you can favor Decca tree positions or similar in your template over close mics, that could be worth a try.

But even more important could be orchestration. Layering too much can actually smooth out the high frequencies that would have given clarity and power to each instrument, and the result becomes more muddy and vaguely centered. Instead, assigning separate notes of a chord to individual wind instruments, or individual string sections, will spread the music in stereo, and it will also allow the brightness/clarity of each instrument to shine on its own even when the entire orchestra is playing.

Great examples of these points are the Andy Blaney demos for Spitfire Symphonic Brass. I've heard those were done using outriggers only, but I don't really know.

Do you feel like these points might explain some width that you're hearing in your reference track, or do you feel it's something else?


----------



## Heinigoldstein (Jun 7, 2019)

You can experiment with different mic positions. Wide room mics in combination with panned close mics can help sometimes.


----------



## CGR (Jun 7, 2019)

shawnsingh said:


> Instead, assigning separate notes of a chord to individual wind instruments, or individual string sections, will spread the music in stereo, and it will also allow the brightness/clarity of each instrument to shine on its own even when the entire orchestra is playing.


 ^ This. A big factor I've found in attaining clarity, depth and fullness.
Re. panning, I find this plugin invaluable:
https://www.bozdigitallabs.com/product/pan-knob/


----------



## Light and Sound (Jun 7, 2019)

Most of the mixers I've worked with go for the Waves S1 on the mix bus. I tried it myself for my writing template; it's powerful, but you need to be restrained with it - just give things a little bit of a boost if needed.

For orchestral music you can pan the close mics more extremely. This only works if the samples keep their natural recording phase (so the close mics arrive ahead of the hall mics). Keeping the close mics in the mix but quite low - as low as you can while still hearing a difference when you bypass them - will let you get that large stereo width and separation: you hear the attack of the close mic, which then immediately "resolves" into the ambient mics, giving the impression things are panned more than they are.

Just keep in mind that any changes you make to close mics should be done before you use any stereo enhancement, stereo imaging should always be done last and only IF you need it - not just "because".


----------



## CGR (Jun 7, 2019)

Light and Sound said:


> Most of the mixers I've worked with go for the Waves S1 on the mix bus. I tried it myself for my writing template; it's powerful, but you need to be restrained with it - just give things a little bit of a boost if needed.
> 
> For orchestral music you can pan the close mics more extremely. This only works if the samples keep their natural recording phase (so the close mics arrive ahead of the hall mics). Keeping the close mics in the mix but quite low - as low as you can while still hearing a difference when you bypass them - will let you get that large stereo width and separation: you hear the attack of the close mic, which then immediately "resolves" into the ambient mics, giving the impression things are panned more than they are.
> 
> Just keep in mind that any changes you make to close mics should be done before you use any stereo enhancement, stereo imaging should always be done last and only IF you need it - not just "because".


Great info here. Thanks for sharing.


----------



## Akarin (Jun 7, 2019)

A quick thing I use often is to place a stereo enhancer on the hall reverb bus. Gentle 110% works wonders.
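For reference, a widener like that is usually a mid/side gain under the hood. Here's a minimal per-sample sketch; mapping the "110%" figure to a side gain of 1.1 is my assumption, not the plugin's actual math:

```python
def widen(left, right, width=1.1):
    """Mid/side widening: encode L/R to mid (sum) and side (difference),
    scale the side signal, then decode back. width=1.0 is a no-op, and
    the mono sum (L+R) is unchanged for any width, so it stays mono-safe."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0 * width
    return mid + side, mid - side
```

Placing this on the reverb bus only, as suggested, widens the tail while leaving the direct sound untouched.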


----------



## shawnsingh (Jun 7, 2019)

Interesting, I'm going to try that next time.


----------



## axb312 (Jun 7, 2019)

Anything wrong with using A1 stereo control?


----------



## Dietz (Jun 7, 2019)

David Chappell said:


> Just be sure your DAW is using balanced panning (reducing the volume of one channel relative to the other) rather than stereo panning (where one channel gets gradually bled into the other - this will cause issues).



What kind of issues? 8-/

Quite the contrary: using "balance" instead of a proper panning device will very likely ruin the sound, because you lose 50 percent of the information. As a (very obvious) example, imagine the recording of a piano in full stereo: when you just lower the volume of the right side to make it appear to sit left on the stage, you won't hear much of the all-important mid and treble range any more.


----------



## David Chappell (Jun 7, 2019)

Dietz said:


> What kind of issues? 8-/
> 
> Quite contrary: Using "balance" instead of a proper panning device will very likely ruin the sound because you lose 50 percent of the information. As a (very obvious) example, imagine the recording of a piano in full stereo: When you just lower the volume of the right side to make it appear to sit left on the stage, you won't hear much of the all-important mid- and treble-range any more.


A relatively close-mic'd piano is more of an exception, I would say.

I'll need to ramble a bit to explain (for the benefit of others reading), but hey ho.

So, there are two main ways we perceive directionality: level difference and time difference. A basic example is how a mono sound can be made to sound more left by decreasing the level of the right channel - that's level difference.

But with stereo recordings, time difference comes into play. With 1st violins the sound is coming from the left, and since sound doesn't travel instantaneously, it will be picked up by the left microphone slightly before the right - only by a few milliseconds, but on playback the brain can perceive that the sound is coming from the left. The Haas effect is a way to exploit this on mono sources to create directionality.

In such a situation both microphones receive a similar level of signal, so the directionality is coming more from time difference than level difference.

If you used true stereo panning, you'd be bleeding the slightly delayed signal from one channel into the other, which would, if anything, somewhat ruin the time-difference effect. With balanced panning you just reduce the level of one channel, so the time difference is preserved. For instruments with room/hall mics, that's why I'd say balanced is better.

The reason I say a close-mic'd piano is more of an exception is that the source doesn't stay central, and the two microphones don't necessarily receive similar levels - the left microphone will pick up bass notes louder than the right, since it's closer to those strings, and treble notes quieter, since those strings are further away. In that situation it's better to use true stereo panning so the levels stay consistent across the range.


----------



## shawnsingh (Jun 7, 2019)

I think both perspectives are valid depending on the scenario. Sounds like Chappell is considering Decca tree / spaced-microphone setups where the room is quite present in the recordings. Sounds like Dietz is considering very dry recordings, especially ones recorded with coincident mic setups that minimize the delay between the channels.

Feel free to correct me if I'm mischaracterizing the assumptions from either of you =)


----------



## Dietz (Jun 8, 2019)

shawnsingh said:


> Sounds like Dietz is considering very dry recordings, especially if they're recorded with coincident mic setups that minimize the delay between the channels.


In a similar vein, yes. Not necessarily "very dry", and not just coincident, but also all kinds of small A/B setups (like ORTF). - I rarely feel the urge to pan recordings derived from a full-blown Decca tree. ;-D


----------



## Saxer (Jun 8, 2019)

There are a lot of hybrid soundtrack mixes where the center is left open for dialogue. So the signals are panned harder, but often compensated by corresponding signals on the opposite side - first violins left and seconds right, or additional textures, synth doublings, etc. That makes the mix feel wider than a traditional orchestral mix.


----------



## labornvain (Jun 9, 2019)

David Chappell said:


> So, there's two main ways we perceive directionality: level difference and time difference. Basic example being how a mono sound can be made to sound more left by decreasing the level of the the right channel, which would be using level difference.
> 
> .



So this is generally true, and I don't mean to be pedantic, but the two primary localization cues our brains use to determine the source of a sound are actually time delay and frequency response.

The latter is a result of what's called the auditory shadow. Frequencies higher than about 2 kHz are blocked by your head. So if a sound is coming from 90° to your left, it will reach your right ear at about the same volume, except for the frequencies blocked by the auditory shadow.

You can test this by taking a mono signal and splitting it into two channels coming out of your left and right speakers at equal volume. Then roll off the frequencies above 2 kHz on the right channel and it will sound very much like the sound source is on your left.

The reason is that the human head is on average about 5 in wide. Frequencies whose wavelengths are shorter than that get blocked by your head, whereas longer wavelengths bend around it. Wavelength is just the speed of sound divided by frequency, so sound above roughly 2-2.5 kHz has a wavelength of around 5 in or less, and therefore gets blocked.
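That back-of-the-envelope relation (wavelength = speed of sound / frequency) is easy to check. The 343 m/s speed of sound and the 5 in (0.127 m) head width are the assumptions here:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def wavelength_m(freq_hz):
    """Wavelength (in meters) of a sound wave in air: lambda = c / f."""
    return SPEED_OF_SOUND / freq_hz

def shadow_cutoff_hz(head_width_m=0.127):  # 0.127 m is about 5 in
    """Rough frequency above which the wavelength is shorter than the
    head, so the head-shadow (frequency) cue starts to dominate."""
    return SPEED_OF_SOUND / head_width_m
```

`wavelength_m(2500)` comes out around 0.137 m (about 5.4 in), and a 5 in head gives a cutoff near 2.7 kHz - consistent with the 2-2.5 kHz ballpark above.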

The other primary localization cue is the time delay between the ears, as you already mentioned. But again, a difference in level is not required for localization to occur.

Indeed, it takes a fairly extreme difference in level for localization to occur. If you take your split mono source running to two separate channels on a mixer and merely pull the volume down on one channel, localization only begins to occur when the difference is quite drastic.

But what's happening is that by turning down, say, the right channel, you're really just allowing the other localization cues to take effect.

I mean, if the sound source is only coming from your left speaker, then you are not simulating localization cues - you genuinely have a sound source on your left side. So the time delay and the frequency roll-off are allowed to have their effect.

When you have a mono sound source running to two speakers, each speaker produces its own set of localization cues - time delay and auditory shadow. But these cues are masked by the other speaker, so it sounds like it's coming from the middle, sort of.

So lowering the level of one side is actually just removing this masking effect.

This all might seem a bit academic, but it has real-world consequences. When a mono source comes out of two spatially separated speakers, each ear hears a delayed version of the opposite speaker combined with the non-delayed speaker on its own side. This creates a kind of fuzziness that we've all just gotten used to. If you want to hear a mix without this fuzziness, listen to some of the old Beatles records where everything was panned hard left or hard right.

We all find this panning method a bit novel these days, but it was actually responsible for making those Beatles tracks sound incredibly punchy. When you're in a room and the bass is coming out of one speaker and one speaker only, clarity ensues.


----------



## colony nofi (Jun 9, 2019)

@David Chappell - this is a very interesting topic and has been thoughtfully / respectfully debated. I like it. 
Maybe if it goes further / new questions come up I might chime in with a few ideas/explanations of my own. But - @Dietz has - er- how to put this - a rather in depth knowledge and unique wisdom (and massive amount of experience) when it comes to this stuff...


----------



## Dietz (Jun 10, 2019)

colony nofi said:


> But - @Dietz has - er- how to put this - a rather in depth knowledge and unique wisdom (and massive amount of experience) when it comes to this stuff...



Huh! Thanks for the flowers ... :emoji_bouquet:  ... but I still learn something new from every production I do, and from every discussion - after more than 30 years in this business. So please don't hesitate to share your thoughts!


----------



## thevisi0nary (Jun 10, 2019)

Awesome advice to digest here. What do people think about panning the close mics only? Will this create phase issues if room mics are introduced? Also when it comes to panning mics with room sound, is it optimal or not optimal to use stereo balance for panning?


----------



## thevisi0nary (Jun 10, 2019)

shawnsingh said:


> What soundtrack is your reference? in a traditional orchestral style, the feeling of width often comes from recording and orchestration more than mixing techniques.
> 
> For example, spaced mic pairs like "outriggers" or Decca tree will be good for a 3d spatial image compared to coincident mic techniques. And capturing a bit of the room in those mic positions also add more feeling of width in the recording. So if you can use Decca mic positions or similar in your template over close mics, that could be worth a try.
> 
> ...



In this track it feels like there is a fair bit of distance between the dueling string lines. Also when the horns come in they feel reasonably to the left.


----------



## shawnsingh (Jun 11, 2019)

Yeah, in my opinion the example video you've posted feels wide for both of the reasons I mentioned earlier - the orchestration and the localized positioning of each instrument (I'm not sure how much of this is virtual vs. real?). The strings are actually chamber-like in size, at least some of the time, which gives a more detailed feeling that also helps - it enhances the sense of positioning and clarity even more. This mix also did *not* leave the center empty, which is probably a good thing. (I don't have much experience with extreme panning, but I'd expect a hole in the middle to actually detract from the feeling of width more than it helps.)

The room/reverb is also recorded/mixed quite transparently, in my opinion. I can't tell for sure, but it seems like there may be a few different recordings or instruments with slightly different room/reverb spaces - it doesn't really matter, though. For the most part this amount of room/reverb almost melts away into the background and keeps things feeling clear, while still providing an undeniable feeling of distance and space. I've always been partial to that kind of mix. As a side note, I feel like Teldex recordings usually have that property too - something about the room having slightly weaker first/second early reflections while still keeping a nice room sound and long tail.


----------



## Dietz (Jun 11, 2019)

shawnsingh said:


> I do expect a hole in the middle would actually detract from the feeling of width more than it would help


Exactly. Mixing is always about relations: if everything is loud, nothing is loud. And this is true for any aspect of sound, of course: if everything is dry and close, nothing is dry and close. If everything is bright, nothing sounds bright any more. And so on.

Or put simply: The basic question of mixing is "Louder than WHAT?" 

So the sensation of "width" will be much bigger when there are narrow sounds in the center for comparison. ... This is a common trick in pop music, BTW: have a verse in (almost) mono, then overwhelm the listener with a super-wide chorus.


----------



## MartinH. (Jun 11, 2019)

thevisi0nary said:


> In this track it feels like there is a fair bit of distance between the dueling string lines. Also when the horns come in they feel reasonably to the left.




Iirc Penka said in the GDC talk on the Bloodborne OST that they recorded in different locations for the DLC. The base game was recorded in Air Lyndhurst I think: 




I think she said some aleatoric stuff was taken from Symphobia, but otherwise it should mostly be real players.

But don't take my word for it, it's been a while since I watched it. Here's the talk:


----------



## ProfoundSilence (Jun 21, 2019)

Two main things: 1) microphone choices - if you don't pick the wider mics it won't sound as wide - and 2) you can still pan close mics freely.

A bonus tip would be using a Haas delay on the close mics as well, which in layman's terms means delaying the weaker side by a few milliseconds.

I.e. pan your Violins I close mic more to the left, then delay the right signal by 2-8 or so milliseconds (honestly, just use your ears and a pair of headphones). Often I adjust both the pan and the delay at the same time to approximate where I actually want the instrument to "feel" like it is.
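A minimal sketch of that Haas trick in Python (sample-based delay; the function and parameter names are mine):

```python
def haas_pan(mono, delay_ms=4.0, sample_rate=48000, toward_left=True):
    """Haas-style panning: feed a mono signal to both channels but delay
    the weaker side by a few milliseconds (2-8 ms, per the tip above),
    so the ear localizes toward the earlier, undelayed channel.
    `mono` is a list of samples; returns (left, right) sample lists."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    delayed = [0.0] * delay_samples + list(mono)   # late channel
    on_time = list(mono) + [0.0] * delay_samples   # early channel, padded to match
    return (on_time, delayed) if toward_left else (delayed, on_time)
```

In practice you'd combine this with a level pan, as described above, and tune the delay by ear while checking mono compatibility.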


----------



## Jerry Growl (Jun 22, 2019)

Here are 12 great tips from Waves:

https://www.waves.com/tips-for-wider-stereo-mix


----------



## Per Boysen (Jun 22, 2019)

A fun and useful exercise is to set up an M/S matrix in your DAW:

1. Split the stereo mix into three busses.
2. On the first bus reverse left and right.
3. On the second bus, invert the phase (flip the polarity).
4. On the fourth bus, make it mono.

Now, the first two busses together produce an extremely wide field - and also a hole in the middle. Use the mono bus to fill that hole. By adjusting the levels of these three channels you have full control over the stereo width and sound definition. The last step would be to use another L-R swap to get it back to the original orientation.

I might be wrong, but I'd guess this is basically what these kinds of stereo-widening plugins are doing. Setting it up manually and experimenting can be quite educational, though.


----------



## Andrew Souter (Jun 23, 2019)

Hope it doesn't sound salesman-y, but this is designed specifically for the topics under discussion:

https://2caudio.com/products/precedence

I think you might find it highly interesting.


----------



## thevisi0nary (Jun 24, 2019)

MartinH. said:


> Iirc Penka said in the GDC talk on the Bloodborne OST that they recorded in different locations for the DLC. The base game was recorded in Air Lyndhurst I think:
> 
> 
> 
> ...




I've watched this video before and I absolutely love it. I wish so bad that I could study the sheet music. There is really incredible music on this soundtrack.


----------



## thevisi0nary (Jun 24, 2019)

Per Boysen said:


> A fun and useful exercise is to set up an M/S matrix in your DAW:
> 
> 1. Split the stereo mix into three busses.
> 2. On the first bus reverse left and right.
> ...



That is very interesting, will have to give this a try. Do you think it's possible to run into phasing issues doing this?


----------



## thevisi0nary (Jun 24, 2019)

Andrew Souter said:


> hope it doesn't sound salesman-y but, this is designed specifically for the topics of discussion:
> 
> https://2caudio.com/products/precedence
> 
> I think you might find it highly interesting.



Sounds fantastic, will definitely keep this on my watch list. What is the difference between something like this and Virtual Sound Stage by Parallax?


----------



## pkm (Jun 24, 2019)

thevisi0nary said:


> That is very interesting, will have to give this a try. Do you think it's possible to run into phasing issues doing this?



The phasing is on purpose here - it's what makes the stereo effect work. The stuff in the center is out of phase and cancels out, leaving only the stuff on the sides. When collapsed to mono, the first two busses completely cancel out and the 3rd mono bus is all that remains, phasing-free.


----------



## shawnsingh (Jun 24, 2019)

I think @thevisi0nary was asking whether the phase differences you're configuring would cause unintended issues like comb filtering, some kind of "chorusy" effect, or mono-compatibility problems.

Also something else that is keeping me awake at night... Whatever happened to the third bus?


----------



## pkm (Jun 24, 2019)

shawnsingh said:


> I think @thevisi0nary was asking whether the phase differences you're configuring would cause unintended issues like comb filtering, some kind of "chorusy" effect, or mono-compatibility problems.
> 
> Also something else that is keeping me awake at night... Whatever happened to the third bus?


Yeah, there can definitely be those types of phase issues - it's inherent in the technique. Kinda like when you hear "a cappellas" or instrumentals that have a weird phasey ghost of other instruments. It's not perfect, but it can be very effective.

But mono compatibility is almost perfect, because the sides completely cancel out, leaving only the mono bus by itself. The only possible problem is overall volume.

And we don’t talk about the third bus.


----------



## Per Boysen (Jun 25, 2019)

thevisi0nary said:


> That is very interesting, will have to give this a try. Do you think it's possible to run into phasing issues doing this?


It's mainly a mastering technique, so provided you've fixed phasing issues in the mixing stage, this shouldn't add any. On the contrary, actually - you may reach better mono compatibility (even for a wide stereo mix).


----------



## Andrew Souter (Jun 25, 2019)

thevisi0nary said:


> Sounds fantastic, will definitely keep this on my watch list. What is the difference between something like this and Virtual Sound Stage by paralax?




I am not completely familiar with Virtual Sound Stage, and as I know how much work goes into making products like this, I don't like to directly point out perceived weaknesses of competing products. Indeed these guys seem to do nice work, and they even recommend pairing their product with our algo verbs such as Aether/B2 to supply tails, so I have mutual respect for their work. I believe Virtual Sound Stage is primarily a gain panner with a built-in early-reflections engine, but I am not 100% sure, so don't quote me.

Our system changes the direct sound itself, providing instantaneous audio source width; uses several psychoacoustic techniques related to those discussed in this thread to achieve positioning; offers as much mono-compatibility as you like/need via three different algorithm modes and control over various positioning rules; and is modulated to give additional life and organic feeling to the result.

Our system is furthermore capable of inter-plugin communication between Precedence and Breeze 2.5: position information in Precedence is communicated to a linked instance of the reverb engine, and the entire DSP settings of the reverb engine update in response to position. This creates something like an algorithmic multiple-impulse-response system. There is infinite variation in Precedence and Breeze 2.5 depending on position, and both are modulated. Combined, they create an incredible sense of depth and positioning. It's truly next level!

It's almost like when Spitfire or another library company records at Air and offers 20 different mic positions or similar, sometimes with in-situ positional variations. Our system can take a completely dry library, a physically modeled instrument like Sample Modeling, or a real recording from your studio, and do the same - but not with just 20 or so positions: literally infinite.

And it also has the ability to work well with and complement libraries that are already room-y; we are well aware that some great libraries are recorded with lots of room sound. We have various input modes to address this and help blend wet libraries with dry ones.

Furthermore, the new Precedence 1.5 offers Multi-Instance Editing and Edit Groups! Not only can you see 10, 50, 100, 200 instances within a single plug-in GUI, you can also EDIT them! Changing instance selection within a shared GUI is MUCH, MUCH faster than constantly switching between the DAW mixer and many plug-in instances! The linked reverb engine can have its instance selection controlled by Precedence as well, so you can keep one GUI editor open for both and quickly control many instances with the same convenience as controlling one!

Finally, parameters in both Precedence and Breeze 2.5 can be changed en masse for the entire Edit Group! So you can load preset changes for the entire group with a single click, and position information is retained! You can metaphorically transport your mix from Air to Boston hall or wherever else you like by changing the preset in Breeze for the entire group, while retaining the relative in-situ positions! Or you can change the Alg Mode in Precedence between Beta and Mu and export two different mixes, the latter with enhanced mono-compatibility if that is a critical concern. Or change the Delta and Loss parameters to alter the positioning rules for the entire group and create macro-changes to "spatial contrast" for the entire group.

Etc., etc. - but I will stop because this starts to sound salesman-y.

We hope to have videos ready shortly to explain this all better, but the manual is already online with full details. Hope it helps.


----------



## thevisi0nary (Aug 24, 2019)

Andrew Souter said:


> I am not completely familiar with Virtual Sound Stage, and as I know how much work goes into making products like this, I don't like to directly point out any perceived weaknesses of competitive products. Indeed these guys seem to do nice work, and they even recommend pairing their product with our algo verbs such as Aether/B2 to supply tails, so I have a mutual respect for their work. I believe Virtual Sound Stage is primarily a gain panner with a built-in Early Reflections engine, but I am not 100% sure, so don't quote me.
> 
> Our system changes the direct sound itself, providing instantaneous audio source width; uses several psychoacoustic techniques related to those discussed in this thread to achieve positioning; offers as much mono-compatibility as you like/need via three different algorithm modes and control over various positioning rules; and is modulated to give additional life and organic feeling to the result.
> 
> ...



I'm thinking that I will almost definitely be getting precedence and breeze at some point, the combo looks amazing. Thank you for telling me how they actually work together instead of just linking me to the product site!


----------



## thevisi0nary (Aug 24, 2019)

I've finally realized, after carefully looking at the pictures of the recording session (for the sound I've been trying to emulate), that the strings are not in the standard seating but in "European Seating". This has to have a huge impact on how the sound sits in the stereo field (even with tree mics etc.).



https://c2.staticflickr.com/6/5449/17811759741_cbdf098b44_b.jpg



The only thing I can't tell is if it's Violas on the right or 2nd Violins.

It looks to be like:

-----------Basses
-----Violas-------Cellos
V1------------------------V2


----------



## Scoremixer (Aug 24, 2019)

thevisi0nary said:


> -----------Basses
> -----Violas-------Cellos
> V1------------------------V2



That's correct


----------



## MartinH. (Aug 24, 2019)

thevisi0nary said:


> I've finally realized after carefully looking at the pictures of the recording session (for the sound I've been trying to emulate) that the strings are not seated in situ but in "European Seating". This has to have a huge impact on how the sound sits in the stereo field (even with tree mics etc).
> 
> 
> 
> ...



Thanks a lot for posting this! Are you working on a Bloodborne mockup or just trying to get the template there? I'd be interested to hear how far you've come.


----------



## thevisi0nary (Aug 24, 2019)

Scoremixer said:


> That's correct



Thank you. Getting this accomplished with CSS will be a fun challenge hahaha.


----------



## thevisi0nary (Aug 24, 2019)

MartinH. said:


> Thanks a lot for posting this! Are you working on a Bloodborne mockup or just trying to get the template there? I'd be interested to hear how far you've come.



You got it! Really, I'm just a little obsessed with the soundtrack and have been trying to learn as much as I can about it, so that I can incorporate those elements into my writing.

On MuseScore I found a fairly accurate transcription of the string section of the Cleric Beast theme, so I just transcribed that for MIDI. I haven't really done any CC tweaking so far, so it's pretty raw.


----------



## MartinH. (Aug 25, 2019)

thevisi0nary said:


> You got it! Really I am just a little obsessed with the soundtrack and I have been trying to learn as much as I can about it, so that I can incorporate those elements into my writing.
> 
> On MuseScore I found a fairly accurate transcription of the string section of the Cleric Beast theme, so I just transcribed that for MIDI. I haven't really done any CC tweaking so far, so it's pretty raw.




Thanks for sharing! Good to see someone else transcribing from that soundtrack too. I've picked "The Hunter" for my mockup and I'm endlessly wrestling with hearing what the low strings are doing, but I think I finally made some progress by figuring out that they probably used a combination of triplets and quintuplets - I'd never have thought of that.

I haven't looked at any transcriptions yet; I thought I'd learn the most by doing it myself. Ideal would be having the actual score to check afterwards what I got right, but that seems to be impossible. I've read about people trying to get it through Sony for an academic project, but that seems to have reached a dead end.

I thought I was probably pretty bad at transcribing, but I recently tried mocking up a metal song from a tab (because I was lazy and didn't want to do the work), and I very quickly found errors - the way I'd transcribed it by ear fit the original much better.

I always try to map out the tempo so that I can play my version and the reference track exactly in sync, then I switch back and forth or let them play at the same time. It helps to hear the original and toggle my version on and off during playback, to check whether it just gets louder or whether there are harmonic changes that shouldn't be there.

Do you happen to know what percussion they used? I feel like there must be some non-tonal stuff filling out the frequency spectrum that I'm missing. And do you know if/how they used synths or samples to augment the orchestra?

Have you thought about how you're gonna mock up the choir in "Cleric Beast"? I'd imagine that to be quite tough because it's so exposed.


----------



## MartinH. (Aug 28, 2019)

thevisi0nary said:


> You got it! Really I am just a little obsessed with the soundtrack and I have been trying to learn as much as I can about it, so that I can incorporate those elements into my writing.
> 
> On MuseScore I found a fairly accurate transcription of the string section of the Cleric Beast theme, so I just transcribed that to MIDI. I haven't done any CC tweaking so far, so it's pretty raw.




In case you're interested, this is where I'm currently at with my mockup/transcription practice. The high strings at the end are still very wrong; I haven't figured those out yet, but I haven't given up either:




and the original for reference:


----------



## Pincel (Aug 28, 2019)

MartinH. said:


> In case you're interested this is where I'm currently at with my mockup/transcription practice. The high strings at the end are still very wrong, haven't figured those out yet, but not given up either:
> 
> 
> 
> ...




Love the Bloodborne soundtrack! There's definitely a lot of interesting stuff to be learned regarding composition/orchestration by studying that music. It would be really cool to have access to the scores, but probably never going to happen.

Your mockup is sounding very good for the most part!
Since you're struggling with that last section, I took the liberty of making a small transcription, in case you're interested; I believe it's a bit more accurate, or at least it might help you in some way.
It IS hard to make out exactly what the low strings are doing in that part, though, so take it with a grain of salt.

Keep it up man, that's definitely the way to learn. I should stop being lazy and start doing this kind of stuff too... :D


----------



## thevisi0nary (Aug 28, 2019)

MartinH. said:


> Thanks for sharing! Good to see someone else transcribing from that soundtrack too. I've picked "The Hunter" for my mockup and I'm endlessly wrestling with hearing what the low strings are doing, but I think I finally made some progress by figuring out that they probably used a combination of triplets and quintuplets, and I'd never have thought of that.
> 
> I haven't looked at any transcriptions yet; I figured I'd learn the most by doing it myself. Ideally I'd have the actual score to check afterwards what I got right, but that seems to be impossible. I've read about people trying to get the scores through Sony for an academic project, but that seems to have reached a dead end.
> 
> ...



Yes, I've found forum threads about people writing to Sony or to the composers and orchestrators; no one has been able to get any official scores. =/ It's really a shame.

The GDC video says the percussion consisted of timpani and different chimes; I don't know whether that includes the SFX hits. I do know they say in the video that, at least for the strings, basically all of the aleatoric sounds are from Symphobia. The other orchestrator said the brass SFX were recorded (at least for the DLC tracks). There also definitely has to be some atmospheric SFX going on in some parts (like the ones you hear at the beginning of many of the tracks). They haven't said anything that would indicate synth layering to augment the orchestra.

For the choir I've thought about investing in EW Hollywood Choirs, mainly because of the word builder and the fact that it sounds pretty exposed. I feel similar to you in that most of the choir libraries I've heard in demos have that heavy cathedral ambience going on.


----------



## thevisi0nary (Aug 28, 2019)

MartinH. said:


> In case you're interested this is where I'm currently at with my mockup/transcription practice. The high strings at the end are still very wrong, haven't figured those out yet, but not given up either:
> 
> 
> 
> ...




This sounds awesome, you did an excellent job! What libraries are you using for the brass? I do think the starting notes of some of the run parts (0:31) are about one step higher. Sounds really sick.


----------



## MartinH. (Aug 29, 2019)

Pincel said:


> Your mockup is sounding very good for the most part!
> Since you're struggling with that last section, I took the liberty to make a small transcription if you're interested, which I believe is a bit more accurate, or at least might help you in some way.
> It IS hard to make out exactly what the Low Strings are doing in that part though, so take it with a grain of salt.
> 
> Keep it up man, that's definitely the way to learn. I should stop being lazy and start doing this kind of stuff too... :D



Thanks so much for the feedback, help and encouragement!
I gave that high strings part another shot before I looked at your version and got a bit closer, but after comparing it to your version I think yours must be more correct. I'll upload a new version soon that goes on a tiny bit longer and has the first bit of choir in it.




thevisi0nary said:


> This sounds awesome, you did an excellent job! What libraries are you using for the brass? I do think the starting notes of some of the run parts (0:31) are about one step higher. Sounds really sick.


Thanks a lot! The brass is all Metropolis Ark 1 (horns, trombones, tuba), and in one place the "majestic horn" from Organic Samples / Orchestral Tools is layered, but I'm not sure I even needed that. I'll probably take it out as I tweak some more.

I checked the brass part you mentioned, and listening to just the two SoundCloud tracks here I thought "damn, you're right, how could I miss that?" But actually trying it out, playing the original and mine in sync and shifting that melody various numbers of semitones up or down, I don't think any other pitch fits better. My rule of thumb is: if it sounds equally bad one semitone up and one semitone down, I'm likely on the right note, but possibly in the wrong octave. And I think I indeed had most of the trombone parts one octave too high. The next version I upload should be closer.

The strings I used are about half-and-half NI SSC SE and Met Ark 1, the percussion is also from the NI Symphony Series Collection, and the choir is currently Requiem Light.



thevisi0nary said:


> From the GDC video it says that the percussion consisted of timpani and different chimes, I don't know however if that includes the use of sfx hits.


I've used timpani too, but I feel like with just timpani something is missing that a bass drum could fill. That doesn't mean it actually is a bass drum, of course.


This is a different track and a live performance, which doesn't necessarily do everything the same way as the recording sessions, but I'm pretty sure I see a bass drum there:





Regarding the droney bass tone at the beginning, where I thought it might be a synth: I switched the articulation to sordino and it already sounds much closer, I think.


P.S.: To get yet another comparison perspective, I've re-routed things to another track that puts my version in mono, hard-panned left, and the original in mono, hard-panned right. That makes it more obvious when things are out of sync during playback. It makes some things easier to compare and some things impossible to compare, so whether it helps really depends on what you're listening for. But I welcome it as another tool in the belt, and I can easily solo/mute that track as needed.
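That routing is easy to prototype offline too. A minimal numpy sketch, assuming both tracks are already loaded as (n, 2) float arrays (the function and variable names are illustrative, not from any DAW):

```python
import numpy as np

def mono_lr_compare(mockup: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Fold each stereo track (shape (n, 2)) down to mono, then hard-pan
    the mockup left and the reference right for A/B comparison."""
    n = min(len(mockup), len(reference))
    mock_mono = mockup[:n].mean(axis=1)   # L+R summed to mono
    ref_mono = reference[:n].mean(axis=1)
    # Column 0 = left channel (mockup), column 1 = right channel (reference)
    return np.stack([mock_mono, ref_mono], axis=1)
```

Played back, timing drift between the two versions then shows up as an audible left/right echo rather than subtle blurring.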


----------



## Trash Panda (Aug 29, 2020)

MartinH. said:


> Thanks so much for the feedback, help and encouragement!
> I gave that high strings part another shot before I looked at your version and got a bit closer, but after comparing it to your version I think yours must be more correct. I'll upload a new version soon that goes on a tiny bit longer and has the first bit of choir in it.
> 
> 
> ...



Did you get any farther with this? Father Gascoigne is a project I’m working on too.


----------



## vitocorleone123 (Aug 29, 2020)

Suggestions: After recording and mic position, when mixing, use simple panning and mid-side processing (EQ, compression, etc.). Tools that manipulate the stereo image itself should be a last resort, and definitely close to the last step in the chain.
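Mid-side processing comes down to an encode/process/decode round trip. A minimal numpy sketch (the function names and the example side boost are illustrative, not any particular plugin):

```python
import numpy as np

def ms_encode(stereo: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split an (n, 2) L/R buffer into mid (sum) and side (difference)."""
    left, right = stereo[:, 0], stereo[:, 1]
    return (left + right) / 2.0, (left - right) / 2.0

def ms_decode(mid: np.ndarray, side: np.ndarray) -> np.ndarray:
    """Recombine mid/side back into an (n, 2) L/R buffer."""
    return np.stack([mid + side, mid - side], axis=1)

def widen(stereo: np.ndarray, side_gain_db: float = 3.0) -> np.ndarray:
    """Example M/S move: boost only the side signal to widen the image."""
    mid, side = ms_encode(stereo)
    return ms_decode(mid, side * 10 ** (side_gain_db / 20.0))
```

Because the mono fold-down (L+R) of the decoded signal is just the mid channel, boosting only the side signal widens the image without changing the mono mix at all, which is why M/S moves tend to survive the mono check better than delay-based wideners.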


----------



## MartinH. (Aug 29, 2020)

Trash Panda said:


> Did you get any farther with this? Father Gascoigne is a project I’m working on too.



No, I haven't touched it in a long while, right now I'm trying to do things in the style of the Doom Soundtrack. But if you post anything Dark Souls or Bloodborne related, please tag me, I'm still interested in hearing and learning more about it, it just hasn't been a focus for me this year.


----------



## GNP (Aug 29, 2020)

If you have UAD, you could try the Precision K-Stereo plugin on the main bus itself. Don't push it too much, but it really helps.

If you are your own score mixer, but have to send the stems to a re-recording mixer, then it's really up to the re-recording mixer to spread your music out on his end for the cinematic experience.


----------



## Joël Dollié (Aug 29, 2020)

Andrew Souter said:


> I am not completely familiar with Virtual Sound Stage, and as I know how much work goes into making products like this, I don't like to directly point out any perceived weaknesses of competitive products. Indeed these guys seem to do nice work, and they even recommend pairing their product with our algo verbs such as Aether/B2 to supply tails, so I have a mutual respect for their work. I believe Virtual Sound Stage is primarily a gain panner with a built-in Early Reflections engine, but I am not 100% sure, so don't quote me.
> 
> Our system changes the direct sound itself providing instantaneous audio source width, uses several psychoacoustic techniques related to those discussed in this thread to achieve positioning, offers as much mono-compatibility as you like/need via three different algorithm modes and control over various positioning rules, and is modulated to give additional life and organic feeling to the result.
> 
> ...



Thanks for making Precedence! It really allows me to position/widen/narrow/match libraries far better than traditional panning.


----------



## ProfoundSilence (Aug 29, 2020)

Dietz said:


> What kind of issues? 8-/
> 
> Quite contrary: Using "balance" instead of a proper panning device will very likely ruin the sound because you lose 50 percent of the information. As a (very obvious) example, imagine the recording of a piano in full stereo: When you just lower the volume of the right side to make it appear to sit left on the stage, you won't hear much of the all-important mid- and treble-range any more.


Phase issues, depending on how it's panned.

When the horns are on the left side of the stage, they hit the left microphone first, then the right. If you pan either signal into the other channel, the two copies overlap a few milliseconds apart and create phasing.
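That phasing is comb filtering: summing a signal with a copy of itself delayed by a few milliseconds cancels any frequency whose half-period matches the delay. A small numpy illustration, where a 1 ms delay stands in for the inter-mic time difference (roughly 34 cm of extra path):

```python
import numpy as np

SR = 48000                   # sample rate in Hz
delay = int(SR * 0.001)      # 1 ms inter-channel delay = 48 samples

t = np.arange(SR) / SR
notched = np.sin(2 * np.pi * 500 * t)    # 500 Hz: half-period = 1 ms -> cancelled
passed = np.sin(2 * np.pi * 1000 * t)    # 1 kHz: full period = 1 ms -> reinforced

def sum_with_delay(x: np.ndarray, d: int) -> np.ndarray:
    """Mix a signal with a d-sample delayed copy of itself, which is what
    happens when one spaced-pair channel is panned onto the other."""
    delayed = np.concatenate([np.zeros(d), x[:-d]])
    return x + delayed

trans = delay  # skip the leading transient where the delayed copy is still silent
print(np.abs(sum_with_delay(notched, delay)[trans:]).max())  # ~0: fully cancelled
print(np.abs(sum_with_delay(passed, delay)[trans:]).max())   # ~2: reinforced, +6 dB
```

With a full orchestral signal instead of a sine, that boost/notch pattern repeats all the way up the spectrum, which is the hollow, "phasy" coloration being described.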


----------



## MartinH. (Aug 29, 2020)

Y'all are replying to a 1 year old thread.


----------



## ProfoundSilence (Aug 29, 2020)

MartinH. said:


> Y'all are replying to a 1 year old thread.


idk I just work here


----------



## Dietz (Aug 30, 2020)

ProfoundSilence said:


> so when the horns are on the left side of the stage they hit the left microphone first, then the right. if you pan either signal into the other channel it overlaps a few ms off and creates phasing.


Not in case of a coincident main mic, e.g. an Ambisonics array, MS or a triple-8-array.


----------



## thevisi0nary (Aug 30, 2020)

MartinH. said:


> Y'all are replying to a 1 year old thread.



Jesus Christ... I feel like I just posted this!


----------



## MartinH. (Aug 30, 2020)

thevisi0nary said:


> Jesus christ... I feel like I just posted this!



I know the feeling all too well. The Corona Quarantine Monotony has totally messed up my perception of time. I think many feel this way:


----------



## DS_Joost (Sep 1, 2020)

thevisi0nary said:


> I know that orchestral samples are usually pre panned, but when I reference my tracks against a soundtrack that I like, the soundtrack still is usually much wider.
> 
> I am wondering what kind of techniques people are using with their sample libraries to get them to really fill out space while still have presence and separation.



Actually, ignore that stupid advice about not panning samples and just pan them wherever you want. Use a stereo widener. Sounds good? Good. Realistic? No, but who actually cares? Phasing issues? Never had those, or better said, never heard those. None of your listeners care.

Guess what: most soundtracks have an orchestral sound that sounds nothing like an actual orchestra. Sometimes I doubt many people still know what an actual orchestra sounds like. You know, in real life, in a concert hall.

A sampled orchestra doesn't sound like a real orchestra at all. Knowing that, break those stupid "rules" and do whatever you wish.


----------

