That's correct
Thank you. Getting this accomplished with CSS will be a fun challenge hahaha.
Thanks a lot for posting this! Are you working on a Bloodborne mockup or just trying to get the template there? I'd be interested to hear how far you've come.
You got it! Really I am just a little obsessed with the soundtrack and I have been trying to learn as much as I can about it, so that I can incorporate those elements into my writing.
On MuseScore I found a fairly accurate transcription of the string section of the Cleric Beast theme, so I just adapted that for MIDI. I haven't really done any CC tweaking so far, so it's pretty raw.
In case you're interested, this is where I'm currently at with my mockup/transcription practice. The high strings at the end are still very wrong; I haven't figured those out yet, but I haven't given up either:
and the original for reference:
Thanks for sharing! Good to see someone else transcribing from that soundtrack too. I've picked "The Hunter" for my mockup, and I'm endlessly wrestling with hearing what the low strings are doing, but I think I finally made some progress by figuring out that they probably used a combination of triplets and quintuplets; I'd never have thought of that.
I haven't looked at any transcriptions yet; I thought I'd learn the most by doing it myself. Ideally I'd have the actual score to check afterwards what I got right, but that seems to be impossible. I've read about people trying to get the scores through Sony, even for an academic project, but they seem to have hit a dead end.
I thought I was probably pretty bad at transcribing, but I recently tried mocking up a metal song from a tab (because I was lazy and didn't want to do the work), and I very quickly found errors; the way I transcribed it matches the original much better.
I always try to map out the tempo so that I can play my version and the reference track exactly in sync, and then I switch back and forth or let them play at the same time. It helps to hear the original and toggle my version on and off during playback, to check whether the sum just gets louder or whether harmonic changes appear that shouldn't be there.
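For offline listening, the same back-and-forth toggle can be roughed out in a few lines. This is only a hypothetical numpy sketch of the idea; the sine waves stand in for two already tempo-aligned mono bounces (mockup vs. reference), which in practice you'd load from your DAW exports:

```python
import numpy as np

SR = 44100          # sample rate in Hz
SEGMENT = 2.0       # seconds per A/B segment

# Stand-ins for the two time-aligned mono bounces (mockup vs. reference).
t = np.arange(SR * 8) / SR
mockup = 0.5 * np.sin(2 * np.pi * 440.0 * t)
reference = 0.5 * np.sin(2 * np.pi * 441.0 * t)

def alternate_ab(a: np.ndarray, b: np.ndarray, sr: int, seg_s: float) -> np.ndarray:
    """Interleave fixed-length segments of a and b, so playback flips
    between the two versions every seg_s seconds."""
    n = min(len(a), len(b))
    seg = int(sr * seg_s)
    out = np.empty(n)
    for start in range(0, n, seg):
        src = a if (start // seg) % 2 == 0 else b
        out[start:start + seg] = src[start:start + min(seg, n - start)]
    return out

ab = alternate_ab(mockup, reference, SR, SEGMENT)
```

Writing `ab` to a WAV file then gives a blind A/B bounce you can audition away from the DAW.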
Do you happen to know what percussion they used? I feel like there must be some non-tonal material filling out the frequency spectrum that I'm missing. And do you know if/how they used synths or samples to augment the orchestra?
Have you thought about how you're gonna mock up the choir in "cleric beast"? I'd imagine that to be quite tough because it's so exposed.
Your mockup is sounding very good for the most part!
Since you're struggling with that last section, I took the liberty of making a small transcription, if you're interested, which I believe is a bit more accurate, or at least might help you in some way.
It IS hard to make out exactly what the Low Strings are doing in that part though, so take it with a grain of salt.
Keep it up man, that's definitely the way to learn. I should stop being lazy and start doing this kind of stuff too... :D
This sounds awesome, you did an excellent job! What libraries are you using for the brass? I do think the starting notes on some of the run parts (0:31) are about one step higher. Sounds really sick.
From the GDC video, the percussion consisted of timpani and different chimes; I don't know, however, whether that includes the use of SFX hits.
Thanks so much for the feedback, help and encouragement!
I gave that high-strings part another shot before I looked at your version and got a bit closer, but after comparing the two, I think yours must be more correct. I'll upload a new version soon that goes on a tiny bit longer and includes the first bit of choir.
Thanks a lot! The brass is all Metropolis Ark 1 (horns, trombones, tuba), and in one place the "majestic horn" from Organic Samples / Orchestral Tools is layered, but I'm not sure I even needed that. Probably gonna take it out as I tweak some more.
I checked the brass part you mentioned, and listening to just the two SoundCloud tracks here I thought "damn, you're right, how could I miss that?" But after actually trying it out, playing the original and mine in sync and shifting that melody various numbers of semitones up or down, I don't think any other position fits better. My rule of thumb: if it sounds equally bad one semitone up and one semitone down, I'm likely on the right note, but possibly in the wrong octave. And I think I indeed had most of the trombone parts one octave too high. The next version I upload should be closer.
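For reference, the arithmetic behind that rule of thumb is just equal temperament (nothing specific to these libraries): each semitone multiplies frequency by 2^(1/12), so twelve semitones double it, which is why an octave error keeps the pitch class and can sound "right":

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12).
def shift(freq_hz: float, semitones: int) -> float:
    """Transpose a frequency by a signed number of semitones."""
    return freq_hz * 2 ** (semitones / 12)

a4 = 440.0
up_one = shift(a4, 1)       # ~466.16 Hz (A#4)
down_one = shift(a4, -1)    # ~415.30 Hz (G#4)
octave = shift(a4, 12)      # 880.0 Hz: same pitch class, one octave up
```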
The strings I used are about half-and-half NI SSC SE and Met Ark 1, the percussion is also from the NI Symphony Series Collection, and the choir is currently Requiem Light.
I've used timpani too, but I feel like with just timpani something is missing that a bass drum could fill. That doesn't mean it actually is a bass drum, of course.
This is a different track and a live performance, which doesn't necessarily do everything the same way as the recording sessions, but I'm pretty sure I see a bass drum there:
Regarding the droning bass tone at the beginning, where I thought it might be a synth: I switched the articulation to sordino, and it already sounds much closer, I think.
P.S.: To get yet another comparison perspective, I've re-routed things to a track that puts my version in mono hard-panned left and the original in mono hard-panned right, so it becomes more obvious when things are out of sync during playback. It makes some things easier to compare and some things impossible to compare, so whether it makes sense depends on what you're listening for. But I welcome it as another tool in the belt, and I can easily solo/mute that track as needed.
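Outside the DAW, that split-comparison routing amounts to stacking two mono buffers into one stereo file. A minimal sketch, assuming you already have the two bounces as numpy arrays (the random buffers here are just stand-ins):

```python
import numpy as np

def split_compare(mine: np.ndarray, original: np.ndarray) -> np.ndarray:
    """Stack two mono buffers as one stereo signal:
    your mix hard-panned left, the reference hard-panned right."""
    n = min(len(mine), len(original))          # trim to the shorter bounce
    return np.column_stack([mine[:n], original[:n]])

# Stand-ins for two mono bounces of slightly different length.
a = np.random.default_rng(0).standard_normal(1000)
b = np.random.default_rng(1).standard_normal(990)
lr = split_compare(a, b)                       # shape (990, 2)
```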
Did you get any farther with this? Father Gascoigne is a project I’m working on too.
I am not completely familiar with Virtual Sound Stage, and since I know how much work goes into making products like this, I don't like to directly point out perceived weaknesses of competing products. Indeed these guys seem to do nice work, and they even recommend pairing their product with our algorithmic verbs such as Aether/B2 to supply tails, so I have mutual respect for their work. I believe Virtual Sound Stage is primarily a gain panner with a built-in early-reflections engine, but I am not 100% sure, so don't quote me.
Our system changes the direct sound itself, providing instantaneous audio-source width; uses several psychoacoustic techniques related to those discussed in this thread to achieve positioning; offers as much mono compatibility as you like/need via three different algorithm modes and control over various positioning rules; and is modulated to give additional life and an organic feeling to the result.
Our system is furthermore capable of inter-plugin communication between Precedence and Breeze 2.5, where position information in Precedence is communicated to a linked instance of the reverb engine, and the entire DSP settings of the reverb engine updates in response to position. This creates something like an algorithmic Multiple Impulse Response system. There is infinite variation in Precedence and Breeze 2.5 depending on position, and both are modulated. Combined they create an incredible sense of depth and positioning. It's truly next level!
It's almost like when Spitfire or another library company records at AIR Studios and offers 20 different mic positions or similar, and sometimes in-situ positional variations. Our system can take a completely dry library, a physically modeled instrument like Sample Modeling, or a real recording from your studio, and do the same, but not with 20 or so positions: literally infinite.
And it also works well with, and complements, libraries that are already roomy; we are well aware that some great libraries are recorded with lots of room sound. We have various input modes to address this and help blend wet libraries with dry ones.
Furthermore, the new Precedence 1.5 offers Multi-Instance Editing and Edit Groups! Not only can you see 10, 50, 100, 200 instances within a single plug-in GUI, you can also EDIT them! Changing instance selection within a shared GUI is MUCH, MUCH faster than constantly switching between the DAW mixer and many plug-in instances! The linked reverb engine can have its instance selection controlled by Precedence as well, so you can keep one GUI editor open for both and quickly control many instances with the same convenience as controlling one!
Finally, parameters in both Precedence and Breeze 2.5 can be changed en masse for the entire Edit Group! So you can load preset changes for the whole group with a single click, and position information is retained! You can metaphorically transport your mix from AIR to a Boston hall or wherever else you like by changing the preset in Breeze for the entire group, while retaining the relative in-situ positions! Or you can change the Alg Mode in Precedence between Beta and Mu and export two different mixes, the latter with enhanced mono compatibility if that is a critical concern. Or change the Delta and Loss parameters to alter the positioning rules for the entire group and create macro-changes to "spatial contrast" across the group.
Etc., etc., but I will stop because this is starting to sound salesman-y.
We hope to have videos ready shortly to explain all this better, but the manual is already online with full details. Hope it helps.
Phase issues, depending on how it's panned.
What kind of issues? 8-/
Quite the contrary: using "balance" instead of a proper panning device will very likely ruin the sound, because you lose 50 percent of the information. As a (very obvious) example, imagine a recording of a piano in full stereo: if you just lower the volume of the right side to make it appear to sit left on the stage, you won't hear much of the all-important mid and treble range any more.
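The piano example can be demonstrated numerically. A toy numpy sketch, with two sine "registers" as hypothetical stand-ins for a stereo piano image (lows imaged left, highs imaged right): a balance control simply attenuates one side and discards its content, while a pan sums to mono first and then repositions, so both registers survive.

```python
import numpy as np

# Toy stereo "piano": low register imaged left, high register imaged right.
sr = 44100
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 110 * t)            # bass strings, left channel
high = np.sin(2 * np.pi * 1760 * t)          # treble strings, right channel
stereo = np.column_stack([low, high])

def balance(sig: np.ndarray, amount: float) -> np.ndarray:
    """'Balance' control: just attenuates one side (amount=1 mutes the right)."""
    out = sig.copy()
    out[:, 1] *= (1 - amount)
    return out

def pan_mono(sig: np.ndarray, pos: float) -> np.ndarray:
    """Proper pan: fold to mono, then constant-power position in [-1, 1]."""
    mono = sig.mean(axis=1)
    theta = (pos + 1) * np.pi / 4            # -1 -> full left, 1 -> full right
    return np.column_stack([mono * np.cos(theta), mono * np.sin(theta)])

hard_left_balance = balance(stereo, 1.0)     # the treble register is simply gone
hard_left_pan = pan_mono(stereo, -1.0)       # both registers survive, placed left
```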
idk, I just work here.
So when the horns are on the left side of the stage, they hit the left microphone first, then the right. If you pan either signal into the other channel, it overlaps a few ms off and creates phasing.
Not in the case of a coincident main mic, e.g. an Ambisonics array, MS, or a triple-8 array.
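That inter-mic delay effect is easy to show numerically. A hedged numpy sketch, not tied to any particular recording: summing a signal with a copy delayed by 0.5 ms (roughly 17 cm of extra path to the far mic) produces a comb filter, and a tone sitting at a comb null cancels almost completely.

```python
import numpy as np

sr = 48000
delay = int(sr * 0.5 / 1000)        # 0.5 ms -> 24 samples of inter-mic delay

# First comb-filter null of (x + x delayed): f = 1 / (2 * delay_time) = 1 kHz here.
null_hz = sr / (2 * delay)
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * null_hz * t)

# Delay the "far mic" copy and sum both channels, as panning one into the other would.
delayed = np.concatenate([np.zeros(delay), sig[:-delay]])
summed = sig + delayed

# After the delayed copy starts, the two are 180 degrees apart and cancel.
residual = np.max(np.abs(summed[delay:]))   # effectively zero at the null
```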
Y'all are replying to a 1 year old thread.
Jesus christ... I feel like I just posted this!
I know that orchestral samples are usually pre-panned, but when I reference my tracks against a soundtrack that I like, the soundtrack is still usually much wider.
I am wondering what kinds of techniques people are using with their sample libraries to really fill out the space while still having presence and separation.