
VI playback sounds different in LPX vs CB

I've noticed a difference. I think the place to look is the implementation of the VSTi, in either VST or AU format, and not the DAWs themselves.

But on my system even my Virus TI sounds different when I use it in LPX or CB.

I seem to trust CB more, as LPX seems to degrade the sound for some reason. This is an opinion and not scientifically based, but I find I have to work harder in LPX, especially as the track count increases.

My two cents.
 
We would need to look at the actual project files and your DAWs' preference settings in their entirety in order to rule out that you're setting up your projects slightly differently in some way.

Pretty simple: if you have LPX and CB on the same computer, you can run the test yourself.

All we did was clean-install High Sierra on a nMP, then clean-install only Kontakt and the latest CB 10 and LPX. Go into each DAW and choose our PCIe RedNet as the soundcard, and that's it, no other alterations... then play the same MIDI clip triggering the Olafur felt piano in each DAW. There's definitely a difference.

Had a strings session today at my studio and, just for fun, I ran the same test; every musician could pick which DAW the MIDI playback came from 100% of the time.
 
I will try, but no promises, as we're in the middle of huge deadlines.
...
Again, the session contains nothing but one simple MIDI track routed to the SF felt piano (Olafur Arnalds).

How long could it take to bounce two snippets?

One minute?
Two minutes?

If you have the time to write about your findings, you should also take the time to put them on empirical ground.
Otherwise it's nonsense.

My guess is that in this case the two DAWs have slightly different output levels.
The absolute output level has an enormous impact on the perception of different frequency ranges.
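One quick sanity check before trusting your ears is to compare the RMS level of the two bounces. The sketch below is hypothetical (the file names and the 0.5 dB trim are made up for illustration) and assumes the bounces are already decoded to float sample lists:

```python
import math

def rms_db(samples):
    """RMS level of a float sample stream (-1.0..1.0) in dBFS."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_sq) if mean_sq > 0 else float("-inf")

# Stand-in for two bounces of the same clip: bounce_b is bounce_a
# with 0.5 dB of trim applied somewhere in the signal chain.
bounce_a = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
trim = 10 ** (-0.5 / 20)                 # -0.5 dB as a linear gain factor
bounce_b = [s * trim for s in bounce_a]

diff = rms_db(bounce_a) - rms_db(bounce_b)
print(f"level offset between bounces: {diff:.2f} dB")
```

If the two bounces show even a fraction of a dB offset like this, level-match them first; otherwise the louder one will tend to sound "better" regardless of which DAW made it.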
 
On casual listening, I've heard subtle differences between DAWs over the years, but it's purely a subjective and unscientific impression. I find it interesting that so many people for so many years have reported, and continue to report, similar impressions. Are we all victim to some sort of mass auditory hallucination?

Anyway, if DAWs do indeed have slight differences in sound, then ultimately so what? For decades music has been produced on various analog consoles and tape that all imparted their own character.

Pure transparency in audio processing is a noble goal, but should there be some coloration imparted by a DAW, it's easy enough to work with, just as you do when working with analog gear. It becomes part of the sound of your studio, and you learn to use it to positive effect.
 
If you are talking VIs, you are talking AU vs VST. It's very likely there is a difference somewhere in the programming. If I remember correctly, the null test uses a sine wave or something like that: the same basic audio, not MIDI notes, which may also be interpreted differently by each DAW and VI. Are the settings exactly the same on each instrument?

Well, just my thoughts any time I hear there are differences when using different formats.
 
On casual listening, I've heard subtle differences between DAWs over the years, but it's purely a subjective and unscientific impression. I find it interesting that so many people for so many years have reported, and continue to report, similar impressions. Are we all victim to some sort of mass auditory hallucination?

As a matter of fact, yes, we are. Start by watching the one-hour video I quoted earlier in this thread to learn more. I used to know a link to a great site that explains it even better, but I've lost track of it. Basically, our built-in survival instinct INTENTIONALLY changes our ears' hearing response routinely and frequently, based on psychological factors. It does this because our body thinks it needs to hear differently in order to increase our chance of survival. It is beyond our conscious control. Our perceptions and biases weigh heavily into how we actually hear things... it's not purely psychological: what our brain receives as information is actually different from minute to minute.

Our ears cannot be trusted for this kind of comparative analysis. It's not that our ears are not sensitive enough; it's that they are far too inconsistent to determine much of anything other than whether we like something right now or not.

That's why any kind of discussion on this topic needs to be accompanied by scientific, empirical data; otherwise it's basically meaningless. Enjoy the DAW you like the most and don't worry about it so much; they are all great platforms.


Anyway, if DAWs do indeed have slight differences in sound, then ultimately so what? For decades music has been produced on various analog consoles and tape that all imparted their own character.

+1

Pure transparency in audio processing is a noble goal, but should there be some coloration imparted by a DAW, it's easy enough to work with, just as you do when working with analog gear. It becomes part of the sound of your studio, and you learn to use it to positive effect.

It's extremely unlikely that any DAW imparts ANY coloration to the sound in the raw, simple collection of digital audio from plugins summed to the master bus. Coloration could happen during D/A conversion in your sound card. It could come from using different EQ plugins and other intentionally placed DSP functions (through plugins or other built-in features of the DAW designed to operate on the signal)...

As someone said, even a slightly different gain-stage structure could affect the DSP enough to sound different. Those are all user choices. When playing back a MIDI instrument, there could be MIDI filters, intentional or not, that scale velocities, which would change the sound produced by the VSTi. Or perhaps by default one DAW has a certain CC set to one value while the other DAW sets it to something else, or sends no default CC values at all, and the user doesn't realize that the instrument is not being MIDI-controlled in exactly the same way in the two cases, resulting in different sound. Those are user choices, whether intentional or not.
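To make the velocity-scaling point concrete, here is a toy sketch. `apply_velocity_curve` is a made-up stand-in for whatever velocity transform a DAW's per-track MIDI filter might apply; the point is only that a subtle, easy-to-miss default changes every note the instrument receives:

```python
def apply_velocity_curve(velocities, scale=1.0, curve=1.0):
    """Hypothetical per-track MIDI filter: power curve plus linear scale.
    Output stays in MIDI's 1..127 velocity range."""
    out = []
    for vel in velocities:
        v = (vel / 127.0) ** curve * scale * 127.0
        out.append(max(1, min(127, round(v))))
    return out

clip = [48, 64, 80, 96]                         # velocities in the MIDI clip
daw_a = apply_velocity_curve(clip)              # neutral: passes through
daw_b = apply_velocity_curve(clip, curve=1.2)   # subtle concave curve

print(daw_a)   # [48, 64, 80, 96]
print(daw_b)   # every velocity lands several steps lower
```

With a velocity-sensitive sampled piano, those few velocity steps can cross sample layers, so the two "identical" playbacks genuinely trigger different audio.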
 
It's extremely unlikely that the AU version of a plugin would sound different than the VST version. That would only happen if the plugin developer implemented the DSP slightly differently in the two cases, which would be extremely unlikely.
 
I will state again that transparent digital audio is not hard. It's actually really simple: don't add any DSP that isn't directed by the user. Simple. That's what all the DAWs do. They are totally transparent until you, the user, start operating on the audio with intentional DSP.

They also all provide a lot of options to add various forms of DSP to the signal at the user's request. It is up to the user to determine what DSP will be applied. Whether a user realizes it or not, they could have some relevant setting configured slightly differently in the two cases, resulting in different DSP and a different sound; intentional or not, that is the user's choice. It's not the DAW routinely warming up the sound. Digital audio just doesn't work that way.
 
Yes, that big filter between your ears (aka the brain) does affect what you hear. It lets you ignore stuff it knows isn't dangerous so you can sleep. It probably does similar things when listening to music. I know that once I notice something I like or dislike in a piece of music, it will always stick out when I listen to that piece again, even if it never bothered me before I noticed it.
 
I went deep into the weeds on this topic years ago (in another forum).
It's all flawed human perception, except for pan law.

When different DAWs used different pan laws, you could get different results with visually similar pan settings. Understandably.

Otherwise the math works the same in all DAWs.
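The pan-law point is easy to quantify. Below is a toy constant-power-style pan with a configurable center attenuation; real DAWs implement their laws in their own ways, but this shows the size of the discrepancy a "visually centered" pan knob can produce:

```python
import math

def pan_gains(pan, center_db):
    """Toy constant-power-style pan law with a configurable center dip.
    pan: -1.0 (hard left) .. +1.0 (hard right)."""
    theta = (pan + 1.0) * math.pi / 4.0            # map pan to 0..pi/2
    norm = 10 ** (center_db / 20.0) / math.cos(math.pi / 4.0)
    return math.cos(theta) * norm, math.sin(theta) * norm

# The same centered pan position yields different channel gains per law:
for law in (-3.0, -4.5, -6.0):
    left, right = pan_gains(0.0, law)
    print(f"{law:+.1f} dB center law: L = R = {left:.3f}")
```

A track centered under a -3 dB law plays roughly 1.5 dB hotter than the same track under a -4.5 dB law, which is easily audible and has nothing to do with either DAW "coloring" the audio.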

Now, converters get us into some fun differences...
 
There are some other differences as well. The way fades are handled in DSP differs between DAWs. Meaning that while the fader is moving, i.e. during the fade, a certain kind of algorithm is used, and some DAWs introduce shockingly high levels of intermodulation noise, or handle it differently enough to declare them different. AdmiralBumbleBee published a controversial report about that earlier this year.
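One plausible way fade handling can diverge (a sketch, not a claim about how any specific DAW does it): interpolating the fade gain every sample versus updating it only once per audio block. The stepped version deviates from the smooth one at every block boundary, which is the classic source of "zipper" artifacts:

```python
import math

N, BLOCK = 4096, 256                     # fade length, audio block size
sig = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(N)]  # 1 kHz tone

# Per-sample ramp: the fade gain is interpolated at every sample.
smooth = [s * (1.0 - n / N) for n, s in enumerate(sig)]

# Per-block step: the gain is only updated once per audio block,
# producing a stair-stepped fade.
stepped = [s * (1.0 - (n // BLOCK) * BLOCK / N) for n, s in enumerate(sig)]

residual = max(abs(a - b) for a, b in zip(smooth, stepped))
print(f"max deviation between the two fades: {residual:.4f}")
```

Note this only matters while a fade or fader move is in progress; with static gains the two approaches produce identical output.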
 
This has been tried, and the problem is that once you import the audio into your program, it will interpret both files the same way and the test becomes void. That's why the files will null.

So bring them both into Audacity.

Again, the session contains nothing but one simple MIDI track routed to the SF felt piano (Olafur Arnalds); the low mids are definitely different between LPX and CB.

If it's a difference significant enough for you to hear, then the null test is guaranteed to fail. Test it.
 
There's no "interpretation" of a PCM digital audio bitstream. It is what it is. If two files are different, it's impossible for the DAW to import them and make them null. If two files are the same, they will null. Nulling two files doesn't have to be done in a DAW (it's just easier there); we're talking identical bitstreams. The file headers may differ a little, but once you get into the audio data, it'll be bit-for-bit identical.
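The null test itself is nothing more than per-sample subtraction, which is why no "interpretation" can sneak in. A minimal sketch, with hypothetical sample values standing in for two decoded bounces:

```python
def null_test(a, b):
    """Subtract two sample streams; return the largest absolute residual.
    Bit-identical PCM streams null to exactly zero."""
    assert len(a) == len(b), "streams must be the same length"
    return max(abs(x - y) for x, y in zip(a, b))

# 16-bit PCM sample values from two hypothetical bounces
bounce_a = [0, 12000, -8000, 32767, -32768]
bounce_b = list(bounce_a)                      # identical bitstream
bounce_c = [0, 12001, -8000, 32767, -32768]    # one sample off by 1 LSB

print(null_test(bounce_a, bounce_b))   # 0 -> perfect null
print(null_test(bounce_a, bounce_c))   # 1 -> does not null
```

Any real difference between the bounces, down to a single least-significant bit, survives the subtraction; there's nowhere for it to hide.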

The idea that any process like this would have an effect confined to one frequency band is absurd, like the audiophools who think an HDMI cable has more "presence" or "depth", or "reveals more detail". A digital bitstream is, in terms of the audible spectrum, either right or not. There are some small effects introduced by jitter, but these are orders of magnitude smaller than the effect of moving your head a few millimeters in a real acoustic space, where the comb filtering introduced by the physics of a real room is unavoidable. Short of clamping your head in place and doing a double-blind test, no, you're not hearing a difference. Something else is happening, probably unconscious biasing of the test.

The potential systemic differences here don't work like that, even if they were audible, which they're absolutely not. This isn't a matter of "maybe they're audible"; it's beyond absurd scientifically.

And bar pan laws and fade algorithms, there's no difference between Kontakt running in Logic and Kontakt running in CB. Switching DAWs already takes more time than the human ear's faulty memory for the timbre of a sound can bridge. And if it's a blind test rather than a double-blind one, you have easy avenues for potential, even if unintentional, bias.

The only rational answers here are that you're either deluding yourself into hearing a difference or unconsciously biasing the test (if 100% is the result you get, the latter is actually extremely likely). If you talk to anybody who works at a DAW company, or has ever programmed DAWs, they'll laugh at you, or smile and nod and say "sure, whatever". If it made a difference, they'd spend all their time working on this instead of what they actually work on, which is features and GUI.
 
Unless you bounce the results offline, all this is absolutely meaningless.

Unless you're dealing with a deliberately nonlinear plugin (as many are), a badly coded third-party plugin, or external gear, offline and online bounces are identical. It's another myth that they differ. The math is simply done faster than real time, at the expense of not being able to monitor the bitstream sent in real time to a D/A.
 
The ONLY things that can affect this are deliberate nonlinearities within plugins, pan laws, any D/A/A/D processing, and fade algorithms. If a Kontakt patch uses an analog-modeling process, that could happen. But it's not the DAW.

That is it.

There simply is no other answer here. If you are "hearing" a difference, then you're either doing something wrong, moving in the space, deluding yourself, or there's some other biasing going on. Perhaps your audio card settings are different, whatever, but all things being equal, there's NO way to alter the bitstream. When you build a DAW and you want to sum bitstream A with bitstream B, there are no choices to be made. It's not an art... it's science!

PCM audio is not complicated. None of this is complicated math. Even if it were, it wouldn't suddenly affect only one part of the frequency spectrum, so anybody claiming analog-like differences within the digital domain is just wrong. Sorry. It's the exact same simple mathematics: addition, mostly!
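The "addition, mostly" claim can be shown literally. A mix bus summing two tracks is per-sample addition and nothing else; with the same inputs, the result is bit-identical every time, in any DAW that implements it. A minimal sketch (sample values chosen to be exact in binary floating point):

```python
def sum_buses(*tracks):
    """Mix-bus summing: per-sample addition, nothing else."""
    return [sum(samples) for samples in zip(*tracks)]

# Two short float sample streams standing in for two tracks
track_a = [0.25, -0.5, 0.125]
track_b = [0.5, 0.25, -0.375]

mix = sum_buses(track_a, track_b)
print(mix)   # [0.75, -0.25, -0.25]
```

Real DAWs do this in 32- or 64-bit float with plenty of headroom, but the operation itself leaves no room for a "warmer" or "wider" result.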
 
So, for example, the Olafur patch mentioned: does it have filtering within the patch, HPF or LPF? Are those filters using analog models? In that case, yes, you could hear a difference from one playback to the next; it would be minute, but it would be there. That said, it's NOT the DAW, and in fact two playbacks in the SAME DAW will then also sound different. And the null test won't work, because the audio being produced actually is different, by design: the filter is the difference, and it won't respond exactly the same due to the designed nonlinearity. No magic there.
 