# VI playback sounds different in LPX vs CB



## Mishabou (Nov 8, 2019)

Alright, I know this topic has been discussed to death and I'm aware of the famous null test, but I thought I'd share a surprising finding from working at my friend's studio last night.

We had LPX and CB open on the same computer (nMP), using the exact same sound card (Dante PCIe) going to a DAD AX32 AD/DA and finally to JBL monitors. Basically, both programs were using the exact same equipment and signal path.

We noticed a difference in sound when a VST instrument was played in LPX vs. CB. Just to make sure, we did a blind test, taking turns playing and switching apps. We could both pick out which app the VST was played from 100% of the time. CB has a slightly fuller low mid; it's a very subtle difference, but still definitely audible.

Has anyone experienced this?


----------



## A.G (Nov 8, 2019)

Mishabou said:


> Has anyone experienced this ?


You are right. I noticed that a long time ago, especially when creating AG product videos using the same Standard MIDI File (demo) and the same VIs loaded into both DAWs.
Cubase and LPX use different pan laws and maybe a different way of summing. Logic thins out the master somehow.


----------



## Dewdman42 (Nov 8, 2019)

Is it possible for you to bounce the two “different”-sounding master bus outputs and share them here for us to compare?

I'm always a little suspicious about these kinds of claims, which have been circulating for years. That being said, AdmiralBumbleBee did a comparison this year and found specifically that automated fades on different DAWs were using different algorithms, with intermodulation noise present in some DAWs.

We also have to be aware that our ears are extremely unreliable measuring devices for this kind of thing.


----------



## d.healey (Nov 8, 2019)

Dewdman42 said:


> Is it possible for you to bounce the two “different” sounding master bus audio and share here for us to compare?


^This


----------



## studioj (Nov 8, 2019)

I've used numerous DAWs extensively in my work, switching between PT, DP, Logic, and a touch of Cubase, doing heavy projects in all of them. Ever since I started paying attention to this kind of thing in the early 2000s, I have thought that these audio programs sounded different on playback. I think the only way you'd be able to quantify it, though, is to take an analog print off an identical D/A converter and output. Bounced files are identical from each DAW for the most part. Differences only present themselves in playback, and they are subtle, but noticeable, as the OP has surmised.


----------



## marclawsonmusic (Nov 8, 2019)

Watching this thread with interest...


----------



## Dewdman42 (Nov 8, 2019)

Why would the bounced audio be identical while the actual analog audio is different?

That implies that one or more of these DAWs is secretly doing something spooky to the sound before putting audio into the audio buffer and handing it off to the audio device. If any DAW were doing something like that, which I doubt, they would be touting it as a feature. Sorry, no, I doubt that.

It's always possible that one DAW is using a different pan law by default than the other, but summing algorithms in general are extremely simplistic. It's just very, very unlikely that one DAW is, in the most simple sense, capturing output from a VST one way while another DAW is somehow calculating the result differently. Now, if it's running through EQ or something, then OK, of course they might sound different. But the basic mixing engines of all DAWs are pretty much the same stuff; in fact, on OS X many would argue that they are all using Core Audio to do most of that for them, so they should in fact be identical. Cubase might interject ASIO stuff in some way, which would make it LESS transparent if anything, but I also doubt that Steinberg would do something like that. I think they all strived for transparency a long, long time ago.

AdmiralBumbleBee did do a rather scientific measurement analysis this year, in which he measured discrepancies between DAWs while automated fades are happening. Something like that is also where different DAWs might calculate a fade-in or a fade-out using different DSP methods, and thus might have slightly different sound qualities. But in terms of taking the output from a VST, mixing it to the stereo bus, and sending it to your sound card: this is highly simplistic, and all DAWs were shown to null test to zero a long time ago.


----------



## Dewdman42 (Nov 8, 2019)

This video is about an hour, but anyone doubting DAW transparency should take the hour to watch it.


----------



## Dewdman42 (Nov 8, 2019)

If that one is too technical for you, then maybe this will help:


----------



## d.healey (Nov 8, 2019)

I just noticed the title specifically says this happens with VIs. Does it also happen with regular audio?


----------



## Dewdman42 (Nov 8, 2019)

Right, so the question would be whether one DAW takes the audio calculated by a plugin and does more secret DSP on it before including it in the mix than another DAW that is more transparent. It's ludicrous to think that any of the DAWs would be doing anything other than striving for exact transparency from plugins. It's up to the plugin to calculate its output.


----------



## Dewdman42 (Nov 8, 2019)

Also, remember that the user settings of various DAWs might not be set exactly equally. For example, what if one of them has some dithering option turned on somewhere, or something like that?


----------



## José Herring (Nov 8, 2019)

Mishabou said:


> Alright, i know this topic has been discussed to death and i'm aware of the famous null test but i thought i share a surprising finding while working at my friend's studio last night.
> 
> We had LPX and CB open on the same computer (nMP), using the exact same sound card (Dante PCIe) going to a DAD AX32 AD/DA and finally JBL monitors. Basically both software are using the exact same equipment/signal path.
> 
> ...


Yes, and when this topic came up years ago it led to a maelstrom of responses. Then somebody did a null test and blah, blah, blah, it went on forever.

But just from experience: I've worked in studios that use Logic and DP, I have extensive experience with PT and with my Cubase, and by God, I swear they all sound different. I used to have this condition called synesthesia. It has since gone away, but I would confuse sounds with colors. Logic sounded yellow to me, Cubase blue, DP greyish, and Pro Tools just sounds like crap.

But since the condition has abated somewhat in my older years, I just notice differences in the way each platform sounds. Even on my same system, Live, Cubase, and Reason all sound different.

But... I just recently did some resampling: started in Cubase, edited in Reason (because Reason will export samples in a nice neat package), then built the instrument in Kontakt. Then I played back the instrument in both Cubase and Reason via Kontakt, and Kontakt colors the sound so much that I couldn't tell the difference.

So there are a lot of variables, especially when dealing with samples.

My VST synths, on the other hand: it's weird, but the sound in Reason is much less dark. So I think your assessment of Cubase having a fuller low mid is correct.

On a side note, Reason has made such advances lately that I can finally admit that I use Reason without the stigma associated with it. Yes, I use Reason. I love it!


----------



## José Herring (Nov 8, 2019)

Dewdman42 said:


> Is it possible for you to bounce the two “different” sounding master bus audio and share here for us to compare?
> 
> I’m always a little suspicious about these kinds of claims that have been happening for years. that being said, admiralbumblebee did that comparison this year and found specifically that automated fades on different daws were using different algorithms with Inter modulation noise present in some daws.
> 
> we also have to be aware that our ears are extremely unreliable measuring devices for this kind of thing.


This has been tried, and the problem is that once you import the audio into your program, it will interpret both files the same way and the test is voided. That's why the files will null.

I swear the only real way to do it is what the OP did. You just have to have more than one DAW on your system and play back the same audio file or the same VI synth in both DAWs, and the results are apparent. They sound different. Not enough that it will lead to any real changes in mix decisions, especially when dealing with samples, because there are a trillion other variables, but they do sound different.


----------



## d.healey (Nov 8, 2019)

josejherring said:


> This has been tried and the problem is that once you import the audio into your program it will interpret both files the same way and the test gets void. That's why the files will null.


That doesn't make sense. It would apply the same processing to both files; it wouldn't make two different files sound the same.


----------



## Dewdman42 (Nov 8, 2019)

If you have two audio files, we can null them first of all to see if they are different, scientifically. You don't need the original DAWs to do that.

Here would be another useful experiment: take just one of those two bounced audio files and play it back in both DAWs with the most transparent DAW settings you can set up. Do you hear a difference?

In order to determine if there are differences, our ears make very poor measuring devices; that's why the null test is useful. You could perhaps do other kinds of spectral analysis on the two files if they don't pass a null test. That would show the results scientifically, not by using your unreliable ears.
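To make the null test concrete, here's a rough Python sketch of the idea, using synthetic 16-bit samples standing in for two real bounces (the signal values are made up for illustration):

```python
import math

def null_residual_dbfs(a, b):
    """Invert one signal against the other (i.e. subtract), then report
    the peak of the residual in dBFS relative to 16-bit full scale.
    A perfect null comes back as -inf."""
    peak = max(abs(x - y) for x, y in zip(a, b))
    if peak == 0:
        return float("-inf")
    return 20 * math.log10(peak / 32768.0)

# Synthetic 440 Hz "bounces" at 44.1 kHz, 16-bit integer samples:
take1 = [int(10000 * math.sin(2 * math.pi * 440 * n / 44100)) for n in range(4410)]
take2 = list(take1)
print(null_residual_dbfs(take1, take2))    # prints -inf: a perfect null

# A one-sample timing offset ruins the null, which is why the two bounces
# must be aligned sample-accurately before the test means anything:
shifted = [0] + take1[:-1]
print(null_residual_dbfs(take1, shifted))  # a large residual, around -34 dBFS
```

Identical audio nulls to silence; even a one-sample misalignment leaves a clearly measurable residual, so alignment has to be ruled out before reading anything into a failed null.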

Also, in order to validate such a claim, people would need to inspect the project files very carefully, looking for anything the user might be doing that they haven't accounted for that would produce different-sounding mixes. There could be global dithering options in play, for example.

I would also point out that if Cubase is in fact making a bump in the low mids, that is not necessarily a good thing. What you want is *transparency*, without coloration. If you want to produce a mix that will translate for everyone else playing it on their Apple earbuds or whatever else they are using, remember they aren't playing it through Cubase, are they? If Cubase is in fact warming up the sound or adding low mids, I would call that a negative, not a positive, even if you think it sounds better in your studio.

But personally, I don't think it is doing that, unless the user is overlooking something in the way they have Cubase configured or in the mix signal chain.

Also, people should bear in mind that in terms of digital DSP, it would be very difficult to accidentally add a little low-mid bump to the sound. That kind of change would need to be literally intentional, a form of EQing, with specific DSP targeted to do it. Why would Steinberg or any other DAW manufacturer intentionally add distortion of any kind to the signal without the user directing it? They would not.

This kind of thinking is a holdover from the old analog days, when electronic components that imparted magical warmth to the tone were considered a benefit and a reason to select one piece of gear over another. That is not how it's done in the digital world now; the name of the game is transparency, and any kind of sonically pleasing warmth added to the signal would have to be deliberate, intentionally thought out, and formalized with specific DSP calculations. Otherwise, other forms of random DSP differences would appear as noise, not a nice warm low-mid bump.

If you make your mix non-transparent somehow, then that is another matter; that is user choice.


----------



## Mishabou (Nov 8, 2019)

Dewdman42 said:


> Is it possible for you to bounce the two “different” sounding master bus audio and share here for us to compare?
> 
> I’m always a little suspicious about these kinds of claims that have been happening for years. that being said, admiralbumblebee did that comparison this year and found specifically that automated fades on different daws were using different algorithms with Inter modulation noise present in some daws.
> 
> we also have to be aware that our ears are extremely unreliable measuring devices for this kind of thing.




I will try, but no promises, as we're in the middle of huge deadlines.

When two people can identify the difference between two DAWs 100% of the time in a blind test, I would not call our ears unreliable.

Again, the session has nothing but one simple MIDI track routed to the SF felt piano (Olafur Arnalds); the low mids are definitely different between LPX and CB.


----------



## Dewdman42 (Nov 8, 2019)

Mishabou said:


> When two people can identify 100% of the time the difference between two DAW (blindfolded), i would not call our ears unreliable



Sorry, no. They are unreliable for this kind of comparison. Extremely.



> Again, the session has nothing but only one simple Midi track routed to SF felt piano (Olafur Arnald), low mid are definitely different between LPX and CB.



We would need to look at the actual project files and your DAWs' preference settings in their entirety in order to rule out the possibility that you are setting up your projects slightly differently in some way.


----------



## d.healey (Nov 8, 2019)

Mishabou said:


> When two people can identify 100% of the time the difference between two DAW (blindfolded), i would not call our ears unreliable


We should try to verify results with empirical data so that those of us with less sensitive hearing can verify the claims.

I'm guessing a pure sine wave would be the best test.


----------



## d.healey (Nov 8, 2019)

Mishabou said:


> We noticed a difference in sound when a VST instrument is played on LPX or CB.


Does it only happen with VSTi plugins or with audio too?


----------



## dgburns (Nov 9, 2019)

I've noticed a difference. I think the place to look is in the implementation of the VSTi in either VST or AU format, and not the DAWs themselves.

But on my system, even my Virus TI sounds different when I use it in LPX or CB.

I seem to trust CB more, as LPX seems to degrade the sound for some reason. This is an opinion and not scientifically based, but I need to work harder in LPX, especially as the track count increases.

My two cents.


----------



## Mishabou (Nov 9, 2019)

Dewdman42 said:


> We would need to look at the actual project files and your DAW's preference settings in its entirety in order to rule out that you aren't setting up your projects slightly differently in some way.



Pretty simple: if you have LPX and CB on the same computer, you can run the test yourself.

All we did was a clean install of High Sierra on an nMP, then a clean install of only Kontakt and the latest CB 10 and LPX. Go into each DAW and choose our PCIe RedNet as the sound card; that is it, no other alterations. Then play the same MIDI clip triggering the Olafur felt piano in each DAW. There's definitely a difference.

I had a strings session at my studio today, and just for fun, I ran the same test. All the musicians could pick which DAW the MIDI playback came from 100% of the time.


----------



## Living Fossil (Nov 9, 2019)

Mishabou said:


> I will try but no promises as we're in a middle of huge deadlines.
> ...
> Again, the session has nothing but only one simple Midi track routed to SF felt piano (Olafur Arnald),



How long could it take to bounce two snippets?

One minute? 
Two minutes?

If you have the time to write about your findings, you should also take the time to put them on empirical ground.
Otherwise it's nonsense.

My guess is that in this case the two DAWs have a slightly different output level.
The absolute output level has an enormous impact on the perception of different frequency ranges.


----------



## NekujaK (Nov 9, 2019)

On casual listening, I've heard subtle differences between DAWs over the years, but it's a purely subjective and unscientific impression. I find it interesting that so many people, for so many years, have reported and continue to report similar impressions. Are we all victims of some sort of mass auditory hallucination?

Anyway, if DAWs do indeed have slight differences in sound, then ultimately so what? For decades music has been produced on various analog consoles and tape that all imparted their own character.

Pure transparency in audio processing is a noble goal, but should there be some coloration imparted by a DAW, it's easy enough to work with, just as you do when working with analog gear. It becomes part of the sound of your studio, and you learn to use it to positive effect.


----------



## dzilizzi (Nov 9, 2019)

If you are talking about VIs, you are talking about AU vs. VST. It's very likely there is a difference somewhere in the programming. If I remember correctly, the null test uses a sine wave or something like that: the same basic audio, not MIDI notes, which may be interpreted differently by each DAW and each VI. Are the settings exactly the same on each instrument?

Well, just my thoughts any time I hear there are differences when using different formats.


----------



## Dewdman42 (Nov 9, 2019)

NekujaK said:


> On casual listening, I've heard subtle differences between DAWs over the years, but it's purely a subjective and unscientific impression. I find it interesting that so many people for so many years have reported, and continue to report, similar impressions. Are we all victim to some sort of mass auditory hallucination?



As a matter of fact, yes we are. Start by watching the one-hour video I quoted earlier in this thread to learn more. I used to know a link to a great site that explains it even better, but I've lost track of it now. But basically, our built-in survival instinct INTENTIONALLY changes our ears' hearing response, routinely and frequently, based on psychological factors. It does this because our body thinks it needs to hear differently in order to increase our chance of survival. It is beyond our conscious control. Our perceptions and biases weigh heavily into how we actually hear things; it's not purely psychological, because what our brain receives as information is actually different from minute to minute.

Our ears cannot be trusted for this kind of comparative analysis. It's not that our ears are not sensitive enough; it's that they are just too inconsistent to determine much of anything other than whether we like something right now or not.

That's why any kind of discussion on this topic needs to be accompanied by empirical data; otherwise it's basically meaningless. Enjoy the DAW you like the most and don't worry about it so much; they are all great platforms.




> Anyway, if DAWs do indeed have slight differences in sound, then ultimately so what? For decades music has been produced on various analog consoles and tape that all imparted their own character.



+1



> Pure transparency in audio processing is a noble goal, but should there be some coloration imparted by a DAW, it's easy enough to work with, just as you do when working with analog gear. It becomes part of the sound of your studio, and you learn to use it to positive effect.



It's extremely unlikely that any DAW is imparting ANY coloration to the sound in the simple collection of raw digital audio from plugins summed to the master bus. Coloration could happen during D/A conversion in your sound card. It could come from using different EQ plugins and other intentionally placed DSP functions (through plugins or other built-in features of the DAW designed to operate on the signal).

As someone said, even a slightly different gain stage structure could affect the DSP enough to sound different. Those are all user choices. In playing back a MIDI instrument, there could be MIDI filters in effect, intentional or not, that scale velocities, which would change the sound produced by the VSTi. Or perhaps by default one DAW has a certain CC set to something while the other has it set to something else, or to no default value at all, and the user doesn't realize that the instrument is not being MIDI-controlled in exactly the same way in the two cases, resulting in a different sound. Those are user choices, whether intentional or not.


----------



## Dewdman42 (Nov 9, 2019)

It's extremely unlikely that the AU version of a plugin will sound different from the VST version. That would only happen if the plugin developer implemented the DSP slightly differently in the two formats, which would be extremely unlikely.


----------



## Dewdman42 (Nov 9, 2019)

I will state again that transparent digital audio is not hard. It's actually really simple: don't add any DSP that isn't directed by the user. Simple. That's what all the DAWs do. They are totally transparent until you, the user, start operating on the audio with intentional DSP.

They also all provide a lot of options to add various forms of DSP to the signal at the user's request. It is up to the user to determine what DSP will be applied. Whether a user realizes it or not, they could have some relevant setting somewhere set slightly differently in the two cases, resulting in different DSP and a different sound. But again, intentional or not, that is the user's choice; it's not the DAW routinely warming up the sound. Digital audio just doesn't work that way.


----------



## dzilizzi (Nov 9, 2019)

Yes, that big filter between your ears (aka the brain) does affect what you hear. It allows you to ignore stuff it knows is not dangerous so you can sleep. It probably does similar things when listening to music. I know that once I notice something I like or dislike in a piece of music, it will always stick out when I listen to that piece, even if it never bothered me before I noticed it.


----------



## Dewdman42 (Nov 9, 2019)

exactly.


----------



## David Kudell (Nov 9, 2019)

By the time the music makes it out of your DAW, it's going to be remixed and recompressed many times so that any difference will be totally inaudible to the person listening on, most likely, their built-in iPhone speaker.


----------



## edhamilton (Nov 9, 2019)

I went deep into the weeds on this topic years ago (in another forum).
It's all flawed human perception, except for pan law.

When different DAWs used different pan laws, you could get different results from visually similar pan settings. Understandably.

Otherwise, the math works the same in all DAWs.
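For what it's worth, the pan-law difference is easy to quantify. Here's an illustrative Python sketch of an equal-power pan with a configurable center attenuation; the function and exact curve are assumptions for demonstration, not any particular DAW's actual implementation:

```python
import math

def pan_gains(pan, law_db=-3.0):
    """Left/right gains for a pan position in [-1, 1] (0 = center).
    law_db is the attenuation applied to a centered signal:
    -3 dB is the common equal-power law, 0 dB keeps center at unity."""
    # Equal-power pan: sweep a quarter circle from hard left to hard right.
    theta = (pan + 1) * math.pi / 4
    left, right = math.cos(theta), math.sin(theta)
    # Rescale so the center attenuation matches the chosen law.
    center = math.cos(math.pi / 4)      # ~0.707, i.e. -3 dB
    scale = 10 ** (law_db / 20) / center
    return left * scale, right * scale

# Center-channel gain under a -3 dB law vs. a 0 dB law:
l3, _ = pan_gains(0.0, law_db=-3.0)
l0, _ = pan_gains(0.0, law_db=0.0)
print(round(20 * math.log10(l3), 6))  # prints -3.0
print(round(20 * math.log10(l0), 6))  # prints 0.0
```

Under a -3 dB law, a centered signal sits 3 dB lower than under a 0 dB law, which is exactly the kind of level offset that can read as a "fuller" or "thinner" mix.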

Now, converters get us into some fun differences...


----------



## Dewdman42 (Nov 9, 2019)

And by the way, the pan law is also a user choice in most modern DAWs.


----------



## Dewdman42 (Nov 9, 2019)

There are some other differences as well. The way *fades* are handled in DSP differs between DAWs. Meaning that while the fader is moving, i.e. being faded, a certain kind of algorithm is used, and some DAWs introduce shockingly high levels of intermodulation noise, or handle it differently enough to call them different. AdmiralBumbleBee did a controversial report about that earlier this year.


----------



## NoamL (Nov 9, 2019)

josejherring said:


> This has been tried and the problem is that once you import the audio into your program it will interpret both files the same way and the test gets void. That's why the files will null.



So bring them both into Audacity.



Mishabou said:


> Again, the session has nothing but only one simple Midi track routed to SF felt piano (Olafur Arnald), low mid are definitely different between LPX and CB.



If it's a difference significant enough for you to hear, then the null test is guaranteed to fail. Test it.


----------



## vgamer1982 (Nov 10, 2019)

There's no "interpretation" of a PCM digital audio bitstream. It is what it is. If two files are different, it's impossible for a DAW to import them and make them null. If two files are the same, they will null. Nulling two files does not have to be done in a DAW (it's easier there), but we're talking about identical bitstreams. The file headers may be a little different, but once you get into the audio data, it'll be bit-for-bit identical.
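That "identical past the header" point can be demonstrated in a few lines of Python using only the standard library's wave module (the sample values here are arbitrary):

```python
import io
import wave

def make_wav(samples, framerate=44100):
    """Write 16-bit mono PCM samples into an in-memory WAV file."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(framerate)
        w.writeframes(b"".join(s.to_bytes(2, "little", signed=True) for s in samples))
    return buf.getvalue()

def pcm_payload(wav_bytes):
    """Return the raw PCM sample data of a WAV file, ignoring its header."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        return w.readframes(w.getnframes())

# Same samples written into containers whose headers differ (the sample-rate
# field), yet the PCM payload is bit-for-bit identical:
samples = [0, 1000, -1000, 32767, -32768]
a = make_wav(samples, framerate=44100)
b = make_wav(samples, framerate=48000)
print(a == b)                            # False: the headers differ
print(pcm_payload(a) == pcm_payload(b))  # True: the audio data is identical
```

Two files can compare unequal byte-for-byte purely because of header metadata while the audio payload nulls perfectly.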

The idea that any process like this would have an effect in only one frequency band is absurd, like the audiophools who think an HDMI cable has more "presence" or "depth", or that one "reveals more details". A digital bitstream is, in terms of the audible spectrum, either right or not. There are some small effects introduced by jitter, but these are orders of magnitude smaller than the effect of moving your head a few millimeters in a real acoustic space, where the comb filtering introduced by the physics of a real room is unavoidable. Short of clamping your head in place and doing a double-blind test, no, you're not hearing a difference. Something else is happening: probably unconscious biasing of the test.

The potential systemic differences here don't work like that, even if they were audible, which they're absolutely not. This isn't a matter of "maybe they're audible". It's beyond absurd scientifically.

And bar pan laws and fade algorithms, there's no difference between Kontakt running in Logic and Kontakt running in CB. If you're switching DAWs, that alone takes enough time for the ear's faulty memory for the timbre of a sound to come into play. And if it's a blind test rather than a double-blind one, you have easy avenues for potential, even if unintentional, bias.

The only rational answers here are that you're either deluding yourself that you're hearing a difference or unconsciously biasing the test (if 100% is the result you get, the latter is actually extremely likely). If you talk to anybody who works at a DAW company or has ever programmed DAWs, they'll laugh at you, or smile and nod and say "sure, whatever". If it made a difference, they'd spend all their time working on this instead of what they actually work on, which is features and GUI.


----------



## babylonwaves (Nov 10, 2019)

Unless you bounce the results offline, all this is absolutely meaningless.


----------



## vgamer1982 (Nov 10, 2019)

babylonwaves said:


> Unless you bounce the results offline, all this is absolutely meaningless.



Unless you're dealing with a deliberately nonlinear plugin (as many are), a badly coded third-party plugin, or external gear, offline and online bounces are identical. It's another myth that they are different. The math is simply done faster than real time, at the expense of not being able to monitor the bitstream sent in real time to a D/A.


----------



## vgamer1982 (Nov 10, 2019)

The ONLY things that can affect this are deliberate nonlinearities within plugins, pan laws, any D/A-A/D processing, and fade algorithms. If a Kontakt patch uses an analog-modeling process, that could happen. But it's not the DAW.

That is it. 

There simply is no other answer here. If you are "hearing" a difference, then you're either doing something wrong, moving in the space, deluding yourself, or there's some other biasing going on. Perhaps your audio card settings are different; whatever. But all things being equal, there's NO way to alter the bitstream. When you build a DAW and you want to sum bitstream A with bitstream B, there are no choices to be made; it's not an art, it's science!

PCM audio is not complicated. None of this is complicated math. And even if it were, it wouldn't go wrong in a way that suddenly affected only one part of the frequency spectrum, so anybody claiming analog-like differences within the digital domain is just wrong. Sorry. It's the exact same simple mathematics: addition, mostly!
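To underline the "addition, mostly" point, here is essentially all a summing bus has to do, sketched in Python on integer samples (real engines do the math in 32- or 64-bit float, so the clamp shown here only matters at the final output stage):

```python
def mix(a, b):
    """Sum two 16-bit PCM streams sample by sample, clamping at full scale."""
    return [max(-32768, min(32767, x + y)) for x, y in zip(a, b)]

print(mix([100, -200, 30000], [50, -50, 10000]))  # [150, -250, 32767]
```

The third sample overshoots 16-bit full scale and gets clamped; everything else is plain addition, with no frequency-dependent behavior anywhere in sight.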


----------



## vgamer1982 (Nov 10, 2019)

So, for example, the Olafur patch mentioned: does it have filtering within the patch, an HPF or LPF? Are those filters using analog models? In that case, yes, you could hear a difference from one playback to the next; it would be minute, but it would be there. That said, it's NOT the DAW, and in fact two playbacks in the SAME DAW will then also sound different. And a null test won't work, because the audio being produced is actually different, but by design: the filter is the difference, and it won't respond exactly the same due to the designed nonlinearity. No magic there.


----------



## vgamer1982 (Nov 10, 2019)

Similarly, the analog filters in the D/A may change things by a tiny amount from playback to playback, but again, the effect of moving your head a few inches in any acoustic space will be orders of magnitude larger than the differences between the filters, because A/D and D/A filtering is designed to push aliasing outside the audible spectrum, so the chances of it being audible are nil. So is the audio coming out of the speakers precisely the same? Possibly not, though it's _not_ within human capability to detect that difference reliably; we're talking changes of thousandths of a dB at most...


----------



## Henu (Nov 10, 2019)

I didn't see it mentioned before, but while (besides pan law) I think this is 100% utter nonsense, there is one thing that could theoretically be happening, and that's upsampling differences between DAWs. Yet I don't know, per se, how this is handled in LPX.


----------



## babylonwaves (Nov 10, 2019)

vgamer1982 said:


> Unless you're dealing with either a deliberately nonlinear plugin as many are - or badly coded third party plugin or external gear, offline and online bounces are identical. It's another myth that they are different. The math is simply done faster than realtime, with the expense of not being able to monitor the bitstream sent realtime to a D/A.


All native hosts skip samples when playing back in real time, depending on the load, and that obviously changes and eventually degrades the sound. So, in order to get a good result, you need to bounce offline. It's not a myth at all. Also, there are differences between Core Audio and ASIO drivers.


----------



## Sheridan (Nov 10, 2019)

Hi everyone!

Long time lurker here, chiming in because I think I might have something to contribute to this thread.

Just like for the OP, the perceived difference between Logic and Cubase had been bugging me for some time, so last year I bit the bullet and performed the null test. Unfortunately, I don't have the sound files any longer, so I can't submit any "evidence", but I will walk you through what I did and you can decide.

I loaded up a strings patch from Albion One in Kontakt in both Cubase and Logic on the same Mac. I reset the round robins to ensure the same wave file would play when the note was triggered. The pan law was set the same in both DAWs, and no automation or additional effects were applied (Admiral Bumblebee has shown that these are indeed different between DAWs).

I then captured the resulting audio outside of the DAW using Rogue Amoeba's Audio Hijack software. This produced two wave files, one each from Logic and Cubase. I imported the files into Cubase and phase-inverted one of them.

At first they didn't null; there was still some sound coming out of the stereo master. I thought that maybe there was a fundamental difference after all. But when I looked more closely at the two original waveforms, I noticed that the Logic one was ever so slightly lower in amplitude.

So I increased the level of the Logic file by 3 dB and, presto, now the files nulled! Well, at least the stereo master level peaked below -110 dB, which I think was due to the difficulty of lining up the waveforms optically; even at max zoom I was probably a couple of samples off.
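For anyone wanting to reproduce the gain-matching step, the dB-to-linear conversion is one line. A short Python sketch with made-up sample values:

```python
import math

def db_to_gain(db):
    """Convert a level change in dB to a linear amplitude factor."""
    return 10 ** (db / 20)

def apply_gain(samples, db):
    """Scale float samples by a dB amount, e.g. before null-testing."""
    g = db_to_gain(db)
    return [s * g for s in samples]

original = [0.5, -0.25, 0.1]          # arbitrary float samples
quiet = apply_gain(original, -3.0)    # simulate the 3 dB quieter bounce
restored = apply_gain(quiet, 3.0)     # gain-match before inverting and summing
residual = max(abs(r - o) for r, o in zip(restored, original))
print(db_to_gain(3.0))                # ~1.41: a 3 dB boost in amplitude
print(residual)                       # effectively zero: the files now null
```

Notably, 3 dB is also the typical center attenuation of a pan law, which is consistent with the pan-law explanations floated earlier in the thread.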

So, IMHO, Living Fossil is correct that the difference is due to the output levels, and this impacts the perception. I had always thought the midrange in Kontakt in Cubase was slightly clearer; now I am confident that Cubase and Logic sound exactly the same, and I have opted for Logic for workflow and stability reasons.

Of course, if anyone else could take the time to perform a similar test that would provide independent verification and we could bury the topic once and for all.
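For anyone who wants to repeat this, here is a minimal sketch of the gain-matched null test in Python. It uses synthetic sine tones as hypothetical stand-ins for the Logic/Cubase captures (in a real test you'd load the two recorded wav files instead), but it shows why a raw phase-invert fails while a level-matched one nulls:

```python
import math

SR = 48000           # sample rate (Hz)
FREQ = 440.0         # test tone frequency
N = SR // 10         # 100 ms of audio

def db_to_gain(db):
    return 10 ** (db / 20.0)

def peak_db(x):
    peak = max(abs(s) for s in x)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

# Stand-ins for the two captures: the same waveform, one rendered 3 dB lower
tone = [math.sin(2 * math.pi * FREQ * n / SR) for n in range(N)]
capture_a = tone                                    # e.g. the Cubase file
capture_b = [s * db_to_gain(-3.0) for s in tone]    # e.g. the Logic file

# Naive null: invert one file and sum -- does NOT cancel (levels differ)
raw_null = [a - b for a, b in zip(capture_a, capture_b)]

# Gain-matched null: bring the quieter file up by 3 dB first, then sum
matched_b = [s * db_to_gain(3.0) for s in capture_b]
matched_null = [a - b for a, b in zip(capture_a, matched_b)]

print(f"raw null peak:     {peak_db(raw_null):8.1f} dBFS")
print(f"matched null peak: {peak_db(matched_null):8.1f} dBFS")
```

In practice you also need sample-accurate alignment; being a sample or two off leaves a residual, which is probably why my null bottomed out around -110 dB rather than digital silence.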


----------



## Mishabou (Nov 10, 2019)

Sheridan said:


> Hi everyone!
> 
> Long time lurker here, chiming in because I think I might have something to contribute to this thread.
> 
> ...



If you reset the round robins, pan law was set to the same, no automation or additional effects applied and all levels at unity... then why would one DAW be 3 dB louder than the other?


----------



## Living Fossil (Nov 10, 2019)

Mishabou said:


> If you reset the round robins, pan law was set to the same, no automation or additional effects applied and all levels at unity... then why would one DAW be 3 dB louder than the other?



Could be related to the master output level or similar.

But:
Why don't you just post two snippets?

This thread is going nowhere without evidence.
If you're too lazy to do two little bounces, there's no sense debating any further.
But who knows, maybe it's not even laziness but the fear of being expelled from Placebo-land
(which, of course, is the most beautiful place in nonexistence).
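For what it's worth, a pan-law mismatch alone would produce an exact 3 dB offset. A quick sketch of the arithmetic (the settings here are purely hypothetical, just to illustrate how the number falls out):

```python
import math

def center_gain(pan_law_db):
    """Linear gain a DAW applies to a center-panned channel under a given pan law."""
    return 10 ** (pan_law_db / 20.0)

# Hypothetical mismatch: one host effectively applying a 0 dB pan law to the
# instrument channel, the other applying -3 dB at center
g_host_a = center_gain(0.0)     # unity at center
g_host_b = center_gain(-3.0)    # 3 dB down at center

offset_db = 20 * math.log10(g_host_a / g_host_b)
print(f"resulting level offset: {offset_db:.1f} dB")
```

Even when both panners read "center" and both pan-law menus show the same value, it's at least conceivable that the two hosts apply the law to different channel types, so a measured offset like this doesn't by itself prove a summing difference.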


----------



## vgamer1982 (Nov 12, 2019)

babylonwaves said:


> All native hosts skip samples when playing back in realtime, depending on the load, and that, obviously, changes and eventually degrades the sound. So, in order to get a good result, you need to bounce offline. It's not a myth at all. Also, there are differences between Core Audio and ASIO drivers.



No, they don't. It's trivial to play back a PCM audio stream without "skipping" samples.


----------



## Dewdman42 (Nov 12, 2019)

That's the first time I've ever heard that claim too. I tend to doubt its veracity, but I remain open-minded for the moment until we hear more.

I do not personally think for one second that a real-time bounce will render anything different from a non-real-time bounce. I am pretty confident they would produce equal results, tested easily with a null test. The only difference is that one can do it faster than the other.


----------



## babylonwaves (Nov 13, 2019)

vgamer1982 said:


> No, they don't. It's trivial to play back a PCM audio stream without "skipping" samples.


Another poster already said that when you do a null test with two audio files from LPX and CB, once level matched both cancel out perfectly. My comment was in regard to what the OP said, and also to what the majority of posters here are talking about: VIs. And that's a different story, because e.g. Kontakt performs differently in the two hosts. There are Kontakt patches which overload the last core in Logic pretty fast and might still be fine playing the same notes in Cubase. That's why you need to bounce before you compare.


----------



## vgamer1982 (Nov 17, 2019)

babylonwaves said:


> Another poster already said that when you do a null test with two audio files from LPX and CB, once level matched both cancel out perfectly. My comment was in regard to what the OP said, and also to what the majority of posters here are talking about: VIs. And that's a different story, because e.g. Kontakt performs differently in the two hosts. There are Kontakt patches which overload the last core in Logic pretty fast and might still be fine playing the same notes in Cubase. That's why you need to bounce before you compare.



If Kontakt is merely triggering samples, it's linear PCM audio streaming. It will be identical. It's not the host DAW that's the difference. If Kontakt is actually overloading a core and therefore causing audio glitches, that's not really anything to do with the DAW unless the DAW is written in an inefficient manner (which they aren't). If the CPU is overloading, we're not talking about one DAW "sounding" better than the other, we're talking about an actual fault/bug. That's very different.


----------



## foxrec (Apr 18, 2020)

Sheridan said:


> Hi everyone!
> 
> Long time lurker here, chiming in because I think I might have something to contribute to this thread.
> 
> ...


..yes, I did the null test too and it cancels, but there is still a difference to my ear. It's not about the loudness (+/- dB), it's about the sound: it's cleaner, more open and 3D in Cubase... I just can't understand why I'm hearing this...


----------



## foxrec (Apr 18, 2020)

Sheridan said:


> Hi everyone!
> 
> Long time lurker here, chiming in because I think I might have something to contribute to this thread.
> 
> ...


...and I did another test: I sent 2 files to my friend, who works in Logic too, without telling him which was which... and he also said that file 2 (Cubase) sounded more open and 3D... regards


----------



## Zero&One (Apr 18, 2020)

foxrec said:


> ...and I did another test: I sent 2 files to my friend, who works in Logic too, without telling him which was which... and he also said that file 2 (Cubase) sounded more open and 3D... regards



Post them up with a poll.
A
B
Same

I suspect it's just placebo, but it would be interesting.


----------



## Bear Market (Apr 18, 2020)

foxrec said:


> ..yes, I did the null test too and it cancels, but there is still a difference to my ear. It's not about the loudness (+/- dB), it's about the sound: it's cleaner, more open and 3D in Cubase... I just can't understand why I'm hearing this...



Confirmation bias?


----------



## Sheridan (Apr 18, 2020)

I would like to stress that in my tests the audio was captured outside the DAW applications and when adjusted for loudness the files nulled. Logic = Cubase = Pro Tools = Kontakt Standalone.


----------



## foxrec (Apr 18, 2020)

Bear Market said:


> Confirmation bias?


If you have Cubase 10 too, please make a test: make one 8-bar MIDI file, use the same one in Logic and Cubase, and export at the same volume without any plugins... I would like to hear your opinion. Regards and stay safe


----------



## foxrec (Apr 18, 2020)

foxrec said:


> If you have Cubase 10 too, please make a test: make one 8-bar MIDI file, use the same one in Logic and Cubase, and export at the same volume without any plugins... I would like to hear your opinion. Regards and stay safe


...with the same VST/AU instrument...


----------



## Living Fossil (Apr 18, 2020)

foxrec said:


> ..yes, I did the null test too and it cancels, but there is still a difference to my ear,



If it nulls, there is no difference.
End of discussion, case closed.


----------



## Ashermusic (Apr 18, 2020)

Of the top 100 things that will affect the quality of your final mix, this is #1027.


----------



## brenneisen (Apr 18, 2020)

Living Fossil said:


> If it nulls, there is no difference.



NO! Some golden ears go beyond your silly math!1


----------



## foxrec (Apr 18, 2020)

Living Fossil said:


> If it nulls, there is no difference.
> End of discussion, case closed.


Haha ok...


----------



## Living Fossil (Apr 18, 2020)

foxrec said:


> Haha ok...



Do you want to tell us why you think it's funny?


----------



## Zero&One (Apr 18, 2020)

B!!!


----------



## foxrec (Apr 18, 2020)

Living Fossil said:


> do you want to tell why you think it's funny?


No, I'm just saying that to me it sounds different... maybe not to you, and that's fine...


----------



## Living Fossil (Apr 18, 2020)

foxrec said:


> No, I'm just saying that to me it sounds different... maybe not to you, and that's fine...



Each time you listen to the same audio file, your brain focuses on slightly different aspects.
So, the identical file sounds different inside your brain.
But that difference is a matter of *perception*.
The file is still identical.

When two files null, it means they are exactly the same.

Why is it so hard to get this???


----------



## dzilizzi (Apr 18, 2020)

I've always wondered if AU vs VST makes a difference, since the 1s and 0s aren't actually going to be the same. I don't have good enough ears to hear it, but maybe someone else does?

But when you are talking about Kontakt and MIDI, there may also be differences in how each DAW handles modifications, etc., even from the same MIDI file, depending on the settings. To do a real comparison, you would have to make sure the settings are similar, unlike with a sine wave audio file.

Edit: of course, not if they null.


----------



## DS_Joost (Apr 18, 2020)

josejherring said:


> On a side note, Reason has made such advances lately that I can finally admit that I use Reason without the stigma associated with it. Yes, I use Reason. I love it!



Might be off-topic, but you are not alone, Jose. It's my primary DAW!


----------



## Dewdman42 (Apr 18, 2020)

It amazes me that this topic keeps coming up on internet forums. 

Null means null. Identical. There is nothing different. If it actually does null, then you aren't hearing anything different. Your mind might think so for psychological reasons.

Now that being said... aside from very controlled experiments with test tones, I find it highly unlikely that anyone was able to mix an actual music project in two different DAWs and get them to actually null. If they say they did, I don't think they know the meaning of the word "null". There are too many factors; it would be utterly improbable that anyone would ever mix down a project, with all the various plugins and faders, in exactly the same way in two DAWs and get a null result.

So yes, it's possible that you mixed a project in one DAW and liked the sound better than the other, but based on scientific evidence that has been tested countless times in the past few decades, that is not due to some fundamental DAW voodoo... it's simply the way you mixed the project in each DAW.

I have never seen anyone actually provide sound files of two mixed projects from two different DAWs that actually do null. I find it highly unlikely that anyone has ever been able to do that. But many simple tests with test tones between two DAWs have been done, with perfectly nulled results... meaning exactly the same result. This is scientific. If you deny this science, then I can't help you; I'd suggest getting a voodoo doll to help you with your mixes.

But it also has to be recognized that people work differently in different environments, and one DAW may very well lead someone to create better sounding mixes than another, simply because of the way they are using it... but that is entirely subjective; any DAW can produce absolute magic in the right hands.

All of that being said... Admiral Bumblebee has also scientifically measured differences between DAWs while performing automated fades. So if your project has a lot of automated fades happening (not MIDI fades), then definitely all DAWs are not created equal. Unfortunately, in that test, Cubase didn't fare any better than Logic Pro in terms of intermodulation noise. But then... maybe you like the Cubase flavor of intermodulation noise.


----------



## José Herring (Apr 18, 2020)

via Imgflip Meme Generator


----------



## dzilizzi (Apr 18, 2020)

Dewdman42 said:


> It amazes me that this topic keeps coming up on internet forums.
> 
> Null means null. Identical. There is nothing different. If it actually does null, then you aren't hearing anything different. Your mind might think so for psychological reasons.
> 
> ...


Hey! I want one of those voodoo dolls that make my mixes sound better. Where do I get one???

Okay, I'm not a computer programmer. Is there a difference between the programming of a VST vs an AU plugin? I know there are differences between Core Audio and ASIO, but then the results won't null. Maybe I'm just confusing myself. And does it really matter? I think I may have been locked up in my house too long...


----------



## Dewdman42 (Apr 18, 2020)

Sure, there is different code in AU and VST... but usually, if you have a certain plugin by a certain developer, they are using the same underlying DSP code in both plugins to process the actual digital audio. It would be very, very unlikely that a VST version would sound different than the AU version of the same plugin from the same developer, unless they intentionally used different DSP code in the two plugins... which would be very strange if they did.

Your audio doesn't flow through all the code in a plugin... the digital audio flows through certain DSP calculations. All the other overhead related to the GUI or the AU/VST mechanisms is irrelevant to the sound.


----------



## dzilizzi (Apr 18, 2020)

Dewdman42 said:


> Sure, there is different code in AU and VST... but usually, if you have a certain plugin by a certain developer, they are using the same underlying DSP code in both plugins to process the actual digital audio. It would be very, very unlikely that a VST version would sound different than the AU version of the same plugin from the same developer, unless they intentionally used different DSP code in the two plugins... which would be very strange if they did.
> 
> Your audio doesn't flow through all the code in a plugin... the digital audio flows through certain DSP calculations. All the other overhead related to the GUI or the AU/VST mechanisms is irrelevant to the sound.


Thanks. That makes sense to me, but no one properly explained it. 

Now about the voodoo doll.....


----------



## kitekrazy (Apr 18, 2020)

This debate will never be conclusive. People say this when they open up different DAWs. The solution is make crappy music and no one will care.


----------



## vgamer1982 (Apr 20, 2020)

Null is null. Same 1s and 0s.
The problem here usually is that people don't realize how much moving your head just _half an inch_, in any normal kind of acoustic space, does to the sound (orders of magnitude bigger than any possible difference here, even if it didn't null, due to comb filtering and numerous other acoustical effects). So yes, sometimes people are hearing a difference, but... it's just that they can't keep their head still. Short of clamping it in place...

We even assess sound using micro-movements you would swear blind you don't make, because you don't know about them. Your brain is localizing sound all the time through the roughly 600 microsecond maximum difference between the left ear and the right ear. If 600 microseconds and tiny notches in the 5k to 8k range affect whether you hear a sound as above or behind you, imagine how much the ILD and ITD changes of moving just a teeny tiny bit do.

Then the next problem is that human hearing isn't linear, isn't static, and is enormously influenced both by the thing you heard immediately prior and by the environment. A/B listening is approximate at best. The brain tunes into sound very selectively. There is no such thing as objective hearing. The very act of playing A influences B, even if you could clamp your head in place. Just moving your jaw a little changes the perceived sound. So it's often not that people are actively being obtuse; they may actually be hearing a difference. Go double blind and boom, all of it goes away, but people still swear they hear the difference, and sometimes they do hear "a" difference. It's hearing that is at fault, and physics and acoustics. Not the DAW.

It's just that, absolutely, scientifically, mathematically, there is no difference in the DAWs' ability to do simple arithmetic, which is far from a complicated thing and is not done differently in any way that would be audible by humans, or any other species, or any measurement microphone or system.
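To put rough numbers on the head-movement point, here's a back-of-the-envelope sketch (speed of sound and the distances are rough assumptions, not precise psychoacoustic figures):

```python
SPEED_OF_SOUND = 343.0        # m/s, room temperature (rough assumption)
SAMPLE_RATE = 48000

# ~0.2 m of extra path length around the head gives the classic ~600 us max ITD
max_itd_us = 0.206 / SPEED_OF_SOUND * 1e6

# Move your head half an inch (1.27 cm) toward one speaker: the path-length
# change alone shifts the interaural timing by tens of microseconds
itd_shift_us = 0.0127 / SPEED_OF_SOUND * 1e6

sample_period_us = 1e6 / SAMPLE_RATE   # ~20.8 us per sample at 48 kHz

print(f"max ITD: {max_itd_us:.0f} us")
print(f"ITD shift from a half-inch head move: {itd_shift_us:.1f} us "
      f"(~{itd_shift_us / sample_period_us:.1f} samples at 48 kHz)")
```

So a tiny, unconscious head movement changes the physical signal at your ears by a timing difference spanning multiple samples, while two files that null differ by nothing at all.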


----------



## vgamer1982 (Apr 20, 2020)

Also, whenever these discussions come up, if they are intrinsically about the simple mathematics of summing digital audio, it's like saying that one DAW computes 1 + 1 = 2 and the other 1 + 1 = 2.00000000023. Even if it were true, the idea that that 0.00000000023 would be a) audible and b) favorable to a particular part of the audio spectrum is _insane_, not to mention that what is being claimed is an error that would have catastrophic effects on other simple computing tasks. There are quantum domains in which such errors play a huge part, and there are issues with transmission in digital audio that (if catastrophically bad) can impact audio, but again not in any particular part of the frequency spectrum that would be "beneficial"... these are basically home PCs running really, really simple software. It's beyond bizarre, but confirmation bias and simple physiology push people to strange conclusions.
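To make the orders of magnitude concrete, here is that hypothetical summing error expressed in dB (purely an illustration of the arithmetic; no DAW actually produces such an error):

```python
import math

# The hypothetical summing "error": one DAW computes 1 + 1 = 2,
# the other 1 + 1 = 2.00000000023 (illustration only, not a real DAW bug)
error = 0.00000000023        # absolute error on a sum of 2.0
rel = error / 2.0            # relative to the signal

error_db = 20 * math.log10(rel)

# Compare against the quantization floor of a 24-bit file
floor_24bit_db = 20 * math.log10(2 ** -23)

print(f"hypothetical summing error: {error_db:.0f} dB below the signal")
print(f"24-bit quantization floor:  {floor_24bit_db:.0f} dBFS")
```

So even if the claimed error existed, it would sit roughly 60 dB below what a 24-bit file can even represent, let alone what anyone could hear.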


----------



## IFM (Apr 25, 2020)

So I thought I used to notice a difference but then realized Cubase has a hotter output than LPX. Your brain will always think the louder one is better.


----------

