# LUFS confusion



## Alex JD (Oct 30, 2020)

I'm confused. 
So I'm getting started in mixing and this argument is giving me a headache.
As far as I understand, there is a standard for loudness which should guide you when you make music. I mostly do soundtrack and cinematic music. Reading a lot of stuff on the net, I came to the conclusion of keeping peaks around -15 to -6 when tracking, and the final mix peaking around -6 to -3 for mastering. That should hopefully give a good balance and allow it to be mastered a bit louder with decent headroom.

Then I came across LUFS.
According to some standards, a track on Spotify should be max -14 LUFS integrated (so an average around -16? I didn't really get it), while for TV, movies etc. it should be max -23. I tried to reach an average of -23 on a track between gain staging and faders, and it sounds pretty low in volume. And that's all before mastering. Assuming the volume will rise a bit with mastering as well, I would already be hitting the wall of -23 and therefore be out for TV soundtracks. Does that mean I should mix a track at a -27 average if I were composing a soundtrack for a movie? With dynamic tracks that sounds ridiculous to me; I mean, I could have a quiet track with a crescendo at the end, and because of that final peak I'd have to lower the whole track, basically?
Or am I heavily misunderstanding everything?


----------



## reborn579 (Oct 30, 2020)

the confusion here stems from the fact that there is a difference between *peak volume* and *LUFS*. 
peak volume is the absolute maximum level a mix / track reaches at a moment in time, while LUFS is basically an average of volume over a time period. the latter is how our ears actually perceive loudness, which is why it's in a way more relevant than peak. the recommendation of a maximum mix peak of -6dB to -10dB comes from the fact that if you set up your track volumes so the mix doesn't go over that limit, you'd end up with a reasonably good loudness (not too loud, not too quiet).
most volume meters show several LUFS values for different time periods (momentary, short-term, integrated, etc.). when spotify / youtube etc. say -14 lufs, they mean *integrated loudness*, which is basically an average of the loudness over the entire duration of the song.
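To make the peak-vs-average distinction concrete, here is a toy Python sketch (the helper names are made up; it uses a plain mean-square average and leaves out the K-weighting filter and gating that real BS.1770 LUFS meters apply):

```python
import math

def peak_db(samples):
    """Absolute sample peak in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

def mean_loudness_db(samples):
    """Mean-square level in dB: the 'average over time' idea behind
    integrated LUFS, minus K-weighting and gating."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

# A short loud burst followed by long quiet material: the peak is high,
# but the average over the whole signal is far lower.
signal = [0.9, -0.9] * 10 + [0.05, -0.05] * 1000
print(round(peak_db(signal), 1))           # about -0.9 dBFS
print(round(mean_loudness_db(signal), 1))  # far below the peak
```

Two tracks with identical peaks can therefore have very different integrated loudness, which is why a -14 LUFS figure says nothing directly about where your peaks sit.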
check out this old SOS article. it doesn't really talk about lufs, but it's still very good. 





Gain Staging In Your DAW Software (www.soundonsound.com): "Despite the immense power and flexibility available in modern DAW software, many people still find that the mixes they craft entirely 'in the box' sound unsatisfying. Why is that?"




basically, a difference of 1 LUFS is a difference of 1 dB, so you can think of it that way.
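That equivalence is easy to check numerically: applying a 1 dB gain change to a signal moves any power-average reading by exactly 1 dB, i.e. 1 LU. A toy sketch (made-up helper names, plain mean-square in place of a real gated, K-weighted LUFS measurement):

```python
import math

def loudness_db(samples):
    """Mean-square level in dB, standing in for integrated LUFS."""
    return 10 * math.log10(sum(s * s for s in samples) / len(samples))

def apply_gain_db(samples, gain_db):
    """Scale a signal by a gain given in dB."""
    factor = 10 ** (gain_db / 20)  # dB -> linear amplitude factor
    return [s * factor for s in samples]

signal = [0.3, -0.2, 0.25, -0.35] * 100
delta = loudness_db(apply_gain_db(signal, 1.0)) - loudness_db(signal)
print(round(delta, 6))  # 1.0 -- a 1 dB gain change moves the reading by 1 LU
```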

i recommend using youlean loudness meter for checking your volumes.








Download Youlean Loudness Meter (youlean.co)


----------



## Alex JD (Oct 30, 2020)

Thanks for the explanation, I'll check out the article and the plugin.



reborn579 said:


> the confusion here stems from the fact that there is a difference between *peak volume* and *lufs*


Right, that's a thing that, weirdly, I didn't grasp. I somehow thought of peak as a requirement to achieve (e.g. if -6 wasn't reached, it was a mistake).

So OK, when they say max -14, they mean the average of the song. Does that mean there could be a peak in the song that goes over that, and that it wouldn't be compressed down too much by youtube, spotify etc.?


----------



## storyteller (Oct 30, 2020)

I am pretty sure any great mixing engineer will have the same feeling about LUFS... that is: $&@#%!$&. For all the good that is intended by its use, it also creates problems for artistic creativity, restricting the final output to a formula rather than what “sounds right.” On streaming services, it keeps all of the songs on the same playing field so that people aren’t playing with their volumes, but it also takes away from the dynamics of a song... and thus crushes emotion. For film... well, read through the many threads here on dynamic range and how our current landscape consists of some of the best mixed, incredibly orchestrated, but worst sounding soundtracks of all time. 

Get Waves WLM or something similar and run your favorite mixes through it. You’ll see the commonalities in them. Then run your mix through it. You will likely see that it needs lots of limiting and compression in the mastering process to wind up with the numbers these standards require.


----------



## reborn579 (Oct 30, 2020)

Alex JD said:


> Does that mean that there could be a peak on the song that goes over that and that It wouldn't compressed down too much by youtube spotify etc?


yes, that's exactly what it means. actually, most modern music has peaks around -1dB, or even straight up 0dB. find a .flac file of 'daft punk - get lucky' and put it in your daw. you'll see that it clips constantly - albeit very slightly. so a song at -14 lufs could easily have a lot of peaks that reach -1dB, and it probably won't get compressed at all.
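As a toy illustration of how independent the two numbers are, here are two signals with identical -1 dBFS peaks but wildly different averages (hypothetical helpers; plain mean-square instead of a true K-weighted, gated LUFS measurement):

```python
import math

def peak_db(samples):
    """Absolute sample peak in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

def loudness_db(samples):
    """Mean-square level in dB, a rough stand-in for integrated LUFS."""
    return 10 * math.log10(sum(s * s for s in samples) / len(samples))

peak = 10 ** (-1 / 20)  # roughly 0.891 linear, i.e. -1 dBFS

# Heavily limited material: almost every sample sits at the peak level.
limited = [peak if i % 2 == 0 else -peak for i in range(1000)]
# Dynamic material: the same -1 dB peak, but mostly quiet in between.
dynamic = [peak, -peak] + [0.1, -0.1] * 499

print(round(peak_db(limited), 1), round(loudness_db(limited), 1))  # -1.0 -1.0
print(round(peak_db(dynamic), 1), round(loudness_db(dynamic), 1))  # -1.0, much lower
```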



storyteller said:


> On streaming services, it keeps all of the songs on the same playing field so that people aren’t playing with their volumes, but it also takes away from the dynamics of a song


i think the effect is quite the opposite. the LUFS system was introduced after everyone started pumping their songs to the max so they'd be louder than the competition. a level playing field actually means more dynamics. 

some people say that lufs is what stopped the infamous 'loudness wars'.


----------



## Scoremixer (Oct 30, 2020)

At the mix stage? Ignore it. Totally disregard it.

If you're just starting out, the only requirements you need to have of yourself is that your mix sounds good to your own ears, and that it's not clipping. 

Loudness standards are a good thing, but they're most relevant to the professionals who act as the final 'gatekeepers' to the distribution channels - that is, the engineers who do the final mixes for film and broadcast, and mastering engineers. 

Even among those professionals, it's really only the dubbing and broadcast engineers who have very tightly defined limits to hit. The LUFS levels stated by Spotify, Youtube etc are only suggestions, and everything I've had mastered by pro mastering engineers in the past few years has clocked in well in excess of the 'recommended' levels (and not suffered for it either, IMO). 

The internet and youtube tutorials like to make a big deal out of LUFS, but I've never seen a single mix engineer look at a LUFS meter or worry about hitting an arbitrary average loudness number (and I've assisted a lot of them over the years). 

If you're doing your own mastering, that is the time I'd suggest putting a meter after your final limiter and just double-checking you're in the ballpark of something commercially released that's similar. But bear in mind that if you're doing dynamic music (like anything orchestral), then going purely by the meters you could end up making a soft track inappropriately loud to compete with a fff piece of action music, so always let your ears be the guide.


----------



## reborn579 (Oct 30, 2020)

Scoremixer said:


> If you're doing your own mastering, that is the time I'd suggest putting a meter after your final limiter


i gather that he (just like a lot of people around here) will probably take care of his own mastering, so it is important.
and even if you have your own mixing engineer and mastering engineer, it's not a bad habit to gain stage your song from the beginning. 

but i do agree - always let your ears be the guide.


----------



## Alex JD (Oct 30, 2020)

Scoremixer said:


> If you're doing your own mastering, that is the time I'd suggest putting a meter after your final limiter and just double checking you're in the ballpark to something commercially released that's similar.



The thing is that my goal is to make music for work, so I need to pay attention to this stuff to deliver a good product. I usually mix my samples (not mastering them yet), but I'm a composer, and if there is budget I usually hire a mixer to work on the track. And I don't want these standards to ruin my work because it was too loud etc.
But as you said, orchestral music is very dynamic, and paying attention to that is very limiting in the choices you make while writing a piece. 
Also, I suppose a song would then need more mastering sessions (one for streaming services, one for the movie itself, for example), or can I master a song to my own standard and the editor/engineer will take care of that?

So the thing is: when I record, should I stick to the rule of mixing with -3 dB of headroom for mastering, without worrying too much about loudness, since I assume the mastering engineer will be able to take care of that if necessary?


----------



## reborn579 (Oct 31, 2020)

Alex JD said:


> So the thing is, when I record should I stick to the rule of mixing it with -3 db headroom for mastering


i would make that somewhere between -6dB and -8dB. 



Alex JD said:


> Also then I suppose a song would need more mastering sessions


one mastering session will do. if you make films, you will usually send unmastered stems to the dubbing mixer, and he will mix your music with the sound effects and dialogue. 

you should check out trevor morris' youtube channel. he has this cool series called 'how your music makes it to the screen'. very well made and very informative.


----------



## Scoremixer (Oct 31, 2020)

Alex JD said:


> The thing is that my goal is to make music for work so I need to pay attention to this stuff to give a good product.


You really only need to give them a passing glance at the end of the process. And as I said above, a large proportion of professionally mastered work doesn't adhere to these 'standards' and doesn't suffer because of it.



Alex JD said:


> I usually mix my samples (not mastering them yet) but I'm a composer and If there is budget I usually would hire a mixer to work on the track. And I don't want these standards to ruin my work because it was too loud etc.



If you're working entirely in the box, as long as you're not clipping at any stage in the process then nothing will be ruined.



Alex JD said:


> Also then I suppose a song would need more mastering sessions (one to put on streaming services, one for the movie itself for example), or can I master a song with my own standard and the editor/engineer will take care of that?



Music written specifically for films doesn't get mastered. It goes to the dubbing mixer, and it's their job to incorporate it into the overall soundtrack whilst paying attention to the overall technical requirements. If you're mastering your own stuff, then that is the only stage in the process I'd pay attention to LUFS, and even then I'd compare it with something similar that's commercially released and use that as a general guideline loudness, rather than slavishly trying to adhere to -16 LUFS or whatever.



Alex JD said:


> So the thing is, when I record should I stick to the rule of mixing it with -3 db headroom for mastering without worrying too much about loudness since I assume the master engineer will be able to take care of that if necessary.



That's not a rule. It's a nice guideline that mastering engineers try to encourage so as not to receive stuff that's clipped or pre-slammed through a limiter, but again, as long as you're not clipping (or silly quiet) you'll be fine.

Again - loudness standards are a nice thing to keep in the back of your mind, but I'd only worry about them at the end of the process when you're judging where to set your limiter on your final master. Otherwise, the only things you should worry about are a) making it sound nice and b) don't let it clip.


----------



## reborn579 (Nov 9, 2020)

this is a really interesting video on loudness in the streaming age. a very sensible talk from alan silverman.


----------



## Nate Johnson (Nov 21, 2020)

I've found it incredibly helpful to monitor LUFS and peak volume... for my own sanity. I use Levels from Mastering The Mix. It gives me some targets and helps keep consistency from piece to piece that I work on. It's not a perfect science though, that's for sure!


----------



## AudioLoco (Nov 21, 2020)

There are many, many new alleged standards of perceived loudness, expressed on the new trendy, almost hipster, LUFS meter.
Every digital platform tries to impose its own: iTunes, Spotify etc. Having done that (without even trying to find universal standards that match every platform), they have sown confusion and many myths about the loudness levels mastered music should aim for.
While that is going on, and many try desperately to adhere to these "standards", the best mastering engineers in the world are putting out hits that COMPLETELY DISREGARD these alleged standards. No need to follow any of these guidelines in the real world unless you want your track to sound much, much quieter (and therefore apparently worse, as the human ear perceives "louder" as "better"). 
The general push for music that is less limited and squashed is great, but this is not the way to do it.


----------



## DovesGoWest (Nov 22, 2020)

I once watched a vid on YouTube where the guy pointed out a flaw with LUFS, since it's an average over time.

Take your track and play it through a meter, and say it comes back as -12 LUFS. Let's pretend Spotify says "we play at -12", so your track won't be altered. Now add a 32-bar section to the end of the track with a piano playing quietly. Remeasure the track and look: your LUFS just dropped to, say, -14, so now Spotify says "oh, we will turn your track up to make it -12".
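The arithmetic behind that example is easy to reproduce. A toy sketch (made-up names, plain mean-square average; note that real BS.1770 meters gate out passages far below the ungated average, which limits how much a *very* quiet section can drag the number down, but a moderately quiet one still does):

```python
import math

def integrated_db(samples):
    """Whole-signal mean-square level in dB -- the 'average over the
    entire track' idea behind integrated LUFS (no K-weighting/gating)."""
    return 10 * math.log10(sum(s * s for s in samples) / len(samples))

body = [0.5, -0.5] * 1000           # the loud body of the track
quiet_outro = [0.05, -0.05] * 1000  # a quiet piano section tacked on the end

print(round(integrated_db(body), 1))                # track alone
print(round(integrated_db(body + quiet_outro), 1))  # lower once the outro is added
```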


----------



## ReleaseCandidate (Nov 22, 2020)

AudioLoco said:


> No need to follow any of these guidelines in the real world unless you want your track to sound much much quieter (and therefore appearently worst, as the human ear perceives "louder" as "better").



Well, your advice would actually be useful _if_ the platforms didn't lower the gain of low-dynamics songs according to LUFS. The less dynamics your song has, the less loud it is perceived in comparison to a song with the same LUFS value but more dynamics (whose loudest passages are actually louder than the target LUFS value) - because louder is only better if you have something quieter to compare it to.
To put it the other way: for LUFS, two passages both at -0.5 dB come out the same (well, the algorithm is not actually a simple average) as one passage at 0 dB and one at -1 dB, but the second 'song' is what humans call 'louder'.







But yes, if you'd master your song to -23 LUFS and the platform uses -13 LUFS and doesn't adapt your gain, it would be quieter.



DovesGoWest said:


> I once watched a vid on YouTube where the guy pointed out the flaw with LUFS, as this is an average over time.
> 
> take your track and play it through a meter and say it comes back as -12LUFS. So let’s pretend Spotify says we play at -12, so your track won’t be altered. Now add a 32 bar section to the end of the track with a piano playing quietly. Remeasure the track and look your LUFS just dropped to say -14



That's actually not a flaw but part of the fundamental design decision of LUFS: you can have parts that are louder (and/or quieter), but not too many or for too long (so you don't alter the average).


----------



## rrichard63 (Nov 22, 2020)

DovesGoWest said:


> Now add a 32 bar section to the end of the track with a piano playing quietly. Remeasure the track and look your LUFS just dropped to say -14, so now Spotify says oh we will turn your track up to make -12


Yes, the definition of loudness units (LU) assumes that variations over time are more or less uniformly distributed in time. For some styles of music this seems to be a weakness. Can anybody think of a way that standards bodies and/or streaming services could improve the definition?


ReleaseCandidate said:


> That's actually not a flaw but part of the fundamental design decision of LUFS, that you can have parts that are louder (and/or quieter) but not too many or too long (so you don't alter the average).


As a result, the fundamental design imposes certain artistic choices on composers and performers. I can understand why some might consider that a flaw.


----------



## AudioLoco (Nov 22, 2020)

ReleaseCandidate said:


> Well, your advice would be actually useful, _if_ the platforms wouldn't lower the gain of low dynamic songs according to LUFS. The less dynamics your song has, the less loud it is perceived in comparison to a song with the same LUFS value but more dynamics (which means the loudest passages are actually louder as the target LUFS value) - because louder is only better if you have something quieter to compare to.
> To put it the other way: 2 times the loudness of 0.5dB is the same for LUFS (well, no the algorithm is actually not a simple average) as a loudness of one time 0dB and one time 1dB, but the second 'song' is what humans call 'louder'.
> 
> 
> ...




If you analyze the most recent major releases, none of them follow these guidelines.


----------



## ReleaseCandidate (Nov 22, 2020)

AudioLoco said:


> If you analyze the most recent major releases, none of them follow these guidelines.



Which guidelines, and which releases did you analyse, where? Can you point me to a major release on Youtube that has an integrated LUFS of more than -14? Or anything at a (major) broadcaster that doesn't adhere to EBU R 128?


----------



## DovesGoWest (Nov 22, 2020)

Google Ian shepherd and read his blog, he has an article where he talks about short and long term LUFS and what to aim for


----------



## twincities (Nov 22, 2020)

rrichard63 said:


> Can anybody think of a way that standards bodies and/or streaming services could improve the definition?



shorter integration times are probably the key. full "song" averages don't really make the most sense from a listener perspective, only a technical one - especially when a "song" can be whatever the artist decides, and throwing silence at the end of a single will allow them another dB in their chorus. writing standards so that "the loudest 1 minute cannot exceed -xx LUFS, otherwise the whole song will be turned down accordingly" seems like a better approach than the arbitrary thing we call a "song", which across genres may be 50 seconds or 30 minutes. 

time is at least an objective measure. it doesn't allow the first half of a song to be the loudest thing on a playlist/radio simply by having an acoustic/quiet second half, and it simultaneously doesn't punish consistently loud genres that may stylistically choose not to have quiet passages in certain songs. because as it stands, a pop song with lighter intros/outros, acoustic passages, etc. can get a lot "louder" peak to peak at its 808-drop choruses than a metal song that has pounding double bass the whole way through. measure each at its loudest minute and level based on that, and you have something that stays true to the songwriting intentions.
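A "loudest window" measurement like the one proposed above can be sketched in a few lines (hypothetical helper names, plain mean-square in place of K-weighted loudness):

```python
import math

def window_db(samples, start, length):
    """Mean-square level in dB of one window of the signal."""
    chunk = samples[start:start + length]
    return 10 * math.log10(sum(s * s for s in chunk) / len(chunk))

def loudest_window_db(samples, window_len, hop):
    """Slide a window across the signal and return the loudest average --
    the 'loudest N seconds cannot exceed -xx' idea."""
    return max(window_db(samples, start, window_len)
               for start in range(0, len(samples) - window_len + 1, hop))

# quiet intro, loud chorus, quiet outro
track = [0.05, -0.05] * 500 + [0.6, -0.6] * 500 + [0.05, -0.05] * 500
whole_track = window_db(track, 0, len(track))
print(round(whole_track, 1))                          # whole-track average
print(round(loudest_window_db(track, 1000, 100), 1))  # driven by the chorus alone
```

Normalizing on the loudest window rather than the full-track average would stop the quiet-outro trick, at the cost of ignoring the overall dynamic shape of the piece.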


----------



## rrichard63 (Nov 22, 2020)

twincities said:


> shorter integration times are probably the key. full "song" averages don't really make the most sense from a listener perspective, only a technical one. especially when a "song" can be whatever the artist decides, and throwing silence at the end of a single will allow them another dB in their chorus. writing standards so that "the loudest 1 minute cannot exceed -xxLUFS, otherwise the whole song will be turned down accordingly" seems like a better way than the arbitrary thing we call "songs" across genres, and may be 50 seconds, or 30 minutes.
> 
> time is at least an objective measure, and doesn't allow the first half of a song to be the loudest thing on a playlist/radio by simply having an acoustic/quiet second half, and simultaneously doesn't punish consistently loud genres that may stylistically choose not to have quiet passages in certain songs. because as it stands a pop song with lighter intro/outros, acoustic passages, etc, can peak to peak get a lot "louder" at it's 808 drop choruses than a metal song that has pounding double bass the whole way through. measure them each at their loudest minute and level based on that an you have something that stays stylistically true to the songwriting intentions.


This is a good explanation. I think I just learned something. Thanks.


----------



## jcrosby (Nov 22, 2020)

People tend to still master as if they're mastering for compact disc, at -9 or so. Most EDM genres are mastered for the club, which is louder than that (-7 / -6-ish; this is obscenely loud AFAIC, even though I do some mastering on the side in the genre...).

Basically it's useful to understand what will happen to your music when it winds up on a given platform, but in reality releases on average are still mastered to levels well beyond -14.


----------



## telecode101 (Nov 22, 2020)

AudioLoco said:


> If you analyze the most recent major releases, none of them follow these guidelines.



major releases as in "pop music"?


----------



## ReleaseCandidate (Nov 22, 2020)

twincities said:


> writing standards so that "the loudest 1 minute cannot exceed -xxLUFS, otherwise the whole song will be turned down accordingly" seems like a better way than the arbitrary thing we call "songs" across genres, and may be 50 seconds, or 30 minutes.



That's actually why there is the 'EBU R 128 s1' standard for short-form content, which imposes a short-term limit.

It's meant for ads and stuff on TV up to 2 minutes, but it is a better fit for songs than the whole-programme standard EBU R 128.



> that the Short-term Loudness Level (measured in compliance with EBU Tech 3341 [2]) should not exceed −18.0 LUFS (+5.0 LU on the relative scale). For the implementation of Loudness Levelling workflows (for example, in Quality Control environments) a tolerance of +0.2 LU is allowed, to take account of measurement errors;





https://tech.ebu.ch/docs/r/r128s1.pdf


----------



## AudioLoco (Nov 22, 2020)

ReleaseCandidate said:


> Which guidelines and which releases did you analyse where? So can you point me to a major release at Youtube that has an integrated LUFS of more than -14? Or anything at a (major) broadcaster that doesn't adhere to EBU 128?



Hi there, there is a lot of info regarding the subject available. Major releases currently don't leave the mastering houses at -14. 

For example, if you are interested, some insight from mastering engineers here:

Targeting Mastering Loudness for Streaming: Why NOT to do it 

All the best!


----------



## AudioLoco (Nov 22, 2020)

telecode101 said:


> major releases as in "pop music"?


Yes in pop, dance, rock, anything commercial.


----------



## ReleaseCandidate (Nov 22, 2020)

AudioLoco said:


> Hi there, there is a lot of info regarding the subject available. Major releases currently don't leave the mastering houses at -14.



But that's not the point (the loudness or peak level doesn't really matter). What matters is that you know that your content will be loudness normalized and not peak normalized as before.

It's fine that real mastering engineers know what they're doing, but we are _not_ talking about them. It's about people who _need_ guidelines (like comparison with other songs on their target platform, or loudness meters, or tutorials about 'how to use your limiter', or ...) because they're not sure exactly what they should do and why, because they are still learning (or have just begun).


----------



## Fab (Nov 23, 2020)

What you need...is a plugin!

I think 'Youlean' does a free version of their loudness meter which also includes LUFS requirements/averages; for film, TV and games. It's a good plugin in my opinion.


----------



## ReleaseCandidate (Nov 23, 2020)

Fab said:


> I think 'Youlean' does a free version of their loudness meter which also includes LUFS requirements/averages; for film, TV and games. It's a good plugin in my opinion.



I had some problems with Youlean crashing my DAWs, but it has a standalone version too.
Didn't happen with the free dpMeter https://www.tb-software.com/TBProAudio/dpmeter5.html or the free Melda loudness analyser https://www.meldaproduction.com/MLoudnessAnalyzer


----------



## gohrev (Nov 23, 2020)

I have two questions I'd like to ask:

1. If I understand correctly, one should aim for an average of -14 LUFS with the maximum level limited to, say, -2db. Correct?
2. If this were true, then what about a "minimum" loudness? In other words, how would one prevent their work from sounding too _quiet_?


----------



## ReleaseCandidate (Nov 23, 2020)

berlin87 said:


> I have two questions I'd like to ask:
> 
> 
> If I understand correctly, one should aim for an average of -14 LUFS with a max loudness limited to, say, -2db. Correct?



The maximum _true_ peak level should be limited to −1 dB(TP).
Yes, if you set your target like that, you'll have a result that should pass through the streaming services without much limiting. But you can afterwards change the gain of your song to something higher or lower; just don't touch the dynamics (and peaks, ...), so no more compression/limiting/whatever, only gain.
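The normalization step itself is just a gain offset: the player measures (or reads from metadata) the track's integrated loudness and shifts the whole track by the difference to its target. A sketch of that arithmetic, assuming a -14 LUFS target (the function name is made up; in practice positive gain is usually capped so true peaks stay under about -1 dBTP):

```python
def normalization_gain_db(measured_lufs, target_lufs=-14.0):
    """Gain in dB a loudness-normalizing player would apply.
    Negative: the track is turned down; positive: turned up (if allowed)."""
    return target_lufs - measured_lufs

print(normalization_gain_db(-9.0))   # -5.0: a hot master gets turned down 5 dB
print(normalization_gain_db(-16.0))  # 2.0: a quiet master may get turned up 2 dB
```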


Spotify's rules:

Spotify for Artists (artists.spotify.com): "Get more out of Spotify with tools & tips for artists and their teams."

How to Master for Streaming Platforms: Normalization, LUFS, and Loudness (www.izotope.com): "Whether you’re a mastering pro, or preparing your first release, this article will get you up to speed on how to master for Spotify, Apple Music, SoundCloud—and more—in 2022 and beyond."

berlin87 said:


> If this were true, then what about a "minimum" loudness? In other words, how would one prevent their work from sounding too _quiet _?



That's the real problem, because what's too quiet depends on where you are. On a plane you have a different ambient noise level than at home in your living room (hopefully). But for everyday use you have to test your music on a smartphone with earbuds in a bus/tram/metro/train/mall/... full of people.


----------



## gohrev (Nov 23, 2020)

Thank you @ReleaseCandidate, it seems I'm slowly getting the hang of this. Guess I was a bit too careful with my threshold of -2db  

One thing that isn't entirely clear to me, is how some pieces can sound so very loud, even when limited to -2db — but I guess this all depends on the frequencies?


----------



## ReleaseCandidate (Nov 23, 2020)

berlin87 said:


> One thing that isn't entirely clear to me, is how some pieces can sound so very loud, even when limited to -2db — but I guess this all depends on the frequencies?



Yes, perceived loudness isn't that easy to measure.
Again Izotope: https://www.izotope.com/en/learn/understanding-loudness-in-audio-mastering.html 

And there are many more scientific papers about perceived loudness.


----------



## gsilbers (Nov 23, 2020)

Scoremixer said:


> You really only need to give them a passing glance at the end of the process. And as I said above, a large proportion of professionally mastered work doesn't adhere to these 'standards' and doesn't suffer because of it.
> 
> 
> 
> ...



Imma gonna add more weight to this post.

based on the initial post, I think the confusion is not about the loudness standard but about the post-production process.

Composers don’t have to worry about loudness standards if they are submitting to a re-recording mixer so it can be mixed into a movie. They just need good monitoring and sensible levels.

the re-recording mixer will make sure to deliver at -24 LUFS etc., since they have to adhere to the deliverable specs, which are now law in most countries.

if the delivery is to music libraries or other mediums then yes, use Spotify or YouTube standards or whatever.

And something that may be surprising to some: for acoustic-based music, the less compressed and limited it is, the louder it will sound on Spotify etc., while for modern stuff like EDM and hip-hop it's the opposite. It depends on the track of course, but that seems to be the rule of thumb for these newer loudness standards on YouTube/Spotify.


----------



## ReleaseCandidate (Nov 23, 2020)

Articles about loudness (too loud) in cinemas:








Loudness And Dynamics In Cinema Sound - Part 1 (www.pro-tools-expert.com): "Whether you like it or not we have ended up with a Loudness war in cinema sound. Volume has gone up and dynamics have gone down. So what can be done about it?"

Loudness And Dynamics In Cinema Sound - Part 2 - Cinema Survey Results (www.pro-tools-expert.com): "In the first part of this series on Loudness And Dynamics In Cinema Sound, we learnt how the Dolby calibrated system has completely broken down, with films having to be played back at a much lower level because they have been mixed and mastered to be so much louder."


----------



## AudioLoco (Nov 23, 2020)

ReleaseCandidate said:


> But that's not the point (the loudness or peak level doesn't really matter). What matters is that you know that your content will be loudness normalized and not peak normalized as before.
> 
> It's fine that real mastering engineers know what they're doing, but we are _not_ talking about them. It's about people that _need_ guidelines (like comparsion with other songs on their target platform or loudness meters or tutorials about 'how to use your limiter' or ...) because they're not sure what and why they should do exactly, because they are still (or have just begun) learning.



Yes, conceptually you are correct. It just doesn't reflect what is going on in the real world (unfortunately).
And the whole guidelines-and-common-specs idea would be great if implemented by all, but the point is that it is not.

I am not talking about music, or audio generally, made only for media, as that tends to be further processed by whoever is working on the overall dubbing/re-recording mix. (If there are stated general level requirements from the client etc., sure, by all means follow the guidelines provided!)
I am referring to music to be released directly to the world, commercially or not.

People who are still learning and unsure, and who follow said guidelines, will inevitably get results that completely pale once a commercial track is played after theirs. I don't understand the point of capping their possible results by design.
It is like saying to a budding composer: "do not use articulation switches, because that is something only composers who know what they are doing can use".....
Again, not only for average listeners, but also for sound editors and licensing "deciders" scrolling through tracks to choose from: if a track is too quiet when played next to other tracks, it will give the impression of sounding worse and less "pro", even if it may sound amazing with proper gain matching.

To argue in favour of your statement: surely, absolutely, you are right that those specs would help someone inexperienced not to go overboard with over-compression and over-limiting and destroy their creations as a result.
If that is your point, we agree.
One of the challenging aspects, and the "dark art" of mastering, is the delicate balance between maximum possible loudness and pleasant dynamics, so those specs let you stay on the safe side - just, probably, too safe.


----------



## tav.one (Nov 23, 2020)

Something I learnt last year: Spotify *doesn't* normalise tracks while encoding. It's only done at the user end and only for those users who have set the normalise setting to ON.

All my mixes before 2019 were going in at -12, now everything goes at -8 to -10 again.


----------



## ReleaseCandidate (Nov 23, 2020)

AudioLoco said:


> The people that are still learning and are unsure and will follow said guidelines will inevitably have results that will completely pale once a commercial track is played after their track.



I still don't know which 'guidelines' everybody(?) is talking about. The only guideline is to not overly compress your song as if it were going to be peak normalized afterwards (the web is still full of videos on how to keep your meter in the yellow/red area). And check with some loudness meter whether your peaks are too loud; if not, you can set the gain to be as loud as you want and be (almost) sure that no loudness limiting will do anything unexpected to your song. But you really should also listen to your song loudness-normalized to e.g. -14 LUFS, because that's how people with the default settings of Spotify and other streaming services will hear it. But not everywhere - e.g. Bandcamp doesn't loudness normalize. 



tav.one said:


> Something I learnt last year: Spotify *doesn't* normalise tracks while encoding. It's only done at the user end and only for those users who have set the normalise setting to ON.



Yes, they also state that on their site (I posted the link somewhere above): they only add the information to the metadata of the songs, so the streaming client can use it or not.


----------



## telecode101 (Nov 23, 2020)

tav.one said:


> Something I learnt last year: Spotify *doesn't* normalise tracks while encoding. It's only done at the user end and only for those users who have set the normalise setting to ON.
> 
> All my mixes before 2019 were going in at -12, now everything goes at -8 to -10 again.


that sort of makes sense. i noticed that little normalize button a few years after starting to use spotify premium and turned it off. it makes no sense listening to old catalog releases with normalize on, IMO. they were made before the loudness wars.


----------



## twincities (Nov 24, 2020)

berlin87 said:


> One thing that isn't entirely clear to me, is how some pieces can sound so very loud, even when limited to -2db — but I guess this all depends on the frequencies?



there's also the very real possibility you just need to turn your speakers down and work louder (meter-wise) in your composing phase (or set up two monitor profiles, mix/compose, if your controller allows it). it's very, very easy to play a piano VI metering at -20dbfs, set your monitor level based on that, and forever think -2dbfs is "so very loud". it's all relative, it's all perspective.


----------

