If you have heard a true analog-to-analog live recording of, say, a classical concert, and it was mixed properly, there's a sense of extended range, or "air," or spaciousness that CDs cannot duplicate. A CD does slightly better when the source is analog.
Incidentally, I happen to have done professional mastering. That "air" you're hearing is the noise floor "fuzzing" up the sound: both the noise in the recording itself and the noise produced by the playback system (a stylus touching a grooved record). This may sound appealing to you because you're used to it. The problem is that while it "sounds good" to you, it's not an accurate reproduction, it's not a better dynamic range (again, the groove widths of a 33 or 45 limit the dynamic range to about 80 dB), AND every successive playback of your 45 will continue to degrade the media.
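To put that ~80 dB figure in context, the theoretical dynamic range of linear PCM follows directly from the bit depth. A back-of-the-envelope sketch (the 80 dB vinyl number is the approximation cited above, not a measured value):

```python
import math

def pcm_dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: the ratio of full
    scale to one quantization step, in dB (~6.02 dB per bit)."""
    return 20 * math.log10(2 ** bits)

VINYL_RANGE_DB = 80  # approximate groove-width limit cited above

print(round(pcm_dynamic_range_db(16), 1))  # 96.3 -> CD audio clears vinyl's ~80 dB
```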
PM me for the email address; I challenge you to compare the two distinct versions of the Train song I was talking about. If you are into country music, I also have Jimmy Wayne's "Do You Believe Me Now" as a 45.
"Drops of Jupiter" was mastered poorly. I ran this through an Leq(A) spectrum analyzer and here's what I found: NOTE: The tracks used are both 256 Kbps AAC, but they are from the original master recordings. The format doesn't change the loudness levels they were recorded at. So, for the purposes of this comparison of attributes, these files are more than adequate. The fidelity is not an issue as AAC at 256 Kbps is regarded by most engineers and audiophile forums as acoustically transparent to 16-bit LPCM. If we compared two LPCM files, the bad mastering would be just as evident:
http://images.cinemalogue.com/LeqA/LeqA-DOJ.tiff.html
http://images.cinemalogue.com/LeqA/LeqA-DOJg.tiff.html
"Drops" has an A-weighted average loudness (LeqA) of -12.0 dBFS, unweighted -8.6. This is no better than most of the amplitude-pumped bs recordings mastered since the 1990s to today.
Now compare that to "Celebration" by Kool & the Gang, produced by Eumir Deodato and released in 1980.
http://images.cinemalogue.com/LeqA/LeqA-Celeb.tiff.html
http://images.cinemalogue.com/LeqA/LeqA-Celeb_g.tiff.html
The LeqA value is -19.1 dBFS. This is an example of excellent mastering, and I picked it specifically because it is very emblematic of the quality of sound engineering from some of the bigger mastering houses in the early '80s. It's also comparable in duration to "Drops", running about 38 seconds longer.
More importantly, "Celebration" actually sounds much fuller than "Drops" because it contains layers of instruments and lots of sweetening (a term engineers use for fine-tuning the mix to enhance spatial and amplitude dynamics, so that nuances pop out at you rather than sitting in a "flat" mix, without drowning out the background layers). When the bass drum hits, if you adjust the volume upward, the thump stands out far more than anything in any portion of "Drops", which frankly sounds like it was recorded on crumpled wax paper and engineered by third graders with a potato peeler.
"Drops" is engineered so badly that the lead singer's vocals are almost drowned out by the portions of solo acoustic guitar! Nevermind when the strings and piano and drums come in... it all sounds very flat and I weep for the guy who spent tons of money on "audiophile" equipment only to purchase and play music this awful over his overrated amps and speakers.
Take a careful look at the two graphs. Note in the yellow line how the A-weighted average keeps increasing toward the end of the "Drops" track. This sort of gradual drift in the AVERAGE is common among newer tracks. Older tracks, by contrast, while sweetened, were downmixed and mastered to a constant average; the yellow line doesn't flinch once "Celebration" gets going.
Now look at the blue line, which measures the peak level in dBFS at any given point throughout the track. Note how "Drops" is constantly clipping its peaks, whereas "Celebration" cycles a LOT and peaks JUST below 0 dBFS through the whole track. The range from softest to loudest in the "Celebration" track is gigantic: very dynamic.
It shouldn't be the case that the average drifts while the moment-to-moment level varies only about 3 dB from peak to trough. It should actually be the OTHER way around: a constant average with much larger variance between peaks and troughs... about TWENTY dB in the case of "Celebration". Just looking at the graphs, you can tell which one is going to sound flatter on ANY sound system.
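That peak-versus-average relationship has a name: crest factor, i.e. peak level minus RMS level. A sketch with made-up sample data (a squashed master collapses the crest toward 0-3 dB; a dynamic master keeps it large):

```python
import math

def peak_dbfs(samples):
    """Loudest instantaneous level, relative to full scale."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_dbfs(samples):
    """Average (RMS) level, relative to full scale."""
    return 10 * math.log10(sum(s * s for s in samples) / len(samples))

def crest_db(samples):
    """Headroom between the loudest instant and the average level."""
    return peak_dbfs(samples) - rms_dbfs(samples)

squashed = [0.95, -0.95] * 500               # slammed against the ceiling
dynamic = [0.9] * 100 + [0.05, -0.05] * 450  # brief peaks, quiet passages

print(round(crest_db(squashed), 1))  # 0.0 -> no headroom at all
print(round(crest_db(dynamic), 1))   # 9.9 -> peaks stand well clear of the average
```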
"Drops" looks like crap... and correspondingly sounds like crap. The funny thing is, what this means is that the vinyl copy of "Drops" will sound much worse. Oh sure, it won't manifest itself in scratchy cymbals clipping the dynamic range... but it will sound flat throughout, compared to a track like "Celebration". The noise level of analogue media simply masks most of the junk at the cost of making the entire track sound less intelligible. But your ears get used to it very quickly... especially if you aren't comparing it frequently to properly mastered digital recordings.
I agree with you about bad mastering; 90% of what's out there is unlistenable. Half the time, it seems they can't even get stereo right.
Because of the poor mastering and final replication in a medium (vinyl) that can't support a larger dynamic range, even if they output "Drops" to 24-bit Linear PCM it's still going to sound like crap compared to "Celebration"... why?
Garbage in, garbage out.
I am using 1940s full-range tube theater amplifiers rated at 125 watts as my main source of amplification. These amps bring out EVERYTHING that is wrong with CDs today. I have run frequency sweeps on these amps, and my roommate's seven-year-old screams when the sweep runs above 17 kHz because it hurts her ears. So I do know what they are capable of.
The reason your roommate's seven-year-old screams is that at that age the eardrums are smaller, as are the hairs in the cochlea, so they resonate at higher frequencies. As you get older and larger, your hearing degenerates, both because the apparatus no longer resonates at the highest frequencies and because years of use or abuse eventually damage the cilia in the cochlea.
Many "audiophiles" (a word that roughly translates to "doesn't know a lick of professional sound engineering") hold the misconception that cymbals don't reproduce well in digital recordings because of their high frequencies. This is absolutely false: cymbal frequencies don't exceed the Nyquist frequency/limit. The problem is, as I stated before, dynamic range and amplitude resolution. While analogue media can surpass 16-bit LPCM for amplitude resolution, they compromise on dynamic range because of the noise levels inherent to the medium as well as its mechanical limitations (e.g. groove width on LP, or track width/pitch on magnetic tape).
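The Nyquist point is easy to check with numbers. At CD audio's 44.1 kHz sample rate the Nyquist limit is 22,050 Hz, and cymbal energy sits below it (the cymbal band here is a rough, assumed figure for illustration):

```python
SAMPLE_RATE_HZ = 44_100          # CD audio sample rate
NYQUIST_HZ = SAMPLE_RATE_HZ / 2  # 22,050 Hz: highest representable frequency

# Rough spectral band of crash/ride cymbals (assumed figures for illustration):
CYMBAL_BAND_HZ = (300, 20_000)

print(CYMBAL_BAND_HZ[1] < NYQUIST_HZ)  # True: cymbals fit under the Nyquist limit
```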
It is important to recognize that a higher noise floor combined with amplitude pumping at clipping levels is disastrous from a sound engineering point of view, and the result is simply mediocre.
The advent of digital recording and mastering means that your entire argument is somewhat moot. Remember what I said, "Garbage in, garbage out"? Another way of saying this is that your final product is only as good as the best representation of it at any stage in the process. If you consider "Drops" a listenable recording, or any other material from any band since probably 1995, know that almost all recording, mixing, and mastering since that time has been done on digital master tapes. Whatever limitations you think the digital format carries are therefore present in the final analogue product you purchase.
If you argue that older analogue recordings sound better... they do! But again, that's not because of the analogue medium; it's because of the mastering process I just explained. Listen to THOSE recordings in 16-bit dithered LPCM (CD Audio) and in 24-bit (undithered) and you'll hear precisely what I mean.
P.S. That's another point... 16-bit recordings have some quantization error inherent in the format, but this is addressed by dithering: adding barely perceptible noise to the recording (well below the audible level in a digital recording) to force low-level amplitude values to register correctly. I could explain this in better detail, but it might bore the hell out of you and occupy a couple hours of my time. The bottom line: any properly mixed and mastered recording is going to sound better in a digital format than in analogue.
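To make the dithering point concrete, here is a toy sketch, not any particular mastering tool's algorithm. TPDF dither (a common choice) adds the difference of two uniform random values, a triangular distribution about one LSB wide, before rounding. A signal smaller than one quantization step vanishes entirely under plain rounding but survives, on average, under dither:

```python
import random

random.seed(42)  # deterministic for the demo

def quantize_16bit(x, dither=True):
    """Quantize a sample in [-1, 1] to a 16-bit integer, optionally
    adding TPDF dither (about +/-1 LSB, triangular) before rounding."""
    scale = 32767
    if dither:
        x += (random.random() - random.random()) / scale
    return max(-32768, min(32767, round(x * scale)))

tiny = 0.4 / 32767  # a signal well below one quantization step

undithered = [quantize_16bit(tiny, dither=False) for _ in range(10_000)]
dithered = [quantize_16bit(tiny) for _ in range(10_000)]

print(sum(undithered))    # 0: quantization erases the signal completely
print(sum(dithered) > 0)  # True: the dithered output preserves it on average
```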
Lastly, a properly engineered recording, as a mastering engineer once told me, should sound fundamentally good on any playback system. This doesn't mean that a pair of paper cone Kraco speakers will sound as good as my KEF Q-series loudspeakers. It does however mean the opposite of what you think is true. If a recording is engineered right, it will sound great on a mediocre system, and SPECTACULAR on a great system. If a recording is engineered wrong, it will sound bad on a great system, and like utter CRAP on a mediocre system.
Do this... compare a newer recording to an older one on an iPhone speaker or a similarly limited-range speaker system... you might even use "Drops" and "Celebration". I guarantee you that "Drops" will sound like mud coming through that tiny speaker, and "Celebration" will sound much clearer. Or do it with any lo-fi speaker... the result is always going to be the same, especially because a good engineer always tests his master on various speaker systems to ensure it's been mastered within the dynamic range and frequency response characteristics of most speakers. To do otherwise would be, much to my own chagrin, financial suicide.
I personally prefer to master my own recordings to 24-bit DVD-Audio, which surpasses CD, SACD, vinyl, tape, everything... but the market for this product is insignificant, and consequently most commercially released recordings are done in 16-bit, which is still better than vinyl or quarter-inch reel-to-reel. (It's a whole other ball game if you're talking 3/4" or 2" mastering reels... but again, when these go down to the final consumer product, forget it: there isn't enough groove width on vinyl or magnetic space on a 1/4" tape to accommodate a dynamic range better than CD. Anyone who suggests otherwise is selling you snake oil.)
But more importantly, even with the ear-splitting ~144 dB theoretical dynamic range and ludicrously fine amplitude resolution of 16.7 million values per sample of 24-bit LPCM, if your entire recording is mastered within only a 3 dB variance from peak, it's still going to sound like sh-t.
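For reference, the 24-bit figures work out directly:

```python
import math

BITS = 24
levels = 2 ** BITS                  # distinct values per sample
range_db = 20 * math.log10(levels)  # theoretical dynamic range in dB

print(levels)              # 16777216 (~16.7 million)
print(round(range_db, 1))  # 144.5
```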
Nature abhors a moron. -H.L. Mencken
http://www.cinemalogue.com