eltech Posted July 6, 2016 Dave, glad to hear you don't have the answer, and neither do I. I like the mystery in life. Knowing all the answers takes a bit of the fun out of it. In the absence of conclusive knowledge people make up their own theories. I enjoy sharing my experience in the hope that it can contribute to the quest for the answers. It is possible there may never be a conclusive answer. The more we know just leads to more questions anyway. It's an audio forum and in the end the result we get at home in our own systems is all that we can really talk about. Everyone here has a different sound system and accompanying opinion about what sounds good to them.
Guest Eggcup The Daft Posted July 6, 2016 Of course, any number of things could possibly have an effect .... but by measuring the sound, it is easy enough to determine that nothing except the time delay (ie. the phase) has changed. However, it is EXPECTED that he would find the results he did. There isn't any big obvious reason to pick apart his testing. Other people can do the experiment themselves, and confirm the results (or not). You could use other methods for inducing the delay (like an electronic delay, for example) ..... however there isn't any reason at all to expect that would change the outcome. All in all, it's a fairly simple and easy to control experiment. He just says (or at least implies) the results MEAN something they don't. I remain unconvinced by the test as it has been described. I have no problem if it fits the theory. Correct, I can only report what I observe. Am I correct in thinking that you can't say why either? I agree with you that there are reasons for why things sound as they do. I am not making sweeping statements, I am simply reporting my observation, and proposing a possible reason for it. I was not suggesting my proposal is conclusive. Are you suggesting your point is conclusive? I am approaching it from personal observation, not just theory. I have many DACs, from vintage R2R through to modern delta sigma 192kHz capable, and I have played back these files on all of them and the same observation is made - that higher sample rates make the clicks recorded from a vinyl record sound more like what is on the record. I am very familiar with the fact that sample rates affect the sound and presentation of the music played back on different DACs. If everything was a fact, and conclusive, we wouldn't be here trying to discuss it. I asked you earlier what is your take on high rez audio. I've already said I think there is merit to it. Are you interested in anyone else's observations or have you made your mind up already?
I can't speak for your soundcard and drivers, but others I've used show the result you get and worse. Typically, despite the super fast processor on the chip, for some reason they still sample with a heavy handed filter at the input and at the rate specified. So I would say you have pre- and post-ringing from the ADC. If you take one of your high res files and downconvert it to 16/44.1 using suitable software, you should get a similar sound to that of the high res file on playback, assuming a good enough DAC.
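To illustrate the downconversion step above: a minimal sketch of what "suitable software" might do, assuming Python with NumPy and SciPy are available (the test tones and rates are illustrative; quantising to 16 bits with dither would be a separate step, omitted here):

```python
import numpy as np
from scipy.signal import resample_poly

fs_in = 96_000                       # hypothetical high-res source rate
t = np.arange(fs_in) / fs_in         # one second of test signal
# audible 1 kHz tone plus an ultrasonic 30 kHz component
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 30_000 * t)

# 96k -> 44.1k: the ratio 44100/96000 reduces to 147/320.
# resample_poly low-pass filters below the new Nyquist (22.05 kHz)
# before decimating, so the 30 kHz component is removed, not aliased.
y = resample_poly(x, up=147, down=320)

spec = np.abs(np.fft.rfft(y)) / len(y)   # 1 Hz bins (1 s of audio)
print(len(y))                            # 44100 samples
print(round(2 * spec[1_000], 2))         # 1 kHz tone survives at ~full level
```

The point being made in the post: done this way, the 44.1 kHz file keeps everything below ~20 kHz intact, so it should sound close to the high-res original on a good DAC.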
Guest rmpfyf Posted July 6, 2016 I remain unconvinced by the test as it has been described. I have no problem if it fits the theory. The test was fine. Conclusions fell apart when he failed to grasp sampling theory.
davewantsmoore Posted July 6, 2016 Dave, glad to hear you don't have the answer, and neither do I. I like the mystery in life. No, that's not at all the message I was trying to send. It is not a mystery. This stuff is all very well understood, and there are reasons why your converters behave the way they do. I can speculate about what those reasons are (too complex for here) .... but I couldn't know until they were tested. I remain unconvinced by the test as it has been described. I have no problem if it fits the theory. The way science works is that people try to repeat his (or similar) tests, in order to contradict his results. So it would be big news if anyone demonstrated reliably contradictory results. Essentially his test was just a practical demonstration of what we already knew (that we can hear sounds delayed by a very short duration). I don't really understand what there is to be "unconvinced" about, as it was already known...... it is probably even shorter than he showed (it is just hard to demonstrate reliably). The test was fine. Conclusions fell apart when he failed to grasp sampling theory. Indeed.... Confusing (or inadvertently misrepresenting) a signal's rise time (which does need high sampling rates if it is to be steep) with its position in time (which doesn't need any particular sampling rate). .... it does, lead to a reasonably good 'rule' though. If you have a certain time resolution captured in the audio (this is independent of sampling rate) ..... then haphazard signal processing (resampling) may lose that time detail. This may be the main reason why the ye-olde multibit converter chips are so highly preferred by many people, who directly feed them redbook audio and turn off the internal oversampling.
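The "position in time" point above is easy to demonstrate numerically. As a sketch (assuming NumPy is available): a band-limited pulse shifted by 1 microsecond - a tiny fraction of a 48 kHz sample period (~20.8 us) - is still fully captured by the samples, and the shift can be recovered from the cross-spectrum phase:

```python
import numpy as np

fs = 48_000                        # a plain 48 kHz system
N = 1024
n = np.arange(N) - N // 2          # sample indices centred on zero
tau = 1e-6                         # 1 microsecond, ~1/20th of a sample

x = np.sinc(n)                     # band-limited pulse
y = np.sinc(n - tau * fs)          # the same pulse, delayed by 1 us

# The delay shows up as a linear phase slope in the cross-spectrum.
X, Y = np.fft.rfft(x), np.fft.rfft(y)
k = np.arange(1, 200)              # low-frequency bins: tiny phase, no wrapping
phase = np.angle(Y[k] * np.conj(X[k]))
slope = np.polyfit(k, phase, 1)[0]         # phase ~ -2*pi*k*d/N
delay = -slope * N / (2 * np.pi) / fs      # recovered delay, in seconds
print(delay)                       # ~1e-6, recovered from 48 kHz samples alone
```

No high sampling rate was needed to place the event in time with microsecond precision; what a high rate buys is steeper rise times (more bandwidth), which is a different thing.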
LHC Posted July 7, 2016 Can you explain their reasoning? They only seem to discuss how we can hear such short delays (which is not controversial). They don't appear to make (even a brief) case for digital audio format [which isn't surprising] Of course they related the 6 microsecond temporal resolution to digital audio format. It's on a different page of their white paper. Sorry I should have posted the link (but this was posted earlier in this thread). http://www.yamahaproaudio.com/global/en/training_support/selftraining/audio_quality/chapter5/09_temporal_resolution/ Here is an extract: "Chapter 5.3 describes a digital audio system with a sample frequency of 48 kHz to be able to accurately represent frequencies up to 20 kHz. For continuous signals, this frequency is the limit of the human hearing system. But most audio signals are discontinuous, with constantly changing level and frequency spectrum - with the human auditory system being capable of detecting changes down to 6 microseconds. To also accurately reproduce changes in a signal’s frequency spectrum with a temporal resolution down to 6 microseconds, the sampling rate of a digital audio system must operate at a minimum of the reciprocal of 6 microseconds = 166 kHz. Figure 5.15 presents the sampling of an audio signal that starts at t = 0, and reaches a detectable level at t = 6 microseconds. To capture the onset of the waveform, the sample time must be at least 6 microseconds." What Yamaha engineers are saying is that the traditional sampling theory is perfectly fine for continuous sound signals. But with discontinuous signals that are constantly changing, you need a much higher sampling rate. Then they went on to say: "As a rule of thumb, 48 kHz is a reasonable choice for most high quality live audio systems. For studio environments and for live systems using very high quality loudspeaker systems with the audience in a carefully designed sweet spot, 96 kHz might be an appropriate choice.
Regarding speaker performance, 192 kHz might make sense for demanding studio environments with very high quality speaker systems - with single persons listening exclusively in the system’s sweet spot." So they are clearly making the case that, even in a home environment, better than CD standard is needed for high quality audio reproduction. That is Yamaha's official position.
LHC Posted July 7, 2016 Indeed .... however the furfie perpetuated widely is that this work has not been ongoing for a long long time (which it has). There is no controversy about AudioA sounding better than AudioB. However there are incredibly large powers that be who are desperate to sell you their back catalogue in a new format. Wouldn't people be upset if we all repurchased a new format, and then discovered that the technique they had used to "improve the quality" could have been delivered in other, less costly ways (lock-in, compatibility, consumers $) I agree one should always be mindful of the industry and their marketing. But what Dr Joshua Reiss has done is a professional and thorough survey of the existing literature on tests of high resolution audio audibility. It is not a 'furfie'. If you truly believe this work has been going on for a long long time, then please cite the seminal publications that Reiss has overlooked. Otherwise your criticism is baseless. Well-respected Dr Mark Waldrep has always maintained that a truly proper test/comparison between redbook and hi-res has never been done to the required standard. Reiss's paper outlines the criteria for a robust test. This is what Dr Waldrep wrote in his blog letters: "I could do this study and came dangerously close some years ago. I would like to secure a setup as described in the paper that actually delivers ultrasonics with greater than 120 dB of dynamics and play my recordings at Redbook and high-res. But I’ve come to realize that even if I did the work, it doesn’t really matter since the artists, recording industry, labels, and CE companies don’t really care. They love the hype and the marketing possibilities but when it comes to really incredible fidelity…meh." (I should add Reiss's survey was limited to looking at higher sampling rates, but not higher bit depths. Waldrep insists that one needs to look at both.)
LHC Posted July 7, 2016 Indeed.... Confusing (or inadvertently misrepresenting) a signal's rise time (which does need high sampling rates if it is to be steep) with its position in time (which doesn't need any particular sampling rate). There is no confusion. In his writing Kunchur took pains to clearly differentiate between the two. You are entitled to your own personal opinion, that is fine. All I will say is that AFAIK there are no publications that directly rebut or discredit what Kunchur has written. So you would not be able to cite any references to support your opinion. It's instructive to look at what Reiss has written about this issue. His literature survey has a section on various possible causes of audible benefits of hi res. This is an extract: "Temporal fine structure [Moore 2008] plays an important role in a variety of auditory processes, and temporal resolution studies have suggested that listeners can discriminate monaural timing differences as low as 5 microseconds [Krumbholz 2003; Kunchur 2007, 2008]. Such fine temporal resolution also indicates that low pass or antialias filtering may cause significant and perceived degradation of audio when digitized or downsampled [Yoshikawa 1997], often referred to as time smearing [Craven 2004]. This time smear, which occurs because of convolution of the data with the filter impulse response, has been described variously in terms of the total length of the filter’s impulse response including pre-ring and post-ring, comparative percentage of energy in the sidelobes relative to the main lobe, the degree of pre-ring only, and the sharpness of the main lobe. [Oppenheim and Magnasco 2013; Majka 2015] both claim that human perception can outperform the uncertainty relation for time and frequency resolution. This was disputed in [Thekkadath and Spanner 2015], which showed that the conclusions drawn from the experiments were far too strong."
So Reiss has done a good job putting Kunchur's findings in the context of the harm in time smearing, and various literature supports that notion. Also note that Reiss is not afraid to cite dissenting opinions, as shown in the last sentence above. So if there were any published works that contradict Kunchur's interpretation, Reiss would have included them in his survey. Clearly there aren't any. .... it does, lead to a reasonably good 'rule' though. If you have a certain time resolution captured in the audio (this is independent of sampling rate) ..... then haphazard signal processing (resampling) may lose that time detail. This may be the main reason why the ye-olde multibit converter chips are so highly preferred by many people, who directly feed them redbook audio and turn off the internal oversampling. Now you are post-rationalising. Good, I like that very much. If you could develop your hypothesis into a proper theory you should publish it.
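The pre-ring and post-ring referred to in the Reiss extract above are easy to see directly. As a sketch (assuming NumPy; the tap count and cutoff are illustrative): a linear-phase windowed-sinc anti-alias filter necessarily puts as much ringing energy before its main lobe as after it, i.e. before the event it is filtering:

```python
import numpy as np

fs, taps, cutoff = 44_100, 255, 20_000     # illustrative filter parameters

# Classic linear-phase anti-alias filter: a windowed sinc.
n = np.arange(taps) - (taps - 1) / 2
h = (2 * cutoff / fs) * np.sinc(2 * cutoff / fs * n) * np.hamming(taps)

peak = int(np.argmax(np.abs(h)))
pre = float(np.sum(h[:peak] ** 2))         # ringing energy BEFORE the peak
post = float(np.sum(h[peak + 1:] ** 2))    # ringing energy after it
# A symmetric (linear-phase) filter rings as much before the impulse as
# after it - the pre-echo is the "pre-ring" component of time smear.
print(pre, post)
```

Minimum-phase designs trade the pre-ring away for more post-ring; which trade is preferable is exactly the kind of question the cited literature argues about.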
davewantsmoore Posted July 7, 2016 (edited) Figure 5.15 presents the sampling of an audio signal that starts at t = 0, and reaches a detectable level at t = 6 microseconds. This is a signal which has frequency components above 24khz .... and thus they are correct it cannot be sampled by a 48khz rate. This is simply the basics of digital audio (and is not controversial at all). It is not the same thing as the Kunchur paper.... which is talking about a signal (that can correctly be captured by a given digital system) ..... and varying the sound's position in time (which requires no specific sampling rate caveats) So they are clearly making the case that even in a home environment better than CD standard is needed for high quality audio reproduction. That is Yamaha's official position. They've said that if you want to capture a signal which rises quickly (high frequency) that you need to use a high sampling rate (at least 2x the highest frequency, to be precise). This has been known for ~100 years ... by saying that you want to capture quickly rising signals .... they are saying you want to capture high frequencies. There is conjecture about whether this is audible. Note again, that this is NOT the same thing as what Kunchur looked at. Edited July 7, 2016 by davewantsmoore
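The bandwidth-versus-timing distinction above can be checked numerically. As a sketch (assuming NumPy): the step response of an ideal 24 kHz brickwall channel (the bandwidth of a 48 kHz system) takes far longer than 6 us to rise, so a signal "reaching a detectable level at t = 6 microseconds" necessarily contains energy above 24 kHz - it is a bandwidth statement, not a timing-precision statement:

```python
import numpy as np

B = 24_000                      # Nyquist bandwidth of a 48 kHz system, Hz
dt = 1e-8                       # 10 ns simulation step
t = np.arange(-2e-4, 2e-4, dt)

# Step response of an ideal brickwall low-pass:
# integrate its sinc impulse response.
h = 2 * B * np.sinc(2 * B * t)
s = np.cumsum(h) * dt

t10 = t[np.argmax(s > 0.1)]     # first crossing of 10%
t90 = t[np.argmax(s > 0.9)]     # first crossing of 90%
rise = t90 - t10
print(rise * 1e6)               # 10-90% rise time in us: well above 6
```

The pulse can still be *positioned* anywhere in time with sub-microsecond precision at 48 kHz; it just cannot *rise* within 6 us without ultrasonic content.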
davewantsmoore Posted July 7, 2016 I agree one should always be mindful of the industry and their marketing. But what Dr Joshua Reiss has done is a professional and thorough survey of the existing literature on tests of high resolution audio audibility. It is not a 'furfie'. If you truly believe this work has been going on for a long long time, then please cite the seminal publications that Reiss has overlooked. Otherwise your criticism is baseless. Works are drastically (understatement) less likely to be published when they are not able to demonstrate a new / interesting result. It is like asking where are all the scientific studies showing that humans cannot "dance on the ceiling". It wasn't meant as direct criticism of the author, but more of people's (not the author's) general interpretation of the history on this. Please don't get me wrong. I'm not calling into question the results of any of the studies he cross-analysed. I think it's quite likely most of them are meaningful (and that the sound was "better"). I'm talking about WHY it was better, and the common misunderstandings around that. As is obvious this thread is titled "WHY 192khz matters" .... not "DOES 192khz matter"
davewantsmoore Posted July 7, 2016 "I could do this study and came dangerously close some years ago. I would like to secure a setup as described in the paper that actually delivers ultrasonics with greater than 120 dB of dynamics and play my recordings at Redbook and high-res. But I’ve come to realize that even if I did the work, it doesn’t really matter since the artists, recording industry, labels, and CE companies don’t really care. They love the hype and the marketing possibilities but when it comes to really incredible fidelity…meh." Many(!!!!) people have done this test, either casually or in detail. I have done it. When I was not able to demonstrate any audibility ..... I went "mythbuster" style, and tried to determine what it would take to force audibility. The only things I could find audible were things which extended into the <20khz range, such as issues with sample rate conversion, or intermodulation distortion in the tweeter. If I were to "publish" my result .... it is very difficult. My methods would invite (and fair enough too) the commentary that my "result doesn't prove anything" (which is a totally correct observation). If I used a different method, or higher frequencies, or louder replay, or, or .... then it may have been able to demonstrate audibility. So, naturally I told very few people about my test <shrug> My conclusion about my result is simply "I wasn't able to show anything".
Newman Posted July 7, 2016 Always remember that the energy coming out of a musical instrument that is not audible, is NOT music. It's waste. Statements like 'music extends well into the inaudible range' are nonsense. Literally.
Guest rmpfyf Posted July 7, 2016 So Reiss has done a good job putting Kunchur's findings in context of the harm in time smearing, and various literature supports that notion. Not really - all this proves is that distribution of spectral energy in whatever filter is employed is important. Pick your filters, mind your windowing. Reiss has quoted a 101 in signal processing; nothing new here. Higher frequencies inherently relax this constraint. It's easier to produce a good result in a spectral domain/context when bin sizes are inherently smaller. The effect of the window depends significantly on the content it's intended to characterize - whilst nothing most call music is a square wave, the more dispersed the spectral energy the more critical the filter application for a given sampling frequency. Again, nothing new here, and it has zero directly to do with temporal resolution (Kunchur's argument). Energy outside the windowed filter causes a distortion in time. Making the filter more resolute limits the potential but isn't an absolute statement on application. Always remember that the energy coming out of a musical instrument that is not audible, is NOT music. It's waste. Statements like 'music extends well into the inaudible range' are nonsense. Literally. Not so sure about this taken literally; reluctant to confuse psychoacoustics with signal theory. In a signal processing context, energy beyond the audible threshold can contribute to audible phenomena. Doesn't mean a 50kHz component of a cymbal crash is audible as a frequency in and of itself, just that mathematically as a Fourier sum the small amount of energy at beyond-audible frequencies shapes a lower-frequency waveform.
Really depends on the content of what you're listening to (and frankly on the ability of your audio system to replicate as much faithfully) whether these components are at all relevant, though on sharp transitions in natural phenomena there's no doubt that there's spectral content above audible frequency that contributes to audible phenomena. Depends on what you're listening to and what you're listening to it on. Some people have a thing for square waves (me not so much). IMHO the main advantage of hires is to relax - directly and indirectly - requirements for high-performance audio. Its ability to extend the ultimate limit (in most applications)... meh. Awesome Redbook implementations still beat crap hires. You can't make a silk purse out of a sow's ear... (you know the rest...)
Volunteer sir sanders zingmore Posted July 7, 2016 I'm not really sure what this bit means : "IMHO the main advantage of hires is to relax - directly and indirectly - requirements for high-performance audio. "
Happy Posted July 7, 2016 I'm not really sure what this bit means : "IMHO the main advantage of hires is to relax - directly and indirectly - requirements for high-performance audio. " was gonna quote and ask what do you mean, too
Ando Posted July 7, 2016 I suspect this video may have been linked to previously but it does help with a background understanding of some of the issues being discussed here: https://xiph.org/video/vid2.shtml At xiph.org there is also an earlier video the first half of which is also relevant. Cheers Mike
Newman Posted July 7, 2016 Not so sure about this taken literally.... In a signal processing context, energy beyond the audible threshold can contribute to audible phenomena. Doesn't mean a 50kHz component of a cymbal crash is audible as a frequency in and of itself, just that mathematically as a Fourier sum the small amount of energy at beyond-audible frequencies shapes a lower-frequency waveform. Really depends on the content of what you're listening to (and frankly on the ability of your audio system to replicate as much faithfully) whether these components are at all relevant, though on sharp transitions in natural phenomena there's no doubt that there's spectral content above audible frequency that contributes to audible phenomena. Depends on what you're listening to and what you're listening to it on. I disagree. You are not hearing the inaudible range itself, but only when its interactions are manifested in energy below 20kHz, and that energy is captured by a sub-20kHz system. For example, 28k and 29k energy interacts to produce a 1kHz beat, and that beat is picked up by a sub-20k system.
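The 28k/29k example above is intermodulation, and it is easy to simulate. As a sketch (assuming NumPy; the squared term is a hypothetical stand-in for any even-order nonlinearity, e.g. a hard-driven tweeter): two purely ultrasonic tones through a mildly nonlinear system produce an audible 1 kHz difference tone that was never in the source:

```python
import numpy as np

fs, dur = 192_000, 0.1
t = np.arange(int(fs * dur)) / fs
# two ultrasonic tones - nothing audible in the source itself
x = np.sin(2 * np.pi * 28_000 * t) + np.sin(2 * np.pi * 29_000 * t)

y = x + 0.1 * x ** 2        # mild even-order nonlinearity (hypothetical tweeter)

freqs = np.fft.rfftfreq(len(t), 1 / fs)
k1 = int(np.argmin(np.abs(freqs - 1_000)))            # the 1 kHz bin
lin = 2 * np.abs(np.fft.rfft(x))[k1] / len(t)         # linear path: nothing at 1 kHz
nonlin = 2 * np.abs(np.fft.rfft(y))[k1] / len(t)      # difference tone appears
print(lin, nonlin)          # ~0 vs ~0.1
```

In a strictly linear chain the 1 kHz component never exists; it only becomes real (and audible on a sub-20k system) once something nonlinear mixes the ultrasonic pair.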
davewantsmoore Posted July 7, 2016 there's no doubt that there's spectral content above audible frequency that contributes to audible phenomena. Depends on what you're listening to and what you're listening to it on. Some people have a thing for square waves (me not so much). It's fairly easy to do tests which show this not to be the case .... and fairly difficult to demonstrate it to be reliably true.... and by that I mean, for example - carefully filtering out everything above 20khz, and not being able to hear any difference. There's a wide range of possible reasons this could be, though The real kicker, above anything else though ... is that nobody can ever look at a digital audio file, and judge the quality of the contents based on the sampling rate... and I'm sure there is a LOT of that going on, either consciously or subconsciously. We need to evolve beyond that thinking to ever achieve reliably high quality audio that is worth paying "again" for. This is why there is quite a lot of merit in what MQA is trying to do. I'm not really sure what this bit means : "IMHO the main advantage of hires is to relax - directly and indirectly - requirements for high-performance audio. " It is easy to make a DAC or ADC (that has optimised performance in the 0 to 20khz range) which operates at a low number of bits, and a very high sampling rate. MOST devices operate in that way. Re-sampling audio (from one rate to another) is an opportunity to "blur" the signal in time. Typical re-sampling filters used aren't very good (restricted by computing power). This is why many people prefer DSD (or higher rate DSD) It is not that DSD is "better" or "more accurate" (that is nonsense, the differences between it and some other suitable digital system are infinitesimal) ....
it is because your DAC is already doing DSD-like-things internally, and feeding it DSD means that it doesn't have to do that work that it would otherwise have to do (and would have done a poor job of). The same applies for multibit formats at higher rates (vs low rates). If your DAC is internally going to convert everything to a 768kHz rate, then it doesn't have to do as much (or any) low quality work if you fed it the high rate yourself (eg. by oversampling a low rate carefully ... or by buying high-rate content from the shop). In short, it is simply avoiding the opportunity to damage the signal through poor quality processing.
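The "oversample a low rate carefully yourself" idea above looks like this in practice. A sketch assuming Python with NumPy and SciPy; the 4x factor and the polyphase filter are illustrative, not what any particular DAC does internally:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
t = np.arange(fs) / fs                     # one second at redbook rate
x = np.sin(2 * np.pi * 1_000 * t)          # test tone

# Oversample 4x to 176.4 kHz with a high-quality polyphase filter,
# so the DAC's (often cruder) internal interpolator has less to do.
y = resample_poly(x, up=4, down=1)

spec = np.abs(np.fft.rfft(y)) / len(y)
print(len(y) == 4 * len(x))                # True: 4x as many samples
print(round(2 * spec.max(), 2))            # tone amplitude preserved, ~1.0
```

The signal content is unchanged; the point is only that the interpolation was done once, offline, with a good filter, instead of in real time with whatever the chip could afford.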
Guest rmpfyf Posted July 7, 2016 I disagree. You are not hearing the inaudible range itself, but only when its interactions are manifested in energy below 20kHz, and that energy is captured by a sub-20kHz system. For example, 28k and 29k energy interacts to produce a 1kHz beat, and that beat is picked up by a sub-20k system. Not what I was intending. A 10kHz square wave (or any square wave) will have components above audible frequency when implemented as a DFT. That simply means there's spectral content, not audible content, and for an infinite Fourier transform it's at frequencies that extend beyond audible. This doesn't mean the audible result presents as a chord with components above audible. It doesn't mean what you're hearing is a superposition of all frequencies in content at relevant amplitudes from unique sources. Just that the Fourier representation involves components above a frequency we'd term audible. And we should be strict here - we don't see an infinite Fourier transform, DSP implements this as a discrete Fourier transform, which is where the challenges start. Filter quality is key; higher sample frequencies just relax the requirement (this said, it's of course possible to still record/master crap hires). Spectral content is not an exact expression of audible content. When viewed through that strict lens (e.g. psychoacoustics, which was the going thing in the 70's to determine what created Redbook), then yes, all can be captured at Redbook. IMHO I don't think this much is wrong - I reckon psychoacoustics represents a relevant expression of audible content, and that a good Redbook system will please anything bar square wave fans. The real question is 'what do you listen to' which is why this... carefully filtering out everything above 20khz, and not being able to hear any difference. ...holds true. It is not the same as saying 'there is no spectral content above 20kHz', just there's no audible content above that frequency.
Doesn't mean the inverse transform of higher-frequency spectral content doesn't affect waveform shape at lower frequencies. But one does need to be able to record it, filter it, play it back, get it to your ears in a way that makes a tangible difference. If you could make a square wave, it ain't square by the time it gets to your ears (and it's not music). I think there's merit in MQA for the same reasons mentioned in the above posts, I just don't think it's a panacea. It's just smarter filtering, some of which is afforded by buying a little extra space in bit depth and sample rate as required.
Newman Posted July 7, 2016 Sorry for the misunderstanding. But if you thought I said there is no spectral content above 20kHz, perhaps that was also a misunderstanding (I mentioned "the energy coming out of a musical instrument that is not audible", so clearly admitting there can be spectral content above 20k). Your second-to-last para above suggests we are agreeing on message. cheers
Guest rmpfyf Posted July 7, 2016 (edited) Your second-to-last para above suggests we are agreeing on message. cheers How we nerds units unite (Edit: but how we rely on spellchecking) Edited July 7, 2016 by rmpfyf
davewantsmoore Posted July 7, 2016 (edited) we don't see an infinite Fourier transform, DSP implements this as a discrete Fourier transform, which is where the challenges start. Filter quality is key; higher sample frequencies just relax the requirement (this said, it's of course possible to still record/master crap hires). ... and it's also possible to avoid the issues at lower rates. It is just difficult .... it's easier to use a much higher rate (like what most DACs do) to make the challenges easier to tackle. ...holds true. It is not the same as saying 'there is no spectral content above 20kHz', just there's no audible content above that frequency. Doesn't mean the inverse transform of higher-frequency spectral content doesn't affect waveform shape at lower frequencies. Ah indeed .... but I mean if we do remove the spectral content (eg. by re-sampling) ..... the point being that this changed waveform shape isn't thought to be particularly audible as it is very small (when done with high quality - although it is often not) It is much more plausible to think that differences are caused by filter implementations which cause ringing or blurring, etc. These things have been shown to be reliably audible. How we nerds units I'm fairly sure I agree with this Edited July 7, 2016 by davewantsmoore
davewantsmoore Posted July 7, 2016 bar square wave fans Square waves (of an appropriately high fundamental, say 8khz) sound identical on all my speakers at all rates.... all my speakers are steep low pass filters somewhere between 18 and 40khz... and they are unable to reproduce the shape of a square signal, beyond much more than a few harmonics at most, no matter what dizzying high frequency the electronics are capable of feeding them. Practical tests aside (which aren't perfect), I don't see how the waveform shape (being very steep) has ever been isolated as being reliably audible. Its position in time (like Kunchur demonstrated) absolutely is.... but that's been known/theorised for a very long time (see, angular direction for stereo/surround for example).
Guest rmpfyf Posted July 7, 2016 Practical tests aside (which aren't perfect), I don't see how the waveform shape (being very steep) has ever been isolated as being reliably audible. Its position in time (like Kunchur demonstrated) absolutely is.... but that's been known/theorised for a very long time (see, angular direction for stereo/surround for example). That's the thing I don't get - Kunchur's work on time/phase alignment sensitivity gets a lot of press in hires circles but so far as aeroacoustics goes, it's a bit... well... duh. His linking that to neurophysiology theory is pretty interesting (the 2010 stuff?) and, if the last paper was anything to go by, a nod to a developing space - a genuine attempt (not the only one) at a first-principles approach to understand directly (e.g. if by theoretical application) what we can actually hear. Cool stuff. It's a pity it gets lost in the one bit of his earlier papers that's not practically correct (comments on temporal resolution) - that's the bit that gets bandied about a lot, seemingly.
Guest rmpfyf Posted July 7, 2016 Ah indeed .... but I mean if we do remove the spectral content (eg. by re-sampling) ..... the point being that this changed waveform shape isn't thought to be particularly audible as it is very small (when done with high quality - although it is often not) Practically so. Brickwall it, cut Fs to suit, play it back, (often) audibly meh difference. Anoraks listening to reference cymbals on super tweeters all day long might disagree and point to $50k vinyl decks as being superior; I'd suggest they've missed the point Again though, to ref an earlier point of yours, having well-mastered 1's and 0's is great but if your DAC can't recreate the intended waveform faithfully (earlier comments on NOS R2R's...) If the original time history is so 'busy' that a certain Fs isn't sufficient to capture it adequately as spectra, then there's a potential case for hires. Corner cases though, and we'd be getting into a whole 'musicality' vs 'authenticity' vs 'reproduction capability' argument - never heard a system even at high-audiophile money that's got the last bit maxed out enough to talk honest shop about sampling frequencies being the performance limit. (Couldn't comment at 'ridiculous' audiophile money; at some point it's too blue for my blood, however the best system I'd ever heard was Redbook). It is much more plausible to think that differences are caused by filter implementations which cause ringing or blurring, etc. These things have been shown to be reliably audible. In a practical sense, absolutely - it's easy to be lazy about filtering. Redbook's a viable standard, it just assumes you do everything 'right'. Born of a dying age of old-school audiophiles!