firedog Posted July 8, 2016 Posted July 8, 2016 Potentially an issue.... although not really a massive reason against high rates (and can be sorted out in the electronics if an issue)... xiph really shouldn't have bothered with it IMHO, and just stuck to explaining why high rates weren't inherently necessary in general. See, while I know what you mean.... We are just back at generalisations. Enough for what?.... the assumption that such or other rate will generally do X, doesn't hold .... which causes all the "debates" where people are trying to argue for either "Yes", or "No". The generalisation people need to take .... is that high rate doesn't mean high quality .... it means: the audio might be less damaged (as it has been resampled less); or the audio might work better on your DAC (as your DAC performs better with high rates). See that neither of these things means you can't have high quality with low rates. Agree. All the theoretical arguments are pointless and have little to do with the reality of playback. Note a new DAC like the T+A DSD8 - its native internal playback rate is DSD 512 (if you feed it DSD 512 it does no upsampling or internal manipulation of the file and just filters it directly to analog), and multiple users report it sounds best when fed music upsampled to DSD 512 in playback software - regardless of the original format of the music files. This has zero to do with the argument over whether we can "hear" high-res rates, and everything to do with the optimal way to run the DAC. The same is true for many other DACs - they sound best at a specific output rate.
firedog Posted July 8, 2016 Posted July 8, 2016 (edited) Happy not to have all those words here. BTW every Wadia product has done it that way, starting with their 1988 model 2000 CD player. Personally I think that, today, the importance of filters is overstated. Demonstrably audible with specific signals and specific filter poles? Sure, if they say so. But today?... using commercial music products in the same player?... taking the high-res product and converting down to 16/44 in the studio?... after the original high-res product has been processed and cleaned up as part of its routine production process?... then played back through the modern player's sigma-delta DAC with massive oversampling and simple filter? No, I still think we are obsessing at the wrong end of the playback chain. Still mistakenly driven by a 'garbage in garbage out' principle that puts the front end on a pedestal that made sense when front ends were dramatically error-prone, but ceases to be a useful principle when front ends are either inaudible or near-as-drat inaudible. Use HQP and its various filters. You will find that most DACs have a sweet spot (sample rate - either PCM or DSD) at which they sound best and you will also find that different filters do make an audible difference. You can even find out which ones you think sound the best. Many delta sigma DACs can be made to skip the internal up/oversampling if fed their target rate from the source. Edited July 8, 2016 by firedog 1
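As a rough illustration of the "feed the DAC its target rate from the source" idea: a minimal sketch, assuming Python with numpy/scipy/soundfile (the file names and the 352.8kHz target are hypothetical, and this is plain PCM upsampling rather than HQP's filters or any DSD conversion), doing the rate conversion in software before the DAC ever sees the data.

```python
# Minimal sketch: upsample 44.1 kHz PCM to a hypothetical DAC "sweet spot" of
# 352.8 kHz in software, so the DAC can be fed its target rate from the source.
# Assumes numpy, scipy and soundfile are installed; file names are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

data, fs = sf.read("input_44k1.flac")     # hypothetical Redbook-rate file
target_fs = 352800                        # hypothetical DAC target rate

g = np.gcd(target_fs, fs)
up, down = target_fs // g, fs // g        # 8 and 1 for 44100 -> 352800

# resample_poly applies a band-limited anti-imaging filter as part of the job
upsampled = resample_poly(data, up, down, axis=0)

sf.write("output_352k8.wav", upsampled, target_fs)
```

Whether that actually sounds better on a given DAC is a separate (and system-dependent) question; the point is only that the conversion can be done, and chosen, upstream instead of inside the DAC.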
JSmith Posted July 8, 2016 Posted July 8, 2016 (edited) Potentially an issue.... although not really a massive reason against high rates (and can be sorted out in the electronics if an issue)... xiph really shouldn't have bothered with it IMHO, and just stuck to explaining why high rates weren't inherently necessary in general. See, while I know what you mean.... We are just back at generalisations. Enough for what?.... the assumption that such or other rate will generally do X, doesn't hold .... which causes all the "debates" where people are trying to argue for either "Yes", or "No". The generalisation people need to take .... is that high rate doesn't mean high quality .... it means: the audio might be less damaged (as it has been resampled less); or the audio might work better on your DAC (as your DAC performs better with high rates). See that neither of these things means you can't have high quality with low rates. Yes, but if supersonic frequencies are filtered out then what is the point in having them there in the first place. If it ain't broke, don't fix it. Nyquist clearly says that extra data points (a higher sampling rate) offer no improvement to audio quality. In practice though, anti-aliasing may affect the top-end frequencies, so a slightly higher sample rate helps stop the high-frequency filters from passing ringing/roll-off that can be heard by some... As you say, a lot of this often comes down to processing within a DAC too and the DAC's effectiveness. Well, there is also the issue of distortion occurring when sampling at too high a rate, as the sampling accuracy can suffer. JSmith edit: spelling Edited July 8, 2016 by JSmith 1
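To put a number on the "slightly higher sample rate helps the filters" point: a minimal sketch, assuming Python with scipy (the 100dB stopband target and the 20kHz passband edge are illustrative assumptions, not any particular product's spec), estimating how long an anti-aliasing FIR has to be at 44.1kHz versus 96kHz.

```python
# Sketch: estimate the FIR length needed for a ~100 dB anti-alias filter that
# still passes 20 kHz, at 44.1 kHz vs 96 kHz sampling. Figures are illustrative.
from scipy.signal import kaiserord

atten_db = 100.0
for fs, stop_hz in [(44100, 22050), (96000, 48000)]:
    width_hz = stop_hz - 20000                      # transition band: 20 kHz -> Nyquist
    ntaps, _ = kaiserord(atten_db, width_hz / (0.5 * fs))
    print(f"fs={fs} Hz: transition {width_hz} Hz -> roughly {ntaps} taps")
```

The narrow 2kHz transition band forced by 44.1kHz needs a filter several times longer (hence more ringing to worry about) than the relaxed transition that 96kHz allows - which is the practical-headroom argument, not a "we can hear ultrasonics" argument.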
LHC Posted July 8, 2016 Posted July 8, 2016 (edited) Ok, I am tired of the dithering (sorry for the pun ) (and apologies to Newman for introducing more words here) I have downloaded a random textbook "Introduction to Signal Processing" by Sophocles J. Orfanidis. It is free to download from this URL http://www.ece.rutgers.edu/~orfanidi/intro2sp/orfanidis-i2sp.pdf So, now please advise which pages or sub-sections one should read to learn about the sampling and filtering difficulties, time smearing, and the issues associated with high resolution audio. Please don't tell me to read the book from end-to-end (not that it is a bad thing to do in general). I just need to know exactly what you guys are avoiding saying in specific detail here. Edit: I should add I am not being difficult or confrontational. I am just curious what exactly you guys are referring to. Edited July 8, 2016 by LHC
Guest rmpfyf Posted July 8, 2016 Posted July 8, 2016 Took a look through the ToC. It's all relevant. Though when you look up 'time smearing' in a filter context, I'd study windowing theory... not the complete answer but a good introduction to practical compromises in filtering. Not sure of your background (so might be telling you how to suck eggs here) but if you really wanted to tinker, I'd suggest getting a hold of some software that lets you experiment with this (e.g. Octave). Software mentioned in that book (MATLAB) has its user guide online, and it has plenty of examples in the Signal Processing Toolbox documentation... many of which are really good. I'd personally suggest this http://www.dspguide.com/pdfbook.htm And if you're super-interested, there's bound to be something useful on Coursera (and it's free).
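To give a flavour of the windowing trade-off those references get into: a tiny sketch, assuming Python with numpy/scipy rather than Octave/MATLAB (the tap count, cutoff and window choices are arbitrary), designing the same low-pass FIR with three different windows and checking how much leaks through the stopband.

```python
# Sketch of the window trade-off in FIR filter design: same length, same cutoff,
# different windows -> different stopband rejection (and impulse-response spread).
import numpy as np
from scipy.signal import firwin, freqz

numtaps = 101
cutoff = 0.4                              # normalised: 1.0 = Nyquist

for win in ["boxcar", "hamming", "blackmanharris"]:
    taps = firwin(numtaps, cutoff, window=win)
    w, h = freqz(taps, worN=8192)         # w runs from 0 to pi rad/sample
    stopband = np.abs(h[w > 0.5 * np.pi])
    print(f"{win:>14}: worst stopband leakage ~ {20 * np.log10(stopband.max()):6.1f} dB")
```

The rectangular window gives the sharpest transition but the worst rejection; the heavier windows reject far better at the cost of a wider transition and a more spread-out impulse response - that spreading is the compromise people loosely call 'time smearing'.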
davewantsmoore Posted July 8, 2016 Posted July 8, 2016 BTW every Wadia product has done it that way, starting with their 1988 model 2000 CD player. Personally I think that, today, the importance of filters is overstated. Demonstrably audible with specific signals and specific filter poles? Sure, if they say so. But today?... using commercial music products in the same player?... taking the high-res product and converting down to 16/44 in the studio?... after the original high-res product has been processed and cleaned up as part of its routine production process?... then played back through the modern player's sigma-delta DAC with massive oversampling and simple filter? No, I still think we are obsessing at the wrong end of the playback chain. Still mistakenly driven by a 'garbage in garbage out' principle that puts the front end on a pedestal that made sense when front ends were dramatically error-prone, but ceases to be a useful principle when front ends are either inaudible or near-as-drat inaudible. We are talking about lots more than just the "low pass filter" on the DAC output. Although I strongly agree (as you know) there are many more significant problems for audio than a DAC. Happy not to have all those words here. It's difficult to do the thread topic justice without it.
davewantsmoore Posted July 8, 2016 Posted July 8, 2016 Yes, but if supersonic frequencies are filtered out then what is the point in having them there in the first place. Arguably, no reason. Nyquist clearly says that extra data points (a higher sampling rate) offer no improvement to audio quality. In practice though, anti-aliasing may affect the top-end frequencies, so a slightly higher sample rate helps stop the high-frequency filters from passing ringing/roll-off that can be heard by some... Yes... but this is simply a restatement of the fact that you need to choose a high enough sampling rate to capture the frequencies you want without error. eg. If your real-world implementation of the system is not perfect, then you may want to sample at slightly more than 2x the highest frequency. Well, there is also the issue of distortion occurring when sampling at too high a rate, as the sampling accuracy can suffer. Sure. It's pretty fair to say that just as you can achieve high quality with low rates (by dealing with the inherent issues) .... you can also achieve good results with high rates (by dealing with the inherent issues). Please don't tell me to read the book from end-to-end It isn't just one book to read from end-to-end ... it is many. In a lot of ways, it is really something you need to be taught interactively. With examples, and tests, and questions, and demonstrations. The DSPguide book posted has the ADC/DAC section, which is a good start ..... it will explain to you how digital audio works .... Understanding how/why/when it doesn't work (for example, how resampling digital audio could harm it on the time axis) .... is a lot more complex, and is difficult to get across in a book; it takes more of a "tutorial" format (it relies on experience).
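On the "resampling could harm it" point, a rough illustration, assuming Python with numpy/scipy (the test tones, rates and durations are arbitrary choices), comparing a naive sample-rate conversion against a band-limited one for a signal whose true values we can compute at any rate.

```python
# Sketch: measure the error two different 44.1 kHz -> 96 kHz conversions introduce.
# The "true" waveform is known analytically, so each method's error is measurable.
import numpy as np
from scipy.signal import resample_poly

def bandlimited(t):
    # benign, fully in-band test signal (all components well below 20 kHz)
    return (np.sin(2 * np.pi * 997 * t)
            + 0.5 * np.sin(2 * np.pi * 6089 * t)
            + 0.25 * np.sin(2 * np.pi * 15013 * t))

def rms(e):
    return float(np.sqrt(np.mean(e ** 2)))

fs_in, fs_out, dur = 44100, 96000, 1.0
t_in = np.arange(int(fs_in * dur)) / fs_in
t_out = np.arange(int(fs_out * dur)) / fs_out
x_in, x_true = bandlimited(t_in), bandlimited(t_out)

x_naive = np.interp(t_out, t_in, x_in)    # linear interpolation ("cheap" SRC)
x_poly = resample_poly(x_in, 320, 147)    # band-limited polyphase (96000/44100 = 320/147)

sl = slice(1000, -1000)                   # ignore filter edge transients
print("linear-interp RMS error:", rms(x_naive[sl] - x_true[sl]))
print("polyphase     RMS error:", rms(x_poly[sl] - x_true[sl]))
```

Neither number says anything about audibility on its own; the point is just that "it was resampled" covers everything from effectively transparent to genuinely damaging, which is why provenance matters more than the rate printed on the label.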
LHC Posted July 9, 2016 Posted July 9, 2016 If the original time history is so 'busy' that a certain Fs isn't sufficient to capture it adequately as spectra, then there's a potential case for hires. Corner cases though, and we'd be getting into a whole 'musicality' vs 'authenticity' vs 'reproduction capability' argument - never heard a system even at high-audiophile money that's got the last bit maxed out enough to talk honest shop about sampling frequencies being the performance limit. (Couldn't comment at 'ridiculous' audiophile money; at some point it's too blue for my blood, however best system I'd ever heard was Redbook). This concurs with an experience from a GTG that I posted years ago. This particular member has an ultra high end and highly resolving system. We compared the hi res layer of a hybrid SACD to the same music in the redbook layer. This was sighted listening comparison, so can't pretend that it is rigorous or can be offered as evidence, it's merely an observation. I believe the classical piano music was well recorded, DSD perhaps. But on a top notch system, with quality recording, the difference between hi res and redbook was actually tiny. The only clearly audible difference was during periods of complex music when there was a very rapid series of piano notes being played. On the hi res one could clearly hear the notes. Whereas on the redbook, the notes are still there, but sounded cluttered together, and muddled by comparison. In all other aspects the two formats sounded the same to my poor ears. So when I came across the temporal resolution explanation for audible differences in high resolution music it made sense to me. It is consistent with my past experience (again I am not saying it is of evidence quality). 1
davewantsmoore Posted July 9, 2016 Posted July 9, 2016 On the hi res one could clearly hear the notes. Whereas on the redbook, the notes are still there, but sounded cluttered together, and muddled by comparison. In all other aspects the two formats sounded the same to my poor ears. The point to understand (and it could be considered a "semantic" point, although it is important for understanding the answer to the thread title) .... is that this result isn't because redbook is incapable of this 'temporal resolution'. ... and while it might be, in practice, that we'll actually end up with high rates as the solution (and so perhaps, the details of what is happening are moot to many people) .... the reasons there was a difference are actually one (or more) of these: (1) the redbook and DSD are intentionally different content (in the frequency and amplitude redbook covers, ie. between 0 and 20khz and above ~ -96dBFS); (2) the DAC performs differently when fed different rates due to the way it is designed; (3) the redbook was resampled from a higher rate (eg. DSD) for release .... and this resampling harmed the audio
Guest Eggcup The Daft Posted July 9, 2016 Posted July 9, 2016 This concurs with an experience from a GTG that I posted years ago. This particular member has an ultra high end and highly resolving system. We compared the hi res layer of a hybrid SACD to the same music in the redbook layer. This was sighted listening comparison, so can't pretend that it is rigorous or can be offered as evidence, it's merely an observation. I believe the classical piano music was well recorded, DSD perhaps. But on a top notch system, with quality recording, the difference between hi res and redbook was actually tiny. The only clearly audible difference was during periods of complex music when there was a very rapid series of piano notes being played. On the hi res one could clearly hear the notes. Whereas on the redbook, the notes are still there, but sounded cluttered together, and muddled by comparison. In all other aspects the two formats sounded the same to my poor ears. So when I came across the temporal resolution explanation for audible differences in high resolution music it made sense to me. It is consistent with my past experience (again I am not saying it is of evidence quality). I've heard enough live classical piano music to suggest that the "muddle" was the correct reproduction...
Happy Posted July 9, 2016 Posted July 9, 2016 I've heard enough live classical piano music to suggest that the "muddle" was the correct reproduction... Yup oftentimes I am surprised how muddled and somewhat coloured live piano sounds. Sent from my iPhone using Tapatalk
Happy Posted July 9, 2016 Posted July 9, 2016 Pianos are very difficult to record. It's interesting that some of the most realistic recordings are pirate ones. Sent from my iPhone using Tapatalk
JSmith Posted July 9, 2016 Posted July 9, 2016 coloured live piano There's only 2 colours on a piano... JSmith 1
Guest rmpfyf Posted July 9, 2016 Posted July 9, 2016 The point to understand (and it could be considered a "semantic" point, although it is important for understanding the answer to the thread title) .... is that this result isn't because redbook is incapable of this 'temporal resolution'. +1. What hi-res gives you is the ease of capturing and recreating the necessary waveform shape, afforded by a smaller bin size in signal processing - not extra temporal resolution. If anything.
LHC Posted July 10, 2016 Posted July 10, 2016 The point to understand (and it could be considered a "semantic" point, although it is important for understanding the answer to the thread title) .... is that this result isn't because redbook is incapable of this 'temporal resolution'. ... and while it might be, in practice, that we'll actually end up with high rates as the solution (and so perhaps, the details of what is happening are moot to many people) .... the reasons there was a difference are actually one (or more) of these: (1) the redbook and DSD are intentionally different content (in the frequency and amplitude redbook covers, ie. between 0 and 20khz and above ~ -96dBFS); (2) the DAC performs differently when fed different rates due to the way it is designed; (3) the redbook was resampled from a higher rate (eg. DSD) for release .... and this resampling harmed the audio Look. I agree those are indeed possible causes of any audible differences. If one were to pin down the cause and effect, then one has to carefully isolate and eliminate these factors. For example, by performing the recording oneself (and not using commercial recordings) so that provenance is known; that would eliminate your first and third points. As for the second point, use the highest quality DAC chip that can natively decode both hi-res and standard redbook. Mark Waldrep and Reiss have written about ways to conduct a rigorous experiment so contributing factors can be eliminated. Worth a read. We just have to agree to disagree whether redbook can reproduce this level of temporal resolution. I can only quote the published positions of Kunchur, Yamaha engineers, and Bob Stuart that differ from yours. Reiss acknowledged this position in his review, and importantly he did not categorise it in the 'controversial' section. At least I quote my sources, where are your sources?
LHC Posted July 10, 2016 Posted July 10, 2016 I've heard enough live classical piano music to suggest that the "muddle" was the correct reproduction... Sure, it may depend a bit on the piano used too; not all pianos sound the same. The 'clarity' may be something that the recording engineers introduced, thinking it might please listeners of hi-res. That is not possible to know for sure. I was just noting a point of difference.
LHC Posted July 10, 2016 Posted July 10, 2016 I have posted this video before. Here is Bob Stuart talking about how MQA works. Sure it involves new filtering, but as I wrote before it is the objective that is important. The objective has to be about achieving a temporal resolution not achievable by redbook. Filtering is a means to an end; it is the end that is important. I have posted earlier a blog post with Bob explaining what he means by 'temporal resolution', and it was consistent with Kunchur's.
LHC Posted July 10, 2016 Posted July 10, 2016 (edited) Many(!!!!) people have done this test, either casually or in detail. I have done it. When I was not able to demonstrate any audibility ..... I went "mythbuster" style, and tried to determine what it would take to force audibility. The only things I could find audible were things which extended into the <20khz range, such as issues with sample rate conversion, or intermodulation distortion in the tweeter. If I were to "publish" my result .... it is very difficult. My methods would invite (and fair enough too) the commentary that my "result doesn't prove anything" (which is a totally correct observation). If I used a different method, or higher frequencies, or louder replay, or, or .... then it may have been able to demonstrate audibility. So, naturally I told very few people about my test <shrug> My conclusion about my result is simply "I wasn't able to show anything". Tests done on one test subject (yourself), no matter how rigorous, are not going to be publishable. But statistical testing like Meyer and Moran's is publishable. So where are the 'many' tests that you referred to, in the style of Meyer and Moran, that are not already reviewed by Reiss? EDIT: having said the above, I do have to say I truly and personally admire your drive, ingenuity and initiative. I wish we had more people like you in our country (in a general sense). Edited July 10, 2016 by LHC
LHC Posted July 10, 2016 Posted July 10, 2016 Took a look through the ToC. It's all relevant. Though when you look up 'time smearing' in a filter context, I'd study windowing theory... not the complete answer but a good introduction to practical compromises in filtering. Not sure of your background (so might be telling you how to suck eggs here) but if you really wanted to tinker, I'd suggest getting a hold of some software that lets you experiment with this (e.g. Octave). Software mentioned in that book (MATLAB) has its user guide online, and it has plenty of examples in the Signal Processing Toolbox documentation... many of which are really good. I'd personally suggest this http://www.dspguide.com/pdfbook.htm And if you're super-interested, there's bound to be something useful on Coursera (and it's free). Thank you for at least trying to write a useful response. It would be unreasonable to suggest one has to read many books and rely on years of practical experience. If windowing theory is a start, I might look into it. I don't mind technical detail at all, I just need the guidance to be very specific as I don't have much spare time. I can access help if I get stuck.
Newman Posted July 10, 2016 Posted July 10, 2016 All obsessive deep-diving into narrow, generally inaudible nooks and crannies of our hobby is all part of the fun. However, vast over-exaggeration of the importance of these nooks and crannies will continue to be corrected with proper perspective. For the sake of the hobby in general. See you around. Off to check on some threads relevant to progressing the actual experience of audio. 1
LHC Posted July 10, 2016 Posted July 10, 2016 +1. What hi-res gives you is the ease of capturing and recreating the necessary waveform shape, afforded by a smaller bin size in signal processing - not extra temporal resolution. If anything. Hi rmpfyf. I think I will have to agree to disagree with you and Dave at some point to give this closure. I will check out some of your suggestions about filtering and move on. Now there is an 'elephant in the room' that no one has mentioned in this long thread (and no, it is not Newman's point that we are arguing over the wrong end of the recording/reproduction chain, although that is true too). But I think I should just leave it alone.
davewantsmoore Posted July 10, 2016 Posted July 10, 2016 As for the second point, use the highest quality DAC chip that can natively decode both hi-res and standard redbook. Mark Waldrep and Reiss have written about ways to conduct a rigorous experiment so contributing factors can be eliminated. Worth a read. Listening tests do not explain why (the sound was different). That's all I am trying to say in this thread.... we have a bunch of people saying they heard differences between high and low rates (and there is nothing controversial about that) ..... but if we are going to propose that there is something wrong with redbook (for example) ..... one has to carefully isolate and eliminate these factors .... then that is what is necessary. When we do that ...... what we find is most of the reasoning given by people for "what is wrong with redbook" (for example), is actually incorrect. Most people don't care (which is understandable) ...... but for those who do care about the answer to "why 192khz matters" .... then it's obviously very important. We just have to agree to disagree whether redbook can reproduce this level of temporal resolution. Absolute baloney. This is not a thing which is based on "opinions". The properties of redbook aren't any type of mystery. They are a thing which can be demonstrated. We just have to agree to disagree whether redbook can reproduce this level of temporal resolution. I can only quote the published positions of Kunchur, Yamaha engineers, and Bob Stuart that differ from yours. You need to understand the detail behind why Bob says what he does, in order to understand that he actually doesn't disagree. He is talking about the resampling of audio and the practical problems in designing a converter which mean there can be a difference (exactly what I've been banging on about). Kunchur presents results which are completely non-controversial .... however he is (subtly) incorrect as to how they relate to digital audio. They relate to what Bob Stuart is saying about resampling and design of converters ..... and NOT to the requirement to have such a high rate, in order to represent the time differences he has demonstrated are important. Anyways, like I said, most people don't really care "why". Where are your sources? The MQA patent. That'll do. That tells us what Bob really thinks is "wrong with redbook". In practice .... we probably do need 192khz .... it's probably the easiest way to avoid all the issues. People just need to understand there is no way to infer quality by looking at the rate. That is all. At least I quote my sources, where are your sources? That's not how this works.... I can't quote sources which say there is "no problem with redbook". The evidence needs to be present to say there IS a problem with redbook. All I'm saying is that .... When we look at the evidence (that these people are presenting) ..... it doesn't say there is an inherent problem with redbook itself ...... the problem is with poor implementations of it. This isn't anything new (and is the basis behind what MQA are attempting). ... but anyways. This isn't "my opinion" that I invented on my own. I don't need to defend "my" position. The truth is out there. Have fun. 1
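For what it's worth, the "can be demonstrated" part is straightforward to sketch. A rough illustration, assuming Python with numpy/scipy (the 5 microsecond offset, the Gaussian test pulse and the 64x analysis factor are arbitrary choices): two 44.1kHz captures of the same band-limited event, offset by far less than one sample period (~22.7 microseconds), still encode that offset, and it can be recovered.

```python
# Sketch: sub-sample timing survives 44.1 kHz sampling of a band-limited signal.
# Two captures of the same in-band pulse, 5 us apart (one sample period ~22.7 us),
# are upsampled onto a fine grid and cross-correlated to recover the offset.
import numpy as np
from scipy.signal import gausspulse, resample_poly, correlate

fs = 44100
t = np.arange(int(fs * 0.05)) / fs                  # 50 ms capture window
true_delay = 5e-6                                   # 5 us, well under one sample period

# same band-limited "analog" event (centred ~5 kHz, nothing near 20 kHz),
# sampled at 44.1 kHz with and without the 5 us offset
x1 = gausspulse(t - 0.025, fc=5000, bw=0.6)
x2 = gausspulse(t - 0.025 - true_delay, fc=5000, bw=0.6)

factor = 64                                         # analysis grid: ~0.35 us per step
u1 = resample_poly(x1, factor, 1)
u2 = resample_poly(x2, factor, 1)

xc = correlate(u2, u1, mode="full")
lag = (xc.argmax() - (len(u1) - 1)) / (fs * factor)
print(f"recovered offset ~ {lag * 1e6:.2f} us (true offset 5.00 us)")
```

The recovered offset lands within the fine analysis grid (a fraction of a microsecond), nowhere near limited to the 22.7 microsecond sample period; which is the distinction being drawn above between what the format can encode and what sloppy resampling or converter implementations do to it.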
davewantsmoore Posted July 10, 2016 Posted July 10, 2016 The objective has to be about achieving a temporal resolution not achievable by redbook. Only when the redbook is resampled (which may happen routinely in production and playback - but maybe not). It isn't a problem with redbook itself, but with what (and how) you do with it.
LHC Posted July 10, 2016 Posted July 10, 2016 All obsessive deep-diving into narrow, generally inaudible nooks and crannies of our hobby is all part of the fun. However, vast over-exaggeration of the importance of these nooks and crannies will continue to be corrected with proper perspective. For the sake of the hobby in general. See you around. Off to check on some threads relevant to progressing the actual experience of audio. You are correct, this has become a bit obsessive. But not as obsessive as the debate I had with Dave over the Jriver Jplay hoax claim.