JSmith Posted February 22, 2016 http://music.columbia.edu/cmc/MusicAndComputers/chapter2/02_03.php "How often do we need to sample a waveform in order to achieve a good representation of it? The answer to this question is given by the Nyquist sampling theorem, which states that to well represent a signal, the sampling rate (or sampling frequency—not to be confused with the frequency content of the sound) needs to be at least twice the highest frequency contained in the sound of the signal." JSmith
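A quick pure-Python illustration of why the rate must exceed twice the highest frequency (my own sketch, not from the linked chapter, with arbitrarily chosen frequencies): a 600 Hz tone sampled at 1 kHz produces exactly the same sample values as a sign-flipped 400 Hz tone, so once sampled the two cannot be told apart.

```python
import math

fs = 1000.0  # sample rate (Hz)

# 600 Hz is above Nyquist (fs/2 = 500 Hz); its alias is 600 - 1000 = -400 Hz,
# i.e. a sign-flipped 400 Hz tone.
hi = [math.sin(2 * math.pi * 600 * n / fs) for n in range(50)]
lo = [-math.sin(2 * math.pi * 400 * n / fs) for n in range(50)]

# Sample for sample, the two tones are indistinguishable after sampling:
print(all(abs(a - b) < 1e-9 for a, b in zip(hi, lo)))  # True
```

This is the aliasing that the anti-aliasing (band-limiting) filter before the ADC exists to prevent.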
JSmith Posted February 22, 2016 ... reconstruct the complete (bandwidth limited) audible signal. Sorry to be picky... but I think you mean "band limited". The signal you are sampling needs to be band limited too... JSmith
hochopeper Posted February 22, 2016 (edited) Sorry to be picky... but I think you mean "band limited". The signal you are sampling needs to be band limited too... JSmith What the hell is "band" as an attribute for a signal if it is not a lazy man's shorthand for "bandwidth"? (I really thought people were worried about saving themselves the pain of 5 extra key presses) Edited February 22, 2016 by hochopeper
JSmith Posted February 22, 2016 (I really thought people were worried about saving themselves the pain of 5 extra key presses) Nah, they're two different beasts. JSmith
BradC Posted February 22, 2016 For signals mixed to higher frequencies (eg FM radio) the bandwidth of the signal is small, but the band is 90-108 MHz or so
hochopeper Posted February 23, 2016 @@almikel did those posts help clarify what you were asking?
JSmith Posted February 23, 2016 Using terms correctly is the best beginning to correct advice... JSmith
almikel Posted February 23, 2016 @@almikel did those posts help clarify what you were asking? hi Chris, unfortunately no - not for when a sound starts in between samples, and could start just after the previous sample or just before the next sample (or anywhere between) - I think both you and Dave are saying sampling can get that correct - I don't get how it could - for the start (and stop) of signals. I'm still referring to drum hits occurring very close together - no information higher than 1/2 the sampling frequency. A poor representation of what I'm trying to understand is below: 1 drum hit happens to be on a sample point Graph 1 2nd drum hit occurs "somewhere" after but before the next sample - it could be immediately following the previous sample: Graph 2 Or later and much closer to the next sample Graph 3 It's how sampling theory manages the difference between graph 2 and graph 3 that I can't grasp - everything else regarding Nyquist rates and slopes implying frequency I get.... ...and by the time you could hear a drum hit rise out of the noise floor it's probably had 4K samples taken (say about 1/10 sec), but theoretically is there a minuscule temporal error for the start of the signal (2nd drum hit)? cheers Mike
hochopeper Posted February 23, 2016 The whole waveform shifts to the right. The magnitude of all samples change for the delayed start of the sound, not just the starting point as you've shown. The wave still starts at 0 on y axis but not at 0 on x axis.
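To make that concrete, here is a small pure-Python sketch (my own illustration with an arbitrary 1 kHz tone, and with the caveat that a truly abrupt onset is not strictly band-limited): delaying the onset by half a sample period changes the value of every sample after the onset, so the sub-sample timing is carried by the sample values themselves, not lost in the gap between sample instants.

```python
import math

fs = 44100.0   # sample rate (Hz)
f = 1000.0     # tone frequency, well below Nyquist
Ts = 1.0 / fs  # sample period

def sampled_tone(t0, n_samples):
    """Samples of a tone that is silent before t0 and a sine afterwards."""
    return [math.sin(2 * math.pi * f * (n * Ts - t0)) if n * Ts >= t0 else 0.0
            for n in range(n_samples)]

a = sampled_tone(10.0 * Ts, 100)   # onset exactly on a sample instant
b = sampled_tone(10.5 * Ts, 100)   # onset half a sample period later

# The half-sample delay changes every sample after the onset,
# so the delay is encoded in the sample stream itself.
print(a[11], b[11])  # different values
```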
almikel Posted February 23, 2016 Now the lightbulb! With a few samples you get the slope, and assuming nothing above Nyquist you can determine when it started. Cheers guys
JSmith Posted February 23, 2016 Mike Possibly a bit technical, but hopefully this PDF explains it better for you; http://www.wescottdesign.com/articles/Sampling/sampling.pdf JSmith
JSmith Posted February 23, 2016 The whole waveform shifts to the right. The magnitude of all samples change for the delayed start of the sound, not just the starting point as you've shown. The wave still starts at 0 on y axis but not at 0 on x axis. There will always be a gap between two sampling points... this is the nature of digital sampling. A faithful reproduction of the original is what we seek... the original can never be completely captured. JSmith
Volunteer sir sanders zingmore Posted February 23, 2016 There will always be a gap between two sampling points... this is the nature of digital sampling. A faithful reproduction of the original is what we seek... the original can never be completely captured. JSmith But it can be completely reconstructed
JSmith Posted February 23, 2016 (edited) But it can be completely reconstructed Well yeah, this is what a DAC is for. Completely, apart from a slight form of aliasing that is introduced during interpolation and high frequency attenuation. edit: Completely reconstructed... when the signal is perfectly band-limited. JSmith Edited February 23, 2016 by JSmith
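The reconstruction claim can be demonstrated numerically. A minimal sketch (my own, with arbitrary parameters, using a truncated Whittaker-Shannon interpolation, so there is a small truncation error that shrinks as more samples are kept): the samples of a band-limited tone are enough to recover its value between the sample instants.

```python
import math

fs = 8.0   # sample rate (arbitrary units)
f = 1.0    # tone frequency, well below Nyquist (fs/2 = 4)
N = 400    # samples kept each side of t = 0; more samples -> smaller error

samples = {n: math.sin(2 * math.pi * f * n / fs) for n in range(-N, N + 1)}

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    """Whittaker-Shannon interpolation, truncated to the stored samples."""
    return sum(s * sinc(t * fs - n) for n, s in samples.items())

t = 0.3125  # deliberately between two sample instants
error = abs(reconstruct(t) - math.sin(2 * math.pi * f * t))
print(error)  # small, and it shrinks as N grows
```

In a real DAC the equivalent of this sinc sum is the reconstruction (anti-imaging) filter; the "gap between samples" carries no missing information for a band-limited input.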
firedog Posted June 29, 2016 (edited) http://www.aes.org/e-lib/browse.cfm?elib=18296 free download Peer reviewed paper. Not workshop. Meta-analysis indicating the difference between Redbook and hi-res is audible. Eighteen published experiments for which sufficient data could be obtained were included, providing a meta-analysis that combined over 400 participants in more than 12,500 trials. Results showed a small but statistically significant ability of test subjects to discriminate high resolution content, and this effect increased dramatically when test subjects received extensive training. This result was verified by a sensitivity analysis exploring different choices for the chosen studies and different analysis approaches. and: http://www.prosoundweb.com/article/p...solution_audio "Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content. Trained listeners could distinguish between the two formats around sixty percent of the time." Edited June 29, 2016 by firedog
Newman Posted June 29, 2016 That's just a literature review! But appreciate the link.
Nada Posted June 30, 2016 (edited) http://www.aes.org/e-lib/browse.cfm?elib=18296 free download Peer reviewed paper. Not workshop. Meta-analysis indicating the difference between Redbook and hi-res is audible. and: http://www.prosoundweb.com/article/p...solution_audio "Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content. Trained listeners could distinguish between the two formats around sixty percent of the time." faeces in, faeces out = the extraordinary power of meta-analysis to obfuscate Edited June 30, 2016 by Nada
davewantsmoore Posted June 30, 2016 faeces in, faeces out There's probably still something in it. The real core point for a thread like this is WHY high sampling rates might matter. Audiophiles all over have been led to believe simply that higher rates are better... however it is totally not that simple. The rate simply does not tell you anything about the quality of the audio. High sampling rates do the following things: 1) allow you to store higher frequencies (they are not audible); 2) allow digital converters to be engineered in a way which can improve performance (or at least reduce costs); 3) allow filtering to be applied to the audio that "fixes" things which are finer in time resolution than the lower rate allows. The first two don't justify delivering music to consumers in high sampling rates. Consumers can (and most of their hardware does) oversample the audio to help the converter perform better. The third is deep. MQA is attempting this in the mass market. Other players are doing it in the audiophile fringes. The take-home message is that up/down sampling of audio is a big opportunity for things to go wrong (MQA is trying to avoid this, and/or fix the damage).
Newman Posted June 30, 2016 You are starting another myth, well done. Also, the filtering in #3 would have to be of a type that cannot be applied to an upsampled SD file. One thing that the above literature review does conclude is that "the causes are still unknown".
eltech Posted June 30, 2016 hi Chris, unfortunately no - not for when a sound starts in between samples, and could start just after the previous sample or just before the next sample (or anywhere between) - I think both you and Dave are saying sampling can get that correct - I don't get how it could - for the start (and stop) of signals. I'm still referring to drum hits occurring very close together - no information higher than 1/2 the sampling frequency. A poor representation of what I'm trying to understand is below: 1 drum hit happens to be on a sample point Graph 1 2nd drum hit occurs "somewhere" after but before the next sample - it could be immediately following the previous sample: Graph 2 Or later and much closer to the next sample Graph 3 It's how sampling theory manages the difference between graph 2 and graph 3 that I can't grasp - everything else regarding Nyquist rates and slopes implying frequency I get.... ...and by the time you could hear a drum hit rise out of the noise floor it's probably had 4K samples taken (say about 1/10 sec), but theoretically is there a minuscule temporal error for the start of the signal (2nd drum hit)? cheers Mike I agree with you. Though your discussion of drum hits is not the best way to illustrate the example because at a sampling frequency of 44,100 times per second, this is very fast, and the hits will be sampled. A better illustration I think, is to think of the very tiny vibrations made by any instrument. Also think of the inter-modulation of sounds made by multiple instruments such as in a symphony orchestra with many instruments playing at once. Now this does illustrate the many undulations of a waveform that cannot be captured in its entirety. A digital sample is a close representation but not a facsimile or an analogue.
I think the problem with the Nyquist theorem is that since we can fairly accurately sample and reconstruct a sine wave, it is assumed that a very complex waveform can be captured and reconstructed in the same way, and that's not true. Think about it like this: with a sample rate of 44,100 per second, a DC signal is represented by 44,100 samples. A sine wave at 22,050 Hz is represented by two samples. If we use two samples to sample a frequency that the human ear is more sensitive to, like 1 kHz (actually you can do this by using a sample rate of 2 kHz), it would be very apparent to all upon hearing it, just how much distortion and inaccuracy there would be to the sound. It would be unlistenable and unrecognisable. Using PCM to sample audio results in ever fewer samples the higher up in frequency we wish to capture. Low frequencies are captured quite accurately and high frequencies are not captured particularly well. I think this is the reason when CD first was released that listeners commented about a glassy, harsh unnatural sound. Now that digital audio has been around for over 20 years and people are used to hearing it, they have acclimatised themselves to this sort of sound. We hear it on the radio, on the TV, and from our CDs and music files. But if someone goes from listening to digital to analog audio the difference is quite profound. 192 kHz is an improvement upon the lower sampling rate of 44.1 kHz not just because it samples higher frequencies, but because it gives a few extra samples at higher audible frequencies which increases the accuracy of the sound which is captured and later played back.
Newman Posted June 30, 2016 (edited) @@eltech didn't you read any of my reply to you last time you made much the same comment? You are peddling misinformation. Edited December 10, 2017 by Newman link updated
Volunteer sir sanders zingmore Posted June 30, 2016 It's the stair-step shuffle all over again
davewantsmoore Posted June 30, 2016 You are starting another myth What is that? (I'm certainly not intending to) I agree with you. Though your discussion of drum hits is not the best way to illustrate the example because at a sampling frequency of 44,100 times per second, this is very fast, and the hits will be sampled. A better illustration I think, is to think of the very tiny vibrations made by any instrument. Also think of the inter-modulation of sounds made by multiple instruments such as in a symphony orchestra with many instruments playing at once. Now this does illustrate the many undulations of a waveform that cannot be captured in its entirety. They can be captured in their entirety. I think the problem with the Nyquist theorem is that since we can fairly accurately sample and reconstruct a sine wave, it is assumed that a very complex waveform can be captured and reconstructed in the same way It can be ... there is no limit to the complexity of the waveform which can be perfectly represented (as long as it has no frequency components above half the sampling rate) If we use two samples to sample a frequency that the human ear is more sensitive to, like 1 kHz (actually you can do this by using a sample rate of 2 kHz), it would be very apparent to all upon hearing it, just how much distortion and inaccuracy there would be to the sound. I can only assume then that you've never done this.
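The "complexity is lost" claim can be checked directly. A minimal pure-Python sketch (my own, with frequencies deliberately chosen to fall on DFT bins so the recovery is exact rather than smeared by windowing): a waveform built from several partials, all below Nyquist, has every partial's amplitude recovered from its samples alone.

```python
import cmath
import math

N = 64     # number of samples analysed
fs = 64.0  # sample rate chosen so DFT bin k corresponds to exactly k Hz

# A "complex" waveform: three partials, all below Nyquist (32 Hz).
partials = {5: 1.0, 13: 0.5, 29: 0.25}  # frequency (Hz) -> amplitude

x = [sum(a * math.sin(2 * math.pi * f * n / fs) for f, a in partials.items())
     for n in range(N)]

def dft_bin(signal, k):
    """One bin of the discrete Fourier transform."""
    n_pts = len(signal)
    return sum(signal[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
               for n in range(n_pts))

# Every partial's amplitude comes back out of the samples, exact to rounding.
for f, a in partials.items():
    measured = 2 * abs(dft_bin(x, f)) / N
    print(f, measured, a)  # measured matches the amplitude we put in
```

The same holds for any mix of components below fs/2; nothing about the waveform's complexity degrades the representation.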
firedog Posted June 30, 2016 Applying "Nyquist" to the real world isn't as simple as some of you seem to think: http://www.audiostream.com/content/sampling-what-nyquist-didnt-say-and-what-do-about-it-tim-wescott-wescott-design-services#Mgw8vTt0e6hPJzxZ.97
eltech Posted June 30, 2016 What is that? (I'm certainly not intending to) They can be captured in their entirety. It can be ... there is no limit to the complexity of the waveform which can be perfectly represented (as long as it has no frequency components above half the sampling rate) I can only assume then that you've never done this. See the post above and you might understand what I am talking about.