
Posted

http://music.columbia.edu/cmc/MusicAndComputers/chapter2/02_03.php

 

"How often do we need to sample a waveform in order to achieve a good representation of it?

 
The answer to this question is given by the Nyquist sampling theorem, which states that to well represent a signal, the sampling rate (or sampling frequency—not to be confused with the frequency content of the sound) needs to be at least twice the highest frequency contained in the sound of the signal."

 

JSmith

Posted

... reconstruct the complete (bandwidth limited) audible signal.

 

Sorry to be picky... but I think you mean "band limited". ;)

 

The signal you are sampling needs to be band limited too...

 

JSmith

Posted (edited)

Sorry to be picky... but I think you mean "band limited". ;)

 

The signal you are sampling needs to be band limited too...

 

JSmith

 

 

What the hell is "band" as an attribute for a signal if it is not a lazy man's shorthand for "bandwidth"? (I really thought people were worried about saving themselves the pain of 5 extra key presses)

Edited by hochopeper
Posted

(I really thought people were worried about saving themselves the pain of 5 extra key presses)

 

Nah, they're two different beasts. ;)

 

JSmith

Posted

For signals mixed to higher frequencies (e.g. FM radio) the bandwidth of the signal is small, but the band is 88-108 MHz or so

Posted

@@almikel did those posts help clarify what you were asking?

hi Chris,

Unfortunately no - not for when a sound starts in between samples. It could start just after the previous sample or just before the next one (or anywhere in between). I think both you and Dave are saying sampling can capture that correctly - I don't get how it could, for the start (and stop) of signals.

 

I'm still referring to drum hits occurring very close together - no information higher than 1/2 the sampling frequency.

A poor representation of what I'm trying to understand is below:

1 drum hit happens to be on a sample point

Graph 1

[attached image: Graph 1]

 

2nd drum hit occurs "somewhere" after but before the next sample - it could be immediately following the previous sample:

Graph 2

[attached image: Graph 2]

 

 

Or later and much closer to the next sample

Graph 3

[attached image: Graph 3]

 

It's how sampling theory manages the difference between graph 2 and graph 3 that I can't grasp - everything else regarding Nyquist rates and slopes implying frequency I get....

...and by the time you could hear a drum hit rise out of the noise floor it's probably had 4K samples taken (say about 1/10 sec),

but theoretically is there a minuscule temporal error for the start of the signal (2nd drum hit)?

 

cheers

Mike

Posted

The whole waveform shifts to the right. The magnitudes of all the samples change for the delayed start of the sound, not just the starting point as you've shown. The wave still starts at 0 on the y-axis, but not at 0 on the x-axis.

Posted

Now the lightbulb goes on.

With a few samples you get the slope, and (assuming no content above Nyquist) you can determine when it started

Cheers guys
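The sub-sample onset point above can be checked numerically. A minimal NumPy sketch, using an illustrative band-limited "hit" (a sinc pulse) delayed by 0.3 of a sample period: the delay changes every sample value, and sinc interpolation of those samples recovers the onset to well within one sample period.

```python
import numpy as np

# A band-limited "hit" (a sinc pulse), once aligned to the sample grid
# and once delayed by 0.3 of a sample period.
n = np.arange(-64, 64)            # sample indices
delay = 0.3                       # illustrative sub-sample onset delay
aligned = np.sinc(n)
delayed = np.sinc(n - delay)

# The delay changes every sample value, not just the one nearest the onset.
assert np.count_nonzero(aligned != delayed) == len(n)

# Whittaker-Shannon (sinc) interpolation onto a 100x finer grid.
t_fine = np.arange(-2, 2, 0.01)
recon = np.array([np.sum(delayed * np.sinc(t - n)) for t in t_fine])

# The reconstructed peak sits near t = 0.3, between two sample points:
# the sub-sample onset time is recovered from the samples alone.
peak_t = t_fine[np.argmax(recon)]
assert abs(peak_t - delay) < 0.02
```

The samples encode the timing because a band-limited onset cannot jump instantaneously; its shaped rise leaves a fingerprint on every nearby sample value.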

Posted

The whole waveform shifts to the right. The magnitudes of all the samples change for the delayed start of the sound, not just the starting point as you've shown. The wave still starts at 0 on the y-axis, but not at 0 on the x-axis.

 

There will always be a gap between two sampling points... this is the nature of digital sampling.

 

A faithful reproduction of the original is what we seek... the original can never be completely captured.

 

JSmith

  • Volunteer
Posted

There will always be a gap between two sampling points... this is the nature of digital sampling.

 

A faithful reproduction of the original is what we seek... the original can never be completely captured.

 

JSmith

 

But it can be completely reconstructed 

Posted (edited)

But it can be completely reconstructed 

 

Well yeah, this is what a DAC is for. Completely, apart from a slight form of aliasing introduced during interpolation, and some high-frequency attenuation. :)

 

edit: Completely reconstructed... when the signal is perfectly band-limited.

 

JSmith

Edited by JSmith
  • 4 months later...
Posted (edited)

http://www.aes.org/e-lib/browse.cfm?elib=18296

 

free download

 

Peer-reviewed paper, not a workshop presentation. Meta-analysis indicating the difference between Redbook and hi-res is audible.

 

Eighteen published experiments for which sufficient data could be obtained were included, providing a meta-analysis that combined over 400 participants in more than 12,500 trials. Results showed a small but statistically significant ability of test subjects to discriminate high resolution content, and this effect increased dramatically when test subjects received extensive training. This result was verified by a sensitivity analysis exploring different choices for the chosen studies and different analysis approaches. 

 

"Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content. Trained listeners could distinguish between the two formats around sixty percent of the time."

Edited by firedog

Posted (edited)

http://www.aes.org/e-lib/browse.cfm?elib=18296

 

free download

 

Peer-reviewed paper, not a workshop presentation. Meta-analysis indicating the difference between Redbook and hi-res is audible.

 

 

"Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content. Trained listeners could distinguish between the two formats around sixty percent of the time."

 

faeces in, faeces out = the extraordinary power of meta-analysis to obfuscate

Edited by Nada
Posted

faeces in, faeces out 

 

There's probably still something in it.

 

 

The real core point for a thread like this is WHY high sampling rates might matter. Audiophiles everywhere have been led to believe simply that higher rates are better... however, it is not that simple. The rate by itself tells you nothing about the quality of the audio.

 

High sampling rates do the following things:

1. Allow you to store higher frequencies (which are not audible)

2. Allow digital converters to be engineered in a way which can improve performance (or at least reduce costs)

3. Allow filtering to be applied to the audio that "fixes" things which are finer in time resolution than the lower rate allows

 

 

The first two don't justify delivering music to consumers at high sampling rates. Consumers can (and most of their hardware does) oversample the audio to help the converter perform better.

 

The third is deep. MQA is attempting this in the mass market. Other players are doing it in the audiophile fringes. The take-home message is that up/down sampling of audio is a big opportunity for things to go wrong (MQA is trying to avoid this, and/or fix the damage).
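The "things can go wrong" point is easy to demonstrate numerically. A minimal NumPy sketch (the 30 kHz test tone and the 96 kHz to 48 kHz conversion are illustrative): decimating without an anti-alias filter folds out-of-band content straight back into the audible range.

```python
import numpy as np

fs_hi, fs_lo = 96000, 48000
n = np.arange(9600)

# A 30 kHz tone: representable at 96 kHz, but above Nyquist for 48 kHz.
tone = np.sin(2 * np.pi * 30000 * n / fs_hi)

# Naive downsampling: keep every 2nd sample, with no anti-alias filter first.
decimated = tone[::2]

# The tone does not disappear - it folds down to 48000 - 30000 = 18 kHz,
# landing squarely in the audible band as a spurious component.
spectrum = np.abs(np.fft.rfft(decimated * np.hanning(len(decimated))))
peak_hz = np.argmax(spectrum) * fs_lo / len(decimated)
assert peak_hz == 18000.0
```

A correct sample-rate converter low-pass filters to the new Nyquist limit before discarding samples, which is exactly the step that naive decimation skips.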

Posted

You are starting another myth, well done. Also, the filtering in #3 would have to be of a type that cannot be applied to an upsampled SD file.

 

One thing that the above literature review does conclude is that "the causes are still unknown".

Posted

hi Chris,

Unfortunately no - not for when a sound starts in between samples. It could start just after the previous sample or just before the next one (or anywhere in between). I think both you and Dave are saying sampling can capture that correctly - I don't get how it could, for the start (and stop) of signals.

 

I'm still referring to drum hits occurring very close together - no information higher than 1/2 the sampling frequency.

A poor representation of what I'm trying to understand is below:

1 drum hit happens to be on a sample point

Graph 1

[attached image: EuuDV2.png]

 

2nd drum hit occurs "somewhere" after but before the next sample - it could be immediately following the previous sample:

Graph 2

[attached image: EuuDV3.png]

 

 

Or later and much closer to the next sample

Graph 3

[attached image: EuuDV1.png]

 

It's how sampling theory manages the difference between graph 2 and graph 3 that I can't grasp - everything else regarding Nyquist rates and slopes implying frequency I get....

...and by the time you could hear a drum hit rise out of the noise floor it's probably had 4K samples taken (say about 1/10 sec),

but theoretically is there a minuscule temporal error for the start of the signal (2nd drum hit)?

 

cheers

Mike

 

I agree with you, though your discussion of drum hits is not the best way to illustrate the example, because a sampling frequency of 44,100 times per second is very fast, and the hits will be sampled. A better illustration, I think, is the very tiny vibrations made by any instrument. Also think of the inter-modulation of sounds made by multiple instruments, such as in a symphony orchestra with many instruments playing at once. Now this does illustrate the many undulations of a waveform that cannot be captured in its entirety. A digital sample is a close representation, but not a facsimile or an analogue.

 

I think the problem with the Nyquist theorem is that since we can fairly accurately sample and reconstruct a sine wave, it is assumed that a very complex waveform can be captured and reconstructed in the same way, and that's not true. Think about it like this: with a sample rate of 44,100 per second, one second of a DC signal is represented by 44,100 samples, while a sine wave at 22,050 Hz is represented by only two samples per cycle.

 

If we use two samples per cycle to sample a frequency that the human ear is more sensitive to, like 1 kHz (you can actually do this by using a sample rate of 2 kHz), it would be very apparent to all upon hearing it just how much distortion and inaccuracy there would be in the sound. It would be unlistenable and unrecognisable. Using PCM to sample audio results in ever fewer samples per cycle the higher up in frequency we wish to capture. Low frequencies are captured quite accurately and high frequencies are not captured particularly well. I think this is the reason that when CD was first released, listeners commented about a glassy, harsh, unnatural sound. Now that digital audio has been around for over 20 years and people are used to hearing it, they have acclimatised themselves to this sort of sound. We hear it on the radio, on the TV, and from our CDs and music files.

But if someone goes from listening to digital to analog audio the difference is quite profound.

 

192 kHz is an improvement on the lower sampling rate of 44.1 kHz not just because it samples higher frequencies, but because it gives a few extra samples at higher audible frequencies, which increases the accuracy of the sound which is captured and later played back.

Posted

You are starting another myth

 

What is that?  (I'm certainly not intending to)

 

 

 

I agree with you, though your discussion of drum hits is not the best way to illustrate the example, because a sampling frequency of 44,100 times per second is very fast, and the hits will be sampled. A better illustration, I think, is the very tiny vibrations made by any instrument. Also think of the inter-modulation of sounds made by multiple instruments, such as in a symphony orchestra with many instruments playing at once. Now this does illustrate the many undulations of a waveform that cannot be captured in its entirety.

 

They can be captured in their entirety.

 

I think the problem with the Nyquist theorem is that since we can fairly accurately sample and reconstruct a sine wave, it is assumed that a very complex waveform can be captured and reconstructed in the same way

 

It can be... there is no limit to the complexity of the waveform which can be perfectly represented (as long as it has no frequency components above half the sampling rate)
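This claim is easy to verify numerically. A NumPy sketch, using an illustrative 30-partial random waveform (all components kept below fs/2): sampling it at 44.1 kHz and sinc-interpolating recovers the value at a point exactly halfway between two samples, to within the truncation error of the finite sum.

```python
import numpy as np

fs = 44100.0
n = np.arange(20000)
rng = np.random.default_rng(0)

# An arbitrarily complex band-limited waveform: 30 partials with random
# frequencies (all below fs/2 = 22050 Hz), amplitudes and phases.
freqs = rng.uniform(20.0, 20000.0, 30)
amps = rng.uniform(0.1, 1.0, 30)
phases = rng.uniform(0.0, 2 * np.pi, 30)

def x(t):
    """The underlying continuous waveform, evaluable at any time t."""
    return sum(a * np.sin(2 * np.pi * f * t + p)
               for f, a, p in zip(freqs, amps, phases))

samples = x(n / fs)

# Whittaker-Shannon interpolation: rebuild the waveform at a point that
# falls exactly halfway between two sample instants.
t0 = 10000.5 / fs
recon = np.sum(samples * np.sinc(fs * t0 - n))

# The reconstruction matches the true value; the small residual is only
# the truncation error of the finite sinc sum, not a property of sampling.
err = abs(recon - x(t0))
assert err < 0.05
```

Complexity is irrelevant to the theorem: any sum of band-limited components is itself band-limited, so it is reconstructed exactly by the same interpolation.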

 

 

If we use two samples per cycle to sample a frequency that the human ear is more sensitive to, like 1 kHz (you can actually do this by using a sample rate of 2 kHz), it would be very apparent to all upon hearing it just how much distortion and inaccuracy there would be in the sound.

 

I can only assume then that you've never done this.
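The near-Nyquist case can also be checked directly. A NumPy sketch (the 2205 Hz sample rate is illustrative, chosen to sit barely above the 2 kHz Nyquist rate for a 1 kHz tone): after reconstruction the tone is the original sine, not a distorted two-points-per-cycle zig-zag.

```python
import numpy as np

f0 = 1000.0          # a 1 kHz tone
fs = 2205.0          # barely above the 2 kHz Nyquist rate: ~2.2 samples/cycle
n = np.arange(4096)
samples = np.sin(2 * np.pi * f0 * n / fs)

# Sinc-interpolate onto a 10x finer grid, checking only the middle of the
# window to keep the truncation error of the finite sum negligible.
m = np.arange(20000, 21000)
t = m / (10 * fs)
recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])
true = np.sin(2 * np.pi * f0 * t)

# The reconstructed waveform matches the original 1 kHz sine closely:
# the maximum deviation is a tiny fraction of the signal amplitude.
assert np.max(np.abs(recon - true)) < 0.01
```

The coarse zig-zag exists only in the raw sample values; the reconstruction filter in a DAC performs this interpolation and restores the smooth sine.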

Posted

What is that?  (I'm certainly not intending to)

 

 

 

 

They can be captured in their entirety.

 

 

It can be ...  there is no limit to the complexity of the waveform which can be perfectly represented  (as long it has no frequency components above half the sampling rate)

 

 

 

I can only assume then that you've never done this.

See the post above and you might understand what I am talking about.
