Peta Posted June 19, 2016

Just my opinion ... MQA could be part of a game-changing series of events. It could also be a continuation of the very problem it says it aims to solve.

The issue with really good high fidelity appears to me to be what is happening to standards. DLNA and "streaming standards" that define music in terms of a model that applied to the iPod some ten years ago, and that focus on just "songs" rather than music in all its wonderful diversity, are the root cause of our troubles. Add to that the CD standard, which was tailored to the best that advanced manufacturing could manage in the early 1980s. Then add a dash of the fights between manufacturers over a new audio standard for discs (DVD-Audio, SACD, BluRay etc.). Digital formats have followed a similar path, with various sample rates and bit depths for standard file formats coming along, but the predominant format for audio distribution is still MP3, whose specification requires that detail be lost. Playback/storage mechanisms for digital source material have suffered almost the same fate: MiniDisc, DAT and plenty of other proprietary formats have gone by the wayside after being promoted heavily for a few years.

Both ends of the spectrum of standards-setting have failed high quality music miserably. Manufacturer-driven standards have not done well progressing from the 44.1kHz/16-bit CD standard. They have managed to get a lot out of that relatively restricted digital format, but not to reach agreement on a common standard that would allow for the high quality and high data volume required for high fidelity playback. Therefore the playback approaches have not standardised either.

Analogue was simple in concept: what was there on the recording should be played back as faithfully as possible. With digital formats you actually want the playback to produce analogue sound that is nothing like the stream of numbers but is imprecisely modelled by those numbers. Therefore filters and other changes to the data are applied to bring the output waveform closer to the input waveform. Manufacturers of DACs that perform this kind of transformation even go as far as providing settings to change the output to be "warmer", "precise" or "dynamic". Essentially they are saying that you change the output to suit how you would like to hear the music, not to reproduce it. That is only one example of how the music is routinely distorted by the manufacturers. Comparing apples with apples is not straightforward when comparing playback devices. So how are you to isolate any real issues with high fidelity reproduction?

In the 1970s there was a push to standardise connectors, impedances and voltage levels for hi-fi components. That made it much easier to build component systems from the best designed and built units, instead of having to rely on a single all-in-one provider who correctly matched up their components so that they worked together properly. That meant a reduction in costs, as standard circuits and components could be used and R&D could focus on improved sound. A similar thing has happened with standards like DLNA, but with perverse consequences. The standards-setters were consumer electronics manufacturers, and they chose the commercial path of least resistance - iTunes, AirPlay etc. are the model.
Designers of high end audio generally have little expertise with the finicky standards of DLNA (add to that USB, HDMI, SPDIF, Toslink and the other newer connection standards for multi-channel) and therefore buy off-the-shelf chipsets and firmware to implement them. As a result we see bit depth and bandwidth limits, gapped playback, poor compatibility, high background noise and similar problems from complex signal paths and clunky software. Normally these problems would be described by the manufacturers' marketing departments as bad sound - however, they put the functionality into their products for other marketing reasons and keep quiet about the problems they create for us. After all, they are really only catering for iPod users anyway.

It seems to me that we are back in those days where the standards need to be sorted out so that manufacturers can focus on optimising the standard (just as was done for the CD and the vinyl record) rather than "inventing" new and incompatible software, hardware and ecosystem limitations on one component working with another. To do this the standards need to be forward-looking, cross-industry, international, and grounded in an actual understanding of music.

I think there is a reason why this has not happened yet. The reason is us. We, the people who seek higher levels of fidelity in our music, are prepared to accept a manufacturer's "innovation" that makes their equipment (usually) incompatible with someone else's for a small gain in performance. Manufacturers therefore have no incentive to adopt and contribute to international, industry-wide standards for music. In the mass market there is a demand for features over quality; whether or not those features are used (or even useful), the marketing brochures are full of features that come out-of-the-box from the OEM supplier, and a buyer with insufficient experience to judge sound quality will tend to look at the marketing more than anything else. The majority of manufacturers want to sell a lot of what they design, so they tailor to the mass market and make sure they do not lose out by missing some feature.

So MQA might be part of a solution. I think it is actually part of the problem.
davewantsmoore Posted June 19, 2016 (edited)

On 19/06/2016 at 3:14 AM, Peta said: They have managed to get a lot out of that relatively restricted digital format, but not to reach agreement on a common standard that would allow for the high quality and high data volume required for high fidelity playback. Therefore the playback approaches have not standardised either.

One of the reasons for that is that higher sampling rates and bit depths do not provide any guarantee to consumers of an increase in quality. If I go to an online store and can buy a song in redbook (16/44) or in DXD (24/384), there is no way for me to know whether the DXD will sound better (or indeed, whether it is any different at all from the redbook).

If the content was created and distributed at the high rate, then there are reasons to think it could be superior. However, that doesn't cover some big questions consumers will have:

How do we know that a lower-rate rendering of the "made in high rate" audio would necessarily be audibly deficient? (If I am being asked to pay more for the newer format, why would I?)

How do we know that a back catalogue released in a new format is superior? (Simply increasing the rate and depth does nothing. So why should I pay?)

Edited June 19, 2016 by davewantsmoore
davewantsmoore Posted June 19, 2016

On 19/06/2016 at 3:14 AM, Peta said: It seems to me that we are back in those days where the standards need to be sorted out

The standard is PCM. The mass market questions which need addressing (MQA attempts this) are:

How do consumers know about the quality of a file? (The rate and depth do not tell them.)
How do we stream PCM over the internet? (It can be too big.)
How do we get PCM into a DAC in the best way, without doing types of sample rate conversion that are potentially detrimental and/or too computationally expensive?
How do we fix all those issues without FORCING new equipment to be purchased, or a new distribution standard (i.e. offer backwards compatibility)?
Peta Posted June 19, 2016

On 19/06/2016 at 3:48 AM, davewantsmoore said: The standard is PCM. The mass market questions which need addressing (MQA attempts this) are:
How do consumers know about the quality of a file? (The rate and depth do not tell them.)
How do we stream PCM over the internet? (It can be too big.)
How do we get PCM into a DAC in the best way, without doing types of sample rate conversion that are potentially detrimental and/or too computationally expensive?
How do we fix all those issues without FORCING new equipment to be purchased, or a new distribution standard (i.e. offer backwards compatibility)?

I would still argue that until standards for music are agreed - standards appropriate for the full spectrum of music and implementable in standard code and hardware - we will continue to have the problem of privately developed "solutions" that are technically correct but do not solve the problem(s).
davewantsmoore Posted June 19, 2016

We already have that now: it's PCM. Unfortunately, it does not address the above questions very well.
Guest Eggcup The Daft Posted June 19, 2016

On 19/06/2016 at 2:14 AM, davewantsmoore said: The decoder renders audio in any rate it wants to.... and as long as this rate is a 2x rate or higher (e.g. 88.2 or 96kHz), then they can get "the best" impulse response in the result.

Actually, it looks like I'm underestimating. This is the MQA "deblurring" graph from the Absolute Sound article. Deblurring is the key technical advance claimed for MQA. If I've understood Bob Stuart correctly, he believes that around 13μs is the target here. Note that 96/24 falls short of that. The graph is itself confusing, as we aren't told what rates of MP3 and AAC are being used for the test. Anyway, you don't get "best" below the 176.4/192 range.

On 19/06/2016 at 2:14 AM, davewantsmoore said: The decoder retrieves all the extra audio information. In our examples, that means a rate up to DXD / 352.8. Now, it renders new audio (not using a simple sample rate converter), at whatever rate is best for the DAC.

Actually, it seems that you agree with me. Clearly, the decoder is working on the fly. The point is that there have to be instructions in the folded information in the lower eight bits, and these have to be followed. So the decoder has to retrieve all of the extra audio information. Remember that part of the MQA process is that different filters are built into that information in the lower eight bits, and they have to be processed. At some point in the process, the information has to exist (or at least be fully understood) at the 352.8 rate - all the extra audio information is retrieved - and then it is output at the lower rate. Conceptually, it has to be equal to decode + downsample.
Guest Eggcup The Daft Posted June 19, 2016

On 19/06/2016 at 4:53 AM, Peta said: I would still argue that until standards for music are agreed - standards appropriate for the full spectrum of music and implementable in standard code and hardware - we will continue to have the problem of privately developed "solutions" that are technically correct but do not solve the problem(s).

It depends on the problem you want to solve. MQA want to sell a particular technical solution to the problems Dave describes, and at the same time solve some other issues in digital recording and playback, by making that solution "end to end".

When it comes to OUR problems as audiophiles (I'll define myself as one today, though I'm not very good at it and not in your or Dave's league), the "solution" in the digital space is simple. A master file comes out of the studio. When I cut my teeth in this field, back in 1980, one of the first things I learnt was that the studio master is the gold standard: what was intended by the musicians and engineers. As I said before, if we could build a time machine, go back to 1975, and say that we could have a distribution system that gives us the studio master, our predecessors or younger selves would be delighted for us. Yet here we are, discussing the advantages and disadvantages of yet another alternative to that very obvious answer.

For the audiophile playback problem, one digital audio standard doesn't matter. We can use computers and DACs (special-purpose computers) that can decode any standard any recording engineer wants to use to get what they perceive to be optimum. PCM? DSD? MQA? ZYX? So what? We can have a bit-perfect copy of the studio master. They can even be streamed as they are: 768/32 has a bitrate in the same ballpark as 4K Netflix - for stereo, anyway - and that is working today (internet connection permitting, of course).

For the life of me, I can't see why, conceptually, MQA is not equivalent to the 1960s processing of mono recordings to make fake stereo. The techie part of me loves the idea, but the audiophile purist part doesn't even need to hear it to say that MQA processing of existing recordings is the wrong way to go.
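For reference, the raw arithmetic behind that streaming comparison is easy to check. A minimal sketch follows; the ~25 Mbit/s figure for 4K Netflix is an assumption brought in for comparison, not something from this thread:

```python
# Back-of-envelope check: raw stereo PCM bitrates vs. compressed 4K video.

def pcm_bitrate_mbps(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> float:
    """Raw (uncompressed) PCM bitrate in megabits per second."""
    return sample_rate_hz * bit_depth * channels / 1e6

for rate, depth in [(44_100, 16), (96_000, 24), (352_800, 24), (768_000, 32)]:
    print(f"{rate / 1000:g} kHz / {depth}-bit stereo: "
          f"{pcm_bitrate_mbps(rate, depth):.1f} Mbit/s")

# 768 kHz / 32-bit stereo comes to ~49 Mbit/s raw; lossless compression
# (FLAC typically roughly halves it) lands near the ~25 Mbit/s commonly
# recommended for 4K streaming video.
```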
New Sensations Posted June 19, 2016 (edited)

Re. Sunil Merchant @ Newport Beach: it would seem a misunderstanding about who was charged with what, demo-wise, in Newport led to a bit of a storm in a teacup. I've exchanged a couple of emails with Bob Stuart this afternoon and asked for something that I could share from a private email. From the horse's mouth, the clarification is as follows:

"All our product partners at Newport Beach had the same A/B content. Pioneer, Onkyo and Meridian were demonstrating these throughout the show. Mytek also had the music. Newport Beach is a combination of manufacturer and dealer rooms. In Meridian's case the product was in several rooms and although they were helping out for a lot of the time, they decided that Sunil's room should be straight demos because Meridian was doing A/B in another room. We had consistently strong and positive feedback about the sound in Sunil's room using Meridian's UltraDAC."

In other words, it's a licensing issue. Only MQA's official hardware partners (and a few reviewers) have access to the material required for an A/B comparison, and I don't think Sunil was one such party (at Newport). A/B comparisons could only be conducted at Newport by Peter McGrath or whoever was in charge of the Mytek, Pioneer or Onkyo exhibits.

Edited June 19, 2016 by J_o_h_n
Guest Eggcup The Daft Posted June 19, 2016 (edited)

On 19/06/2016 at 8:06 AM, J_o_h_n said: Re. Sunil Merchant @ Newport Beach: it would seem a misunderstanding about who was charged with what, demo-wise, in Newport led to a bit of a storm in a teacup. I've exchanged a couple of emails with Bob Stuart this afternoon and asked for something that I could share from a private email. From the horse's mouth, the clarification is as follows: "All our product partners at Newport Beach had the same A/B content. Pioneer, Onkyo and Meridian were demonstrating these throughout the show. Mytek also had the music. Newport Beach is a combination of manufacturer and dealer rooms. In Meridian's case the product was in several rooms and although they were helping out for a lot of the time, they decided that Sunil's room should be straight demos because Meridian was doing A/B in another room. We had consistently strong and positive feedback about the sound in Sunil's room using Meridian's UltraDAC." In other words, it's a licensing issue. Only MQA's official hardware partners (and a few reviewers) have access to the material required for an A/B comparison, and I don't think Sunil was one such party (at Newport). A/B comparisons could only be conducted at Newport by Peter McGrath or whoever was in charge of the Mytek, Pioneer or Onkyo exhibits.

So Sunil Merchant, or anyone else at the show, had to have the "official" material for any A/B demo, and would not, say, be allowed to demo with material from 2L, despite that material being prepared with the participation of Bob Stuart himself? Really? They trusted him with the launch of the Meridian UltraDAC, but not to A/B demo MQA on what would have been a far more capable device than the Onkyo gear, at least?

There I was, thinking that MQA was finalised, released, on the market, publicly available for anyone to use. What, ten or more DACs, new releases by a couple of artists in the format, the entire output of a specialist label available for download, prepared with the assistance of the company's founder. There comes a time, surely, when it has to be let go and survive on its merits. We have no guarantee that special material for A/B demos is not, er, "special". Allowing people to demonstrate with the material presently available, and apparently endorsed, would clear any doubts.

I see Meridian have included DSD128 playback in the UltraDAC. Good on them for that. I'll be watching for the Australian release date and price.

Edited June 19, 2016 by Eggcup The Daft
New Sensations Posted June 19, 2016

I guess Sunil could have shown with 2L music but chose not to, for whatever reason. "Official" A/B demo material was available elsewhere at the show. As Bob said, a licensing issue keeps said material from being distributed. Of course, people are free to run with whatever conspiracy theories they fancy: that the files are cooked. But it's worth keeping in mind that such malpractice is no more or less likely than an amplifier company running their show exhibit with a souped-up unit. Besides, I've had MQA process two albums of my own choosing and they still come out sounding 'better' than the hi-res originals.
New Sensations Posted June 19, 2016

"There I was, thinking that MQA was finalised, released, on the market, publicly available for anyone to use."

Nope. Not really.
davewantsmoore Posted June 19, 2016

On 19/06/2016 at 5:19 AM, Eggcup The Daft said: Anyway, you don't get "best" below the 176.4/192 range. Note that 96/24 falls short of that.

96kHz looks to me to be quite close to, if not right on, 13μs (note the logarithmic scale) <shrug>

On 19/06/2016 at 5:19 AM, Eggcup The Daft said: Conceptually, it has to be equal to decode + downsample.

Yes. Conceptually they are the same. Both start with a high rate and end up with a low rate .... but they are not the same process, which is the whole point (avoiding 'low quality' sample rate conversion). Did you say you had read all about this already? (Sorry if I have you mixed up with someone else.) ..... "Actually, it seems that you agree with me" .... I'm not debating anything.

On 19/06/2016 at 6:17 AM, Eggcup The Daft said: When it comes to OUR problems as audiophiles (I'll define myself as one today, though I'm not very good at it and not in your or Dave's league), the "solution" in the digital space is simple. A master file comes out of the studio. When I cut my teeth in this field, back in 1980, one of the first things I learnt was that the studio master is the gold standard: what was intended by the musicians and engineers. As I said before, if we could build a time machine, go back to 1975, and say that we could have a distribution system that gives us the studio master, our predecessors or younger selves would be delighted for us. Yet here we are, discussing the advantages and disadvantages of yet another alternative to that very obvious answer.

Eh? One of the headline things that MQA is trying to do (god, I'm sounding like an MQA shill again, shudder) ..... is to encode their audio from the "master". The studios don't want to release their best copies of the audio, because there is no way for them to keep a handle on the quality ..... and by that I mean that if they release their 96kHz studio master into the wild, there's nothing to prevent it being converted to a higher rate. That's potentially bad, because consumers may think the higher rate is better, when in fact it is likely to be worse.... Another example: there's nothing to stop someone applying EQ to the audio unbeknownst to a consumer. The whole point of the "authenticated" part of MQA is so consumers know that the audio they are listening to IS the actual "master quality".... and not something which has been degraded in all the possible ways that consumers have no idea about.

On 19/06/2016 at 6:17 AM, Eggcup The Daft said: For the audiophile playback problem, one digital audio standard doesn't matter. We can use computers and DACs (special-purpose computers) that can decode any standard any recording engineer wants to use to get what they perceive to be optimum. PCM? DSD? MQA? ZYX? So what? We can have a bit-perfect copy of the studio master. They can even be streamed as they are: 768/32 has a bitrate in the same ballpark as 4K Netflix - for stereo, anyway - and that is working today (internet connection permitting, of course).

I can't help thinking that you're missing the point .... that to play back your X-rate audio, it is almost always being converted to another rate (and there is potential for problems there). Rendering all audio to the rate your DAC prefers (in a way which avoids the potential problems of sample rate conversion and filters inside DACs) .... is potentially a much more optimal solution.
On 19/06/2016 at 6:17 AM, Eggcup The Daft said: For the life of me, I can't see why, conceptually, MQA is not equivalent to the 1960s processing of mono recordings to make fake stereo. The techie part of me loves the idea, but the audiophile purist part doesn't even need to hear it to say that MQA processing of existing recordings is the wrong way to go.

You've clearly misunderstood both what they're trying to achieve and how they are going about it.... There's nothing at all wrong with either (in theory). Personally I think it is dangerous and unnecessary ... but I'm very happy to be wrong on both counts.
davewantsmoore Posted June 19, 2016

On 19/06/2016 at 8:50 AM, Eggcup The Daft said: There I was, thinking that MQA was finalised, released, on the market, publicly available for anyone to use. What, ten or more DACs, new releases by a couple of artists in the format, the entire output of a specialist label available for download, prepared with the assistance of the company's founder. There comes a time, surely, when it has to be let go and survive on its merits. We have no guarantee that special material for A/B demos is not, er, "special". Allowing people to demonstrate with the material presently available, and apparently endorsed, would clear any doubts.

It's only very new. The number one hurdle is getting content producers to use it.... if that happens, the rest will follow.

On 19/06/2016 at 9:06 AM, J_o_h_n said: Besides, I've had MQA process two albums of my own choosing and they still come out sounding 'better' than the hi-res originals.

Interesting. Is there anything more you can tell us about this?! :thumbup:
Guest Eggcup The Daft Posted June 19, 2016

On 19/06/2016 at 9:11 AM, J_o_h_n said: "There I was, thinking that MQA was finalised, released, on the market, publicly available for anyone to use." Nope. Not really.

Fair enough. That changes my opinion on these things.

On 19/06/2016 at 9:06 AM, J_o_h_n said: I guess Sunil could have shown with 2L music but chose not to, for whatever reason. "Official" A/B demo material was available elsewhere at the show. As Bob said, a licensing issue keeps said material from being distributed. Of course, people are free to run with whatever conspiracy theories they fancy: that the files are cooked. But it's worth keeping in mind that such malpractice is no more or less likely than an amplifier company running their show exhibit with a souped-up unit. Besides, I've had MQA process two albums of my own choosing and they still come out sounding 'better' than the hi-res originals.

Again, all fair enough. I'd like this business to be honest. I'm happy to trust Meridian, but MQA and Meridian aren't the same, so I'm a bit on guard about this.

But we have to be careful about "better". As I said, we could have access to the masters, and I have a philosophical problem with processing music to sound "better" than the original, particularly if at some point all we get are the "better" files. If the A/B is against an inferior format, I have no problem with that, as long as it is upfront. I see MQA as trying to become a mass market format, not one for the few, as, say, high-rate DSD is.
Guest Eggcup The Daft Posted June 19, 2016 (edited)

@davewantsmoore No, I don't think I misunderstand MQA at all. It works like this, in the case of existing digital masters.

Firstly, the master is encoded. The encoding consists of the following changes, in some order:

Information that is deemed to be not audible is removed from the master. In the case of a 24 bit master, the original lowest eight bits are thrown away (in the practical versions we've seen - this could be done to a different amount of low bit information, as long as the space left is sufficient to store what is needed for the reconstruction to work).

A filter is applied to undo any damage caused by the ADC used in recording - pre-ringing, for example.

The remaining information is stored in two parts: a 16 bit component (in the cases we have seen - it could be more or fewer bits in theory) at a word rate where the 24 bit equivalent can be compressed using FLAC and streamed easily; and a complex, compressed block of data in the lower bits that describes the extra data to be added to that signal to create the higher-rate, "temporally improved" "equivalent" of the original master.

The 16 (or whatever) bit component can be understood by a non-MQA DAC (capable of decoding the higher depth of the whole). On playback using an MQA DAC, the extra data is decoded and added to the playable component, and the whole is then processed by filters described in the decoder (for the DAC) and in the extra data in the file. The decoder is supposed to be DAC specific.

This gives MQA various technical "advantages". In return for accepting 16 bit effective depth (in the practical examples seen), we get what MQA believes to be the required data above 20kHz for equivalent quality to the master. We get a higher bitrate from the decoder, so the same advantages in playback as from upsampling or using a high bitrate file. Also, the filters in the DAC, of whatever form, can be compensated for. Finally, there is the temporal processing; from the MQA website:

Quote The vision behind MQA is to do no more damage to sound than travelling a short distance through air. By being able to resolve two sounds 8us apart – 15 times better than 192kHz – that vision has been realised.

Finally, MQA "signs" the encoded file, so that we know when decoding it that it is the output of a real MQA encoder working from the master file - at least until that process is hacked or reverse-engineered, as it may be in the real world. MQA are claiming that this signature means we can know that the file is effectively the same as a "master" that is processed into MQA format, but it actually means that we have an MQA-encoded file and nothing more.

So the aim is to produce a format that can be more easily streamed ("convenience"), that is audibly equivalent to the master but with temporal resolution improved, and that avoids putting the master (the "crown jewels", as the MQA website puts it) online. The target audience is the mass market, so the decoders are intended to run on the same devices that are used for MP3-standard playback and streaming. It helps that these devices have higher processing power than many high end DACs, but it means that you and I end up having to buy new equipment to play back at the higher standards we aspire to. This sounds great for everyone else, though: until you actually think about all of the claims and the problems that come with them.
To different parts of their audience, in effect, they are saying that it "sounds better than the master", that it is "equivalent to the master", but that it is not "putting the master online", for example. The same goes for the signing. Sure, we get to know that it came from the MQA encoder, so it hasn't been messed with since. But it's actually the record companies' mastering processes that have put bad-quality commercial CDs out there, not some nutter with a copy of Audacity. It's likely the same mastering processes will be used for MQA. Also, "master" means more than "studio master". The "master" supplied to Apple, or to Tidal, could be used by those companies to make an instant library of MQA-encoded files, and this could be allowed by MQA, as I understand it. This isn't a problem if the "master" is good enough, of course.

It leaves me confused, in that one part of me thinks "I can have all the music in these huge online collections at my fingertips in better resolution than I own on discs today", and another part says "it's processed, it's enhanced, it's not what they claim it is".

Edited June 19, 2016 by Eggcup The Daft
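As a purely conceptual illustration of the "folding" idea described in the post above - emphatically not MQA's actual, proprietary algorithm - here is a minimal Python sketch in which the top 16 bits of each 24-bit sample remain ordinary playable PCM while an arbitrary 8-bit payload (standing in for the encoded ultrasonic detail and filter instructions) rides in the lowest byte:

```python
import numpy as np

# Illustrative "fold/unfold" of the kind described above: NOT MQA's real
# codec. We pack an arbitrary 8-bit payload into the low byte of 24-bit
# PCM samples, where a legacy DAC hears it only as low-level noise.

def fold(samples_24bit: np.ndarray, payload: np.ndarray) -> np.ndarray:
    """Keep the top 16 bits of each 24-bit sample; hide payload in the low 8 bits."""
    top16 = samples_24bit & ~0xFF            # zero out the lowest byte
    return top16 | (payload & 0xFF)          # splice the payload into the low byte

def unfold(folded: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Recover the playable 16-bit-equivalent audio and the hidden payload."""
    audio = folded & ~0xFF                   # what a legacy DAC effectively plays
    payload = folded & 0xFF                  # what an MQA-style decoder would parse
    return audio, payload

rng = np.random.default_rng(0)
samples = rng.integers(-2**23, 2**23, size=8, dtype=np.int64)   # fake 24-bit PCM
payload = rng.integers(0, 256, size=8, dtype=np.int64)          # fake folded data

audio, recovered = unfold(fold(samples, payload))
assert np.array_equal(recovered, payload)    # payload survives the round trip
```

A non-MQA DAC simply plays the whole stream and treats the low byte as noise near the original noise floor; an MQA-style decoder would instead parse that byte to reconstruct the higher-rate signal.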
pretender Posted June 20, 2016 (edited)

Listening to 2L recordings on my non-MQA Chord Qute: MQA = digital playback solved. Gone is the boredom that always creeps in listening to CDs. Same excitement as vinyl but better S/N. Not subtle. Any listening impressions from the deep thinkers here who try to deduce outcomes from theoretical arguments?

Edited June 20, 2016 by pretender
davewantsmoore Posted June 20, 2016

On 19/06/2016 at 12:10 PM, Eggcup The Daft said: Information that is deemed to be not audible is removed from the master.

You make that sound very bad.

Quote In the case of a 24 bit master, the original lowest eight bits are thrown away

No. Everything which is not below the noise floor is encoded.

Quote a 16 bit component (in the cases we have seen - it could be more or fewer bits in theory

The number of bits needed to encode the above is used. More bits offer zero advantage (you keep talking as though "more bits is better").

Quote at a word rate where the 24 bit equivalent

The output is a 24-bit file. What was below the noise floor in the original file is replaced with new noise (where the extra info is encoded). Because this new noise is spectrally dense, it looks bad on a spectrogram. It is inaudible unless we contrive an unusual, specific situation where it could be.

Quote The 16 (or whatever) bit component can be understood by a non-MQA DAC (capable of decoding the higher depth of the whole).

Yes. A (non-MQA) DAC just sees PCM. It plays as much as it can (either 16 bits or 24 bits). The noise floor is at the same level as the original recording.

Quote On playback using an MQA DAC, the extra data is decoded and added to the playable component, and the whole is then processed by filters described in the decoder (for the DAC) and in the extra data in the file. The decoder is supposed to be DAC specific.

Yes. The decoder is generic, but the filters it applies (and the rate it chooses) can be specific to the DAC.

Quote In return for accepting 16 bit effective depth

This is REALLY a misnomer, one that gives people the wrong impression about what is happening. The "effective" depth of audio encoded by the MQA encoder is whatever depth is needed to capture the audio and the noise floor. The only thing the MQA encoder discards is the PURE noise (i.e. below the noise floor of the original audio). If I were to encode content which has "23 bits" of audio in it (note: this doesn't exist), the MQA encoder would use 23 bits to encode it, and have 1 bit remaining to store the "extra stuff".

Quote we get what MQA believes to be the required data above 20kHz

Again, lots of unfounded connotations. They discard the enormous areas of the spectrum which are silence.
davewantsmoore Posted June 20, 2016 (edited)

Quote We get a higher bitrate from the decoder

Do you? I don't think so.

Quote so the same advantages in playback as from upsampling

Very much NO. Drastically wrong.

Quote or using a high bitrate file

Even more wrong.

Edited June 20, 2016 by davewantsmoore
Guest Eggcup The Daft Posted June 20, 2016 (edited)

On 20/06/2016 at 12:58 AM, pretender said: Listening to 2L recordings on my non-MQA Chord Qute: MQA = digital playback solved. Gone is the boredom that always creeps in listening to CDs. Same excitement as vinyl but better S/N. Not subtle. Any listening impressions from the deep thinkers here who try to deduce outcomes from theoretical arguments?

My non-MQA Oppo sounds clipped when playing back the MQA files compared to the originals. I hear no difference with my computer. If you're getting a "better" result, that's good. I've never had this "digital is worse than vinyl" problem either, though.

Edited June 20, 2016 by Eggcup The Daft
davewantsmoore Posted June 20, 2016

On 20/06/2016 at 12:58 AM, pretender said: Any listening impressions from the deep thinkers here who try to deduce outcomes from theoretical arguments?

Did you do a comparison between the non-MQA-encoded and MQA-encoded versions?

If not, then what you are hearing is almost certainly simply the result of a "good recording" (the 2L content is very good).

If yes, and you clearly preferred the MQA-encoded version (on a non-MQA DAC), then there is only one reason for the difference you are hearing (aside from issues with testing, like sighted bias): the filters which MQA have applied to counter known issues with the recording equipment.
Guest Eggcup The Daft Posted June 20, 2016

What MQA believes to be required, and throwing away silence, amount to effectively the same thing for the vast majority of recordings. To be precise, they use a triangulation to decide what data is necessary. Very high frequency information is removed, and occasional noise bursts in the 20kHz+ range from some percussion instruments may escape the range used. If you don't like the language, then fair enough, but it is correct. I doubt we will ever miss what is thrown away. It's still MQA's choice, though!

The 16 bit comments come from descriptions of how MQA actually works. A lot of information is being pushed into the lower bits. You can't necessarily store it all in the bottom four bits if you have 20 bits of resolution. As I understand it, they have a practical solution, but it requires space to work: the lower eight bits are used. Maybe they can fold lower-bit information into the extra data, but that is not how the process has been described. I'll go and read the documents again, but from what I saw, they say that the high frequency information is folded.

You yourself have described the advantages of high sample rates or upsampling in previous posts, for example pushing the filters in the DAC out of band. That's what I am referring to here. MQA has those advantages in addition to the other work it does, and it needs them because the filters in the DAC are still presumably active.

One question: the Wikipedia description of MQA states that MQA interpolates the higher sample rate from the 44.1 or 48kHz stream. How do you understand that they are getting the higher resolution back? This wasn't clear to me before, but some information on that must be in the lower bits as well, mustn't it?
Guest Eggcup The Daft Posted June 20, 2016

On 20/06/2016 at 2:56 AM, davewantsmoore said: Did you do a comparison between the non-MQA-encoded and MQA-encoded versions? If not, then what you are hearing is almost certainly simply the result of a "good recording" (the 2L content is very good). If yes, and you clearly preferred the MQA-encoded version (on a non-MQA DAC), then there is only one reason for the difference you are hearing (aside from issues with testing, like sighted bias): the filters which MQA have applied to counter known issues with the recording equipment.

Can't speak for pretender, but I used a DOS script to blind-test the files, renaming them and randomly choosing which of the encoded and non-encoded versions to play first. The description of the difference as "clipped" came from both sighted and blind listening. It may be that the "clipped" sound is correct, of course.
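For anyone who wants to run the same kind of comparison, here is a minimal single-trial sketch of that blind-test idea in Python rather than DOS. The file names and the ffplay player command are placeholder assumptions; substitute your own files and whatever bit-perfect command-line player you actually have installed:

```python
import random
import subprocess

# Minimal single-trial blind A/B test in the spirit of the DOS script
# described above. FILES and PLAYER are placeholders, not real paths.

FILES = {"A": "track_original.flac", "B": "track_mqa_encoded.flac"}
PLAYER = ["ffplay", "-nodisp", "-autoexit"]   # assumption: ffplay is on PATH

def run_trial() -> None:
    order = random.sample(list(FILES), 2)     # hide which version plays first
    for i, key in enumerate(order, start=1):
        input(f"Press Enter to play sample {i}...")
        subprocess.run(PLAYER + [FILES[key]], check=True)
    guess = input("Which sample was the MQA encode, 1 or 2? ")
    actual = str(order.index("B") + 1)
    print("Correct!" if guess.strip() == actual else f"No, it was sample {actual}.")

if __name__ == "__main__":
    run_trial()
```

Repeating the trial a dozen or more times, and noting the score, is what turns a casual impression into something statistically meaningful.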
davewantsmoore Posted June 20, 2016

On 20/06/2016 at 3:10 AM, Eggcup The Daft said: The 16 bit comments come from descriptions of how MQA actually works. A lot of information is being pushed into the lower bits. You can't necessarily store it all in the bottom four bits if you have 20 bits of resolution. As I understand it, they have a practical solution, but it requires space to work: the lower eight bits are used. Maybe they can fold lower-bit information into the extra data, but that is not how the process has been described. I'll go and read the documents again, but from what I saw, they say that the high frequency information is folded.

The encoder determines how many bits are required to store the original audio .... and uses that many (that can be traded off in the encoder if desired - but it is unlikely there's ever a reason to need to do that).

Seeing as NO practical audio has content more than 100dB below the loudest sound, the number of bits required to store the audio is always somewhere around 13 to 18 .... more like 13. MQA can come in a 32-bit PCM container if the situation ever arose where that was required .... but that will never happen. Realistic speakers and analogue electronics can't reproduce it .... and if you simply add together the noise floor of your room and the loudest sound in the digital system, you'll see that you'd go deaf.
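The "13 to 18 bits" figure follows from the usual rule of thumb that each bit of PCM word length buys roughly 6 dB of dynamic range. A quick sketch of the arithmetic (the 6.02n + 1.76 dB formula is the standard ideal-quantisation result, not anything MQA-specific):

```python
# Dynamic range of ideal n-bit PCM ~= 6.02 * n + 1.76 dB, so the bits
# needed to span a given dynamic range are roughly (dB - 1.76) / 6.02.

def bits_for_dynamic_range(db: float) -> float:
    return (db - 1.76) / 6.02

for db in (80, 100, 110, 120):
    print(f"{db} dB of dynamic range needs ~{bits_for_dynamic_range(db):.1f} bits")

# 100 dB works out to ~16.3 bits; real recordings with higher noise
# floors need fewer, consistent with "somewhere around 13 to 18".
```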
davewantsmoore Posted June 20, 2016 (edited)

I tested:

Original resolution (352.8)
MQA Stereo
CD
MQA Stereo oversampled to 352.8 (in order to bypass the DAC's internal oversampling)
CD oversampled to 352.8 (in order to bypass the DAC's internal oversampling)

No significant result to report. All sounded good. No major difference apparent.

Edited June 20, 2016 by davewantsmoore
Guest Eggcup The Daft Posted June 20, 2016

On 20/06/2016 at 3:21 AM, davewantsmoore said: The encoder determines how many bits are required to store the original audio .... and uses that many (that can be traded off in the encoder if desired - but it is unlikely there's ever a reason to need to do that). Seeing as NO practical audio has content more than 100dB below the loudest sound, the number of bits required to store the audio is always somewhere around 13 to 18 .... more like 13. MQA can come in a 32-bit PCM container if the situation ever arose where that was required .... but that will never happen. Realistic speakers and analogue electronics can't reproduce it .... and if you simply add together the noise floor of your room and the loudest sound in the digital system, you'll see that you'd go deaf.

So if they can work their magic in six bits, then the real-world issues are solved. Got you.