New Project: Chop top LS with Double Stack ESS AMT-1


1 hour ago, Chris A said:

You'll see the difference in off-axis measurements--there'll be a dropout at about 1.5 kHz where the AMT-1 plastic body ends (1/4 axial wavelength).
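As a sanity check on the quarter-wavelength figure quoted above, the implied body dimension works out like this (assuming a nominal speed of sound of 343 m/s in 20 °C air):

```python
# Quick check of the quarter-wavelength claim: a null at 1.5 kHz
# implies a boundary roughly a quarter wavelength across.
c = 343.0   # m/s, assumed speed of sound
f = 1500.0  # Hz, frequency of the off-axis dropout

wavelength = c / f
quarter_wave = wavelength / 4.0
print(f"quarter wavelength at {f:.0f} Hz = {quarter_wave * 100:.1f} cm")
# → about 5.7 cm, i.e. the AMT-1 body dimension implied by a 1.5 kHz null
```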

 

I honestly don't care about off axis. My critical listening is just me and everything else is about 2ft left or right of the MLP and it's casual tv/movies at that point.


9 minutes ago, Rudy81 said:

You are a master of understatement!  Enjoy


Man, it sounds really, really good. Lacking some of the depth I heard in your setup, but I attribute that to placement and lack of room treatments. They blend really well with the bass bins, and then the pair blends well with the spuds. I've done a few movie demos and don't feel like the AVR is withholding any of the LFE content by not utilizing the sub pre-outs either.


On 3/6/2020 at 2:23 PM, Thaddeus Smith said:

 

I honestly don't care about off axis. My critical listening is just me and everything else is about 2ft left or right of the MLP and it's casual tv/movies at that point.

 

Smooth off-axis response is still important for balanced reflected sound, though, even if you're in the sweet spot.


Here we sit currently. All of my measurement work with Chris was at 1 m, and I used some cushions on the wall for the left channel since they're so close. Yellow is back at the MLP, cushions removed, and representative of my normal listening configuration. Clearly I need room treatments in this very asymmetrical space, but that's not happening in this house and will be a focus when we build new in the next few years. It sure sounds good to my oxidized copper ears.

 

I tried running Audyssey just to see what impact that might have, but it keeps complaining about too much ambient room noise after running three separate sweeps per channel at incrementally higher volumes. I'm betting it's because I keep my amps turned down to reduce noise from the DSP, and calibrated the signal chain accordingly.

 

At the reference volume of 0.0 on the AVR I'm seeing 75-85 dB with peaks in the low-90 dB range for music, depending on the recording. Movies are 75-85 dB with peaks in the low-100 dB range, depending on the movie. I'm pretty sure that's the "goal" when setting up an AVR. Measurements below were taken with the volume set to -15, for reference.
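Those numbers line up with the common cinema calibration convention, where 0.0 on the AVR corresponds to full-scale peaks of about 105 dB SPL per main channel with dialogue averaging around 85 dB. A quick sketch of the arithmetic, assuming that convention:

```python
# Rough arithmetic behind the "reference volume" numbers above.
# Assumption: at master volume 0.0, a full-scale main-channel peak
# hits ~105 dB SPL (the usual cinema reference convention).
ref_peak_spl = 105.0   # dB SPL at master volume 0.0 (assumed)
master_volume = -15.0  # dB, the setting used for the measurements

expected_peak = ref_peak_spl + master_volume
print(f"expected peak at {master_volume:+.0f} on the dial: {expected_peak:.0f} dB SPL")
# → 90 dB SPL, consistent with the "low 90s" peaks mentioned for music
```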

 

 

[Image: sweeps.jpg — REW sweep measurements]

 

 


It's interesting that the difference between the blue/red curves taken at 1 metre in front of each loudspeaker...and the yellow line taken back at the listening position is what a lot of people call "house curve" but is really just the effects of the room and the off-axis response of the loudspeakers. This is the so-called "early reflections" curve that Toole talks about in his book Sound Reproduction: Loudspeakers and Rooms.

 

Note that a flat response on-axis at 1 m pretty much gets you where you need to be (and Toole states this repeatedly in his book and white papers). 

 

"Room correction software" like Audyssey, YPAO, Dirac, etc. is trying to wade through all the room modes and early reflections from measurements at the listening position(s) to get the "minimum phase" response--which is largely what you get with 1 m on-axis measurements of each loudspeaker, at least above 100 Hz (...hint, hint...). 
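The point about minimum phase can be sketched numerically: for a minimum-phase system, the phase is fully determined by the magnitude, which is why a clean 1 m magnitude measurement carries most of the information the correction software is hunting for. A toy illustration via the standard real-cepstrum folding method (the magnitude curve here is made up, not a real measurement):

```python
# Sketch: recover the minimum-phase spectrum implied by a magnitude
# response alone, using the real-cepstrum folding method.
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Given |H(k)| on a full FFT grid (even-symmetric magnitude),
    return the complex minimum-phase spectrum with the same magnitude."""
    n = len(mag)
    log_mag = np.log(np.maximum(mag, 1e-12))   # avoid log(0)
    cep = np.fft.ifft(log_mag).real            # real cepstrum
    # Fold the anticausal half of the cepstrum onto the causal half
    w = np.zeros(n)
    w[0] = 1.0
    w[1:n // 2] = 2.0
    if n % 2 == 0:
        w[n // 2] = 1.0
    return np.exp(np.fft.fft(cep * w))

# Toy magnitude: a gentle first-order rolloff (not a real measurement)
n = 512
freqs = np.fft.fftfreq(n)
mag = 1.0 / np.sqrt(1.0 + (np.abs(freqs) / 0.1) ** 2)
H_min = minimum_phase_from_magnitude(mag)

# The derived spectrum preserves the measured magnitude exactly,
# while its phase is the unique minimum-phase counterpart.
print(np.allclose(np.abs(H_min), mag))  # → True
```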

 

Chris


14 minutes ago, Chris A said:

It's interesting that the difference between the blue/red curves taken at 1 metre in front of each loudspeaker...and the yellow line taken back at the listening position is what a lot of people call "house curve" but is really just the effects of the room and the off-axis response of the loudspeakers. This is the so-called "early reflections" curve that Toole talks about in his book Sound Reproduction: Loudspeakers and Rooms.

 

Note that a flat response on-axis at 1 m pretty much gets you where you need to be (and Toole states this repeatedly in his book and white papers). 

 

"Room correction software" like Audyssey, YPAO, Dirac, etc. is trying to wade through all the room modes and early reflections from measurements at the listening position(s) to get the "minimum phase" response--which is largely what you get with 1 m on-axis measurements of each loudspeaker, at least above 100 Hz (...hint, hint...). 

 

Chris

 

I certainly anticipated the process to be more convoluted and involved - and maybe it was on your end - thanks for getting me up and running and in a place where I can continue learning the process and understanding what I'm seeing, without banging my head into the bricks in the meantime.


It doesn't take very long to grasp the concepts and the basic techniques.  And I think this is the part of the puzzle that typically dissuades the casual hobbyist from becoming proficient with DSP, rather than relying exclusively on "room correction software" to solve all ills without user knowledge of what is really occurring.

 

As you've stated elsewhere, some things are better experienced doing it in person--and reinforced through Q&A--rather than systematically reading pdf tutorials on one's own.  And hearing the differences after dialing them in is something that makes believers out of skeptics--especially those with sensitive ears.

 

I really think that these kinds of tutorials (dialing in DSP crossovers using REW) are something that really ought to be regular sessions at audio get-togethers such as Lone Star Audio Fest (LSAF) or Rocky Mountain Audio Fest (RMAF), etc.  Learning about the same kinds of effects in music recordings via Audacity remastering techniques for poorly done recordings is another subject that a lot of people would likely find useful.

 

Chris


A week later and everything just sounds really good to me. It's crazy how allowing each driver to play its natural frequency range, with just a handful of DSP tweaks and time alignment, can make such a great-sounding setup. Remind me why this isn't more widely adopted?


3 weeks later...

Made some changes tonight. Still have to spend some time listening for any audible improvements.

 

Since purchasing the Yamaha DSP from Claude, he and I have had a running dialogue about utilizing the AES/EBU input and reducing the number of D/A > A/D > D/A conversions caused by having a digital source and an AVR in place ahead of the DSP. In practice it doesn't seem to have much impact, but it's one of those things where it would be ideal to keep the signal entirely within the digital realm until that final D/A conversion at the amplification stage. I was overthinking how to get audio from either a PC or my Raspberry Pi + S/PDIF coax HAT into the AES/EBU format, especially when it came to volume control.

 

I was also chastising Rodney about an expensive music server that he ordered, and it forced me to take a look back at some Roon configuration options. This led me to look more seriously at HQPlayer as a companion application for across-the-board upsampling of my FLAC files in the software domain. It's actually fairly easy to configure Roon as the front end, sending the data to HQPlayer via bitstream, and then from HQPlayer over to the Raspberry Pi running RopieeeXL with the native HQPlayer network endpoint protocol. I can also still stream Spotify natively to the RopieeeXL.

 

Keep up.

 

Volume control was my primary need for a pre-amp, along with video splitting for movies and TV and so on. Roon/HQPlayer have software volume control, as does Spotify. OK, then what about that pesky conversion of S/PDIF coax to AES/EBU? After digging through old forums, it looks like S/PDIF to AES is actually really simple, and it's the AES to S/PDIF direction where you need some fairly particular attenuation to keep from frying your electronics. Otherwise it's just a matter of an inline impedance adapter, or simply giving it a shot with an RCA-to-XLR cable. Which I have.
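A rough sketch of why only the AES-to-S/PDIF direction needs the pad, assuming the usual nominal electrical levels (AES/EBU around 5 Vpp on 110 Ω balanced, S/PDIF around 0.5 Vpp on 75 Ω unbalanced):

```python
# Back-of-envelope for the AES → S/PDIF direction, the one that
# needs attenuation. Levels below are assumed nominal values.
import math

v_aes = 5.0    # Vpp, assumed nominal AES/EBU output level
v_spdif = 0.5  # Vpp, assumed nominal S/PDIF input level

atten_db = 20 * math.log10(v_aes / v_spdif)
print(f"required attenuation ≈ {atten_db:.0f} dB")
# → ≈ 20 dB. Going the other way (S/PDIF → AES), most AES receivers
# tolerate the lower input level, which is why a plain impedance
# adapter or RCA-to-XLR cable can get away with it.
```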

 

I no longer use the AVR for DSP, subwoofer management, or channel levels. That's all living within the Yamaha DSP. It's now just a preamp with a DAC and a video switcher for sending sweet sweet 4k UHD (or lesser) video to my TV. All still a very important function.

 

Keep keeping up.

 

Then there's the issue of keeping things simple for the wife and kids. The AVR absolutely does that, but when it comes to music my wife only uses Spotify through my primary media source, the NVIDIA Shield TV. I'm the only one who switches over to using the Raspberry Pi and Roon, with Spotify also available on that device for my own convenience.

 

So that means that since I'm already manually switching to an input unknown to the family, I can keep the Raspberry Pi directly connected to my Yamaha DSP's digital input, the AVR connected to the Yamaha's analogue inputs, and simply shift my input-selection efforts from the AVR to the front panel of the Yamaha, where it's just a couple of button clicks to change between analogue and digital inputs.

 

So.. for movies and general family use I just run the Shield TV > Denon AVR > Yamaha > Amps, where there's a bunch of D/A and A/D conversion happening, but everything plays nice and is just fine for that type of content.

 

Now.. when I want to go into hi-fi mode, my new signal chain is dramatically shortened to Roon > HQPlayer > Raspberry Pi > Yamaha > Amps, where the first four links remain entirely in the digital domain.

 

Questions?


10 minutes ago, rockhound said:

HQPlayer resides where (hardware)? I'm assuming its primary benefit is upsampling?

 

HQPlayer is a piece of software that's running on the same virtual machine also running the Roon core.

 

https://www.signalyst.com/consumer.html

 

Roon supports direct integration of the software, and basically transmits the audio via bitstream. https://kb.roonlabs.com/HQPlayer_Setup

 

The more recent versions of RopieeeXL also support the "Network Audio Adapter" protocol used by HQPlayer for sending the upsampled PCM stream to a network endpoint.


9 minutes ago, CECAA850 said:

It's not plug and play and it's more expensive.

It's unbox, solder extensions, build wings, get another amp, a DSP unit, and a test mic, and then rely on @Rudy81 (hope to hear from him soon), @Chris A, and others to educate you enough that you don't fry the drivers.  Just slightly more than plug and play.  


7 minutes ago, Thaddeus Smith said:

 

HQPlayer is a piece of software that's running on the same virtual machine also running the Roon core.

 

https://www.signalyst.com/consumer.html

 

Roon supports direct integration of the software, and basically transmits the audio via bitstream. https://kb.roonlabs.com/HQPlayer_Setup

 

The more recent versions of RopieeeXL also support the "Network Audio Adapter" protocol used by HQPlayer for sending the upsampled PCM stream to a network endpoint.

Got it. That's what I was curious about: whether you had HQPlayer running on a separate piece of hardware.

