AMD Announces Ryzen 7000 Series "Zen 4" Desktop CPUs - Linux Benchmarks To Come

  • Originally posted by Dukenukemx View Post
    As for 4K, the main issue is that everyone tests it with anti-aliasing and, much like with DLSS and FSR, forgets why this technology was created in the first place. You don't need to run AA when you're using 4K. AA was created because at 640x480 or 800x600 you could really see the jaggies. At 4K, though, good luck finding them. Just set the game's AF to 16X and you'll be fine at 4K.
    The Nyquist sampling theorem doesn't stop applying when you quadruple the number of pixels. If a 3D scene has content in it that's less than twice the size of a rendered pixel, that content will appear or disappear from frame to frame. Aliasing isn't just jagged horizons. It's ants crawling on the horizon when you move the camera, and buzzing foliage, and flickering fences.

    If you have a 4K monitor that's small enough (or far enough away) to be retina PPI, then the most efficient way to render on it is probably to render at lower internal resolution and enlarge to 4K with a temporal AA upscaler like DLSS or FSR 2. Where "most efficient" means the highest frame rate for equivalent image quality on the same hardware.
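A tiny numerical sketch of that folding effect (my own illustration, not from the thread): sampling a sine above the Nyquist limit produces sample values identical to a lower-frequency alias, which is exactly why sub-pixel detail flickers between frames. Here the "frame rate" is 60 Hz, so anything above 30 Hz folds back down:

```python
import math

fs = 60.0                 # sampling rate ("frame rate") in Hz
f_detail = 50.0           # detail frequency, above the Nyquist limit fs/2 = 30 Hz
f_alias = fs - f_detail   # the 10 Hz alias it folds down to

# At every sample instant the fast sine is indistinguishable from the slow alias,
# so the sampled output "contains" a 10 Hz signal that was never in the scene.
for n in range(8):
    t = n / fs
    real = math.sin(2 * math.pi * f_detail * t)
    alias = -math.sin(2 * math.pi * f_alias * t)
    assert abs(real - alias) < 1e-9
```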



    • Originally posted by yump View Post
      The Nyquist sampling theorem doesn't stop applying when you quadruple the number of pixels. If a 3D scene has content in it that's less than twice the size of a rendered pixel, that content will appear or disappear from frame to frame. Aliasing isn't just jagged horizons. It's ants crawling on the horizon when you move the camera, and buzzing foliage, and flickering fences.
      Yes, but a little bit no. The most conventional application of Nyquist assumes a proper reconstruction filter (i.e. sinc). The pixels in modern displays act more like a box filter for reconstruction, so your Kell factor is actually lower for them. That's the part where you're slightly off.

      However, you're still right that aliasing is always a potential issue, irrespective of the sampling frequency. That's because aliasing can produce beat frequencies in lower bands, where they're very noticeable.

      The approach taken in modern digital audio equipment is to push the sampling frequency (and thereby Nyquist limit) so high that you can use a cheap, lower-order antialias filter which has a more gradual rolloff. You don't notice the rolloff, because it still doesn't start until well in the supersonic range.

      Getting back to graphics, I think what our friend actually said was to use AF 16x, which I take to mean Anisotropic Filtering with 16 samples. That should be pretty effective at minimizing aliasing in textures. That leaves us just with the issue of edge jaggies. And there, I'm going to agree that you probably won't tend to notice edge jaggies, in a fast-paced game @ 4k, if your monitor is 32" or less and you don't sit with your face right up in it. If you can afford edge-AA, so much the better.

      Originally posted by yump View Post
      If you have a 4K monitor that's small enough (or far enough away) to be retina PPI, then the most efficient way to render on it is probably to render at lower internal resolution and enlarge to 4K with a temporal AA upscaler like DLSS or FSR 2.
      Using conventional upscaling methods, good edge AA (not to mention texture filtering) should be even more important. IIRC, DLSS 1.0 didn't use AA, but I don't know about DLSS 2.0.
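The audio analogy above can be made concrete with a small helper (my own sketch, not from the thread): a tone just above the Nyquist limit folds down into a very audible band, while oversampling keeps it supersonic, where a cheap, gently sloping filter is enough.

```python
def folded_freq(f_hz, fs_hz):
    """Apparent frequency of a tone at f_hz after ideal sampling at fs_hz (spectral folding)."""
    f = f_hz % fs_hz
    return min(f, fs_hz - f)

# A 46 kHz supersonic tone sampled at 48 kHz aliases down to a glaring 2 kHz beat:
assert folded_freq(46_000, 48_000) == 2_000
# Oversampled at 192 kHz it stays at 46 kHz, safely above the hearing range:
assert folded_freq(46_000, 192_000) == 46_000
```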



      • Originally posted by yump View Post

        The Nyquist sampling theorem doesn't stop applying when you quadruple the number of pixels. If a 3D scene has content in it that's less than twice the size of a rendered pixel, that content will appear or disappear from frame to frame. Aliasing isn't just jagged horizons. It's ants crawling on the horizon when you move the camera, and buzzing foliage, and flickering fences.

        If you have a 4K monitor that's small enough (or far enough away) to be retina PPI, then the most efficient way to render on it is probably to render at lower internal resolution and enlarge to 4K with a temporal AA upscaler like DLSS or FSR 2. Where "most efficient" means the highest frame rate for equivalent image quality on the same hardware.
        I mostly agree with the sampling theorem, although it's much more complex on a TFT because pixels are not single points but oddly shaped areas that are also split into 3 different color subpixels.

        But if those pixels get small enough (more resolution or a greater viewing distance) we approach a point where the eye can't distinguish single pixels and aliasing is no longer a problem. The only remaining question is whether it's easier to render at a lower resolution and upscale, because we can't see the difference anyway. One method might give artefacts the other won't.
        Last edited by Anux; 09 September 2022, 04:32 AM.



        • Originally posted by Anux View Post
          But if those pixels get small enough (more resolution or a greater viewing distance) we approach a point where the eye can't distinguish single pixels and aliasing is no longer a problem. The only remaining question is whether it's easier to render at a lower resolution and upscale, because we can't see the difference anyway. One method might give artefacts the other won't.
          Yeah, if the display and render resolutions are high enough the eye becomes the antialiasing filter. But doing that takes a lot more memory bandwidth, GPU time, electrical energy, and maybe even display cable bandwidth than it does to render at lower resolution, accumulate samples over multiple frames (FSR/DLSS 2+), and enlarge the image at the last possible moment. That's why I used the word "efficient".

          Native rendering is a bad way to do 3D graphics in the same way that MJPEG is a bad way to transmit video.
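Back-of-the-envelope numbers for that efficiency claim (my own arithmetic, assuming the common "Quality" upscaler input of 1440p for a 4K output):

```python
native_4k = 3840 * 2160        # pixels shaded per frame at native 4K
internal = 2560 * 1440         # typical internal resolution for a 4K "Quality" preset
ratio = internal / native_4k

# The GPU shades roughly 44% as many pixels per frame; the temporal upscaler
# reuses samples accumulated over previous frames to reconstruct the rest.
assert round(ratio, 2) == 0.44
```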



          • Originally posted by coder View Post
            Getting back to graphics, I think what our friend actually said was to use AF 16x, which I take to mean Anisotropic Filtering with 16 samples. That should be pretty effective at minimizing aliasing in textures. That leaves us just with the issue of edge jaggies. And there, I'm going to agree that you probably won't tend to notice edge jaggies, in a fast-paced game @ 4k, if your monitor is 32" or less and you don't sit with your face right up in it. If you can afford edge-AA, so much the better.
            Yep, and while there are still jaggies and you will see them if you stand still in a game and look, they're not very noticeable while playing. Especially in modern games, where there's a lot going on in the picture; older games have less going on, so you can see the jaggies more clearly. That's why at 4K you're better off turning off AA and getting the extra frames.
