AMD Announces Ryzen 7000 Series "Zen 4" Desktop CPUs - Linux Benchmarks To Come


  • Dukenukemx
    replied
    Originally posted by coder View Post
    Getting back to graphics, I think what our friend actually said was to use AF 16x, which I take to mean Anisotropic Filtering with 16 samples. That should be pretty effective at minimizing aliasing in textures. That leaves us just with the issue of edge jaggies. And there, I'm going to agree that you probably won't tend to notice edge jaggies, in a fast-paced game @ 4k, if your monitor is 32" or less and you don't sit with your face right up in it. If you can afford edge-AA, so much the better.
    Yep, and while there are still jaggies and you will see them if you stand still in a game and look, they're not very noticeable when playing. That's especially true of modern games, where there's a lot going on in the picture; older games have less going on, so you can see the jaggies more clearly. That's why at 4K you're better off turning off AA and getting the extra frames.



  • yump
    replied
    Originally posted by Anux View Post
    But if those pixels get small enough (more resolution or greater viewing distance) we approach a point where the eye can't distinguish single pixels and aliasing is not a problem anymore. The only question remaining would be whether it's easier to render at a lower resolution and upscale, because we can't see the difference anyway. One method might give artefacts while the other won't.
    Yeah, if the display and render resolutions are high enough the eye becomes the antialiasing filter. But doing that takes a lot more memory bandwidth, GPU time, electrical energy, and maybe even display cable bandwidth than it does to render at lower resolution, accumulate samples over multiple frames (FSR/DLSS 2+), and enlarge the image at the last possible moment. That's why I used the word "efficient".

    Native rendering is a bad way to do 3D graphics in the same way that MJPEG is a bad way to transmit video.
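
    A quick back-of-the-envelope on the shading work saved, as a sketch (the 0.67 per-axis scale is just an illustrative "Quality"-style preset, not an exact DLSS/FSR figure):

    Code:
    # Rough illustration of why rendering below native 4K and upscaling saves work.
    native_w, native_h = 3840, 2160      # 4K output resolution
    scale = 0.67                         # per-axis internal render scale (illustrative)

    internal_w = round(native_w * scale)
    internal_h = round(native_h * scale)
    native_px = native_w * native_h
    internal_px = internal_w * internal_h

    print(f"native render:   {native_px:,} pixels/frame")
    print(f"internal render: {internal_px:,} pixels/frame "
          f"(~{internal_px / native_px:.0%} of native shading work)")

    That's roughly 45% of the native pixel count each frame, before counting the upscaler's own (comparatively small) cost.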



  • Anux
    replied
    Originally posted by yump View Post

    The Nyquist sampling theorem doesn't stop applying when you quadruple the number of pixels. If a 3D scene has content in it that's less than twice the size of a rendered pixel, that content will appear or disappear from frame to frame. Aliasing isn't just jagged horizons. It's ants crawling on the horizon when you move the camera, and buzzing foliage, and flickering fences.

    If you have a 4K monitor that's small enough (or far enough away) to be retina PPI, then the most efficient way to render on it is probably to render at lower internal resolution and enlarge to 4K with a temporal AA upscaler like DLSS or FSR 2. Where "most efficient" means the highest frame rate for equivalent image quality on the same hardware.
    I mostly agree with the sampling theorem, although it's much more complex on a TFT because pixels are not single points but weirdly shaped areas that are also split into 3 different color areas.

    But if those pixels get small enough (more resolution or greater viewing distance) we approach a point where the eye can't distinguish single pixels and aliasing is not a problem anymore. The only question remaining would be whether it's easier to render at a lower resolution and upscale, because we can't see the difference anyway. One method might give artefacts while the other won't.
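
    Rough numbers for the viewing-distance point, as a sketch (the 27-inch 4K panel and the commonly cited ~60 pixels-per-degree acuity figure are illustrative assumptions, not measurements):

    Code:
    import math

    # Angular pixel density of a monitor vs. a commonly cited ~60 pixels/degree
    # visual acuity figure. Monitor size and distances are illustrative.
    diag_in, res_h, res_v = 27.0, 3840, 2160
    width_in = diag_in * res_h / math.hypot(res_h, res_v)   # physical width in inches
    ppi = res_h / width_in                                   # pixels per inch

    for distance_in in (24, 36, 48):                         # viewing distance in inches
        pixel_deg = math.degrees(2 * math.atan(0.5 / (ppi * distance_in)))
        print(f"{distance_in} in away: ~{1 / pixel_deg:.0f} pixels/degree")

    At 24 in that already works out to roughly 68 pixels/degree, so this particular setup would already be past the point where single pixels are individually resolvable.
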
    Last edited by Anux; 09 September 2022, 04:32 AM.



  • coder
    replied
    Originally posted by yump View Post
    The Nyquist sampling theorem doesn't stop applying when you quadruple the number of pixels. If a 3D scene has content in it that's less than twice the size of a rendered pixel, that content will appear or disappear from frame to frame. Aliasing isn't just jagged horizons. It's ants crawling on the horizon when you move the camera, and buzzing foliage, and flickering fences.
    Yes, but a little bit no. The most conventional application of Nyquist assumes a proper reconstruction filter (i.e. sinc). The pixels in modern displays act as more of a box filter, for reconstruction. So, your Kell Factor is actually lower, for them. That's the part where you're slightly off.

    However, you're still right that aliasing is always a potential issue, irrespective of the sampling frequency. That's because aliasing can produce beat frequencies in lower bands, where they're very noticeable.

    The approach taken in modern digital audio equipment is to push the sampling frequency (and thereby Nyquist limit) so high that you can use a cheap, lower-order antialias filter which has a more gradual rolloff. You don't notice the rolloff, because it still doesn't start until well in the supersonic range.

    Getting back to graphics, I think what our friend actually said was to use AF 16x, which I take to mean Anisotropic Filtering with 16 samples. That should be pretty effective at minimizing aliasing in textures. That leaves us just with the issue of edge jaggies. And there, I'm going to agree that you probably won't tend to notice edge jaggies, in a fast-paced game @ 4k, if your monitor is 32" or less and you don't sit with your face right up in it. If you can afford edge-AA, so much the better.

    Originally posted by yump View Post
    If you have a 4K monitor that's small enough (or far enough away) to be retina PPI, then the most efficient way to render on it is probably to render at lower internal resolution and enlarge to 4K with a temporal AA upscaler like DLSS or FSR 2.
    Using conventional upscaling methods, good edge AA (not to mention texture filtering) should be even more important. IIRC, DLSS 1.0 didn't use AA, but I don't know about DLSS 2.0.
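
    The beat-frequency point above is easy to show numerically; here's a generic sketch (nothing renderer-specific, just a tone sampled above the Nyquist limit folding down to a low-frequency alias):

    Code:
    import numpy as np

    # Sampling a tone above the Nyquist limit folds it down to a low-frequency alias.
    fs = 100.0                    # sampling rate
    f_signal = 95.0               # signal frequency, above Nyquist (fs/2 = 50)
    t = np.arange(0, 1, 1 / fs)

    samples = np.sin(2 * np.pi * f_signal * t)
    alias = np.sin(2 * np.pi * (fs - f_signal) * t)   # predicted alias at 5 "Hz"

    # The undersampled 95 Hz tone is indistinguishable from a 5 Hz tone
    # (up to a sign flip) -- the low-frequency "beat" you perceive.
    print(np.allclose(samples, -alias))

    Same idea in space instead of time: detail finer than the pixel grid doesn't disappear, it folds into lower-frequency crawl and shimmer.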



  • yump
    replied
    Originally posted by Dukenukemx View Post
    As for 4k, the main issue with it is that everyone tests it with anti-aliasing and, much like with DLSS and FSR, they don't remember why this technology was created in the first place. You don't need to run AA when you're using 4k. AA was created because at 640x480 or 800x600 you could really see the jaggies. At 4k though, good luck finding them. Just set the game's AF to 16x and you'll be fine at 4k.
    The Nyquist sampling theorem doesn't stop applying when you quadruple the number of pixels. If a 3D scene has content in it that's less than twice the size of a rendered pixel, that content will appear or disappear from frame to frame. Aliasing isn't just jagged horizons. It's ants crawling on the horizon when you move the camera, and buzzing foliage, and flickering fences.

    If you have a 4K monitor that's small enough (or far enough away) to be retina PPI, then the most efficient way to render on it is probably to render at lower internal resolution and enlarge to 4K with a temporal AA upscaler like DLSS or FSR 2. Where "most efficient" means the highest frame rate for equivalent image quality on the same hardware.



  • coder
    replied
    Originally posted by Anux View Post
    There is simply no reason this card shouldn't be < $100.
    Inflation and component shortages.

    Give it time, though. My hope, for the NAVI 24 products, is that the dies are so small that the cards will indeed trend towards that magic $100 price point, if the rest of the costs come down more to where they should be. If Intel A380 can sell for $140 with a bigger die and 50% more GDDR6, then RX 6500XT should be able to sell nearer $100 once GPU prices finally stabilize. Right now, I'm seeing RX 6400 for $150 and RX 6500XT for $185.



  • Anux
    replied
    Originally posted by coder View Post
    You need some minimum amount of GDDR memory, PCB, VRM, packaging, connectors, and cooling solution, before you even get to the GPU itself. The effect is that perf/$ increases significantly, by going up a couple tiers. This is clear if you look at charts of perf/$.
    That wasn't true in the past, when we constantly got better perf/$ at the low end. Look at the 6500XT: it uses 70 W less, has 4 GB less memory, only PCIe x4, no video encoder, and a die less than half the size. There is simply no reason this card shouldn't be < $100.



  • Dukenukemx
    replied
    Originally posted by coder View Post
    That's only true because the price point is that low. If you compare higher-priced cards from the same era, then you do get more performance for the same price.

    The problem faced by the cheap cards is that there's a price floor, when making a GPU. You need some minimum amount of GDDR memory, PCB, VRM, packaging, connectors, and cooling solution, before you even get to the GPU itself. The effect is that perf/$ increases significantly, by going up a couple tiers. This is clear if you look at charts of perf/$.
    The problem is that AMD, Nvidia, and I would include Intel as well, have no idea about pricing. Especially AMD, when they released the 5600XT priced the same as the 5700. Later AMD released the 5500, which is barely the same performance as an RX 580 but more expensive. What AMD doesn't get is that if they want market share, they have to lower prices significantly. I would have said Intel understands this, until I saw their A380 for $139 performing worse than an RX 580/GTX 1060. Those GTX 1060s ain't going nowhere.



  • coder
    replied
    Originally posted by Anux View Post
    The only reason one would upgrade an old card is that you get more performance for at least the same price as your old card. It's still the opposite: if I want to replace my RX480, which cost around 200€, I only get slower cards at this price point.
    There is nothing proportional about today's pricing.
    That's only true because the price point is that low. If you compare higher-priced cards from the same era, then you do get more performance for the same price.

    The problem faced by the cheap cards is that there's a price floor, when making a GPU. You need some minimum amount of GDDR memory, PCB, VRM, packaging, connectors, and cooling solution, before you even get to the GPU itself. The effect is that perf/$ increases significantly, by going up a couple tiers. This is clear if you look at charts of perf/$.
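
    A toy model of that price floor (all numbers are invented, purely to show the shape of the curve):

    Code:
    # Toy model of the "price floor" argument: every card pays a fixed cost for
    # memory, PCB, VRM, packaging, connectors and cooling before the GPU itself.
    FIXED_BOM = 80                       # dollars, invented

    tiers = {                            # hypothetical: (GPU-specific cost, relative perf)
        "entry":      (40, 100),
        "mainstream": (120, 260),
        "enthusiast": (250, 480),
        "halo":       (600, 800),
    }

    for name, (gpu_cost, perf) in tiers.items():
        price = FIXED_BOM + gpu_cost
        print(f"{name:11s} ${price:4d}   perf/$ = {perf / price:.2f}")

    Perf/$ climbs for a couple of tiers here (0.83 -> 1.30 -> 1.45) before the halo part drags it back down, which is roughly the shape the real perf/$ charts show.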



  • Anux
    replied
    Originally posted by Dukenukemx View Post
    Performance always goes up, as that's how they entice you to buy new cards. That doesn't mean prices shouldn't go up proportionally.
    The only reason one would upgrade an old card is that you get more performance for at least the same price as your old card. It's still the opposite: if I want to replace my RX480, which cost around 200€, I only get slower cards at this price point.
    There is nothing proportional about today's pricing.

