Red Hat Announces RHEL AI


  • #71
    Originally posted by mSparks View Post
    For gaming, AI gives you generative upscaling, so 320x640 perf for a 4K display; the power and performance benefit claimed there 100% translates into gaming devices.
    RT cores need RTX games; currently there are ~2 or 3 of them, only one so far built to make use of RTX as its core development focus (Alan Wake 2), which proved you can build an AAA blockbuster on a $50mil budget that takes $500mil without RTX.
    The RTX4080 "only" has 76 RT cores. All they have announced on Blackwell so far is one datacenter chip (GB200); chip specs for workstations and gaming will be released within the next 6 to 12 months once they know production yields, and mobile gaming chips will drip out over the next 4 or 5 years.
    From what I have read, only VR games will profit a lot, because they abandon the monolithic design and now have 2 chiplets, where one chiplet can render the left eye and the other chiplet can render the right eye.

    This more or less means that, because RDNA4 will only be 1 GPU die (with cache chiplets), VR gaming could favor Nvidia. But again, AMD does not want to focus on a niche market like VR...

    But all this is on 32-bit floating point only, because Blackwell/5000 has bad 64-bit floating point performance, and some games use 64-bit FP...

    "For gaming AI gives you generative upscaling, so 320x640 perf for a 4K display"

    Well, FSR 4.0 will use the AI matrix cores on Radeon 7000/8000 too, so it's no longer a shader software solution.

    So DLSS could lose its selling point.
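As an aside on the upscaling arithmetic being argued here: the claimed benefit is just the ratio of output pixels to internally rendered pixels. A minimal sketch (the resolutions and the "performance preset" below are illustrative assumptions, not vendor specs):

```python
def upscale_factor(render_res, output_res):
    """Ratio of output pixels to internally rendered pixels."""
    rw, rh = render_res
    ow, oh = output_res
    return (ow * oh) / (rw * rh)

# A typical "performance" preset renders at half the output resolution per
# axis, so a 4K frame is reconstructed from a quarter of the pixels:
print(upscale_factor((1920, 1080), (3840, 2160)))  # 4.0
```

The same arithmetic applies whether the reconstruction is done by a hardware-accelerated network (DLSS, FSR 4.0 on matrix cores) or by a shader implementation; only the cost of the reconstruction step differs.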
    Phantom circuit Sequence Reducer Dyslexia



    • #72
      Originally posted by qarium View Post


      well, FSR 4.0 will use the AI matrix cores on Radeon 7000/8000 too, so it's no longer a shader software solution.
      My biggest problem with the potential of AMD's 8000 series is I've not seen anything to suggest it will be close to competitive with Blackwell.

      The 7900XTX looks like it will end up a bottom-of-the-range card for this coming generation, and I've seen zero evidence to suggest that is something they can make a margin on. Really hope they can, but it's a huge ask.



      • #73
        Originally posted by mSparks View Post
        My biggest problem with the potential of AMD's 8000 series is I've not seen anything to suggest it will be close to competitive with Blackwell.
        The 7900XTX looks like it will end up a bottom-of-the-range card for this coming generation, and I've seen zero evidence to suggest that is something they can make a margin on. Really hope they can, but it's a huge ask.
        Why even think about Blackwell for a comparison?
        The 4070/4080/4090 is a much better comparison...

        From a technical standpoint a 4090 was better than a 7900XTX because of hardware-accelerated DLSS.

        And FSR, even version 3.1, was (as many people say) never as good, because it was a software shader implementation.

        If RDNA4 then uses the AI matrix cores in FSR 4.0 (the 7900XTX has these AI cores as well, not only the 8000 series), this alone would move an RDNA4 8800XT into a better position. AMD lags behind in software development, so it is logical that FSR 4.0 comes later, while the hardware itself was already released in the 7900XTX...

        Then about raytracing: the jump from RDNA2 to RDNA3 was 17%; any improvement there would move an RDNA4 8800XT into a better position...

        Also I think they will release an 8800XTX, in the same way there is a difference between the 7900XTX and the W7900: double the VRAM.

        This means an 8800XTX would have 32GB of VRAM.

        "Ive seen zero evidence to suggest that is something they can make a margin on. Really hope they can, but its a huge ask."

        It's the complete opposite: according to AMD internals, bug-fix releases of chips, or minor-improvement architectures, are much more profitable than completely new architectures.

        Just look at RDNA 1.0 and the 5700XT: it was a horrible chip with many hardware defects, and the workarounds killed performance. Many defects were never fixed; that's one of the reasons why ROCm/HIP support on these chips is so bad.

        The feature-set improvement from RDNA2 to RDNA3 was very small, but it increased profitability very much because of the chiplet design.

        They say something went wrong in the chip design going from RDNA2 to RDNA3, which means RDNA4 is a bug-fix release, i.e. just a minor change to the architecture. Who knows how much performance that brings.

        But for AMD this is clearly a way to make more money than bringing a completely new architecture.

        Just compare the Vega64 to the 5700XT: it is pretty sure AMD made more money on the Vega64 than on the RDNA1 5700XT...

        This means RDNA5, with the really new architecture, will cost AMD so much money that the first generation of that architecture will not make much money.

        It's really biased that people believe you can only make big money on big improvements.

        This is not how the chip business works: you can make a new generation with ZERO improvement in performance, go from monolithic chips to a chiplet design, and then make more money even at a lower sales price.

        Exactly this happened with RDNA2 vs RDNA3, if you compare a Radeon 6950XT with a 7800XT or 7900GRE.

        The 6950XT is not slow... but the 7900GRE is much cheaper to produce and to sell, because of the chiplet design.

        AMD clearly says they expect to make money on RDNA4, and clearly not so much on RDNA3.

        But it is easy to see why this is so: RDNA3 shipped with AI matrix cores, but FSR 4.0 was not ready to ship, which means their software development is too slow.

        Factors like this tank the profitability of the RDNA3 generation.

        So if FSR 4.0 is ready for the RDNA4 release, these cards will sell much better against Nvidia's DLSS...







        • #74
          Originally posted by qarium View Post

          Why even think about Blackwell for a comparison?
          Same reason to consider the 8000 series.

          Most PC users that buy GPUs are in upgrade territory now; they will need to upgrade to make use of any of this AI stuff. So to sell, the cost of the card needs to make sense for those who already own an RTX2080/Radeon VII or above.

          i.e. what will those who already own, say, a Radeon 6950XT buy next.

          $300 to swap to the perf of at least an RTX4080 will sell well (low-end Blackwell / the price of RTX4080s after Blackwell launches);

          even $500 to swap to a 7900XTX does not.
          Last edited by mSparks; 15 May 2024, 09:34 AM.



          • #75
            Originally posted by qarium View Post

            Just compare the Vega64 to the 5700XT: it is pretty sure AMD made more money on the Vega64 than on the RDNA1 5700XT...
            Vega 64 had a 486mm^2 die. The 5700 XT was 250mm^2.



            • #76
              Originally posted by DumbFsck View Post
              Vega 64 had a 486mm^2 die. The 5700 XT was 250mm^2.
              The Vega64 was 14nm and the 5700XT was 7nm... that's why you cannot compare the die sizes like this.

              Also keep in mind the cost explosion per nm node: 7nm was much more expensive than 14nm.
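The die-size vs node-cost argument can be made concrete with a back-of-the-envelope calculation. The dies-per-wafer formula below is the standard gross-die estimate; the wafer prices are hypothetical placeholders, since real 14nm/7nm contract pricing is not public:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic gross-die estimate: wafer area over die area, minus an
    edge-loss term. Ignores defect yield, scribe lines, etc."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Hypothetical wafer prices, for illustration only.
wafer_cost = {"14nm": 4000, "7nm": 9000}  # USD per wafer (assumption)

for name, area, node in [("Vega 64", 486, "14nm"), ("5700 XT", 250, "7nm")]:
    n = gross_dies_per_wafer(area)
    print(f"{name} ({node}): ~{n} dies/wafer, ~${wafer_cost[node] / n:.0f}/die")
```

Under these assumed prices, the much smaller 7nm die is not obviously cheaper per die than the big 14nm one, which is exactly the "you cannot compare die sizes across nodes" point.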


              • #77
                Originally posted by mSparks View Post
                Same reason to consider the 8000 series.
                Most PC users that buy GPUs are in upgrade territory now; they will need to upgrade to make use of any of this AI stuff. So to sell, the cost of the card needs to make sense for those who already own an RTX2080/Radeon VII or above.
                i.e. what will those who already own say a Radeon 6950XT buy next.
                $300 to swap to the perf of at least an RTX4080 will sell well (low end blackwell/ the price of rtx4080s after blackwell launches)
                even $500 to swap to a 7900XTX does not.
                The upgrade market worldwide is very small compared to the OEM complete PC/laptop market...
                Most people never upgrade; they buy a new computer after 4-8 years.

                Upgrade territory... believe it or not, in the upgrade market the main factor is not features or performance, as you claim; the main factor is only "price per performance".

                People like you, who have millions of euros and just buy the most expensive option with the most features and performance, are only a very niche market... let's say 1-5% or even less.

                Thus more people upgrade to a 7900XTX/XT or 4070TI and do not buy a 4090, because its "price per performance" is bad...
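The "price per performance" argument can be sketched as a toy calculation. All prices and performance indices below are made-up placeholders, not benchmark data; only the ranking logic matters:

```python
# Toy price-per-performance comparison (hypothetical numbers).
cards = {
    "4070 TI":  (850, 120),   # (price in EUR, relative perf index)
    "4090":     (1800, 180),
    "7900 XTX": (1000, 150),
}

best = max(cards, key=lambda c: cards[c][1] / cards[c][0])
for name, (price, perf) in cards.items():
    print(f"{name}: {perf / price:.3f} perf/EUR")
print("best value:", best)
```

With numbers like these, the halo card has the most absolute performance but the worst perf/EUR, which is why it stays a niche purchase.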

                The RDNA4 Radeon 8000 cards will clearly not be made for the upgrade market; no one will upgrade a 6950XT or 7900XTX to an RDNA4 8000 card.

                AMD proved with the 6400/6500 that they do not care about the upgrade market: the PCIe interface was crippled, which hurt performance on old systems very much, and AV1 decode and encode were also cut, because the 6400/6500 chip was made as a dGPU for Ryzen 7000 APUs in notebooks/laptops, and the 7000 APU has its own AV1 decode unit. They cut the 6400/6500 chip down to the max because it is only a notebook dGPU chip.


                • #78
                  Originally posted by qarium View Post

                  Most people never upgrade; they buy a new computer after 4-8 years.
                  Same difference
                  RTX2080 was launched in Sept 2018 - 6 years ago
                  Radeon VII was launched Feb, 2019 - 5 years ago

                  These cards are now in "upgrade" territory, either in place or as part of a new system; it doesn't make much if any difference, since the deciding factor for new/swap will be what else needs upgrading. I'm running a 5900X CPU, with absolutely no intention of upgrading it any time soon, not least because I got lucky with a very performant chip, and the only marked difference vs the 7900X is that the 7900X burns 60 more watts for the same perf because it wastes electrons on RDNA2.
                  Originally posted by qarium View Post
                  more people upgrade to a 7900XTX
                  No one is or will. It is the first GPU AMD have made in a VERY long time that was moderately competitive, between its launch in Dec 2022 and the launch of the RTX4070 in April 2023 - 4 whole months - but it will be eclipsed very soon because, with the way Nvidia is drip-feeding their updates, the RTX5060 will/should outperform the RTX4080 in most if not all important tasks.

                  Also, I actually expect raster performance on the RTX5000 chips to be less than on the RTX4000 chips, as they devote more and more of the die to raytracing and GPGPU (the interesting bits of hardware for running huge Markov models), but we'll see.
                  Last edited by mSparks; 16 May 2024, 07:46 PM.



                  • #79
                    Originally posted by mSparks View Post
                    Same difference
                    RTX2080 was launched in Sept 2018 - 6 years ago
                    Radeon VII was launched Feb, 2019 - 5 years ago
                    These cards are now in "upgrade" territory, either in place or as part of a new system, it doesn't make much if any difference, the deciding factor for new/swap will be what else needs upgrading.
                    Right, if your definition of an upgrade is buying a new OEM computer, then yes.

                    I myself would not upgrade a 2080 or Radeon VII... my Vega64 is close to the upgrade range, but not yet.

                    Yes, FSR 3.0 and FSR 3.1 were officially not released for the Vega64, but because they are open source they can easily be backported.
                    ROCm/HIP has dropped active development for it, but stuff like Blender works, and people even report that ZLUDA works too...
                    Mesa even added a raytracing software solution in shader code for Vega.

                    This means the only things that make me want an upgrade are AV1 decode and more VRAM for bigger AI models;
                    raytracing, or higher raytracing performance, is not even on the list, because even on a 7900XTX raytracing is pointless in my point of view.

                    But all the RX480/580 people clearly want an upgrade: FSR runs slowly because there is no FP16 support and FP32 is slower,
                    there is no working ROCm/HIP, no ZLUDA, and so on...

                    This means the RX480/580 people are more or less fucked and want to upgrade.

                    Originally posted by mSparks View Post
                    I'm running a 5900X CPU, absolutely no intention of upgrading it any time soon, not least because I got lucky with a very performant chip, and the only marked difference vs the 7900X is that the 7900X burns 60 more watts for the same perf because it wastes electrons on RDNA2.
                    You are absolutely right, no need to upgrade at all.

                    Originally posted by mSparks View Post
                    No one is or will, it is the first GPU AMD have made in a VERY long time that was moderately competitive between its launch date in Dec 2022 and the launch of the RTX4070 in April 2023 - 4 whole months, but will be eclipsed very soon because the way Nvidia is drip feeding their updates, the RTX5060 will/should outperform the RTX4080 in most if not all important tasks.
                    I just wanted to say that many people upgrade to a 4070 and not a 4090 because of the bad value per euro, meaning bad performance per euro.

                    The people who have enough money to buy the best whatever it costs are clearly a niche.

                    Originally posted by mSparks View Post
                    Also, I actually expect raster performance on the RTX5000 chips to be less than on the RTX4000 chips, as they devote more and more of the die to raytracing and GPGPU (the interesting bits of hardware for running huge Markov models), but we'll see.
                    Is it only raster performance that is less? Or are games using FP64 totally tanked in performance?
                    These RTX5000 chips have less FP64 performance, keep this in mind.

                    Also I think they will even devote fewer transistors to GPGPU/compute and spend the transistors on AI acceleration... meaning stuff not used in GPGPU/compute...
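For context on the FP64 point: consumer GPUs typically expose only a small fraction of their FP32 rate for FP64 (ratios like 1/32 or 1/64 are common on recent consumer parts; the exact ratio and the TFLOPS figure below are assumptions for illustration, not published Blackwell specs):

```python
def fp64_tflops(fp32_tflops, fp64_ratio):
    """Effective FP64 throughput given FP32 throughput and the
    hardware's FP64:FP32 rate ratio."""
    return fp32_tflops * fp64_ratio

# e.g. a hypothetical 80 TFLOPS FP32 card with a 1/64 FP64 ratio:
print(fp64_tflops(80, 1 / 64))  # 1.25 TFLOPS FP64
```

So a game leaning on FP64 math can see throughput drop by an order of magnitude or more relative to its FP32 paths, regardless of how fast the card is otherwise.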



                    • #80
                      Originally posted by qarium View Post

                      Right, if your definition of an upgrade is buying a new OEM computer, then yes.

                      I myself would not upgrade a 2080 or Radeon VII... my Vega64 is close to the upgrade range, but not yet.

                      Within a year or two, RTX4080 perf looks like it will be the minimum for many new game releases. For games like Alan Wake 2 it already is.
                      Originally posted by qarium View Post
                      because of the bad value per euro
                      Nvidia got to price-gouge on the 4000 series because AMD wasn't competitive. I have no idea what that will look like this coming gen.
                      Originally posted by qarium View Post
                      raster performance what is less ?
                      Raster performance comes from completely different silicon than GPGPU (CUDA cores) and raytracing (RT cores).

                      AIUI this is already the main difference between the 4080 and the 4090, with the 4090 having less silicon dedicated to raster and more to GPGPU and RT, which is why for many "legacy" (and I use that term loosely) graphics applications the 4080 outperforms the 4090.

