Intel Arc Graphics A750/A770 Quick Linux Competition With The Radeon RX 7600


  • #31
    Originally posted by sophisticles View Post
    How is closed source a disadvantage?
    That's easy to explain. If things don't work right, nobody outside NVIDIA knows why, and nobody else can debug it. And when design choices are made, nobody has any say in NVIDIA's way of doing things.

    Originally posted by sophisticles View Post
    Who knows the inner workings of an NVIDIA GPU better than NVIDIA?
    Ironically, that's true, if only because nobody else knows...

    Originally posted by sophisticles View Post
    I don't get this obsession people have with drivers needing to be open source.

    Obviously.

    Originally posted by sophisticles View Post
    I wonder if the people who demand that all drivers be open source play any closed-source proprietary games, or use closed-source proprietary software like Resolve or Lightworks?
    Why is it OK for one piece of software to be closed source while another has to be open source?

    The lifespan of a driver is just a lot shorter when it isn't actively maintained; NVIDIA's legacy drivers are kind of a joke. Applications, by contrast, tend to keep working much longer, even on new OS versions and even without active maintenance.

    Originally posted by sophisticles View Post
    What if Intel, AMD and NVIDIA all said that going forward all their new hardware would only work with closed-source drivers, would these same people stop buying any new hardware and just stick with legacy hardware?
    The answer is easy: if there were no good choice left, you couldn't make a good choice. Luckily, that's mostly not a problem today.



    • #32
      Originally posted by sophisticles View Post
      I wonder if the people who demand that all drivers be open source play any closed-source proprietary games, or use closed-source proprietary software like Resolve or Lightworks?
      As we constantly see with AMD, the Mesa drivers keep breathing new life and new features into AMD GPUs.



      • #33
        Originally posted by ryao View Post

        The A750 is $200.
        Sorry, I meant to say price-to-performance, which is the most important metric in today's GPU world.
        The A750 still only has 8 GB of VRAM, the same as my 5700 XT from 3.5 years ago...
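
        Price-to-performance is just a ratio, so comparisons are easy to sketch; the $200 A750 price is from this thread, while every FPS number below is a purely hypothetical placeholder:

        // Hypothetical frames-per-dollar comparison; only the $200 price is real.
        #include <cstdio>

        int main() {
            struct Card { const char* name; double price_usd; double avg_fps; };
            const Card cards[] = {
                {"Arc A750",  200.0, 60.0},  // price from the thread, FPS made up
                {"other GPU", 270.0, 75.0},  // entirely hypothetical
            };
            for (const Card& c : cards)
                std::printf("%-10s %.3f FPS per dollar\n", c.name, c.avg_fps / c.price_usd);
            return 0;
        }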



        • #34
          Originally posted by Quackdoc View Post
          Nothing I have said here contradicts what I have said in the past. The Intel team has so far been doing a stellar job; this is the only major issue I have with it, and it's not like it's too late to be remedied.
          Stuff like that will always be the case as long as the open-source community does not build and organize its own open-source GPU hardware... like the Libre-SOC GPU...

          Of course it is not "too late", but you will see that they won't do it. And the answer in cases like this is always the same: buy the next generation.

          Honestly, I do not understand why Intel and AMD do not work together in the GPU field, because with CUDA/DLSS and so on they both lose against NVIDIA...

          And the AI hype makes CUDA even more important.

          I know that many years ago AMD and Intel had an APU-like product together, with an Intel CPU and an AMD GPU, but that cooperation failed horribly...

          The fact that Intel and AMD do not work together on the GPU side gives NVIDIA a free pass.



          • #35
            Originally posted by sophisticles View Post
            How is closed source a disadvantage?
            Who knows the inner workings of an NVIDIA GPU better than NVIDIA?
            I don't get this obsession people have with drivers needing to be open source.
            I wonder if the people who demand that all drivers be open source play any closed-source proprietary games, or use closed-source proprietary software like Resolve or Lightworks?
            Why is it OK for one piece of software to be closed source while another has to be open source?
            What if Intel, AMD and NVIDIA all said that going forward all their new hardware would only work with closed-source drivers, would these same people stop buying any new hardware and just stick with legacy hardware?
            Everything you say is true, but the future will show what the essence of all this really is.

            I predict this: the market will split. One group, the people who really want open source, will build and maintain their own truly, fully open-source GPU, like the Libre-SOC GPU from RED Semiconductors.

            The other group, who also accept closed-source games, will keep buying NVIDIA GPUs.

            And companies like Valve will keep buying AMD chips with open-source drivers, because companies like Valve want the power to modify the drivers they use in their devices, like the Steam Deck.

            "would these same people stop buying any new hardware and just stick with legacy hardware?"

            No, these people will not stick with legacy hardware; instead they will buy a Libre-SOC GPU from RED Semiconductors.



            • #36
              Originally posted by qarium View Post

              Stuff like that will always be the case as long as the open-source community does not build and organize its own open-source GPU hardware... like the Libre-SOC GPU...

              Of course it is not "too late", but you will see that they won't do it. And the answer in cases like this is always the same: buy the next generation.

              Honestly, I do not understand why Intel and AMD do not work together in the GPU field, because with CUDA/DLSS and so on they both lose against NVIDIA...

              And the AI hype makes CUDA even more important.

              I know that many years ago AMD and Intel had an APU-like product together, with an Intel CPU and an AMD GPU, but that cooperation failed horribly...

              The fact that Intel and AMD do not work together on the GPU side gives NVIDIA a free pass.
              It's worth noting that Intel does support a ROCm backend for its oneAPI compute stack. So at the very least, Intel IS pushing for a cross-platform, high-performance compute API that does support AI workloads. Though I would still rather see Vulkan compute support pushed by AMD and Intel myself.
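
              For a sense of what that cross-platform pitch looks like in code, here is a minimal SYCL/oneAPI sketch (a vector add, nothing more); which GPU it actually runs on depends on the backends installed, e.g. Codeplay's HIP and CUDA plugins, and that setup is an assumption on my part:

              // Minimal SYCL 2020 vector add; the same source can target Intel,
              // AMD, or NVIDIA GPUs depending on which backend plugins exist.
              #include <sycl/sycl.hpp>
              #include <iostream>
              #include <vector>

              int main() {
                  sycl::queue q{sycl::default_selector_v};  // picks the "best" device
                  std::cout << "Running on: "
                            << q.get_device().get_info<sycl::info::device::name>() << "\n";

                  std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
                  {
                      sycl::buffer bufA{a}, bufB{b}, bufC{c};
                      q.submit([&](sycl::handler& h) {
                          sycl::accessor A{bufA, h, sycl::read_only};
                          sycl::accessor B{bufB, h, sycl::read_only};
                          sycl::accessor C{bufC, h, sycl::write_only};
                          h.parallel_for(sycl::range<1>{1024},
                                         [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
                      });
                  }  // buffers go out of scope here and copy results back to the host

                  std::cout << "c[0] = " << c[0] << " (expect 3)\n";
                  return 0;
              }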



              • #37
                Originally posted by Quackdoc View Post
                It's worth noting that Intel does support a ROCm backend for its oneAPI compute stack. So at the very least, Intel IS pushing for a cross-platform, high-performance compute API that does support AI workloads. Though I would still rather see Vulkan compute support pushed by AMD and Intel myself.
                They can try as hard as they want; every AI story I read, for example Elon Musk with X/Twitter and TruthGPT, ends the same way: everyone buys NVIDIA, or, in the case of Google or Microsoft, develops their own chips.

                "I would still rather see vulkan compute support pushed by AMD and intel myself."

                i am in favor of this to. but i don't know no one focus on the outside of some sample code ŧests.

                but if you read this post you can see that there is little hope:

                Originally posted by cgmb View Post
                To be clear, the use of shader assembly code in the math libraries isn't why this has been difficult. As far as I can tell, every library has generic fallback paths. We never build for architectures that we don't officially support, so sometimes overly-specific #ifdefs creep into the code. However, I have been building and testing each library on a wide range of GPUs, and thus far every library has worked on every GPU I've tried after only minor patches.
                As a math libraries developer, in my opinion, the two main reasons why we do not have full support for all GPUs in the ROCm math libraries are:
                1. a. There have been 25 different GFX9, GFX10 and GFX11 ISAs that have been introduced since Vega released in 2017. A library like rocsparse is roughly 250 MiB. If it were built for all 25 ISAs, that one library would be 6 GiB. We have something like 17 libraries, so the total installed size of ROCm would be around 100 GiB.
                1. b. That hypothetical 6 GiB rocsparse library couldn't actually be created. The use of 32-bit relative offsets by the compiler constrains the maximum size of a binary to 2 GiB. Any binaries larger than that would fail to link. We could create multiple versions of the library built for different GPUs and ask the user to install the version for their GPU, however, our current build and packaging system is not sophisticated enough to do that.
                2. We do not have the test infrastructure to validate every library for every GPU to support the same level of quality that we do for the MI series GPUs, and we don't have any concept of tiers of support.
                There are a few different solutions in the works to address (1). Many of the GFX ISAs are literally identical to each other or have minimal differences. I'm confident we will solve (1) and then the libraries will at least run on all AMD GPUs. However, they would still not be validated for correctness on consumer cards unless we also solved (2).
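
                As a sanity check on the quoted numbers, the fat-binary arithmetic works out as described; a small sketch, assuming (as a simplification) that library size scales linearly with the number of ISAs compiled in:

                // Back-of-the-envelope version of cgmb's size estimate.
                #include <cstdio>

                int main() {
                    const double per_isa_mib = 250.0;  // rocsparse for one ISA (from the post)
                    const int    num_isas    = 25;     // GFX9/10/11 ISAs since Vega (from the post)
                    const int    num_libs    = 17;     // ROCm math libraries (from the post)

                    const double one_lib_gib  = per_isa_mib * num_isas / 1024.0;
                    const double all_libs_gib = one_lib_gib * num_libs;

                    std::printf("one library, all ISAs: ~%.1f GiB\n", one_lib_gib);           // ~6.1 GiB
                    std::printf("all %d libraries:      ~%.0f GiB\n", num_libs, all_libs_gib); // ~104 GiB
                    return 0;  // also: a ~6 GiB library could not even link (2 GiB rel32 cap)
                }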



                • #38
                  Originally posted by qarium View Post
                  But if you read this post, you can see that there is little hope:
                  Perhaps I'm missing something but I don't see how you get "little hope" from that post:

                  There are a few different solutions in the works to address (1). Many of the GFX ISAs are literally identical to each other or have minimal differences. I'm confident we will solve (1) and then the libraries will at least run on all AMD GPUs. However, they would still not be validated for correctness on consumer cards unless we also solved (2).
                  First we need to continue/finish the work on reducing binary size, then we need to ramp up test coverage for consumer cards, directly and/or by working with the community as we do for the graphics drivers.



                  • #39
                    Originally posted by bridgman View Post
                    Perhaps I'm missing something but I don't see how you get "little hope" from that post:

                    You are not missing anything in the quoted text, but you are missing the history that is not quoted there.
                    I can say for sure that I have tried hard and hoped for the best; for well over a decade, outside of cryptocurrency mining, there was no real compute use case on AMD cards. The "little hope" comes from all those years of waiting.

                    People could run CUDA on a GeForce 8800 GT from October 29, 2007, and here in 2023 ROCm/HIP still does not work on my Vega 64... that's a 16-year gap...

                    All the AI hype and all the open-source AI projects should show AMD that they finally need to get their compute act together.

                    Originally posted by bridgman View Post
                    First we need to continue/finish the work on reducing binary size,
                    Why? Make the compiler fully 64-bit and then you do not need to accept the 2 GiB size limit.

                    Originally posted by bridgman View Post
                    then we need to ramp up test coverage for consumer cards, directly and/or by working with the community as we do for the graphics drivers.
                    Did you not work with the community on compute for the last 10 years?

                    When AMD releases a card like the 7900 XTX, don't people expect some test coverage of the compute side?

                    What is the problem with compute on Vulkan? Is HIP the only suitable solution, or is there hope for Vulkan compute?



                    • #40
                      Originally posted by qarium View Post
                      Why? Make the compiler fully 64-bit and then you do not need to accept the 2 GiB size limit.
                      I don't know the typical numbers, but going to 64-bit pointers and offsets everywhere is going to increase the size even more.
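
                      To make that trade-off concrete, a small sketch of the constraint under discussion; the 2x figure below is a simplification of the per-reference cost, not a measured number:

                      // A signed 32-bit PC-relative offset reaches at most +/- 2^31 bytes
                      // from the referencing instruction, so a single linked binary that
                      // relies on rel32 references is effectively capped at 2 GiB.
                      #include <cstdint>
                      #include <cstdio>

                      int main() {
                          const int64_t reach = INT64_C(1) << 31;  // 2 GiB in each direction
                          std::printf("rel32 reach: +/- %lld bytes\n", (long long)reach);

                          // Widening every offset from 4 to 8 bytes lifts the cap, but each
                          // such reference then costs twice the bytes, which is the point
                          // above: the already huge binaries get even bigger.
                          std::printf("per-reference offset cost: 4 -> 8 bytes (2x)\n");
                          return 0;
                      }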

