Intel Arc Graphics A750/A770 Quick Linux Competition With The Radeon RX 7600


  • danger89
    replied
    Originally posted by mroche View Post
    The A750 and A770 range between $250 and $350. At that price range, you can get the RX 6400 to RX 6750 XT and the RX 7600, and from team green a 3050 to 3070, all of which have a "complete" ecosystem. So in its current state, I would say no, Arc is not very competitive.
    Cheers,
    Mike
    Hopefully, if the Arc A770 drops to $200 or below, then it will get interesting.



  • Forge
    replied
    Originally posted by sophisticles View Post

    Does anything run reliably with Wayland?

    All i see is complaints about Wayland from users of various video cards.
    Why would I come here and post "All my things are working correctly under Wayland, and I'm very much enjoying that it's not really perceptible to the end user, aside from simplified/improved inner workings"? It simply doesn't make sense. You *will* overwhelmingly see posts from people with problems/complaints, regardless of how common or rare those issues are, simply because that's why they come to post.



  • qarium
    replied
    Originally posted by bridgman View Post
    Not to the same extent that we did for graphics. Our open source graphics focus was desktop users on common distros, while the compute focus was on large corporate customers who did not use standard distros or work in the community to the same extent.
    I did buy AMD hardware for GPU compute in the last 10 years... that AMD only cared for "large corporate customers who did not use standard distros" and literally ignored the community is, in my point of view, hitting them hard now with all the AI hype, with people buying Nvidia cards for AI workloads on CUDA...

    AMD has to end this negative, elitist focus on "large corporate customers" because it damages the reputation of the AMD brand name.

    I personally want to buy AMD hardware for AI workloads without AMD telling me it is only for "large corporate customers".
    That's nonsense.

    AMD should think about this: why not ship open-source AI models inside the driver and make these AI functions easily accessible to other software like games? There are some very interesting open-source AI models out there.

    AMD should also think about dual-GPU cards again, with one RDNA3 chip and one CDNA chip.
    Producing a smaller CDNA die with a very large amount of RAM, to avoid the memory wall in AI tools, could also be a top seller as a second card alongside the existing graphics card.

    I would also find it nice to see some FPGAs included in any CPU/APU. The Mega65 (Commodore 64) and Vampire V4 (Amiga) prove that there is a market for FPGAs to emulate old computer systems for games, and honestly, a virtual machine without an FPGA does not work well for that. The FPGA should be large enough to emulate 8-bit/16-bit and maybe some popular 32-bit era gaming platforms.


  • bridgman
    replied
    Originally posted by qarium View Post
    Did you not work with the community on compute in the last 10 years?
    Not to the same extent that we did for graphics. Our open source graphics focus was desktop users on common distros, while the compute focus was on large corporate customers who did not use standard distros or work in the community to the same extent.



  • qarium
    replied
    Originally posted by bridgman View Post
    I don't know the typical numbers but going to 64 bit pointers and offsets everywhere is going to increase the size even more.
    Of course it increases the size, but does it matter? Is the size itself a problem?



  • bridgman
    replied
    Originally posted by qarium View Post
    Why? Make the compiler fully 64-bit, then you do not need to accept the 2 GB size limit.
    I don't know the typical numbers but going to 64 bit pointers and offsets everywhere is going to increase the size even more.



  • qarium
    replied
    Originally posted by bridgman View Post
    Perhaps I'm missing something but I don't see how you get "little hope" from that post:

    You do not miss anything in the quoted text, but you are missing the history that is not quoted.
    I can say for sure that I tried hard and hoped for the best; for decades, outside of cryptocurrency mining, there was no real compute use case on AMD cards. The little hope comes from those decades of waiting.

    People could run CUDA on a GeForce 8800 GT from 29 October 2007, and now in 2023 ROCm/HIP still does not work on my Vega 64... that's 16 years of difference...

    All the AI hype and all the open-source AI projects should show AMD that they finally need to get their compute act together.

    Originally posted by bridgman View Post
    First we need to continue/finish work on reducing binary size,
    Why? Make the compiler fully 64-bit, then you do not need to accept the 2 GB size limit.

    Originally posted by bridgman View Post
    then we need to ramp up test coverage for consumer cards directly and/or working with the community as we do for graphics drivers.
    Did you not work with the community on compute in the last 10 years?

    If AMD releases a card like the 7900 XTX, do people not expect some test coverage on the compute parts?

    What is the problem with compute on Vulkan? Is HIP the only suitable solution, or is there hope for Vulkan compute?



  • bridgman
    replied
    Originally posted by qarium View Post
    But if you read this post you can see that there is little hope:
    Perhaps I'm missing something but I don't see how you get "little hope" from that post:

    There are a few different solutions in the works to address (1). Many of the GFX ISAs are literally identical to each other or have minimal differences. I'm confident we will solve (1) and then the libraries will at least run on all AMD GPUs. However, they would still not be validated for correctness on consumer cards unless we also solved (2).
    First we need to continue/finish work on reducing binary size, then we need to ramp up test coverage for consumer cards directly and/or working with the community as we do for graphics drivers.



  • qarium
    replied
    Originally posted by Quackdoc View Post
    It's worth noting that Intel does support a ROCm backend for their oneAPI compute. So at the very least, Intel IS pushing for a cross-platform, high-performance compute API which does support AI stuff, though I would still rather see Vulkan compute support pushed by AMD and Intel myself.
    They can try as hard as they want; in every news item I read about AI, for example Elon Musk with X/Twitter and TruthGPT, they all buy Nvidia, or in the case of Google or Microsoft, develop their own chips.

    "I would still rather see vulkan compute support pushed by AMD and intel myself."

    I am in favor of this too, but as far as I know, no one focuses on it outside of some sample-code tests.

    But if you read this post you can see that there is little hope:

    Originally posted by cgmb View Post
    To be clear, the use of shader assembly code in the math libraries isn't why this has been difficult. As far as I can tell, every library has generic fallback paths. We never build for architectures that we don't officially support, so sometimes overly-specific #ifdefs creep into the code. However, I have been building and testing each library on a wide range of GPUs, and thus far every library has worked on every GPU I've tried after only minor patches.
    As a math libraries developer, in my opinion, the two main reasons why we do not have full support for all GPUs in the ROCm math libraries are:
    1. a. There have been 25 different GFX9, GFX10 and GFX11 ISAs that have been introduced since Vega released in 2017. A library like rocsparse is roughly 250 MiB. If it were built for all 25 ISAs, that one library would be 6 GiB. We have something like 17 libraries, so the total installed size of ROCm would be around 100 GiB.
    1. b. That hypothetical 6 GiB rocsparse library couldn't actually be created. The use of 32-bit relative offsets by the compiler constrains the maximum size of a binary to 2 GiB. Any binaries larger than that would fail to link. We could create multiple versions of the library built for different GPUs and ask the user to install the version for their GPU, however, our current build and packaging system is not sophisticated enough to do that.
    2. We do not have the test infrastructure to validate every library for every GPU to support the same level of quality that we do for the MI series GPUs, and we don't have any concept of tiers of support.
    There are a few different solutions in the works to address (1). Many of the GFX ISAs are literally identical to each other or have minimal differences. I'm confident we will solve (1) and then the libraries will at least run on all AMD GPUs. However, they would still not be validated for correctness on consumer cards unless we also solved (2).
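    As a sanity check on the arithmetic in the quoted post, here is a quick back-of-the-envelope calculation in Python, assuming cgmb's stated figures of roughly 250 MiB per library per ISA, 25 GFX ISAs, and about 17 libraries:

```python
# Back-of-the-envelope check of the library sizes quoted above.
# Figures assumed from the quoted post: ~250 MiB per library per ISA,
# 25 GFX ISAs since Vega (2017), roughly 17 math libraries.
MIB_PER_LIB_PER_ISA = 250
NUM_ISAS = 25
NUM_LIBRARIES = 17

one_lib_gib = MIB_PER_LIB_PER_ISA * NUM_ISAS / 1024   # one library, all ISAs
total_gib = one_lib_gib * NUM_LIBRARIES               # whole ROCm math stack

print(f"one library built for all {NUM_ISAS} ISAs: ~{one_lib_gib:.1f} GiB")
print(f"all {NUM_LIBRARIES} libraries: ~{total_gib:.0f} GiB")

# The 2 GiB link limit follows from the compiler's use of 32-bit
# signed relative offsets: 2**31 bytes is exactly 2 GiB.
print(f"32-bit relative offset limit: {2**31 / 2**30:.0f} GiB per binary")
```

    This reproduces the "6 GiB" and "around 100 GiB" figures in the post (about 6.1 GiB per library and about 104 GiB total), and shows why a single all-ISA binary would exceed the 2 GiB linkable size.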



  • Quackdoc
    replied
    Originally posted by qarium View Post

    Stuff like that will always be the case as long as the open-source community does not build and organise its own open-source GPU hardware... like the Libre-SOC GPU...

    Of course it is not "too late", but you will see they will not do it. And the answer is always the same in this case: buy the next generation.

    Honestly, I do not understand why Intel and AMD do not work together in the GPU field, because with CUDA/DLSS and so on they both lose against Nvidia...

    And the AI hype makes CUDA even more important.

    But I know that many years ago AMD and Intel had an APU-like product together, with an Intel CPU and an AMD GPU; this cooperation failed horribly...

    The fact that Intel and AMD do not work together on the GPU side gives Nvidia a free pass.
    It's worth noting that Intel does support a ROCm backend for their oneAPI compute. So at the very least, Intel IS pushing for a cross-platform, high-performance compute API which does support AI stuff, though I would still rather see Vulkan compute support pushed by AMD and Intel myself.

