If I was given a dime every time somebody stated "AMD" or AMD-related terminology within this thread, I certainly would not be broke!
Intel Arc Graphics A750 + A770 Linux Gaming Performance
-
Originally posted by Anux: "True, but still AMD never had a problem staying behind Intel with DDR or PCIe."
I can understand AMD not wanting to get left behind by staying on PCIe 4.0, even though it's more than enough for desktop users.
Originally posted by Anux: "They probably saw benefits of having PCIe 5 with SSDs and GPU direct access."
In terms of SSDs, not only do desktop users not need PCIe 5.0 speeds, but drives that can even come close to maxing out PCIe 4.0 on reads are only just working their way to market.
It took a year after AMD's PCIe 4.0 support landed for SSDs to hit the market that could exceed PCIe 3.0 speeds. We're now seeing the same story play out with PCIe 5.0 vs 4.0. And when drives do reach those speeds, they're burning lots of power, which means they throttle, which means they can't sustain it.
AMD was early on PCIe 4.0 support, which is why they got the jump on Intel. Intel responded by being even earlier on PCIe 5.0. And it's not as if this stuff is "free": everybody is paying for it in higher motherboard costs and power consumption, just for Intel's bragging rights.
Last edited by coder; 09 October 2022, 06:55 PM.
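For context on the "SSDs can't even max out PCIe 4.0" point, here is a quick back-of-the-envelope sketch. The per-lane figures are the commonly cited approximate usable rates after line encoding, not numbers from this thread:

```python
# Approximate usable per-lane bandwidth in GB/s for each PCIe generation
# (3.0/4.0/5.0 all use 128b/130b encoding; the rate doubles each generation).
PER_LANE_GBPS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# A typical NVMe SSD uses an x4 link.
for gen in PER_LANE_GBPS:
    print(f"PCIe {gen} x4: {link_bandwidth(gen, 4):.1f} GB/s")
```

Roughly: 3.0 x4 is ~3.9 GB/s, 4.0 x4 is ~7.9 GB/s, and 5.0 x4 is ~15.8 GB/s, so a drive would have to sustain close to 8 GB/s of reads before PCIe 5.0 buys it anything.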
-
Originally posted by Dukenukemx: "They have direct control. Either that or sell their own motherboards because clearly their board partners have lost their minds."
Also, I read some mention of the new Zen 4 boards having 2 "chipset" chips? That indeed sounds more expensive than 1, but it depends on the specifics.
I think the main issue with motherboard costs for Alder Lake and Zen 4 is PCIe 5.0. That requires boards with more layers, more expensive materials, and retimers. DDR5 also isn't helping.
The other thing that's happened is inflation. Board makers' input prices have gone up, sometimes considerably. Some of the more extreme examples (like Ethernet PHYs costing hundreds of dollars) will get sorted out as silicon manufacturing backlogs are worked through, but we obviously can't expect current-gen board prices to bottom out at the same level as earlier gens.
Originally posted by Dukenukemx: "You can't raise prices and expect stuff to sell. In fact, it isn't."
-
Originally posted by Dukenukemx: "You have a source or not? You can't just say things without proving it. Put a link."
Originally posted by phoronix, in the article for this comment section: "But even still some games out there -- some Steam Play titles -- simply won't run yet with the Intel "ANV" Vulkan driver. Among the few unimplemented features is the Vulkan sparse support needed for some VKD3D-Proton games like DIRT 5, Deathloop, Assassin's Creed: Valhalla, Forza Horizon 4/5, and other modern titles."
The linked bug report is a year old and says that "d3d12 feature level 12_0 and beyond are currently unavailable on Anv". The number of games requiring feature level 12_0 to even launch has only increased.
Last edited by smitty3268; 09 October 2022, 05:52 PM.
-
Originally posted by qarium: "well you do not have a apple m1/m2 means all you talk is pure speculation."
And people who do have an M1/M2 told you LLVM-Pipe is already fast enough.
-
Originally posted by Dukenukemx: "Do you own one?"
No need to work on 3D acceleration then. You hear that guys? LLVM-Pipe is good enough for anyone.
You really, really don't get it. Just because you can watch YouTube and use simple UI apps on LLVM-Pipe does not mean people do not want full GPU performance.
As a gamer you have a specific view on the Apple M1/M2 that this hardware does not serve anyway. Even if the driver were perfect, you still would not want it and still would not buy it.
That's the joke about you: you demand stuff, and even if Apple did everything you want, in the end you would not buy it.
To be honest, you are a customer for hardware like this: a laptop with something like an Intel 13900K + Nvidia RTX 4000 on 4 nm with fake-frame DLSS 3.0 (this alone saves 25% energy).
And whatever Apple does, they can never compete with this, because all your legacy games are closed-source x86 games.
The best-case scenario for the Apple M3 would be if Apple licensed and put in an RDNA3 GPU, which has hardware for FSR 3.0 (similar to the DLSS 2.x hardware on RTX cards). But even if Apple did this, it could not compete with an "Intel 13900K + Nvidia RTX 4000", because in your case all your games are x86 and Vulkan or DX12.
So it's completely pointless to talk with you about the Apple M1/M2/M3,
and it's also pointless to talk with you about Intel Arc, because in the end you will not buy it.
The problems right now are so big that you will not buy it: some games do not run in Proton, games that do run have hangs/stuttering, and so on.
People say all the bad reviews come from Windows-only and/or gamer-only reviewers, but on Linux gamers need Proton and many titles do not run.
So, to be honest, it looks like Intel Arc is not for you YET. I am 100% sure you will not buy it.
Wait a year and maybe Intel has fixed all the problems. It was the same for AMD with the RDNA1 release; they needed nearly a year to make it all run nicely on Linux.
Phantom circuit Sequence Reducer Dyslexia
-
Originally posted by coder: "Being able to run dual-x8 GPUs @ PCIe 5.0 is the singular argument for having it on the desktop, and only if you're running high-end GPUs of the sort a desktop PSU could barely power 2 of."
they're burning lots of power, which means they throttle, which means they can't sustain it.
-
Originally posted by Anux: "If I'm seeing the trend correctly, especially the cheaper entry level GPUs come with cut down PCIe interfaces like the 6500 XT with PCIe 4 x4."
Nor is it clear exactly how much it saves. The PCIe protocol didn't get more complex until PCIe 6.0 (using PCIe 3.0 as baseline), so the amount of silicon a PCIe controller takes per-lane shouldn't increase until then. I'd expect the difference between x4 and x8 to be negligible, in absolute terms. Just guessing, since PCIe 3.0 x8 interfaces didn't seem too onerous at 40 nm, even.
Originally posted by Anux: "SSDs with PCIe 5 are in the works."
Originally posted by Anux: "There is already a "solution" to that: https://www.teamgroupinc.com/en/uplo...969f14c489.jpg We might see active cooled ones with PCIe 5."
Originally posted by Anux: "Increasing the Bandwidth of SSDs is easy, just build an internal "raid 0"."
-
Originally posted by coder: "Well, it was originally designed as a laptop GPU. So, I wouldn't read too much into the x4 thing. We've had lots of entry-level GPUs with x8 interfaces, but x4 isn't yet a trend."
The PCIe protocol didn't get more complex until PCIe 6.0 (using PCIe 3.0 as baseline), so the amount of silicon a PCIe controller takes per-lane shouldn't increase until then.
No consumer SSDs can sustain writes much above PCIe 2.0 speeds. So, it's mostly just specsmanship.
That's a good thing?
The M.2 form factor is not good for that. Controllers eat up precious board space, not to mention the additional heat.
-
Originally posted by Anux: "It will become a trend with PCIe 5. Intel also used 8x"
x4 would be a new development, if this is anything more than a one-off. But it's also kind of pointless from a system perspective, because the card is mechanically x16 and the motherboards have to support x16. So, it's only a (tiny) cost savings at the silicon level, plus shaving a couple cents off the graphics card board cost.
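To illustrate why a narrow link matters more than it first appears: a PCIe link negotiates down to the lower generation and narrower width of the two sides, so a 4.0 x4 card like the 6500 XT dropped into a 3.0 board ends up at 3.0 x4. A simplified sketch, using the commonly cited approximate per-lane rates (real link training is more involved than a `min()`):

```python
# Simplified model: a link runs at the lower of the two generations and
# the narrower of the two widths negotiated between card and slot.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def negotiated_bandwidth(card_gen: int, card_lanes: int,
                         slot_gen: int, slot_lanes: int) -> float:
    """One-direction bandwidth in GB/s of the negotiated link."""
    gen = min(card_gen, slot_gen)
    lanes = min(card_lanes, slot_lanes)
    return PER_LANE_GBPS[gen] * lanes

# An x4 PCIe 4.0 card in a PCIe 3.0 x16 slot: ~3.9 GB/s,
# a quarter of what a 4.0 x16 card would get in the same system.
print(f"{negotiated_bandwidth(4, 4, 3, 16):.1f} GB/s")
```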
Originally posted by Anux: "The coding is more complex, so it would need more silicon. How much, I don't know."
Originally posted by Anux: "What so special about raid0? It barely needs any logic. 1 controller and 4 mem chips and you're good to go."