AMD Zen 2 + Radeon RX 5700 Series For Linux Expectations

  • oiaohm
    Senior Member
    replied
    Originally posted by tildearrow View Post

    Oh, so this cache is basically the same as L3 cache.
    Now what I am wondering is: since this is a pretty huge cache for the price, is it as fast as a normal SRAM cache (AKA veeeeery fast), or is it slightly slower to reduce costs?
    As I said, GameCache is L3 + L2. There is 32 MB of L3 per CPU core chiplet and 0.5 MB of L2 per active core. Everything on the CPU core chiplet will be built on the 7nm process, so it will be very fast.

    Really it's going to be interesting to see how many workloads are cache sensitive. A bigger cache may also ease AMD CPUs' hunger for faster and faster RAM.


  • tildearrow
    Senior Member
    replied
    Originally posted by skeevy420 View Post

    I'd like a workstation that supported multiple APUs so a workstation user wouldn't need multiple GPUs to fire up a VM for pass through. I imagine that would be a decent testbed for OS maintainers & developers, UI designers, etc when combined with NUMA, etc...or even office environments where two or more people could share the same system in a multi-seat setup since all we'd need is a monitor and USB hub for each APU.

    Years back, I assumed we'd have something like that by now from either Intel or AMD. I'm a little sad that we don't.

    I often wonder if pressure from game console manufacturers is why we don't have good APUs yet. I really doubt that MS or Sony wants an APU on the market that beats the PS4 or XB1 @1080p for less than the cost of the consoles.

    Dear AMD,

    If you're gonna do something with HBM this year, please do it with APUs and not GPUs. An APU designed around 1080p gaming or 2K-4K desktop work is better suited to deal with the 4GB HBM limit than gaming or professional GPUs.

    Also, your model numbers really suck right now. It's hard to tell if someone is talking about the RX 5700 or if they typo-ed RX 570 or if they're talking about motherboards. How frickin' hard would it have been to call it an RX 670? It's like y'all hired Slackware's marketing team for this launch. Please, please don't tell me this is a pecker wiggling contest with Nvidia over who has the highest model number since only morons think stupid thoughts like "5700 is a bigger number than 2080 therefore it must be better...oh snap, there's the HD7970, it's HD with a bigger number so it has to be better ".
    2 years later: Radeon RX 57000 and RX 58000.

    (They skipped 56000 to prevent Motorola from bringing up a lawsuit)

    (However one year later they release the RX 68000... oops!)
    Last edited by tildearrow; 11 June 2019, 03:43 PM.


  • tildearrow
    Senior Member
    replied
    Originally posted by starshipeleven View Post
    Your definition of "gaming" is wrong, how did you come up with this bullshit.
    Which is why I called it a rambling: it doesn't make much sense, I just wanted to let it out.

    Originally posted by starshipeleven View Post
    Gaming boards are more durable than "office" ones as they are designed to withstand overclocking (i.e. being stable when pushed to the limits, with insane CPU power draw) and high-end cards that at times may draw more from the PCIe slots than the spec mandates.

    Also gaming boards and cards in general have decent heatsinks (or at least a better shot at it).

    Yes but that's not what you think. Seriously what the fuck made you think that "gaming" hardware is unstable.
    The f**k that made me think gaming hardware is unstable is this: it isn't designed for 100% reliable operation (and especially not 24/7). When I first bought this motherboard it would hard-freeze after a few hours of use, and not even SysRq could reboot it. After a firmware update the freezes stopped. However, in November 2017 I found out it may still freeze (especially when playing Dolphin), although rarely, which means, no, it isn't 100% reliable/stable.

    Originally posted by starshipeleven View Post
    The difference is in computing (and certification) power requirements.
    A "creator" is usually fine with high-end CPUs and GPUs, a "workstation user" needs top-of-the-line CPUs and top-of-the-line GPUs, sometimes more than one such GPU.
    User of 2 graphics cards here, but the problem was that my PSU couldn't take it and every damn hour it'd randomly turn off the AMD card, so I had to go back to 1 card.


  • tildearrow
    Senior Member
    replied
    Originally posted by LeJimster View Post
    Calling the L3 cache "gamecache" is so dumb. I hate these marketing guys sometimes.
    Oh, so this cache is basically the same as L3 cache.
    Now what I am wondering is: since this is a pretty huge cache for the price, is it as fast as a normal SRAM cache (AKA veeeeery fast), or is it slightly slower to reduce costs?


  • cusa123
    Junior Member
    replied

    I don't think it speaks well for the Radeon segment on Linux that there is no graphical interface for even minimal control of the cards. Is this what they intend to sell to Google Stadia?


  • starshipeleven
    Premium Supporter
    replied
    Originally posted by skeevy420 View Post
    And they won't ever be more than that until AMD or Intel puts real weight behind them.
    And they won't put real weight behind them because APUs are by definition the same CPU and GPU technology you see in dedicated parts, cut down to a single device's power limit to make a single hybrid device that costs less than making two dedicated pieces of hardware at its power level.

    My workstation has it and it's 9 years old...and multiple processors.
    That's not what I said. VT-d/IOMMU is common on Intel socket 1156 and later (and AMD stuff of the same period).

    I said that in most workplaces this is NOT set up because workstation users don't need hardware passthrough. Most workstation users are actually running 2-3 applications, that's it.

    You don't? You aren't a standard user.

    You really don't see the value in having multiple NUMA nodes with their own dedicated GPU without needing multiple GPUs?
    No. If we are talking of computing or serious work I'd rather have the full choice of what goes in the system, the less integrated crap the better.

    Maybe I want to make a wimpy CPU with 8 GPUs, maybe I want to make a GPU-less 4-CPU monster with hundreds of cores.
    Maybe I don't give a shit about CPU/GPU power and I just want 6 SAS cards to add a zillion hard drives.
    And anything in between.

    Having a hybrid part only limits the choice, and given that I'm paying thousands of $$ for each part I'd rather not waste them on unnecessary hardware.

    For consumer hardware you don't usually need so much flexibility, which is why nowadays even mini-ITX motherboards come with everything you might need, and ATX boards are mostly useless beyond bragging rights.

    There's always that, but I know that I'd rather maintain one OS on one multi-user system based on commercial/enterprise grade hardware over multiple Walmart specials.
    That's because you never had to deal with mere hundreds or even thousands of users. Having a single bigass PC for multiple people is a fun LinusTechTips youtube video series (yes they did that), but it quickly breaks down once you go into real life scales.

    "maintaining the OS on multiple stations" is a solved issue for decades. I can deploy any number of new office stations in minutes by netbooting and initiating a generic windows OS image restore. It includes drivers for EVERYTHING thanks to driverpacks so it will work fine with every hardware.

    Software breaks? Netboot and restore the generic image.

    Hardware breaks? Full workstation swap, restore the generic image, call up Dell/HP and ask them to come and fix their shit; or, if we are the ones providing support for that station, I or others in the company can look into it when there is some downtime.

    Having a single authentication server (Windows has Active Directory; Linux has Kerberos and others) that contains the users' home folders takes care of everything else. Users just log in with their account and boom, the station is ready with all their data.

    And those also have GDDR5 over DDR4 for the GPUs, twice the amount of cores & threads available (albeit at a lower operating frequency), and more GPU cores/compute units. Theoretical same performance or not, what's in the shitty versions of the consoles is better than the Ryzen 5 2400g due to the GDDR5 alone.
    That's not how theoretical same performance works.

    Theoretical same performance means that no matter how different the numbers in the hardware are, they will likely perform similarly.

    The consoles of the same era of the APU will likely be somewhat better on the GPU side, but still it's not THAT different.

    You gotta start somewhere, you gotta start sometime. What better place than here? What better time than now?
    When other products that won't become too expensive to sell with HBM added have sold enough to lower the cost of HBM, duh.

    No shit, but you know damn well I'm referring to pro-grade workstations and servers.
    Pro workstations will be using dedicated cards for the reasons posted above.

    You don't think Google wouldn't want rack servers with PS5-grade APUs in multi-socket setups to power Stadia?
    They are making custom boards with HBM2, custom CPUs and custom GPUs and will likely have a custom high-speed board interconnect bus (as they said you will be able to pool more than one unit if you want) for a specific high-profile multi-million dollar investment.

    That's not an "APU", and there won't be any "sockets". It's HUGELY EXPENSIVE high end highly integrated custom-built computing hardware. It's basically a high-power embedded system, similar to what Facebook's OpenCompute designs are https://www.opencompute.org/

    Only a few companies can pay the stupendous upfront price to get something like that designed and built, and they are doing so only because they have a SPECIFIC purpose for it in mind. They have so much cash that they can transcend the need for standard hardware, but what I said above still applies.

    "No. If we are talking of computing or serious work I'd rather have the full choice of what goes in the system, the less integrated crap the better."

    Only what they specifically need goes in this custom hardware design. Everything else is dead weight.

    You need to understand that anything sold to a wider public, even $5k server boards mounting 4x $10k Xeon processors, is designed to be very generic so it can handle many more different tasks than that.

    APUs would not be generic enough for the pro market, and no I don't care about your opinion, that's just the reality.

    You don't think photo editors and people into machine learning and people into using GPUs for processing wouldn't want a dual APU-Pro system so they have one for their display and another for OpenCL or whatever?
    No, there is no drawback in using the same GPU to drive their display, even multiple monitors.
    Display controllers are not involved in computing anyway.
    Last edited by starshipeleven; 11 June 2019, 12:31 PM.


  • atomsymbol
    Senior Member
    replied
    Originally posted by phoronix View Post
    Phoronix: AMD Zen 2 + Radeon RX 5700 Series For Linux Expectations

    This weekend I was out at the AMD E3 event learning more about their third-generation Ryzen processors as well as their equally exciting AMD Radeon RX 5700 series Navi hardware. Being at the event, one could reasonably deduce the Linux support will be great, and it does appear to be that way, building upon their improvements to earlier GPUs and Zen processors, though obviously we will begin testing these new processors and graphics cards soon. At least for the Zen 2 processors, I am confident in their Linux support, while on the Navi side we are awaiting Linux driver support but I am optimistic it will work out nicely. Now that the initial embargo has expired, here are more details on these new AMD products launching 7 July and my Linux information at this time.

    http://www.phoronix.com/vr.php?view=27968
    Prediction: 10-20 years into the future the micro-op cache (µ-op cache) will become the most important part of a CPU in terms of maximizing x86 ILP (instruction-level parallelism).

    The micro-op cache in Zen 2 can store 4K entries and can generate up to 8 fused instructions per clock (https://www.anandtech.com/show/14525...nd-epyc-rome/8).

    See also: https://en.wikipedia.org/wiki/Transmeta_Crusoe


  • moilami
    Senior Member
    replied
    Originally posted by tildearrow View Post
    That MEG "creation" Threadripper motherboard says it's for creators but if you actually break the acronym down, you see MSI Enthusiast Gaming so it is not a true workstation motherboard... so basically are you saying that a "creator" is different from a workstation user? Both do some sort of work, but you can't be giving the creator a less reliable thing.... some of them really demand the whole environment to be stable and therefore need a workstation machine... They are creators, they don't want this "creator" thing, so they need workstations, AKA real machines where the work can be done.
    I would guess it is marketed for Youtube and Twitch content creators, of which many are gamers streaming and doing videos.


  • skeevy420
    Senior Member
    replied
    Originally posted by starshipeleven View Post
    Seriously what the fuck.
    Are you aware of the complexity required by multi-CPU boards and the CPUs supporting this? APUs are CONSUMER hardware; they are first and foremost supposed to be cheap enough for the consumer market.

    You are basically asking to add caviar in your sandwich.
    And they won't ever be more than that until AMD or Intel puts real weight behind them.

    This is like not done... ever, in commercial environments.

    Where you get pcie passthrough is in servers doing stuff, not in workstations.
    My workstation has it and it's 9 years old...and multiple processors. You really don't see the value in having multiple NUMA nodes with their own dedicated GPU without needing multiple GPUs?

    or, you know, make two vastly cheaper boards with vastly cheaper APUs and run them as different PCs.
    There's always that, but I know that I'd rather maintain one OS on one multi-user system based on commercial/enterprise grade hardware over multiple Walmart specials.

    FYI: Ryzen 5 2400g's GPU has similar TFLOPS as the PS4/Xbone. https://www.thumbsticks.com/are-amd-...nsole-killers/
    And probably the newer APUs will be more or less the same as PS4 Pro and XboneX.

    They won't go farther than that because of what I said above. APUs are supposed to target the lower end of the market: consumers that can't afford (in money or size/thermal budget) a decent discrete graphics card, yet aren't covered by entry-level integrated graphics.
    And those also have GDDR5 over DDR4 for the GPUs, twice the amount of cores & threads available (albeit at a lower operating frequency), and more GPU cores/compute units. Theoretical same performance or not, what's in the shitty versions of the consoles is better than the Ryzen 5 2400g due to the GDDR5 alone.

    Same as above. HBM is EXPENSIVE. You don't put that in low end consumer hardware.
    You gotta start somewhere, you gotta start sometime. What better place than here? What better time than now?

    No shit, but you know damn well I'm referring to pro-grade workstations and servers. You don't think Google would want rack servers with PS5-grade APUs in multi-socket setups to power Stadia? You don't think photo editors and people into machine learning and people into using GPUs for processing would want a dual APU-Pro system so they have one for their display and another for OpenCL or whatever?

    You had a quote fail below this so, yeah, that's about it.


  • starshipeleven
    Premium Supporter
    replied
    Originally posted by skeevy420 View Post
    I'd like a workstation that supported multiple APUs
    Seriously what the fuck.
    Are you aware of the complexity required by multi-CPU boards and the CPUs supporting this? APUs are CONSUMER hardware; they are first and foremost supposed to be cheap enough for the consumer market.

    You are basically asking to add caviar in your sandwich.

    so a workstation user wouldn't need multiple GPUs to fire up a VM for pass through.
    This is like not done... ever, in commercial environments.

    Where you get pcie passthrough is in servers doing stuff, not in workstations.

    office environments where two or more people could share the same system in a multi-seat setup since all we'd need is a monitor and USB hub for each APU.
    or, you know, make two vastly cheaper boards with vastly cheaper APUs and run them as different PCs.

    I often wonder if pressure from game console manufacturers is why we don't have good APUs yet. I really doubt that MS or Sony wants an APU on the market that beats the PS4 or XB1 @1080p for less than the cost of the consoles.
    FYI: Ryzen 5 2400g's GPU has similar TFLOPS as the PS4/Xbone. https://www.thumbsticks.com/are-amd-...nsole-killers/
    And probably the newer APUs will be more or less the same as PS4 Pro and XboneX.

    They won't go farther than that because of what I said above. APUs are supposed to target the lower end of the market: consumers that can't afford (in money or size/thermal budget) a decent discrete graphics card, yet aren't covered by entry-level integrated graphics.

    If you're gonna do something with HBM this year, please do it with APUs and not GPUs.
    Same as above. HBM is EXPENSIVE. You don't put that in low end consumer hardware.

    Also, your model numbers really suck right now. It's hard to tell if someone is talking about the RX 5700 or if they typo-ed RX 570 or if they're talking about motherboards. How frickin' hard would it have been to call it an RX 670? It's like y'all hired Slackware's marketing team for this launch. Please, please don't tell me this is a pecker wiggling contest with Nvidia over who has the highest model number since only morons think stupid thoughts like "5700 is a bigger number than 2080 therefore it must be better...oh snap, there's the HD7970, it's HD with a bigger number so it has to be better ".
    Last edited by starshipeleven; 11 June 2019, 09:41 AM.

