AMD Launches Antigua (Tonga) Powered Radeon R9 380X


  • dungeon
    replied
    SystemCrasher

    Well, I can't agree more; there is a disconnect between development and end use.

    About HSA it is simple: people will praise it immediately if there are end-user GUI apps which can take advantage of it, and I mean now... so yeah, reality is now, not tomorrow. Somehow AMD likes to advertise what will potentially come in the future, but users can't wait.

    Let me recall the advertising of HSA's "compute cores" at the Kaveri launch... OK, that was nearly 2 years ago; can I use anything HSA now?

    I totally agree the end user will probably say "OK, VR, but what about now?"

    Last edited by dungeon; 22 November 2015, 05:41 PM.

  • bridgman
    replied
    Is anyone "marketing" HSA that way, other than random posters on the internet (present company excepted, of course)?

    I don't think anything from the HSA Foundation has ever suggested anything along those lines. What we are doing is giving GPUs/DSPs/whatevers some extra capabilities which were largely CPU-only previously... ability to access pageable system memory, ability to dispatch work to themselves or other GPUs/DSPs/whatevers without CPU intervention, extending coherent memory interfaces to include non-CPU devices, platform atomics between CPU and GPU/DSP/whatever... that kind of stuff.

    Some aspects of it require HW capabilities that only exist in what we call APUs (where GPU accesses shared system memory through something like an IOMMUv2) but many of the conventions extend to arbitrary GPU/CPU combinations as long as the GPU and CPU have the right capabilities.
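    To make the "platform atomics" part concrete: as a rough analogy, the idea is that a CPU agent and a GPU/DSP agent can update the same memory location coherently, much like two CPU threads sharing a std::atomic. A toy CPU-only sketch (nothing here is HSA-specific; both "agents" are just plain threads standing in for heterogeneous devices):

    ```cpp
    #include <atomic>
    #include <cassert>
    #include <thread>

    int main() {
        // Shared location both "agents" update coherently.
        std::atomic<int> counter{0};

        auto agent = [&counter] {
            for (int i = 0; i < 1000; ++i)
                counter.fetch_add(1, std::memory_order_relaxed);
        };

        // Stand-ins for a CPU agent and a GPU/DSP agent.
        std::thread cpu_agent(agent);
        std::thread other_agent(agent);
        cpu_agent.join();
        other_agent.join();

        // Every increment is visible; no updates are lost.
        assert(counter.load() == 2000);
        return 0;
    }
    ```

    Under HSA the second agent would be an actual GPU or DSP touching the very same pageable system memory; that coherent view is the capability being added.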

    EDIT - just took another read through the HSAF site to see if anything like the "marketing" you described had appeared, but didn't find anything. What I think you're seeing is what happens when people try to distill several hundred pages of technical detail down to a catchy headline.
    Last edited by bridgman; 22 November 2015, 05:08 PM.

  • duby229
    replied
    I have to admit I'm skeptical of HSA too. It gets marketed as if GPU cores are exactly the same as CPU cores and any application can use them. But we all know that isn't true at all.

  • bridgman
    replied
    Originally posted by SystemCrasher View Post
    OK, OpenCL seems reasonable. But WTF is HSA? And it only works on a few select APUs? EPIC FAIL! Now they are messing with CUDA. Hrmph. Where are they going to jump next?
    Um... we're not jumping around as far as I know. We are not "messing with CUDA" as you put it, we are running the HSA stack on dGPUs with a C++ compiler and providing tools that make it easy to *port* CUDA code to run on that C++ compiler. The ported code can still run through NVidia tools, so basically we're helping to move from a proprietary standard to an open standard.
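    To illustrate what "easy to port" means here: much of the work is a mechanical rename of CUDA runtime calls to portable equivalents. A toy sketch of that idea (the function name toy_hipify and the rename rule are mine for illustration, not AMD's actual tool):

    ```cpp
    #include <iostream>
    #include <regex>
    #include <string>

    // Rename cudaXxx identifiers to hipXxx: the mechanical part of a port.
    std::string toy_hipify(const std::string& src) {
        return std::regex_replace(src, std::regex("\\bcuda([A-Z]\\w*)"), "hip$1");
    }

    int main() {
        std::string cuda_src =
            "cudaMalloc(&ptr, n); "
            "cudaMemcpy(dst, src, n, cudaMemcpyHostToDevice); "
            "cudaFree(ptr);";
        std::cout << toy_hipify(cuda_src) << "\n";
        // prints: hipMalloc(&ptr, n); hipMemcpy(dst, src, n, hipMemcpyHostToDevice); hipFree(ptr);
        return 0;
    }
    ```

    The ported source no longer hard-codes the proprietary API, and since the rename is mechanical, the same code can still be taken back through NVidia's tools.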

  • SystemCrasher
    replied
    dungeon, you see, AMD is good at innovation. They were the first to bring us 64 bits (sorry, Intel, but Itanium doesn't count, due to awful prices and the resulting lack of adoption, etc). And actually the whole bad reputation of the Pentium 4 is thanks to the Athlons, which were much better (to the degree that Intel had to sell PXA to Marvell to keep their financial reports adequate; I guess they had reason to regret that decision).

    Then AMD created APUs, and it seems they were nearly the first in the industry to get the idea that GPGPU is the way to go. But somehow they failed to come up with a good implementation, thanks to their drivers and also their fragmented efforts. OK, OpenCL seems reasonable. But WTF is HSA? And it only works on a few select APUs? EPIC FAIL! Now they are messing with CUDA. Hrmph. Where are they going to jump next? Nvidia, on the other hand, doesn't jump around like crazy and just gradually improves CUDA. It works on most of their GPUs, not a couple of APUs. And somehow Nvidia's Linux drivers perform better.

    Now AMD is the first company on the planet to build a GPU as a multi-chip assembly and... and what? Take a look at the Phoronix benchmarks. Fury isn't so furious, eh? Hey, AMD, WTF is up with your software, dammit?

    Whatever; AMD isn't good at two things: software and marketing. First, their drivers tend to suck. Second, even when that's not the case and they manage to do something epic, they fail to take market share at the deserved pace. That's what happened with the Athlons, and it's happening now with HBM. AMD's hardware engineers delivered a unique advantage, being first on the planet to do it. And now it's stuck on crappy Catalyst, as usual? Erm, AMD, will you ever learn from your past mistakes? They clearly need to fire half of their management in the software and marketing departments and hire people who can orchestrate software development so it doesn't look the way it does today, and people who are able to actually sell the good achievements of the engineers, dammit. And probably the whole legal department, which has generally proven to be just a bunch of saboteurs, delaying development processes here and there with zero added value from this activity.

  • dungeon
    replied
    SystemCrasher

    Nothing, just that it would need more time if they did not do the radeon/amdgpu separation. As you can see, for amdgpu it took a year plus for the driver to become usable for an end user, at least for one ASIC.

    Not sure when they actually got the idea and started working on it internally; maybe it is more correct to say 2 years were needed for amdgpu (and that also thanks to the internal Catalyst devs involved).

    5 years is my rough estimate for a radeon overhaul instead.

    I don't say it is not good, just that those things take time to be developed.

    For example, HBM needed 3 years until the first prototype proposal, 5 years to be developed and accepted by JEDEC, and 7 years until the first adoption in an actual product, AMD Fiji.

    So 8 years after that, Nvidia's Pascal GPUs will adopt it; can we say Pascal is already old, like Kano likes to say, even though it is not released yet? :P
    Last edited by dungeon; 21 November 2015, 11:01 AM.

  • GreatEmerald
    replied
    Isn't it that Celeron is the lowest tier, and Pentium is one tier higher?

  • Kano
    replied
    The name is Pentium G4xxx; later Celeron models with Skylake will most likely be called G2xxx. The number usually increases with a new chip, but Broadwell was skipped. It is no Pentium 4.

  • SystemCrasher
    replied
    Originally posted by smitty3268 View Post
    I think more people associated Celeron with slow crap than Pentiums - for me the Pentium brand makes me think more of the original Pentium 1 than the P4.
    Hmm, well, it seems I seriously misunderstood the "Pentium 4" part of the original text, and it rather meant "4+ GHz" than "Pentium 4". The Pentium 4 was the last of the Pentiums and the worst of them. It was hot, slow, and actually had a hard time competing with the Athlons. Users got such a negative impression of it that Intel had to abandon the "Pentium" brand, using other brands like "Core" instead.
    Last edited by SystemCrasher; 21 November 2015, 02:43 AM.

  • SystemCrasher
    replied
    Originally posted by dungeon View Post
    Well, there is a reason why the closed drivers fglrx/nvidia do legacy driver separations... to not break things for older chips while developing for new ones.
    There could be another reason: "why throw resources at this crap?!" Resources are limited, and the demanding customers, the ones who generate most of the profit, are buying newer GPUs anyway. Old devices can also be way too different, so they could be both hard to maintain and could even limit the supported features, or require an insane number of code paths.

    One of the reasons for the separation of amdgpu and radeon is again the same reason - to not break older
    I've got the impression they decided they wanted a certain minimum of hardware abilities to build on, and cut everything which can't do what they want or isn't in the best shape for it. As you can see, they are going for GPU VM and now also for scheduling, making GPUs more and more similar to CPUs.

    Imagine AMD did an overhaul of the radeon driver (instead of developing amdgpu) to support the new approach; the driver would likely be completely broken for the next 5 years
    IIRC, AMDGPU is a heavily reworked... old version of the radeon driver. Also, the RadeonSI parts are heavily based on the old R600g. So the code bases aren't entirely different (caveat: kernel and user modes are mixed in this example, to show the overall situation). Though the AMDGPU kernel part has been restructured, and as far as I can see, it is now split according to IP block revisions; I guess that's how Catalyst did it. Yet, as you can see, the user-mode parts still use RadeonSI with the AMDGPU module. Overall, it looks more or less reasonable and sane, and I'm fine with it.

    and even then things would not be reverted to previous state for older chips
    Actually, I'm quite happy with how the AMD open-source drivers work on virtually all GPUs I've seen, except maybe the newest. And AMDGPU+RadeonSI works, sort of. We can see a nice round of Phoronix benchmarks, etc., and while there are things to iron out, it mostly works and... what did you say about 5 years?

    People with older chips should be really happy because the driver is not touched there.
    If software is not touched, it is dead, like MS-DOS. But actually, almost all drivers are touched, except maybe the most ancient things, where everything the hardware can do is already used and at most there could be some optimization or so. Or, in some unusual cases, you can see hacks allowing hardware to do even more than it originally could, thanks to workarounds, etc.
