Bridgman Is No Longer "The AMD Open-Source Guy"


  • necro-lover
    replied
    Originally posted by bridgman View Post
    Fixed that for you..."HSA is marketing speech for: shader based calculation without the usual drawbacks and overheads. Now you can imagine how good this will work."
    The UVD unit also uses the shaders "without the usual drawbacks and overheads"; now you can imagine how well that works with the open-source drivers on an HD 2900 with a broken UVD unit, or on an HD 3870 where UVD is closed-source only, and so on and so on.
    Your marketing speech is a ridiculous attempt to sell the new DRM TrustZone system; what we will get is (old UVD unit + TrustZone) = HSA!!!
    And Intel already has "Intel Trusted Execution Technology", and because of this they can deliver video acceleration right now with open-source drivers.

    But hey, the fools will learn the truth when the anti-consumer HSA hardware is ready to sell; then we will all know how well TrustZone works against the customers...

    Any positive imagination is just naivety --> "Fixed that for you"



  • smitty3268
    replied
    Originally posted by e.a.i.m.a. View Post
    Which video, which player, which distro? With my X120e, playing the preview of 'Skyfall' in 720p and 1080p, CPU usage was 35-55% on average for HD, and 70-90%. I tried raising the resolution up to 1600x1200 and CPU usage did not rise much, a few percent at most. I am running openSUSE 12.1 with the latest updates and Catalyst 12.8, under Xfce. Desktop effects are disabled, and I think anti-aliasing is too. For the purpose, I downloaded the previews and played them under VLC.

    I hadn't watched that video yet, but the proposals I've seen about the OpenGL implementation seem promising. If only more specifications were available to them, along with a properly working BIOS and full control of power management, the game would not be the same.
    For the Intel demo, it's using GStreamer + libva acceleration, rendering directly into a GPU texture which Wayland then uses to render onto the screen.

    According to the devs, I think that was the best their hardware could do (Sandy Bridge, I think, or maybe Ivy Bridge), but future hardware would allow them to use a hardware overlay to display the video instead, which would save a lot of power compared to using the GPU texture. The problem with current hardware is that the VA-decoded format is incompatible with the current hardware overlays.
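The decode path described above (hardware decode via libva, presented without a CPU round-trip) can be sketched as a GStreamer pipeline description. This is only an illustration: the demo's actual code was not posted, and the element names (`vaapidecodebin`, `vaapisink`) assume the gstreamer-vaapi plugin set is installed.

```python
# Sketch of a VA-API playback pipeline in gst-launch-1.0 syntax:
# hardware decode on the GPU, presented without copying frames back
# to system memory. Element names assume the gstreamer-vaapi plugins.

def build_pipeline(filename: str) -> str:
    """Compose a gst-launch-1.0 style description for VA-API playback."""
    stages = [
        f"filesrc location={filename}",   # read the container file
        "qtdemux",                        # split out the H.264 stream
        "h264parse",                      # frame the elementary stream
        "vaapidecodebin",                 # decode on the GPU via libva
        "vaapisink",                      # present without a CPU copy
    ]
    return " ! ".join(stages)

print(build_pipeline("skyfall_720p.mov"))
```

The string could be handed to `gst-launch-1.0` or to `Gst.parse_launch()`; the point is that decode and presentation both stay on the GPU side.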



  • 89c51
    replied
    Originally posted by blackiwid View Post
    Could you estimate how much time you would need to invest to get GPU-based (i.e. shader-based) acceleration of H.264 720p and 1080p decoding working,

    for a guy who is able to program in C but has no experience in kernel, X, or driver development?

    I would maybe try it if it had any chance of completion without working on it for six months full-time. And bridgman did say it's all there to get it running easily.

    He never explicitly said "easy", but if that task would cost $100,000 in manpower, the statement would be pointless, because then it would be clear to anybody that it will never happen unless AMD does it themselves.
    Christian König already wrote code for shader-based H.264 decoding. I am not sure if he released it, or where.

    If you are interested better ask in the mailing list.



  • blackiwid
    replied
    Could you estimate how much time you would need to invest to get GPU-based (i.e. shader-based) acceleration of H.264 720p and 1080p decoding working,

    for a guy who is able to program in C but has no experience in kernel, X, or driver development?

    I would maybe try it if it had any chance of completion without working on it for six months full-time. And bridgman did say it's all there to get it running easily.

    He never explicitly said "easy", but if that task would cost $100,000 in manpower, the statement would be pointless, because then it would be clear to anybody that it will never happen unless AMD does it themselves.



  • e.a.i.m.a.
    replied
    Originally posted by blackiwid View Post
    the normal player... so what could that be? GStreamer... but yes, I did understand that GStreamer is not as optimised as MPlayer; because of that I lately developed this as a Minitube alternative:

    but still, 3% vs 50-80% is not good.
    I don't know which hardware was used for the Wayland presentation, but keep in mind that the emphasis was on low-power operation. Considering that the CPU part of my E-350 reminds me of the performance level of my 2003 32-bit K8 (Sonora), it should be a safe assumption that current mobile chips are at least 3x more powerful, and their CPU usage should be proportionally lower.

    Still, 3% vs 20% or 30% remains a significant improvement.



  • entropy
    replied
    Originally posted by 89c51 View Post
    Bridgman, when can we expect to see products with this tech?
    According to this roadmap: next year.


    (Not sure if it's still valid, though.)



  • Figueiredo
    replied
    I understand there are benefits to GPU computing when there are many parallelized calculations to be performed; however, this is not always the case, or there would be no reason for AMD to include the UVD block in their GPUs and APUs in the first place.

    What I grasp from the HSA initiative is more like:

    CPU: very good for serial, general calculations
    GPU: very good for parallel calculations, still getting better at being general

    There are, however, other workloads, such as video transcoding, that are not especially suitable for either, hence Intel, AMD, NVIDIA and several ARM SoC vendors include a video transcoding block in their chips which is several times more efficient than the CPU, the GPU, or both at this specific workload.

    What I would like to know is whether the HSA Foundation has plans to do for these blocks the same thing it is doing for the GPU: helping programmers use the specialized logic transparently.

    Please correct me if I'm wrong in any of the above assumptions.
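The trade-off in the post above can be put in numbers with Amdahl's law: if only the parallel-friendly share of a workload is offloaded to shaders, the serial remainder caps the overall gain, which is one reason a fixed-function block can still win. A minimal sketch (the 90%/10x figures are made-up illustrations, not measurements of any real codec):

```python
def overall_speedup(parallel_fraction: float, parallel_gain: float) -> float:
    """Amdahl's law: speedup of the whole job when only the
    parallelizable fraction is accelerated by `parallel_gain`."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / parallel_gain)

# Even if 90% of a transcode parallelizes and shaders run that part
# 10x faster, the whole job speeds up only ~5.3x; the serial 10%
# dominates. A fixed-function block that handles the entire path
# avoids that ceiling, at the cost of flexibility.
print(round(overall_speedup(0.9, 10.0), 2))  # -> 5.26
```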



  • 89c51
    replied
    Bridgman, when can we expect to see products with this tech?



  • mdk777
    replied
    Here is an introduction by Mike Houston:



    The discussion of face recognition is especially interesting... While the slideshow is not shown, he describes a 10x benefit from GPU compute, combined with a further multiple from using CPU compute, within a single software algorithm.

    Consequently, I think you can safely assume that the ability to use both the CPU and the GPU, concurrently and sequentially as needed, is the ultimate objective.
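The "use both, as needed" idea can be sketched in ordinary Python: a data-parallel stage handed to a pool of workers (standing in for shader cores) followed by a serial, branchy stage on the host thread (standing in for the CPU). The face-recognition framing and the stage split are illustrative assumptions, not Houston's actual algorithm.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_stage(tile: list) -> int:
    """Data-parallel work per image tile (stand-in for a GPU kernel):
    here, just a feature 'score' summed over the tile."""
    return sum(x * x for x in tile)

def serial_stage(scores: list) -> int:
    """Serial, branchy follow-up (stand-in for CPU work): pick the
    best-scoring tile. Hard to parallelize, cheap on a CPU."""
    best = 0
    for i, s in enumerate(scores):
        if s > scores[best]:
            best = i
    return best

tiles = [[1, 2], [3, 4], [2, 2]]
with ThreadPoolExecutor() as pool:      # "GPU" phase: all tiles at once
    scores = list(pool.map(parallel_stage, tiles))
print(serial_stage(scores))             # "CPU" phase: sequential pick; prints 1
```

The point of HSA-style hardware is to make this kind of handoff cheap: both phases see the same memory, so no copies are needed between them.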



  • Figueiredo
    replied
    Originally posted by bridgman View Post
    Fixed that for you...
    bridgman, I wonder if HSA is limited to shader-based computation or if its scope is wider, such as running the computational workload on the most appropriate kind of logic available in the system, general-purpose or fixed-function, in order to get the best possible performance and power consumption. Obviously I would expect shader computing to be just the first step in that direction.

    It seems to me, as a layman, that running everything on a general processing unit (be it CPU, GPU, or both) cannot be the most efficient way of doing it. Future SoCs will have several specialized blocks, each doing what it does best. If my understanding is correct, should we expect to run into the same problems we have today with such specialized blocks and open source (UVD, PM and so on), or is AMD / the HSA Foundation planning something to prevent that?

