
Bridgman Is No Longer "The AMD Open-Source Guy"


  • bridgman
    replied
    Originally posted by necro-lover
    HSA is marketing-speak for: shader-based computation without the usual drawbacks and overheads. Now you can imagine how well this will work.
    Fixed that for you...



  • necro-lover
    replied
    Originally posted by crazycheese
    I think video acceleration is becoming less and less important anyway. The reason is simple: you can't accelerate every format. But if something like HSA takes off, it would be able to share the processing between GPU and CPU, and that would be "universal acceleration".
    HSA is marketing-speak for: shader-based computation. Now you can imagine how well this will work.



  • crazycheese
    replied
    I think video acceleration is becoming less and less important anyway. The reason is simple: you can't accelerate every format. But if something like HSA takes off, it would be able to share the processing between GPU and CPU, and that would be "universal acceleration".
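    (For a sense of what that sharing looks like with today's tools, here is a minimal sketch in OpenCL, since HSA tooling isn't public yet. It assumes one platform that exposes both a CPU and a GPU device, as AMD's APP SDK does, and splits a single buffer between them; error handling is omitted for brevity.)

        /* Minimal sketch of "sharing the processing between GPU and CPU":
         * the same OpenCL kernel source runs on both device types, each
         * handling half of the data. Assumes one platform exposing both
         * a CPU and a GPU device; error handling omitted for brevity. */
        #include <CL/cl.h>

        static const char *src =
            "__kernel void scale(__global float *buf, float f) {"
            "    buf[get_global_id(0)] *= f;"
            "}";

        int main(void)
        {
            float data[1024];
            size_t half = 512;
            cl_platform_id plat;
            cl_device_id dev[2];

            for (int i = 0; i < 1024; i++) data[i] = (float)i;
            clGetPlatformIDs(1, &plat, NULL);
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev[0], NULL);
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev[1], NULL);

            for (int d = 0; d < 2; d++) {   /* device d gets half the buffer */
                cl_context ctx = clCreateContext(NULL, 1, &dev[d], NULL, NULL, NULL);
                cl_command_queue q = clCreateCommandQueue(ctx, dev[d], 0, NULL);
                cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
                clBuildProgram(prog, 1, &dev[d], NULL, NULL, NULL);
                cl_kernel k = clCreateKernel(prog, "scale", NULL);
                cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                            half * sizeof(float), data + d * half, NULL);
                float f = 2.0f;
                clSetKernelArg(k, 0, sizeof(buf), &buf);
                clSetKernelArg(k, 1, sizeof(f), &f);
                clEnqueueNDRangeKernel(q, k, 1, NULL, &half, NULL, 0, NULL, NULL);
                clEnqueueReadBuffer(q, buf, CL_TRUE, 0, half * sizeof(float),
                                    data + d * half, 0, NULL, NULL);
            }
            return 0;
        }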



  • blackiwid
    replied
    Originally posted by e.a.i.m.a.
    Which video, which player, which distro? With my x120e, playing the 'Skyfall' preview in 720p and 1080p, CPU usage averaged 35-55% for 720p and 70-90% for 1080p. I tried raising the resolution to 1600x1200 and CPU usage barely rose, a few percent at most. I am running openSUSE 12.1 with the latest updates and Catalyst 12.8, under Xfce. Desktop effects are disabled, and I think anti-aliasing is too. For the test, I downloaded the previews and played them in VLC.

    I hadn't watched that video yet, but the proposals I've seen for the OpenGL implementation seem promising. If only more specifications were available to them, along with a properly working BIOS and eventually full control over power management, the game wouldn't be the same.
    The normal player... so what could that be? GStreamer... But yes, I did understand that GStreamer is not as optimised as MPlayer, which is why I recently developed a Minitube alternative.

    But still, 3% vs. 50-80% is not good.
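    (Numbers like these also depend on how you measure: per-process figures in top are relative to one core, which is how "120%" readings happen. As a rough sketch, system-wide usage can be sampled from /proc/stat, the same counters top reads:)

        /* Rough sketch: sample system-wide CPU usage from /proc/stat over
         * a one-second window. These are the counters tools like top read;
         * per-process percentages are a separate, per-core-relative figure. */
        #include <stdio.h>
        #include <unistd.h>

        static void sample(long long *busy, long long *total)
        {
            long long u, n, s, idle, iow, irq, sirq;
            FILE *f = fopen("/proc/stat", "r");
            fscanf(f, "cpu %lld %lld %lld %lld %lld %lld %lld",
                   &u, &n, &s, &idle, &iow, &irq, &sirq);
            fclose(f);
            *busy = u + n + s + irq + sirq;
            *total = *busy + idle + iow;
        }

        int main(void)
        {
            long long b0, t0, b1, t1;
            sample(&b0, &t0);
            sleep(1);
            sample(&b1, &t1);
            printf("cpu: %.1f%%\n", 100.0 * (b1 - b0) / (t1 - t0));
            return 0;
        }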



  • e.a.i.m.a.
    replied
    Originally posted by blackiwid
    I saw the presentation from XDC2012 about Wayland, where he could render 1080p video at 3% CPU usage. I don't know how much of that was down to the Intel driver and its VA-API support, or whether the Wayland protocol itself is also better.

    But it's hard to watch here, with 720p at 120% CPU load (2 cores) on a Zacate.

    I hope they can get that done... I would even TRY to do it myself if someone gave me a direction (maybe over shaders, if the UVD stuff is patented to death), but I fear you have to program that again for every codec, so you would have to redo the work every two years with each new format, or even for other resolutions and such...

    Or in six months or so I'll buy a few Intel computers, if their GPUs turn out better than AMD's because of the software... it's hard to say, but from a Linux perspective AMD builds not only slower CPUs but also slower GPUs. The only advantage they deliver is the price.
    Which video, which player, which distro? With my x120e, playing the 'Skyfall' preview in 720p and 1080p, CPU usage averaged 35-55% for 720p and 70-90% for 1080p. I tried raising the resolution to 1600x1200 and CPU usage barely rose, a few percent at most. I am running openSUSE 12.1 with the latest updates and Catalyst 12.8, under Xfce. Desktop effects are disabled, and I think anti-aliasing is too. For the test, I downloaded the previews and played them in VLC.

    I hadn't watched that video yet, but the proposals I've seen for the OpenGL implementation seem promising. If only more specifications were available to them, along with a properly working BIOS and eventually full control over power management, the game wouldn't be the same.



  • blackiwid
    replied
    I saw the presentation from XDC2012 about Wayland, where he could render 1080p video at 3% CPU usage. I don't know how much of that was down to the Intel driver and its VA-API support, or whether the Wayland protocol itself is also better.

    But it's hard to watch here, with 720p at 120% CPU load (2 cores) on a Zacate.

    I hope they can get that done... I would even TRY to do it myself if someone gave me a direction (maybe over shaders, if the UVD stuff is patented to death), but I fear you have to program that again for every codec, so you would have to redo the work every two years with each new format, or even for other resolutions and such...

    Or in six months or so I'll buy a few Intel computers, if their GPUs turn out better than AMD's because of the software... it's hard to say, but from a Linux perspective AMD builds not only slower CPUs but also slower GPUs. The only advantage they deliver is the price.
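    (To make the "over shaders" idea concrete, here is a sketch of one decode stage in OpenCL C; it is my own illustration, not code from any real decoder. Stages like colour conversion or motion compensation are codec-independent and map well to shaders; the bitstream parsing and entropy decoding in front of them are exactly the parts that have to be redone for each new format.)

        /* OpenCL C sketch of a codec-independent decode stage: planar
         * YUV 4:2:0 to packed RGB conversion (BT.601-style coefficients).
         * Launch with a 2D global size of (width, height). */
        __kernel void yuv420_to_rgb(__global const uchar *y_plane,
                                    __global const uchar *u_plane,
                                    __global const uchar *v_plane,
                                    __global uchar *rgb,
                                    int width)
        {
            int x = get_global_id(0);
            int y = get_global_id(1);
            int c = (y / 2) * (width / 2) + (x / 2);  /* chroma subsampled 2x2 */

            float Y = (float)y_plane[y * width + x];
            float U = (float)u_plane[c] - 128.0f;
            float V = (float)v_plane[c] - 128.0f;

            int o = (y * width + x) * 3;
            rgb[o + 0] = convert_uchar_sat(Y + 1.402f * V);
            rgb[o + 1] = convert_uchar_sat(Y - 0.344f * U - 0.714f * V);
            rgb[o + 2] = convert_uchar_sat(Y + 1.772f * U);
        }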



  • necro-lover
    replied
    Originally posted by Adarion
    On UVD

    I personally suspect the driver part for UVD acceleration is already 95% written, but they're waiting for the legal department to give clearance for a release.
    Doing it on shaders might be more flexible for future codecs, but then it won't be as efficient as an ASIC. Either way it's done, I still hope we'll have something useful here quickly, since it would be of great benefit, especially for HTPCs with an E-350 or similar.
    No, they failed with that kind of approach in the past (HDMI audio), and bridgman learned from that mistake.
    They do have example code, but they will not release it; they only use it internally to make sure they document the right registers of the UVD unit's microcontroller.
    In the end they will release only a spec with register information, as in the HDMI audio case.
    That way they have the best chance of getting an OK from the lawyers.
    They tried it the other way in the past and failed.
    Don't be naive: HDMI audio was the test run for getting this kind of critical stuff out of the door.
    Hardware information is subject to lighter review than software.
    They can release critical information five times faster if they focus only on the critical hardware information instead of hardware plus a complete software implementation.
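    (For a sense of what a register-only release enables, a hypothetical kernel-driver-style sketch; every offset and bit below is invented for illustration, since the real UVD registers are exactly what hasn't been published:)

        /* Hypothetical illustration only: offsets and bits are made up.
         * A register-level spec documents entries like these; the driver
         * code around them is then written on the open-source side, the
         * same way HDMI audio was handled. Kernel-driver context assumed. */
        #include <linux/io.h>                 /* readl(), writel() */

        #define UVD_EXAMPLE_CNTL     0xef00   /* invented offset */
        #define UVD_EXAMPLE_CNTL_EN  (1 << 0) /* invented enable bit */
        #define UVD_EXAMPLE_STATUS   0xef04   /* invented offset */
        #define UVD_EXAMPLE_READY    (1 << 0) /* invented ready flag */

        /* Enable the (fictional) block and poll until it reports ready. */
        static int uvd_example_enable(void __iomem *mmio)
        {
            int tries;

            writel(UVD_EXAMPLE_CNTL_EN, mmio + UVD_EXAMPLE_CNTL);
            for (tries = 0; tries < 1000; tries++)
                if (readl(mmio + UVD_EXAMPLE_STATUS) & UVD_EXAMPLE_READY)
                    return 0;
            return -1; /* timed out */
        }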



  • Adarion
    replied
    On UVD

    I personally suspect the driver part for UVD acceleration is already 95% written, but they're waiting for the legal department to give clearance for a release.
    Doing it on shaders might be more flexible for future codecs, but then it won't be as efficient as an ASIC. Either way it's done, I still hope we'll have something useful here quickly, since it would be of great benefit, especially for HTPCs with an E-350 or similar.



  • Adarion
    replied
    Oh, well.
    I saw the tweet early but the news post late, since I was away from the computer for a few days. Had fun in the forests.

    Well, it was less dramatic than I feared after reading the tweet. So nobody got fired or anything.
    It's just a relatively minor change, and if everybody inside is happy with it, fine.

    Good luck hacking on the HSA stuff then, John, and thanks for all the answered questions. Thanks for communicating with us, and thanks for bearing the many flames directed at you/ATI/AMD.

    And welcome to Tim Writer. (I hope you're prepared for the rough tone some people use here from time to time.)



  • mdk777
    replied
    I still have to understand why F@H takes 100% of my CPU with the OpenCL client (HD 5870) while it takes nearly 0% on NVIDIA with CUDA.
    Again, I don't have the experience of someone like Bridgman regarding the details of the architecture and software interface.

    However, what I have been told is that the NVIDIA architecture did a better job of keeping the computation on the card, while the previous AMD architecture (optimized for graphics, not GPU compute) required constant callbacks to the CPU to accomplish the same calculations.

    Hence, again, the great irony... AMD's GCN was supposed to be an improvement in GPU compute, moving toward the NVIDIA design model.
    However, some communication is always required, no matter how optimized the GPU compute capability is.

    Hence, eliminating the PCIe latency entirely... well, that is the holy grail of the entire HSA project, as I understand it.

    Anyway, this is why I thought this discussion was appropriate to Bridgman joining the HSA team. GCN should kick @#$ on an AMD 7970 if the same architecture is ultimately expected to perform in an APU under HSA.
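    (In code terms, the pattern that hurts on a discrete card looks roughly like the OpenCL loop below; this is my framing, not F@H's actual code. Each iteration crosses PCIe twice, and the blocking read also keeps a CPU thread busy waiting, which is one plausible source of the 100% CPU reading. On an APU with a shared address space, which is the HSA model, both copies go away.)

        /* Sketch of the per-iteration round trip on a discrete GPU:
         * host -> device copy, kernel, device -> host copy, CPU step,
         * repeat. The copies and the blocking waits are the PCIe cost
         * that shared-memory HSA hardware is meant to eliminate. */
        #include <CL/cl.h>

        void iterate(cl_command_queue q, cl_kernel k, cl_mem dbuf,
                     float *host, size_t n, int steps)
        {
            size_t bytes = n * sizeof(float);
            for (int i = 0; i < steps; i++) {
                clEnqueueWriteBuffer(q, dbuf, CL_TRUE, 0, bytes, host,
                                     0, NULL, NULL);        /* host -> device */
                clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
                clEnqueueReadBuffer(q, dbuf, CL_TRUE, 0, bytes, host,
                                    0, NULL, NULL);         /* device -> host */
                /* ...CPU-side part of the step would run on 'host' here... */
            }
        }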

