Bridgman Is No Longer "The AMD Open-Source Guy"


  • #81
    On UVD

    I personally suspect the driver code for UVD acceleration is already about 95% written, but they're waiting for the legal department to clear it for release.
    Doing it on shaders might be more flexible for future codecs, but it won't be as efficient as a dedicated ASIC. Whichever way it's done, I still hope we get something usable here soon, since it would be of great benefit especially for HTPCs with an E-350 or similar APU.
    Stop TCPA, stupid software patents and corrupt politicians!



    • #82
      Originally posted by Adarion
      On UVD

      I personally suspect the driver code for UVD acceleration is already about 95% written, but they're waiting for the legal department to clear it for release.
      Doing it on shaders might be more flexible for future codecs, but it won't be as efficient as a dedicated ASIC. Whichever way it's done, I still hope we get something usable here soon, since it would be of great benefit especially for HTPCs with an E-350 or similar APU.
      No, they failed with that kind of approach in the past ("HDMI sound"), and bridgman learned from that mistake.
      They do have example code, but they will not release it; they only use it internally to make sure they hit the right registers of the stars microcontroller in the UVD unit.
      In the end they will release only a spec with register information, as in the HDMI audio case.
      That way they have the best chance of getting an OK from the lawyers.
      They tried it the other way in the past and failed.
      Don't be naive: HDMI audio was the test run for getting this kind of critical stuff out the door.
      Hardware register specs face lower legal hurdles than software does.
      They can release critical information five times faster if they focus on hardware documentation alone instead of hardware plus a complete software implementation.



      • #83
        I saw the XDC2012 presentation about Wayland, where he could render 1080p video with 3% CPU usage. I don't know how much of that is down to the Intel driver and its VA-API support, or whether the Wayland protocol itself also helps.

        But that's hard to watch when, here, 720p means 120% CPU load (two cores) on a Zacate.

        I hope they can get that done... I would even TRY to do it myself if someone pointed me in the right direction (maybe via shaders, if the UVD stuff is patented to death), but I fear you would have to program that again for each codec, so you would be redoing the work every two years for each new format, or even for different resolutions and so on...

        Or in six months or so I'll buy a few Intel machines if, thanks to the software, their GPUs are better than AMD's... It's hard to say, but from a Linux perspective AMD builds not only slower CPUs but also slower GPUs. The only advantage they offer is the price.



        • #84
          Originally posted by blackiwid
          I saw the XDC2012 presentation about Wayland, where he could render 1080p video with 3% CPU usage. I don't know how much of that is down to the Intel driver and its VA-API support, or whether the Wayland protocol itself also helps.

          But that's hard to watch when, here, 720p means 120% CPU load (two cores) on a Zacate.

          I hope they can get that done... I would even TRY to do it myself if someone pointed me in the right direction (maybe via shaders, if the UVD stuff is patented to death), but I fear you would have to program that again for each codec, so you would be redoing the work every two years for each new format, or even for different resolutions and so on...

          Or in six months or so I'll buy a few Intel machines if, thanks to the software, their GPUs are better than AMD's... It's hard to say, but from a Linux perspective AMD builds not only slower CPUs but also slower GPUs. The only advantage they offer is the price.
          Which video, which player, which distro? On my X120e, playing the 'Skyfall' trailer in 720p and 1080p, CPU usage averaged 35-55% for 720p and 70-90% for 1080p. I tried raising the resolution up to 1600x1200 and CPU usage did not rise much, a few percent at most. I am running openSUSE 12.1 with the latest updates and Catalyst 12.8, under Xfce. Desktop effects are disabled, and I think anti-aliasing is too. For the test I downloaded the trailers and played them in VLC.

          I haven't watched that video yet, but the proposals I've seen for the OpenGL implementation seem promising. If more specifications were available to the developers, along with a properly working BIOS and full control of power management, the game would not be the same.
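
          (As an aside, rather than eyeballing top, that kind of comparison can be scripted. Below is a minimal sketch in Python, assuming the psutil package is installed; the process name "vlc", the sample count and the interval are placeholders, not anything specified in the thread.)

          Code:
          # Sample the CPU usage of a running video player (e.g. vlc) a few times
          # and print the average, in percent of a single core (>100% means more
          # than one core is busy). Name, samples and interval are arbitrary.
          import sys
          import psutil

          def sample_player_cpu(name="vlc", samples=5, interval=1.0):
              for proc in psutil.process_iter(["name"]):
                  if proc.info["name"] and name in proc.info["name"]:
                      break
              else:
                  sys.exit(f"no running process matching '{name}'")
              # cpu_percent(interval=...) blocks for `interval` seconds per call.
              readings = [proc.cpu_percent(interval=interval) for _ in range(samples)]
              return sum(readings) / len(readings)

          if __name__ == "__main__":
              print(f"average CPU usage: {sample_player_cpu():.1f}%")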



          • #85
            Originally posted by e.a.i.m.a.
            Which video, which player, which distro? On my X120e, playing the 'Skyfall' trailer in 720p and 1080p, CPU usage averaged 35-55% for 720p and 70-90% for 1080p. I tried raising the resolution up to 1600x1200 and CPU usage did not rise much, a few percent at most. I am running openSUSE 12.1 with the latest updates and Catalyst 12.8, under Xfce. Desktop effects are disabled, and I think anti-aliasing is too. For the test I downloaded the trailers and played them in VLC.

            I haven't watched that video yet, but the proposals I've seen for the OpenGL implementation seem promising. If more specifications were available to the developers, along with a properly working BIOS and full control of power management, the game would not be the same.
            The default player... so what would that be, GStreamer... But yes, I gather that GStreamer is not as optimised as MPlayer, which is why I recently developed this as a Minitube alternative:



            But 3% versus 50-80% is still not good.
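
            (For what it's worth, here is a minimal sketch of GStreamer playback from Python, assuming GStreamer 1.x with the PyGObject bindings installed; the file URI is a placeholder. With the right plugin present, e.g. a VA-API decoder, playbin can pick hardware decoding on its own instead of burning CPU.)

            Code:
            # Minimal playbin-based playback; with a VA-API/VDPAU decoder plugin
            # installed, GStreamer can select hardware decoding automatically.
            import gi
            gi.require_version("Gst", "1.0")
            from gi.repository import Gst, GLib

            Gst.init(None)
            pipeline = Gst.parse_launch("playbin uri=file:///path/to/clip-720p.mkv")
            pipeline.set_state(Gst.State.PLAYING)

            loop = GLib.MainLoop()
            bus = pipeline.get_bus()
            bus.add_signal_watch()
            bus.connect("message::eos", lambda b, m: loop.quit())    # end of stream
            bus.connect("message::error", lambda b, m: loop.quit())  # bail out on errors

            try:
                loop.run()
            finally:
                pipeline.set_state(Gst.State.NULL)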



            • #86
              I think fixed-function video acceleration is becoming less and less important anyway. The reason is simple: you can't accelerate every format. But if something like HSA takes off, it will be able to share the processing between GPU and CPU, and that would be "universal acceleration".
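
              (To make that concrete: you can already split one workload by hand between a GPU kernel and the CPU today, roughly as in the Python sketch below, assuming pyopencl, numpy and a working OpenCL driver; the kernel and the 50/50 split are made up for illustration. HSA's pitch is to make this kind of sharing cheap and largely automatic rather than hand-rolled.)

              Code:
              # Square a large array, half on an OpenCL device and half with NumPy
              # on the CPU, as a crude stand-in for CPU/GPU work sharing.
              import numpy as np
              import pyopencl as cl

              data = np.random.rand(1_000_000).astype(np.float32)
              half = len(data) // 2
              gpu_part, cpu_part = data[:half], data[half:]

              ctx = cl.create_some_context()      # picks an available OpenCL device
              queue = cl.CommandQueue(ctx)

              prg = cl.Program(ctx, """
              __kernel void square(__global const float *src, __global float *dst) {
                  int gid = get_global_id(0);
                  dst[gid] = src[gid] * src[gid];
              }
              """).build()

              mf = cl.mem_flags
              src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=gpu_part)
              dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, gpu_part.nbytes)

              prg.square(queue, gpu_part.shape, None, src_buf, dst_buf)  # device half
              cpu_result = cpu_part * cpu_part                           # CPU half meanwhile

              gpu_result = np.empty_like(gpu_part)
              cl.enqueue_copy(queue, gpu_result, dst_buf)                # copy results back

              result = np.concatenate([gpu_result, cpu_result])
              assert np.allclose(result, data * data)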



              • #87
                Originally posted by crazycheese
                I think fixed-function video acceleration is becoming less and less important anyway. The reason is simple: you can't accelerate every format. But if something like HSA takes off, it will be able to share the processing between GPU and CPU, and that would be "universal acceleration".
                HSA is marketing speak for shader-based computation. Now you can imagine how well that will work.



                • #88
                  Originally posted by necro-lover
                  HSA is marketing speak for shader-based computation without the usual drawbacks and overheads. Now you can imagine how well that will work.
                  Fixed that for you...
                  Test signature



                  • #89
                    Originally posted by bridgman
                    Fixed that for you...
                    bridgman, I wonder whether HSA is limited to shader-based computation or whether its scope is wider, such as running a computational workload on the most appropriate kind of logic available in the system, general-purpose or fixed-function, in order to get the best possible performance and power consumption. Obviously I would expect shader computing to be just the first step in that direction.

                    It seems to me, as a layman, that running everything on a general-purpose processing unit (be it CPU, GPU or both) cannot be the most efficient way of doing it. In future SoCs we will have several specialized blocks, each doing what it does best. If my understanding is correct, should we expect to run into the same problems with such specialized blocks under open source/Linux that we have today (UVD, PM and so on), or is AMD/the HSA Foundation planning something to prevent that?



                    • #90
                      Here is an introduction by Mike Houston:



                      The discussion of face recognition is especially interesting... While the slides are not shown, he describes a 10x benefit from moving to GPU compute, with a further multiple on top from also using CPU compute within a single software algorithm.

                      Consequently, I think you can safely assume that the ultimate objective is the ability to use both the CPU and the GPU, concurrently or sequentially, as needed.

