Broadcom Crystal HD Support For MPlayer, FFmpeg

  • #16
    What is the power consumption of these parts?

  • #17
    Originally posted by BlackStar View Post
    What is the power consumption of these parts?

    About 30mW @ idle, 500mW watching 720p, and about 1 Watt on 1080p material.

  • #18
    Originally posted by deanjo View Post
    About 30mW @ idle, 500mW watching 720p, and about 1 Watt on 1080p material.
    In the crystalhd-development Google group, a Broadcom employee said this:

    Quite a bit, especially on the 70012. When it is active (i.e. FW is loaded and running) it can consume as much as 1.3W, whereas it drops to around 300mW when placed into idle.

    On the 70015, not so dramatic a difference - from around 300mW down to 100mW.

    And I don't get the discussion about whether PCIe is fast enough. It works very well with 1920x1080p@24fps. I've had it running for about a year now on an HP Mini 5101 with a 1.6GHz Atom processor. It even works for high-bitrate material like the Avatar Blu-ray, with H.264 bitrates of about 40Mbps.
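
    A rough back-of-envelope backs this up (just a sketch: the ~250 MB/s figure is the usual usable payload rate of a PCIe 1.x x1 link, and 1.5 bytes/pixel assumes 4:2:0 output such as NV12/YV12):

    Code:
    /* Back-of-envelope: can a PCIe 1.x x1 link carry the compressed
     * bitstream in and the decoded 1080p24 frames back out?
     * Assumptions: ~250 MB/s usable x1 payload bandwidth, 4:2:0 output
     * at 1.5 bytes per pixel. */
    #include <stdio.h>

    int main(void)
    {
        const double link     = 250.0;                        /* MB/s, PCIe 1.x x1 */
        const double in_mb_s  = 40e6 / 8 / 1e6;               /* 40Mbps bitstream  */
        const double frame_mb = 1920.0 * 1080.0 * 1.5 / 1e6;  /* one 4:2:0 frame   */
        const double out_mb_s = frame_mb * 24.0;              /* 1080p24 output    */

        printf("in  : %.1f MB/s, out: %.1f MB/s\n", in_mb_s, out_mb_s); /* ~5 + ~75 */
        printf("link: %.1f MB/s -> %.0f%% used\n",
               link, 100.0 * (in_mb_s + out_mb_s) / link);              /* ~32%     */
        return 0;
    }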

  • #19
    Typo on the 30mW, should have been 300mW. From talking with the XBMC devs, it looks like the max the Broadcom HD can handle is 40Mbps. When it encounters bitrates higher than that it will drop frames, despite not running into a CPU limitation.
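
    If you want to check a file against that ceiling before playing it, a minimal libavformat sketch along these lines would do (not Crystal HD-specific code; it assumes the demuxer reports a container-level bit_rate for the file, and builds with something like gcc check.c -lavformat -lavcodec -lavutil):

    Code:
    /* Minimal sketch: warn when a file's overall bitrate exceeds the
     * ~40Mbps ceiling quoted for the Crystal HD decoders. */
    #include <stdio.h>
    #include <libavformat/avformat.h>

    int main(int argc, char **argv)
    {
        AVFormatContext *fmt = NULL;

        if (argc < 2)
            return fprintf(stderr, "usage: %s <file>\n", argv[0]), 1;

        av_register_all();  /* needed on the libavformat of this era */
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0 ||
            avformat_find_stream_info(fmt, NULL) < 0)
            return fprintf(stderr, "cannot probe %s\n", argv[1]), 1;

        printf("%s: %lld b/s\n", argv[1], (long long)fmt->bit_rate);
        if (fmt->bit_rate > 40000000LL)
            printf("above ~40Mbps: expect dropped frames on Crystal HD\n");

        avformat_close_input(&fmt);
        return 0;
    }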

  • #20
    Yes, that's right, but I don't think it is a problem, because the maximum video bitrate for Blu-ray is also 40Mbps.

  • #21
    Ya, but they will choke on some of the HD camera footage out there running 1920x1080p @ 60fps.

  • #22
    300mW @ idle? That's quite a lot, unfortunately.

  • #23
    But 100mW to 300mW really isn't, all things considered, even for a netbook.
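
    For scale, a quick estimate (illustrative numbers: the ~8W whole-platform draw and 30Wh battery are assumptions for a typical Atom netbook; only the decoder's 100mW/300mW idle figures come from the thread):

    Code:
    /* Rough battery-life impact of the decoder's idle draw on a netbook.
     * Assumed figures: ~8W whole-platform idle draw, 30Wh battery. */
    #include <stdio.h>

    int main(void)
    {
        const double battery_wh = 30.0;  /* typical netbook pack (assumed) */
        const double platform_w = 8.0;   /* rest of the system at idle     */
        const double idle_w[]   = { 0.0, 0.1, 0.3 };  /* none, 70015, 70012 */

        for (int i = 0; i < 3; i++)
            printf("+%.1fW idle -> %.2f h runtime\n",
                   idle_w[i], battery_wh / (platform_w + idle_w[i]));
        return 0;  /* 3.75 h vs 3.70 h vs 3.61 h: minutes, not hours */
    }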

  • #24
    Originally posted by brent View Post
    300mW @ idle? That's quite a lot, unfortunately.

    That's true, especially when you know that the commercial Rapport Kilocore 256 and 1024 CPUs on a single FPGA (Field Programmable Gate Array) were decoding 30 frames a second while consuming only 100 milliwatts (only at CIF resolution, I think, but still) back in 2006, whereas ARM at the time was getting 3.3 frames a second while consuming half a watt of power.

    http://arstechnica.com/old/content/2006/04/6556.ars

    I've advocated putting some form of cheap FPGA on all PCBs, including motherboards, GFX cards, and related hardware, for a lot longer, for the simple fact that you can't really try to use or program a thing if the OEMs refuse to actually place it on their PCBs as a generic, generally usable component, and so it never gets mainstream interest.

    It's a shame Bridgman/AMD don't see the benefits today, even though so-called 'reconfigurable computing' is back in vogue, with both http://www.staho.com/quad-core-to-ki...cessor/208227/ and http://www.gla.ac.uk/news/headline_183814_en.html bringing news of advances since the Rapport Kilocore 256 and 1024 FPGAs appeared back in 2006.
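
    Using only the figures quoted above, the efficiency gap works out like this (a quick check, nothing more):

    Code:
    /* Efficiency comparison from the figures quoted above:
     * Kilocore: 30 fps @ 0.1 W; ARM of the era: 3.3 fps @ 0.5 W. */
    #include <stdio.h>

    int main(void)
    {
        printf("Kilocore: %.0f frames/J\n", 30.0 / 0.1);  /* 300.0 */
        printf("ARM     : %.1f frames/J\n", 3.3 / 0.5);   /*   6.6 */
        return 0;  /* roughly a 45x efficiency gap */
    }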

  • #25
    > ... FPGA ...
    > it's a shame Bridgman/AMD don't see the benefits

    Bridgman can't even see the benefits of decoding video. :-(
    To be fair, I rather doubt that Bridgman is the one who
    decides whether a card gets an FPGA or not.

    popper, if you want an FPGA graphics card, get yourself an
    Open Graphics Project OGD1 card.

  • #26
    Originally posted by popper View Post
    I've advocated putting some form of cheap FPGA on all PCBs, including motherboards, GFX cards, and related hardware, for a lot longer, for the simple fact that you can't really try to use or program a thing if the OEMs refuse to actually place it on their PCBs as a generic, generally usable component, and so it never gets mainstream interest. It's a shame Bridgman/AMD don't see the benefits
    You didn't respond to my question the last time this came up. The benefits are obvious, but you may be underestimating the costs.

    Are you talking about adding a *cheap* FPGA to each board, or an FPGA that will offload H.264 decoding? They are not the same thing, even today. At current prices, adding an FPGA with the performance you are talking about would roughly double the cost of a typical graphics card.

    Originally posted by Dieter View Post
    Bridgman can't even see the benefits of decoding video. :-(
    Just curious, what would make you say something like that?

  • #27
    Originally posted by bridgman View Post
    Just curious, what would make you say something like that?
    Probably out of frustration with the lack of AMD effort in bringing hardware decoding to Linux.

  • #28
    Sure, I suppose, assuming you don't count any of the work that *has* been done over the last year in that area, but I still don't get the connection between that and Dieter's comment.

    I manage the open source driver effort, not the fglrx effort, and the UVD issues have nothing to do with a lack of understanding of the importance -- unless you all agree that video decoding is more important than the 2D and 3D acceleration required to support compositing and a modern desktop, and that we should stop implementing and documenting 2D/3D acceleration hardware on new GPUs for a year or two and focus on video acceleration instead.

  • #29
    Originally posted by bridgman View Post
    I manage the open source driver effort, not the fglrx effort, and the UVD issues have nothing to do with a lack of understanding of the importance -- unless you all agree that video decoding is more important than the 2D and 3D acceleration required to support compositing and a modern desktop, and that we should stop implementing and documenting 2D/3D acceleration hardware on new GPUs for a year or two and focus on video acceleration instead.
    I wouldn't say "video decoding is more important than the 2D and 3D acceleration", but I would say that it is just as important, especially for FOSS users. Those guys who want blistering 3D performance in Nexuiz 2099 are going to go with the blobs anyway, and I would venture that more video is viewed than "wobbly windows".

  • #30
    The results of the survey seem to correspond to what I'm saying.

    [survey results image]

    Also, even on the blob side of AMD, the efforts on video decoding acceleration haven't been all that great either. Even S3 has a better solution than what ATI is offering at the moment. Take a look at the nvnews.net forums: Stephen Warren is in there like a rabid dog, hammering out and resolving VDPAU issues.
