nVidia likely to remain accelerated video king?


  • #31
    Originally posted by curaga View Post
    How's this so?

    PCI bw (bytes/s) / 720p60 bw = 1.26, ie that and some extra.

    (266 * 1 024 * 1 024) / (1 280 * 720 * 4 * 60) = 1.26103704

    I thought PCI ran at 133MB/s max theoretical.
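
    A minimal sketch of the arithmetic above, assuming uncompressed 4-byte-per-pixel frames and the commonly quoted peak figures of 133 MB/s for plain 32-bit/33 MHz PCI and 266 MB/s for the 64-bit or 66 MHz variants:

    ```c
    /* Minimal sketch: nominal PCI bandwidth divided by the bandwidth of
     * uncompressed 720p60 frames at 4 bytes per pixel. The 266 MB/s figure
     * assumes a 64-bit/33 MHz or 32-bit/66 MHz slot; plain 32-bit/33 MHz
     * PCI peaks at 133 MB/s. */
    #include <stdio.h>

    int main(void)
    {
        const double mib     = 1024.0 * 1024.0;             /* bytes per MiB */
        const double v720p60 = 1280.0 * 720.0 * 4.0 * 60.0; /* bytes per second */

        printf("266 MB/s PCI / 720p60 = %.2f\n", 266.0 * mib / v720p60); /* ~1.26 */
        printf("133 MB/s PCI / 720p60 = %.2f\n", 133.0 * mib / v720p60); /* ~0.63 */
        return 0;
    }
    ```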



    • #32
      Originally posted by mugginz View Post
      I thought PCI ran at 133MB/s max theoretical.
      To reply to myself....

      I see Matrox did make some PCI-X versions of their cards. Not very mainstream though.



      • #33
        Originally posted by MU_Engineer View Post
        I've done some testing of video decode performance on my system, which is at least as good as any 2.8 GHz P4 system:

        CPU: Socket 939 A64 X2 4200+ (dual-core, faster than a 2.8 P4)
        Chipset: NVIDIA NF4 SLi (CrushK8-04), PCIe 1.0a x16 slots configured as x16/x0
        GPU: NVIDIA GTS250 512 MB
        Test files: several transcoded 720p 15 Mbps H.264 files
        Video player: Mplayer SVN-r29796-4.4.3 (amd64, VDPAU supported)

        Based on that testing, I've concluded his claims are simply impossible: my system is grossly unable to do what he has claimed, and it's at least moderately faster, and potentially much faster, than his setup.



        Absolutely. My X2 4200+ cannot decode the 15 Mbps H.264 files in software (using Xv), and it's a faster CPU operating in a 64-bit environment and also has a second core. No 2.8 GHz P4 is going to be able to do this if the X2 4200+ cannot.



        I'd agree with this as well. I can play one stream with my setup fine, but adding a second leads to a bottleneck and a ton of lag. CPU usage is in the 20% range, so I guess it's probably PCIe bus bandwidth. My setup is equivalent to the best P4 setup as far as bandwidth is concerned. At best, his 2.8 GHz P4 is a P4 620 HT on an i975 platform. That's an x16/x0 or x8/x8 PCIe 1.0a board just like my NF4 SLi, so bandwidth is at best equal. If he's on AGP, he's running at best half the bus bandwidth I have (AGP 8x = 2133 MB/sec, PCIe 1.0a x16 = 4000 MB/sec). If he's running an older Northwood-A or Northwood-B on an i845/855 system, he's getting AGP 4x at 1067 MB/sec and is even worse off.



        They can use AMD's XvBA video decode assist on HD 3xxx or HD 4xxx-class AGP cards, which has roughly the same capabilities as VDPAU. However, the bus bandwidth is still low compared to my system, which is apparently bottlenecked itself.



        My HTPC runs a PCI GeForce 6200, and I can tell you that the only stuff you're going to be playing smoothly is low-def (480i) video using Xv or OpenGL for video output. There isn't enough bandwidth on the PCI bus to run XvMC decode assist even for low-def video. Playing 720p video is impossible on a PCI-fed card even with Xv output. I'd guess you could probably run one HD stream on an AGP 8x system, though.
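
        As a rough sanity check on the bus figures quoted above, here is a small sketch comparing uncompressed frame bandwidth against the nominal peak rates named in this thread (assuming 4 bytes per pixel and ignoring real-world overhead, which puts every bus well below its peak):

        ```c
        /* Rough sanity check: bandwidth of uncompressed frames vs. nominal peak
         * bus rates mentioned in this thread. Assumes 4 bytes per pixel; real
         * sustained throughput on any of these buses is well below the peak. */
        #include <stdio.h>

        int main(void)
        {
            const struct { const char *name; double mb_s; } bus[] = {
                { "PCI 32-bit/33 MHz",  133.0 },
                { "AGP 4x",            1067.0 },
                { "AGP 8x",            2133.0 },
                { "PCIe 1.0a x16",     4000.0 },
            };
            const struct { const char *name; int w, h, fps; } vid[] = {
                { "480i (720x480@30)",  720,  480, 30 },
                { "720p60",            1280,  720, 60 },
                { "1080p60",           1920, 1080, 60 },
            };

            for (size_t v = 0; v < sizeof vid / sizeof vid[0]; v++) {
                /* MB/s needed to push full uncompressed frames over the bus */
                double need = (double)vid[v].w * vid[v].h * 4 * vid[v].fps / 1e6;
                printf("%-18s ~%4.0f MB/s ->", vid[v].name, need);
                for (size_t b = 0; b < sizeof bus / sizeof bus[0]; b++)
                    printf("  %s: %.1fx headroom", bus[b].name, bus[b].mb_s / need);
                printf("\n");
            }
            return 0;
        }
        ```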



        I can't say whether PCIe 2.0 would have enough bandwidth, as no system I have supports it, so I can't test that possibility. But there aren't any PCIe 2.0-supporting chipsets I know of that support P4s, since Intel removed P4/Pentium D support from the 3-series and 4-series chipsets that do have PCIe 2.0 support.



        I concur!
        Hmm... I have a 4400+ and it has no problem decoding 1080p video. However, keep in mind that both your 4200+ and my 4400+ have WAY more power than a 2.8 GHz P4. They have more power than a 2.8 GHz Pentium D, let alone a P4. The megahertz myth in action... Anyway, I know my system will play 1080p video with absolutely no hardware acceleration. Here are the specs.

        CPU: AMD Athlon 64 X2 4400+
        Video card: Radeon X300 (using the open source radeon drivers)
        Memory: 4 GB DDR2 800 (Corsair XMS2)



        • #34
          Originally posted by bridgman View Post
          - everything on the GPU (decode using dedicated HW, filter & post-proc using shaders, presentation (CSC, scaling, etc.) using shaders)
          On Intel Ironlake, everything is done with shaders (MC, ILDB) except bitstream decoding, which is carried out by a dedicated unit, the BSD (bitstream decoder). Doing so still makes it possible to decode dual H.264 1080p streams, which is better than competitive implementations. The BSD has a fixed pipeline though, so it only supports H.264 & VC-1 at this time. On the other hand, it would be simpler to adapt the BSD for e.g. VP8 than to build a whole decoder. Unfortunately, Intel is politically slow, so this probably won't happen for Sandy Bridge, unless their marketing sees the light.



          • #35
            Originally posted by MU_Engineer View Post
            They can use AMD's XvBA video decode assist on HD 3xxx or HD 4xxx-class AGP cards, which has roughly the same capabilities as VDPAU.
            I don't believe that. It might be true at the API level, though not with the XvBA version and additions people could actually be using. On the implementation side, this is definitely not the case. On the other hand, I haven't tried it much recently either. Has this improved?



            • #36
              Originally posted by devius View Post
              Back on topic: Don't the Intel drivers support VA-API? How well does that work? It seems to only work with the i965 driver, which is unfortunate for netbooks based on the 945GM.
              The 945GM cannot do anything useful but XvMC (MPEG-2 iDCT/MC), and even then only at very limited resolution, e.g. 720x576 IIRC.

              VA-API has drivers for the following Intel chips:
              * Intel Eaglelake (G45)
              * Intel Ironlake (HD Graphics)
              * Intel Poulsbo (US15W)

              The Intel Ironlake is now capable of decoding two H.264 HD (1080p) streams simultaneously. The older G45 will likely be limited to 1 HD and 1 SD stream simultaneously, though. Despite the name, the older i965 chips (GMA 3xxx) are not supported.
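
              For anyone curious how a player finds this out, here is a rough sketch of probing the VA-API driver for an H.264 decode (VLD) entrypoint on the current X display. The libva calls shown (vaGetDisplay, vaInitialize, vaQueryConfigEntrypoints) are standard, but treat this as an untested illustration; you would link with something like -lva -lva-x11 -lX11.

              ```c
              /* Sketch: ask the VA-API driver on the current X display whether it
               * exposes an H.264 High profile VLD (full slice-level decode) entrypoint. */
              #include <stdio.h>
              #include <stdlib.h>
              #include <X11/Xlib.h>
              #include <va/va.h>
              #include <va/va_x11.h>

              int main(void)
              {
                  Display *x11 = XOpenDisplay(NULL);
                  if (!x11) { fprintf(stderr, "no X display\n"); return 1; }

                  int major, minor;
                  VADisplay dpy = vaGetDisplay(x11);
                  if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
                      fprintf(stderr, "vaInitialize failed\n");
                      return 1;
                  }
                  printf("VA-API %d.%d\n", major, minor);

                  /* Ask the driver which entrypoints it offers for H.264 High profile. */
                  int num = vaMaxNumEntrypoints(dpy);
                  VAEntrypoint *ep = malloc(num * sizeof(*ep));
                  int found = 0;
                  if (vaQueryConfigEntrypoints(dpy, VAProfileH264High, ep, &num) == VA_STATUS_SUCCESS) {
                      for (int i = 0; i < num; i++)
                          if (ep[i] == VAEntrypointVLD)  /* VLD = full bitstream decode */
                              found = 1;
                  }
                  printf("H.264 High VLD decode: %s\n", found ? "yes" : "no");

                  free(ep);
                  vaTerminate(dpy);
                  XCloseDisplay(x11);
                  return 0;
              }
              ```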



              • #37
                Originally posted by LinuxID10T View Post
                Hmm... I have a 4400+ and it has no problem decoding 1080P video.
                My X2 4200+ can decode 1080p MPEG-2 or MPEG-4 fine, but it won't do H.264. The OP was talking about H.264.



                • #38
                  Originally posted by Rahux View Post
                  Hey guys - in the next few months I'll be looking into a new PC and my main priority is being able to watch 1080p movies and do a bit of gaming (but video is more important).

                  I invested in a nice large monitor as my new place will not have a TV. As far as I can see, ATI cards are performing much better overall but nVidia is the only one offering accelerated video (both local and flash).

                  Is this likely to remain the case in the next 6 months? Also, I hear that the current crop of nVidia cards is very noisy - would I be getting nice video playback at the expense of being able to hear the movies I watch? What's the outlook on Blu-ray in Linux, too? Is the card likely to make a difference there?

                  I guess the question is really whether waiting will be worthwhile.
                  There doesn't appear to be any single all-in-one solution for the GPU+video_decode_acceleration.

                  If money isn't too much of an obstacle, you really should consider a broadcom crystalhd card. The main downside of them is that they are exclusively laptop-slot cards, HOWEVER, $50 buys you an adapter to pci-e 1x.

                  The really nice thing about the broadcom crystalhd card is that the drivers are OPEN SOURCE.

                  So optimally, you can pick the graphics card you like ***FOR REGULAR GRAPHICS STUFF*** and use a dedicated video decoder card for that purpose.

                  I kinda miss the old days, when every function was on its own card, and the purpose of the mainboard was just to interconnect components (rather than to jam everything onto one circuit). You had your hard drive card, you had your sound card, you had your graphics card, you had your network card, you had your modem, and you even had your general I/O card, and they all plugged into their own slots on the mainboard. Now we've got these mainboards with however many slots, and the slots are ALL EMPTY, and some of the onboard hardware works, some of it doesn't!

                  Well some things really need to be separated!



                  • #39
                    Originally posted by droidhacker View Post
                    There doesn't appear to be any single all-in-one solution for the GPU+video_decode_acceleration.

                    If money isn't too much of an obstacle, you really should consider a broadcom crystalhd card. The main downside of them is that they are exclusively laptop-slot cards, HOWEVER, $50 buys you an adapter to pci-e 1x.

                    The really nice thing about the broadcom crystalhd card is that the drivers are OPEN SOURCE.

                    So optimally, you can pick the graphics card you like ***FOR REGULAR GRAPHICS STUFF*** and use a dedicated video decoder card for that purpose.

                    I kinda miss the old days, when every function was on its own card, and the purpose of the mainboard was just to interconnect components (rather than to jam everything onto one circuit). You had your hard drive card, you had your sound card, you had your graphics card, you had your network card, you had your modem, and you even had your general I/O card, and they all plugged into their own slots on the mainboard. Now we've got these mainboards with however many slots, and the slots are ALL EMPTY, and some of the onboard hardware works, some of it doesn't!

                    Well some things really need to be separated!
                    So I did some research, and it seems that the Broadcom HD mini PCIe decoder card isn't the best and simplest solution. It sounds like a nice idea, having drivers with open source support (the drivers themselves are NOT open source, but Broadcom is working with the open source devs, so that's a plus).

                    So far, very few people have the card actually working 100% reliably under Linux, and those that do tend to find that 1080p isn't renderable at all by the driver, especially with the gstreamer-based players.

                    I'm not saying that Broadcom HD isn't a workable solution, but it's hardly the dream you make it sound like.



                    • #40
                      The problem with that chip is that it is usually paired with plain Atom boards/netbooks - those do not allow full HD HDMI output. If you want that, you usually need Ion(2), and then you don't need the chip.

