
AMD gets beaten by NVidia, any plans?

  • #16
    For some time now I've had the impression that CUDA will fade out pretty soon. Obviously people in games and 3D learned their lesson from the Glide3D crap and such. Correct me if I'm wrong, but hasn't CUDA been having problems getting adopted exactly because it's proprietary? And also, isn't the future of CUDA a compatibility layer or such on top of their OpenCL solution (or inside it, whatever makes sense for them)?

    Of course if you search hard enough, you'll find web pages for projects that are/were going to use CUDA for all sorts of funky shit that's coming out "any day now" but ends up as a couple of pretty tech demos at best (read: vaporware). There's always the first-mover advantage that people try to use while things that will soon be common everyday stuff are still new and fresh. I bet RealNC is just aching for a piece of the glamor while it's still there, in having something new and cool before the other kids in the neighborhood have it.

    I have to admit I don't have much idea how physics engines are implemented, but isn't OpenCL the definitive answer to that too? Like general-purpose physics engines for any use regardless of drivers or anything, really? So who cares about PhysX anyway?
    Last edited by Pahanilmanlintu; 01-27-2009, 03:29 AM.
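
To make the physics-engine question above concrete, here is a minimal sketch (entirely hypothetical, not taken from any real engine) of the kind of per-particle update step a physics engine runs every frame. Each particle's update is independent of the others, which is exactly why this workload maps so well onto GPU compute APIs like CUDA or OpenCL:

```python
# Minimal sketch of a particle physics step (hypothetical, not any real
# engine's code). Every particle is updated independently, so on a GPU
# one work-item/thread would handle one particle.

GRAVITY = -9.81  # m/s^2, acting on the y axis

def step_particles(positions, velocities, dt):
    """Semi-implicit Euler integration: update velocity, then position."""
    new_positions, new_velocities = [], []
    for (x, y), (vx, vy) in zip(positions, velocities):
        vy += GRAVITY * dt                # apply gravity to velocity first
        x, y = x + vx * dt, y + vy * dt   # then advance the position
        new_positions.append((x, y))
        new_velocities.append((vx, vy))
    return new_positions, new_velocities

# One 10 ms step for a single particle starting at rest at height 1 m:
pos, vel = step_particles([(0.0, 1.0)], [(0.0, 0.0)], 0.01)
```

A real engine adds collision detection, constraints and so on, but the core loop stays "the same small function applied to thousands of independent objects", which is the data-parallel shape both PhysX-on-GPU and any future OpenCL physics engine exploit.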

    • #17
      List (probably incomplete) of games supporting PhysX:
      [...]
      Does this answer your question?
      You are mistaken. Exactly *one* game uses Nvidia's hardware PhysX support, and that is Mirror's Edge. The rest run on software PhysX, which is supported equally by both vendors.

      Despite what Nvidia wants you to believe, you, as an end-user, don't actually benefit from CUDA. Unless you are developing some kind of very specialized software (and if you were, you would know that AMD offers its own, arguably superior solution: Stream), CUDA is nothing but a marketing gimmick.

      Please, don't buy into the marketing bullshit ("the way it's meant to be played", yeah right!). Hardware PhysX is nice but hardly essential - unless you like Mirror's Edge all that much. Once OpenCL drivers are released, you'll find that the field is more even than it seems.
      Last edited by BlackStar; 01-27-2009, 04:01 AM.

      • #18
        Well, Nvidia provides great support for game developers - the "The Way It's Meant to Be Played" program - which ATI doesn't. Of course they try to convince those devs to use PhysX too, while ATI and Intel prefer Havok, which is purely CPU-based. Before Intel bought Havok there was Havok FX, with GPU acceleration. The UT3 engine can use PhysX - the 3 example levels look pretty cool with PhysX enabled - but as they want to sell the game to ATI users too, it will never be required. CUDA has been used by several apps, more than Stream, but time will tell how successful the open standard OpenCL will be.

        For Linux users, playback is usually more important than encoding, and there is currently VDPAU - not perfect, but in a working state. UVD(2) for Linux is only being discussed; an implementation is missing. If ATI thinks the Linux market share is not big enough, they forget the new market for small Linux solutions like Splashtop and HyperSpace, which could be used for Blu-ray playback with hardware acceleration. They should think about that...

        • #19
          Originally posted by RealNC View Post
          OK, as of late I'm pretty much fed up. I can't activate PhysX with my ATI card, and the world is moving towards CUDA.
          PhysX is NVidia technology. It will most likely never be accelerated on an ATI card. But currently only a single game (Mirror's Edge) uses the GPU acceleration anyway, and it's just for pretty effects that don't change the game a whole lot. Trying to use an underpowered CPU to run software PhysX is what is killing your framerates, not the ATI card. You will have to play the game without it on your current setup.

          CUDA is also NVidia technology. It will most likely never be executed on an ATI card. CUDA is currently better supported by the industry, but not many end-user programs actually use it.

          So, the question is, why did I buy an AMD/ATI in the first place? It starts to feel outdated lately (no PhysX, no CUDA.) Will AMD do something here? Will you allow us to enable PhysX in our games? I'm not talking about some future-wonder-tech to appear next year. I'm talking about RIGHT NOW. Because right now, I want to get an NVidia since my ATI runs like s***t with the latest PhysX games and no one supports ATI Stream out there, only CUDA.

          PS:
          I feel ripped-off by AMD. Where's the support for the COOL stuff?
          So in other words, you bought the wrong graphics card because you didn't do any research whatsoever, and now you are blaming ATI for your own mistake. Both companies make some fantastic hardware, but they are not making identical hardware. If you want everything NVidia offers, then return your ATI card and get an NVidia one. You will be doing all of us a great favor.

          • #20
            Boy, that's the "Phoronix: The Way It's Meant to Be Flamed" thread!

            • #21
              Originally posted by octoberblu3 View Post
              If you want everything NVidia offers, then return your ATI card and get an NVidia one. You will be doing all of us a great favor.
              Absolutely brilliant comment!

              • #22
                Originally posted by energyman View Post
                so - and with which of the games physx makes any difference?

                also, and that is VERY important, amd CAN'T USE CUDA BECAUSE IT IS NVIDIA'S TECH.
                Maybe you should think a moment before you start edit-quoting people like in #13?
                Same with PhysX. They BOUGHT THE DAMN COMPANY. IT IS THEIRS. What should AMD do about it? Please answer that.

                And OpenCL - well if a game shall run on MacOSX - opencl is the way to go. Linux? opencl. Any other non-MS platform? opencl.
                CMIIW, but I've read somewhere that NVidia offered PhysX and CUDA to AMD, which AMD rejected.
                I've got a sh**ty internet connection here, so I can't confirm it (e.g. by searching for the article(s) on the net). So fast, man! ~2.1 KBps. Yummy. With the advantage of not being able to Google.

                @RealNC
                Oh, and if you want an NVidia card, want to trade with me? I've got an 8600GT, heheh.

                edit: finally, I can Google. I used `nvidia+offer+amd+physx` and there are several results. You can read them yourself.
                Last edited by t.s.; 01-27-2009, 11:19 AM.

                • #23
                  I felt the urge to whine yesterday when I made the thread. It was because of a discussion I had at the house of a friend who owns an NVidia. His dual-boot system (openSUSE + Windows XP) runs so well with his NVidia. He can use PhysX in his games, he showed me a CUDA video *encoder* called "Badaboom" (I think) that was extremely fast, and his Linux desktop is awesome on the NVidia.

                  From AMD all I get is promises. In the Windows world, ATI fanbois tell you "PhysX is hype, DX11 will have physics processing anyway". Yeah, sure - that doesn't help my 4870, which is DX10.1, now does it? In Linux, I get told "there are things in the making that will be awesome". Well, that doesn't help my 4870 RIGHT NOW, now does it?

                  So yes, I'm pissed. I want this stuff too, OK? I did jump on the wrong train with my 4870. That's the realization for me.

                  Sorry if my posts were trollish.
                  Last edited by RealNC; 01-27-2009, 12:38 PM.

                  • #24
                    Just curious, what are we promising other than ongoing improvement?

                    I assume you are aware of the transcoder for your 4870?

                    • #25
                      Well, the point is that NVidia has already had the improvement for quite some time. AMD is still working on it.

                      And I know the following may not be AMD's fault, but my GeCube 4870 says on the box: "Hardware accelerated physics processing on GPU". Which is obviously a lie. I think this claim on the box even breaks some law in some countries. "Deceptive marketing."

                      Edit:
                      On another note, NVidia is pushing their features like there's no tomorrow. Every game out there tells me "the way it's meant to be played". They obviously work with the gaming industry more closely than AMD does. DX10.1 was a failure, ATI Stream is something no one plans to use, and NVidia made PhysX hardware-accelerated on their GPUs. Why isn't AMD trying to keep up here? Wasn't NVidia offering to allow PhysX and even CUDA on ATI hardware? Why didn't AMD go for it? It would benefit its customers.
                      Last edited by RealNC; 01-27-2009, 12:59 PM.

                      • #26
                        The transcoder has been shipping for a while. There were some encoding artifacts reported in certain cases (64-bit OS, maybe?) but AFAIK those have been fixed.

                        I don't know the details, but I was under the impression that we were shipping physics support before the competition, and that was what prompted the PhysX purchase. AFAIK we decided that physics wasn't going to be a big issue in the short term once Intel bought Havok, NVidia bought Ageia (PhysX) and everyone went proprietary. Havok is still being pretty good about working with us, though.

                        A number of hardware vendors and game developers had been working with the physics companies before that, and there are some good examples out there of where physics can help, but I don't see single-vendor APIs catching on with the game developers except for specific, vendor-funded projects. We think focusing on cross-platform APIs like OpenCL and DX11 Compute Shaders is the right way to go, since those APIs *are* likely to be broadly used (OpenCL for GL apps, Compute Shaders for DX apps); hopefully that will turn out to be a good decision.
                        Last edited by bridgman; 01-27-2009, 01:31 PM.
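
For readers wondering what these compute APIs actually look like: OpenCL, CUDA and DX11 Compute Shaders all express work as a small "kernel" function that the runtime invokes once per index of a large global work size, with the GPU free to run those invocations in parallel. The sketch below emulates that execution model in plain Python (all names here are hypothetical stand-ins; real OpenCL kernels are written in a C dialect and launched through a host API):

```python
# Rough emulation of the GPGPU execution model shared by OpenCL, CUDA
# and Compute Shaders (hypothetical Python stand-in, not a real API):
# a kernel runs once per work-item id in the global work size.

def launch_kernel(kernel, global_size, *buffers):
    """Emulate a kernel launch: call `kernel` once per work-item id."""
    for gid in range(global_size):  # a GPU would run these in parallel
        kernel(gid, *buffers)

def saxpy(gid, a, x, y, out):
    """Classic SAXPY kernel: out[i] = a * x[i] + y[i]."""
    out[gid] = a * x[gid] + y[gid]

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch_kernel(lambda gid, *b: saxpy(gid, 2.0, *b), 3, x, y, out)
# out is now [12.0, 24.0, 36.0]
```

The cross-vendor appeal is that only the kernel body is device-specific work; the same model compiles down to whatever hardware the driver exposes, which is why a game using OpenCL or Compute Shaders would run on both ATI and NVidia cards.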

                        • #27
                          Originally posted by bridgman View Post
                          The transcoder has been shipping for a while.
                          I know. I tried it, both back then with my 1950 and now again with my 4870. It's buggy, and it produces videos of the worst quality I've ever seen from a video encoder. There are even artifacts in there. It's just a proof-of-concept application with no real use.

                          I don't know the details, but I was under the impression that we were shipping physics support before the competition, and that was what prompted the PhysX purchase. AFAIK we decided that physics wasn't going to be a big issue in the short term once Intel bought Havok, NVidia bought Ageia (PhysX) and everyone went proprietary.
                          I don't remember physics ever working. The only configuration I know of that somehow worked used 3 graphics cards (lol) on the X19xx generation. No, thanks :P

                          And PhysX isn't only about framerate. The game offers much more detail with hardware PhysX. You don't even get to see this stuff with ATI cards; it gets disabled. Many people think it's just about FPS. No - play, for example, Gears of War on NVidia and you'll see the game is more detailed, with bullets and a crapload of particle effects flying all over the place. "The way it's meant to be played" :P

                          Is it too much to want this on my rig too?

                          A number of hardware vendors and game developers had been working with the physics companies before that, and there are some good examples out there of where physics can help, but I don't see single-vendor APIs catching on with the game developers except for specific, vendor-funded projects.
                          The list of PhysX games grows and grows, though. The gaming industry is making use of it. PhysX is useful. Almost every game needs physics, and Havok and PhysX offer it. But right now, games are going PhysX because it's hardware accelerated on the majority of people's PCs. AMD should have supported it. There's really no excuse other than corporate politics, which are of no value to me.

                          We think focusing on cross-platform APIs like OpenCL and DX11 Compute Shaders is the right way to go
                          Even if AMD releases DX11 drivers for my 4870, it will take time until the gaming industry goes DX11. But I don't think my 4870 will ever see DX11 drivers. So how does DX11 actually help me?

                          NVidia supported even their 8xxx cards with PhysX. *That* is nice support.

                          And I don't know if PhysX is or will be useful on Linux too, but at least it's supported on Windows, and maybe Wine will work with it. AMD completely lacks it either way.
                          Last edited by RealNC; 01-27-2009, 01:37 PM.

                          • #28
                            @RealNC: dude, grow up or get an Nvidia card.

                            I will continue to use my ATI Rage Pro Turbo AGP 2x in bliss... :-) getting a whole 20 FPS or so in Quake.

                            (no, I'm not entirely that deprived - I have a GeForce2 MX 400 as well)

                            DON'T FEED THE TROLLS - he came here to whine, as he said himself.

                            • #29
                              This thread is about real hardware, dude :P

                              • #30
                                Rage is real hardware; no one made it up. I have some Rage cards too.

                                I don't see what the big problem is. Nvidia has proprietary APIs, and so does everyone else. They don't mix'n'match.
