NVIDIA GeForce GTX 550 Ti


  • #11
    Originally posted by lexa2 View Post
    a long-time user of the GTS-250 until I recently did an "upgrade" to the GTX 550 Ti. The speed difference between these cards turned out to be negligible.
    Thanks for the info. Unfortunately I have this problem every time I feel like upgrading, which makes it so painful. Tom's Hardware used to have all the video cards across several generations in the same charts, which made it a lot easier to tell whether one card was approximately twice as fast or ten times as fast as another.



    • #12
      Originally posted by grotgrot View Post
      Thanks for the info. Unfortunately I have this problem every time I feel like upgrading, which makes it so painful. Tom's Hardware used to have all the video cards across several generations in the same charts, which made it a lot easier to tell whether one card was approximately twice as fast or ten times as fast as another.
      You are welcome. Yeah, that's definitely a problem with hardware review sites. Personally I'm not a big fan of Tom's Hardware, as there's an almost perfect Russian-language hardware review site, ixbt.com (I believe they have an English-language version at http://ixbtlabs.com/). That site has been online almost forever (I remember using it back in 1995, when it was called ixbt.stack.net), and years ago they regularly published hardware comparisons spanning several generations. It was very handy for finding out how many times faster the brand-new GeForce 3 Ti 200 was compared to the TNT2 Ultra installed in my PC. Later they switched to a monthly "i3DSpeed" review schedule, which made direct comparisons a bit more difficult - still possible, but requiring some hand-work.

      For example, it wasn't a surprise to me that the 550 Ti would be roughly on par with the GTS-250: i3DSpeed had published tests directly comparing the GTS-250 to the GTS-450, and other tests directly comparing the GTS-450 to the GTX 550 Ti, so all I had to do was a three-way compare. Needless to say, I would be happier if iXBT and/or Phoronix published direct comparisons for the last three generations of GPUs so I wouldn't be forced to do all that hand-work just to compare.
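
      Just to illustrate, the three-way compare is nothing more than chaining two relative results. A rough Python sketch, with made-up FPS numbers standing in for the real i3DSpeed figures:

      ```python
      # Chaining two published comparisons to estimate GTS-250 vs. GTX 550 Ti.
      # The FPS values are placeholders, not real i3DSpeed results.
      fps_review_a = {"GTS-250": 60.0, "GTS-450": 55.0}     # review A: GTS-250 vs GTS-450
      fps_review_b = {"GTS-450": 44.0, "GTX 550 Ti": 50.0}  # review B: GTS-450 vs GTX 550 Ti

      # Relative speed of GTS-450 vs GTS-250, and of GTX 550 Ti vs GTS-450.
      r1 = fps_review_a["GTS-450"] / fps_review_a["GTS-250"]
      r2 = fps_review_b["GTX 550 Ti"] / fps_review_b["GTS-450"]

      # Chained estimate of GTX 550 Ti relative to GTS-250.
      print(f"GTX 550 Ti is roughly {r1 * r2:.2f}x the speed of a GTS-250")
      ```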



      • #13
        Looking forward to the huge NVIDIA comparison then.



        • #14
          Originally posted by lexa2 View Post
          You write that you've got an 8800GT, which was later rebranded as the 9800 and a bit later rebranded again as the GTS-250. Essentially the 8800, 9800 and GTS-250 are all the same GPU. As I wrote above, I was a long-time user of the GTS-250 until I recently did an "upgrade" to the GTX 550 Ti. The speed difference between these cards turned out to be negligible. In older apps like Q3A or HL2 under Wine I got fewer FPS than I was getting with the GTS-250. On the other hand, apps that use complicated shaders (especially tessellation) run way faster on the GTX 550 Ti. CUDA calculations also seem to run faster on the 550 Ti. In my case it makes sense, as I do some OpenGL programming as a hobby and like to use CUDA for intensive calculations. For an ordinary gamer it wouldn't make sense to swap a fast 8800/9800/GTS-250 for a GTX 550 Ti, as it would mean a performance drop in most of the available Linux games. The same goes for gaming under Wine - FPS would very likely regress there too, since the wined3d implementation doesn't use any of the fancy OpenGL 3.x/4.x compute features that could make a difference and improve FPS numbers on "Tesla" and "Fermi" GPUs.
          This is not entirely correct.
          I had a 9800GT Green, which was a bit noisy under load, yet inaudible when idling. The biggest problem was that the fans kept spinning until the NVIDIA driver was loaded, which meant 20 seconds of noise until X started up - or 2 minutes if using Windows (not that I use it, but I remember).

          Later I also got my hands on a friend's 8800GT, and it was a LOT hotter and MUCH noisier.

          The chip you mentioned, the G80 seen in the 8800, was modified and refined several times before NVIDIA brought a new architecture to market (with broken Linux drivers for it, until that was sorted out). They improved the 8800's G80 in terms of architecture, manufacturing process, power consumption and so on, and that was packed into the 9800. This is NOT entirely a "rebrand", since NVIDIA's strategy BEFORE Fermi was to make one well-polished chip and then cut it down to suit different market segments. Since Fermi they have gone the AMD way - producing several chips for different segments right from the start (hence NVIDIA's Fermi drivers were, and still are, a bit crappy on Linux).

          Now, after "experimenting" with AMD's open-source strategy and an HD4770, I had to return to NVIDIA (sadly) and purchased a GTX 260 SP216 with 1792MB of GDDR3.
          Being basically a heavily improved G92b multiplied by two, the G200b (not G200) used in the 260-285 is merely WARM at boot and under normal work (~50C); the fans DO NOT SPIN up until the NVIDIA driver is loaded (meaning that when I boot a Gentoo-based SysrescueCD with the nouveau driver, the card stays warm and inaudible).
          Unless stressed extremely hard, the GTX 260 is barely audible and hardly exceeds 70C at max load. Note that the G200 was the biggest GPU chip ever produced in terms of die size, yet they still managed that. Part of it may be due to my Gainward card having a non-stock cooler (similar to those in this test), i.e. here.

          For me, even though open-source drivers will hardly deliver desirable results for the next 5-6 years, using AMD's proprietary drivers is still not acceptable. I think that if AMD keeps pushing the proprietary driver forward, it is possible to sort out most of the WINE bugs as well as the native ones, and to get video decode going either via OpenCL or by improving the current situation with XvBA (compared to VDPAU - I really hope they UNITE and STANDARDIZE this). STILL, I fear that on the AMD side a card loses developer attention VERY SOON compared to NVIDIA. Old Xorg, bugs, etc.

          Regarding the low performance, maybe it's true. The Fermi GPUs are better at OpenGL 4+ and have a core configuration better suited to heavy shader use. Then again, NVIDIA disabled some OpenGL features that used to be accelerated on non-Quadro cards and that apps may rely on, so that people don't ditch the overpriced Quadros. That "castration" has affected games and regular apps on Linux too, so... that's one of the reasons I ditched Fermi for now. Not cool, not cool at all. One should be paying for real support, not for some overpriced wires.



          • #15
            StarCraft 2 in Linux!!!!

            Originally posted by dizzey View Post
            I upgraded my GeForce 8800GT 640MB, which is one of the first 8800-series cards.
            I wanted to be able to play StarCraft 2 in Wine at 1080p resolution, and the game was faster with the GeForce 550.
            For me the upgrade was worth it, but then I didn't have a huge requirement list.
            Could you tell me your configuration, i.e. mobo, CPU...? And what distribution/kernel are you running?
            Also, how much tweaking was necessary to get StarCraft working under Wine?
            Did you use winetricks for the task? Or CrossOver? Or just plain Wine?

            I'm interested in getting a desktop that can play StarCraft under Wine, and that's my highest requirement!
            I want a cheap build, so I'll probably go with a Phenom (can get a quad-core for $85!!!) and the GeForce 550 Ti (since it looks like ATI/AMD is still a long way from having reliable drivers under Linux). What do you guys think?

            At some point I was thinking of getting an A8-3850, which makes for a cheaper build than the above, but I doubt the state of Linux drivers/support is as good right now. And by that I mean the out-of-the-box experience, because I don't have the patience to build anything from source or compile kernels...

            Thanks for any advice!
            Last edited by jarg; 12 December 2011, 12:15 AM.



            • #16
              Originally posted by jarg View Post
              Also, how much tweaking was necessary to get StarCraft working under Wine?
              Did you use winetricks for the task? Or CrossOver? Or just plain Wine?
              There's (mostly) no need for tweaking. Any fresh Wine from the 1.3.x series will do. You might need to run "winetricks wininet", because Wine's builtin implementation of that library sometimes causes problems with the SC2 installer and auto-updater (download progress stalls, followed by an unexpected CTD).

              Originally posted by jarg View Post
              I want a cheap build, so I'll probably go with a Phenom (can get a quad-core for $85!!!) and the GeForce 550 Ti (since it looks like ATI/AMD is still a long way from having reliable drivers under Linux). What do you guys think?
              Any 3GHz+ Phenom X4 will do (actually any X4 CPU will do, and even any high-frequency - i.e. 3GHz and up - X2 CPU will do). Make sure to have at least 4GB of RAM for smooth system behavior. As for the video card - if you're aiming at playing SC2 and not targeting future OpenGL games that use complicated render paths (so far there's only one game engine capable of that, namely Unigine) - you'd get better FPS with a fast card based on the previous generation of nVIDIA GPUs. A fast "OC" version of the GTS-250 with 1GB of VRAM would perform faster than the 550 Ti. Any GTX-2xx card with a wide memory bus would also be faster. nVIDIA cards from the 4xx/5xx series are great if you care about GPU computing power, tessellation and the like, but are slower than previous generations when it comes to traditional render paths from the DirectX 9 and OpenGL 2.x world.

              P.S. And make sure you go the nVIDIA way - cards from AMD are good, but the drivers are still not as stable as nVIDIA's.
              P.P.S. Also be sure to use a fresh enough version of Wine, at least 1.3.28, as there were a lot of changes in d3d emulation that boosted FPS considerably. Note that you still won't be able to use the highest quality settings in SC2, due to some render bugs that show up when shader quality is set to anything higher than "medium". To display the in-game FPS counter, use the Ctrl+Alt+F combo (or Ctrl+Shift+F, or Ctrl+Alt+Shift+F - I don't remember exactly which it is).



              • #17
                In VGAs we count "stream teraflops" and not "instruction teraflops". A GTX 280 has 240 32-bit MIMD shaders with MADD+MUL (3 ops) at 1.3-1.5GHz = 1+ teraflop. A GTX 480 has 512 64-bit MIMD shaders (2x 32-bit ops) with FMAC (3 ops) at 1.3-1.5GHz = 4+ teraflops. The best Radeon HD has 1600 32-bit SIMD shaders with MADD (2 ops, or 1.6 FMAC ops) at 800-900MHz = 2 teraflops in FMAC mode. The best single NVIDIA is two times faster than a single AMD and four times faster than the previous NVIDIA. GTX 400 vs GTX 500 there is no difference, GTX 200 vs GTX 300 there is no difference, 8000 vs 9000 series there is no difference. Best buy: a used GTX 460, the old model with 512 cores (cut down to 384, for example), but you can free most of them (480) with another BIOS. At least 4-5 teraflops are yours with some overclocking. Price: 90-100 US dollars!!!
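
                The arithmetic behind those figures is simply shader count x ops per shader per clock x shader clock. A rough Python sketch of it - the 1.4GHz and 850MHz clocks are assumed round numbers, and peak FLOPS alone says nothing about in-game FPS:

                ```python
                # Theoretical peak throughput: shaders * ops per shader per clock * clock.
                # Clock figures are assumed round numbers; real boards vary by model.
                def peak_tflops(shaders, ops_per_clock, clock_ghz):
                    return shaders * ops_per_clock * clock_ghz / 1000.0

                # GTX 280: 240 shaders, MADD+MUL = 3 ops per clock, ~1.4 GHz shader clock
                print(f"GTX 280: ~{peak_tflops(240, 3, 1.4):.1f} TFLOPS")
                # Radeon HD 5870: 1600 shaders, ~1.6 FMAC ops per clock, ~0.85 GHz
                print(f"HD 5870: ~{peak_tflops(1600, 1.6, 0.85):.1f} TFLOPS")
                ```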



                • #18
                  Also, about Wine and winetricks: AMD cards are a failure regardless of their power. They can't survive the d3d-to-OpenGL translation with AMD's VLIW architecture. If the game has native OpenGL then it's usually fine. Only NVIDIA works well for Linux, so sell your Radeon and buy NVIDIA - a little money can buy you freedom, don't go back to Windows. As for winetricks configuration, just go to the AppDB on the WineHQ page and find your app or game; you will see what it takes to make it work - don't just do what's in your head. If you still can't make it work, launch it from the console: right-click your game installation folder, use the "open console" file manager option, then run, for example, wine lineage2.exe. You will see what the problem is. If a DLL misbehaves, for example, then just go to a dll-files page, download the DLL, unpack it and copy it into Wine's system32 folder.



                  • #19
                    specific models with prices

                    I very much appreciate your responses!

                    Which way should I go:
                    GIGABYTE GV-N26UD-896M REV2.0 GeForce GTX 260 896MB 448-bit GDDR3 for $99
                    or
                    Galaxy 25SGF6HX1RUV GeForce GTS 250 1GB 256-bit DDR3 for $79

                    I'm looking for the most cost-efficient way to do this so any suggestions are welcome!



                    • #20
                      Originally posted by artivision View Post
                      In VGAs we count "stream teraflops" and not "instruction teraflops"....
                      ...and the only thing a typical end user really cares about is not any of the above "teraflops", but the resulting in-game FPS. And the latter, unfortunately, is not a direct product of any kind of "*flops". The quantity, bit width and operating frequency of the ROPs directly influence the render-target fillrate; the quantity of TMUs and their microarchitecture directly influence the texture fillrate and the speed of processing off-screen render targets. The so-called "stream processors" or "CUDA cores" - which would be more correct to call FUs (functional units) - are what (for the most part) determines the amount of "*flops" you were writing about. Memory bus width, type (double or quadruple transfer rate) and flavor (GDDR vs. "ordinary" DDR) cap the maximum achievable performance - the narrower the bus and the cheaper the memory modules, the worse the final achievable performance. All in all, in-depth GPU architecture analyses are interesting for the curious, but they are not what an ordinary user typically cares about. The resulting FPS and a video card's cost are.

                      This is why ATM I advise sticking with the fastest cards from the GeForce 2xx series for now: despite being slower in computational power than the "Fermi" series, they are still pretty fast for traditional rendering pipelines - and that matters when playing under Wine, as it emulates the older Direct3D 9 API on top of the older OpenGL 2.x/3.0 API - and the speed of this emulation is more sensitive to the speed of traditional tasks like brute-force pixel output and texture fillrate, triangle setup/T&L speed, and fairly simple vertex/fragment shader processing (GLSL 1.2 level at most). There's no point in having a mighty "Fermi"-based card that offers decent GPGPU computation speed, tessellation, blackjack and hookers, while the actual app running under Wine won't use any of it.
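
                      As a rough sketch, those "traditional" numbers boil down to back-of-the-envelope fillrate and bandwidth arithmetic. In Python, with commonly cited reference specs (assumptions as far as any particular board goes):

                      ```python
                      # Back-of-the-envelope "traditional" throughput figures.
                      # ROP/clock/bus numbers are commonly cited reference specs (assumptions).
                      def pixel_fillrate_gpix(rops, core_clock_mhz):
                          # Pixel fillrate in Gpixels/s: ROPs * core clock.
                          return rops * core_clock_mhz / 1000.0

                      def mem_bandwidth_gbs(bus_width_bits, effective_mem_clock_mhz):
                          # Bandwidth in GB/s: bus width in bytes * effective memory clock.
                          return bus_width_bits / 8 * effective_mem_clock_mhz / 1000.0

                      # GeForce GTX 260: 28 ROPs @ 576 MHz, 448-bit bus, ~2000 MHz effective GDDR3
                      print(pixel_fillrate_gpix(28, 576), mem_bandwidth_gbs(448, 1998))   # ~16 Gpix/s, ~112 GB/s
                      # GeForce GTX 550 Ti: 24 ROPs @ 900 MHz, 192-bit bus, ~4100 MHz effective GDDR5
                      print(pixel_fillrate_gpix(24, 900), mem_bandwidth_gbs(192, 4104))   # ~22 Gpix/s, ~98 GB/s
                      ```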

                      Originally posted by artivision View Post
                      ... If a DLL misbehaves, for example, then just go to a dll-files page, download the DLL, unpack it and copy it into Wine's system32 folder.
                      You're absolutely right about following Wine AppDB advice when configuring Wine to best suit the target app, but the part about DLL downloading, while correct in general, is a "dirty" approach. It is illegal to download and use many of the aforementioned DLLs unless you own a Windows license, which is not always the case for people using Wine under Linux and/or FreeBSD :-). If you don't care about licensing issues, manually downloading and installing DLLs is a workable way to go, but winetricks might still be handy for automating these routines.

