NVIDIA GeForce GTX 550 Ti


  • #16
    Originally posted by jarg View Post
    Also, how much tweaking was necessary to get StarCraft working under Wine?
    Did you use winetricks for the task? Or CrossOver? Or just plain Wine?
    There's no need for tweaking (mostly). Any fresh Wine from the 1.3.x series will do. You might need to run "winetricks wininet", because Wine's built-in implementation of this library sometimes causes problems with the SC2 installer and auto-updater (download progress stalls, followed by an unexpected crash to desktop).
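
    For reference, a minimal sketch of that step (the prefix path is just an example - use whatever Wine prefix you run SC2 from):

      export WINEPREFIX=~/.wine-sc2   # example prefix path, not a requirement
      winetricks wininet              # fetches the native wininet DLL and sets the override for this prefix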

    Originally posted by jarg View Post
    I want a cheap build so probably I'll go with a Phenom (can get a quad-core for $85!!!) and the GeForce 550 Ti (since it looks like ATI/AMD is still a long way from having reliable drivers under Linux). What do you guys think?
    Any 3GHz+ Phenom X4 would do (actually any X4 CPU would do, and even any high-frequency - i.e. 3GHz and up - X2 CPU would do). Make sure to have at least 4 GB of RAM to achieve smooth system behavior. As for the video card: if you're aiming at playing SC2 and not targeting future OpenGL games which use complicated render paths (so far there's only one game engine capable of doing this - namely Unigine), you'd get better FPS with a fast card based on the previous generation of nVIDIA GPUs. A fast "OC" version of the GTS 250 with 1 GB of VRAM would perform faster than the 550 Ti. Any GTX-270 card with a wide memory bus would also be faster. nVIDIA cards from the 4xx/5xx series are great if you care about GPU computing power, tessellation and the like, but are slower than previous generations when it comes to the traditional render paths of the DirectX 9 and OpenGL 2.x world.

    P.S. And make sure you go the nVIDIA way - cards from AMD are good, but the drivers are still not as stable as nVIDIA's are.
    P.P.S. Also be sure to use a fresh enough version of Wine, at least 1.3.28, as there were a lot of changes in the d3d emulation that boosted FPS a lot. Note that you still won't be able to set the highest quality settings in SC2 due to some render bugs showing up when shader quality is set to anything higher than "medium". To display the in-game FPS counter use the Ctrl+Alt+F combo (or Ctrl+Shift+F, or Ctrl+Alt+Shift+F - I don't remember which one it is exactly).
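
    A quick sanity check before installing (example commands; exact packaging depends on your distro):

      wine --version        # should report wine-1.3.28 or newer
      winetricks --version  # keeping winetricks itself fresh helps too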



    • #17
      In VGAs we count "stream teraflops" and not "instruction teraflops". A GTX 280 has 240 32-bit MIMD shaders with MADD+MUL (3 ops) at 1.3-1.5 GHz = 1+ teraflop. A GTX 480 has 512 64-bit MIMD shaders (2x 32-bit ops) with FMAC (3 ops) at 1.3-1.5 GHz = 4+ teraflops. The best Radeon HD has 1600 32-bit SIMD shaders with MADD (2 ops, or 1.6 FMAC ops) at 800-900 MHz = 2 teraflops in FMAC mode. The best single nVIDIA is two times faster than a single AMD and four times faster than the previous nVIDIA. GTX 400 vs GTX 500 there is no difference, GTX 200 vs GTX 300 there is no difference, GTX 8000 vs GTX 9000 there is no difference. Best buy: a used GTX 460, the old model with 512 cores (of course cut down, to 384 for example), but you can unlock most of them (480) with another BIOS. At least 4-5 teraflops are yours with some overclocking. Price: 90-100 US dollars!!!



      • #18
        Also, about Wine and winetricks. AMD cards are a failure regardless of their power: AMD's VLIW architecture can't keep up with d3d-to-ogl translation. If the game has a native OpenGL renderer then it's usually fine. Only nVIDIA is for Linux, so sell your Radeon and buy nVIDIA - a little money can buy you freedom, don't go back to Windows. As for winetricks configuration, just go to the AppDB on the WineHQ page and find your app or game; you will see what it takes to make it work - don't just do what's in your head. If you still can't make it work, launch it from a console: right-click on your game's installation folder and use the "open console" file manager option, then run e.g. "wine lineage2.exe". You will see what the problem is. If a DLL misbehaves, for example, then just go to a dll-files page, download the DLL, unpack it and copy-paste it into Wine's system32 folder.
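
        To illustrate the console approach above (the install path and .exe name are only placeholders):

          cd ~/.wine/drive_c/Games/Lineage2     # example path inside the Wine prefix
          wine lineage2.exe                     # run from the terminal and watch the output for errors
          # to make Wine prefer a native DLL dropped into drive_c/windows/system32, e.g. d3dx9_36.dll:
          WINEDLLOVERRIDES="d3dx9_36=n" wine lineage2.exe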



        • #19
          specific models with prices

          I very much appreciate your responses!

          Which way should I go:
          GIGABYTE GV-N26UD-896M REV2.0 GeForce GTX 260 896MB 448-bit GDDR3 for $99
          or
          Galaxy 25SGF6HX1RUV GeForce GTS 250 1GB 256-bit DDR3 for $79

          I'm looking for the most cost-efficient way to do this so any suggestions are welcome!



          • #20
            Originally posted by artivision View Post
            In VGAs we count "stream teraflops" and not "instruction teraflops"...
            ...and the only thing a typical end-user really cares about is not any type of the above "teraflops", but the resulting in-game FPS. And the latter, unfortunately, is not a direct product of any kind of "*flops". The quantity, bit width and working frequency of the ROPs have a direct influence on the render target fillrate; the quantity of TMUs and their microarchitecture have a direct influence on the texture fillrate and on the processing speed of offscreen render targets. The so-called "stream processors" or "CUDA cores", which would more correctly be called "FUs (functional units)", are what (for the most part) determines the amount of "*flops" you have been writing about. Memory bus width, type (double or quadruple transfer rate) and flavor (GDDR vs. "ordinary" DDR) cap the maximum achievable performance: the narrower the bus and the cheaper the memory modules, the worse the final achievable performance will be. All in all, in-depth GPU architecture analysis is interesting for the curious, but it's not what an ordinary user typically cares about. The resulting FPS and a video card's cost are.

            This is why at the moment I advise sticking with the fastest cards from the GeForce 2xx series for now: despite being slower in computational power compared to the "Fermi" series, they are still pretty fast for traditional rendering pipelines - and that's what matters when playing under Wine, as it emulates the older Direct3D 9 API on top of the older OpenGL 2.x/3.0 API - and the speed of this emulation is more sensitive to the speed of performing traditional tasks like brute-force output and texture fillrate, triangle setup/T&L speed, and the speed of pretty simple vertex/fragment shader processing (GLSL 1.2 level at most). There's no point in having a mighty "Fermi"-based card which offers a decent speed of GPGPU computations, tessellation, blackjack and hookers, while the real app running under Wine wouldn't use any of these.
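
            If you're curious which GL level your driver actually exposes for that traditional path, a quick check looks like this (glxinfo usually ships in a mesa-utils-style package; the output shown is only an example):

              glxinfo | grep "OpenGL version"    # e.g. "OpenGL version string: 3.3.0 NVIDIA ..."
              glxinfo | grep "OpenGL renderer"   # confirms which GPU/driver the GL stack is using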

            Originally posted by artivision View Post
            ... If a DLL misbehaves, for example, then just go to a dll-files page, download the DLL, unpack it and copy-paste it into Wine's system32 folder.
            You're absolutely right about following Wine's AppDB advice when configuring Wine to best suit the target app, but the part about downloading DLLs, while correct in general, is a "dirty" approach. It is illegal to download and use many of the aforementioned DLLs unless you own a Windows license, which is not always the case for people using Wine under Linux and/or FreeBSD :-). In case you don't care about licensing issues, manually downloading and installing DLLs is a good way to go, but winetricks might still be handy for automating these routines.
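
            For example, the usual suspects can be pulled from the official redistributables through winetricks instead of from random DLL sites (the verbs below are common ones - check your app's AppDB page for what it actually needs):

              winetricks d3dx9      # D3DX9 helper DLLs from the DirectX redistributable, with the proper overrides
              winetricks vcrun2008  # a frequently recommended VC++ runtime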



            • #21
              GeForce GTS 250

              Thank you for your advice!
              Before I pull the trigger I just want to make sure this specific card is fast enough:
              Galaxy 25SGF6HX1RUV GeForce GTS 250 1GB 256-bit DDR3
              I can get it for $50 which is dirt cheap!
              Comments?



              • #22
                You're totally wrong about the VGA part. A shader can process raster and texture mapping data, but using ROPs and TMUs makes it faster (if you theoretically cut all TMUs and ROPs, the VGA will still produce the same graphics at 60% of the speed). ROPs and TMUs exist in whatever quantity is needed to assist the stream processors, so you count only teraflops. A 512-bit Fermi has 700 64-bit instruction gigaflops at 1.3-1.4 GHz, but you must count stream 32-bit simple add functions. So you multiply by 6 (FMAC = 3 ops, 64-bit dual-issue cores = 2 ops), and it's 4+ teraflops. Cell BE, for example, has 250 instruction gigaflops or 3 stream teraflops; the RSX (200 gigaflops) uses 6 SPEs = +1.85 teraflops. In d3d-to-ogl translations, gigaflops don't matter much; what matters is the instruction set - if you have many emulation and JIT instructions then you are fast. See the L3C part for example: http://en.wikipedia.org/wiki/Loongson



                • #23
                  1) D3d-to-ogl translations have nothing to do with VGA power or generation. The best VGA is the one with good emulation instructions in its instruction set. So CUDA has 90% translation efficiency while VLIW has 30%.
                  2) Wine supports state-trackers for d3d and hlsl. You can install dx11 from winetricks and Wine will run it inside OpenGL, with heavy translations like tessellation. So newer VGAs are better.
                  3) The newest graphics engines like Unigine, Unreal 3, Cry 3, idTech4, idTech5, are unified and API-less. So the newest games will be d3d and ogl equal, without the need for translations. Old ones like Unreal 2 want translations or a bigger effort to be ogl-friendly.
                  4) Do as I say and buy a $90 GTX 460, do a BIOS update to unlock more cores and some overclocking.



                  • #24
                    90??

                    Originally posted by artivision View Post
                    4) Do as I say and buy a $90 GTX 460, do a BIOS update to unlock more cores and some overclocking.
                    OK, so where do I find a $90 GTX 460?? Because they start at $160!



                    • #25
                      Originally posted by artivision View Post
                      You're totally wrong about the VGA part...
                      Making such statements without quoting the original statements that you believe to be "totally wrong" is nothing but trolling.

                      Originally posted by artivision View Post
                      ... so you count only teraflops....
                      A video card end-user who uses the GPU for games usually counts exactly one thing: FPS. Teraflops, VLIW4 vs. classical RISC and all other such things are only interesting for the curious, geeks and GPGPU users. This fact is pretty simple and obvious.



                      • #26
                        Originally posted by jarg View Post
                        Galaxy 25SGF6HX1RUV GeForce GTS 250 1GB 256-bit DDR3
                        DDR3 is a show-stopper here (for a comparison diagram look here: http://www.ixbt.com/video3/guide/guide-06.shtml ; the article is written in Russian, but the diagram I'm referring to - the one with "Far Cry 2" at the top right - uses only English words and is pretty self-explanatory). Look for GDDR5 or, at the very least, GDDR3. Ordinary DDR3 uses 4x data transfers per clock, while GDDR3 is based on DDR2 and uses a 2x transfer rate per tick. So in case you aim at a GDDR3-equipped card, make sure it has at least a 256-bit memory bus and memory chips running at the highest frequency possible.
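
                        As a rough back-of-the-envelope comparison: peak bandwidth is roughly the bus width in bytes times the effective transfer rate (the transfer rates below are only illustrative, not the exact specs of the cards discussed):

                          echo $(( 256 / 8 * 2200 ))   # 256-bit GDDR3 at ~2200 MT/s -> 70400 MB/s
                          echo $(( 192 / 8 * 4100 ))   # 192-bit GDDR5 at ~4100 MT/s -> 98400 MB/s
                          echo $(( 256 / 8 * 1600 ))   # 256-bit plain DDR3 at ~1600 MT/s -> 51200 MB/s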

                        As for the GPU the GTS 250 is based on vs. SC2 under Wine: my old GTS 250 from GigaByte was able to run SC2 under Wine at around 32-40 FPS when playing full screen at 1680x1050 without AA, with forced 16x anisotropic filtering and the in-game graphics settings all set to max except for shaders. The latter were set to "medium", which forced lighting to "low" and post-processing to "medium". Now, with a GeForce GTX 550 Ti equipped with 1 GB of GDDR5 VRAM and a 192-bit bus to access it, I get around 40-45 FPS with the same settings. Setting shaders to "high" or "ultra" resulted in a huge FPS drop with older versions of Wine. Nowadays the performance drop isn't that big, but it causes rendering glitches, so I prefer to stick with lower quality settings and play with higher FPS and a visually correct picture. Lowering the shader setting to "low" raised FPS to 50-60 on the GTS 250. Doing the same with the 550 Ti results in a smooth 60 FPS, so if you're really in search of smoothness and don't mind lowering quality - that's the way to go. The typical difference in speed between older generations of GPUs can be illustrated by this graph: http://www.ixbt.com/video/itogi-vide...1680-pcie.html

                        P.S. If you feel adventurous, have a good power supply installed in your PC and are experienced enough to hack around with a video card BIOS reflash to unlock cores - it might be OK to go the way artivision suggests. Be prepared, though, that "Fermi" cards are pretty hot and power-hungry, and you'll also have to be lucky enough to find a card that suits your needs at a low enough cost on the second-hand market. If you don't want all those adventures and simply want to buy-install-play something cheap (i.e. less than 150-200 USD) yet fast enough - aim at something like a second-hand factory-OC version of the GTS 250 from GigaByte with a Zalman cooler (http://www.ixbt.com/video3/images/gi...scan-front.jpg, model GV-N250OC-1GI) or anything more modern you would be lucky to get for a good price. What to avoid: a GT 240 or less, a GTX 260 with fewer than 216 stream processors unlocked, any card with a memory bus narrower than 192-bit, any card with a non-GDDRx type of memory installed, and dual-GPU models.



                        • #27
                          Originally posted by artivision View Post
                          1) D3d-to-ogl translations have nothing to do with VGA power or generation. The best VGA is the one with good emulation instructions in its instruction set. So CUDA has 90% translation efficiency while VLIW has 30%.
                          That's exactly the point: VGA power, generation, etc. have nothing to do with the d3d-to-ogl translation done by Wine, and they also don't matter at all to the end user as long as the card does its job and renders the desired picture at 60 FPS. 90% or 30% efficiency at the low-level instruction set level doesn't matter to most people out there, as not everyone is a GPGPU user and/or geek.

                          2) Wine supports state-trackers for d3d and hlsl. You can install dx11 from winetricks and Wine will run it inside OpenGL, with heavy translations like tessellation. So newer VGAs are better.
                          Do you really believe what you write here :-)? Have you ever tried it at home with any of the latest DirectX 11-capable titles? If "yes" and "yes" - did it work? If "yes" again, would you mind posting a video on YouTube that proves, for example, that DX11 features really work in, say, "Batman" when played under Wine with native DX11 libs installed through winetricks?
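
                          If anyone wants to test that claim, the experiment would look roughly like this (the .exe name is just a placeholder, and whether the game actually takes a DX11 path under Wine is exactly what's in question):

                            winetricks d3dcompiler_43 d3dx11_43   # native D3D11-era helper DLLs via winetricks
                            WINEDEBUG=+d3d wine Batman.exe        # the +d3d debug channel shows what the d3d layer is doing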

                          3) The newest graphics engines like Unigine, Unreal 3, Cry 3, idTech4, idTech5, are unified and API-less. So the newest games will be d3d and ogl equal, without the need for translations. Old ones like Unreal 2 want translations or a bigger effort to be ogl-friendly.
                          Tell that to the engine creators out there. The idTech4 engine you mentioned has an ogl-based built-in renderer and is pretty old. idTech5 was based on idTech4 and also officially uses ogl on the PC. Essentially it is the only "triple A" engine out there that has an ogl rendering backend for the win32/64 platform. Unigine is a cool and Linux-friendly engine which has multiple rendering backends, including ogl and various versions of d3d, but unfortunately this engine isn't widely used by gamedev companies yet (I hope the situation will change in the near future). Anything else out there only officially supports DirectX 9, 10 or 11 on PCs, despite the fact that the engines are really API-agnostic and an ogl render backend could be written as easily as an egl or d3d9 one. SC2 is a pretty good example: it has support for ogl in its engine (as it's the API SC2 uses on Mac OS X to do rendering), but it was cut from the Windows build of SC2 during the open beta-testing phase.



                          • #28
                            I think we agree on most things, but I still disagree about the newest graphics engines like Unigine. In Unigine you don't have the choice (as the documents say) to create different back-ends (d3d, ogl); you can only auto-generate both. There is no choice of a Linux-only or Windows-only policy. And as for the $90 GTX 460, I mean used, not new.



                            • #29
                              Originally posted by artivision View Post
                              I think we agree on most things, but I still disagree about the newest graphics engines like Unigine. In Unigine you don't have the choice (as the documents say) to create different back-ends (d3d, ogl); you can only auto-generate both. There is no choice of a Linux-only or Windows-only policy. And as for the $90 GTX 460, I mean used, not new.
                              Indeed we agree on most things, and it's also obvious that both you and I are geek GPGPU users :-). As for Unigine - I haven't had a chance to look into the sources/SDK/anything like that, and have only had a chance to run their products, including benchmarks like Tropics and Heaven and also the OilRush game. I tried them on both Linux and Windows hosts, and from what I've seen, the engine lets you select the rendering backend you wish to use (provided the required API is supported by the underlying OS), and the visual results for, say, the DX10 vs. OpenGL 3/4 backends look pretty much the same. Knowing that they have ported the engine to Android (thus they must be using something like EGL/OpenGL ES there) and that the feature set available through D3D9 is pretty limited compared to D3D10/11 or OGL3/4 made me believe that the engine itself is pretty modular and that adding another rendering backend using yet another API (PowerVR, Glide, name-anything-you-like) shouldn't be a big deal. IMO that's just the way I expect a modern game engine to be - I don't like being artificially limited to a selected API (or a subset of it), as that would limit the portability of the resulting product - and that's not a good thing for today's gaming market, where you want to support as many target platforms/devices as possible. Xbox, PS3, iOS, Android, Win32/64, Linux/FreeBSD, Mac OS X - the more the better.

                              Actually this discussion is getting a bit off-topic here, but to conclude I want to mention one fact: as a developer I don't want to use any of the APIs and would prefer to have direct access to hardware features. Multiple levels of abstraction are a thing that plagues the PC platform and drains a lot of performance. Yeah, having a "uniform spherical hardware in a vacuum" offered for access through a higher-level API is convenient when writing some quick-n-hackish 3D app using a utility lib like freeglut, but as soon as things get complicated you eventually hit some strange behavior that can only be explained by diving deeper into the real hardware capabilities - and you end up finding out that you're actually hitting some hardware limitation that the API ICD tries to silently work around using a slow-as-hell software fallback. That's why I like OpenCL and look forward to its wide adoption - it can be used in a way that gives you almost direct access to the underlying ASIC and thus lets you use it up to its limits.
                              Last edited by lexa2; 12-13-2011, 02:03 PM.



                              • #30
                                I totally agree that OpenCL and LLVM are the future in our situation. I will go a step farther and say that non-reconfigurable processors must go extinct. See "Tabula Abax 8-20 floors" for example, and imagine it flash-based (10+ times more energy efficient). It could even match an 8x Fermi server in a cellphone, using a soft IP core like MIPS or OpenCores.

