The Fermi!

  • #51
    I have a 780G with the HD3200 integrated on it (I don't use it; I use a 9600 GT), but I tried the 3200 a while back and it wasn't bad on fglrx. I tried various 3D applications and games and everything seemed to work just fine for the most part. While compositing was iffy at the time (can't speak for it now, as I have no idea), not having compositing doesn't break the experience; it's just a bonus.

    I think the "Quake 3 almost works" argument is outdated by a couple of years at this point, though; it definitely works well on fglrx and has for quite some time.

    As for my 9600 GT: I have used this card for over a year and it has worked pretty well. It feels pretty unstoppable on Linux from a performance standpoint, as there isn't much out there that challenges it, sad as that is.

    However, if I were forced to buy a new video card right now, I would probably go ATI, just for the fact that they have more attractive cards in the $100-200 price range right now.

    And as for things like this:

    Originally posted by Kano View Post
    Only ati fanboys buy those cards for Linux usage.
    Yeah... and you're just unbiased, right? I smell an nVidia fanboy. Why is it that nVidia fanboys are the most snotty pieces of... hmm, I'll stop there.

    Originally posted by energyman View Post
    well, since there are no open source drivers for the 6200 ... you just lost the argument from the start.

    I just lean back and rest comfortably on the fact that AMD has already started to release information about the 5XXX series, while Nvidia can't even be assed to help with documentation for the GeForce 4, FX or 6.
    nVidia cards do have an open driver, nouveau, but as you already know that's just reverse engineered and will probably take years before one could actually use it on a production machine. The open source ATI and RadeonHD drivers, on the other hand, will probably be very usable in the next 2-3 months, around the time the big distros like Ubuntu, openSUSE and such include them in their new releases (not the experimental crap in Fedora, even though that was nice, I guess). In fact, I've already heard quite a bit of positive feedback about those open drivers; they should be even better 2-3 months from now.



    • #52
      There will be at most 8,000 GeForce GTX 480 cards at launch, if pconline.com.cn is to be believed (Google translation). So you had better secure your specimen quickly.

      The number of GTX 460 cards will probably be higher, but those are decidedly less interesting for gamers, as they will have a harder time outperforming the Radeon 5870.



      • #53
        Originally posted by Qaridarium
        Be sure, nvidia does have an x86 core; they got it in the past by buying another company.

        And be sure, nvidia bought a whole team of CPU specialists: the Transmeta team. No, they did not buy Transmeta or its patents; they hired the devs and engineers from Transmeta.

        In fact, nvidia could build a VLIW (very long instruction word) chip that emulates an x86 CPU with no x86 license from Intel, like Transmeta did in the past.

        Modern games don't need a high-end CPU.

        If the game runs physics on the GPU, handles video acceleration on the GPU, and does everything else with OpenCL on the GPU...

        ...then I'm sure a refresh of a very old Transmeta CPU, as a quad core at 1 GHz, could handle it; the CPU would only run 'Windows' and pull the data from RAM to the GPU.
        Emulation is not an x86 processor. I am well aware of Transmeta's offerings, including its CMS (a Transmeta-based machine is the last portable I had any use for). That is not, however, what Charlie meant. Also, you're right that nvidia does have an x86 core; it acquired one back when it acquired ULi, but that doesn't mean they have the right to manufacture it.



        • #54
          Originally posted by Qaridarium
          Yes they have! And yes, they have the ability to create an Intel-free CPU and emulate x86.

          Transmeta did this in the past; nvidia will do it in the future!

          That works only because the CPU isn't important anymore.

          For example, remake a six-year-old 1 GHz Transmeta single core as a GlobalFoundries 45 nm or 32 nm quad-core Transmeta-style CPU at 1 GHz; it would consume maybe 25 watts, and nvidia could bring out a platform with a low-end CPU and a high-end GPU focused on OpenCL. No one would miss anything, because the GPU does all the work.

          Modern multicore DX11 rendering, FBO-based rendering, and Bullet physics on OpenCL mean games barely consume CPU; nearly everything is handled by the GPU.

          Why not?
          As a former owner of a Transmeta-based system, I can testify that its performance, even with the Efficeon processor, lagged badly when executing anything remotely demanding. While gaming is GPU-heavy, there is still a point where a sufficient CPU is needed, and with emulation it is very hard to reach that level of performance. Even Intel's Itanium has a hard time keeping up with a basic x86 CPU, because it again has to use emulation to run x86 code. Emulation is a heavy load no matter how you look at it. Combine that with the fact that a lot of software cannot benefit from parallelism, and there is a point beyond which throwing more cores at those apps doesn't do a damn thing. Combine emulation with applications where clock speed can't be replaced by parallelism, and your returns in performance start dropping fast. As for nvidia acquiring a bunch of ex-Transmeta staff, you can rest assured that many of them were brought in for Tegra.
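          (To put a number on the parallelism point: by Amdahl's law, a standard result rather than anything from this thread, if only a fraction p of a program can run in parallel on n cores, the overall speedup is S(n) = 1 / ((1 - p) + p/n). With p = 0.5, even infinitely many cores top out at a speedup of 2; the serial half caps the gain, which is exactly why extra cores can't substitute for clock speed in poorly parallelizable code.)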



          • #55
            Originally posted by Qaridarium
            Sure, sure... but a native x86 core is an illusion!

            Even the Pentium 1 isn't a native CISC core!

            Native x86 cores died a long time ago...

            All modern x86 cores emulate the CISC instruction set for the software; internally there is no CISC (Complex Instruction Set Computing).

            You talk about emulating CISC on VLIW being slow, but AMD and Intel emulate CISC on RISC, and only some extensions like SSE are native.

            ROPs, micro-ops, µOp emulation kill CISC...

            And I tell you the truth: in the future, VLIW emulation kills CISC and RISC!

            Because if Intel has SSE3, you can emulate SSE3; if Intel brings SSE4, you can emulate SSE4; if Intel goes to SSE5, you can emulate SSE5; and so on.

            With VLIW you have one piece of hardware and you can optimize everything in software!

            An HD 5870 is VLIW too!

            CISC and RISC are no longer an option...

            Intel dies a long, long death with x86 technology, because all future projects go for GPU OpenCL and low-end CPUs like Transmeta's super-slow low-end CPUs.

            There is no future for non-VLIW products!

            With VLIW you can save transistors on the wafer, save money, and save energy consumption!
            Hey, I'm not saying something like that won't eventually occur, but it is extremely doubtful it happens within 10 years. Until you can scrap every legacy piece of software out there, it is highly unlikely you will see anything like your grand plans in the near future. At any rate, Charlie is still shooting blanks, as this was supposed to be out two years ago according to his "confirmed sources".



            • #56
              Just to butt in on the conversation: nvidia has done a lot of GPGPU work with Fermi - isn't it supposed to be able to run C++, or some type of C++ code? Now granted, I haven't worked on a big-scale project like that, but I would assume that CPU developers would be useful for GPGPU things too.
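              (For reference, the "runs C++" bit refers to CUDA: Fermi-class chips let device code use C++ features such as templates. Below is a minimal sketch, assuming a CUDA toolchain; the saxpy kernel and every name in it are illustrative, not something from this thread.)

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative example: a templated SAXPY kernel. The device code here is
// C++ (templates included), which is the Fermi-era feature in question.
template <typename T>
__global__ void saxpy(int n, T a, const T* x, T* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side data.
    float* hx = new float[n];
    float* hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device-side buffers.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Instantiate the template for float and launch one thread per element.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // expect 4.0 (2*1 + 2)

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```

              Built with nvcc, the templated kernel gets instantiated for float and fanned out across the GPU's cores; the same source could be instantiated for double on hardware that supports it, which is the kind of work CPU-side developers are well suited to.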
              I'll also chime in on VLIW: that's generally agreed to be a bad idea for CPU architectures. The reason is simple: CPUs run generic code. They have to be able to handle many different types of applications, some including a heavy amount of branching, and let's not forget context switching (so-called "multitasking"). So I doubt that underlying RISC architectures are going anywhere anytime soon.



              • #57
                One could argue that modern x86 CPUs translate the x86 instruction stream dynamically into VLIW instructions which drive the different processing blocks. The key point there is that the size of the VLIW is not visible to the application program, so you don't have to recompile apps when (say) going from a 2-issue to a 4-issue processor.

                This is less of an issue for a GPU (or any environment where the parallelism is implicit in the programming model) because the hardware designers don't need to change the VLIW size to add performance; they can scale "the other way" by increasing the number of simultaneously executed VLIW instructions (i.e. more shaders).

                GPUs also have the advantage of having an implicit compile step between the application and the hardware (unlike CPUs) so again the choice of instruction format can be based on hardware efficiencies more than on the need for opcode portability.
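                (A minimal sketch of that implicit compile step, with an illustrative shader; it assumes a current GL 2.0+ context already exists, created via GLX/SDL/etc. The application only ever hands the driver source text, and the translation to the hardware's native instruction format happens inside glCompileShader:)

```cpp
#define GL_GLEXT_PROTOTYPES   // expose glCreateShader & co. on Linux/Mesa
#include <GL/gl.h>
#include <cstdio>

// A trivial fragment shader, compiled at run time by the driver for
// whatever GPU is actually installed.
static const char* frag_src =
    "#version 120\n"
    "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

GLuint compile_fragment_shader() {
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &frag_src, NULL);
    glCompileShader(sh);  // driver -> native GPU ISA translation happens here
    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) fprintf(stderr, "compile failed on this driver/GPU\n");
    return sh;
}
```

                The same application binary runs unchanged on a 2-wide or 16-wide shader core, because the instruction format is chosen at that run-time compile rather than at application build time.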



                • #58
                  Erg, this is what you get for your first day back at work after the holidays. I meant CISC rather than VLIW - I was getting mixed up. My bad.
                  To correct myself: CISC isn't good for CPUs in general, but VLIW vs. superscalar is another matter.
                  My brain has since rewired itself to work correctly again!



                  • #59
                    Yeah, I find all the TLAs blur together after a while. Especially the four-letter TLAs...



                    • #60
                      Some clarifications:
                      Fermi means the GF100 processor and the 40 nm process, right?
                      Is there any other processor based on the Fermi technology named something other than GF100?
                      The next cards based on this processor will be the GTX 470, GTX 480, GTX 360 and GTX 380.
                      Did I leave anything out?
                      Is there any other card based on the Fermi architecture currently on the market, or ready soon?

                      Thanks in advance

