Clock-For-Clock, Nouveau Can Compete With NVIDIA's Driver


  • ворот93
    replied
    Originally posted by birdie:
    Writing GPU drivers is considered the most difficult and troublesome job in programming - it took NVIDIA years to polish their drivers and they have full specs for their hardware.

    nouveau developers are basically banging against the wall trying to create a good open source alternative to proprietary drivers.
    End users don't pay attention to this bitching about "unfairness". Running games is what they want.

    Still, very impressive work by the nouveau developers. From an engineering PoV it seems rather complete. A few more features and we have our FOSS driver replacement.



  • droidhacker
    replied
    Originally posted by mirza:
    This is big! Hopefully, dynamic clocking is not something incredibly complex (I have no idea). The irony is, this way Linus will probably end up buying one of NVIDIA's cards pretty soon.
    I wouldn't count on that.
    Nouveau is NOT a product of NVIDIA, and it does absolutely nothing to position NVIDIA as a supporter of open source. They remain as hostile as ever. Nouveau is a reverse-engineering/hacking project.



  • artivision
    replied
    Originally posted by bridgman:
    Starting with GCN the hardware went from VLIW SIMD to single-component scalar + SIMD, so LLVM became a viable choice for the compiler, and we used the LLVM compute back-end from Catalyst as a starting point for the GCN shader compiler. I think we are doing what you want.

    The downside is that LLVM has a big learning curve, so six months after the initial userspace push there are still some compiler issues being discovered and worked out. Compiler issues tend to present as visibly nasty bugs, unfortunately, but there's also a "hey, another 200 tests suddenly pass" aspect when each one is fixed.

    The Catalyst graphics shader compiler doesn't use LLVM, which is why we released the compute back-end instead.


    1) I'm speaking only about open-source drivers, and I'm asking: why doesn't your open-source driver target your hardware efficiently? Intel's open-source driver is less than a generation behind their closed one and delivers 80+% of its performance. Most of the work is already there; all that's missing are the optimizations that only you know how to do.

    2) And let's assume someone wants to install your closed driver. The solution is simple: ship it as a closed extension on top of the open driver. All closed components share the same problems: poor quality and poor compatibility. Nothing good comes of something as important as a driver being closed.

    3) A Radeon HD 4800 (1.2-1.5 TFLOPS) is useless on Linux. It can't play many new titles via Wine, like Tera Online (Unreal Engine 3). The problem is that once you drop those cards from Catalyst, they will never play the new stuff. Just release good open-source drivers, so Wine users can rely on them.



  • Morpheus
    replied
    What about BIOS flashing?

    My question is pretty much all in the title: is it possible to hard-set the frequencies in the BIOS and then try nouveau on the modified hardware?

    I'm pretty interested in that, because I still have problems using my IPS226V with the proprietary driver (EDID issues over the DVI link). Ideally I'd get the same performance (the main point) with nouveau, which works with both my screens, including the problematic one.

    Or, if some Phoronix reader has had the same problem and found a working fix (I already tried dumping the EDID info from Windows and putting it in the xorg.conf file), I'm okay with that too.

    All that in preparation for using Steam, hopefully.
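
    On the EDID side, here is a rough sketch of the two usual workarounds (untested on this exact setup; the connector name, monitor name, and file paths are examples that would need adjusting):

        # Dump the EDID the monitor reports via a working connection
        # (check /sys/class/drm/ for your connector's real name):
        cat /sys/class/drm/card0-DVI-I-1/edid > /lib/firmware/edid/ips226v.bin

        # Proprietary NVIDIA driver: feed X a known-good EDID blob from
        # the Device section of xorg.conf instead of probing the monitor:
        #   Option "CustomEDID" "DVI-I-1:/lib/firmware/edid/ips226v.bin"

        # KMS drivers such as nouveau: load the blob at boot with a kernel
        # parameter (name varies slightly between kernel versions):
        #   drm_kms_helper.edid_firmware=edid/ips226v.bin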



  • bridgman
    replied
    Originally posted by artivision:
    With Mesa and Intel we have good compilers, a good synthesizer and FX processor, and good version support. The only thing missing is that we can't target Nvidia or AMD hardware with quality. The problem is that we don't have good target and optimizer libraries for this hardware. LLVM can replace hand-written optimizers, but you still need to integrate things into LLVM, plus you need a good back-end that represents the hardware well. Those bad companies deliberately don't give us even the back-ends, in favor of their blob and their control over us.
    Starting with GCN the hardware went from VLIW SIMD to single-component scalar + SIMD, so LLVM became a viable choice for the compiler, and we used the LLVM compute back-end from Catalyst as a starting point for the GCN shader compiler. I think we are doing what you want.

    The downside is that LLVM has a big learning curve, so six months after the initial userspace push there are still some compiler issues being discovered and worked out. Compiler issues tend to present as visibly nasty bugs, unfortunately, but there's also a "hey, another 200 tests suddenly pass" aspect when each one is fixed.

    The Catalyst graphics shader compiler doesn't use LLVM, which is why we released the compute back-end instead.
    Last edited by bridgman; 07 November 2012, 04:13 AM.
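
    To make the "back-end" requirement concrete, here is a minimal sketch using the LLVM-C API (my own illustration, not AMD's actual code; the "r600--" triple is just an example). Code generation is only possible once someone has written and registered a back-end that claims the GPU's triple, which is exactly the missing piece being discussed above:

        /* Back-end probe: LLVM can only emit machine code for targets
         * whose back-end was compiled into this LLVM build. */
        #include <stdio.h>
        #include <llvm-c/Target.h>
        #include <llvm-c/TargetMachine.h>

        int main(void) {
            /* Register every back-end available in this build. */
            LLVMInitializeAllTargetInfos();
            LLVMInitializeAllTargets();
            LLVMInitializeAllTargetMCs();

            LLVMTargetRef target;
            char *error = NULL;
            /* Fails unless a back-end claims this triple: without one
             * there is no instruction selection and no machine code. */
            if (LLVMGetTargetFromTriple("r600--", &target, &error)) {
                fprintf(stderr, "no GPU back-end: %s\n", error);
                LLVMDisposeMessage(error);
                return 1;
            }
            printf("found back-end: %s\n", LLVMGetTargetName(target));
            return 0;
        }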



  • ryao
    replied
    These benchmarks are only useful to those who want to get the best performance out of the lowest performance state of their GPUs.

    Furthermore, this "out-of-box configuration" concept is fallacious. If people choose their own graphics drivers, they are not opting for an "out-of-box configuration". I doubt that any benchmarks Phoronix produces will be useful to people unless Michael abandons his "out-of-box configuration" mentality. These out-of-box configurations exist largely in his head.
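
    For anyone who wants to check which performance state their card is actually running in, a rough sketch (this relies on nouveau's debugfs reclocking interface, which only exists in newer kernels, needs root and a mounted debugfs, and only works on some chipsets; the card number and level name are examples):

        # List the available performance levels and the current one:
        cat /sys/kernel/debug/dri/0/pstate

        # On supported chipsets, ask nouveau to switch to a level, e.g. "0f".
        # Experimental -- a bad reclock can hang the GPU:
        echo 0f > /sys/kernel/debug/dri/0/pstate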



  • fireboot
    replied
    "Most of the tests in this article are also rather basic GL2 games that aren't too demanding on OpenGL compared to game engines like Unigine and Source"

    Why didn't you run a Unigine benchmark?



  • artivision
    replied
    Originally posted by pingufunkybeat:
    Not GL functionality; it had something to do with the command queue, or stream packing, or something like that, and it might have had to do with the shader compiler and/or VLIW. Bridgman wrote about it and thought that the Radeon/Catalyst gap should shrink as a result. Can't find the thread (here on Phoronix) for the life of me now.

    With Mesa and Intel we have good compilers, a good synthesizer and FX processor, and good version support. The only thing missing is that we can't target Nvidia or AMD hardware with quality. The problem is that we don't have good target and optimizer libraries for this hardware. LLVM can replace hand-written optimizers, but you still need to integrate things into LLVM, plus you need a good back-end that represents the hardware well. Those bad companies deliberately don't give us even the back-ends, in favor of their blob and their control over us.



  • pingufunkybeat
    replied
    Not GL functionality; it had something to do with the command queue, or stream packing, or something like that, and it might have had to do with the shader compiler and/or VLIW. Bridgman wrote about it and thought that the Radeon/Catalyst gap should shrink as a result. Can't find the thread (here on Phoronix) for the life of me now.



  • glisse
    replied
    Originally posted by pingufunkybeat:
    This does sound like clueless blabbering, but there is some truth to it. Nvidia's older designs did put a lot of stuff into hardware that had to be handled by the driver on AMD cards.

    The latest AMD generation (GCN) changed that, IIRC, so it's expected that the drivers will come closer to matching the maximum performance there.
    There is virtually no difference in what is accelerated by hardware or not, since shaders became the norm, i.e. since DX10-class GPUs, or since NV50 or R600 if you prefer.

    Nor are there secret commands on AMD GPUs; anyone can look at what fglrx is doing and see that for themselves.

