"Mega Drivers" Being Proposed For A Faster Mesa


  • jrch2k8
    replied
    Originally posted by libv View Post
    What anholt is only now stating is that LTO brings a 6% gain in the single most CPU-intensive test he could find... The across-the-board gains will be a lot lower.
    1.) Well, I agree with you on the Intel case, and if they were waiting to see Gallium progress, I think AMD has shown it is good enough, since on many generations it rivals their FGLRX and delivers beyond 50% of the Windows 7/8 drivers' performance. Maybe they should start taking LTO/LLVM more seriously.

    2.) maybe "Mega Drivers" is not exactly a priority right now[i think was your point since at some point got very melodramatic from my PoV] but maybe "Mega Mesa" could be a better approach first?, meaning gets as much mesa shared code into one LTO friendly blob

    3.) Maybe Mesa can gain something by nuking all those dead drivers and their infrastructure for Mesa 10? I mean SiS, VIA, r128, OpenChrome and related corpses of the dark ages [keep them in the 9 release for the three guys that still need them].

    4.) Maybe after reaching stable OpenGL 3.3 on Intel/AMD/nouveau, focus one whole major release on stabilization, threading, Gallium [in the Intel case] and vectorization; this should help a lot in CPU-bound scenarios, especially on AMD FX and Ivy Bridge+ processors.

    5.) Maybe take part of Vadim's and tstellar's work on LLVM and make Mesa's GLSL compiler use it to generate LLVM IR, then let LLVM decide the backend depending on the GPU? This could help to optimize once and reuse as much as possible (see the sketch below).

    Just some ideas.
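
    For point 5, a minimal sketch of the idea using the LLVM C API (module and function names are invented for illustration; none of this is actual Mesa code): build the IR for a trivial "shader" once, and whichever LLVM backend the driver selects can then lower it for its GPU.
    Code:
    /* Hypothetical illustration only -- not Mesa's GLSL compiler. */
    #include <llvm-c/Core.h>

    int main(void)
    {
        /* float shader_main(float a, float b) { return a + b; } as IR */
        LLVMModuleRef mod = LLVMModuleCreateWithName("shader");
        LLVMTypeRef f32 = LLVMFloatType();
        LLVMTypeRef args[2] = { f32, f32 };
        LLVMValueRef fn = LLVMAddFunction(mod, "shader_main",
                                          LLVMFunctionType(f32, args, 2, 0));
        LLVMBuilderRef b = LLVMCreateBuilder();
        LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(fn, "entry"));
        LLVMValueRef sum = LLVMBuildFAdd(b, LLVMGetParam(fn, 0),
                                         LLVMGetParam(fn, 1), "sum");
        LLVMBuildRet(b, sum);
        LLVMDumpModule(mod);   /* print the IR a GPU backend would consume */
        LLVMDisposeBuilder(b);
        LLVMDisposeModule(mod);
        return 0;
    }
    Optimize that IR once with LLVM's common passes, and in principle the R600 backend (tstellar's work) or any other target lowers the same module.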



  • libv
    replied
    What anholt is only now stating is that LTO brings a 6% gain in the single most CPU-intensive test he could find... The across-the-board gains will be a lot lower.



  • libv
    replied
    Originally posted by marek View Post
    Well, if you wanted all driver components to be in a single repository, that's not gonna happen. The kernel driver has to stay in the kernel and the Mesa driver has to stay in Mesa, because the interface is at its simplest and changes the least at the kernel<->Mesa boundary. (BTW, the radeon Gallium drivers don't use libdrm as the middle layer between the kernel and Mesa.)
    Did I state that I needed _all_ components of _all_ drivers in driver-specific repositories? I most certainly didn't. But I do believe that infrastructure (which is what Mesa is) should try to encompass the needs of both users and driver developers as much as possible. Otherwise the infrastructure fails.

    Look at Gallium versus Intel. What did Gallium do wrong that Intel couldn't use more bits of it? Or is Intel just doing NIH as a rule?

    Originally posted by marek View Post
    All things that have to stay in Mesa had better be in one big blob to take advantage of link-time optimizations.
    This is all well and good, until someone else (who isn't called libv, or who doesn't work for The Devil^W^WMark Shuttleworth^W^WCanonical or Microsoft^WNovell^WSuSE) finds another reason why this is not a move forward. And then all of this gets reverted again, and who knows how many "shortcuts" will have been made in the meantime, shortcuts which are much harder to undo than this build-time change.

    Originally posted by marek View Post
    And one more thing: upstream only cares about upstream.
    How is this a reply to my previous statement?

    Are you stating that Mesa shouldn't care about users or driver developers? That it must have everyone marching in line, disallowing other ideas or free thought, and that it should never, ever strive to deliver what its users really need or want?

    If anything exists only to sustain itself, it has lost all use and relevance. So keep up that thinking with Mesa: soon, with SurfaceFlinger and binary graphics drivers, upstream Mesa will end up having made itself superfluous.



  • Serge
    replied
    Originally posted by Nille View Post
    I bet that 99% of all home users have never heard of TFTP/PXE/NFS. The big problem is that most routers can't ship a TFTP server IP with DHCP.
    Originally posted by Luke View Post
    Those of us who do not have a reliable high-bandwidth connection cannot install over the network and must have all packages or the filesystem image locally prior to beginning an installation. I have never had landline internet at home, and thus have never done an install over a network.
    Sorry guys, I wrote my post tongue-in-cheek. I wasn't actually suggesting that installing off a USB drive is obsolete. I fully agree that network installs are more complex and do not cover all use cases.

    My primary interest in network installs (I should throw SSH + rsync/SCP into the mix) comes from being tired of downloading an installation image, then writing that image to the installation media, then installing. I started playing around with that stuff as a means of reducing the number of times I copy the same data. The main reason I like network installs is that they reduce the number of reads and writes I tack onto my USB drives' odometers, but I also want to reduce the number of pointless copies of data in general. And if you haven't guessed from reading the above by now, I'm also a bit OCD, so that probably has something to do with it too.

    Edit: Just want to add, before I get yelled at, that yes, I realize that not all of the above would actually reduce the number of times the installation image is copied. Reducing usage of the USB drives themselves is a good enough start for me. But next time I install an OS on the same computer I use for the download, I'm going to try downloading the installation image into tmpfs and unpacking it from there into a place on my hard drive that I can then point my bootloader to.
    Last edited by Serge; 10 August 2013, 11:04 PM.



  • Ibidem
    replied
    As soon as I saw the article I thought "libv's not going to like this", but didn't want to jump in.

    But anyhow:
    Sure, if you want to _allow_ building it this way, go ahead!
    If you want to _force_ it, that's another story altogether.

    Why should I be forced to install every driver on earth on my Atom N270-based netbook (currently running Ubuntu, and it will stay on a binary distro) that's running very short on disk space?
    Why should I need to download all of them at once when the kernel wireless driver is too flaky to reliably download a kernel update?
    (madwifi-hal is the only driver that works, so I'm stuck on 2.6.3x, which means Lucid.)

    And yes, modular is good. It's good for bandwidth, good for disk space, good for finding regressions (I don't have to get _all_ of Mesa for every version I test), and it's a good way to force people to realize when their changes are overly invasive...



  • Luke
    replied
    That's too much work for multiple installs

    Originally posted by locovaca View Post
    That's fine. PXE boot is by definition a local installer. In this case you could run PXE, TFTP, and NFS servers on your netbook or another machine. You would download the ISO on your netbook and extract the contents to the root folder, update a couple of config files, then put it on your local network. You'd then PXE-boot any other machine in your house that you wanted to install to. If you do frequent installs, it's much nicer.
    I'm not going to use any distro for a new installation that requires network booting or multiple machines hooked together. For a first install I will always use the ISO directly. If I want to put something on multiple machines, it is far easier to install and configure once, then simply dd the entire root partition to a file. Tar that up, and along with a USB stick carrying any distro plus cryptsetup, LVM, and mdadm, I have my installer for my finished OS with all my programs.

    Like I said, different users have different skills and different needs. Therefore, we need all three methods of installing: Optical, USB, and network.



  • GreatEmerald
    replied
    Originally posted by libv View Post
    It smells like a cheap trick, along the lines of building the Mesa binaries optimized for specific SSE versions...
    These "cheap tricks" happen to be the main feature of Gentoo. LTO is one of the things it doesn't work correctly with just yet.



  • marek
    replied
    Originally posted by libv View Post
    It smells like a cheap trick, along the lines of building the Mesa binaries optimized for specific SSE versions...

    What Eric does not seem to want to do is make this a build-time option; he seems to intend to make this the "one and only way" for everything Mesa, disallowing the other use cases, just for a few percent of, perhaps, load time. All of it reeks to high hell of bad software practice, imho.
    Well, if you wanted all driver components to be in a single repository, that's not gonna happen. The kernel driver has to stay in the kernel and the Mesa driver has to stay in Mesa, because the interface is at its simplest and changes the least at the kernel<->Mesa boundary. (BTW, the radeon Gallium drivers don't use libdrm as the middle layer between the kernel and Mesa.) All things that have to stay in Mesa had better be in one big blob to take advantage of link-time optimizations. And one more thing: upstream only cares about upstream.
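
    For what it's worth, the kind of win LTO buys is easy to sketch (file and function names invented for illustration): a small helper in one translation unit only gets inlined into a hot path in another when the tools can see both bodies, which is exactly what one big blob enables.
    Code:
    /* vec.c -- tiny helper living in its own translation unit */
    float dot3(const float *a, const float *b)
    {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    /* draw.c -- hot path in a different translation unit */
    float dot3(const float *a, const float *b);

    float shade(const float *normal, const float *light)
    {
        return dot3(normal, light);   /* out-of-line call without LTO */
    }

    /* Built the classic way:
     *     gcc -O2 -fPIC -c vec.c draw.c
     * dot3 stays a real call. Built with LTO:
     *     gcc -O2 -fPIC -flto -c vec.c draw.c
     *     gcc -O2 -flto -shared vec.o draw.o -o libdemo.so
     * the link step sees both bodies and can inline across the file
     * boundary -- the effect being argued for here across all of Mesa. */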



  • libv
    replied
    Originally posted by marek View Post
    I think most people don't get that the primary reason for the "megadrivers" is that they improve performance in CPU-bound apps. All the other reasons are not important enough for developers to spend time on.
    It smells like a cheap trick, along the lines of building the Mesa binaries optimized for specific SSE versions...

    What Eric does not seem to want to do is make this a build-time option; he seems to intend to make this the "one and only way" for everything Mesa, disallowing the other use cases, just for a few percent of, perhaps, load time. All of it reeks to high hell of bad software practice, imho.
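
    To make the disagreement concrete, a build-time switch is trivial to sketch (entirely hypothetical: neither this macro nor these structs are from Mesa's tree):
    Code:
    /* Hypothetical sketch only, not Mesa's actual code or build system. */
    struct dri_driver { const char *name; };

    static const struct dri_driver i965_driver = { "i965" };

    #ifdef MEGADRIVER
    /* one blob: every back-end linked in, selected at runtime by name */
    static const struct dri_driver r600_driver = { "r600" };
    static const struct dri_driver *drivers[] =
        { &i965_driver, &r600_driver, 0 };
    #else
    /* modular: this object ships a single driver */
    static const struct dri_driver *drivers[] = { &i965_driver, 0 };
    #endif
    Passing -DMEGADRIVER (or not) at configure time would keep both use cases alive; making that choice impossible is what is being objected to here.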



  • marek
    replied
    I think most people don't get that the primary reason for the "megadrivers" is that they improve performance in CPU-bound apps. All the other reasons are not important enough for developers to spend time on.

