"Mega Drivers" Being Proposed For A Faster Mesa

  • #21
    Originally posted by libv
    Amazing. How one can still go against common software development sense even after having worked in the software industry for 7 years.
    Truly amazing indeed.
    There has been tying-together of components before, and it caused plenty of problems, but for some people those problems apparently aren't warning enough.
    As for the numbers: the gains seem very small. Other kinds of performance work, like optimizing algorithms or exploiting hardware capabilities, yield much bigger wins.

    • #22
      Originally posted by Luke
      To install a new system from scratch (not a copy of my existing systems), I must first go to a library
      to fetch a full CD/DVD installer image from an uncapped/unthrottled internet connection. Then that is taken
      home and used for installation. The exact same OS on the netbook, updated on the road, later
      fetches packages which are harvested from /var/cache/apt/archives and used again at home on
      the video editing desktop machine.

      For actually running a system, I require all binaries locally on my own machines not only for
      bandwidth but also security reasons.

      Thus, installing from a full installer image on a flash drive or a DVD is by no means obsolete,
      different users have different needs. With open source and many distros, people can choose for
      themselves.
      That's fine. PXE boot is by definition a local installer. In this case you could run PXE, TFTP, and NFS servers on your netbook or another machine. You would download the ISO on your netbook and extract the contents to the root folder, update a couple of config files, then put it on your local network. You'd then do a PXE boot on any other machine in your house that you wanted to install to. If you do frequent installs it's much nicer.
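
      For what it's worth, dnsmasq can cover the DHCP hints and the TFTP server in one shot. A rough sketch (the interface name, TFTP root, and boot file below are just examples, adjust for your setup):

        # Hypothetical example: dnsmasq in proxy-DHCP mode only adds the PXE
        # boot info on top of the router's existing DHCP, and serves TFTP too.
        dnsmasq --interface=eth0 \
                --dhcp-range=192.168.1.0,proxy \
                --enable-tftp \
                --tftp-root=/srv/tftp \
                --pxe-service=x86PC,"Network install",pxelinux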

      • #23
        Originally posted by locovaca
        That's fine. PXE boot is by definition a local installer. In this case you could run PXE, TFTP, and NFS servers on your netbook or another machine. You would download the ISO on your netbook and extract the contents to the root folder, update a couple of config files, then put it on your local network. You'd then do a PXE boot on any other machine in your house that you wanted to install to. If you do frequent installs it's much nicer.
        On PXE boot, the booting system gets a TFTP server address from DHCP. Most routers don't offer an option to set one, so you'd have to run your own DHCP server on another machine, plus a TFTP server and so on. Most users don't have the knowledge for that.

        • #24
          I think most people don't get that the primary reason for the "megadrivers" is that it improves performance in CPU-bound apps. All the other reasons are not so important for developers to waste time on.

          • #25
            Originally posted by marek
            I think most people don't get that the primary reason for the "megadrivers" is that it improves performance in CPU-bound apps. All the other reasons are not so important for developers to waste time on.
            It smells like a cheap trick, along the lines of building the mesa binaries optimized for specific SSE versions...

            What Eric does not seem to want to do is to make this a build-time option, and he seems to intend to make this the "one and only way" for everything mesa, disallowing the other use-cases, just for perhaps a few percent of load time. All of it reeks to high hell of bad software practice imho.

            • #26
              Originally posted by libv
              It smells like a cheap trick, along the lines of building the mesa binaries optimized for specific SSE versions...

              What Eric does not seem to want to do is to make this a build-time option, and he seems to intend to make this the "one and only way" for everything mesa, disallowing the other use-cases, just for perhaps a few percent of load time. All of it reeks to high hell of bad software practice imho.
              Well, if you wanted all driver components to be in a single repository, that's not gonna happen. The kernel driver has to stay in the kernel and the mesa driver has to stay in Mesa, because the interface is the simplest and changes the least at the kernel<->mesa boundary. (BTW radeon gallium drivers don't use libdrm as the middle layer between the kernel and Mesa) All things that have to stay in Mesa should better be in one big blob to take advantage of link-time optimizations. And one more thing: upstream only cares about upstream.
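
              As a toy illustration of what the single blob buys (the file names here are made up, this is not how Mesa's actual build works):

                # Separate shared objects: calls across the library boundary go
                # through the PLT and can never be inlined.
                gcc -O2 -fPIC -shared state.c -o libstate.so
                gcc -O2 -fPIC -shared driver.c -o libdriver.so -L. -lstate

                # One big blob built with LTO: the linker sees all the code at
                # once and can inline and optimize across the old boundary.
                gcc -O2 -flto -fPIC -shared state.c driver.c -o libmegadriver.so

              The win is largest for small hot functions called across what used to be a library boundary, which is exactly the CPU-bound case.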

              • #27
                Originally posted by libv
                It smells like a cheap trick, along the lines of building the mesa binaries optimized for specific SSE versions...
                These "cheap tricks" happen to be the main feature of Gentoo. LTO is one of the things Gentoo doesn't handle correctly just yet.

                • #28
                  That's too much work for multiple installs

                  Originally posted by locovaca
                  That's fine. PXE boot is by definition a local installer. In this case you could run PXE, TFTP, and NFS servers on your netbook or another machine. You would download the ISO on your netbook and extract the contents to the root folder, update a couple of config files, then put it on your local network. You'd then do a PXE boot on any other machine in your house that you wanted to install to. If you do frequent installs it's much nicer.
                  I'm not going to use any distro for a new installation that requires network booting or multiple machines hooked together. For a first install I will always use the ISO directly. If I want to put something on multiple machines it is far easier to install and configure once, then simply dd the entire root partition to a file. Tar that up, and along with a USB stick with any distro on it plus cryptsetup, LVM, and mdadm, I have my installer for my finished OS with all my programs.
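
                  Roughly like this, by the way (the device names and paths are just examples):

                    # Capture the configured root partition and pack it up
                    dd if=/dev/sda2 of=rootfs.img bs=4M
                    tar -czf rootfs.tar.gz rootfs.img

                    # On the target box, after cryptsetup/LVM/mdadm are set up,
                    # unpack and write the image onto the new root volume
                    tar -xzf rootfs.tar.gz
                    dd if=rootfs.img of=/dev/mapper/vg-root bs=4M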

                  Like I said, different users have different skills and different needs. Therefore, we need all three methods of installing: optical, USB, and network.

                  • #29
                    As soon as I saw the article I thought "libv's not going to like this", but didn't want to jump in.

                    But anyhow:
                    Sure, if you want to _allow_ building it this way, go ahead!
                    If you want to _force_ it, that's another story altogether.

                    Why should I be forced to install every driver on earth on my Atom N270-based netbook (currently running Ubuntu, and it will stay on a binary distro) that's running very short on disk space?
                    Why should I need to download all of them at once when the kernel wireless driver is too flaky to reliably download a kernel update?
                    (madwifi-hal is the only driver that works, so I'm stuck on 2.6.3x which means Lucid.)

                    And yes, modular is good. It's good for bandwidth, good for diskspace, good for finding regressions (I don't have to get _all_ of Mesa for every version I test), it's a good way to force people to realize when their changes are overly invasive...

                    • #30
                      Originally posted by Nille
                      I bet that 99% of all home users have never heard of TFTP/PXE/NFS. The big problem is that most routers can't hand out a TFTP server IP via DHCP.
                      Originally posted by Luke
                      Those of us who do not have a reliable high-bandwidth connection cannot install over the network and
                      must have all packages or the filesystem image locally prior to beginning an installation. I have never
                      had landline internet at home, and thus have never done an install over a network.
                      Sorry guys, I wrote my post tongue-in-cheek. I wasn't actually suggesting that installing off of a USB drive is obsolete. I fully agree that network installs are of a greater complexity and do not cover all use cases.

                      My primary interest with network installs (I should throw SSH + rsync/SCP into the mix) comes from being tired of downloading an installation image, then writing that installation image to the installation media, then installing. I started playing around with that stuff as a means of reducing the number of times I copy the same data. The main reason I like network installs is that they reduce the number of reads and writes I tack onto my USB drives' odometers, but I also want to reduce the number of pointless copies of data in general. And if you haven't guessed from reading the above by now, I'm also a bit OCD, so that probably has something to do with it too.

                      Edit: Just want to add, before I get yelled at, that yes I realize that not all of the above would actually reduce the number of times the installation image is being copied. Reducing usage of USB drives themselves is a good enough start for me. But next time I install an OS on the same computer I use for the download, I'm going to try downloading the installation image into tmpfs and unpacking it from there into a place on my hard drive that I can then point my bootloader to.
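
                      For anyone wanting to try the same thing: GRUB can boot straight from an ISO sitting on the hard drive via its loopback command. A sketch of a grub.cfg entry (the partition, paths, and kernel arguments are examples and vary by distro; this one assumes an Ubuntu-style casper image):

                        menuentry "Install from ISO on disk" {
                            loopback loop (hd0,2)/isos/installer.iso
                            linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/isos/installer.iso
                            initrd (loop)/casper/initrd.lz
                        }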
                      Last edited by Serge; 10 August 2013, 11:04 PM.
