The Quest Of Finding Linux Compatible Hardware


  • #11
    Originally posted by [Knuckles] View Post
    Michael: In price comparisons, please also consider non-US users. Most benchmark sites *cough*anandtech*cough*tom's hardware*cough* tend to forget we exist. We don't use dollars, don't have access to newegg, and don't have rebates. So a very valid price comparison or guide in the US is normally not applicable in Europe or other places in the world.
    I think I can get Amazon UK/etc supported, but for anything else I need to be informed of such foreign stores that have affiliate setups and pricing APIs. Nothing on OpenBenchmarking.org is manually pulled.
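
    For a sense of what the automated pulling could look like, here's a rough Python sketch; the endpoints, affiliate tags, and response shape are hypothetical placeholders, not the actual store integrations:

    ```python
    # Hypothetical sketch of an automated regional price pull. The endpoints,
    # affiliate tags, and response shape are made-up placeholders, not the
    # actual OpenBenchmarking.org store integrations.
    import json
    import urllib.parse
    import urllib.request

    STORES = {
        "US": ("https://api.example-store.com/us", "phoronix-us"),
        "UK": ("https://api.example-store.com/uk", "phoronix-uk"),
        "DE": ("https://api.example-store.de", "phoronix-de"),
    }

    def lookup_price(region, product_query):
        """Return (price, currency) for the best-matching offer in a region."""
        base, tag = STORES[region]
        url = f"{base}/search?q={urllib.parse.quote(product_query)}&tag={tag}"
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        offer = data["offers"][0]  # assume offers come sorted by relevance
        return offer["price"], offer["currency"]
    ```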
    Michael Larabel
    https://www.michaellarabel.com/



    • #12
      You could get amazon.de (Germany) to work as well. Then you'd have at least one (or two, if you count Austria) Euro country covered.

      I guess amazon.de will have a similar, if not the same, API as the UK/US Amazon.



      • #13
        Wow, that sounds really valuable, Michael. An empirically generated HCL (Hardware Compatibility List) for arbitrary distros and configurations.

        Of course, the success of the system depends upon three things in particular that I can identify:

        (1) People using the system extensively enough that all relevant hardware has been tested. If there aren't enough data points, then it just becomes another search engine that happily says "0 results found".

        (2) How heavily can users rely upon the results provided by other users? Maybe there is a misconfiguration, custom software, or a special piece of hardware (e.g. vendor customizations on top of an IHV base chip, especially with laptop sound systems) that can skew the results. And of course any information that is expected to come from the user, rather than the operating system, is suspect, as users are often wrong about what their hardware actually is, whether intentionally or by accident.
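
        As a minimal sketch of pulling identity from the OS instead of the user: on Linux, sysfs exposes PCI vendor/device IDs directly, and the subsystem IDs are what capture a vendor's customization on top of the IHV base chip. The sysfs paths here are the real kernel interface; the rest is just illustration.

        ```python
        # Read hardware identity from Linux sysfs rather than trusting user
        # input. The /sys/bus/pci/devices files used here are the real kernel
        # interface; subsystem IDs distinguish OEM customizations from the
        # IHV base chip.
        import os

        def pci_devices(root="/sys/bus/pci/devices"):
            """Yield (address, vendor, device, subsys_vendor, subsys_device)."""
            def read(dev, name):
                with open(os.path.join(dev, name)) as f:
                    return f.read().strip()
            for addr in sorted(os.listdir(root)):
                dev = os.path.join(root, addr)
                yield (addr,
                       read(dev, "vendor"),            # e.g. "0x10de" (NVIDIA)
                       read(dev, "device"),
                       read(dev, "subsystem_vendor"),  # OEM customization shows up here
                       read(dev, "subsystem_device"))

        for entry in pci_devices():
            print(*entry)
        ```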

        (3) How much detail does the system go into about the things we care about? For example, a gamer will want to know whether graphics card X running driver Y can run game Z with reasonable performance and no rendering artifacts. Just because someone running graphics card X says "it works nice on Ubuntu" doesn't mean that they were using the same driver, or that said driver will be able to run the game you care about. As a particular example, finding a graphics card / driver combo able to run games like Savage 2, Heroes of Newerth, or Unigine games would be an interesting exercise, in a hypothetical future world where some implementations of some Mesa drivers can render these games correctly. Right now it's very cut-and-dried: use a Radeon HD 4000 or later, or an Nvidia G80 or later, with the respective proprietary drivers. But in the future it may not be as obvious if r600g starts to support the extensions required by these games.

        The value of such a database would be related to the depth of the data gathered, imho. Very general comments like "Works" (as in other HCLs) are almost completely useless in the case of complex hardware like GPUs, where you can say "Works" to some degree if the card can start X -- and that's true for every card supported by vesa. And sometimes performance isn't the most important issue, either: I don't care if I can get 500 fps in a "benchmark" of Unigine, if all the effects aren't rendered and half the textures are black. Driver implementations can, and very often do, process API calls incorrectly without crashing or otherwise reporting an error, even if the resulting image is partially or completely messed up. Incorrect rendering is as damning as bad performance, if not more so. Automatically detecting correct rendering (or, alternatively, relying upon the user to report rendering issues) should become a crucial input for reporting results to the database.
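
        To make that last point concrete, here is a minimal sketch of automated rendering validation: compare a captured frame against a known-good reference image. It assumes Pillow is available and that a per-scene reference screenshot exists; the tolerance is an arbitrary illustrative value.

        ```python
        # Sketch of automated rendering validation against a reference frame.
        # Assumes Pillow and a known-good reference screenshot; the tolerance
        # is illustrative, and a real harness would need per-scene tuning.
        from PIL import Image, ImageChops, ImageStat

        def render_matches_reference(reference_png, captured_png, tolerance=5.0):
            """True if the captured frame is visually close to the reference.

            Black textures or missing effects push the mean per-channel error
            far above any sane tolerance, even when the frame rate looks great.
            """
            ref = Image.open(reference_png).convert("RGB")
            got = Image.open(captured_png).convert("RGB")
            if ref.size != got.size:
                got = got.resize(ref.size)
            diff = ImageChops.difference(ref, got)
            mean_error = sum(ImageStat.Stat(diff).mean) / 3.0
            return mean_error <= tolerance
        ```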



        • #14
          Originally posted by allquixotic View Post
          (1) People using the system extensively enough that all relevant hardware has been tested. If there aren't enough data points, then it just becomes another search engine that happily says "0 results found".
          I continue to be amazed at what I already find on the system... Very little hardware yields 0 results. Heck, I even ended up finding Sandy Bridge information dating back to mid-December!

          Originally posted by allquixotic View Post
          As a particular example, finding a graphics card / driver combo able to run games like Savage 2, Heroes of Newerth, or Unigine games would be an interesting exercise
          Write test profiles for HoN and Savage and you can easily find out.
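
          (Real test profiles are metadata plus install/run scripts, but the core of a game profile boils down to something like this Python sketch. The binary name, demo flag, and output format are hypothetical stand-ins for whatever HoN or Savage 2 actually provide.)

          ```python
          # The loop a game test profile automates, expressed in Python purely
          # for illustration. Binary, flags, and output format are hypothetical.
          import re
          import subprocess

          def run_timedemo(binary="./savage2", demo="benchmark.demo"):
              """Run a scripted demo and extract the reported average FPS."""
              out = subprocess.run([binary, "-timedemo", demo],
                                   capture_output=True, text=True, check=True).stdout
              m = re.search(r"([\d.]+)\s*fps", out, re.IGNORECASE)
              if m is None:
                  raise RuntimeError("no FPS figure found in demo output")
              return float(m.group(1))
          ```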

          Originally posted by allquixotic View Post
          The value of such a database would be related to the depth of the data gathered, imho. Very general comments like "Works" (as in other HCLs) are almost completely useless in the case of complex hardware like GPUs, where you can say "Works" to some degree if the card can start X -- and that's true for every card supported by vesa.
          The only manual user input is the save name, identifier, and description entered in the Phoronix Test Suite. No other manual data is asked of the user.

          Originally posted by allquixotic View Post
          And sometimes performance isn't the most important issue, either: I don't care if I can get 500 fps in a "benchmark" of Unigine, if all the effects aren't rendered and half the textures are black...Automatically detecting correct rendering (or, alternatively, relying upon the user to report rendering issues) should become a crucial input for reporting results to the database.
          The Phoronix Test Suite already supports this: http://www.phoronix.com/vr.php?view=14380 and there will only be more qualitative tests going forward.
          Michael Larabel
          https://www.michaellarabel.com/



          • #15
            LinuxBIOS

            don't leave out coreboot support



            • #16
              Originally posted by Nobu View Post
              Well, sometimes you have to be careful about what network adapter or printer/scanner you get, but I guess that's not the kind of hardware Michael was referring to.
              I'd say you should be less concerned with network adapters these days and more concerned with which printers and scanners you pick. Especially the scanners.

              The rule for printers seems to be: buy HP, Epson, or any printer properly supporting HP PCL or PostScript, much like "picking NVidia" for graphics cards. But you've got to do a bit of research before buying to avoid the lemons in that space.

              Scanners... now that's still a minefield (and this includes support on MF devices...). Most of the Epson MF devices seem to be supported "okay", as are HP's (though there ARE devices for which support hasn't happened, for both brands...). Canon's stuff is hit or miss (part of the CanoScan LiDE line is WELL supported, part of it is in the expensive paperweight/doorstop category... same goes for their printers... sigh...)



              • #17
                Originally posted by RealNC View Post
                The quest for Linux-compatible hardware is actually quite simple:

                * Get any sound card except X-Fi.
                * Get an NVidia graphics card.

                That's all there is to it
                Compatible with open source drivers? No, the open-source drivers for AMD cards are much more compatible, i.e. many more features are functional, than those for nVidia cards. This means that with Linux bundles/distros containing the open-source drivers, nVidia will not be more compatible out of the box.
                Compatible with proprietary drivers, though? You may be right in some ways, though in other ways that isn't true. For instance, even with fglrx I can still use xrandr and the open-source GUIs which utilize it, while AFAIK you still cannot with nVidia's proprietary driver. If you don't care about standards as I do, so be it, but standards and openness give you more options and empower you and the community, so those kinds of things are better to support.



                • #18
                  Originally posted by Michael View Post
                  I continue to be amazed at what I already find on the system... Very little hardware yields 0 results. Heck, I even ended up finding Sandy Bridge information dating back to mid-December!
                  Then hopefully the existing result set is enough to help kickstart future adoption.
                  Originally posted by Michael View Post
                  Write test profiles for HoN and Savage and you can easily find out.
                  This sounds like fun! I'll see what I can't scrounge up with my modder friends in the S2 community...

                  Originally posted by Michael View Post
                  The only manual user input is the save name, identifier, and description entered in the Phoronix Test Suite. No other manual data is asked of the user.
                  With that particular quote, I was referring more to the searchable fields in the database, which, of course, have to correspond to results collected from tests. I'm not sure if application compatibility is going to be a priority, but I think it should be. Here's a simple example.

                  I'm Joe User, and I'm a Linux gamer. To figure out whether I can play the hottest new games coming out on Linux, I just have to go to your website, and do a search for video card / graphics driver combos that support my game. The definition of "support" will vary between users, so maybe there should be different test profiles with different in-game settings. The complexities abound:
                  • Maybe the games I'm interested in only work with the binary driver? I'd like to know that up-front.
                  • Maybe a certain game only works on minimum detail settings, or using an optional renderer written to an older OpenGL spec? I'd still like to see the graphics card / driver pair that produced a running game included in the results.
                  • Rarely, the minimum detail settings will employ techniques that cause rendering issues or crashes, but the higher detail settings use newer paths (e.g. GLSL) that work correctly! It'd be great to know this, too.


                  We need this level of detail because the OpenGL specification does an extremely poor job of creating a contract between the device driver implementation and the application. Bugs aside, there's still the matter of (mis)interpretation of the spec, and you see issues crop up all the time, even with the binary drivers on Windows and an army of fastidious testers. We need to know exactly which games work under which drivers against which cards. Having the app developer specify "system requirements" is grossly inadequate on Linux; it's bad enough as it is on Windows. Throw in the spotty, fickle support of OpenGL in Mesa, for those of us who (god forbid) value freedom, and you've got a lot of data to mine.

                  Originally posted by Michael View Post
                  The Phoronix Test Suite already supports this: http://www.phoronix.com/vr.php?view=14380 and there will only be more qualitative tests going forward.
                  Sounds like you need help writing meaningful tests! OpenBenchmarking / PTS might provide a good platform for creating good tests, but without a lot of actually useful tests, followed by a lot of users running said tests to collect data, search results will be spotty / outdated.

                  So, what's your strategy for rolling this all out? Here's what I'd do:
                  • Test Development: Push open-source contributors, gamer enthusiasts, application developers and hardware vendors to develop real-world application and game tests. Writing tests benefits everyone. It benefits users because it provides the foundation for fleshing out the data set. It benefits application / game developers because it makes users aware of their software, while also letting potential customers see how their software runs (or doesn't run) on relevant hardware. It benefits hardware vendors because they can show off the hottest hardware or recent driver improvements with real-world tests that users care about.
                  • Test Run Submission: This is the bread-and-butter of the project; it has to be. Having good tests is not useful unless they are run on a very broad range of hardware and software configurations. This not only exposes potential issues with the applications tested, but also may expose driver issues or limitations of certain hardware. Test runs should be done primarily by enthusiasts and end-users. For end-users, it needs to be easy for them to volunteer to execute a large number of tests in an automated fashion. Maybe you should develop a way for someone to just turn on PTS at night when they go to sleep, and it will download tests to execute from a centralised work queue, managed by Phoronix personnel, which is basically a list of the most desired test runs to be done? Think of what BOINC does with distributed computing. Apply this simplicity to distributed testing.
                  • Database Harvesting: This is where end-users and companies use your search forms to gather useful product evaluation data based on the collected results. This component will naturally accrue a user-base to the extent that the data returned is useful. So you don't really need to do anything to advertise or encourage people to use this: it'll be like Google, with people using it all the time once they realize how good it is. But you have to go through the first two points to get to that level.


                  Hope I gave you some ideas...



                  • #19
                    Originally posted by nobody View Post
                    don't leave out coreboot support
                    Agreed, proprietary BIOSes really need overthrowing. Maybe motherboard makers will start creating standards for BIOSes as well, so that it's easier to put coreboot on them, if no such standards currently exist.

                    Of course, I hate motherboard makers already for their destruction of CPU socket standards, so I don't have very high hopes for BIOS standards.



                    • #20
                      Originally posted by allquixotic View Post

                      • Maybe the games I'm interested in only work with the binary driver? I'd like to know that up-front.
                      • Maybe a certain game only works on minimum detail settings, or using an optional renderer written to an older OpenGL spec? I'd still like to see the graphics card / driver pair that produced a running game included in the results.
                      • Rarely, the minimum detail settings will employ techniques that cause rendering issues or crashes, but the higher detail settings use newer paths (e.g. GLSL) that work correctly! It'd be great to know this, too.

                      You can basically figure that out from auto-parsing the test results, the levels of performance for each driver, which drivers are most commonly used, etc. There will also eventually be a RESTful external API [one of many things on my ever-growing TODO list] for external analysis of said data as well.
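
                      As a sketch of the kind of external analysis that API would enable (the result-record shape is an assumption, since the API isn't public yet):

                      ```python
                      # Group results by (gpu, driver) and average FPS. The record shape
                      # below is an assumed stand-in for whatever the eventual RESTful
                      # API actually returns.
                      from collections import defaultdict
                      from statistics import mean

                      def summarize(results):
                          """Map each (gpu, driver) combo to its average FPS."""
                          groups = defaultdict(list)
                          for r in results:
                              groups[(r["gpu"], r["driver"])].append(r["fps"])
                          return {combo: mean(fps) for combo, fps in groups.items()}

                      results = [
                          {"gpu": "Radeon HD 4850", "driver": "fglrx 10.12", "fps": 58.2},
                          {"gpu": "Radeon HD 4850", "driver": "r600g", "fps": 24.7},
                          {"gpu": "GeForce GTX 460", "driver": "nvidia 260.19", "fps": 71.5},
                      ]
                      print(summarize(results))
                      ```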

                      Originally posted by allquixotic View Post
                      Throw in the spotty, fickle support of OpenGL in Mesa, for those of us who (god forbid) value freedom, and you've got a lot of data to mine.
                      Even prior to its launch, OpenBenchmarking.org already has around 300MB of such information to play with.

                      Originally posted by allquixotic View Post
                      Test Development: Push open-source contributors, gamer enthusiasts, application developers and hardware vendors to develop real-world application and game tests. Writing tests benefits everyone. It benefits users because it provides the foundation for fleshing out the data set. It benefits application / game developers because it makes users aware of their software, while also letting potential customers see how their software runs (or doesn't run) on relevant hardware. It benefits hardware vendors because they can show off the hottest hardware or recent driver improvements with real-world tests that users care about.
                      Yep, and with OpenBenchmarking.org any registered user can upload their own tests and suites (or build one from the web interface) and have it available to any PTS 3.0+ user via the package-management-like system by which everything is handled. It's no longer a matter of getting the test profile pushed to me and then included in the next release; they're instantly available, as long as they don't break the test profile spec for a particular version of PTS.


                      Originally posted by allquixotic View Post
                      Test Run Submission: This is the bread-and-butter of the project; it has to be. Having good tests is not useful unless they are run on a very broad range of hardware and software configurations. This not only exposes potential issues with the applications tested, but also may expose driver issues or limitations of certain hardware. Test runs should be done primarily by enthusiasts and end-users. For end-users, it needs to be easy for them to volunteer to execute a large number of tests in an automated fashion
                      Judging by http://global.phoronix.com/ in recent months, it should be no problem getting users to push their results when upgrading to PTS 3.0.

                      Originally posted by allquixotic View Post
                      Maybe you should develop a way for someone to just turn on PTS at night when they go to sleep, and it will download tests to execute from a centralised work queue, managed by Phoronix personnel, which is basically a list of the most desired test runs to be done? Think of what BOINC does with distributed computing. Apply this simplicity to distributed testing.
                      I developed a way to do this quite a while ago as part of Phoromatic. The next-generation Phoromatic is also running atop OpenBenchmarking.org and is compliant with its API.
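
                      For those unfamiliar with the idea, a distributed-testing client is roughly shaped like this. The queue endpoint and payload are hypothetical, not Phoromatic's actual protocol; `phoronix-test-suite batch-benchmark` is a real PTS command, and batch mode can be configured to auto-upload results:

                      ```python
                      # The shape of a BOINC-style distributed-testing client. The queue
                      # URL and payload are hypothetical -- this is NOT Phoromatic's
                      # actual protocol -- but batch-benchmark is a real PTS command,
                      # and batch mode can auto-upload results.
                      import json
                      import subprocess
                      import time
                      import urllib.request

                      QUEUE_URL = "https://example.org/queue/next"  # hypothetical work queue

                      def run_overnight():
                          while True:
                              with urllib.request.urlopen(QUEUE_URL) as resp:
                                  job = json.load(resp)  # e.g. {"test": "pts/unigine-heaven"}
                              if not job:
                                  time.sleep(600)  # queue empty; poll again in ten minutes
                                  continue
                              # PTS handles the download, install, and execution of the test.
                              subprocess.run(["phoronix-test-suite", "batch-benchmark", job["test"]],
                                             check=True)
                      ```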

                      Hits to test profile pages, test results, most common test results, the number of times a test/suite is cloned, etc., are all tracked. From that information it can auto-determine what's popular. Nothing on OpenBenchmarking.org requires manual maintenance; even the list of distributions to select from in these search areas is auto-generated based upon the most popular distributions seen in use over recent days/weeks.
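
                      A toy illustration of that auto-generation, with made-up submission records:

                      ```python
                      # Tally the distributions seen in recent submissions and expose the
                      # top N as the search filter list. The records are illustrative.
                      from collections import Counter

                      recent_submissions = [
                          {"distro": "Ubuntu 10.10"}, {"distro": "Fedora 14"},
                          {"distro": "Ubuntu 10.10"}, {"distro": "Arch Linux"},
                      ]

                      def popular_distros(submissions, top_n=10):
                          counts = Counter(s["distro"] for s in submissions)
                          return [name for name, _ in counts.most_common(top_n)]

                      print(popular_distros(recent_submissions))
                      ```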
                      Michael Larabel
                      https://www.michaellarabel.com/

