Intel HD 4000 Ivy Bridge Graphics On Linux


  • #16
    Originally posted by Kivada View Post
    If I'm not mistaken S3TC is used by pretty much all games these days and enabling it actually does improve performance by reducing the amount of memory bandwidth required to load the texture, the game engine is going to try to load the same number of textures weather S3TC is there or not but the GPU is going to choke on the extra bandwidth required to do it, hence why S3TC is still used to this day.
    That's true, but only if you compare textures of the same resolution. As I said, the common practise is for the s3tc-compressed texture to be double the resolution.

    Example:
    2048x2048 S3TC texture: 4 MB VRAM used
    1024x1024 uncompressed texture: 4 MB VRAM used

    This is because S3TC is lossy, and because, visually, the improvement from the sharper texture outweighs the degradation from the compression. The other reason is that the VRAM overhead of the bigger uncompressed textures may be too much.
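    The arithmetic behind that example can be sketched in a few lines of Python. This is only an illustration, and it assumes DXT5, a common S3TC format that stores 16 bytes per 4x4 pixel block, compared against uncompressed 32-bit RGBA:

```python
def tex_bytes(width, height, bytes_per_pixel):
    # Top-level mip only; a full mipmap chain would add roughly 33% more.
    return width * height * bytes_per_pixel

# Uncompressed RGBA8 uses 4 bytes per pixel; DXT5 stores 16 bytes per
# 4x4 block, i.e. 1 byte per pixel -- a 4:1 compression ratio.
compressed = tex_bytes(2048, 2048, 1)    # 2048x2048 DXT5
uncompressed = tex_bytes(1024, 1024, 4)  # 1024x1024 RGBA8

# Both come out to the same 4 MiB, matching the example above.
assert compressed == uncompressed == 4 * 1024 * 1024
```

    So for the same 4 MB of VRAM, the compressed texture carries four times the pixel count, which is why the lossy compression usually wins visually.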



    • #17
      Why bother comparing to the radeon Gallium driver if you're going to run it at stock (= low) speed? Yes, AMD should finally fix dynamic power management in the open drivers and enable it by default, but this comparison is pointless.



      • #18
        Originally posted by uid313 View Post
        I'll go with energy-efficient, cool, silent, open source any day over slightly faster graphics.
        This.

        A can of Intel open-source whoop-ass is waiting for
        your AMD APU to enter legacy (non-)support.
        After that it's a world of hurt with xf86-video-ati.

        I don't care how much faster AMD's hardware is
        if it gets whooped by Intel when legacy comes
        and you have no choice but open source.

        Intel, hats (and bucks) off to you.



        • #19
          I'm hoping that Michael does a test soon of Llano with /sys/class/drm/card0/device/power_method=profile and power_profile=high, along with the new LLVM r600 back-end... and then with 2D tiling and PCIe 2 support enabled.

          There are a lot of optional features which are currently disabled by default in r600g, some of which have major performance implications. I'm especially interested in seeing whether the VLIW packetizer in the LLVM back-end helps performance. I've already done piglit runs of the LLVM and TGSI back-ends (both GLSL 1.2/1.3), but I haven't done a PTS gaming run with them yet. Unfortunately, my weekend was too short to finish that.
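          For reference, those power-profile sysfs knobs can be flipped at run time (root required). A minimal Python sketch, assuming the GPU shows up as card0 and the radeon kernel driver exposes the profile-based interface; the function name is my own:

```python
import os

CARD = "/sys/class/drm/card0/device"  # assumed path for the first GPU

def set_power_profile(profile, card=CARD):
    """Switch the radeon driver to profile-based power management and
    select the given profile ('low', 'mid', 'high', 'auto', 'default')."""
    method_node = os.path.join(card, "power_method")
    profile_node = os.path.join(card, "power_profile")
    if not os.path.exists(method_node):
        return False  # not a radeon KMS device, or kernel too old
    with open(method_node, "w") as f:
        f.write("profile")
    with open(profile_node, "w") as f:
        f.write(profile)
    return True
```

          Echoing the same strings into the nodes from a root shell works just as well.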

          Aside:
          I've also started the beginnings of a radeon performance profile-setting GUI as a way to teach myself GTK. I'll be poking around at this one in my spare time over the next few weeks, and hopefully by the end I'll have something to show for it. Currently targeting only radeons (r100+), but if I can find the right sysfs nodes for Nouveau/Intel/others (PTS can probably show me the way here), there's no reason I couldn't handle them all.

          Current features targeted: Change CPU/memory clocks/profiles, report temperatures/frequencies. Eventually, maybe add support for setting fan profiles/speeds when applicable. I'll leave DPMS to KDE/Gnome/etc. X.org feature settings (2D tiling, etc) will probably be left out for now, but might be added in the future.



          • #20
            Why on earth is everything you want others to test disabled by default?



            • #21
              Originally posted by Veerappan View Post
              Aside:
              I've also started the beginnings of a radeon performance profile-setting GUI as a way to teach myself GTK. I'll be poking around at this one in my spare time over the next few weeks, and hopefully by the end I'll have something to show for it. Currently targeting only radeons (r100+), but if I can find the right sysfs nodes for Nouveau/Intel/others (PTS can probably show me the way here), there's no reason I couldn't handle them all.

              Current features targeted: Change CPU/memory clocks/profiles, report temperatures/frequencies. Eventually, maybe add support for setting fan profiles/speeds when applicable. I'll leave DPMS to KDE/Gnome/etc. X.org feature settings (2D tiling, etc) will probably be left out for now, but might be added in the future.
              Please make the user interface portable to other toolkits. I will happily try my coding skills with Qt once I learn enough C++.



              • #22
                Readers should be aware that the Llano is slightly gimped by the RAM used. I don't know what speed he used (1333 or 1600, possibly?); I get much higher results with 1866. If I'm wrong, then something else is gimping his speeds; considering this was a budget PC for under $400 and I get better results, I can't imagine what.



                • #23
                  In some of the tests, the Intel Ivy Bridge graphics were even quite competitive with the AMD Fusion A8-3870K on AMD's highly-optimized Catalyst Linux driver.
                  I just laughed so hard I almost pissed myself.



                  • #24
                    Originally posted by russofris View Post
                    I just laughed so hard I almost pissed myself.
                    Well, you made me do a spit-take with this one... +1



                    • #25
                      Originally posted by russofris View Post
                      I just laughed so hard I almost pissed myself.
                      If I'd had the strength to read this far, my reaction would surely have been the same!



                      • #26
                        Originally posted by Hirager View Post
                        Please make the user interface portable to other toolkits. I will happily try my coding skills with Qt once I learn enough C++.
                        I'll see what I can do. I've at least separated the back-end library and the GUI into separate object files, so you could attach a Qt GUI to the back-end library without too much hassle. I'll look up a Qt tutorial and see what I can do to abstract away the GUI enough that it could handle both GTK and Qt. *NIX GUI programming is completely new to me, and I only know C (not C++), so expect some road bumps.

                        Once I get the GTK GUI functional, I'll post the source location for others to download/hack on.



                        • #27
                          Originally posted by curaga View Post
                          @Intel team

                          Congrats, especially on Nexuiz and Xonotic.
                          Thanks!!



                          • #28
                            Originally posted by Kano View Post
                            Why on earth is everything you want others to test disabled by default?
                            Was this targeted towards me? If so...

                            Mostly, because the tool that I'm writing only targets the sysfs interfaces exposed by the radeon kernel driver. It's a run-time tool, and in its initial form it will not persist settings across reboots. Most of the options that are disabled by default (2D tiling, PCIe 2, etc.) are either kernel parameters or X.org configuration options. If I screw up writing kernel parameters for the user, I can hose a system, make it unbootable, and piss people off. Kernel-parameter setting is also distro-dependent (GRUB, LILO, etc.). The same argument goes for overriding X.org settings (assuming the system even has an xorg.conf). I can document which settings need to be set, but those are already documented in various wikis and man pages, and just need to be summarized.

                            The sysfs nodes are much safer to read/set, which is why I'm starting there.
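                            To illustrate how low-risk the read-only side is, here is a minimal sketch (the card0 path and the function names are my own assumptions; radeon exposes temperature under the card's hwmon directory, in millidegrees Celsius):

```python
import glob
import os

CARD = "/sys/class/drm/card0/device"  # assumed path for the first GPU

def read_node(path):
    """Return a sysfs node's contents, or None if it isn't readable."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

def gpu_status(card=CARD):
    status = {
        "power_method": read_node(os.path.join(card, "power_method")),
        "power_profile": read_node(os.path.join(card, "power_profile")),
    }
    # Temperature is reported in millidegrees C under hwmon.
    for node in glob.glob(os.path.join(card, "hwmon/hwmon*/temp1_input")):
        raw = read_node(node)
        if raw is not None:
            status["temp_c"] = int(raw) / 1000.0
    return status
```

                            Worst case, a read like this just returns nothing; there's no way to leave the machine unbootable.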



                            • #29
                              Originally posted by Veerappan View Post
                              I'll see what I can do. I've at least separated the back-end library and the GUI into separate object files, so you could attach a Qt GUI to the back-end library without too much hassle. I'll look up a Qt tutorial and see what I can do to abstract away the GUI enough that it could handle both GTK/Qt. *NIX GUI programming is completely new to me, and I only know C (not C++), so expect some road bumps

                              Once I get the GTK GUI functional, I'll post the source location for others to download/hack on.
                              Thanks. My primary concern is unnecessary lock-in from using GTK for the low-level things; separating the GUI from the library is sufficient to ensure portability. I'm not sure how soon I'll be able to learn it, but I want to play with QML, which means I'll probably write the interface from scratch.



                              • #30
                                What I am still waiting for are X Server 1.11 stability fixes: when you use KDE 4.x with "disable compositing for full-screen applications", your X server can crash, or you get completely distorted graphics until you disable compositing effects. That problem is not seen on Ubuntu 12.04 because they use a Frankenstein 1.11 X server with lots of patches from 1.12; but since Debian wheezy/sid does not carry those patches, it is unstable there. If you don't use that function it is fine, but given that on laptops compositing is usually disabled when running low on battery and re-enabled when you plug power back in, this is definitely bad.

