Intel HD 4000 Ivy Bridge Graphics On Linux

  • #31
    It's incredible how fast the HD 4000 is, especially compared to the dead-slow A8-3870.
    I still don't understand whether Michael set the AMD APUs to the high power profile; otherwise the benchmarks are simply useless.
    ## VGA ##
    AMD: X1950XTX, HD3870, HD5870
    Intel: GMA45, HD3000 (Core i5 2500K)

  • #32
    Originally posted by russofris View Post
    I just laughed so hard I almost pissed myself.
    +1
    It would be interesting to check where it is supposedly highly optimized, because in my tests:

    OpenGL + Wine = black magic and voodoo if it renders correctly
    XvBA (UVD) = almost everything I threw at it either crashes it (and takes half of X down with it) or has a zillion render artifacts (Gallium MPEG-2 acceleration is flawless for me using VDPAU; excellent work there)
    OpenCL = never got it working, but the NVIDIA blob does just fine (though, OK, it could be that my code hates fglrx)
    2D accel = slow/crappy/crashy/full of render errors (but a bit better than before)
    Xv = crappiness at its highest (the new option helps but kills GL performance, so...)
    PM = the only good thing so far (<-- maybe what Michael means)
    3D performance in Linux-native games = slower than Windows GL and a lot slower than DX, but faster than the FOSS driver, so... not exactly highly optimized in the broad sense of the word, but OK, it is faster than Mesa for the time being
    Browser GPU accel (WebGL) = capped at 60 FPS, and Mesa hits 60 too in all the tests I've done with the Mozilla and Chromium examples
    Browser GPU accel (rendering) = a browser crash festival; at least Chromium 18 and Firefox Aurora hate it, though Opera Next seems to hate it less (r600g shows some crashes from time to time too, but it's way more usable)
    Flash accel = blacklisted, because no matter what I've never seen Flash use anything other than the software renderer (but r600g with HTML5 is a pleasure, so DIE FLASH), and when you go fullscreen there's a high chance that fglrx kills X or you get a kernel panic/hard lockup, LOL

    So there's no choice but to laugh as hard as I can.

    By the way, I got the following running in Wine with r600g:

    Codemasters GRID (racing game; disable blur since it hurts performance)
    StarCraft II: Wings of Liberty

    Both run at 1366x768 on my 4850 X2, and the frame rate is playable if you skip AA and go to Ultra, and amazingly the rendering is perfect, crystal clear. So I think r600g is mature enough to start benchmarking Wine games; I'll try to run some games I know have in-game benchmarks later and see how much juice I can get out of r600g.

  • #33
    Originally posted by Kano View Post
    What I am still waiting for are the X server 1.11 stability fixes: when you use KDE 4.x and disable compositing for fullscreen apps, your X server can crash or you get completely distorted graphics until you disable compositing effects. That problem is not seen on Ubuntu 12.04 because they use a frankenstein 1.11 X server with lots of patches from 1.12. But as Debian wheezy/sid does not carry those patches, it is unstable there. If you don't use that function it is OK, but when you know that on laptops compositing is usually disabled when running low on battery and enabled again when you connect power, this is definitely bad.
    A blog post about compositing and energy usage by KWin lead developer Martin Grasslin.

  • #34
    Originally posted by russofris View Post
    I just laughed so hard I almost pissed myself.
    Why? Catalyst has excellent 3D performance compared to the open-source radeon driver. Have you ever seen a changelog for Catalyst? There are often a bunch of per-app optimizations.

  • #35
    Originally posted by Veerappan View Post
    I'll see what I can do. I've at least separated the back-end library and the GUI into separate object files, so you could attach a Qt GUI to the back-end library without too much hassle. I'll look up a Qt tutorial and see what I can do to abstract away the GUI enough that it could handle both GTK and Qt. *NIX GUI programming is completely new to me, and I only know C (not C++), so expect some road bumps.

    Once I get the GTK GUI functional, I'll post the source location for others to download and hack on.
    I suggest doing the /sys interface in C, then doing a GUI in something more high-level like Python.

    In fact, I have a Python-Qt front-end for changing power profiles and dynpm for radeons, which I never really sanitised enough for a release. It calls a minimal C program (which needs root privileges to write to /sys, so it's very minimal) and lets you change between dynpm and the different profiles for two different gfx cards. It might give you an easy start into GUI stuff.

    If you're interested, I can send it to you; let me know.
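
    To give an idea of how small that privileged part can be, here is a rough sketch of such a helper (assuming the radeon driver's power_profile sysfs node and the usual profile names; this is not the program mentioned above):

    /* Rough sketch, not the poster's actual program: write a power
     * profile string to a radeon sysfs node. Must run as root,
     * since the node is only writable by root. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char path[128];
        FILE *f;

        if (argc != 3) {
            fprintf(stderr, "usage: %s <card-index> <low|mid|high|auto|default>\n",
                    argv[0]);
            return 1;
        }

        snprintf(path, sizeof(path),
                 "/sys/class/drm/card%s/device/power_profile", argv[1]);

        f = fopen(path, "w");
        if (!f) {
            perror(path);
            return 1;
        }
        fprintf(f, "%s\n", argv[2]);
        fclose(f);
        return 0;
    }

    A front-end would then invoke it through sudo or a similar mechanism, e.g. sudo ./set_profile 0 low (the binary name is purely illustrative).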

  • #36
    Video Playback?

    Am I the only one who would have liked to see a video playback comparison? I don't game on IGPs, at least not for long (maybe some TeamNations Forever), but I do watch movies and TV on my desktop/laptop. I have a working MythTV setup, but currently only with NVIDIA cards in it. I would have liked to know whether I can go a different route with the HD 4000 in the Ivy Bridge chips.

  • #37
    Originally posted by pingufunkybeat View Post
    I suggest doing the /sys interface in C, then doing a GUI in something more high-level like Python.

    In fact, I have a Python-Qt front-end for changing power profiles and dynpm for radeons, which I never really sanitised enough for a release. It calls a minimal C program (which needs root privileges to write to /sys, so it's very minimal) and lets you change between dynpm and the different profiles for two different gfx cards. It might give you an easy start into GUI stuff.

    If you're interested, I can send it to you; let me know.
    I wouldn't mind taking a look at what you've got, as you might have come up with something better than what I currently have. I'll PM you my email address. My motivation for this was to replace the shell scripts I had previously written to do the same thing. I've currently written the back-end to discover an arbitrary number of cards exporting nodes in /sys/class/drm/card[\d+]/ and then to probe around as safely as possible for the needed sysfs nodes (power_[method|profile]). The user-facing GUI would enumerate those cards and let the user configure power profiles independently for each card.

    It does make sense to split off a separate executable from the GUI, since you need root privileges to write to the nodes, and you also need root to READ the temperature values (they're under /sys/kernel/debug/... *grumble*). The power profile nodes themselves can be read as an unprivileged user, but a small program to set the values makes sense, as it minimizes the attack surface.
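
    As an illustration of that discovery step (assuming the radeon sysfs layout with power_method and power_profile under /sys/class/drm/card<N>/device/; this is not the actual back-end code), the probing can look roughly like this:

    /* Rough sketch, not the actual back-end: walk /sys/class/drm,
     * match plain "card<N>" entries (skipping connector nodes like
     * "card0-HDMI-A-1"), and read the radeon power management nodes.
     * Reading works as an unprivileged user. */
    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>
    #include <dirent.h>

    static int read_node(const char *card, const char *node, char *buf, size_t len)
    {
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/class/drm/%s/device/%s", card, node);
        f = fopen(path, "r");
        if (!f)
            return -1;
        if (!fgets(buf, (int)len, f))
            buf[0] = '\0';
        fclose(f);
        buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline */
        return 0;
    }

    int main(void)
    {
        DIR *d = opendir("/sys/class/drm");
        struct dirent *e;
        char method[64], profile[64];

        if (!d) {
            perror("/sys/class/drm");
            return 1;
        }
        while ((e = readdir(d)) != NULL) {
            if (strncmp(e->d_name, "card", 4) != 0 ||
                !isdigit((unsigned char)e->d_name[4]) ||
                strchr(e->d_name, '-'))
                continue;
            if (read_node(e->d_name, "power_method", method, sizeof(method)) == 0 &&
                read_node(e->d_name, "power_profile", profile, sizeof(profile)) == 0)
                printf("%s: method=%s profile=%s\n", e->d_name, method, profile);
            else
                printf("%s: no power_method/power_profile nodes\n", e->d_name);
        }
        closedir(d);
        return 0;
    }

    Reading these nodes needs no privileges, which is why only the small setter program has to run as root.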

  • #38
    Originally posted by DanL View Post
    Why? Catalyst has excellent 3D performance compared to the open-source radeon driver. Have you ever seen a changelog for Catalyst? There are often a bunch of per-app optimizations.
    It has moderate performance, but like I said, it's much slower than its Windows counterpart and DX. Yes, it's faster than Gallium, but it's also a lot less stable and more error-prone than Gallium. Hence, not "highly optimized", LOL.

    By the way, there's no such thing as per-app optimizations; it's either fixing crappy app code at the driver level or fixing nasty errors in the driver. Use Google and find out what those "app optimizations" really mean (your eyes will weep tears of blood).

    Another one: "but Catalyst supports GL X.x.x faster than Gallium". OK, yes, they expose the API version very quickly, but it's normally not even usable until a bunch of releases later, apps most likely need a bunch of workarounds that aren't needed with NVIDIA or Gallium, and the performance is usually very subpar compared to NVIDIA (on same-class hardware) until a bunch of releases later. So again, not exactly "highly optimized" (ask Unigine how pleasant fglrx is to work with).

    So we don't dispute that it gives more FPS than Gallium in benchmarks; we laugh at the fact that someone calls that bloody mess of a crashy driver "highly optimized".

  • #39
    I'm surprised how close AMD and Intel are in some of the tests. I thought AMD would be much faster.

  • #40
    Originally posted by mikkl View Post
    I'm surprised how close AMD and Intel are in some of the tests. I thought AMD would be much faster.
    On Windows, the Llano chips usually seem to beat the Intel ones fairly easily in graphics workloads, but here it seems either the Intel Linux drivers are much better than the Intel Windows drivers (very possible), or the Catalyst drivers on Linux are slower than on Windows.
