Gallium3D / LLVMpipe With LLVM 2.8


  • #41
    Yeah, I've heard no complaints on the Intel drives.

    Thread derailment? That's par for the course on Phoronix.



    • #42
      AFAIK when SSDs have errors it's during the write process, while HDDs tend to fail during reads, so with an HDD you only become aware of errors when you try to read the data you thought was safe. At least in this regard SSDs seem safer. I don't know about real-life expectancy, though.



      • #43
        There are still too many problems with SSDs. For example, when a bad HDD write happens, it fails while transmitting to the HDD and the file-save dialog tells you it failed.

        Due to the horrible nature of SSDs, the OS successfully transmits the file to the SSD, but then the SSD controller can write it unsuccessfully to its own disk without the user knowing until it's already too late.

        On paper SSDs rule, but in practice they still don't. I'm sure they'll be ready by the time I'm in the market for internal computer storage again, so...



        • #44
          Excuse me for the weird wrong characters and double posts, but my phone is almost dead; broken touch screen and lots of weird Java errors everywhere too. I'll get a new one tomorrow... <_<'



          • #45
            Originally posted by V!NCENT
            BlackStar, just shut the fuck up. I'm not even going to make any effort defending software fallback. If a user doesn't like 2fps Vs. a blank screen the problem is simply between keyboard and chair. Nobody who is able to influence the Gallium3D code will think otherwise. Noobs be noobs.
            It reminds me of the times when I ran Quake 1 on a NexGen CPU (a 586 without an FPU, recognised by applications as a 386). I did some workarounds, modifying a few different FPU emulators for DOS and some other quirks to make it advertise itself as a 486DX to DOS, and I was very, very happy when I finally saw it running; even though it was less than 1 FPS, it worked.



            • #46
              Originally posted by xeros
              It reminds me of the times when I ran Quake 1 on a NexGen CPU (a 586 without an FPU, recognised by applications as a 386). I did some workarounds, modifying a few different FPU emulators for DOS and some other quirks to make it advertise itself as a 486DX to DOS, and I was very, very happy when I finally saw it running; even though it was less than 1 FPS, it worked.
              Edit: It was actually ~5-20 seconds per frame on a 100 MHz CPU with software FPU emulation.



              • #47
                Originally posted by xeros
                It reminds me of the times when I ran Quake 1 on a NexGen CPU (a 586 without an FPU, recognised by applications as a 386). I did some workarounds, modifying a few different FPU emulators for DOS and some other quirks to make it advertise itself as a 486DX to DOS, and I was very, very happy when I finally saw it running; even though it was less than 1 FPS, it worked.
                My 486DX2 at 66 MHz only ran it at something like 15 FPS at minimum resolution.



                • #48
                  How do I enable llvmpipe? I have Kubuntu 10.04 (Lucid) with xorg-edgers packages. My machine has an integrated ATI Xpress X1250 card. XBMC runs very slowly on it because the r300 DRI driver lacks a good GLSL implementation. I want to try llvmpipe to see if it's better than r300-dri, but I don't know how to enable it.
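
                  One thing that might be worth trying, assuming the xorg-edgers Mesa build includes the Gallium llvmpipe driver: LIBGL_ALWAYS_SOFTWARE is a standard Mesa environment variable that forces the software rasterizer, and Mesa builds with Gallium software rendering also read GALLIUM_DRIVER=llvmpipe to pick llvmpipe over softpipe (whether this particular 7.10-devel build honours that variable is an assumption). For example, from a terminal:

                      # run XBMC on Mesa's software rasterizer instead of the r300 DRI driver
                      LIBGL_ALWAYS_SOFTWARE=1 GALLIUM_DRIVER=llvmpipe xbmc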



                  • #49
                    Have you tried r300g (the Gallium3D HW-accelerated driver) on your hardware? AFAIK that should give you the best of both worlds -- LLVM JIT-compiled vertex shaders running on the CPU plus HW-accelerated fragment shaders running on the GPU.


                    • #50
                      Originally posted by bridgman
                      Have you tried r300g (the Gallium3D HW-accelerated driver) on your hardware? AFAIK that should give you the best of both worlds -- LLVM JIT-compiled vertex shaders running on the CPU plus HW-accelerated fragment shaders running on the GPU.
                      My glxinfo shows:

                      OpenGL vendor string: X.Org R300 Project
                      OpenGL renderer string: Gallium 0.4 on RS690
                      OpenGL version string: 2.1 Mesa 7.10-devel
                      OpenGL shading language version string: 1.20

                      Does that mean I'm running r300g? If so, it's still very slow for XBMC. I only got 15 FPS. Not much improvement since 5 months ago:
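
                      For what it's worth, the renderer string is the usual way to tell: "Gallium 0.4 on RS690" indicates the Gallium r300g driver on the RS690 IGP, since the classic r300 DRI driver identifies itself without the "Gallium" prefix. A llvmpipe run should name itself explicitly; illustrative output, not captured on this machine:

                          $ LIBGL_ALWAYS_SOFTWARE=1 glxinfo | grep "renderer string"
                          OpenGL renderer string: Gallium 0.4 on llvmpipe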
