TitaniumGL 3D drivers (linux version)


  • #11
    Originally posted by curaga View Post
    xdg-open is a Gnome-centric freedesktop.org util, you're not going to find it on many smaller distros. Or, maybe it is there, but doesn't know of any browser.
    I use KDE and it works perfectly. It obeys my browser preference in KDE's "Default applications" settings.

    It is not Gnome-centric. It's universal and has no Gnome dependencies.



    • #12
      Originally posted by RealNC View Post
      I use KDE and it works perfectly. It obeys my browser preference in KDE's "Default applications" settings.

      It is not Gnome-centric. It's universal and has no Gnome dependencies.
      Cool, now tell me how to set the default browser using icewm, with no KDE or Gnome parts installed.



      • #13
        Use the settings menu inside Firefox, or
        Code:
        xdg-settings set default-web-browser firefox.desktop
        ?
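        To check what the xdg tools currently consider the default (a quick sketch; it assumes an xdg-utils version that ships xdg-settings):
        Code:
        xdg-settings get default-web-browser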



        • #14
          Originally posted by curaga View Post
          Cool, now tell me how to set the default browser using icewm, with no KDE or Gnome parts installed.
          Use xdg-mime: https://wiki.archlinux.org/index.php/Xdg-open#xdg-mime
          For the browser, the mime-types to set are x-scheme-handler/http and x-scheme-handler/https.
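          For example, to point both handlers at Firefox (a minimal sketch; firefox.desktop is just the usual name of its desktop entry and may differ on your distro):
          Code:
          xdg-mime default firefox.desktop x-scheme-handler/http
          xdg-mime default firefox.desktop x-scheme-handler/https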



          • #15
            Originally posted by ChrisXY View Post
            Wat.


            You can use LD_PRELOAD if you don't want to replace system files.
            Code:
            % LD_PRELOAD=/home/chris/TitaniumGL_linux_version/libGL.so.1 glxinfo32 | grep -E 'version|render'
            direct rendering: Yes
            server glx version string: 1.3
            client glx version string: GLX_ARB_create_context GLX_ARB_get_proc_address GLX_SGIX_fbconfig
            GLX version: 1.2
            OpenGL renderer string: TitaniumGL/4 THREADs/SOFTWARE RENDERING/4 TMUs
            OpenGL version string: 1.4 v2009-2012/3/08 (c)Kovacs Gergo
            I'm not sure if that's distro-agnostic, but you could probably use the /etc/ld.so.preload file instead, i.e. something like
            echo /home/chris/TitaniumGL_linux_version/libGL.so.1 > /etc/ld.so.preload
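            If you only want the preload for individual programs rather than system-wide, a small wrapper script is another option (just a sketch; the library path is taken from the quote above and the install location is up to you):
            Code:
            #!/bin/sh
            # Run a single program with TitaniumGL's libGL preloaded,
            # leaving the system-wide GL libraries untouched.
            export LD_PRELOAD=/home/chris/TitaniumGL_linux_version/libGL.so.1
            exec "$@"

            Saved as e.g. ~/bin/titaniumgl-run and marked executable, it can be invoked as "titaniumgl-run glxgears".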

            Geri: "there will be a 64-bit version eventually, but not in the coming months. I don't have much time these days; I must take care of different projects as well."
            What are these "different projects" you also care about - anything we might be interested in too?

            BTW Geri, why are you making it "freeware" (very good) but not putting it on GitHub and releasing the code (not so good)? And did you write a lot of hand-benchmarked SIMD code in your own routines, or did you pull in a fast third-party SIMD lib like Eigen
            (http://eigen.tuxfamily.org/index.php?title=Main_Page) to make your life easier and get the far better speed than LLVMpipe etc.?
            Last edited by popper; 10 March 2012, 10:23 AM.



            • #16
              ,,What are these "different projects" you also care about - anything we might be interested in too?''

              1. I have worked for the past few weeks without a pause (24/7, minus sleeping) on my software. I must rest a bit.

              2. I have a 3D RPG-maker software, and I need to fix it under Windows with Intel GPUs. Something is terribly wrong with the Intel graphics drivers for Windows; I was able to run it on Intel now, but it still falls apart. I don't have an Intel test machine, so it is ugly work to do.

              3. GENERAL THINKING FROM THE INDUSTRY - I am working on an experimental real-time ray tracer for the CPU. I want to get rid of GPUs; I want to get rid of rasterization altogether. We are living in the last minutes of the era of raster-based 3D. The rendering mechanism we are still using, which was basically brought into wide use by 3dfx, will maybe start dying soon. We don't need GPUs at all. IMHO, the attempt to run generic code on GPUs is a wrong concept of creating applications, and it is only forced by NVIDIA because they cannot ship x86(-64) CPUs - not only because they have no license to do it, but because they don't have the technology to create it (NVIDIA's x86 CPU cores are probably still stuck around 500-600 MHz in their laboratories).

              The problem with OpenCL/CUDA/shaders and everything else related to this is that they do not give you the ability to create real applications. These things create an asymmetric platform, and no serious application has EVER been done in them, because they are useless to somebody who wants to create a real application. This conception creates an incohesive, non-monolithic programming style that cannot be programmed in simple, algorithmic ways. Yes, I know that everyone who is interested in AMD, NVIDIA, PowerVR and any other products related to this says the opposite. But in reality, no real programmer will ever touch these platforms of his own free will. This asymmetric conception of application development multiplies development time, and in some cases it is useless. 90% of algorithms simply cannot be efficiently ported to GPUs, because in the real world an application does not just flow a bunch of data over a pipeline. Also, it is not possible to use the GPU the way we use the FPU: reaching the FPU takes just a few clocks, while utilizing the GPU needs a driver and an operating system - it cannot simply be reached with interrupts. This technology will simply never be able to step past this point.

              Creating a real application works like this: the programmer creates functions, arrays, loops and ifs in a COHESIVE way through the language constructs of the selected programming language, and then the application can use the libraries of the operating system to read files, get the mouse cursor, etc. This is impossible with the GPGPU conception, and this is the main reason why nobody uses it (except the never-heard-of projects that directly target the GPGPU industry). This kind of programming ,,ideology'' hangs on the umbilical cord of NVIDIA and the other multi-corporations that are interested in rasterization, because their products rely on rasterization - the newest GPUs are still just tweaked Voodoo1s with additional circuits to reach general-purpose functionality and to be able to run shaders. And the problem is that they CANNOT just create new products, because even if they can see beyond this, they will rely on application development methods built on the previous, wrong style of software creation, so it is rarely successful, while the industry wallows in the chaos of the so-called ,,software and hardware properties'' created by illegitimate ,,democracy'' in some countries. And I also decided to pull the last skin off my foxes; this is why TitaniumGL is being released now. It is still just a rasterizer. And when the time comes, I will switch to software ray tracing in my products and press Shift-Delete on my old GPU-related code. It is worth nothing.

              I investigated the possibility of creating a real-time ray tracing architecture based ONLY on pure algorithmic implementations, and I have found that we (the whole industry) have been tricked - again. The real situation is that we had already reached the point where ray tracing could overtake the speed of rasterization in 2006-2009. I have done some calculations and found that a ray tracing hardware could be implemented with roughly the same number of transistors that raster hardware has used for years, while the technology would allow a basically unlimited number of polygons (~1-20 billion different polygons in real time) at full-HD resolution in a shadowed, refracted (basically: truly ray traced) environment at over 24 fps. Of course, if NVIDIA created hardware like this, it would mean that ALL of their techniques could go literally into the garbage, and they would have to start the work from zero. For those who follow these things: this was the real reason why NVIDIA bought PhysX back then; they lacked the proper technology to reach this goal. Shamefully, they still lack it; PhysX is not suited to the very specific kind of physical simulation needed to reach real-time ray tracing. And the real-time ray tracers implemented on GPUs are still just quasi-generic software created by individuals who wish the whole conception into the garbage and would rather code a monolithic environment instead.

              I also decided to jump off this boat, which will sink, along with OpenCL, DirectX, OpenGL, CUDA, shaders, GPUs, the whole GPGPU conception, whatever. And no, GPGPU is unable to be more than this - to be more, it would need HDDs connected, it would need to boot Windows (or Linux, it does not matter), it would need to be able to run a Firefox... Basically, it would then become a CPU, so it would not be a GPU any more. The definition of a GPGPU carries the conceptual failure in itself. So I decided to try to create a real-time ray tracer. However, I am unsuccessful at the moment, because I have a really large number of bugs (and some limitations that I still keep because I don't want platform-specific things in the code). My ray tracer is a very epic, probably undoable, but painful project. It still falls apart badly, produces ugly graphics, and the effects are not yet properly implemented. So I must hammer it together until I get some enjoyable graphics quality; it still looks like some bad Quake 2 clone from 1993. But the dinosaurs are going to die out. It cannot be avoided. And I want to have a finished, usable, good technology when (if?) that happens. This project has real-time priority in my brain, and it is really what keeps me from proceeding with the others.

              ,,SIMD''
              Oh, I accidentally partly answered this already. No, I don't use SIMD; I don't like to ruin my algorithms. TitaniumGL does not even have inline SSE, or any kind of inline assembly; it is just pure algorithm. However, some other projects of mine may have inline assembly, but not as a concept or a base thing, just to implement a simple function.
              Last edited by Geri; 10 March 2012, 11:26 AM.



              • #17
                How do you intend to run vastly parallel algorithms on the CPU then? The GPU can run them orders of magnitude faster. For university (numerical analysis), I wrote some C code to solve matrices (for linear equations). No matter how good the algorithm was (even taking brain-dead shortcuts like not checking for divisions by zero), as soon as it was ported to CUDA, it solved them dozens of times faster.

                Unless CPUs can do that stuff, GPUs are here to stay.



                • #18
                  Originally posted by RealNC View Post
                  How do you intend to run vastly parallel algorithms on the CPU then? The GPU can run them orders of magnitude faster. For university (numerical analysis), I wrote some C code to solve matrices (for linear equations). No matter how good the algorithm was (even taking brain-dead shortcuts like not checking for divisions by zero), as soon as it was ported to CUDA, it solved them dozens of times faster.

                  Unless CPUs can do that stuff, GPUs are here to stay.
                  GPUs can only run a very limited class of things in parallel.

                  We were indirectly involved in some computational work. I believe the client was running some ray-tracing-type workload using, I think, a GTX 560 or something in fp32 mode. My partner replicated a reference implementation for fp64 on an original Nehalem Core i7. Our unoptimized algorithm had 4x the throughput running in double-precision mode compared with their "optimized" single precision. Now, it could just be the quality of the code written; I couldn't say.

                  The only things GPUs seem to be generally good at are some basic image processing tasks. I haven't done that much with them, but I would wonder how well a 2D image median filter might run on a GPU.



                  • #19
                    Originally posted by Gusar View Post
                    Use xdg-mime: https://wiki.archlinux.org/index.php/Xdg-open#xdg-mime
                    For the browser, the mime-types to set are x-scheme-handler/http and x-scheme-handler/https.
                    Huh, that's actually fairly nice. Last I looked at the xdg tools there was no such util, but it has been a year or so. I even have some nightmares about things needing the GConf registry; very nice to see it has gone in a saner direction.



                    • #20
                      Originally posted by bnolsen View Post
                      GPUs can only run a very limited class of things in parallel.
                      That's the point though. Doing the same computation on a vast amount of data is what GPGPU is about. If you have an algorithm that can benefit from that, it will run much better on GPUs.

