TitaniumGL 3D drivers (linux version)


  • #16
    "What are these 'different projects' you also care about, anything we might be interested in too?"

    1. I have worked for the past few weeks without a pause (24/7, minus sleeping) on my software. I must rest a bit.

    2. I have a 3D RPG maker application and I need to fix it under Windows with Intel GPUs. Something is terribly wrong with the Intel graphics drivers for Windows; I was able to run it on Intel now, but it still falls apart. I don't have an Intel test machine, so it is ugly work to do.

    3. GENERAL THOUGHTS ABOUT THE INDUSTRY - I am working on an experimental real-time ray tracer for the CPU. I want to get rid of GPUs; I want to get rid of rasterization altogether. We are living in the last minutes of the era of raster-based 3D. The rendering mechanism we still use, which was basically brought to the wide market by 3dfx, may start dying soon. We do not need GPUs at all.

    IMHO, trying to run generic code on GPUs is a wrong concept for creating applications, and it is only pushed by NVIDIA because they cannot ship x86(-64) CPUs - not just because they have no license to do it, but because they do not have the technology to create one (NVIDIA's x86 CPU cores are probably still stuck around 500-600 MHz in their laboratories). The problem with OpenCL/CUDA/shaders and everything related to them is that they do not give you the ability to create real applications. These things create an asymmetric platform, and no serious application has EVER been done in them, because they are useless to anybody who wants to build a real application. This concept leads to an incohesive, non-monolithic way of programming that cannot be expressed in simple, algorithmic terms.

    Yes, I know that everyone with an interest in AMD, NVIDIA, PowerVR, or any other products related to this says the opposite. But in reality no real programmer will ever touch these platforms of his own free will. This asymmetric style of application development multiplies development time, and in some cases it is simply useless: 90% of algorithms cannot be ported to GPUs efficiently, because in the real world an application does not just flow a bunch of data through a pipeline. It is also not possible to use the GPU the way we use the FPU: reaching the FPU costs just a few clock cycles, but utilizing the GPU needs a driver and an operating system; it cannot simply be reached with interrupts. This technology will never be able to step over that point.

    Creating a real application works like this: the programmer writes functions, arrays, loops and ifs in a COHESIVE way through the constructs of the chosen programming language, and then the application uses the libraries of the operating system to read files, get the mouse cursor, and so on. This is impossible with the GPGPU concept, and that is the main reason nobody uses it (except the never-heard-of projects that directly target the GPGPU industry). This kind of programming "ideology" hangs on the umbilical cord of NVIDIA and the other corporations with an interest in rasterization, because their products rely on rasterization - the newest GPUs are still just tweaked Voodoo1s with additional circuits bolted on to reach general-purpose functionality and to run shaders. And the problem is that they CANNOT simply create new products: even if they can see beyond this, those products would rely on application development methods built on the same wrong style of software creation, so they would rarely be successful, while the industry wallows in chaos thanks to the so-called "software and hardware property" created by the illegitimate "democracy" of some countries.

    So I decided to pull the last skin off my foxes; this is why TitaniumGL is being released now. It is still just a rasterizer. And when the time comes, I will switch to software ray tracing in my products and press Shift-Delete on my old GPU-related code. It is worth nothing.
    I investigated the possibility of creating a real-time ray tracing architecture based ONLY on pure algorithmic implementations, and I found out that we (the whole industry) have been tricked - again. The real situation is that we already reached the point where ray tracing could be made faster than rasterization around 2006-2009. I have done some calculations and found that ray tracing hardware could be implemented from roughly the same number of transistors that raster hardware has used for years, while the technology would allow a practically unlimited polygon count (~1-20 billion different polygons in real time) at full HD resolution in a properly shadowed, refracted - in other words, genuinely ray traced - environment above 24 fps. Of course, if NVIDIA created hardware like this, it would mean that ALL of their techniques could go literally into the garbage and they would have to start the work from zero. For those who follow these things, this was the real reason NVIDIA bought PhysX back then: they lacked the proper technology to reach this goal. Shamefully, they still lack it; PhysX is not suited to the very specific kind of physical simulation needed for real-time ray tracing. And the real-time ray tracers implemented on GPUs are still just generic-looking software created by individuals who wish the whole concept into the garbage and would rather code a monolithic environment instead.

    So I also decided to jump off this boat, which will sink - including OpenCL, DirectX, OpenGL, CUDA, shaders, GPUs, the whole GPGPU concept, whatever. And no, GPGPU is unable to become more than this: to be more, it would need hard disks connected to it, it would need to boot Windows (or Linux, it does not matter), it would need to be able to run a Firefox... basically, it would become a CPU, so it would not be a GPU any more. The definition of GPGPU carries the failure of the concept in itself.

    So I decided to try to create a real-time ray tracer. However, I am unsuccessful at the moment, because I have a really large number of bugs (and some limitations I still hit, because I do not want platform-specific things in the code). My ray tracer is a very epic, probably undoable, but painful project. It is still very much falling apart, it produces ugly graphics, and the effects are not yet properly implemented. So I must hammer it together until I get some enjoyable graphics quality; right now it still looks like some bad Quake 2 clone from 1993. But the dinosaurs are going to die out. It cannot be avoided. And I want to have a finished, usable, good technology when (if?) that happens. This project has real-time priority in my brain, and this is what really forbids me from proceeding with the others.
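    To illustrate what I mean by a purely algorithmic, CPU-only approach, here is a minimal hypothetical sketch in plain C. This is NOT code from TitaniumGL or from my tracer - just a made-up example that shoots one primary ray per pixel against a single sphere; every name in it is invented:
    Code:
    /* Minimal CPU ray caster sketch: one sphere, one primary ray per pixel. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y, z; } vec3;

    static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Distance along the ray to the sphere, or -1.0 if the ray misses it. */
    static double hit_sphere(vec3 orig, vec3 dir, vec3 center, double radius)
    {
        vec3 oc = { orig.x - center.x, orig.y - center.y, orig.z - center.z };
        double a = dot(dir, dir);
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - 4.0 * a * c;
        return (disc < 0.0) ? -1.0 : (-b - sqrt(disc)) / (2.0 * a);
    }

    int main(void)
    {
        const int w = 64, h = 32;                /* tiny ASCII "framebuffer" */
        vec3 eye    = { 0.0, 0.0,  0.0 };
        vec3 center = { 0.0, 0.0, -3.0 };
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                vec3 dir = { (x - w / 2) / (double)w,
                             (y - h / 2) / (double)h, -1.0 };
                putchar(hit_sphere(eye, dir, center, 1.0) > 0.0 ? '#' : '.');
            }
            putchar('\n');
        }
        return 0;
    }
    Everything in there is ordinary functions, structs, loops and ifs; nothing needs a driver, a shading language, or anything from the operating system beyond stdio.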

    "SIMD"
    Oh, I accidentally partly answered this one already. No, I don't use SIMD; I don't like to ruin my algorithms. TitaniumGL does not even contain inline SSE or any other kind of inline assembly - it is pure algorithmic code. However, some of my other projects may contain inline assembly, though never as a concept or a base design, just to implement a simple function here and there.
    Last edited by Geri; 03-10-2012, 10:26 AM.



    • #17
      How do you intend to run vastly parallel algorithms on the CPU then? The GPU can run them orders of magnitude faster. For university (numerical analysis), I wrote some C code to solve matrices (for linear equations). No matter how good the algorithm was (even taking brain-dead shortcuts like not checking for divisions by zero), as soon as it was ported to CUDA, it solved them dozens of times faster.

      Unless CPUs can do that stuff, GPUs are here to stay.
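      To give an idea of why that ported so well (this is only a simplified, Jacobi-style sketch, not my actual university code, and every name in it is made up): each row update reads only the previous iterate, so every row can be computed independently.
      Code:
      /* One Jacobi iteration for A*x = b.  The loop over i is the part a CUDA
       * kernel would run with one thread per row; here it is plain serial C. */
      #include <stddef.h>

      void jacobi_step(size_t n, const double *A, const double *b,
                       const double *x_old, double *x_new)
      {
          for (size_t i = 0; i < n; i++) {            /* on the GPU: one thread per i */
              double sum = 0.0;
              for (size_t j = 0; j < n; j++)
                  if (j != i)
                      sum += A[i * n + j] * x_old[j];
              x_new[i] = (b[i] - sum) / A[i * n + i];  /* no division-by-zero check, as admitted above */
          }
      }
      Iterate that until x stops changing. On the CPU the outer loop runs one row at a time; in CUDA that loop simply becomes thousands of threads, which is presumably where most of the speedup comes from.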



      • #18
        Originally posted by RealNC View Post
        How do you intend to run vastly parallel algorithms on the CPU then? The GPU can run them orders of magnitude faster. For university (numerical analysis), I wrote some C code to solve matrices (for linear equations). No matter how good the algorithm was (even taking brain-dead shortcuts like not checking for divisions by zero), as soon as it was ported to CUDA, it solved them dozens of times faster.

        Unless CPUs can do that stuff, GPUs are here to stay.
        GPUs can only run a very limited class of things in parallel.

        We were indirectly involved in some computational work. I believe the client was running some ray-tracing-type code using, I think, a GTX 560 or something in fp32 mode. My partner replicated a reference implementation in fp64 on an original Nehalem Core i7. Our unoptimized algorithm had 4x the throughput running in double precision compared with their "optimized" single precision. Now, it could just be the quality of the code written, I couldn't say.

        The only things GPUs seem to be generally good at are some basic image processing tasks. I haven't done that much with them, but I would wonder how well a 2D image median filter might run on a GPU.
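        For reference, the naive CPU version of what I have in mind looks roughly like this - just an illustrative sketch for an 8-bit grayscale image, not code from any project discussed here. Every output pixel reads only its own 3x3 neighbourhood, so the work is independent per pixel; my question is really about how the per-pixel sort behaves on a GPU.
        Code:
        /* Naive 3x3 median filter over an 8-bit grayscale image (w x h). */
        #include <stdlib.h>
        #include <string.h>

        static int cmp_u8(const void *a, const void *b)
        {
            return (int)*(const unsigned char *)a - (int)*(const unsigned char *)b;
        }

        void median3x3(const unsigned char *src, unsigned char *dst, int w, int h)
        {
            memcpy(dst, src, (size_t)w * h);            /* leave the 1-pixel border untouched */
            for (int y = 1; y < h - 1; y++) {
                for (int x = 1; x < w - 1; x++) {
                    unsigned char win[9];
                    int k = 0;
                    for (int dy = -1; dy <= 1; dy++)    /* gather the 3x3 neighbourhood */
                        for (int dx = -1; dx <= 1; dx++)
                            win[k++] = src[(y + dy) * w + (x + dx)];
                    qsort(win, 9, 1, cmp_u8);           /* a fixed sorting network would be faster */
                    dst[y * w + x] = win[4];            /* the median of 9 samples */
                }
            }
        }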



        • #19
          Originally posted by Gusar View Post
          Use xdg-mime: https://wiki.archlinux.org/index.php/Xdg-open#xdg-mime
          For the browser, the mime-type to set is x-scheme-handler/http and x-scheme-handler/https
          Huh, that's actually fairly nice. Last time I looked at the xdg tools there was no such utility, but that was a year or so ago. I still have nightmares about things needing the gconf registry, so it's very nice to see it has gone in a saner direction.



          • #20
            Originally posted by bnolsen View Post
            GPUs can only run a very limited class of things in parallel.
            That's the point though. Doing the same computation on a vast amount of data is what GPGPU is about. If you have an algorithm that can benefit from that, it will run much better on GPUs.



            • #21
              I am a little bit sceptical about this project.

              1) It doesn't seem to implement shaders and shaders are likely to make TitaniumGL bloody slow if such support is ever added. llvmpipe is shaders-only. There is no algorithm specifically optimized for some fixed-function pipeline configuration in llvmpipe.

              2) I guess TitaniumGL does not strive for OpenGL correctness, which makes it not a viable permanent replacement for any GL driver.

              3) TitaniumGL may be using X11 to accelerate some operations.


              Regarding the Phoronix article:

              A) Different compositing managers are used for TitaniumGL and llvmpipe (why?).

              B) There is the nouveau DDX with TitaniumGL, but only Vesa with llvmpipe (why?).

              C) The applications can take different actions for each driver, because they see that TitaniumGL has only GL 1.4 while llvmpipe has GL 2.1 and half of the 3.x features.


              All in all, TitaniumGL is a nice project and has its purpose, but I don't think its code would be any useful to Mesa because of the points (1) and (2).
              Last edited by marek; 03-10-2012, 02:55 PM.



              • #22
                Originally posted by marek View Post
                I am a little bit sceptical about this project.

                1) It doesn't seem to implement shaders and shaders are likely to make TitaniumGL bloody slow if such support is ever added. llvmpipe is shaders-only. There is no algorithm specifically optimized for some fixed-function pipeline configuration in llvmpipe.

                2) I guess TitaniumGL does not strive for OpenGL correctness, which makes it not a viable permanent replacement for any GL driver.

                3) TitaniumGL may be using X11 to accelerate some operations.


                Regarding the Phoronix article:

                A) Different compositing managers are used for TitaniumGL and llvmpipe (why?).

                B) There is the nouveau DDX with TitaniumGL, but only Vesa with llvmpipe (why?).

                C) The applications can take different actions for each driver, because they see that TitaniumGL has only GL 1.4 while llvmpipe has GL 2.1 and half of the 3.x features.


                All in all, TitaniumGL is a nice project and has its purpose, but I don't think its code would be any useful to Mesa because of the points (1) and (2).
                Who ever proposed merging it with Mesa?

                1. It's closed source, so that throws that out the window

                2. The code is very likely to be non-portable considering the amount of effort required to get it running on Win/FreeBSD/Linux, and since the Win32 code is so different (d3d backend), it may as well be a completely separate project

                3. Even if the developer opened up the source code, it would take so much effort to integrate it with mesa/DRI/DRM that it'd be easier to rewrite it from scratch

                4. The renderer seems to sacrifice accuracy/precision in the name of speed, and that's generally not how mesa rolls (at least not without making it a user-tunable option).

                This is just more of the crazy Germans and their ray-tracing, software-rendering nonsense...

                Does a chip even exist in the sub-$10,000 range which could draw a game like Mass Effect 3 at 60 fps at 1920x1080 in real-time using ray tracing, software rendering, or both? (We're assuming just the actual visual fidelity of a game like that, not THE actual game, because you'll say that the API it was coded in is fundamentally flawed blah blah).

                Don't google it; I'll answer your question: No.
                Last edited by allquixotic; 03-10-2012, 03:24 PM.



                • #23
                  allquixotic: so you are basically saying that not even Google was able to find someone who can do it.
                  Thank you for the praise, that is so beautiful.



                  • #24
                    I did not try it, but does it really need to run Firefox when you use it?
                    Code:
                    strings libGL.so.1 |sort -u|grep http
                    firefox http://lgfxadserv.no-ip.org/titaniumgladssrvc/titaniumgladssrvc.php
                    http://LegendgrafiX.tk
                    http://TitaniumGL.tk
                    That looks really ugly, and a sed replacement would get rid of it anyway. Have you got quake3 optimizations in there? I am asking because of
                    Code:
                    strings libGL.so.1 |sort -u|grep quake
                    quake3
                    quake3demo



                    • #25
                      I did not try it, but does it really need to run Firefox when you use it?
                      You can disable it for 3 EUR.

                      Have you got quake3 optimizations in there
                      It forced quake3 onto a different code path, but I don't think that happens any more.



                      • #26
                        This is not freeware, it is adware.



                        • #27
                          Yes, it is malware, because it calls external URLs from within itself. It is also written by Windows guys.
                          Actually, I saw that coming, as Geri is the one developing a closed-source game and refusing to open-source it even partially.
                          The reason it is closed source: they want to be able to do anything they want (no limits), like running stolen GPL or proprietary code, calling websites, or opening popups, etc.

                          Let's see if I can separate good intentions from bad. You could set up an official site (oh, you have one, good!), then require basically everyone using it to register.
                          That would be done by going to the website and getting a serial number for free. There are some ads there. Goal achieved.

                          Now you could open up your library, no?

                          If you answer with "No", that means your code uses hidden backdoors or contains illegal stuff (just like all proprietary drivers, lol).



                          • #28
                            You should ignore crazycheese. He's been trolling the forums since forever.



                            • #29
                              I know him ^^'



                              • #30
                                What have I written that was trolling? RealNC, you are by no means a lesser troll!

                                Also, once the ad provider discovers that his ads are being misused because they are triggered automatically (and not by a human - they pay only for REAL clicks), this guy will have real problems. Because he behaves just like malware!

                                So, do you agree that you use stolen IP? Hehe, such fun with proprietary software, always! Every time, pure freeware that refuses to go open source is either stolen or has backdoors/kill switches.

