Why More Companies Don't Contribute To X.Org

  • #71
    Originally posted by deanjo View Post
    As far as the timeframe goes, I can't see anything radically changing with respect to discrete solutions for at least another 10-15 years.
    No. You are wrong.

    We need 3 shrinks in lithography to reach that point.

    32nm for the first generation of APUs: decent graphics performance at low-to-medium settings.

    22nm for the second: this will reach good-enough performance for modern gaming, probably able to play most modern games at medium resolutions (1440 or 1680) with all or most effects enabled.

    16nm for the third: this will be the final nail in the coffin. By this time OpenCL will be mature and the gpu part will probably be bigger than the cpu part. Graphics performance will be plenty for mainstream gaming, and higher resolutions will simply need more APUs. Crossfire solutions will be mature enough for almost double scaling.

    11nm for the fourth: at this point, AMD will probably stop selling discrete gpus, since the gpu part inside an APU will be by far the bigger part anyway.

    We are expected to reach 16nm around 2013-14, and 11nm in 2015-16. This is not set in stone, and it might be pushed back, but at least Intel is positive it can be done.

    You have to remember that with each shrink, gpu performance will almost double.

    By 2020, it won't be possible to find dedicated GPUs in stock...
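
    As a back-of-the-envelope check on the "almost double" figure (my own rough math, not a vendor roadmap): transistor density scales roughly with the inverse square of the feature size, so going from 32nm to 22nm gives about (32/22)^2 = 2.1x the transistors in the same die area, 22nm to 16nm about (22/16)^2 = 1.9x, and 16nm to 11nm about (16/11)^2 = 2.1x. If most of the extra transistors go to the gpu, each shrink really is close to a doubling.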



    • #72
      And to bring the previous messages back on-topic:

      In the long term, it won't matter much which graphics server is used, X or Wayland.

      Graphics should be handled by the gpu directly and entirely, since gpus on die will be universal and powerful.

      This means we need better opensource drivers and compositing window managers, since they are what matters most.

      Don't be surprised if after 2015-16 you see dri drivers wholly inside the Linux kernel. It will make sense by then, if APUs take over the market (the most probable outcome).

      If you believe this is sci-fi, then try to explain why AMD bought ATI, and why Intel is experimenting with Larrabee... Try to understand what the big deal behind Fusion is and why AMD keeps insisting it is the future. Try to understand why the 32nm revisions of Core iX are simply dual-core Nehalems with on-package gpus, and why Sandy Bridge will have on-die gpus across its entire lineup...

      In reality, the big 3 companies (Intel, AMD, Nvidia) have known that this will be the future since 2003-2004 or even earlier. It was known because we cannot raise clock speeds anymore, and general code doesn't scale well past 8 cores.

      NVIDIA, since it doesn't have cpus, tries to reach that point from the gpu side, with CUDA. Intel, since it doesn't have dedicated gpus, obviously tries to reach it from the cpu side. AMD could have chosen to develop its own gpu, but decided years ago that simply buying a gpu player would be better, and bought ATI.



      • #73
        I studied Wayland a little more. Previously I hadn't looked into its details.

        Earlier I mentioned that what we really need is better opensource drivers plus better compositing managers, and that the choice between X and Wayland will be irrelevant.

        Well, it seems I am right, but with a correction:

        Wayland is a compositing manager itself.

        So it seems that it could provide advantages in latency after all, and it is the way forward, not because it is new but because it skips some unneeded communications between X and a compositor.
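
        To make the "skips the middleman" point concrete, here is a minimal sketch (my own, assuming libwayland-client is installed; build with -lwayland-client). A Wayland client opens a socket straight to the compositor; there is no separate display server sitting between them:

        #include <stdio.h>
        #include <wayland-client.h>

        int main(void)
        {
            /* NULL means "use the WAYLAND_DISPLAY environment variable". */
            struct wl_display *display = wl_display_connect(NULL);
            if (!display) {
                fprintf(stderr, "no Wayland compositor found\n");
                return 1;
            }
            printf("talking directly to the compositor\n");
            wl_display_disconnect(display);
            return 0;
        }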

        Wow, this makes me want to install Wayland on my Arch ASAP (when the Qt port is ready).



        • #74
          I am excited by your posts, Templar ^^,



          • #75
            Originally posted by TemplarGR View Post
            No. You are wrong.

            We need 3 shrinks in lithography to reach that point.

            32nm for the first generation of APUs: decent graphics performance at low-to-medium settings.

            22nm for the second: this will reach good-enough performance for modern gaming, probably able to play most modern games at medium resolutions (1440 or 1680) with all or most effects enabled.

            16nm for the third: this will be the final nail in the coffin. By this time OpenCL will be mature and the gpu part will probably be bigger than the cpu part. Graphics performance will be plenty for mainstream gaming, and higher resolutions will simply need more APUs. Crossfire solutions will be mature enough for almost double scaling.

            11nm for the fourth: at this point, AMD will probably stop selling discrete gpus, since the gpu part inside an APU will be by far the bigger part anyway.

            We are expected to reach 16nm around 2013-14, and 11nm in 2015-16. This is not set in stone, and it might be pushed back, but at least Intel is positive it can be done.

            You have to remember that with each shrink, gpu performance will almost double.

            By 2020, it won't be possible to find dedicated GPUs in stock...
            You're assuming that current graphics capabilities will remain the same. That has never been the case. As we have seen in the past, the APIs out there grow along with each generation of graphics hardware. Not to mention that those same lithography shrinks also apply to discrete solutions.

            Until there is actual lifelike realtime rendering, graphics demands will keep climbing and will keep requiring a high-class discrete solution.



            • #76
              Originally posted by TemplarGR View Post
              Don't be surprised if after 2015-16 you see dri drivers wholly inside the Linux kernel. It will make sense by then, if APUs take over the market (the most probable outcome).
              Not likely. The current graphics and compute APIs are too big to live in the kernel and there is not much advantage to putting them there.



              • #77
                Originally posted by agd5f View Post
                Not likely. The current graphics and compute APIs are too big to live in the kernel and there is not much advantage to putting them there.
                Very likely. You are wrong, because you are thinking in terms of now, not in terms of the future.

                In the future, the cpu and gpu will be interconnected; they won't be used only for graphics work, they will work closely together on everything.

                In order for them to work that closely, you will have to move their drivers into the kernel. There is no alternative really...

                You do understand that in the future all cpus will have gpgpu cores inside? GPGPU, not GPU only. They will be used for general calculations. Eventually they will be like SSE is now, for example: a part of your processor. The kernel will have to manage them too. You want to tell me that half your processor will be managed in kernel space and half in userspace?
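
                To illustrate the SSE comparison with a toy example (my own, nothing vendor-specific): SSE instructions are issued straight from userspace machine code, and the kernel only saves and restores the register state on a context switch. Something like this needs no driver at all:

                #include <stdio.h>
                #include <xmmintrin.h>    /* SSE intrinsics */

                int main(void)
                {
                    __m128 a   = _mm_set_ps(1.0f, 2.0f, 3.0f, 4.0f);
                    __m128 b   = _mm_set_ps(10.0f, 20.0f, 30.0f, 40.0f);
                    __m128 sum = _mm_add_ps(a, b);   /* four additions in one instruction */

                    float out[4];
                    _mm_storeu_ps(out, sum);
                    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
                    return 0;
                }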

                Of course, the Mesa libs will remain in userspace. Only the drivers will have to move in. And you will move them, unless you want Linux to be left behind the times.

                This will happen. I am not making this up. Carefully study the market and you will find out about it too.

                Of course, this is up to you. I am not a Mesa or kernel developer, though given the time I would love to become one in the future. But stop being conservative here, and look closely at what the hardware players are doing. Heck, I believe you work for AMD, no? Then watch what AMD is saying about the future of APUs...

                They will not say that dedicated gpus will vanish. In fact, they say the opposite. It is not in their interest to say otherwise now. But eventually, they will just move to APUs only.



                • #78
                  Originally posted by deanjo View Post
                  You're assuming that current graphics capabilities will remain the same. That has never been the case. As we have seen in the past, the APIs out there grow along with each generation of graphics hardware. Not to mention that those same lithography shrinks also apply to discrete solutions.

                  Until there is actual lifelike realtime rendering, graphics demands will keep climbing and will keep requiring a high-class discrete solution.
                  No, no, you still don't get it, probably because you are a fan of NVIDIA...

                  Let me give a very simplified example.

                  Stop thinking in terms of performance for a minute. Think in terms of die area, because that is all that really matters: the size of a transistor and the die area. Of course architecture counts too, but architecture is more about how the code manipulates the transistors.

                  Let's say that at 45nm you have 1 CPU and 1 GPU in your pc, and their die areas and transistor counts are the same.

                  We move to 32nm, and because code doesn't make much use of extra cpu transistors anyway, we move the gpu in. We have 2/3 CPU and 1/3 GPU. Obviously, the gpu part of the package has 1/3 of the performance of a discrete solution of the same lithography and die area.

                  Let us move to 22nm. Since the cpu doesn't matter much, we decide to just shrink the cpu part and double the GPU part. Now we have 1/3 CPU and 2/3 GPU. At 22nm, the on-die GPU part will have 2/3 the performance of a dedicated solution of the same lithography and die area.

                  Now let us go to 16nm. We shrink the cpu and double the gpu part again. Now we have 1/6 CPU and 5/6 GPU. The on-die GPU part has 5/6 the performance of a dedicated GPU of the same process and die area.

                  And finally, 11nm. Shrink the cpu, double the gpu. We have 1/12 CPU and 11/12 GPU. Sure, we would prefer 12/12 of a GPU, but since it makes no sense for AMD to keep 2 lines of products, it cuts the dedicated gpu. You just use more APUs if you want more performance.
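
                  If it helps, here is the same toy model in a few lines of C (my own sketch, only encoding the assumption above that the cpu's share of the die halves at every node and the gpu takes the rest):

                  #include <stdio.h>

                  int main(void)
                  {
                      const int nodes[] = { 32, 22, 16, 11 };  /* process nodes in nm */
                      double cpu_share = 2.0 / 3.0;            /* the 32nm split from above */

                      for (int i = 0; i < 4; i++) {
                          double gpu_share = 1.0 - cpu_share;
                          printf("%2dnm: CPU %2.0f%%, GPU %2.0f%% of a same-size discrete gpu\n",
                                 nodes[i], 100.0 * cpu_share, 100.0 * gpu_share);
                          cpu_share /= 2.0;                    /* halve the cpu share at the next shrink */
                      }
                      return 0;
                  }

                  It prints the 33% / 67% / 83% / 92% progression above, i.e. the 1/3, 2/3, 5/6, 11/12 splits.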

                  Another thing to consider is that while a mixed solution will have fewer transistors for the gpu than a dedicated solution, it will have the advantage of not having to move data across the PCI bus. That will heavily offset the difference.

                  Of course, this is a simplified example. The cpu part WILL advance too, and probably will not just shrink at every new node. But, as I said, code does not scale past 8 cores. We cannot increase the clock speed, and we cannot just throw in more cores, so we have to do something with the transistors. Since gpus can do much more parallel work, and give almost perfect scaling, the gpu will get all the attention from now on...

                  I hope I enlightened you.



                  • #79
                    You and agd5f are saying pretty much the same thing. When agd5f says "the current graphics and compute APIs are too big to live in the kernel" he is talking about Mesa and similar components, since those are the components which implement the graphics and compute APIs.

                    You may be talking about moving the Gallium3D bits into the kernel and keeping the common Mesa code in userspace, but that would be quite a bit *less* efficient than the current implementation. The current code does quite a bit of buffering in userspace to minimize the number of kernel calls required (kernel calls are slower than calls between userspace components); if the "driver" code (presumably the Gallium3D driver) were in kernel space then the number of kernel calls required would go up dramatically and performance would drop.
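
                    The buffering point is easy to picture with a toy sketch (my own illustration; submit_batch() here is a hypothetical stand-in for the single ioctl a real userspace driver would make on its drm file descriptor):

                    #include <stdint.h>
                    #include <stddef.h>
                    #include <stdio.h>

                    #define BATCH_SIZE 256

                    static uint32_t batch[BATCH_SIZE];
                    static size_t   batch_len;

                    /* Hypothetical: one transition into the kernel per full buffer. */
                    static void submit_batch(const uint32_t *cmds, size_t count)
                    {
                        (void)cmds;
                        printf("one kernel call covering %zu commands\n", count);
                    }

                    static void emit(uint32_t cmd)
                    {
                        batch[batch_len++] = cmd;
                        if (batch_len == BATCH_SIZE) {       /* cross into the kernel only when full */
                            submit_batch(batch, batch_len);
                            batch_len = 0;
                        }
                    }

                    int main(void)
                    {
                        for (uint32_t i = 0; i < 1000; i++)
                            emit(i);                         /* 1000 commands, only 3 kernel calls so far */
                        if (batch_len)
                            submit_batch(batch, batch_len);  /* flush the 232-command tail */
                        return 0;
                    }

                    Move emit() behind the kernel boundary and each of those 1000 commands becomes its own kernel call; that is the overhead described above.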

                    re: "watch what AMD is saying about the future of APUs", agd5f does work for AMD (as you suspected) and was the first open source developer to work on the new AMD APU graphics hardware. Alex has a better understanding than most about the future of APUs... he just can't tell you everything yet



                    • #80
                      Originally posted by TemplarGR View Post
                      Very likely. You are wrong, because you are thinking in terms of now, not in terms of the future.

                      In the future, the cpu and gpu will be interconnected; they won't be used only for graphics work, they will work closely together on everything.

                      In order for them to work that closely, you will have to move their drivers into the kernel. There is no alternative really...

                      You do understand that in the future all cpus will have gpgpu cores inside? GPGPU, not GPU only. They will be used for general calculations. Eventually they will be like SSE is now, for example: a part of your processor. The kernel will have to manage them too. You want to tell me that half your processor will be managed in kernel space and half in userspace?
                      The whole GPU will be managed by the kms drm just like it is now. The graphics and compute APIs are HUGE; it makes no sense to move them into the kernel. If an application wants to use the API it will link with the appropriate lib which opens the userspace driver which will send commands to the hw via the drm just like we do now for 2D/3D. What advantage does cramming something like mesa into the kernel serve?
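
                      For anyone curious, the bottom of that stack is thin and already visible from userspace. A minimal sketch, assuming libdrm is installed (build against libdrm and link with -ldrm):

                      #include <stdio.h>
                      #include <fcntl.h>
                      #include <unistd.h>
                      #include <xf86drm.h>

                      int main(void)
                      {
                          /* The device node exported by the kernel drm driver. */
                          int fd = open("/dev/dri/card0", O_RDWR);
                          if (fd < 0) {
                              perror("open /dev/dri/card0");
                              return 1;
                          }

                          /* Ask the kernel which drm driver is behind it (e.g. "radeon"). */
                          drmVersionPtr ver = drmGetVersion(fd);
                          if (ver) {
                              printf("kernel drm driver: %s\n", ver->name);
                              drmFreeVersion(ver);
                          }
                          close(fd);
                          return 0;
                      }

                      Everything above that thin interface (mesa, the 3D drivers, a compute runtime) stays in userspace.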

                      Originally posted by TemplarGR View Post
                      Of course, MESA libs will remain in userspace. Only the drivers will have to move in. And you will move them, unless you want linux to be left behind the times.
                      The 3D drivers are part of mesa. They interface with the hw via the drm. Other OSes work the same way. There's no advantage to cramming an enormous API into the kernel.

