Why More Companies Don't Contribute To X.Org

  • #81
    Originally posted by agd5f View Post
    The whole GPU will be managed by the kms drm just like it is now. The graphics and compute APIs are HUGE; it makes no sense to move them into the kernel. If an application wants to use the API it will link with the appropriate lib which opens the userspace driver which will send commands to the hw via the drm just like we do now for 2D/3D. What advantage does cramming something like mesa into the kernel serve?



    The 3D drivers are part of mesa. They interface with the hw via the drm. Other OSes work the same way. There's no advantage to cramming an enormous API into the kernel.
    I wasn't talking about moving the graphics and compute libraries into the kernel, obviously; I am talking about the drivers. Since I am not a Linux graphics guru at the moment, I do not know exactly which parts are already in the kernel and which others could be put there. I have only a general overview of this. If everything that could be put in the kernel is already there, then fine.

    But there is one thing I am curious about:

    What happens when the CPU and GPU have to work together for compute operations? The model "give a bunch of work to the GPU and let it signal us when it finishes" is the easiest, but what if many messages have to pass between the two? How will the drivers handle that? I was under the impression that if both were inside the kernel, a situation like this could be optimized in the future.
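
    To make the two models concrete, here is a minimal sketch in C. The gpu_fence/gpu_submit/gpu_wait names are purely hypothetical stand-ins for a driver API, stubbed out so the sketch compiles and runs; the point is only that the fine-grained model pays the submit/wait round trip on every step, which is where the latency worry comes from.

    #include <stdio.h>

    /* Hypothetical handle for work submitted to the GPU. */
    typedef struct { int id; } gpu_fence;

    /* Hypothetical driver calls, stubbed with prints. */
    static gpu_fence gpu_submit(const char *work)
    {
        printf("submit: %s\n", work);
        return (gpu_fence){ .id = 1 };
    }

    static void gpu_wait(gpu_fence f)
    {
        printf("wait on fence %d (CPU sleeps or polls)\n", f.id);
    }

    int main(void)
    {
        /* Model 1: "fire and forget" -- one submission, one completion
         * signal. Cheap even if the userspace/kernel boundary is slow. */
        gpu_fence f = gpu_submit("big batch of work");
        gpu_wait(f);

        /* Model 2: fine-grained ping-pong -- every iteration pays the
         * full submit+wait round trip, so boundary latency dominates. */
        for (int i = 0; i < 3; i++) {
            gpu_fence step = gpu_submit("small step");
            gpu_wait(step);
            /* ...CPU consumes the partial result, prepares the next step... */
        }
        return 0;
    }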



    • #82
      The disconnect here is that when you say "move the graphics and compute APIs into the kernel" and "move the drivers into the kernel" you *are* talking about Mesa etc... I believe the "driver" bits you are talking about (synchronization between GPU & CPU activities etc..) are already in the kernel, although the details will need to evolve along with the hardware. There's just a lot *more* driver in userspace, and that will probably stay in userspace.

      The key point is that the interface between the userspace driver and the kernel driver is likely to remain much lower level than the common graphics & compute APIs, i.e. the userspace drivers will take care of transforming common APIs into hardware-specific commands, then the kernel driver will take care of executing those commands (including synchronization etc...).

      Bottom line: I believe the architecture used today aligns pretty well with what you "mean", but not with what you are "saying".
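
      To make that split concrete, here is a rough sketch in C. The cmd_submit struct, the DRM_IOCTL_SUBMIT number, and the command words are all hypothetical stand-ins (a real driver such as radeon defines its own, more involved interfaces), but the shape is the point: userspace translates API-level work into hardware commands, and the kernel only receives, validates, and executes them.

      #include <fcntl.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <sys/ioctl.h>
      #include <unistd.h>

      /* Hypothetical submission request: a buffer of GPU command words. */
      struct cmd_submit {
          uint64_t commands;   /* pointer to command words, cast to u64 */
          uint32_t num_words;
      };

      /* Hypothetical ioctl number; a real driver defines its own. */
      #define DRM_IOCTL_SUBMIT _IOW('d', 0x40, struct cmd_submit)

      int main(void)
      {
          /* Userspace-driver role (the Mesa side): turn an API-level
           * request into hardware-specific command words. These opcodes
           * are made up for illustration. */
          uint32_t cmds[] = { 0xC0001000, 0x00FF00FF };

          struct cmd_submit req = {
              .commands  = (uint64_t)(uintptr_t)cmds,
              .num_words = 2,
          };

          /* Kernel-driver role (the drm side): receives the low-level
           * buffer and schedules it on the hardware. The graphics API
           * itself never crosses this boundary -- only commands do. */
          int fd = open("/dev/dri/card0", O_RDWR);
          if (fd < 0) {
              perror("open /dev/dri/card0");
              return 1;
          }
          if (ioctl(fd, DRM_IOCTL_SUBMIT, &req) < 0)
              perror("ioctl (expected to fail: the ioctl is hypothetical)");
          close(fd);
          return 0;
      }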



      • #83
        Originally posted by bridgman View Post
        re: "watch what AMD is saying about the future of APUs", agd5f does work for AMD (as you suspected) and was the first open source developer to work on the new AMD APU graphics hardware. Alex has a better understanding than most about the future of APUs... he just can't tell you everything yet
        Don't you just love the internet? Where everyone is an expert on everything.



        • #84
          Nice to hear that the current architecture is already there. As I said, it is obvious the libraries should be in userspace; I was talking about the very low-level bits of the driver.

          In any case, I am happy that Mesa and the radeon driver are progressing, so when the time comes they will be ready for APU computing. We need an open OpenCL implementation, and plenty of other pieces, but I believe that by the time we really need them, they will be there.

          On topic: that is why contributing to X doesn't matter much anymore.



          • #85
            Originally posted by smitty3268 View Post
            Don't you just love the internet? Where everyone is an expert on everything.
            I wasn't planning to comment on this quote from Bridgman, but I will, because I am in the mood.

            With all due respect to agd5f, and his wonderful work on the radeon driver, which I have used daily for a year, he is not more knowledgeable about the future of APUs than I am.

            Just because someone gave him a first-generation APU and he wrote code for it doesn't mean he knows the future of APUs. It is like saying that I, or even Linus Torvalds, know the future of x86 CPUs because we code for them...

            The only ones who know are the engineers who design the hardware, and even they might not know the future. If we really want to know the future, we will have to ask Otellini and the like...

            I do not know the future. I only made an educated guess, based on what I know about hardware and software, my experience, and news I gather here and there. That doesn't mean I spread FUD or rumours; what I wrote is very likely to happen. I do not like being mistaken.

            So, the only thing he is more of an expert in than me is Mesa and the graphics stack, and I didn't say otherwise. In fact, I said that I don't know the details of the graphics stack, so why the spiteful comment?



            • #86
              Originally posted by TemplarGR View Post
              With all due respect to agd5f [...] he is not more knowledgeable about the future of APUs than I am. [...] so why the spiteful comment?
              I'm sorry you took it as spiteful; I was chuckling when writing it and meant it to come across in that vein.

              Anyway, I find it perfectly reasonable to assume that, as an AMD employee writing drivers for their hardware, he may have some inside information about future plans for that hardware, certainly for the next couple of years at least. I read what you posted and frankly thought you were assuming a lot of things that aren't necessarily going to happen. They might, but you stated a lot of things as unimpeachable facts that you have no way of knowing, and as soon as someone called you out on it you claimed they had no clue what they were talking about. It reminded me of a recent thread by Quaridarium, where he claimed that OpenCL was the saviour of all mankind, among other things... Hence the chuckle.



              • #87
                The only part of your posting I had a problem with was "If we really want to know the future (AMD APU plans), we will have to ask <some guy at Intel> ..."



                • #88
                  Originally posted by bridgman View Post
                  The only part of your posting I had a problem with was "If we really want to know the future (AMD APU plans), we will have to ask <some guy at Intel> ..."
                  LOL

                  At the time of writing I couldn't remember the name of AMD's CEO, so I wrote "Otellini and the like".

                  But I believe Intel has similar plans for their CPU+GPU combination too; Sandy Bridge confirms this.



                  • #89
                    Originally posted by smitty3268 View Post
                    I'm sorry you took it as spiteful; I was chuckling when writing it [...] Hence the chuckle.

                    It is true he may have inside information, but I don't really believe it amounts to much. Companies tend to be secretive about their future plans.

                    As for not being able to know this stuff: I am not able to, no. I do not work for Intel or AMD, so I have nothing concrete in my hands.

                    But these things take years to develop. Hardware may need 4 to 6 years to be released, and engineers don't just change an architecture radically in the middle of development. Also, when they target something, they make gradual changes with each release. Someone knowledgeable about hardware and software can detect the pattern and work out what they are planning years ahead.

                    For example, it is obvious that they are not planning to move the GPU off the die anytime soon. The only CPU to be released in the near future without a GPU core is the first Bulldozer, and it is confirmed that Bulldozer's second revision will include a GPU too.

                    So, if you know this, and know that Intel at least sees a clear path to 11nm (22nm is guaranteed at this stage), you know that in 6 years the transistors of APUs will be roughly 1/8 the area of those in first-generation APUs (32nm): three full node shrinks, each roughly halving transistor area. This is obvious knowledge, not a prophecy, not inside information. This will happen.

                    Next, if you know that GPGPU is advancing and being taught in schools, you know that if they place those GPU cores inside the CPU, they will have to use them. So you know those cores will be used for computation, instead of being treated like cheap IGPs.

                    Lastly, if you know that code in general doesn't scale well beyond 8 cores, you know that besides various improvements in caches, prefetchers, etc., 8 cores will be the maximum, at least for the desktop (4 modules if you are AMD). Notice that the problem lies with developers, not CPUs: developers can't code well for more than 8 cores, and various independent tests confirm this. Servers could use many more cores, of course, but the desktop is not a server.

                    So if you put all of this together, you can predict that most of the transistors will go to GPU cores in the future, at least on the desktop. If you add some economics to the mix, you arrive at the conclusion that having APUs where 5/6 of the die is GPU, and also having dedicated GPUs, isn't economically viable. Why do R&D on dedicated GPUs when they are almost the same as APUs? Why spend more money and duplicate effort?

                    This is all public knowledge. I may be wrong about all of it, of course; God willing, we will all be here in 6 years to find out. But it is still the most probable outcome, especially if you consider the alternatives:

                    Pure CPUs: at 11nm, you could have, say, 64 Penryn-like cores on die. Would that improve performance? Do you seriously believe that in 6 years code in general will be able to make good use of even 8, or even 4, cores?

                    Pure GPUs: at 11nm they will be, let's say, ~16 times more powerful than now. If they remain behind PCIe, will we be able to feed them? Especially for GPGPU, won't we face latency problems? (The arithmetic behind the figures I'm using is sketched below.)
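
                    For anyone who wants the arithmetic spelled out, here is a small C program that reproduces the numbers above. The 10% serial fraction in the Amdahl's-law part is an assumption chosen for illustration, not a measurement:

                    #include <stdio.h>

                    int main(void)
                    {
                        /* Transistor area scales roughly with the square of the
                         * feature size, so 32nm -> 11nm gives: */
                        double ratio = (11.0 / 32.0) * (11.0 / 32.0);
                        printf("11nm area vs 32nm: %.3f (~1/%.0f)\n",
                               ratio, 1.0 / ratio);

                        /* Amdahl's law behind the ">8 cores doesn't help" claim:
                         * with serial fraction s, speedup on n cores is
                         * 1 / (s + (1 - s) / n). Assume s = 10%. */
                        double s = 0.10;
                        for (int n = 4; n <= 64; n *= 2)
                            printf("%2d cores: %.2fx speedup (ceiling %.0fx)\n",
                                   n, 1.0 / (s + (1.0 - s) / n), 1.0 / s);
                        return 0;
                    }

                    With these assumptions the speedup climbs from about 4.7x at 8 cores to only about 8.8x at 64, which is the diminishing-returns pattern described above.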



                    • #90
                      Originally posted by deanjo View Post
                      Would you want to work with a bunch of juvenile dipshits like that?
                      With a thought to all the things Ballmer and other MS spokespersons burp out from time to time, and all the people and companies still wanting to work with Microsoft, I would say yes. :-P

                      I do think it was a funny joke; I just don't think it was executed that well. Had it been done in a form that didn't make people question the integrity of the git repos, and without abusing root permissions (say, posted as a proposed patch or pull request on a mailing list), I would have had a much easier time with it.

