Why More Companies Don't Contribute To X.Org


  • phoronix
    started a topic Why More Companies Don't Contribute To X.Org

    Phoronix: Why More Companies Don't Contribute To X.Org

    Brought up in the discussion surrounding the RadeonHD driver being vandalized, which wound up just being a prank played by two X.Org developers on one of the former RadeonHD developers, was the question of why more companies don't contribute back to X.Org. Do companies think the X.Org code is too hard? That it's not worth the time? Is it all politics?..

    http://www.phoronix.com/vr.php?view=ODgzMw

  • elanthis
    replied
    Originally posted by smitty3268 View Post
    Are CPU manufacturers really going to want to stick 50 billion extra transistors onto their CPUs, making the chips much more complicated (and more likely to fail, wasting the entire chip)?
    They've had no problem with exponentially increasing transistor counts before now, so why would this be any different?

    Also keep in mind that they already deal with bad transistors on chips just fine. Many of the dual-core CPUs you can buy today are really quad-cores with two cores turned off because they failed verification. I imagine the same is done with graphics chips, where the difference between the $100 part and the $200 part is often just the same chip running at a different frequency with a block of its SPUs disabled because of verification failures after fab.

    Originally posted by smitty3268 View Post
    Or will they stick to a simpler, cheaper chip that is good enough for 95% of people and gives them higher yields, and tell the consumers who really need the extra power to buy $600 graphics cards?
    Nobody (with a brain) buys $600 graphics cards. Or even $300 graphics cards.

    Originally posted by smitty3268 View Post
    Also, your assumption that you can just plug in extra APUs for more power doesn't seem like a good solution to me. Look at CrossFire and SLI: even with 2 GPUs it doesn't always scale very well. Stick in 4 GPUs and watch scaling go way down. Just being APUs won't fix the scaling problem, at least not all the way.
    There are other issues to solve, certainly. Memory bandwidth is one of the big ones, and it isn't getting solved particularly quickly. Throwing more processing into a package that is starved for data, or bottlenecked writing data out, is not going to help; a rough sketch of the arithmetic follows at the end of this post.

    That said, some of those other issues are solved by moving the SPUs onto the CPU. It's going to be a while before the heavy-duty GPUs are integrated, though, due both to the constrained memory bandwidth when sharing with the CPU and to the heat issue. (The heat issue is actually solvable, as I understand it; we just haven't started manufacturing chips using those solutions. 3D circuit layout would let chips be more compact, and hence lose less energy to impedance, while internal micro-ducts would increase the cooling surface area and make cooling more efficient. Not an EE/CE guy, though, so maybe I have that wrong.)
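
    A back-of-the-envelope way to see the starvation problem is a roofline-style bound. This is only a sketch: the peak throughput, memory bandwidth and arithmetic intensities below are made-up illustrative numbers, not the specs of any real part.

        /* Roofline-style estimate: attainable throughput is limited by
         * min(peak compute, memory bandwidth * arithmetic intensity).
         * All figures are illustrative assumptions, not hardware specs. */
        #include <stdio.h>

        int main(void)
        {
            double peak_gflops = 1000.0;  /* assumed peak shader throughput, GFLOP/s */
            double mem_bw_gbs  = 30.0;    /* assumed shared-memory bandwidth, GB/s   */
            double intensity[] = { 0.5, 2.0, 8.0, 32.0 };  /* FLOP per byte moved    */

            for (int i = 0; i < 4; i++) {
                double bw_bound   = mem_bw_gbs * intensity[i];
                double attainable = bw_bound < peak_gflops ? bw_bound : peak_gflops;
                printf("intensity %5.1f FLOP/byte -> %7.1f GFLOP/s (%s-bound)\n",
                       intensity[i], attainable,
                       bw_bound < peak_gflops ? "bandwidth" : "compute");
            }
            return 0;
        }

    With those assumed numbers, any workload doing fewer than about 33 FLOPs per byte moved never reaches the compute peak, which is why piling on more SPUs without more bandwidth buys very little.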



  • smitty3268
    replied
    I agree that APUs will take over the entire low-end market, and probably the middle of the market as well, at least after enough years go by to make it possible. Where I'm not so sure is the high end. It's entirely possible the high-end market could die out, especially if Windows gaming is relegated entirely to console ports. But I think it's large enough to survive.

    Are CPU manufacturers really going to want to stick 50 billion extra transistors onto their CPUs, making the chips much more complicated (and more likely to fail, wasting the entire chip)? Or will they stick to a simpler, cheaper chip that is good enough for 95% of people and gives them higher yields, and tell the consumers who really need the extra power to buy $600 graphics cards? I think this is an open question, and probably something that not even AMD or Intel has figured out yet, or will even attempt to figure out for many years to come.

    Also, your assumption that you can just plug in extra APUs for more power doesn't seem like a good solution to me. Look at CrossFire and SLI: even with 2 GPUs it doesn't always scale very well. Stick in 4 GPUs and watch scaling go way down. Just being APUs won't fix the scaling problem, at least not all the way.
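
    As a rough sketch of why the scaling drops off, here is a plain Amdahl's-law calculation; the 10% non-parallelizable share of per-frame work is an assumption picked for illustration, not a measured CrossFire or SLI figure.

        /* Amdahl's law: speedup(N) = 1 / (s + (1 - s)/N), where s is the
         * fraction of per-frame work that cannot be split across GPUs.
         * s = 0.10 is an illustrative assumption. */
        #include <stdio.h>

        int main(void)
        {
            double s = 0.10;             /* assumed serial fraction per frame */
            int gpus[] = { 1, 2, 4, 8 };

            for (int i = 0; i < 4; i++) {
                int n = gpus[i];
                double speedup = 1.0 / (s + (1.0 - s) / n);
                printf("%d GPU(s): %.2fx speedup, %.0f%% scaling efficiency\n",
                       n, speedup, 100.0 * speedup / n);
            }
            return 0;
        }

    With a 10% serial share, 2 GPUs give about 1.8x but 4 GPUs only about 3.1x, so efficiency falls from roughly 91% to 77%, and being an APU does nothing to change that arithmetic.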



  • Xake
    replied
    Originally posted by deanjo View Post
    Would you want to work with a bunch of juvenile dipshits like that?
    Considering all the things Ballmer and other MS spokespersons burp out from time to time, and all the people/companies still wanting to work with Microsoft, I would say yes. :-P

    I do think it was a funny joke; I just don't think it was executed that well. Had it been done in a form that didn't make people question the integrity of the git repos, and without abusing root permissions (say, posted as a proposed patch or pull request on a mailing list), I would have had a much easier time with it.



  • TemplarGR
    replied
    Originally posted by smitty3268 View Post
    I'm sorry you took it as spiteful, I was chuckling when writing it and meant it to come across in that vein.

    Anyway, I find it perfectly reasonable to assume that as an AMD employee who is writing drivers for their hardware he may have some inside information about future plans for their hardware. Certainly for the next couple of years, at least, and I read what you posted and frankly thought you were assuming a lot of stuff that isn't necessarily going to happen. It might, but you're stating a lot of things as unimpeachable facts that you really have no way of knowing, and as soon as someone called you out on it you claimed they had no clue what they were talking about. It kind of reminded me of a recent thread by Quaridarium, where he claimed that OpenCL was the saviour of all mankind and all sorts of other stuff... Hence the chuckle.

    It is true that he may have inside information, but I don't believe it amounts to much. Companies tend to be secretive about their future plans.

    As for whether I can really know this stuff: I can't. I do not work for Intel or AMD, so I have nothing concrete in my hands.

    But these things take years to develop. Hardware may need 4 to 6 years to be released, and engineers don't radically change an architecture in the middle of development. Also, when they target something, they make gradual changes with each release. Someone knowledgeable about hardware and software can spot the pattern and work out what they are planning years ahead.

    For example, it is obvious that they are not planning to move the GPU off the die anytime soon. The only CPU to be released in the near future without a GPU core is the first Bulldozer, and it is confirmed that Bulldozer's second revision will include an on-die GPU too.

    So if you know this, and you know that Intel at least sees a clear path to 11nm (22nm is guaranteed at this stage), you know that in about 6 years APU transistors will take roughly 1/8 the area they do on the first-generation 32nm APUs, since area scales with the square of the feature size. This is straightforward extrapolation, not a prophecy and not inside information. This will happen.
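
    The arithmetic behind that 1/8 figure, as a small sketch; it assumes ideal area scaling with the square of the feature size, which real processes only approximate, so treat it as an upper bound rather than a roadmap.

        /* Ideal area scaling: a transistor's footprint shrinks with the
         * square of the linear feature size.  Real process nodes fall
         * short of this ideal, so these are best-case numbers. */
        #include <stdio.h>

        int main(void)
        {
            double base = 32.0;                    /* first-generation APU node, nm */
            double nodes[] = { 32.0, 22.0, 11.0 };

            for (int i = 0; i < 3; i++) {
                double density = (base / nodes[i]) * (base / nodes[i]);
                printf("%4.0f nm: ~%.1fx the transistor density of 32 nm\n",
                       nodes[i], density);
            }
            return 0;
        }

    At 11nm that works out to roughly 8.5 times the density of 32nm, which is where the "about 1/8 the area" figure comes from.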

    Next, if you know that GPGPU is advancing and is being taught in schools, you know that if they put GPU cores inside the CPU, those cores will have to be used. So you know they will be used for general computation, not just as cheap IGPs.

    Lastly, if you know that code in general doesn't scale well beyond 8 cores, you know that, aside from various improvements in caches, prefetchers and so on, 8 cores will be the practical maximum on the desktop (4 modules if you are AMD). Note that the problem lies with developers, not CPUs: developers can't write code that scales well past 8 cores, and various independent tests confirm this. Servers could of course use many more cores, but the desktop is not a server.
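
    One way to make the "stops scaling around 8 cores" intuition concrete is a contention-plus-coherency model such as Gunther's Universal Scalability Law. The two coefficients below are picked purely for illustration; they are not taken from any of the tests mentioned above.

        /* Universal Scalability Law:
         *   throughput(N) = N / (1 + a*(N - 1) + b*N*(N - 1))
         * where a models contention (locks, serial sections) and b models
         * coherency traffic between cores.  a and b are illustrative guesses. */
        #include <stdio.h>

        int main(void)
        {
            double a = 0.05;    /* assumed contention coefficient */
            double b = 0.008;   /* assumed coherency coefficient  */

            for (int n = 1; n <= 32; n *= 2) {
                double x = n / (1.0 + a * (n - 1) + b * n * (n - 1));
                printf("%2d cores: relative throughput %.2f\n", n, x);
            }
            return 0;
        }

    With those made-up coefficients the curve peaks around 8 to 16 cores and actually falls by 32, which is the shape of the claim: past a point, the coordination costs grow faster than the extra cores help.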

    So if you put all of this together, you can predict that most of the transistors will go to the GPU cores in the future, at least on the desktop. Add some economics to the mix and you arrive at the conclusion that shipping APUs where 5/6 of the die is GPU alongside dedicated GPUs isn't economically viable. Why do R&D on dedicated GPUs when they are almost the same as APUs? Why spend more money and duplicate effort?

    This is all public knowledge. I may be wrong about all of it, of course; God willing, we will all be here to find out in 6 years. But it is still the most probable outcome, especially if you consider the alternatives:

    Pure CPUs: at 11nm you could have, say, 64 Penryn-like cores on a die. Would that improve performance? Do you seriously believe that in 6 years code in general will make good use of even 8, or even 4, cores?

    Pure GPUs: at 11nm they will be, let's say, ~16 times more powerful than now. If they stay on the other side of PCIe, will we be able to feed them? Especially for GPGPU, won't we face latency problems?
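
    To put the "can we feed them" worry in numbers, here is a rough offload-cost sketch; the bus bandwidth, launch latency, data size and kernel time are illustrative assumptions, not measurements.

        /* Rough offload model: total = transfer in + launch latency +
         * compute + transfer out.  A faster GPU only shrinks the compute
         * term; the PCIe terms stay put.  All numbers are assumptions. */
        #include <stdio.h>

        int main(void)
        {
            double data_mb    = 64.0;   /* working set shipped each way, MB       */
            double pcie_gbs   = 6.0;    /* assumed effective PCIe bandwidth, GB/s */
            double launch_us  = 20.0;   /* assumed dispatch latency, microseconds */
            double compute_ms = 50.0;   /* kernel time on today's GPU, ms         */
            double speedup[]  = { 1.0, 4.0, 16.0 };

            double xfer_ms = 2.0 * (data_mb / 1024.0) / pcie_gbs * 1000.0;

            for (int i = 0; i < 3; i++) {
                double total = xfer_ms + launch_us / 1000.0 + compute_ms / speedup[i];
                printf("GPU %4.0fx faster: %6.2f ms total, %3.0f%% spent on the bus\n",
                       speedup[i], total, 100.0 * xfer_ms / total);
            }
            return 0;
        }

    Under those assumptions a GPU that is 16 times faster finishes the whole job only about 3 times sooner, because the PCIe copies and launch latency do not shrink with it; removing exactly that overhead is the argument for putting the GPU on the CPU die.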



  • TemplarGR
    replied
    Originally posted by bridgman View Post
    The only part of your posting I had a problem with was "If we really want to know the future (AMD APU plans), we will have to ask <some guy at Intel> ..."
    LOL

    At the time of writing I couldn't remember the name of AMD's CEO, so I wrote "Otellini and the like".

    But I believe Intel has similar plans for its CPU+GPU combination too; Sandy Bridge confirms this.



  • bridgman
    replied
    The only part of your posting I had a problem with was "If we really want to know the future (AMD APU plans), we will have to ask <some guy at Intel> ..."



  • smitty3268
    replied
    Originally posted by TemplarGR View Post
    I wasn't planning to comment on this quote from Bridgman, but I will, because I am in the mood.

    With all due respect to agd5f and his wonderful work on the radeon driver, which I have used daily for a year, he is not more knowledgeable about the future of APUs than I am.

    Just because someone gave him a first-generation APU and he wrote code for it doesn't mean he knows the future of APUs. It is like saying that I, or even Linus Torvalds, know the future of x86 CPUs because we code for them...

    The only ones who know are the engineers who design the hardware, and even they might not know the future. If we really want to know the future, we will have to ask Otellini and the like...

    I do not know the future. I only made an educated guess based on what I know about hardware and software, my experience, and news I gather here and there. That doesn't mean I spread FUD or rumours; what I wrote is very likely to happen. I do not like being mistaken.

    So the only area where he is more of an expert than me is Mesa and the graphics stack, and I didn't say otherwise. In fact, I said that I don't know the details of the graphics stack, so why the spiteful comment?
    I'm sorry you took it as spiteful, I was chuckling when writing it and meant it to come across in that vein.

    Anyway, I find it perfectly reasonable to assume that as an AMD employee who is writing drivers for their hardware he may have some inside information about future plans for their hardware. Certainly for the next couple of years, at least, and I read what you posted and frankly thought you were assuming a lot of stuff that isn't necessarily going to happen. It might, but you're stating a lot of things as unimpeachable facts that you really have no way of knowing, and as soon as someone called you out on it you claimed they had no clue what they were talking about. It kind of reminded me of a recent thread by Quaridarium, where he claimed that OpenCL was the saviour of all mankind and all sorts of other stuff... Hence the chuckle.



  • TemplarGR
    replied
    Originally posted by smitty3268 View Post
    Don't you just love the internet? Where everyone is an expert on everything.
    I wasn't planning to comment on this quote from Bridgman, but I will, because I am in the mood.

    With all due respect to agd5f and his wonderful work on the radeon driver, which I have used daily for a year, he is not more knowledgeable about the future of APUs than I am.

    Just because someone gave him a first-generation APU and he wrote code for it doesn't mean he knows the future of APUs. It is like saying that I, or even Linus Torvalds, know the future of x86 CPUs because we code for them...

    The only ones who know are the engineers who design the hardware, and even they might not know the future. If we really want to know the future, we will have to ask Otellini and the like...

    I do not know the future. I only made an educated guess based on what I know about hardware and software, my experience, and news I gather here and there. That doesn't mean I spread FUD or rumours; what I wrote is very likely to happen. I do not like being mistaken.

    So the only area where he is more of an expert than me is Mesa and the graphics stack, and I didn't say otherwise. In fact, I said that I don't know the details of the graphics stack, so why the spiteful comment?



  • TemplarGR
    replied
    Nice to hear that the current architecture is already there. As I said, it is obvious that libraries should be in userspace; I was talking about the very low-level bits of the driver.

    In any case, I am happy that Mesa and the radeon driver are progressing, so that when the time comes they will be ready for APU computing. We need an open OpenCL implementation and plenty of other pieces, but I believe that by the time we really need them, they will be there.
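
    As a pointer to what "an open OpenCL implementation" has to provide at minimum, here is a small sketch of the host-side entry points an application hits first. It uses only the standard OpenCL 1.x C API; nothing vendor-specific or Mesa-specific is assumed.

        /* Minimal OpenCL host-side probe: enumerate the first platform and
         * its devices.  An open implementation (ICD loader plus driver)
         * has to answer these calls before any GPGPU work can be queued. */
        #include <stdio.h>
        #include <CL/cl.h>

        int main(void)
        {
            cl_platform_id platform;
            cl_uint num_platforms = 0;
            if (clGetPlatformIDs(1, &platform, &num_platforms) != CL_SUCCESS ||
                num_platforms == 0) {
                fprintf(stderr, "no OpenCL platform found\n");
                return 1;
            }

            cl_device_id devices[8];
            cl_uint num_devices = 0;
            if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8,
                               devices, &num_devices) != CL_SUCCESS) {
                fprintf(stderr, "no OpenCL devices found\n");
                return 1;
            }

            for (cl_uint i = 0; i < num_devices; i++) {
                char name[256];
                clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
                printf("device %u: %s\n", i, name);
            }
            return 0;
        }

    Build with something like cc probe.c -lOpenCL; on a stack without an OpenCL driver it simply reports that no platform was found, which is precisely the gap being described.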

    Back on topic: that is why contributing to X doesn't matter much anymore.

