Open ATI Driver More Popular Than Catalyst


  • #46
    It was never a program error; Nvidia never had issues with it. Even when a tool is not often used, if it exposes a problem it should be fixed - which it eventually was, but really late. It definitely shows ATI's wrong attitude of primarily fixing only the problems that affect heavily used apps. Basically, one rendering problem is already one too many.

    Comment


    • #47
      Originally posted by Kano View Post
      It was never a program error; Nvidia never had issues with it. Even when a tool is not often used, if it exposes a problem it should be fixed - which it eventually was, but really late. It definitely shows ATI's wrong attitude of primarily fixing only the problems that affect heavily used apps. Basically, one rendering problem is already one too many.
      All chipsets have known rendering errata. It's already too late to hope that there are no rendering problems.

      Comment


      • #48
        But it was not a chipset error when it rendered correctly before; the problem was in the shared OpenGL code. What I wanted to say is that ATI just ignores errors for as long as possible. Some of them time simply fixes for them - nowadays nobody cares anymore whether fglrx supports modelines for driving a CRT at 100 Hz @ 1152x864 by default, simply because TFTs are much more common.

        A good example is XvBA. Sure, it was done for the famous embedded market, and it's not hard to guess that most of those devices were based on the HD 3200 chipset (I even know of Samsung TVs with an AMD CPU + HD 3200 onboard), but everybody tries to claim that those devices are so different and that support was never planned for end users. The fact is that there is a partially working xvba-video wrapper which shows LOTS of problems, yet new drivers do not really fix issues with XvBA - they even render it completely unusable, as happened from 9-10 to 9-11. Then look at Nvidia's changelogs: there are usually VDPAU improvements listed, because they care about problems. fglrx cannot even do Xv correctly since the hardware solution that older chips had was removed; really interesting is that the radeon OSS driver can handle it.

        I think you asked a bit earlier who is most likely to use the vesa driver. That's very easy to answer too: X Server 1.7+ users with ATI HD 5 cards are definitely among them. It's as if ATI's fglrx team is sleeping all day, playing games on Windows or whatever, putting the least possible effort into Linux; it is highly unlikely that they really use the driver they create in their free time. The OSS devs, however, seem to actually care more about end users, which I really consider a good move. But there is still too much missing, and those who buy ATI hardware now to use it with the OSS drivers in the future have basically lost already. The UVD(2) parts will most likely never be programmable, and the generic way of using shaders to do a similar job will never work on low-end cards. Even an ATI 3450 had problems using OpenGL output with fglrx for HD movies - too slow, it seems. So will the goal be to use a high-end card to emulate a block that could otherwise do the same job with very low energy requirements? Sorry, but something is completely wrong here.
        Last edited by Kano; 12-05-2009, 10:05 PM.

        Comment


        • #49
          Originally posted by Kano View Post
          But it was not a chipset error when it rendered correctly before; the problem was in the shared OpenGL code. What I wanted to say is that ATI just ignores errors for as long as possible. Some of them time simply fixes for them - nowadays nobody cares anymore whether fglrx supports modelines for driving a CRT at 100 Hz @ 1152x864 by default, simply because TFTs are much more common.

          A good example is XvBA. Sure, it was done for the famous embedded market, and it's not hard to guess that most of those devices were based on the HD 3200 chipset (I even know of Samsung TVs with an AMD CPU + HD 3200 onboard), but everybody tries to claim that those devices are so different and that support was never planned for end users. The fact is that there is a partially working xvba-video wrapper which shows LOTS of problems, yet new drivers do not really fix issues with XvBA - they even render it completely unusable, as happened from 9-10 to 9-11. Then look at Nvidia's changelogs: there are usually VDPAU improvements listed, because they care about problems. fglrx cannot even do Xv correctly since the hardware solution that older chips had was removed; really interesting is that the radeon OSS driver can handle it.

          I think you asked a bit earlier who is most likely to use the vesa driver. That's very easy to answer too: X Server 1.7+ users with ATI HD 5 cards are definitely among them. It's as if ATI's fglrx team is sleeping all day, playing games on Windows or whatever, putting the least possible effort into Linux; it is highly unlikely that they really use the driver they create in their free time. The OSS devs, however, seem to actually care more about end users, which I really consider a good move. But there is still too much missing, and those who buy ATI hardware now to use it with the OSS drivers in the future have basically lost already. The UVD(2) parts will most likely never be programmable, and the generic way of using shaders to do a similar job will never work on low-end cards. Even an ATI 3450 had problems using OpenGL output with fglrx for HD movies - too slow, it seems. So will the goal be to use a high-end card to emulate a block that could otherwise do the same job with very low energy requirements? Sorry, but something is completely wrong here.
          Kano, every driver has its problems and bugs that go unresolved for long periods of time... Hell, I still have an NVIDIA driver bug that's been open for about four years now concerning CoolBits and Xinerama.
          Michael Larabel
          http://www.michaellarabel.com/

          Comment


          • #50
            Originally posted by mirv View Post
            Hopefully the increased development of drivers (and I'll refer mainly to the 3D stack here) will benefit game development under Linux.
            I'm hoping for that too, but currently shaders are not looking great in Mesa 3D. Somebody needs to raise the bar, break the GLSL 1.20 barrier, and provide an LLVM backend for shader compilation. Speaking of 3D FOSS, we are still in the stone age and time is running out. Another scary thing is that there was no proper memory manager for a long time; it's good to see stable TTM and GEM nowadays.
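
            For reference, a quick way to see which GLSL version a driver actually advertises is to ask a live context for it. This is only a minimal sketch (the file name glsl_check.c is just an example), assuming freeglut and the GL development headers are installed; build with: gcc glsl_check.c -o glsl_check -lglut -lGL

            /* Minimal sketch: print the GL and GLSL versions the current driver exposes. */
            #include <stdio.h>
            #include <GL/glut.h>

            #ifndef GL_SHADING_LANGUAGE_VERSION
            #define GL_SHADING_LANGUAGE_VERSION 0x8B8C  /* GL 2.0 enum, for older headers */
            #endif

            int main(int argc, char **argv)
            {
                glutInit(&argc, argv);
                /* glutCreateWindow makes a GL context current, which glGetString needs */
                glutCreateWindow("glsl-version-check");
                printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
                printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
                printf("GLSL:        %s\n", (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));
                return 0;
            }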
            Last edited by hax0r; 12-05-2009, 11:38 PM.

            Comment


            • #51
              For a long time most of the effort went into getting the stack running over a decent memory manager. I don't think that work is quite finished yet. Once it is, though, I think you'll see faster progress in other areas.

              Note that LLVM isn't likely to be a happening thing for low level shader compilation (below IL/TGSI) for quite a while -- it doesn't really handle VLIW at all in its current form. My sense was that shader utilization (compiler efficiency) was not the bottleneck anyways - reducing draw command overhead and increasing parallelism / pipelining between GPU and driver execution were the big things AFAIK.
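
              To make the draw-command-overhead point concrete, here is a hedged, hypothetical sketch - the helper names are made up and nothing below comes from Mesa or fglrx. It only shows why submitting the same geometry in fewer calls costs the driver less CPU time: every draw call pays a fixed validation/submission cost.

              /* Hypothetical sketch, assuming a current legacy-GL context and a bound
                 vertex array that already holds num_quads * 4 vertices. */
              #include <GL/gl.h>

              /* Naive path: one draw call per quad, so the fixed per-call driver
                 overhead grows linearly with the number of quads. */
              void draw_quads_naive(int num_quads)
              {
                  for (int i = 0; i < num_quads; i++)
                      glDrawArrays(GL_QUADS, i * 4, 4);
              }

              /* Batched path: the same geometry in one submission, so the
                 driver-side overhead is paid once instead of num_quads times. */
              void draw_quads_batched(int num_quads)
              {
                  glDrawArrays(GL_QUADS, 0, num_quads * 4);
              }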

              Comment


              • #52
                Originally posted by Michael View Post
                Kano, every driver has its problems and bugs that go unresolved for long periods of time... Hell, I still have an NVIDIA driver bug that's been open for about four years now concerning CoolBits and Xinerama.
                Did it ever work right? Regressions are MUCH more important to fix than things that have never worked in the first place.

                Comment


                • #53
                  Originally posted by bridgman View Post
                  Note that LLVM isn't likely to be a happening thing for low level shader compilation (below IL/TGSI) for quite a while -- it doesn't really handle VLIW at all in its current form. My sense was that shader utilization (compiler efficiency) was not the bottleneck anyways - reducing draw command overhead and increasing parallelism / pipelining between GPU and driver execution were the big things AFAIK.
                  Is VLIW support something Itanium needs? Just wondering if there's any chance of someone else helping out with it or if it will need to come from the radeon developers to make it happen.

                  It probably doesn't make a whole lot of sense to drop the current compiler while you are still able to share it with the non-gallium driver, but I do hope this gets worked on eventually when gallium gets more mature. I really don't think the custom code in the drivers is going to be able to compete with the optimizations created by a dedicated compiler team.

                  Comment


                  • #54
                    Originally posted by frantaylor View Post
                    Did it ever work right? Regressions are MUCH more important to fix than things that have never worked in the first place.
                    All things being equal, yes. But on the other hand, a regression that only affects 10 people using an unmaintained, rarely used app is a lot less important than something that has never worked right but affects millions of people on a daily basis. Which is exactly the point AMD was making about that bug, and which deanjo seems to disagree with so strongly.

                    Let's be honest: we all know that fglrx is buggier than the current Nvidia drivers. But it isn't unusable, or the devil's spawn either. Hopefully the day will soon come when the OSS drivers have decent OGL3 support and we can all forget about fglrx for good.

                    Comment


                    • #55
                      There is no way to determine the impact of a regression

                      Originally posted by smitty3268 View Post
                      a regression that only affects 10 people using an unmaintained rarely used app
                      This is NOT my experience with regressions. I test software for a living, and I find odd, obscure regressions all the time. Invariably they end up having much more impact than you might think.

                      Here is a common scenario: A regression is found in a "rarely used app" and a bug report is filed. The developers say "this is a rarely used app" and they mark the bug as "we are not going to fix it". Others come across the same bug, find the bug report, see that the developers are indifferent, and they either code around it or they just drop the buggy piece of code and switch to something else. The developers have no idea that this has happened. Since it is free software, there are no sales to affect and no salesmen to beat up the developers to fix it.

                      Really, the only sane approach is to make a serious effort to fix every regression, no matter how minor it seems.

                      Comment


                      • #56
                        Originally posted by frantaylor View Post
                        This is NOT my experience with regressions. I test software for a living, and I find odd, obscure regressions all the time. Invariably they end up having much more impact than you might think.

                        Here is a common scenario: A regression is found in a "rarely used app" and a bug report is filed. The developers say "this is a rarely used app" and they mark the bug as "we are not going to fix it". Others come across the same bug, find the bug report, see that the developers are indifferent, and they either code around it or they just drop the buggy piece of code and switch to something else. The developers have no idea that this has happened. Since it is free software, there are no sales to affect and no salesmen to beat up the developers to fix it.

                        Really, the only sane approach is to make a serious effort to fix every regression, no matter how minor it seems.
                        I'm certainly not saying regressions are unimportant, quite the opposite. But any sane development team is going to prioritize the bug reports they get, and if you think that regressions should always automatically go on top no matter what the situation, then I would have to disagree.

                        Comment


                        • #57
                          Originally posted by smitty3268 View Post
                          It probably doesn't make a whole lot of sense to drop the current compiler while you are still able to share it with the non-gallium driver, but I do hope this gets worked on eventually when gallium gets more mature. I really don't think the custom code in the drivers is going to be able to compete with the optimizations created by a dedicated compiler team.
                          Other way around. r300g borrows the compiler from classic r300.

                          r300/r500 is *not* a good venue for LLVM. r600 is a bit more amenable. And for the record, I really don't think the custom code in LLVM is going to be able to compete with the optimizations created by fglrx's team.

                          Comment


                          • #58
                            Originally posted by MostAwesomeDude View Post
                            r300/r500 is *not* a good venue for LLVM. r600 is a bit more amenable.
                            Can you elaborate why? Do they have too much fixed-function hardware and not enough shader power, or what?

                            Originally posted by MostAwesomeDude View Post
                            And for the record, I really don't think the custom code in LLVM is going to be able to compete with the optimizations created by fglrx's team.
                            Sorry, I meant the OSS drivers. I don't think LLVM will be able to compete with fglrx either, I just think it will be closer.

                            Comment


                            • #59
                              Originally posted by MostAwesomeDude View Post
                              What really interests me, besides the strangely loyal radeonhd people, are the people using VESA. Which chipsets are they using? Surely there's something we can do to ease their pain.
                              On my PC... in some situations VESA runs faster than fglrx...

                              Comment


                              • #60
                                Originally posted by Kano View Post
                                Well, ATI likes to fix rendering issues only in well-known apps. When a small app like gl2benchmark clearly shows an error, it can take 11 months to fix. But id Tech 5 should be too major an engine to ignore errors in.
                                id Tech 5 is a bullshit, backward engine made only for bullshit Xbox 360/PlayStation 3 hardware with just 256 MB/512 MB of RAM.

                                id Tech 5 is the worst engine I have ever read about...

                                ArmA 2, for example, is a killer engine for big workstations - 12 cores, 32 GB of RAM, graphics cards with 2 GB of VRAM -

                                and you need two 5870s to play it!

                                For id Tech 5 you need a backward graphics card with no power, and you need no RAM and no... nothing????

                                id Tech 5 is bullshit!

                                id Tech 4 has more features than id Tech 5!

                                Day and night! id Tech 5 can only handle daytime!

                                Comment
