AMD's Hiring Open-Source Graphics Developers Still


  • Originally posted by V!NCENT View Post
    RAM is cheap? RAM > CPU. Right now 4GB is already the standard. In two years you can bet your ass it will be 8GB (double that), and thus I made the right configuration. My computer can easily do ten years before crumbling under the weight of newer software. So by buying a $1000 PC I avoid having to do an upgrade every three years. $1000/10 = $100 a year. $100 * 3 years = $300. For $300 you can't buy a reasonable computer.
    Originally posted by V!NCENT View Post
    60 dollars at that time for never having to use swap in ten years. You see the point?
    Originally posted by V!NCENT View Post
    Yes it is slower. Your point?
    Your logic has some flaws. First, the real bottleneck is the hard disk. NAND flash is still low quality with a poor price/performance ratio, so there is currently no way to cut costs there. My 10-year-old Athlon 3200 with 1GB of RAM was sufficient over that whole period, the quad-core era included.

    But it wasn't the point of discussion.

    The point is, if we project the current state of the AMD open-source driver onto RAM chips (thanks to von Neumann, we don't need a driver for those), you would have 2 GiB usable out of 8 GiB installed. You invest 60 dollars and get a 15-dollar effect, while consuming the full "60 dollars" of electricity, producing the full "60 dollars" of heat, and getting only a fraction of the "60 dollars" of features. All thanks to the driver, which AMD happily writes for Windows for free.
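
    To make the back-of-the-envelope math concrete, here is a minimal Python sketch of both calculations. Every number in it is a figure quoted in this thread, not a measurement:

    Code:
    # V!NCENT's amortization claim: a $1000 PC kept for ten years.
    pc_price, lifetime_years = 1000.0, 10
    per_year = pc_price / lifetime_years  # $100 a year
    print(f"${per_year:.0f}/year, ${per_year * 3:.0f} per 3-year cycle")

    # The RAM analogy above: you pay full price, but the "driver" exposes 2 GiB of 8.
    ram_price = 60.0         # dollars paid
    usable_fraction = 2 / 8  # 2 GiB usable out of 8 GiB installed
    print(f"Paid ${ram_price:.0f}, effective value ${ram_price * usable_fraction:.0f}")
    # Electricity and heat still run at the full $60 level; only the driver limits the rest.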

    Originally posted by V!NCENT View Post
    As for the AMD card: as time goes by, performance of the 3D driver goes up. That means I'm pretty much in the clear as well.
    Not always the case. Low-end cards exist even now, and they cannot beat 10-year-old high-end cards. Nothing out of the ordinary though.

    Originally posted by V!NCENT View Post
    I lease my card? I bought it. EULA? -> GPL. LOL, you failed very hard. Given that you've marooned yourself on Mars, I am not going to upgrade you now.
    You lease your card, if that's news to you. The hardware design is not covered by the GPL; it belongs to AMD/Nvidia. You only bought the ability to use the card.

    Originally posted by V!NCENT View Post
    Errrr, Tomb Raider Anniversary (look up the release date and graphics) manages to run at 30fps with very nice graphics under Wine. Sometimes there is a very tiny shader texture somewhere that is not updated, but that's so rare that I don't care.
    I don't have the game, but every time I used SuperTuxKart the bottleneck was the CPU, whereas on the NVIDIA blob the bottleneck was the GPU and the CPU idled.


    Originally posted by V!NCENT View Post
    I'll make a screencast if you want, showing it is perfectly working. Tell me how to make it crash so I can prove my point that it doesn't.
    I don't need your screencast. I used git versions of the open-source driver (and the related components: libdrm, kernel, Mesa, etc.) with a 4770 for more than a year, and I had kernel panics, low performance and missing features. Hence I use NVIDIA now.

    Comment


    • Originally posted by mirv View Post
      Strange comment considering that I see lots of "amd drivers are bad" without actually stating what is supposedly bad.
      Just compare the open-source driver to the NVIDIA blob, or even to Catalyst.

      Originally posted by mirv View Post
      Also interesting is your rather...poor...view of free hackers (who do an awful lot more than you seem to think). And by the way - one of those corporations that has made quite an investment into Linux was, and is, AMD.
      A poor view of free hackers who themselves state that the NVIDIA blob currently has the best graphics implementation on Linux of them all? It's the company that's poor, not the hackers. Ideally you'd have hackers hacking on new features, not doing the company's job.

      Comment


      • Originally posted by crazycheese View Post
        Just compare the open-source driver to the NVIDIA blob, or even to Catalyst.

        A poor view of free hackers who themselves state that the NVIDIA blob currently has the best graphics implementation on Linux of them all? It's the company that's poor, not the hackers. Ideally you'd have hackers hacking on new features, not doing the company's job.
        I think that was already done earlier in the thread. Not seeing anything bad there, unless you mean lack of features. The open source drivers are not intended to compete with the blob, by the way, but rather are supposed to focus on areas that the blobs don't or can't. I see none of this as bad. A working driver is generally a good thing, actually.
        You'll have to back that statement up too.
        Also, the company (and I assume you're referring to AMD) is doing its job - the Catalyst drivers. They're complementing that with the open source stuff, and indeed are doing more than was asked of them. But if you're going to whinge that AMD should do more with the open source graphics drivers, you have to be downright hateful of nvidia for doing absolutely nothing to help open source graphics.

        Comment


        • Originally posted by mirv View Post
          But if you're going to whinge that AMD should do more with the open source graphics drivers, you have to be downright hateful of nvidia for doing absolutely nothing to help open source graphics.
          Believe me, I am.

          Comment


          • Originally posted by crazycheese View Post
            Just compare the open-source driver to the NVIDIA blob, or even to Catalyst.
            I do that daily and prefer the OSS driver. That's because stability and usability are more important to me than raw FPS and advanced features that no software under Linux can leverage.

            You are living in ancient history. The OSS drivers are quite good nowadays. If 95% of the functionality is not enough for you, then by all means use something else, but don't mislead people with your tirades about how nothing works. Things work very well. I've just finished Prey, and I'm going to replay Quake 4, using the open drivers. You can continue telling me how it doesn't work, I'll keep using them.

            Good luck changing resolution or hotplugging monitors using your nvidia card. As long as something is not needed for Windows, you don't get it in your "linux" driver either.
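
            In case it's not clear what I mean: on the open drivers all of this goes through RandR. Here is a minimal Python sketch that just shells out to the standard xrandr tool; the output name HDMI-1 and the 1920x1080 mode are assumptions, so check what plain xrandr reports on your box first:

            Code:
            # Minimal sketch: list outputs, then switch a mode through RandR by
            # calling the standard xrandr CLI. "HDMI-1" and 1920x1080 are
            # assumptions; run plain xrandr first to see what is really there.
            import subprocess

            subprocess.run(["xrandr", "--query"], check=True)  # outputs and modes
            subprocess.run(["xrandr", "--output", "HDMI-1", "--mode", "1920x1080"],
                           check=True)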

            Comment


            • Originally posted by pingufunkybeat View Post
              You can continue telling me how it doesn't work, I'll keep using them.
              Good luck changing resolution or hotplugging monitors using your nvidia card. As long as something is not needed for Windows, you don't get it in your "linux" driver either.
              I was not telling you they don't work; I was telling you they don't expose even 20% of the card's features. If I had an IGP only, I would use them. But at the current development rate they will be usable for me in 10 years, and I won't buy a last-gen card just to make the open-source driver work anymore, nor will I believe anything AMD marketing says. Lesson learned.

              Comment


              • Originally posted by pingufunkybeat View Post
                But they're not documenting anything, and AMD is documenting 99% of the stuff. Surely, that is a huge difference.
                Not really. They may have opened up the stuff that's interesting or relevant to the way you use your GPU. But what about the guy who doesn't care about gaming but just wants to use his Brazos-powered HTPC or netbook to watch some HD content at home or when commuting or traveling?

                These APUs are simply not powerful enough to decode HD content without using the UVD block. So they are forced to use the blob with all its deficiencies.

                If it were the other way around and AMD documented UVD but not the 3D engine, would that be okay with you? I think not. I think we'd see a lot of people bitching about why AMD has not opened up the 3D engine.

                Different people, different needs.

                Comment


                • Originally posted by monraaf View Post
                  These APUs are simply not powerful enough to decode HD content without using the UVD block.
                  Just curious, is this just a guess or do you have anecdotal evidence? I haven't had a chance to run a multithreaded H.264 decoder on the most common APU configuration today (a dual-core Bobcat) and don't know anyone else who has tried.

                  On the GPU side, AFAIK the code to offload decode work to shaders is just being implemented now so I don't think we know the answer yet.
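
                  If anyone with the hardware wants to try, something along these lines would settle the CPU question. This is only a rough sketch: it assumes ffmpeg with a threaded H.264 decoder is installed, and sample.mkv is a placeholder for whatever 1080p clip you have handy:

                  Code:
                  # Rough benchmark sketch: decode an H.264 clip with two threads
                  # (one per Bobcat core), discard the output, and time it. If the
                  # decode finishes faster than the clip's duration, real-time
                  # playback is plausible.
                  import subprocess, time

                  clip = "sample.mkv"  # placeholder 1080p H.264 clip
                  start = time.time()
                  subprocess.run(["ffmpeg", "-threads", "2", "-i", clip,
                                  "-f", "null", "-"], check=True)
                  print(f"decoded in {time.time() - start:.1f}s")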

                  Comment


                  • Originally posted by bridgman View Post
                    Just curious, is this just a guess or do you have anecdotal evidence? I haven't had a chance to run a multithreaded H.264 decoder on the most common APU configuration today (a dual-core Bobcat) and don't know anyone else who has tried.
                    Well, that's what I read on other tech sites focusing on Windows. If they are powerful enough and don't really need UVD, then why bother wasting silicon die area in the first place?

                    Comment


                    • Originally posted by crazycheese View Post
                      Your logic has some flaws. First, the real bottleneck is the hard disk. NAND flash is still low quality with a poor price/performance ratio, so there is currently no way to cut costs there. My 10-year-old Athlon 3200 with 1GB of RAM was sufficient over that whole period, the quad-core era included.
                      Hah! I have an AMD 2800, 1GB of Corsair RAM with perfect timings, and a 9800 Pro. You are simply lying -.- It sucks. But now, with the open source drivers, it can accelerate full HD MPEG-4. Take that! Even the blob doesn't work there anymore.

                      But it wasn't the point of discussion.
                      Sadly, moronic comments from you are the point, and they are in desperate need of being shot down.

                      The point is, if we project the current state of the AMD open-source driver onto RAM chips (thanks to von Neumann, we don't need a driver for those)
                      Your OS doesn't need to interface with your x86 CPU and its registers directly? Seriously, what are you running? No wonder you don't get it, with all this closed source crap. Hmpf...

                      you would have 2 GiB usable out of 8 GiB installed. You invest 60 dollars and get a 15-dollar effect, while consuming the full "60 dollars" of electricity, producing the full "60 dollars" of heat, and getting only a fraction of the "60 dollars" of features. All thanks to the driver, which AMD happily writes for Windows for free.
                      Oh jesus, nVidia produces cards that suck the sun's energy directly. It's a wonder that you can't cook an egg on one! BTW the Radeon card runs at 100% load. It's just that what it is running at 100% load is not as efficient as the blob.

                      Not always the case. Low-end cards exist even now, and they cannot beat 10-year-old high-end cards. Nothing out of the ordinary though.
                      No shit. Why do you think I have not bought a low-end card? It's a great investment to have a decent card with enough power. And the blob isn't creating any speed advantage with vsync in KDE.

                      You lease your card, if that's news to you. The hardware design is not covered by the GPL; it belongs to AMD/Nvidia. You only bought the ability to use the card.
                      What on earth. There is no law possibly enforceable that can make AMD take back my card, especially because it isn't even manufactured and sold by AMD. I have no contract. I paid for ownership, and if they were to take it back I would call the cops, and they would be arrested for theft.

                      I don't need your screencast. I used git versions of the open-source driver (and the related components: libdrm, kernel, Mesa, etc.) with a 4770 for more than a year, and I had kernel panics, low performance and missing features. Hence I use NVIDIA now.
                      Ah so you fucked up and went to nVidia for their blob.

                      Comment


                      • Originally posted by crazycheese View Post
                        I don't need your screencast. I used git versions of the open-source driver (and the related components: libdrm, kernel, Mesa, etc.) with a 4770 for more than a year, and I had kernel panics, low performance and missing features. Hence I use NVIDIA now.
                        PS: Why didn't you use Fedora if you apparently can't make a proper Gentoo install? I know why I'm running Fedora...

                        Comment


                        • Originally posted by bridgman View Post
                          I haven't had a chance to run a multithreaded H.264 decoder on the most common APU configuration today (a dual-core Bobcat) and don't know anyone else who has tried.
                          The answer is simple to get: take a single core of a modern Phenom II or Core i7 and overclock it to 3.2 GHz or a little more.

                          And the answer is no.. *G*

                          Two points behind that no: 1) multicore scaling is only 90% at best; 2) the Bobcat CPU has less cache overall and has to share its RAM with the GPU too.
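
                          Roughly, the single-core proxy works out like this. A sketch with assumed numbers: the 90% scaling figure is from the point above, 1.6 GHz is the dual-core E-350 (Zacate) clock, and equal per-clock throughput versus a Phenom II core is a generous assumption in Bobcat's favor:

                          Code:
                          # Sketch of the single-core proxy argument (assumed numbers).
                          bobcat_ghz, cores, scaling = 1.6, 2, 0.90

                          effective = bobcat_ghz * cores * scaling  # ~2.88 GHz
                          print(f"two Bobcat cores ~ {effective:.2f} GHz of one big core")
                          # So if one Phenom II core at 3.2 GHz can't decode the clip
                          # in real time, the dual-core Bobcat (less cache, RAM shared
                          # with the GPU) can't either.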



                          Originally posted by bridgman View Post
                          On the GPU side, AFAIK the code to offload decode work to shaders is just being implemented now so I don't think we know the answer yet.
                          With only 80 shader cores and less memory bandwidth than an HD 2900, the answer is no.
                          But yes, it may still be much faster than the CPU solution.

                          I think the reference implementation of a shader-based solution on the R600 GPU architecture is the HD 2900...

                          ...and I don't know how many shaders are in use in that solution.

                          Comment


                          • Originally posted by crazycheese View Post
                            I was not telling you they don't work; I was telling you they don't expose even 20% of the card's features. If I had an IGP only, I would use them. But at the current development rate they will be usable for me in 10 years, and I won't buy a last-gen card just to make the open-source driver work anymore, nor will I believe anything AMD marketing says. Lesson learned.
                            You want top 3D performance for games and GPU video playback working right now under Linux? Fine, the NVIDIA binary is effectively a very good choice for that. But consider that not all users have the same needs or wishes as you. Many are happy enough with a working desktop and CPU video playback, if they have a fast enough CPU.

                            But please stop pretending you know anything about the open source driver; you're just clueless. Less than a year ago r600g wasn't doing much. As of today it is capable of running many games with decent performance. So in less than a year we have made huge progress on support. You might think that's slow; well, maybe it is, but consider the number of people working on open source versus the number working on closed source.

                            And no, neither AMD nor NVIDIA sees Linux as a true market outside of workstation and GPGPU. So don't complain to them that your GPU is not supported under Linux; it's just not a big market to them. If you really want more Linux support, complain to ASUS, ABIT, DIAMOND, ... i.e. the companies you actually buy your GPU from.

                            And we target way more than 20%; our aim is to support everything we can.

                            Comment


                            • Originally posted by Qaridarium View Post
                              With only 80 shader cores and less memory bandwidth than an HD 2900, the answer is no.
                              But yes, it may still be much faster than the CPU solution.

                              I think the reference implementation of a shader-based solution on the R600 GPU architecture is the HD 2900...

                              ...and I don't know how many shaders are in use in that solution.
                              What reference implementation are you talking about? An HD 2900 should have ample bandwidth and shader power for decode. The Gallium decode work is just being implemented now, and Christian is working on an RV710, which is the same level as, but a generation behind, the GPU in the Ontario APUs, and he is getting good results.

                              Comment


                              • Also, note that whether you are using UVD or 3D, they should have similar bandwidth requirements for decode.
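
                                For a rough sense of scale, here is a back-of-the-envelope sketch. NV12 frames and "one reference read plus one write" per output frame are simplifications (real H.264 motion compensation reads more), but the totals are the same whichever block does the work:

                                Code:
                                # Raw frame traffic for 1080p30 decode, identical
                                # whether UVD or the 3D engine does the work.
                                width, height, fps = 1920, 1080, 30
                                frame_bytes = width * height * 1.5  # NV12: 1.5 B/px
                                traffic = frame_bytes * fps * 2     # read + write
                                print(f"~{traffic / 1e6:.0f} MB/s")  # ~187 MB/s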

                                Comment
