
Radeon HD 6000 Detailed Specs


  • #11
    I'm sticking with my HD5970. I will not be making a purchase in the HD6000 series. I will wait for TSMC's next smaller process line, and then re-evaluate the standing of ATI and Nvidia's offerings on that line. I can't bring myself to spend top dollar on something that is just an incremental improvement, offering no new API support and with the same transistor size.

    Comment


    • #12
      In my opinion:

      A 6870 will have 800 shader cores; it's an HD 5770 refresh with a 265-bit memory interface and 205 GB/s.

      A 6770 will have 800 shader cores; it's an HD 5770 refresh with a 128-bit memory interface and 100 GB/s.

      A 6990 is a single-GPU card; it's the counterpart to the HD 5870. At launch there will be no dual-GPU card in the 6xxx line.
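For context, bandwidth figures like the ones claimed above can be sanity-checked from bus width and effective memory clock. A minimal sketch (the clock values below are hypothetical illustrations, not figures from this thread):

```python
def memory_bandwidth_gbps(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Theoretical peak memory bandwidth in GB/s.

    bandwidth = (bus width in bytes) * (effective transfer rate).
    """
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

# A 256-bit bus at a hypothetical 4800 MHz effective (GDDR5-class):
# 32 bytes/transfer * 4.8 GT/s = 153.6 GB/s.
print(memory_bandwidth_gbps(256, 4800))  # 153.6
```

Note that reaching the quoted 205 GB/s on a 256-bit bus would require roughly a 6.4 GT/s effective memory clock, which was well beyond shipping GDDR5 at the time, so the figure deserves skepticism.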

      Comment


      • #13
        Originally posted by Qaridarium View Post
        In my opinion:

        A 6870 will have 800 shader cores; it's an HD 5770 refresh with a 265-bit memory interface and 205 GB/s.

        A 6770 will have 800 shader cores; it's an HD 5770 refresh with a 128-bit memory interface and 100 GB/s.

        A 6990 is a single-GPU card; it's the counterpart to the HD 5870. At launch there will be no dual-GPU card in the 6xxx line.
        So you call me a bullshitter, and then you come up with these joke opinions? Man, what are you smoking!

        The 5870 has 1600 SPs, and the 6870, 'in your opinion', will have 800? Not to mention your "265 bit" joke?

        You are such an incompetent joke.

        Comment


        • #14
          Originally posted by FunkyRider View Post
          So you call me a bullshitter, and then you come up with these joke opinions? Man, what are you smoking!

          The 5870 has 1600 SPs, and the 6870, 'in your opinion', will have 800? Not to mention your "265 bit" joke?

          You are such an incompetent joke.
          Maybe, but I know AMD still doesn't have the final spec of the high-end version.

          The only finalized card is the 5770 (800 shaders) refresh called the 6870, with a 256-bit interface.

          The high-end card will be the 6990, and that one is a single-GPU card.

          Comment


          • #15
            What does it matter how great the hardware specs are when the potential is held back by the drivers? I still often hear from ATI (yeah, AMD, whatever) users about their problems, minor or otherwise, with the Catalyst/fglrx drivers, as if nothing's changed and the drivers are still totally stigmatized. And what good is UVD for Linux users?

            No matter how much I ideologically believe in FLOSS, we'll be lucky if we can reach 70% of the potential of a 5770 in less than 4 years, and by that time a 6890 will be outdated (not obsolete, but outperformed).

            Specs are useless without good drivers.

            Comment


            • #16
              Originally posted by numasan View Post
              we'll be lucky if we can reach 70% of the potential of a 5770 in less than 4 years
              Even 70%? O_O
              In only 4 years? O_O
              You are far too optimistic!
              ## VGA ##
              AMD: X1950XTX, HD3870, HD5870
              Intel: GMA45, HD3000 (Core i5 2500K)

              Comment


              • #17
                Originally posted by numasan View Post
                No matter how much I ideologically believe in FLOSS, we'll be lucky if we can reach 70% of the potential of a 5770 in less than 4 years, and at that time a 6890 is outdated (not obsolete, but outperformed). Specs are useless without good drivers.
                Numasan, the 70% estimate came from our architects based on the current size of the open source driver development community and the complexity of the development task. Most of what can be done to simplify the development task (e.g. the transition to Gallium3D, documentation, etc.) is being done, but the easy way for someone to "break the 70% rule" is to get involved and contribute to the development effort. Doubling the resources won't halve the performance gap (as with so many things, you probably need 10x the resources to get 2x the results), but it would sure make a difference.

                Somewhere in the last decade the average user's view of the FOSS ideal has gone from "we can do it" to "why isn't someone else doing it?". Initial indications suggest that the new model doesn't work so well.

                Comment


                • #18
                  Originally posted by bridgman View Post
                  Somewhere in the last decade the average user's view of the FOSS ideal has gone from "we can do it" to "why isn't someone else doing it?". Initial indications suggest that the new model doesn't work so well.
                  I actually wonder if that has something to do with the fact that Linux has developed so far. I mean, maybe the people with hacker mentality aren't finding a well-documented platform as fun to hack as some completely unknown frontier? *shrug*

                  Comment


                  • #19
                    Yes, I'm aware of the situation where "normal users" demand more and more without contributing back, and being a non-programmer myself I'm probably viewed as belonging to this group. The only "power" I have is the money I choose to spend, and that is a drop in the ocean... AMD doesn't care if I choose to buy AMD hardware exclusively because of your work, Bridgman. Volunteers are scratching their own itch, benefiting us all, sure, but as the KWin story shows, even (in my eyes) hardcore developers are having a hard time with the graphics stack. How on earth should I, who barely knows Python, contribute to reaching the full potential of modern GPUs? It seems that even a team of Catalyst developers can't do it.

                    I don't want to dismiss anyone's work, but I must say I'm unimpressed by hardware specs that on paper are "20% faster than the current greatest"; that doesn't mean anything in reality for us Linux users, using either closed or open drivers. I'd rather have a GPU that is 20% slower but works as advertised, flawlessly. In time open drivers can fulfill this, I believe (hope), but right now, as a common, non-GPU-code-contributing end user, great AMD hardware is not as attractive as it could be. From this position I don't care how AMD shuffles resources, as long as it can justify my investment.

                    Comment


                    • #20
                      Well, I suppose that, e.g., going for a Red Hat subscription or similar would be the most feasible way to put your money into something that will most likely end up in Linux development.

                      Comment
