12-Way Radeon Gallium3D GPU Comparison
Originally posted by bridgman:
"I think the issue is that there's no standard way for PTS to obtain this info."

You can actually compare AMD and NVIDIA cards on memory performance from their memory bit width and memory clock, which is something you can't do for the GPU cores themselves without benchmarking, because the cores have different designs. NVIDIA has historically made very wide-bus cards because its product releases often happen at awkward times: whenever the Taiwan fab is able to manufacture the GPUs with a decent yield. AMD has historically delayed its graphics card releases to coincide with new video memory technology so that its cards don't need a wide bus, which means more performance for your money, and it's easier to get a lot of board manufacturers making AMD graphics cards since their board costs are lower.

Last edited by Sidicas; 27 November 2012, 08:23 PM.
-
Originally posted by Hamish Wilson:
"A better than normal article, but I would still suggest you include a bit more information about the specific cards being tested. For instance, the Radeon HD 4670 you used in your test is not the same as the Radeon HD 4670 I am using in my machine, and I doubt the Radeon HD 6450 in the article is the one I recommended to my brothers. Maybe include a mention of the brand used, such as Sapphire, Diamond, MSI, etc."

What does make a world of difference on low-end cards is the memory type and its interconnect bit width. On low to lower-mid-range cards you will find all combinations of GDDR, GDDR2, GDDR3, and GDDR5 at 64-bit, 128-bit, and so on. For a given GPU core, a card with 64-bit GDDR5 is going to perform about the same as a card with 128-bit GDDR3, since they have about the same total VRAM bandwidth.

You will also see the same GPU core paired with an absolutely pitiful 64-bit GDDR configuration, which will always be much slower no matter what, because the lack of VRAM bandwidth chokes all the life out of it.

Rule of thumb for low-end dedicated GPUs: never buy anything with less than 128-bit GDDR5. Memory size barely matters at all, since these cards won't have the grunt for the heavier games that would need a lot of VRAM anyway, but you still want the card to be able to fully stretch its legs instead of having an anvil tied to its neck.
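The 64-bit GDDR5 vs. 128-bit GDDR3 equivalence above follows from a back-of-the-envelope bandwidth formula: bytes per transfer (bus width / 8) times transfers per second (memory clock times pumps per clock, roughly 2 for DDR/GDDR3 and 4 for GDDR5). A minimal sketch, with a made-up 900 MHz clock purely as an illustration:

```python
def bandwidth_gbps(bus_width_bits, mem_clock_mhz, pumps):
    """Theoretical peak VRAM bandwidth in GB/s.

    pumps: data transfers per memory clock cycle --
    roughly 2 for DDR/GDDR2/GDDR3, 4 for GDDR5.
    """
    bytes_per_transfer = bus_width_bits / 8
    transfers_per_sec = mem_clock_mhz * 1e6 * pumps
    return bytes_per_transfer * transfers_per_sec / 1e9

# Same GPU core, same (hypothetical) 900 MHz memory clock:
print(bandwidth_gbps(128, 900, pumps=2))  # 128-bit GDDR3 -> 28.8 GB/s
print(bandwidth_gbps(64, 900, pumps=4))   # 64-bit GDDR5  -> 28.8 GB/s
print(bandwidth_gbps(64, 900, pumps=2))   # 64-bit DDR    -> 14.4 GB/s
```

The first two configurations come out identical, while halving the bus width without the faster memory type halves the bandwidth, which is the anvil scenario described above. Real cards also differ in memory clock, so this is only a rough comparison tool.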
-
Originally posted by bridgman:
"I think the issue is that there's no standard way for PTS to obtain this info."

Also, don't make crap up about there being no way to garner this information. GPU-Z gets all of it and more for every type of GPU, so there is no reason PTS cannot get the same information.
-
Originally posted by Kivada:
"Yet this is not a problem the rest of the hardware sites on the internet have; they put up all the info about the card in all of their graphs. Also, don't make crap up about there being no way to garner this information, since GPU-Z gets all of this information and more about every type of GPU; there is no reason PTS cannot get this same information."

Michael Larabel
https://www.michaellarabel.com/
-
Originally posted by Michael:
"Show me any open-source Linux graphics driver that readily exposes all of the details you desire... PTS already reads core/memory clocks for all drivers (except the Radeon driver, which only exposes clock information over debugfs rather than sysfs; unless I run PTS as root or remember to chmod the debug directory before testing, it won't show anything), but as far as I know the drivers aren't exposing the memory bus width, etc."
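To make the debugfs problem concrete, here is a sketch of what reading the radeon clocks looks like from a userspace tool. The path and the exact line format of `radeon_pm_info` vary by kernel version, so both are assumptions here; the point is that the file lives under debugfs (root-only by default), so a non-root reader has to fall back gracefully:

```python
import re

# Assumed debugfs location; requires root or a prior chmod of /sys/kernel/debug.
PM_INFO = "/sys/kernel/debug/dri/0/radeon_pm_info"

def parse_clocks(text):
    """Pull 'name clock: NNN kHz' pairs out of radeon_pm_info-style output."""
    clocks = {}
    for name, khz in re.findall(r"([\w ]+ clock):\s*(\d+)\s*kHz", text):
        clocks[name.strip()] = int(khz)
    return clocks

def read_radeon_clocks(path=PM_INFO):
    """Returns {} when debugfs is unreadable, mirroring PTS showing nothing."""
    try:
        with open(path) as f:
            return parse_clocks(f.read())
    except OSError:
        return {}

# Hypothetical sample of the file's contents, for illustration:
sample = "default engine clock: 750000 kHz\ncurrent memory clock: 900000 kHz\n"
print(parse_clocks(sample))
```

Note that nothing in this file reports the memory bus width, which is exactly the gap Michael describes: the driver would have to export that attribute (ideally via sysfs) before PTS could display it without root.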