AMD A8-3500M Llano Linux Benchmarks


  • Luke_Wolf
    replied
    Originally posted by Yfrwlf View Post
    The problem is that no CPU manufacturer makes CPUs for any of the standards you claim exist for legitimate reasons, except its own socket type. Why is that? Oh yeah, it's because they want to fragment the market so that consumers will be locked into one particular socket. It's "good for business" (them taking more of your money) to make consumers throw away their entire computer whenever they want to upgrade. This waste in the system, like all other waste, results in more money for the super-rich, less for everyone else, and thus a lower quality of life for everyone else.
    It's not just good for business; it's good for the consumer, for the same reason an unstable hardware API is good for the Linux kernel. It gives the designer more freedom to innovate, rather than being stuck with an older feature set and socket that will never get updated because the committees can't decide on anything.

    Let's be quite blunt here: for something extremely complex and rapidly developing, say a kernel or a CPU, you don't want to lock down progress and bring it to a halt through bureaucracy. You want the company or developer to have as much freedom as it can possibly have.

    Now suppose ISO held the standard for the rapidly developing technology of CPUs, and AMD came up with a new feature nobody had ever thought of before, but one that required a new socket to work. To make it happen they would have to announce to ISO why they want the standard changed, tipping their hand on the new tech; Intel would come along and implement its own hack of it, and AMD would lose the initiative. On top of that, both companies would have to wait six months for the standard to be argued over, resubmitted, argued over again, signed in triplicate, sent back, lost, found, subjected to public inquiry, lost again, and finally buried in soft peat for three months before being recycled as firelighters, before it finally got approved.

    In comparison, let's look at ODF for a second and why it can have a standard. ODF isn't really a moving target per se; it evolves as a standard, yes, but at glacial speed. Nobody is really developing the standard so much as the office suites behind it, which means the point is interoperability. These office suites aren't trying to tack on extra stuff the standard doesn't support, because that's not the point: it's not really a competition against other ODF suites, it's a competition against OOXML, and in fact they're trying to make the suites interoperate with one another.

    A processor is completely different. What you're asking for is the equivalent of telling all game developers: "You can develop a game, but only for the Unreal 3 engine, and if you want a new feature... well, you'll have to wait for us to discuss it and, eventually, just maybe, if we feel like it, get around to it in Unreal 4 in the next few years." Obviously Unreal 3 is well suited to some tasks but not others; I wouldn't, for instance, want to write a 2D adventure game in it. I'd rather hack AGS to work under Linux.

    Leave a comment:


  • Yfrwlf
    replied
    Originally posted by misiu_mp View Post
    There are always technical excuses: vastly different, quickly changing designs, pin counts, pin layouts optimised for different things.
    This would basically force both manufacturers to use similar technologies. For example, if the socket supports an on-die memory controller (extra pins), then all CPU manufacturers would have to use those pins, or else the mobo makers would have to make some boards with external memory controllers (northbridges), which would not work very well for CPUs with built-in memory controllers, in practice making the board CPU-specific.
    This would mean that AMD and Intel would need to cooperate on motherboard standards and interconnects, as well as on parts of the CPU designs themselves. Two major manufacturers of competing products starting to cooperate is not good for competition, so this would require a wider committee representing more stakeholders. We've seen this kind of cooperation benefit consumers before, for example with USB, SATA, PCI, AGP, PCIe, ATX, and Ethernet. The problem is that it might place overly strict restrictions on the freedom to design for maximum performance, and unlike those other standards, performance (or efficiency) is the only thing that counts in CPU design.
    That's the excuse, anyway. They are cooperating, though: cooperating not to cooperate. In reality these CPUs do exactly the same thing and accomplish the same goal, and if these supposedly huge differences in architecture were actually relevant, multiple socket standards could exist to compensate, or you could simply allow for both options. Just because pins are available doesn't mean you have to use them all. The problem is that no CPU manufacturer makes CPUs for any of the standards you claim exist for legitimate reasons, except its own socket type. Why is that? Oh yeah, it's because they want to fragment the market so that consumers will be locked into one particular socket. It's "good for business" (them taking more of your money) to make consumers throw away their entire computer whenever they want to upgrade. This waste in the system, like all other waste, results in more money for the super-rich, less for everyone else, and thus a lower quality of life for everyone else.

    Yes, you need to allow for things getting smaller, and maybe for a couple of socket types for when certain things need to be offloaded onto other chips on the mobo (the sockets for which should also be standardised), but that doesn't mean you can't have standards. Standards can evolve when needed, while still allowing for much better competition than you get without them.

    Leave a comment:


  • misiu_mp
    replied
    Originally posted by Yfrwlf View Post
    When someone announces GPCPU socket standards, so that mobos and GPCPUs aren't locked together, direct competition returns, and you don't have to buy a new mobo when you buy a new GPCPU, then I will be excited.

    Oh wait, that will never happen, because they only care about taking more of your money...
    There are always technical excuses: vastly different, quickly changing designs, pin counts, pin layouts optimised for different things.
    This would basically force both manufacturers to use similar technologies. For example, if the socket supports an on-die memory controller (extra pins), then all CPU manufacturers would have to use those pins, or else the mobo makers would have to make some boards with external memory controllers (northbridges), which would not work very well for CPUs with built-in memory controllers, in practice making the board CPU-specific.
    This would mean that AMD and Intel would need to cooperate on motherboard standards and interconnects, as well as on parts of the CPU designs themselves. Two major manufacturers of competing products starting to cooperate is not good for competition, so this would require a wider committee representing more stakeholders. We've seen this kind of cooperation benefit consumers before, for example with USB, SATA, PCI, AGP, PCIe, ATX, and Ethernet. The problem is that it might place overly strict restrictions on the freedom to design for maximum performance, and unlike those other standards, performance (or efficiency) is the only thing that counts in CPU design.

    Leave a comment:


  • Yfrwlf
    replied
    When someone announces GPCPU socket standards, so that mobos and GPCPUs aren't locked together, direct competition returns, and you don't have to buy a new mobo when you buy a new GPCPU, then I will be excited.

    Oh wait, that will never happen, because they only care about taking more of your money...

    Leave a comment:


  • curaga
    replied
    It would be supported in the sense that attempting such a transfer would be instant, I believe.

    Leave a comment:


  • oibaf
    replied
    Originally posted by Qaridarium
    sure why not?
    Because it looks like the Fusion GPU is managed like a standard GPU in the current drivers. I haven't seen anything like that so far. I'm asking just in case I missed some commits...

    Leave a comment:


  • aussiebear
    replied
    I'm going to skip Llano. (Maybe get something for my dad?)

    Anyway, at the AMD Fusion Developer Summit 2011, they talked about Llano's 2012 replacement...

    Trinity => 2nd generation Bulldozer + cut-down version of Radeon HD 6900-series?

    This is what I've been waiting for from AMD!
    Last edited by aussiebear; 15 June 2011, 07:48 AM.

    Leave a comment:


  • oibaf
    replied
    From: http://www.tomshardware.com/reviews/...pu,2959-4.html
    The Fusion APU also boasts a unique ability that dedicated graphics cards cannot possess: direct access to unified memory shared between the CPU and GPU, something that makes Zero Copy and Pin-in-Place possible. To understand the advantage, consider how a discrete graphics card works today; texture maps are created in system memory and then transferred to virtual memory in Windows. When the system needs to bind the texture, it first makes sure it's in virtual memory, then the OS copies it to DRAM, and the DMA of the PCIe bus transfers it to the graphics memory for access. Simply put, there's a lot of copying going on that can cause significant latency.

    But an APU doesn't need to copy memory contents because the GPU and APU blocks share access to the same memory. Zero Copy can access virtual memory directly. Just update the page tables and point to it; no copying is necessary. Application memory can be pinned in place without copying it through the operating system staging buffers. When very large data sets are involved, the APU can even outrun a dedicated GPU (Ed.: I covered this optimization, which AMD was calling Fast Copy previously, in ASRock's E350M1: AMD's Brazos Platform Hits The Desktop First. Brazos is also able to share that memory space, which was previously separate, and enjoy a latency reduction).
    Is this supported in the Linux drivers (kernel, DDX, Mesa)?

    Leave a comment:


  • Kivada
    replied
    Yeah, they're K10.5-based. The desktop variants look very interesting for the HTPC market as well; I've been reading that the A8-3850 can overclock to 3.7 GHz, and that its HD 6550D scales very nicely with RAM bandwidth.

    Leave a comment:


  • renkin
    replied
    I thought AMD had already disclosed that it's based on the Stars core? So no more guessing =p

    Leave a comment:
