AMD Publishes Open-Source Linux HSA Kernel Driver
-
Originally posted by dungeon: Is that the new A10-7800 APU in the picture - the 65W/45W part? I like the lower TDP on that one; 512 shaders at 45W sounds cool.
Comment
-
Originally posted by molecule-eye: You're thinking of the A8-7600, which is available NOWHERE at the moment. We should see it by the end of the year, meaning it was a total paper release. Sad.
The AMD A10-7800 APU will be available for purchase in Japan starting today, with worldwide availability at the end of July.
Comment
-
As I see it, they already have it in hand in Japan, and a cTDP of 45W is even advertised.
Comment
-
Originally posted by molecule-eye: You're thinking of the A8-7600, which is available NOWHERE at the moment. We should see it by the end of the year, meaning it was a total paper release. Sad.
Comment
-
When Kaveri (desktop) was first presented in January, the A8-7600 was among the models announced, though only the two A10 flavors have come to market so far. But they did present specs for the A8-7600 both in its default 65W config and in the 45W configurable-TDP mode; see the first slide here.
The numbers are for 65 / 45 Watts:
Default CPU freq: 3.3 / 3.1 GHz
Max Turbo Core: 3.8 / 3.3 GHz
GPU Frequency: 720 / 720 MHz
CPU Cores: 4 / 4
GPU Cores: 6 / 6 (384 shaders)
So defined frequencies for the 45W cTDP were published; for the A10-7800, though, they didn't publish these numbers. But the frequencies can at least be expected to be a little higher than those of the A8-7600. CPU and GPU core counts are the same as in the 65W config (4 CPU cores, 8 GPU cores aka 512 shaders), of course. The nice part is that the GPU doesn't seem to need downclocking, though in practice/on average it might run a little slower than in 65W mode.
Comment
-
There are some published specs - click on the "specs" tab at:
It doesn't seem to say whether the clock frequencies are turbo or not, but from the numbers I imagine they are. The A10-7800 and A8-7600 are both there; I'm guessing the fabbed parts yielded more A10-7800s than A8-7600s.
Comment
-
Originally posted by kaprikawn: So if I understand this correctly, it means that the CPU and GPU portions of an APU can both access the same memory (like they've been banging on about for the PS4 and Xbone 180)?
Does that mean that before, if you had an APU, some of your RAM was allocated to GPU tasks at startup, and when the CPU needed the GPU to do something it had to transfer the data from the memory addresses used by the CPU to the parts used by the GPU (even if that was on the same physical stick of RAM)?
If my understanding is correct, I'm guessing it has no benefit for users with a CPU and a dedicated GPU where, obviously, the GPU has its own RAM on the card?
Obviously for certain purposes you would arrange things for optimal performance (just like you arrange things for optimal audio performance on a CPU). If the task demands it, you would wire down certain pages being accessed by the GPU so that they don't have to fault, with the glitch that implies. You'd run certain GPU threads at real-time priority so they aren't interrupted by less important threads, etc. But the basic model is to have the OS controlling memory management and time scheduling for the GPU cores just like for CPUs.

The value of this is most obvious when you imagine, for example, that you want a large compute job to run on your GPU, but you want to time-slice it with something realtime like video decoding, game playing, or just UI. The OS can, on demand, submit bundles of code representing UI updates to the real-time queue and have them immediately executed; but while that's not happening, in any free time, the compute job can do what it does, which might include (for very large jobs) occasionally page faulting to bring in new memory. Compute jobs will no longer have to be written like it's the '80s: manually handling segmentation to swap memory in and out, manually trying to reduce their outer loop to something that lasts less than a 30th of a second, a la cooperative multitasking.
But all this is based on the idea that the CPU and GPU cores share a NoC, a common ultra-high-speed communication system, along with a shared address space and a high-performance coherency mechanism (e.g. a common L3 cache). That's not the case for existing discrete GPUs, and it's not clear (at least to me) whether it could be made fast enough to be useful over existing PCIe. Basically this is a model based on the idea that the future of interest is the GPU integrated onto the CPU (or, if necessary, communicating with it over the sort of inter-socket pathways you see on multi-socket Xeon motherboards). This fact makes gamers scream in fury because it is very obvious that they are being left behind by this. Well, that's life - gaming just isn't very important compared to mainstream desktop computing and mobile, the worlds that don't use and don't care about discrete GPUs.
Comment
-
Originally posted by ObiWan: The upper (and higher) numbers are the turbo clocks; the lower ones are the standard non-turbo clocks.
Comment