Arm Announces Cortex-A78, Cortex-X Custom
Originally posted by discordian: Well, then why have multiple layers of caches? Just put everything in L1.
It's a balance of size/speed (a bigger cache means slower access) and power efficiency, and the upshot is that the chip is now balanced differently and, on the whole, is faster than its predecessor.
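To make that size/speed balance concrete, here is a toy average-memory-access-time (AMAT) calculation. All latencies and hit rates below are made-up illustrative numbers, not figures for any real chip; the sketch only shows why a small, fast L1 backed by a bigger, slower L2 can beat one large, slow cache.

```python
# Toy AMAT (average memory access time) comparison.
# All latencies (in cycles) and hit rates are illustrative, not from any real chip.

def amat(levels, dram_cycles):
    """levels: list of (hit_latency_cycles, hit_rate), fastest level first.
    Misses at the last level go to DRAM."""
    total = 0.0
    miss_fraction = 1.0
    for latency, hit_rate in levels:
        total += miss_fraction * latency      # every access reaching this level pays its latency
        miss_fraction *= (1.0 - hit_rate)     # fraction that misses and falls through
    return total + miss_fraction * dram_cycles

# One big cache: large enough to hit often, but slow on every access.
single_big = amat([(5, 0.99)], dram_cycles=200)           # 5 + 0.01*200 ≈ 7 cycles

# Small fast L1 plus a bigger, slower L2: most accesses pay only 1 cycle.
l1_plus_l2 = amat([(1, 0.95), (10, 0.80)], dram_cycles=200)
# 1 + 0.05*10 + 0.05*0.20*200 ≈ 3.5 cycles

print(round(single_big, 2), round(l1_plus_l2, 2))
```

Even though the two-level setup misses to DRAM just as often as the single big cache here, the vast majority of accesses pay only the 1-cycle L1 latency, so the average comes out far lower.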
Originally posted by PerformanceExpert: Security is important in the mobile space. However, like on your PC, by far your biggest risk is not from a Spectre-style attack but from downloading an app with a virus or clicking on a link a "friend" sent. If you avoid that, there is generally little to worry about. A long time ago, I connected my brand new Windows PC to the internet and got infected with a worm within 5 minutes due to the total lack of security at the time. Things have changed.
Agreed 100% that it's your personal behavior (clicking malicious links, loading malicious apps, etc.) that is by far the biggest threat to mobile device security. Use common sense, and you'll be fine.
Last edited by torsionbar28; 26 May 2020, 02:36 PM.
Originally posted by _Alex_: Since when is halving the L1 cache size an ..."improvement"?
I guess when CPU manufacturers raise it, they claim a performance improvement, and when they reduce it, they claim an efficiency improvement... They can never lose.
And yes, efficiency is at odds with performance. When one wins, the other one loses.
Umm... the Cortex series are not SoCs but CPU cores sold as IP cores (i.e. ready-for-synthesis designs expressed in Verilog/VHDL) to the various companies that make SoCs. Calling them SoCs is like calling a Yamaha engine a "car" when it's just a component part Yamaha makes to design specs given to it by carmakers like Volvo and Toyota.
I see they're just going bigger and bigger with these cores in terms of transistors per core. Some size growth is completely natural, but it feels like ARM is now very much aiming at continually growing the cores until they have a real desktop replacement in their hands. All the while my current phone uses four comparatively tiny A53 cores and I don't think I need anything considerably faster.
Originally posted by bug77: Since L1 cache is the fastest memory in the system (save for registers), it's also the most power-hungry.
And yes, efficiency is at odds with performance. When one wins, the other one loses.
Originally posted by L_A_G: I see they're just going bigger and bigger with these cores in terms of transistors per core. Some size growth is completely natural, but it feels like ARM is now very much aiming at continually growing the cores until they have a real desktop replacement in their hands.
Originally posted by PerformanceExpert: Absolutely, the X1 will be pretty much equivalent in performance to a 3950X according to AnandTech. That would allow seriously fast laptops and desktops!
Originally posted by PerformanceExpert: It's never either/or. Like with software, hardware is not 100% optimal, so there is always plenty of room for improvement in every aspect. You can improve performance and get better efficiency. For example, replace the branch predictor with a larger one that uses the same amount of power: performance improves due to fewer branch mispredictions, and as a result efficiency improves. Similarly, improve efficiency and performance improves in power-constrained scenarios.
Or you can do it by moving to a smaller node, but then you're giving up the option of keeping the same predictor and saving die space. Still either/or.
The only case where you'd win across the board is if you redesign the branch predictor to be more efficient using the same number of transistors and the same die space (or less). And that happens too, just not as often as juggling the other, known variables.
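The larger-branch-predictor argument above can be sketched with a toy simulation: a textbook 2-bit saturating-counter ("bimodal") predictor indexed by low PC bits, run at two table sizes on a synthetic trace in which two oppositely-biased branches alias in the small table. The predictor model, PC values, and table sizes are illustrative assumptions, not the design of any real Arm core.

```python
# Toy 2-bit saturating-counter branch predictor (textbook bimodal scheme).
# Demonstrates that a larger prediction table can cut mispredictions by
# avoiding aliasing; it does not model any real Arm core.

def mispredicts(table_bits, trace):
    """trace: iterable of (branch_pc, taken). Returns the misprediction count."""
    table = [2] * (1 << table_bits)         # counters 0..3; >= 2 predicts "taken"
    misses = 0
    for pc, taken in trace:
        idx = pc & ((1 << table_bits) - 1)  # index by low PC bits
        if (table[idx] >= 2) != taken:
            misses += 1
        # saturating update toward the actual outcome
        table[idx] = min(3, table[idx] + 1) if taken else max(0, table[idx] - 1)
    return misses

# Two branches with opposite biases: pc=0 is always taken, pc=4 is never taken.
trace = [(0, True), (4, False)] * 100

small = mispredicts(2, trace)  # 4-entry table: both branches hash to slot 0
large = mispredicts(3, trace)  # 8-entry table: separate slots
print(small, large)
```

With the 4-entry table the two branches destructively alias in one counter and every not-taken branch is mispredicted, while with 8 entries only the initial warm-up prediction misses: a larger structure buying performance (and, at equal power, efficiency), exactly the trade-off being debated.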
Originally posted by bug77: Flawed example. You can only replace the branch predictor with a larger one that uses the same amount of power if you give up die space. So it's still either/or.