AMD Ryzen 3 Rolls Out, Linux Benchmarks Coming


  • #31
    Originally posted by kylew77 View Post
    Why would AMD disable SMT on purpose on the Ryzen 3? Wouldn't they have to deliberately damage the dies to do so? Wish they wouldn't, because then 8 threads would be the lowest common denominator, meaning even the lowly netbook would have 8 threads and programs would start being optimized for more than dual core / quad core parts.
    Well, money..... If you could get 4 cores / 8 threads at the $75-125 price point where most people actually buy hardware, the vast majority of people would never buy anything more. People are still walking into Walmart and buying a dual core without hyperthreading in 2017. It was over $200 six months ago and now it's down to $160; it will get there in the next few years.

    It would probably be better if it were driven more by software, making writing programs for concurrency more standard.
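
    For what it's worth, the tooling side is mostly there already. A rough Python sketch (the crunch() function and the data are just made-up placeholders) that sizes its worker pool from whatever the machine has, instead of assuming dual or quad core:

    # Size the worker pool from the machine instead of hard-coding core counts.
    import os
    from concurrent.futures import ProcessPoolExecutor

    def crunch(chunk):
        # stand-in for real per-chunk work
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        workers = os.cpu_count() or 2               # 4 on a Ryzen 3, 8 on an SMT part, etc.
        chunks = [data[i::workers] for i in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            total = sum(pool.map(crunch, chunks))
        print(workers, "workers ->", total)

    Written that way, the same program simply uses more of the chip as core/thread counts go up, which is the kind of default you'd want software to have before 8 threads becomes the floor.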

    Comment


    • #32
      Originally posted by Geopirate View Post

      Well, money..... If you could get 4 cores / 8 threads at the $75-125 price point where most people actually buy hardware, the vast majority of people would never buy anything more. People are still walking into Walmart and buying a dual core without hyperthreading in 2017. It was over $200 six months ago and now it's down to $160; it will get there in the next few years.

      It would probably be better if it were driven more by software, making writing programs for concurrency more standard.
      People said the exact same thing about 1GHz Thunderbirds and Coppermines..... Look at where we are today.

      Comment


      • #33
        Originally posted by bridgman View Post

        Fixed that for you.

        The problem is that once one company does it, everyone has to do it, unless your mfg costs are so much lower that you can sell a full-featured part for the same price your competitor charges for the de-featured parts, which in turn are subsidized by the higher prices they charge for full-featured parts.

        Ok color me confused, I thought ECC wasn't disabled on Ryzen, but it sounds like what you are saying is it has to be in order to be competitive? Or are you just saying in general that's why AMD might choose to disable a feature which, in theory, costs them nothing to leave enabled?


        Comment


        • #34
          Originally posted by bug77 View Post

          It's a bit counterintuitive, but it works the other way around: a TDP is enforced and then cores can go wild as long as they stay within constraints. You can draw 65W by boosting one core to the max or by running all cores at the nominal speed.
          Real power consumption is tested at TechPowerUp: it's worse than comparable Intel quad cores clocked roughly the same, both at idle and under load.
          TechPowerUp has whole-system power comparisons. According to GamersNexus, the 1300X idles at 3.69W, and performance is better, so the increased power draw at load might not be wasted. I'm looking forward to Michael's performance-per-watt results. Another variable is price.

          My opinion: Ryzen 3 looks good for the price. However, I won't recommend that anyone buy it before Raven Ridge is released. If you are looking for a cheap gaming setup, integrated graphics might be better than buying a low-end GPU.
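
          If anyone wants to watch that boost-within-a-TDP behaviour for themselves on Linux, a quick Python sketch along these lines works (it assumes the usual cpufreq sysfs paths, which can differ per kernel/driver):

          # Sample per-core clocks; under a fixed TDP a single loaded core should
          # boost high while an all-core load settles closer to the base clock.
          # Assumes the common Linux cpufreq sysfs layout; adjust if yours differs.
          import glob
          import time

          def core_clocks_mhz():
              clocks = {}
              paths = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq")
              for path in sorted(paths):
                  cpu = path.split("/")[5]                   # e.g. "cpu0"
                  with open(path) as f:
                      clocks[cpu] = int(f.read()) / 1000.0   # kHz -> MHz
              return clocks

          if __name__ == "__main__":
              for _ in range(5):
                  print(core_clocks_mhz())
                  time.sleep(1)

          Run it once at idle, once with a single-threaded load, and once with an all-core load, and the "cores go wild within the budget" behaviour is easy to see.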

          Comment


          • #35
            Originally posted by duby229 View Post

            People said the exact same thing about 1GHz Thunderbirds and Coppermines..... Look at where we are today.
            I'm not sure how that's a rebuttal. I did say that we will get there in a few years.

            If you compare modern high-end hardware to back then, you make a great point. If you're comparing modern low-end hardware to back then, it's a much weaker point. As I said, there is some really sad new hardware you can walk into a store and purchase today.

            At least on the GPU side, one could argue that software has been pushing hardware for some time. "Can it run Crysis" is still a meme for a reason.

            Comment


            • #36
              Originally posted by Brophen View Post


              Ok color me confused, I thought ECC wasn't disabled on Ryzen, but it sounds like what you are saying is it has to be in order to be competitive? Or are you just saying in general that's why AMD might choose to disable a feature which, in theory, costs them nothing to leave enabled?

              I don't mean to speak for bridgman, but I'm nearly positive it's the latter.

              Comment


              • #37
                Originally posted by Geopirate View Post

                I'm not sure how that's a rebuttal. I did say that we will get there in a few years.

                If you compare modern high-end hardware to back then, you make a great point. If you're comparing modern low-end hardware to back then, it's a much weaker point. As I said, there is some really sad new hardware you can walk into a store and purchase today.

                At least on the GPU side, one could argue that software has been pushing hardware for some time. "Can it run Crysis" is still a meme for a reason.
                Those are all good points, but it is a rebuttal, because you'd have to look at single-board computers to find something comparable today. Except I'd imagine Thunderbird and Coppermine both would have better IPC and much higher power usage.

                Comment


                • #38
                  Originally posted by duby229 View Post
                  And that's exactly the problem: you never had problems, but they did happen. Like I said, it's just my opinion, so I understand it's not the way I'd like it to be.
                  Again - so what? How has it hurt me, or the literally billions of people who use non-ECC? I don't deny that a byte may have switched on me, but it doesn't matter. ECC RAM on a home or basic office PC is the equivalent of germaphobia. You WILL be infected by something. You are covered in microbes. Your body is under perpetual attack and rebuilding itself as a result. The moment you die, all that bacteria is immediately decomposing you. But does this bacteria ruin my life? No, not really. I may be inconvenienced by a cold once in a while, but as long as I'm not shooting up heroin with used needles, swimming through a sewer, or eating rotting food, worrying about bacteria is irrelevant. I don't need to wear a bubble, I don't need a face mask, and I don't need hand sanitizer, so why bother with ECC RAM?

                  To reiterate - I don't mind that you like to use ECC. If that's what makes you feel good, go for it, I won't judge. My gripe is how necessary you feel it is for everyone, when that is just simply not true.
                  EDIT:
                  I think anyone would agree that if everyone had ECC then sure, that'd be nice - there's not a problem with using ECC if you've got it. But the difference is necessity, and that is too strong of a word.


                  Originally posted by chithanh View Post
                  But you cannot tell if your memory is damaged if you don't have ECC.
                  Sure you can. Programs start to fail. Your PC might not boot up properly. You'll get results that are obviously wrong.

                  A Google study in 2009 found that about 8% of DIMMs in their servers were affected by errors each year. Age of the memory chips was a significant factor.
                  This means even memory which appears to work fine during initial memtest86 burn-in can develop problems later on. By the time you notice, it may already be too late, with data corruption having spread to your backups or affected filesystem structures.
                  I believe that. As stated before, I feel ECC in servers is important. I see real legitimate value in ECC, but I find it overkill and unnecessary for home use. You also have to keep in mind that server hardware doesn't run under the same conditions. Servers often use more voltage with lower clocks (for stability purposes), which degrades hardware quicker and may cause leaky transistors. It's also common to not find heatsinks on server RAM, where they're more exposed to EMI.
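
                  (For the curious: a crude write/read-back check is easy to hack together, e.g. the Python sketch below. It is nowhere near a real memtest86 pass, just the basic pattern-test idea, and the buffer size and hold time are arbitrary choices.)

                  # Crude pattern test: fill a buffer, let it sit, re-read and compare.
                  # Nothing like a real memtest86 run, just the basic idea.
                  import time

                  SIZE_MB = 256                    # arbitrary amount of RAM to exercise
                  PATTERN = 0xA5                   # alternating-bit test pattern

                  def pattern_check(size_mb=SIZE_MB, hold_seconds=60):
                      buf = bytearray([PATTERN]) * (size_mb * 1024 * 1024)
                      time.sleep(hold_seconds)     # let the data sit before re-reading
                      return len(buf) - buf.count(PATTERN)   # mismatched bytes

                  if __name__ == "__main__":
                      print("mismatched bytes:", pattern_check())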

                  Then you have the problem of attackers intentionally provoking memory errors like in Rowhammer. This has been demonstrated to allow compromise of virtual machines running on the same host. DDR4 memory was once thought to be somewhat resistant against Rowhammer, but no longer.
                  Yeah.... if someone is hacking my PC to cause memory issues, I've got MUCH bigger problems to worry about than the data integrity of my RAM. Having ECC protect me would be the least of my priorities in such a situation.

                  ECC is required where you cannot afford silent corruption of your data.
                  I totally agree, but the average PC does not operate at such a caliber.
                  Last edited by schmidtbag; 27 July 2017, 09:20 PM.

                  Comment


                  • #39
                    Originally posted by Geopirate View Post

                    I'm not sure how that's a rebuttal. I did say that we will get there in a few years.

                    If you compare modern high-end hardware to back then, you make a great point. If you're comparing modern low-end hardware to back then, it's a much weaker point. As I said, there is some really sad new hardware you can walk into a store and purchase today.

                    At least on the GPU side, one could argue that software has been pushing hardware for some time. "Can it run Crysis" is still a meme for a reason.
                    I agree with you. Crysis is a good example, as it scaled as new hardware was released. There are far too many games being optimized for last-gen hardware, Fallout 4 being at the bottom for me; for all its good reviews, the game does not perform better with better hardware.

                    Now we have created a big market of gamers who pay lots of money for software that sucks. With things like DLC, early access, pay-to-win and even lotteries (with real money) all inside today's AAA games, you get an idea of how gullible the average person has become. The game does not scale over 2 cores? Who cares, we can just buy the low-quality textures for an extra few bucks.

                    There are people/companies that have done the research and are trying to produce the hardware that will enable technology to grow at a faster pace. Then there are people/companies that have done the research and found that small-scale R&D, coupled with customers who are locked into their hardware/software, creates low risk and high financial gain for investors. Guess who is winning?

                    I believe the sad new hardware that Geopirate speaks about is the result of a lack of software engineering and the number of clueless gamers we have today. In the past we had "nerds" who played games; today, if you have the latest enthusiast hardware rebrand, "most" people like/respect you. Things like esports will only cause gaming to grow bigger, with more uneducated people becoming part of it. Hardware and game development companies know this, and they are now targeting average gamers who won't buy the latest new microarchitecture that would cover their extremely high R&D and profit costs, and who won't be annoyed if their older cards do not receive code optimization in the latest games. From a financial point of view it makes sense.

                    Some people deny the effects that games have had on hardware over the years (even though it was a massive funding point in Star Citizen); some people say that the latest i3 with a 1050 Ti is a great gaming machine. I don't want to live in their idea of reality. I'll go as far as to say that we need s/Elon Musk/inventor/ to create a new game engine written from the ground up that uses Vulkan and supports multi-GPU processing regardless of model or make. In gaming, CPU scaling becomes useful only after GPU scaling is achieved.

                    Comment


                    • #40
                      As a data point, we have ~20 Dell servers running RHEL, totalling just over 3 TB of RAM. It's all ECC, of course. Across the 20 servers, we see about 5 ECC events per year. You can see them in /var/log/messages; it will say something like:

                      [Hardware Error]: ECC error. Error Status: Corrected error, no action required.

                      Pretty self-explanatory. So clearly memory errors do occur, even in expensive tier-1 servers; it's not a purely theoretical thing. They are being detected and corrected by the ECC feature. Without ECC, the errors would still occur, but they would not be corrected, or even detected by the OS.
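
                      If anyone wants to tally those, a throwaway Python script like this does it (the log path and exact message text vary by distro/kernel, so treat it as a sketch):

                      # Count corrected ECC events reported in the kernel log.
                      # Message text and log path vary by distro/kernel; adjust to taste.
                      import re
                      import sys

                      LOG = sys.argv[1] if len(sys.argv) > 1 else "/var/log/messages"
                      pattern = re.compile(r"Hardware Error.*ECC error.*Corrected error", re.I)

                      count = 0
                      with open(LOG, errors="replace") as log:
                          for line in log:
                              if pattern.search(line):
                                  count += 1

                      print(count, "corrected ECC events in", LOG)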

                      I won't speculate about what the error rate is in a desktop PC; I just wanted to point out that memory errors are real and regular, not some rare theoretical event that only happens once in 100 years.

                      Comment
