
Apple Launches The M2 Pro & M2 Max + New Mac Mini With M2 / M2 Pro


  • Originally posted by coder View Post

    I never said it wasn't. I was only ever trying to correct wild claims by mdedetrich that the M1 & M2 need the memory to be in-package, in order for LPDDR5-6400 to be usable, or that putting it there had significant latency benefits.
    You know how I said you should do some research? You probably should have taken that advice.

    https://semiengineering.com/tricky-t...fs-for-lpddr5/

    In Fig. 1, the range of frequency that the device is operating at is shown, along with the range of voltage that is operating. “People pay attention to this, and say, ‘I need to go 10% faster,’ for example,” Zarrinfar said. “One methodology that can be used is to overdrive the memory to get the gain of speed desired. The engineering team may get a low-power memory, but if they desperately need the speed for certain corners, they can overdrive or even underdrive the technology to meet their needs. As such, having full understanding of the ranges that the device will work at is very important. In older technology, such as 40nm, it was acceptable to just look at three corners — typical-typical, fast-fast and slow-slow. With the move down to 40nm, 28nm and 22nm and beyond, we need to look at five corners — typical-typical, fast-fast, slow-slow, fast-slow and slow-fast. This complicates matters.”

    The biggest design difference now is that with LPDDR5, it’s not necessarily in an embedded environment.

    “You’ve now got longer traces which LP originally was not designed to go on PCBs over very long distances,” said Rambus’ Ferro. “If you’re using an AI application, you have to look at signal terminations and in terms of your overall signal integrity, which was not necessarily a challenge in previous LP generations.”
    LPDDR, at least according to the people who created this type of memory, isn't even designed for very long traces on PCBs. If that isn't "LPDDR needs to be soldered/closer to the die", I don't know what is.

    Of course you don't have to solder it; it just has to be placed much closer to the die than SODIMM allows. And since we are talking about laptop form factors, Apple would either have had to create something proprietary, or take the much smarter option and just solder it to the die, which is exactly what they did.
    Last edited by mdedetrich; 19 January 2023, 07:42 PM.

    Comment


    • Originally posted by mdedetrich View Post
      You know how I said you should do some research? You probably should have taken that advice.

      https://semiengineering.com/tricky-t...fs-for-lpddr5/
      LOL. You're quoting stuff from a 3+ year old article, out of context, which you don't even understand. And all in some kind of pathetic attempt to defend a reckless claim you shouldn't have made.

      Quit wasting my time and yours. You're out of your depth. I'm happy you like your Mac, but this fanboying is really uncalled for. I'm not sure how you think your reckless and ignorant statements actually help Apple, anyhow. From my perspective, you're just playing into the stereotype of a typical Mac-head blowhard.

      Originally posted by mdedetrich View Post
      LPDDR, at least according to the people who created this type of memory, isn't even designed for very long traces on PCBs.
      You don't know what they mean by "long traces", at what frequencies the problems become severe, or how the "AI applications" they have in mind compare with normal laptop CPU usage. And it speaks volumes that you don't seem to care about such specifics. LPDDR was obviously designed for things like laptops and phones. Honestly, if I didn't know better I'd think I'm talking to a child.

      Originally posted by mdedetrich View Post
      the much smarter option would have just been to solder it to the die, which is exactly what they did.
      sigh. Even that isn't right ...not that it matters.
      Last edited by coder; 19 January 2023, 10:37 PM.

      Comment


      • Originally posted by coder View Post
        And exactly how did you decide that?
        Wikipedia doesn't go that much into detail.
        I never said it wasn't. I was only ever trying to correct wild claims by mdedetrich that the M1 & M2 need the memory to be in-package, in order for LPDDR5-6400 to be usable, or that putting it there had significant latency benefits.
        Yes that is my view too.

        Comment


        • Originally posted by coder View Post
          LOL. You're quoting stuff from a 3+ year old article, out of context, which you don't even understand. And all in some kind of pathetic attempt to defend a reckless claim you shouldn't have made.
          Ignore stuff which doesn't suit your narrative. The whole premise of the article is the difficulty of dealing with the ever-increasing bandwidth of newer LPDDR standards and the tradeoffs that come with it (i.e. it runs at a lower voltage than standard desktop DDR, which, as you know, causes problems such as the one being discussed).

          Originally posted by coder View Post
          Quit wasting my time and yours. You're out of your depth. I'm happy you like your Mac, but this fanboying is really uncalled for. I'm not sure how you think your reckless and ignorant statements actually help Apple, anyhow. From my perspective, you're just playing into the stereotype of a typical Mac-head blowhard.
          Except it's not just Apple doing this; they just did it first, on a high-end laptop that is now two years old, and other companies are starting to do the same (as seen at CES). This has little to do with Apple, and I think it has more to do with stubborn Phoronix people not being able to admit that a company did something good for once, and really scraping the bottom of the barrel.

          And FYI, I have many machines that are not Apple. The reason I have an M1 is that my company offered it, and when I benchmarked my software on a colleague's machine at the time, it was much faster than any other laptop the company could have purchased that wasn't a tank.

          Originally posted by coder View Post

          You don't know what they mean by "long traces", at what frequencies the problems become severe, or how the "AI applications" they have in mind compare with normal laptop CPU usage. And it speaks volumes that you don't seem to care about such specifics. LPDDR was obviously designed for things like laptops and phones. Honestly, if I didn't know better I'd think I'm talking to a child.
          No shit I don't know; neither do you. The only people who would know exactly at what length traces become problematic are the electrical engineers/designers who work at these companies, which is why I am quoting them, not you.

          So either you are right and engineers are wasting their time designing something like CAMM, whose whole stated design benefit, aside from capacity and speed, is lower latency due to a shorter trace path, i.e.

          https://www.techpowerup.com/303681/c...and-dell-argue

          Below is an example of a CAMM memory module with a patent showing the SO-DIMM (upper left) versus CAMM (lower right) and CAMM's smaller trace path. With smaller tracing, the latency is also going down, so the new standard will bring additional efficiency. Additionally, devices that are based on LPDDR memory could have an upgrade path with the installment of CAMM.
          Or, you know, everyone else whose job it is to work in this space is stupid. You're arguing as if PCB trace length has zero effect on signal integrity, which is just wrong, and then you do stupid shit like comparing the M1 to Alder Lake and saying Alder Lake has even lower latency, while ignoring that Apple tuned the M1 to use as little power as needed for good performance, whereas Alder Lake and other modern Intel CPUs tend to disregard power usage/thermals and just yolo for performance with bursting.

          Also, I have no idea what you mean by "need", but my statement has been that with current SODIMM it would have been too difficult to feasibly optimize/use LPDDR5 to the fullest (which is what Apple wanted to do). The only question is whether Apple should have gone with CAMM instead of what they did, but unless we are in some alternate chronology it was likely too late for that (this reminds me of the argument we had earlier, where you gave Apple grief for not implementing Vulkan even though it didn't exist and wasn't even being designed at the time).

          Maybe instead of constantly shitting on companies all of the time, you can give them the benefit of the doubt when it's kind of clear that, in some cases, their motives are not world domination but actually technical.
          Last edited by mdedetrich; 20 January 2023, 05:48 AM.

          Comment


          • I don't see how CAMM supports your argument: it has longer traces (at least for the last row of ICs) and it has a contact patch, so by your own argument it shouldn't work with modern high-end LPDDR. I even found this variation: camm-so.jpg

            Comment


            • Originally posted by Anux View Post
              I don't see how CAMM supports your argument: it has longer traces (at least for the last row of ICs) and it has a contact patch, so by your own argument it shouldn't work with modern high-end LPDDR. I even found this variation:
              Again, this isn't my argument; it's the CAMM designers'/engineers'. If you check the article I quoted, there is even a section of the CAMM patent discussing it, with a diagram.
              My surface-level understanding is that it's not so much the traces to the last row of ICs where the RAM is placed, but the length of the PCB traces between the CPU and the CAMM module. There are other elements to this as well; one design feature of CAMM (again, trying to remember what I read about this) is that it is vertically thinner than SO-DIMM, which makes it easier to place close to the CPU in a typical thin-chassis laptop.
              Last edited by mdedetrich; 21 January 2023, 07:19 AM.

              Comment


              • Originally posted by mdedetrich View Post
                Ignore stuff which doesn't suit your narrative. The whole premise of the article is the difficulty of dealing with the ever-increasing bandwidth of newer LPDDR standards and the tradeoffs ...
                I'm not the one ignoring stuff. The claim you made in post #15 of this 100+ post thread is:

                "there is a reason why the memory is being soldered. Its because the performance of memory has become so high that you can no longer deliver it with SODIMM anymore."

                You have yet to supply any clear evidence of this. The article you cited not only lacks such specifics, it also pre-dates actual deployment of LPDDR5, and therefore lacks any insights gained by experience working with the memory.

                If you're going to make such a specific and absolute statement, you need to be prepared to back it up. That's what this is all about. You cannot back up specific claims with generalities and platitudes. The way you're being so evasive and diversionary reminds me a lot of oiaohm.

                Originally posted by mdedetrich View Post
                Maybe instead of constantly shitting on companies all of the time
                Please cite exactly where, in this thread, I shit on any company or product.

                Originally posted by mdedetrich View Post
                you can give them the benefit of the doubt when it's kind of clear that, in some cases, their motives are not world domination but actually technical.
                Technical excellence is not achieved through "benefit of the doubt". It's achieved through engineering backed by clear understanding and hard data. If you have neither the theory nor the data, then you only have faith. Faith is a weak foundation upon which to build.
                Last edited by coder; 23 January 2023, 05:40 AM.

                Comment


                • Originally posted by mdedetrich View Post
                  engineers are wasting their time designing something like CAMM, whose whole stated design benefit, aside from capacity and speed, is lower latency due to a shorter trace path, i.e.
                  It's not clear whether the comment about latency is just the article's author making assumptions, or something Dell actually said. No latency benefits are mentioned in either of these writeups (nor would I expect them to be, based on physics & math):
                  ...yet, they do highlight both space-savings and greater memory capacity. Reliability and cooling are also mentioned in the first article.
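On the "physics & math" point, a rough back-of-envelope sketch in Python shows why trace length barely moves latency. The FR-4 dielectric constant, the 50 mm trace-length difference, and the ~100 ns access time below are all illustrative assumptions, not figures from the thread or the cited articles:

```python
# Back-of-envelope check: how much latency can a shorter memory trace save?
# Assumptions (illustrative): FR-4 effective dielectric constant ~4.3,
# and a ~100 ns ballpark for a loaded DRAM access.
C = 299_792_458          # speed of light in vacuum, m/s
ER_EFF = 4.3             # assumed effective dielectric constant of FR-4

def trace_delay_ns(length_mm: float) -> float:
    """One-way propagation delay of a PCB trace, in nanoseconds."""
    velocity = C / ER_EFF ** 0.5                 # signal speed in the board, m/s
    return (length_mm / 1000.0) / velocity * 1e9

# Suppose in-package or CAMM placement shortens each trace by 50 mm.
saved_ns = 2 * trace_delay_ns(50)                # round trip: command out, data back
dram_access_ns = 100                             # assumed total access latency
print(f"round-trip saving: {saved_ns:.2f} ns "
      f"({saved_ns / dram_access_ns:.1%} of a ~{dram_access_ns} ns access)")
# The saving is a fraction of a nanosecond -- well under 1% of a full access,
# which is why shorter traces matter for signal integrity, not latency.
```

Under these assumptions the saving comes out under one nanosecond per access, which is consistent with the claim that shorter traces are about signal integrity rather than latency.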
                  Last edited by coder; 23 January 2023, 06:00 AM.

                  Comment
