
Apple pulls its last 21.5-inch iMac from its lineup – Computers

This memory upgrade is very challenging, and I’m curious how Apple handles it.

So far, all products use low-power memory: LPDDR4X for the M1 and LPDDR5 for the M1 Pro and Max. The M1, M1 Pro, and M1 Max have 2, 4, and 8 64-bit memory channels, respectively. They currently use 4 GB and 8 GB packages, which gives 8 or 16 GB on the M1, 16 or 32 GB on the M1 Pro, and 32 or 64 GB on the M1 Max.
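The arithmetic above can be sketched quickly. A minimal back-of-the-envelope script, assuming one memory package per 64-bit channel:

```python
# 64-bit memory channels per chip, as described above.
channels_per_chip = {"M1": 2, "M1 Pro": 4, "M1 Max": 8}

# Currently available package sizes, in GB.
package_sizes = (4, 8)

for name, channels in channels_per_chip.items():
    # One package per channel gives the two capacity options per chip.
    options = [channels * size for size in package_sizes]
    print(f"{name}: {options[0]} or {options[1]} GB")
```

This reproduces the 8/16, 16/32, and 32/64 GB configurations listed above.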

Before we look at other types of memory, there are a few more possibilities with LPDDR5 itself. If we look at Micron's catalog, for example, there are already two parts with a density of 96 Gbit (i.e. 12 GB) in production, as well as parts with a density of 128 Gbit (i.e. 16 GB).

With those, Apple could increase the memory capacity of the existing chips by 1.5x to 2x, so the M1 Max would support 96 or 128 GB of RAM. The current 27-inch iMac can be configured with up to 128 GB of RAM, so that might be enough to replace it completely.
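The same sketch with the denser packages, assuming the channel counts stay unchanged:

```python
# 64-bit memory channels per chip, unchanged from the current lineup.
channels = {"M1": 2, "M1 Pro": 4, "M1 Max": 8}

# 12 GB (96 Gbit) and 16 GB (128 Gbit) LPDDR5 packages from Micron's
# catalog, i.e. 1.5x and 2x the current largest 8 GB package.
for name, n in channels.items():
    print(f"{name}: {n * 12} or {n * 16} GB")
```

For the M1 Max this works out to 96 or 128 GB, matching the 27-inch iMac's maximum.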

It also seems that, at least for the Mac Pro, multiple chips will be used in a chiplet(-like) configuration. That may also apply to the highest configuration of the iMac (Pro). Until earlier this year, the iMac Pro was available with a maximum of 256 GB of memory, which would be achievable with two M1 Max chips.

With four M1 Max chips, a Mac Pro could then accommodate up to 512 GB of LPDDR5 memory. That would be enough to replace most Mac Pro configurations. In terms of power and heat dissipation it would certainly fit in a much smaller case, opening the door to a Mac Pro mini (or Mac mini Pro?).
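The chiplet scaling is linear: each extra M1 Max die brings its own 8 channels. A quick sketch, assuming the hypothetical 16 GB packages discussed above:

```python
# Maximum memory per M1 Max die: 8 channels x 16 GB packages.
per_die_gb = 8 * 16

# One die (MacBook Pro / iMac), two dies (iMac Pro class),
# four dies (Mac Pro class).
for dies in (1, 2, 4):
    print(f"{dies}x M1 Max: up to {dies * per_die_gb} GB")
```

Two dies land on the iMac Pro's old 256 GB ceiling, four on the 512 GB figure above.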

More memory is needed to replace the Mac Pro completely. While DDR5 chips would be an obvious choice, and 2 or 4 TB could easily be offered that way, I see Apple moving toward another option: HBM2e. We already know this stacked memory from accelerators like the Nvidia A100, and it offers a huge amount of bandwidth in a very small footprint. The 80 GB version of the A100 achieves between 1935 GB/s and 2039 GB/s of bandwidth with five HBM2e stacks. That is roughly five times the 400 GB/s of the M1 Max.
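As a sanity check on that comparison (the M1 Max's 400 GB/s figure is Apple's published number; the A100 figures are the PCIe and SXM variants):

```python
# A100 80 GB memory bandwidth, GB/s: PCIe card vs SXM module.
a100_bw = {"PCIe": 1935, "SXM": 2039}

m1_max_bw = 400  # GB/s, 8 LPDDR5 channels

for variant, bw in a100_bw.items():
    print(f"A100 {variant} vs M1 Max: {bw / m1_max_bw:.1f}x")
```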

Now I don’t see Apple putting 96 stacks of 16 GB HBM2e on a package any time soon to offer 1.5 TB of memory, but 16 to 32 stacks seem plausible as a high-end option: 256 or 512 GB of memory with a bandwidth between 6.4 and 12.8 TB/s. That could then be supplemented with “regular” DDR5 memory.
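Those figures follow from per-stack numbers. A sketch, assuming 16 GB stacks at roughly 400 GB/s each (A100-class HBM2e speeds):

```python
# Assumed per-stack figures for HBM2e: capacity in GB, bandwidth in TB/s.
stack_gb = 16
stack_bw = 0.4

# 16 and 32 stacks are the options discussed; 96 is the 1.5 TB extreme.
for stacks in (16, 32, 96):
    print(f"{stacks} stacks: {stacks * stack_gb} GB, "
          f"{stacks * stack_bw:.1f} TB/s")
```

16 stacks give 256 GB at 6.4 TB/s, 32 give 512 GB at 12.8 TB/s, and 96 give the 1.5 TB mentioned above.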

HBM3 is also an option in the future, with stacks of up to 24 GB and even more bandwidth per pin, up to 6.4 Gbit/s.
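Per-pin rates translate to per-stack bandwidth via the interface width. A sketch, assuming HBM3 keeps the 1024-bit interface of earlier HBM generations:

```python
# HBM3: 1024-bit interface at up to 6.4 Gbit/s per pin.
pins = 1024
gbit_per_pin = 6.4

# Convert aggregate Gbit/s to GB/s (8 bits per byte).
gb_per_s = pins * gbit_per_pin / 8
print(f"~{gb_per_s:.0f} GB/s per HBM3 stack")
```

That is roughly 819 GB/s per stack, versus about 400 GB/s for an A100-class HBM2e stack.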

Of course, those sound like ridiculous numbers, but keep in mind that this memory is shared by the CPU and all video, graphics, and AI accelerators. How much of the available memory, bandwidth, and latency each core gets depends heavily on which architecture Apple chooses. In any case, Apple appears to be investing heavily in memory shared by all compute cores in the Apple Silicon chips we’ve seen so far, so it now also has the freedom to tailor this entire architecture to its own workloads, without being limited by existing protocols or interfaces.
