
Apple M-chip advantage revealed by I^2xR

Apple has recently revolutionized its devices by replacing Intel's processors with its proprietary M-chips. Overall, reviews are extremely positive: faster processing, lower power consumption, less heat, etc. They seem to confirm the numbers Apple initially declared, which are far from mere single-digit improvements. What is the core factor behind that kind of success? Given the complexity of the topic, a full discussion could easily require a panel of experts debating points like the following:

  1. Apple’s chips are designed and optimized for an extremely well-defined and limited set of configurations, rather than having to fit a broad variety of PC brands and customizations

  2. Apple’s chips are arguably optimized for single-task rather than multitasking workloads

  3. … much more

However, we want to be strategic and frame 80% of the solution in a few words and a few numbers. We will do that through the simple equation for electrical power, P = I^2 x R (detailed below). It will back both the lower power requirement and the higher performance Apple declared for its M1 chip, shown in figure 1 (note that the newer M2/Pro is already on the market):

Figure 1 - Apple’s M1 chip declared performance (2020): x2 performance for 1/4 of the power consumed

The innovation in a nutshell

In general, despite the complexity of a topic, it is often useful to start from the deepest, though not necessarily the most complex, factors determining a phenomenon. In that spirit, if we had to reduce Apple’s innovation to one thing, we would probably focus on its adoption of the SoC configuration. A SoC, or System on a Chip, is a “processor” placed on a single chip housing not only the CPU (central processing unit) but also the RAM, the GPU (the “graphics card”), and a number of other hardware components. All of that is usually spread across separate components underneath the keyboard of a common PC laptop. Figures 2 and 3 below should clarify the distinction between a common configuration and the SoC.

Note: there are of course other configurations (e.g. SiP) and many hybrid solutions in between. Additionally, Apple’s competitors are moving in a similar direction, though the many combinations of brands and customizations they have to satisfy will probably limit how optimal their designs can be.

Figure 2: common PC motherboard [size comparable to the keyboard of a laptop]. Note the processor, separated from the RAM and from the GPU (not present in this configuration; it would be positioned, like the RAM, a few centimeters from the processor) - image source:

Figure 3: Apple’s M-chip bringing together all the components necessary for the computation

The SoC configuration is usually found in mobile devices (e.g. smartphones, tablets) because it allows for lower power consumption, at the price of lower performance. The actual “revolution” Apple brought to the market is leveraging the strength of the SoC (low power consumption) while eliminating its common weakness (lower computational capability). The result seems astonishing, and yet we can explain how it is possible through the simple equation for the power dissipated in an electrical/electronic circuit: P = I^2 x R (which follows from Ohm’s law, V = I x R, combined with P = V x I). In that equation, [I] and [R] are respectively the current and the resistance of the circuit, while [P] is the power required to move the current [I] across the circuit, which exerts the resistance [R] on the current itself. Note that this equation will justify not only the lower power consumption but also the higher performance. Let us see how.
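As a quick sanity check of the power equation, the following minimal sketch (with arbitrary illustrative values for current and resistance) verifies numerically that P = V x I and P = I^2 x R give the same result once Ohm's law is applied:

```python
# Numeric check that P = I^2 * R follows from Ohm's law (V = I * R)
# combined with P = V * I. The values below are arbitrary illustrations.
I = 2.0  # current in amperes
R = 5.0  # resistance in ohms

V = I * R                    # Ohm's law
P_via_voltage = V * I        # P = V * I
P_via_resistance = I**2 * R  # P = I^2 * R

print(P_via_voltage, P_via_resistance)  # both 20.0 W
```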

Starting from figure 1, the picture declares a x2 improvement in performance for 1/4 of the power consumed. That may seem implausible. Even more implausible may appear the fact that we want to justify that result through the simple P = I^2 x R. Indeed, we can push the argument further: what that simple equation will show is that the huge power saving comes from Apple merely bringing together components usually spread across a PC laptop, therefore reducing the length of the wiring. That, in turn, frees up resources that can be used to increase performance.

1/2 The reduction in consumed power - math

For our brief final calculation showing what is anticipated above, we can picture an initial round-trip path of about 10 cm for a computational signal within a common PC laptop; the reader can look at Figure 2 and picture a signal travelling from the processor to the RAM, to a possible GPU, and back to the processor. The same signal within an M-chip has to cover only millimeters, say 10 mm (1 cm). That is a possible x10 improvement (to be fair, M-chips are bigger than the stand-alone processors of common PC configurations). Let us now go back to our equation:

  • {1}: P = I^2 x R (dissipated power equal to the square of the current times the resistance of the wire)

The resistance [R] of the electric wire is given by:

  • {2}: R = rho x L / A (the resistivity [rho] of the wire’s material, times the length [L] of the wire, divided by the cross-sectional area [A] of the wire)

Since Apple is essentially reducing the length of the circuit [L] by the assumed x10 factor, it is also reducing the resistance [R] by the same amount, because equation {2} is linear in [L]. Plugging this improvement into equation {1}, we obtain a final x10 reduction in the power consumed.
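The calculation above can be sketched numerically. The path lengths (10 cm vs 1 cm) are the assumptions from the text; the copper resistivity and the cross-sectional area are illustrative and cancel out in the ratio:

```python
# Back-of-the-envelope estimate of the power reduction from shorter wiring.
# Assumed numbers from the text: ~10 cm round trip on a conventional
# motherboard vs ~1 cm inside the SoC. Material and cross-section are
# illustrative; only the length ratio matters for the final result.
RHO = 1.68e-8  # resistivity of copper, ohm*m
AREA = 1e-9    # wire cross-sectional area, m^2
I = 1.0        # same current in both scenarios, arbitrary units

def wire_resistance(length_m):
    """Equation {2}: R = rho * L / A for a uniform wire."""
    return RHO * length_m / AREA

def dissipated_power(current, resistance):
    """Equation {1}: P = I^2 * R."""
    return current**2 * resistance

P_laptop = dissipated_power(I, wire_resistance(0.10))  # 10 cm path
P_soc = dissipated_power(I, wire_resistance(0.01))     # 1 cm path

print(P_laptop / P_soc)  # ~10, the x10 power reduction from the text
```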

So, in a super-simplified discussion, a laptop deploying the M-chip would deliver the same performance as a common laptop for only 1/10 of the energy consumed. Now, considering such a big saving in energy, we can think about sacrificing part of it to increase the overall performance of the machine. In particular, we will see how much we can raise performance while still reducing consumed power by “only” 1/4, as in Apple’s case.

2/2 The increase in performance - math

How does P = I^2 x R relate to performance? It is now critical to highlight that in modern CPUs the following reduction may apply: for a given processor, “performance” can be approximated by the clock speed (number of operations per unit of time), which in turn can be approximated by the current [I].

For the interested reader, a slightly more comprehensive relationship among performance, current, and clock speed can be found in a brief Wikipedia discussion. It highlights how, when it comes to processors, semiconductors, and MOSFETs, the current [I] should be thought of as the result of overcoming a threshold voltage through the application of an external one. Additionally, Intel documents directly the relationship between clock speed and performance.

To find the higher current [I_new] that, coupled with our reduced resistance [R/10], would consume 1/4 of the original power, we can use P = I^2 x R as follows:

  • (I_new)^2 x (R_original / 10) = 1/4 x (I_original)^2 x R_original

That gives us (I_new) = sqrt[ 10/4 x (I_original)^2 ], finally resulting in I_new ≈ 1.58 x (I_original)
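The same algebra can be checked in a couple of lines. R_original and I_original cancel out, leaving only the ratio I_new / I_original:

```python
import math

# Solve (I_new)^2 * (R/10) = (1/4) * (I_old)^2 * R for the ratio I_new/I_old.
# Both R and I_old cancel, leaving I_new / I_old = sqrt(10 / 4).
resistance_reduction = 10  # assumed x10 shorter wiring
power_fraction = 1 / 4     # Apple's declared power budget

current_ratio = math.sqrt(resistance_reduction * power_fraction)
print(round(current_ratio, 2))  # 1.58
```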

So, we can use a new current about 1.6 times higher than the original one and still consume only 1/4 of the original power. As said above, we could translate that x1.6 higher current into a similar increase of the clock speed, and ultimately of the overall performance of the machine.

Compared to Apple’s declared x2 improvement in performance at 1/4 of the power consumption, such a simplified argument already returns the correct order of magnitude and could represent a good starting point. Of course, much more than our simple discussion is involved here; we can mention three major points that could be discussed with specific experts:

  1. Apple’s chip seems to solve computations in a new proprietary way (e.g. reordering operations), similarly to how competitors handle multi-threading

  2. A SoC not only allows for shorter connections but also for direct, internal ones (internal to the chip, which now has everything it needs for the computation). That is likely to change the game in terms of data buses/channeling

  3. We ignored any possible non-linearity involved in scaling up current and performance; much of that is likely related to the fundamental nature of semiconductors, which release current, and therefore signals, only once the applied voltage surpasses an “energy hill”

All the above could be framed through the following final CRITICAL comment: we said that higher performance results from a higher clock speed; however, preliminary information reports that Apple’s M1 chip has a common 3.2 GHz computational frequency (i.e. clock speed). For comparison, Intel’s i9 11th-generation processor has a base frequency of 2.5 GHz and a boost frequency of 4.9 GHz. Even though Intel’s boost frequency can be held only for a brief amount of time, Apple’s clock speed does not seem to back our argument; we would expect Apple to run at almost double Intel’s frequency. A different way of thinking about this is arguably the following: the M-chip, being so much less power-demanding and having everything it needs on a single chip, rather than reaching a higher peak frequency, can hold high computational frequencies for longer on average.

To conclude, it is worth remarking once again that everything discussed here is enabled by the big increase in available energy resources, and ultimately by the new hardware configuration. In general, it is a good exercise to try to capture such complex phenomena through simple engineering. Then, we should stress-test our argument, refine assumptions, add factors previously ignored, and keep improving it.
