
Apple: Why The iPhone 16 Could Be A Breakthrough Platform For AI On The Edge

Apple's next-generation devices, and the iPhone in particular, could be breakthrough platforms for AI on the edge. Apple's lead in chip technology could put it at the forefront of a potential revolution in AI hardware.

This image was created using OpenAI’s GPT-4.

This is the author's opinion only, not financial advice, and is intended for entertainment purposes only. The author holds a beneficial long position in Apple Inc.

Barclays analysts downgraded Apple Inc. on January 2, saying they expect the next iPhone generation (the iPhone 16) to be as "lackluster" as the last. Indeed, the last few iPhone generations have lacked a major leap forward, and technological stagnation could ultimately lead to continued flat or even declining sales over the long term. Since the iPhone accounts for about half of Apple's revenue, this would be a major problem for the company. Just as no one could have imagined 20 years ago how Apple's iPhone would revolutionize the entire concept of a mobile phone, it is now hard to imagine how the almost boringly perfect iPhone could be revolutionized again. In any case, the annual camera update no longer impresses anyone. With the iPhone's technological stagnation, the qualitative gap to competing products keeps shrinking (if it still exists at all), and one wonders what should motivate customers to buy a high-priced iPhone in the first place. But what if Apple is already preparing the next quantum leap?

I think it's possible that the next generation of iPhones will turn everything upside down again. To see why, we need a quick detour into computer science. Modern computers, from PCs to iPhones, follow the von Neumann architecture, first proposed in 1945 by the Hungarian-American mathematician John von Neumann. In this architecture, a computer consists of separate parts: an arithmetic-logic unit (the processor), a control unit, a memory unit, an input/output unit, and a bus system that handles communication between these components. Since the purpose of computers has long been to carry out operations that are difficult for us humans (such as calculating the square root of 1,928,902), they have been optimized for such rather complex operations for over 70 years. Only with the advent of AI are computers learning to perform tasks that are easy for us humans (such as distinguishing pictures of cucumbers from bananas). AI models such as the new Large Language Models (LLMs) are fundamentally based on matrix operations. If we break these matrix computations down to their elementary operations, we are left with an enormous number of multiplications and additions that are not complex per se. Modern CPUs (central processing units) and memory (RAM) reach their limits with this sheer volume of data and simultaneous operations. The new LLMs also require much more memory than is available on today's edge devices, which is why today's LLMs run on massive servers and are accessed by end users via the cloud.
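To make the "nothing but multiplications and additions" point concrete, here is a minimal sketch in plain Python with made-up numbers (purely illustrative, not how any real framework is written): a single layer of a neural network boils down to a matrix-vector product, and that product is just a long run of elementary multiply-and-add steps.

```python
def matvec(weights, x):
    """Multiply a weight matrix (given as a list of rows) by an input vector x."""
    y = []
    for row in weights:                 # one output value per matrix row
        acc = 0.0
        for w, xi in zip(row, x):       # elementary multiplication ...
            acc += w * xi               # ... followed by an elementary addition
        y.append(acc)
    return y

# A toy 3x3 "layer": 9 multiplications and 6 additions per output vector.
# Real LLMs repeat exactly this pattern with billions of weights.
W = [[0.1, 0.2, 0.3],
     [0.4, 0.5, 0.6],
     [0.7, 0.8, 0.9]]
print(matvec(W, [1.0, 2.0, 3.0]))       # roughly [1.4, 3.2, 5.0]
```

None of these individual steps is hard; the challenge is that an LLM performs them by the billions, which is exactly where conventional CPUs and RAM hit their limits.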

Running LLMs remains very power-intensive: OpenAI CEO Sam Altman even called the server costs "eye-watering" in December 2022, shortly after the release of ChatGPT. A possible (but still distant) solution might be to abandon the von Neumann architecture and perform the necessary computations directly in memory using analog chips for AI: in an analog chip, the matrix operations required for AI could be represented in electronic circuits by exploiting Kirchhoff's circuit laws. In August 2023, IBM showed that AI for speech recognition could run on an analog chip 14 times more power-efficiently than on a digital computer. The problem with analog chips, however, is that the chip has to be designed specifically for the task it is meant to perform, and you generally can't run completely different models on the same analog chip architecture. So it may be combinations of digital and analog chips that enable energy-efficient AI in the future.
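To illustrate the principle (not any specific product), here is a small Python sketch of the physics behind such an analog "crossbar" circuit: Ohm's law produces a current of V × G at every crosspoint, and Kirchhoff's current law sums those currents along each output wire, so the circuit itself effectively performs the matrix-vector multiplication. All values below are hypothetical, and real analog hardware additionally has to cope with noise, drift, and limited precision.

```python
# Hypothetical resistive crossbar: rows are driven with input voltages,
# crosspoints hold conductances (the "weights"), columns collect currents.
voltages = [0.2, 0.5, 0.1]          # input vector, encoded as row voltages (V)
conductances = [                    # weight matrix, encoded as conductances (S)
    [1e-6, 2e-6, 3e-6],
    [4e-6, 5e-6, 6e-6],
    [7e-6, 8e-6, 9e-6],
]

num_cols = len(conductances[0])
currents = [0.0] * num_cols
for i, v in enumerate(voltages):
    for j in range(num_cols):
        # Ohm's law per cell (I = V * G), Kirchhoff's current law per column (sum)
        currents[j] += v * conductances[i][j]

print(currents)   # the column currents are the matrix-vector product, "computed" by the circuit
```

In a real analog chip this summation happens in the wires themselves rather than in software, which is where the power savings come from.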

But if LLMs could run on future chip technology with orders of magnitude less power, why run them on servers in the cloud at all? Running them on the end device - on the edge - would make a lot of sense for privacy reasons alone. Apple still makes its money by selling end devices. So why should Apple watch its end devices become mere middlemen for its competitors' cloud-based AI solutions? As far as I know, there are no signs that Apple will suddenly enter the cloud AI race against OpenAI's ChatGPT or Google Bard. I think Apple is planning something else, and the first concrete signs are already here: starting with the iPhone, Apple's products could become platforms for running AI on the edge.

In December 2023, Apple engineers published a method for running LLMs that are larger than a device's DRAM by storing the model parameters in flash memory and loading them on demand, bypassing the problem of limited DRAM on the edge. Flash memory offers about an order of magnitude more storage capacity than DRAM, making it possible to run even large LLMs on small end devices. Also in December 2023, Apple released its own open-source AI framework, MLX, which is optimized for Apple's silicon chip architecture; app developers can already use it to write their own machine learning applications on Apple silicon. Earlier, in October 2023, Apple engineers unveiled their own multimodal LLM, Ferret, which has since been released as open source as well. Every new powerful open-source LLM ultimately attacks the market position of existing commercial LLMs.

With the M1-M3 generations, Apple has developed cutting-edge processor technology and has been a leading chip developer for years. For now, AI appears to be primarily a software revolution. Given the similar performance of OpenAI's GPT-3.5, xAI's Grok, the open-source model Mixtral, and Google's Gemini Pro, as well as the reported similar performance of OpenAI's GPT-4 and Google's Gemini Ultra, there is already a suspicion that we are not so far from the maximum possible performance of the new LLMs - some call it Peak AI. Perhaps the next quantum leap in AI will be preceded by a revolution in computer chips - in which case Apple, as a leading chip designer, would suddenly be at the forefront. And when it comes to implementing and applying such AI models, let's not forget that Apple's devices are already packed with all kinds of sensors that generate a constant stream of data for locally executed AI.
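Since MLX is already publicly available, a developer with an Apple-silicon Mac can try the elementary building block of such on-device models today. The snippet below is only a minimal sketch (it assumes MLX has been installed via pip and that calls like mx.random.normal and mx.eval match the current MLX release): it runs a single large matrix-vector multiplication, the operation an on-device LLM would repeat constantly, directly in Apple's unified memory.

```python
# Minimal MLX sketch (assumes an Apple-silicon Mac with `pip install mlx`).
# MLX arrays live in unified memory shared by CPU and GPU, and computation
# is lazy until mx.eval() forces it to run.
import mlx.core as mx

W = mx.random.normal((4096, 4096))   # stand-in for one layer's weight matrix
x = mx.random.normal((4096,))        # stand-in for an input/activation vector

y = W @ x                            # the core LLM operation: a matrix-vector multiply
mx.eval(y)                           # trigger the lazy computation
print(y.shape)                       # (4096,)
```

Trivial as the snippet is, the point is that both the framework and the silicon it runs on are Apple's own.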

So do we really want to believe that the world's most valuable company is sleeping through the AI revolution and thinks it can keep satisfying its customers with annual "lackluster" updates to high-priced devices? I doubt that Apple is simply relying on its product ecosystem to retain its customers. A new generation of iPhones and other devices capable of running AI efficiently on the edge would be the foundation for an ecosystem of AI apps from Apple and third parties that only owners of Apple devices would have access to - a Buffettian economic moat par excellence. Or, to quote the 2011 iPhone 4 advertising slogan: "If you don't have an iPhone, well, you don't have an iPhone."
