Yoopya with The Conversation
As is its tradition at this time of year, Apple announced a new line of iPhones last week. The promised centrepiece that would make us want to buy these new devices was AI – or Apple Intelligence, as the company branded it. Yet the reaction from the collective world of consumer technology has been muted.
The lack of enthusiasm from consumers was so evident it immediately wiped over a hundred billion dollars off Apple’s market value. Even the Wired Gadget Lab podcast, enthusiasts of all new things tech, found nothing in the new capabilities that would make them want to upgrade to the iPhone 16.
The only thing that did seem to generate some excitement was not the AI features, but the addition of a new camera shutter button on the side of the phone. If a button is a better selling point than the most hyped technology of the past couple of years, something is clearly amiss.
The reason is that AI has now passed what tech blog The Media Copilot called its “wonderment phase”. Two years ago, we were amazed that ChatGPT, DALL-E and other generative AI systems were able to create coherent writing and realistic images from just a few words in a text prompt. But now, AI needs to show that it can actually be productive. Since their introduction, the models driving these experiences have become much more powerful – and exponentially more expensive.
Nevertheless, Google, Nvidia, Microsoft and OpenAI recently met at the White House to discuss AI infrastructure, suggesting these companies are doubling down on the technology.
According to Forbes, the industry is US$500 billion (£375 billion) short of making back the massive investments in AI hardware and software, and the US$100 billion in AI revenue projected to be made in 2024 is not even close to this figure. But Apple still has to enthusiastically push AI features into its products for the same reason that Google, Samsung and Microsoft are doing it – to give consumers a reason to buy a new device.
Tough sell?
Before AI, the industry was trying to create hype around virtual reality and the Metaverse, an effort that probably peaked with the introduction of the Apple Vision Pro headset in 2023 (a product that, incidentally, was barely mentioned in last week’s announcement).
After the Metaverse failed to take off, tech companies needed something else to drive sales, and AI has become the new shiny thing. But it remains to be seen whether consumers will take to the AI-based features included in phones, such as photo-editing and writing assistants. This is not to say that current AI is not useful. AI technologies are used in billion-dollar industry applications, in everything from online advertising to healthcare and energy optimisation.
Generative AI has also become a useful tool for professionals in many fields. According to a survey, 97% of software developers have used AI tools to support their work. Many journalists, visual artists, musicians and filmmakers have adopted AI tools to create content more quickly and more efficiently.
Yet most of us are not actually prepared to pay for a service that draws funny cartoon cats or summarises text – especially since attempts at AI-supported search have been shown to be prone to errors. Apple’s approach to deploying artificial intelligence seems to be mostly a mishmash of existing functions, many of which are already built into popular third-party apps.
Apple’s AI can help you create a custom emoji, transcribe a phone call, edit a photo, or write an email – neat, but no longer groundbreaking stuff. There is also something called Reduce mode that is supposed to disturb you less and only let through important notifications, but it’s anyone’s guess how well that will work in reality.
The one forward-looking feature is called Visual Intelligence. It allows you to aim the camera at something in your surroundings and get information without explicitly doing a search. For instance, you might photograph a restaurant sign, and the phone will show you the menu and reviews – and perhaps even help you book a table.
Although this is very reminiscent of Google Lens on Google’s Pixel phones (or ChatGPT’s multimodal capabilities), it does point towards a future use of AI that is more real-time, interactive, and situated in real-world environments.
In time, Apple Intelligence and the Reduce mode could evolve into so-called “context-aware computing”, which has been envisioned and demonstrated in research projects since the 1990s, but for the most part has not yet become robust enough to be a real product category.
The kicker to all this is that Apple Intelligence is not yet really available for anyone to try, as the new iPhones do not yet include it. Perhaps it will turn out to be more valuable than the limited information so far suggests. But Apple used to be known for only releasing a product when it was well and truly ready, meaning that the use case was crystal clear and the user experience had been honed to perfection.
This is what made the iPod and iPhone so much more attractive than all the MP3 players and smartphones released before them. It is anyone’s guess whether Apple’s approach to AI will be able to claw back some of the lost stock value, not to mention the hundreds of billions invested by Apple and the rest of the tech industry. After all, AI still has amazing potential, but it may be time to slow down a bit and take a moment to consider where it will actually be the most useful.
Author:
Lars Erik Holmquist | Professor of Design and Innovation, Nottingham Trent University