Are OpenAI and Jony Ive headed for an iPhone moment?

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
OpenAI will acquire the AI device startup co-founded by Apple veteran Jony Ive and Sam Altman, called “io,” for nearly $6.5 billion, Bloomberg reported Wednesday. The deal will almost certainly put OpenAI in the consumer hardware business, and it suggests the company will soon release a dedicated personal AI device.
Ive is a pedigreed design guru with a track record of creating iconic tech products like the iPhone and the Apple Watch. Ive, a close friend of Steve Jobs, left Apple in 2019. “I have a growing sense that everything I’ve learned over the last 30 years has led me to this place and to this moment,” Ive told Bloomberg. Coming from the world’s best device designer, that’s saying a lot.
OpenAI brings a lot to the table, too. After setting off the AI boom with the late 2022 release of ChatGPT, the startup has quickly built its chatbot into a mass market AI app and a household name. More than 400 million people around the world now use ChatGPT every week, OpenAI COO Brad Lightcap recently told CNBC. (For comparison, longtime Apple analyst Gene Munster estimates that there are 1.7 billion Apple device users and 2.35 billion active devices worldwide.)
“OpenAI has all of the core leadership and the best-in-class models, as well as some of the most widely used interfaces,” says Gartner analyst Chirag Dekate. “ChatGPT has defined and set the bar for what early experiences of generative AI ought to look like.”
Ive helped Apple set such standards for smartphones and smartwatches. Apple’s traditional power was its mastery of the user interface—the technology that mediates between a human user and their digital tools and content. But iOS devices evolved from roots in traditional computing. AI models operate in a fundamentally different way, and may require a fundamentally different UX. (Apple, which has struggled to empower its devices with new generative AI capabilities, saw its stock price drop as much as 2.3% on news of the io deal Wednesday.)
OpenAI will now get an opportunity to define the ideal software-hardware fusion for the new computing paradigm it helped usher in—after high-profile flops like Humane’s Ai Pin. “AI is such a big leap forward in terms of what people can do that it needs a new kind of computing form factor to get the maximum potential out of it,” OpenAI CEO Sam Altman told Bloomberg.
Ive and Altman’s design firm has so far given the world no clue of its vision for a personal AI device. But whatever it builds—a pendant, a bracelet, or some kind of glasses—will very likely provide an influential blueprint for how a vehicle for personal AI should look in the future.
“Jony Ive and io joining OpenAI is so important: It calls out the fact that in order to truly diffuse generative AI across society, we need to build new hardware experiences,” Dekate says. “The QWERTY keyboard may not be the best way to experience generative AI.”
Google got its AI mojo back
Google looks like a company that’s got its mojo back. Yes, yes, I know . . . the company still faces some very big problems—antitrust actions, changes to its core search business, etc.—but it also seems to have fully woken up to the AI revolution it helped start, and to be navigating it with level-headedness and even a sense of fun.
The company’s two-hour keynote at its developer event in Mountain View Tuesday was all about AI: a showcase for a comprehensive strategy of applying its Gemini AI models broadly across the product portfolio—to everything from search tools to coding agents to video generators to smart glasses.
Remember that Google entered the AI race only reluctantly in 2023, in the wake of the ChatGPT launch. Even though it was Google researchers who figured out how to architect and train the type of large language models that power ChatGPT, the company was, for legal and safety reasons, deeply conflicted about exposing such models to the public. But after OpenAI, Microsoft, and others hurried to apply and commercialize LLMs, Google found itself on the back foot, punished by Wall Street for not leading the race: a lackluster February 2023 demo of the Bard chatbot caused an 8% drop in Alphabet stock.
Google’s biggest announcement Tuesday may mark the moment when it retook the lead—with a new AI product that could remake its core search business. “AI Mode,” a chatbot-format AI search tool that will compete directly against ChatGPT and Perplexity, went from being an experimental product to being available to all users.
It’s powered by Google’s best model, Gemini 2.5 Pro. In AI Mode, rather than entering a search term or phrase, users can describe in detail what they’re looking for (often a solution to a problem rather than just some facts) and then work with Gemini’s reasoning abilities to get to a fully responsive package of custom information.
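AI Mode itself is a consumer product, not an API, but you can get a rough feel for the kind of problem-style querying it enables by calling Gemini 2.5 Pro directly. Here’s a minimal sketch, assuming the google-genai Python SDK and the `gemini-2.5-pro` model ID (the exact ID Google exposes may differ):

```python
# Sketch: posing a problem-style query to Gemini 2.5 Pro through the Gemini API.
# Assumes the google-genai SDK (pip install google-genai) and a GEMINI_API_KEY
# environment variable; the model ID is an assumption and may differ.
from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=(
        "I commute 12 miles each way over two steep hills and need a "
        "commuter e-bike under $2,000 that can handle rain. What should "
        "I prioritize, and what trade-offs matter most?"
    ),
)
print(response.text)
```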
Features from the company’s Project Astra work will increasingly let the AI gather information about objects (or problems) it sees in the real world around the user, through a phone camera or smart glasses. Work from Project Mariner will give the AI growing power to operate the user’s device and call on various web tools (mapping, live weather, etc.) on the user’s behalf. This summer, AI Mode will be able to book appointments, carry out deep research projects and data visualization, and help people shop for clothing and other products.
The Gemini Live mode in the Gemini app leverages some of the same technologies. Google’s Demis Hassabis described the experimental work as a step toward a universal AI assistant powered by a “world model”: one that understands the context in which the user is moving, grasps a certain amount about how the physics of the world works, and can plan and take action on behalf of the user.
In the demo video during I/O, a young man talks to the assistant while fixing a bicycle. Through the phone camera, Gemini Live identifies problems and replacement parts, places a phone call to order a part, looks up schematics on the web, and makes suggestions. It can also control a user’s device, analyze video, share its screen, and remember everything it sees or discusses in a session.
Google has always been a medium between humans and all the data and intelligence that resides on the public web. No other company has as much experience accessing, parsing, organizing, and packaging all that information. Google believes that AI can make that medium a lot smarter, more proactive, and more personalized. And the power of Gemini extends well beyond search, to best-in-class video and audio generation tools (Veo 3, Flow), coding assistants (Jules), and wearables (Android XR glasses).
Anthropic debuts new generation of Claude models
Anthropic on Thursday debuted the fourth generation of its AI models with Claude Opus 4 and Claude Sonnet 4, which the company says push the state of the art for coding, advanced reasoning, and AI agents. Both can use tools, such as web search.
The company says Claude Opus 4 is its best model and the best coding model in the world. The model can work for several hours straight on complex, long-running tasks that involve thousands of steps, Anthropic says, and significantly expands what AI agents can accomplish.
Claude Sonnet 4 replaces Anthropic’s previous best effort, Claude 3.7 Sonnet. The company says the new model does everything 3.7 did, but pushes the envelope further on coding, reasoning, and precise instruction following.
Claude Opus 4 and Sonnet 4 are hybrid models offering two modes: near-instant responses and extended thinking for deeper reasoning. Both models are available on the Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI. Claude Sonnet 4 is also available in the free version of the Claude chatbot.
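For developers, that mode switch is exposed as a request parameter. Here’s a minimal sketch of invoking extended thinking through the Anthropic Python SDK; the model ID and token budgets are assumptions and may differ from what Anthropic actually ships:

```python
# Sketch: requesting an extended-thinking response from Claude Sonnet 4.
# Assumes the anthropic SDK (pip install anthropic) and an ANTHROPIC_API_KEY
# environment variable; the model ID below is an assumption.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    # Omit the `thinking` parameter to get a near-instant response instead.
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[
        {"role": "user", "content": "Plan a step-by-step migration from REST to gRPC."}
    ],
)

# With thinking enabled, the reply interleaves summarized thinking blocks
# with ordinary text blocks; print just the text.
for block in response.content:
    if block.type == "text":
        print(block.text)
```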
The new models use chain-of-thought processing, as Claude 3.7 Sonnet did. But instead of displaying the model’s raw thought process to users, Anthropic now shows summaries of the models’ reasoning steps. Anthropic says it made this change “to preserve visibility for users while better securing our models.” Anthropic’s competitors could potentially use the raw chain-of-thought output to train their own models.
While Anthropic’s models are used to power a number of commercial coding assistants, the company has its own assistant, Claude Code, which it released as a research preview earlier this year. The assistant is now generally available.
Anthropic says it’ll start to release model updates more frequently in an effort to “continuously refine and enhance” its models.
More AI coverage from Fast Company:
- Forget return-to-office. Hybrid now means human plus AI
- What it’s like to wear Google’s Gemini-powered AI glasses
- How Google is rethinking search in an AI-filled world
- Cartwheel uses AI to make 3D animation 100 times faster for creators and studios
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.