Apple shipped iOS 19.4 on Monday morning, and with it the update that the company's AI critics have been demanding since Apple Intelligence launched 16 months ago: an actually good version of Siri. The update — branded Apple Intelligence 2.0 internally, though Apple has not used that numbering publicly — introduces what the company is calling Siri Reasoning Mode, a chain-of-thought processing system that lets Siri work through multi-step requests rather than pattern-matching to a single command.
The original Apple Intelligence Siri was fine at simple tasks. Set a timer. Send a message. Tell me the weather. Add this to my calendar. Anything beyond a single-intent command reliably produced either a wrong answer or a punt to a web search. The gap between what Apple promised at WWDC 2024 and what shipped was significant enough that multiple reviewers created taxonomies of failure modes. Apple Intelligence's rollout became the canonical example of a tech giant overpromising on AI features to compete with Microsoft's Copilot integration and Google's Gemini rollout — and then delivering something measurably behind both.
Monday's update changes the conversation. Testing across 40 distinct multi-step requests — book a restaurant near my office for next Friday, find the email from my landlord with the lease renewal date and add it to my calendar, summarize the last three weeks of messages from my sister — produced accurate results in 34 of 40 cases. That is an 85% completion rate on complex multi-intent requests. Six months ago the same test set produced accurate results in 11 of 40. The improvement is not incremental.
The on-device image generation is the most technically interesting part of the update. Apple's previous image generation capability routed requests to Apple's Private Cloud Compute infrastructure — servers running Apple Silicon chips — which preserved privacy but introduced latency. Monday's update includes a compressed image generation model that runs entirely on-device on iPhone 16 Pro and iPhone 17 series hardware. Generation times average 4.2 seconds for a 512×512 image, compared to 11.8 seconds for cloud-routed generation. The quality range is narrower — the on-device model handles photorealistic and illustrated styles well but struggles with abstract artistic styles — yet for the 90% of use cases that don't require highly stylized output, the speed difference is meaningful.
The counterintuitive element in Apple's AI strategy is how much the company has bet on privacy as a product feature rather than just a compliance requirement. Every major Apple Intelligence feature involves explicit decisions about where computation happens — on device, in Private Cloud Compute, or, when the user consents, through a third-party AI provider like OpenAI. That architecture is more complex to build and maintain than simply routing everything to cloud servers. It also makes Apple's system meaningfully harder to match for competitors who don't control their own silicon. Google Gemini on Android runs on Google's cloud by default. Microsoft Copilot runs on Azure. Apple's hybrid model is genuinely differentiated.
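Apple has not published its routing policy, but the three tiers described above imply a decision order that can be sketched roughly as follows. This is a hypothetical illustration — every type and function name here is invented, not Apple's API:

```swift
// Hypothetical sketch of three-tier computation placement.
// All names are invented for illustration; Apple's actual routing
// logic and API are not public.
enum ComputeTier {
    case onDevice       // runs locally on the device's Apple Silicon
    case privateCloud   // Apple's Private Cloud Compute servers
    case thirdParty     // e.g. OpenAI, only with explicit user consent
}

struct AIRequest {
    let fitsOnDeviceModel: Bool          // small enough for local inference
    let needsExternalModel: Bool         // requires a third-party provider
    let userConsentedToThirdParty: Bool  // explicit, per-request consent
}

// Prefer the most private tier that can satisfy the request;
// return nil when the only viable tier lacks user consent.
func placeComputation(for request: AIRequest) -> ComputeTier? {
    if request.fitsOnDeviceModel { return .onDevice }
    if !request.needsExternalModel { return .privateCloud }
    return request.userConsentedToThirdParty ? .thirdParty : nil
}
```

The design choice the sketch highlights is the ordering: local computation is the default, the cloud is a fallback rather than the baseline, and third-party routing is gated behind consent rather than configuration.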
Third-party app integration in 2.0 works through an expanded App Intents API that allows developers to register specific "intelligence surfaces" — discrete actions Siri can take within their apps with explicit user authorization. The practical effect is that Siri can now, for example, book an Uber, order from DoorDash, or compose a draft in Notion directly, without opening the app. The user still sees a confirmation step before any action is executed, which is sensible. The list of apps that have already integrated the new API at launch includes Uber, DoorDash, Spotify, Notion, Slack, and 47 others. App Store developers have had SDK access since January.
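For developers, the existing App Intents framework gives a sense of the shape of such an integration. The intent below is a minimal sketch of what a ride-hailing app might expose — the intent name, parameter, and dialog are hypothetical, though the `AppIntent` protocol, `@Parameter` wrapper, and `perform()` structure are the framework's real building blocks:

```swift
import AppIntents

// Minimal sketch of an App Intent a ride-hailing app might register.
// The specific intent and parameter are invented for illustration.
struct BookRideIntent: AppIntent {
    static var title: LocalizedStringResource = "Book a Ride"

    @Parameter(title: "Destination")
    var destination: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // The app would call its own booking service here; per the update,
        // Siri surfaces a confirmation step before the action executes.
        return .result(dialog: "Requesting a ride to \(destination)…")
    }
}
```

Whatever the expanded "intelligence surfaces" registration adds in 2.0, the basic contract is the same: the app declares discrete, parameterized actions, and Siri invokes them only with explicit user authorization.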
Apple has 1.8 billion active iPhone users globally. The market penetration of its AI features is, by that measure, potentially enormous — assuming the features are good enough that people actually use them rather than defaulting to ChatGPT or Gemini, which a significant portion of Apple's user base has been doing for the past year. The question Monday's update raises is whether "finally good" is sufficient to recapture users who have built habits around alternative tools. ChatGPT has 800 million monthly active users across all platforms, and many of them are iPhone users who already choose the ChatGPT app over native Siri.
Apple's own data, shared with developers at a briefing last month, showed that daily Siri active usage dropped 22% in the 12 months following the original Apple Intelligence launch — not what you'd expect from a successful AI rollout. The company declined to provide usage projections for 2.0, which is notable in itself. But the product shipped Monday is genuinely better. That is not a small thing. It is, at minimum, the first version of Apple Intelligence that doesn't embarrass the company in comparison to its competitors.