Apple's iOS 26.4 update, rolling out Wednesday to the iPhone 15 and later, delivers what the company has been quietly promising for eighteen months: a Siri that can actually compete. The new Siri runs on Google's Gemini architecture — specifically the 1.2 trillion-parameter Gemini Ultra 2 model, according to a technical disclosure Apple made to EU regulators under the Digital Markets Act — and the difference is immediately apparent to anyone who has spent time fighting the old assistant's limitations. This is not an iteration. It is a replacement.
The old Siri, which Apple built on a combination of proprietary models and acquired technology, had become a competitive liability. It couldn't reliably answer multi-step questions, lost context between turns, and had no meaningful awareness of what was on screen. The new Siri handles all three. Ask it to "find the email from Marcus about the Paris trip and add the flight number to my calendar," and it executes. Show it a restaurant menu and ask "what's the lowest-calorie entrée under $30?" and it reads the menu, applies the filter, and answers. These are not cherry-picked demos — they are the baseline capability the Gemini engine delivers.
The partnership structure is worth understanding. Apple is not licensing Gemini the way it licenses Google Search — paying a flat fee for results that stay within Google's infrastructure. Instead, Apple is running a hybrid architecture: simple, on-device queries (set a timer, play a song, turn on the flashlight) run on a distilled Apple model with no data leaving the phone. More complex queries requiring reasoning, web knowledge, or cross-app synthesis are handled by Gemini, with Apple's Private Relay technology used to anonymize the request before it reaches Google's servers. Apple is emphatic that Google cannot use these queries to train its models or build user profiles.
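The split Apple describes — trivial, single-intent commands handled by a local model, everything else anonymized before being forwarded to the cloud — amounts to a routing decision per query. The sketch below illustrates the pattern only; the intent names, the `route` function, and the stand-in `anonymize` step are hypothetical assumptions, not Apple's actual implementation or API.

```python
# Illustrative sketch of a hybrid on-device/cloud router.
# All names here are hypothetical; this is the general pattern,
# not Apple's implementation.
from dataclasses import dataclass
from enum import Enum, auto
import uuid

class Destination(Enum):
    ON_DEVICE = auto()     # distilled local model; nothing leaves the phone
    CLOUD_MODEL = auto()   # anonymized, relay-forwarded to the cloud model

# Simple single-intent commands the local model can resolve by itself
# (assumed set, standing in for "set a timer, play a song, flashlight").
SIMPLE_INTENTS = {"set_timer", "play_song", "toggle_flashlight"}

@dataclass
class Query:
    intent: str   # assumed to come from an on-device intent classifier
    text: str

def anonymize(q: Query) -> dict:
    """Stand-in for relay-style anonymization: drop identifying fields,
    attach only a random per-request ID and the query text."""
    return {"request_id": uuid.uuid4().hex, "text": q.text}

def route(q: Query):
    """Return (destination, payload). Simple intents stay local with no
    payload; everything else is anonymized before leaving the device."""
    if q.intent in SIMPLE_INTENTS:
        return Destination.ON_DEVICE, None
    return Destination.CLOUD_MODEL, anonymize(q)

dest, payload = route(Query("set_timer", "set a timer for 10 minutes"))
# → stays on device, payload is None
dest2, payload2 = route(Query("multi_step", "find the email about the trip"))
# → goes to the cloud model, but only as an anonymized payload
```

The key design property the article attributes to Apple — that the cloud provider sees requests but not who sent them — corresponds here to `anonymize` stripping everything except the query text and a random ID.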