Your Phone's AI Can Now Order Dinner for You — And That Changes Everything
Google's Gemini assistant can now open DoorDash, build your cart, and get within one tap of checkout — all while you do something else. The rollout of "screen automation" on the Samsung Galaxy S26 marks the moment agentic AI moved from research demo to daily reality.
The Feature That Launched a Thousand Orders
On March 12, 2026, Google began rolling out what it calls "screen automation" — or, more precisely, Gemini task automation — to owners of the Samsung Galaxy S26, S26+, and S26 Ultra in the United States and South Korea [1][2]. The feature allows users to long-press the power button, issue a natural language command like "order a spicy chicken sandwich from Popeyes on Uber Eats," and watch as Gemini takes over in a background virtual window, navigating menus, selecting items, and building a cart [3].
The supported app list at launch includes DoorDash, Uber Eats, Grubhub, Starbucks, Uber, Lyft, Kroger, and Walmart, with Instacart support expected soon [1][4]. Google plans to expand the feature to Pixel 10 devices in the near future, though it is not yet live on those handsets [2].
Crucially, Gemini does not finalize purchases. When the cart is ready, it sends a notification accompanied by a strong vibration, handing control back to the user to review the order and confirm payment [1][3]. This "human-in-the-loop" design is a deliberate safety guardrail — and, for now, a legal and liability hedge.
How It Actually Works
Under the hood, Gemini's screen automation operates by visually parsing app interfaces and executing taps, swipes, and text entries in a sandboxed environment. When a user issues a command, the AI opens the relevant app in a virtual window running in the background, interprets the on-screen elements, and navigates step by step toward the desired outcome [3][5].
This approach is related to — but distinct from — Google's Project Mariner, a research prototype built on Gemini 2.0 that automates web browsing tasks [6]. While Project Mariner focuses on browser-based interactions and is available to subscribers of Google's $249.99-per-month AI Ultra plan, screen automation is bundled into the standard Gemini experience on supported devices [6].
For developers, Google has introduced a complementary framework called AppFunctions, a Jetpack library that lets apps expose data and functionality directly to AI agents [7][8]. Much as MCP servers declare backend capabilities for cloud-based agents, AppFunctions provides the on-device equivalent, letting apps describe their functions so Gemini can discover and invoke them through natural language [7]. In Android 17, Google plans to broaden these capabilities further, with more details promised later in 2026 on how developers can combine AppFunctions with UI automation to enable agentic integrations [8].
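The pattern AppFunctions embodies — register a function with a description and parameter schema so an agent can discover and call it instead of scraping the UI — can be sketched in a language-agnostic way. The actual Jetpack library is Kotlin and its API differs; the `registry`, `app_function`, `describe`, and `invoke` names below are all illustrative assumptions.

```python
# Language-agnostic sketch of the "expose functions to an agent" pattern.
# The real AppFunctions API is Kotlin and differs; all names are assumptions.
registry: dict[str, dict] = {}

def app_function(description: str, params: dict):
    """Register a function with a natural-language description and a parameter
    schema, so an agent can discover it rather than navigate the app's UI."""
    def decorator(fn):
        registry[fn.__name__] = {
            "fn": fn, "description": description, "params": params,
        }
        return fn
    return decorator

@app_function("Add a menu item to the current cart",
              {"item": "string", "quantity": "integer"})
def add_to_cart(item: str, quantity: int = 1) -> str:
    return f"Added {quantity}x {item} to cart"

def describe() -> list[str]:
    """What the agent sees when it asks the app for its capabilities."""
    return [f"{name}: {meta['description']}" for name, meta in registry.items()]

def invoke(name: str, **kwargs) -> str:
    """How the agent calls a discovered function by name."""
    return registry[name]["fn"](**kwargs)

print(describe())
print(invoke("add_to_cart", item="spicy chicken sandwich", quantity=2))
```

The design trade-off this illustrates is the one in the article: a declared function gives the developer precise control over what the agent can do, at the cost of up-front engineering work that blind UI navigation does not require.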
The Stakes: A $350 Billion Delivery Market
The food delivery apps that Gemini now navigates autonomously represent an enormous economic surface. The global online food delivery market is projected to reach approximately $350 billion in 2026, growing at a compound annual growth rate of roughly 7.6% toward a projected $694.65 billion by 2035 [9][10]. In the United States alone, the market exceeded $74 billion in 2025 [11].
DoorDash commands roughly 56% of U.S. food delivery market share, followed by Uber Eats at 23% and Grubhub — now owned by Wonder Group — at 16% [12]. All three are among the first apps supported by Gemini's screen automation. If AI-assisted ordering meaningfully reduces the friction of placing orders, even a modest increase in order frequency across these platforms could translate into billions of dollars in additional gross merchandise volume.
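A back-of-envelope check shows why even a small friction reduction matters at this scale. Using the figures cited above — a $74 billion U.S. market and the three platforms' combined 95% share — and an illustrative (not sourced) 2% uplift in order volume:

```python
# Back-of-envelope GMV estimate using figures from the text. The 2% uplift is
# a hypothetical assumption for illustration, not a sourced projection.
us_market_gmv = 74e9                                   # U.S. market, 2025 [11]
shares = {"DoorDash": 0.56, "Uber Eats": 0.23, "Grubhub": 0.16}  # [12]
uplift = 0.02                                          # assumed 2% more orders

extra_gmv = {name: us_market_gmv * share * uplift for name, share in shares.items()}
total_extra = sum(extra_gmv.values())
print(f"Combined extra U.S. GMV at a 2% uplift: ${total_extra / 1e9:.2f}B")
```

Even under this conservative assumption the three platforms gain on the order of $1.4 billion in additional U.S. gross merchandise volume per year; globally, and at higher uplifts, the figure quickly runs into multiple billions.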
The rideshare apps on the supported list — Uber and Lyft — add another dimension. The ability to say "get me a ride to the airport" and have Gemini handle the booking represents a fundamental shift in how users interact with these services, potentially reducing the competitive advantage that any individual app's user interface provides.
Samsung's AI Gambit — and the Race With Apple
The Galaxy S26 series is the exclusive launch vehicle for Gemini screen automation, and that is no accident. Samsung has made AI the centerpiece of its 2026 device strategy, announcing plans to double the number of Gemini-enabled devices from 400 million in 2025 to 800 million by the end of 2026 [13][14]. The Galaxy S26 runs One UI 8.5, which also introduces Gemini integration with the Samsung Gallery app, allowing natural language photo search [15].
The partnership between Samsung and Google takes on additional strategic weight in light of the Apple-Google deal announced in January 2026. Apple confirmed a multiyear agreement, reportedly worth $1 billion annually, to use Google's Gemini models as the foundation for a rebuilt Siri [16][17]. Under the terms, Gemini handles complex reasoning and multi-step planning on Apple's Private Cloud Compute servers, while Apple retains control over the user interface, data routing, and privacy enforcement [16].
The implication is extraordinary: Google's Gemini now effectively powers AI features across both major mobile operating systems. Samsung's Galaxy S26 is, in CNBC's framing, "an advance look at what the Google-powered Apple Siri could do" [18]. Apple has indicated that the rebuilt Siri will ship with iOS 26.4, initially targeted for March or April, though some features have slipped to May or September [16].
With over 650 million monthly users and a market share that has grown from 13.3% to 22% in recent months, Gemini's reach is expanding rapidly [19]. The combination of the Samsung deployment and the Apple partnership positions Google as the default AI layer for the vast majority of active smartphones globally.
The Security Shadow
The power to have an AI agent navigate apps on your behalf introduces a new and largely uncharted threat surface. Google's own guidance for Gemini warns users: "Do not enter anything you would not want a human reviewer to see or Google to use," as human reviewers may examine conversations to improve the AI [20].
More alarming is the emergence of malware specifically designed to exploit agentic AI capabilities. In February 2026, ESET researchers uncovered PromptSpy, the first known Android malware to use generative AI in its execution flow [21][22]. The malware sends prompts to Google's Gemini API along with XML data describing on-screen UI elements, using the AI's responses to navigate the device — specifically to ensure the malicious app remains pinned in the recent apps list and avoids being killed by the system [21].
PromptSpy's core payload is a VNC module granting attackers remote access to the victim's device, with capabilities including on-demand screenshots, lockscreen PIN interception, screen recording, and pattern unlock capture [22]. The malware's dropper masquerades as a JPMorgan Chase app called "MorganArg," targeting users in Argentina [22].
The significance of PromptSpy extends beyond its immediate threat. As ESET researchers noted, because Android malware often relies on UI navigation, leveraging generative AI enables threat actors to "adapt to more or less any device, layout, or OS version," vastly expanding the pool of potential victims [22]. If Gemini can navigate apps on a user's behalf, the same underlying capability can be weaponized by attackers.
Separately, a high-severity vulnerability (CVE-2026-0628) was patched in January in Chrome 143, which could have allowed malicious browser extensions to inject JavaScript into the Gemini Live panel, escalating privileges and accessing sensitive system resources [23]. The vulnerability underscored how AI integration creates novel attack surfaces in familiar software.
The Developer Dilemma
For app developers, Gemini's screen automation presents a strategic paradox. On one hand, being on Gemini's supported app list provides a new discovery and ordering channel. On the other, if users increasingly interact with apps through an AI intermediary rather than the app's own interface, the brand experience — the logos, the promotions, the carefully designed upsell flows — gets bypassed.
During testing on the Galaxy S26 Ultra, 9to5Google found that Gemini skipped add-on pages and proceeded directly to checkout when ordering through Uber Eats [1]. For platforms that generate meaningful revenue from in-app upsells and impulse additions, this streamlining could cut into margins even as it drives volume.
Google's AppFunctions framework offers developers a more structured alternative: rather than having Gemini blindly navigate an app's UI, developers can expose specific functions that the AI can call directly [7]. This gives developers more control over what Gemini can and cannot do within their app. But adoption requires engineering investment, and the framework is still in early stages, with broader rollout planned through 2026 [8].
The broader food industry is already adapting to AI-driven ordering through other channels. Papa Johns became the first partner for Google Cloud's Food Ordering agent, deploying a unified voice and text AI ordering system across mobile apps, websites, telephones, kiosks, and in-car systems [24]. Burger King's "Patty" chatbot, powered by OpenAI, is piloting drive-thru AI across 500 U.S. locations with plans to go nationwide by year's end [25]. Gemini's screen automation adds yet another layer to an ordering ecosystem that is rapidly being intermediated by AI.
Early Bugs and Growing Pains
The feature is not without rough edges. In one test, a fullscreen preview of Gemini's automation locked a Galaxy S26 Ultra, requiring a forced reboot [1]. The limitation to a curated list of apps — currently eight, with Instacart forthcoming — also constrains the feature's utility. And the prohibition on finalizing purchases, while sensible as a safety measure, adds friction that somewhat undermines the promise of hands-free convenience.
These are the expected growing pains of a first-generation product. The more consequential question is what happens when screen automation moves beyond food and rides. Google has signaled that AppFunctions and UI automation will support a much broader set of app categories in Android 17 [8]. If Gemini can book flights, schedule medical appointments, or file expense reports, the implications for how people interact with their phones — and the entire mobile app ecosystem — shift dramatically.
What Comes Next
The rollout of Gemini screen automation on the Galaxy S26 is not, in itself, a revolutionary moment. It is a carefully scoped beta with a short list of supported apps, a mandatory human confirmation step, and enough bugs to remind everyone that AI agents are still learning to walk.
But it is the first time a mainstream consumer device has shipped with the ability for an AI to autonomously navigate third-party apps to accomplish real-world tasks. Combined with Google's dual-platform AI strategy — powering both Android and, soon, iOS — and the rapid growth of agentic AI frameworks for developers, this week's rollout looks less like a product launch and more like the opening chapter of a fundamental shift in human-computer interaction.
The question is no longer whether AI agents will handle our routine digital tasks. It is whether we — users, developers, regulators, and security researchers — are ready for what that means.
Sources (25)
- [1] Gemini can now order your lunch as Android app control rolls out on Galaxy S26 (9to5google.com)
Gemini task automation hands control over Android apps to your AI assistant, now live on the Galaxy S26 series with DoorDash, Uber Eats, Grubhub, and more.
- [2] Samsung Galaxy S26 now has Gemini's task automation features (sammobile.com)
Gemini screen automation is now available on the Galaxy S26, S26+, and S26 Ultra, currently supporting food delivery and rideshare apps.
- [3] Google just gave Gemini the power to control apps on the Galaxy S26 — and it's pretty wild (androidcentral.com)
Gemini opens apps in a secure virtual window running in the background, navigates menus, builds carts, and hands control back before payment.
- [4] The Galaxy S26 Ultra just ordered dinner and tea for me and all I had to do was hit 'Confirm' (androidcentral.com)
Hands-on experience with Gemini screen automation ordering food through DoorDash and Uber Eats on the Galaxy S26 Ultra.
- [5] Galaxy S26 Gemini can now order your dinner without you lifting a finger (phandroid.com)
Gemini's screen automation feature supports Kroger, Walmart, Starbucks, and rideshare apps alongside major food delivery platforms.
- [6] Project Mariner — Google DeepMind (deepmind.google)
Research prototype built with Gemini 2.0 that explores autonomous web browsing, available to AI Ultra plan subscribers.
- [7] Google details 'AppFunctions' that let Gemini use Android apps (9to5google.com)
Android AppFunctions allows apps to expose data and functionality directly to AI agents through a Jetpack library mirroring MCP cloud servers.
- [8] The Intelligent OS: Making AI agents more helpful for Android apps (android-developers.googleblog.com)
Google details plans for AppFunctions and UI automation in Android 17 to enable agentic integrations for developers.
- [9] Online Food Delivery Market Size to Hit USD 694.65 Bn By 2035 (precedenceresearch.com)
The global online food delivery market is projected to reach $694.65 billion by 2035 at a CAGR of 10.44%.
- [10] Food Delivery Market Size | 2025 Global Analysis Report (businessresearchinsights.com)
The food delivery market reached approximately $350 billion in 2026, growing at a CAGR of 7.6%.
- [11] United States Online Food Delivery Market Report 2025-2033 (finance.yahoo.com)
DoorDash, Uber Eats, and Grubhub compete for dominance in the U.S. $74+ billion food delivery industry.
- [12] Food Delivery Market Share Statistics in 2026 (oysterlink.com)
DoorDash commands 56% of U.S. food delivery market share, followed by Uber Eats at 23% and Grubhub at 16%.
- [13] Samsung Targets 800 Million AI-Enabled Devices With Google's Gemini By 2026 (folio3.ai)
Samsung plans to double AI-enabled devices from 400 million in 2025 to 800 million in 2026.
- [14] Samsung targeting 800 million Gemini AI devices in 2026 (sammyfans.com)
Samsung's aggressive strategy to reclaim smartphone leadership from Apple through AI differentiation.
- [15] One UI 8.5 on Samsung Galaxy S26 has Gemini integration for one more app (sammobile.com)
Galaxy S26 introduces Gemini integration with the Samsung Gallery app in One UI 8.5 for natural language photo search.
- [16] Apple picks Google's Gemini to run AI-powered Siri coming this year (cnbc.com)
Apple confirmed a multiyear deal worth $1 billion annually to use Gemini as the foundation for a rebuilt Siri, running on Apple's Private Cloud Compute servers.
- [17] Google's Gemini to power Apple's AI features like Siri (techcrunch.com)
Apple-Google partnership leverages Gemini for complex reasoning while Apple retains control over data privacy.
- [18] Samsung's S26 gives an advance look at what the Google-powered Apple Siri could do (cnbc.com)
The Galaxy S26's Gemini features preview capabilities that the rebuilt Apple Siri is expected to gain later in 2026.
- [19] Google Gemini Stats 2026 – Market Share, Users and More (fatjoe.com)
Gemini has over 650 million monthly users with market share growing from 13.3% to 22% in recent months.
- [20] Gemini Apps Privacy Hub (support.google.com)
Google warns users not to enter anything they wouldn't want a human reviewer to see, as conversations may be reviewed.
- [21] PromptSpy Android Malware Abuses Gemini AI to Automate Recent-Apps Persistence (thehackernews.com)
First known Android malware to use generative AI at runtime, sending prompts to Gemini to navigate device UI for persistence.
- [22] PromptSpy ushers in the era of Android threats using GenAI (welivesecurity.com)
ESET research details how PromptSpy uses Gemini AI to adapt to any device layout, deploying VNC for remote access and PIN interception.
- [23] Taming Agentic Browsers: Vulnerability in Chrome Allowed Extensions to Hijack New Gemini Panel (unit42.paloaltonetworks.com)
CVE-2026-0628, patched in Chrome 143, allowed malicious extensions to inject code into the Gemini Live panel.
- [24] Papa Johns and Google Cloud Reimagine the Future of Food Ordering (googlecloudpresscorner.com)
Papa Johns deploys Google Cloud's Food Ordering agent for unified voice and text AI ordering across all channels.
- [25] How Restaurants Use AI Chatbots in 2026: From Burger King's 'Patty' to WhatsApp Ordering (chatmaxima.com)
Burger King's Patty chatbot powered by OpenAI pilots drive-thru AI across 500 U.S. locations with nationwide plans.