VentureBeat July 10, 2020
Kyle Wiggers

Google is investigating ways AI might be used to ground natural language instructions to smartphone app actions. In a study accepted to the 2020 Association for Computational Linguistics (ACL) conference, researchers at the company propose corpora to train models that would alleviate the need to maneuver through apps, which could be useful for people with visual impairments.

When coordinating efforts and accomplishing tasks involving sequences of actions (for example, following a recipe to bake a birthday cake), people provide each other with instructions. With this in mind, the researchers set out to establish a baseline for AI agents that can help with similar interactions. Given a set of instructions, these agents would ideally predict a sequence of app actions to carry them out.
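The article doesn't spell out how such an agent works, so the following is a minimal, hypothetical Python sketch of the task framing only: turning a step-by-step instruction into an ordered sequence of app actions. The UIAction type, the ground_instruction function, and the comma-splitting rule are all illustrative assumptions; the actual research trains models to learn this mapping from data rather than hand-written rules.

from dataclasses import dataclass

# Hypothetical action representation; an assumption for illustration,
# not the structure used in the Google study.
@dataclass
class UIAction:
    verb: str    # e.g. "open", "tap", "toggle"
    target: str  # the screen element the action operates on

def ground_instruction(instruction: str) -> list[UIAction]:
    """Toy rule-based grounding: split an instruction into steps and map
    each step to a UIAction. A learned model would replace these rules."""
    actions = []
    for step in instruction.lower().split(","):
        words = step.strip().split()
        if not words:
            continue
        actions.append(UIAction(verb=words[0], target=" ".join(words[1:])))
    return actions

if __name__ == "__main__":
    instruction = "Open settings, tap network & internet, toggle wi-fi"
    for action in ground_instruction(instruction):
        print(action)

The point of the sketch is the input/output contract: free-form text in, an ordered list of executable actions out, which an accessibility service could then replay on a device without the user having to maneuver through the app.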
