UMD Researchers Caution Against Allowing Robots to Run on AI Models

They urge robot manufacturers to conduct additional safety research before integrating language and vision models into their hardware.

Computer scientists at the University of Maryland are urging robot makers to conduct further safety research before wiring language and vision models into their hardware.

Given the constant stream of reports over the past year about error-prone, biased and opaque large language models (LLMs) and vision language models (VLMs), it might seem obvious that putting a chatbot in charge of a mechanical arm or free-roaming robot would be a risky move.

Nonetheless, in its apparent eagerness to advance the field, the robotics community has pressed ahead with efforts to wed LLMs and VLMs to robots. Projects like Google's RT-2 vision-language-action model, the University of Michigan's LLM-Grounder, and Princeton's TidyBot illustrate where things are heading.

Given the proliferation of commercial and open-source multi-modal models that can accept images, sound and language as input, there are likely to be many more efforts to integrate language and vision models with mechanical systems in the years to come.
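To make the pattern concrete, here is a minimal sketch of the generic wiring such systems share: a camera frame and a natural-language instruction go to a multimodal model, and its free-text reply is parsed into a motor command. Everything in it is hypothetical – query_vlm, the action format and the parser are stand-ins, not the API of any system named above – but it shows the point at which opaque model output starts driving hardware.

    from dataclasses import dataclass

    # Hypothetical action schema -- not taken from any of the cited projects.
    @dataclass
    class ArmAction:
        dx: float              # end-effector displacement in metres
        dy: float
        dz: float
        gripper_closed: bool

    def query_vlm(image_bytes: bytes, instruction: str) -> str:
        """Stand-in for a call to a multimodal model.

        A real system would send the camera frame and instruction to an
        LLM/VLM endpoint here; this stub just returns a canned reply."""
        return "move 0.10 0.00 -0.05 close"

    def parse_action(text: str) -> ArmAction:
        """Turn the model's free-form text into a structured command.

        This step is where error-prone or opaque model output becomes a
        physical risk: a hallucinated reply is converted into a motor
        command verbatim, with nothing checking whether it makes sense."""
        _, dx, dy, dz, grip = text.split()
        return ArmAction(float(dx), float(dy), float(dz), grip == "close")

    if __name__ == "__main__":
        reply = query_vlm(b"<camera frame>", "pick up the cup")
        print(parse_action(reply))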

Caution may be advisable. Nine University of Maryland researchers – Xiyang Wu, Ruiqi Xian, Tianrui Guan, Jing Liang, Souradip Chakraborty, Fuxiao Liu, Brian Sadler, Dinesh Manocha and Amrit Singh Bedi – examined three language model frameworks used for robots: KnowNo, VIMA and Instruct2Act. They concluded that further safety work is needed before robots should be allowed to run on LLM-powered brains.
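What might that additional safety work look like? One common-sense mitigation – offered purely as an illustrative sketch, not as anything the paper proposes or the three frameworks implement – is a hand-written guardrail that vetoes model-proposed motions before they reach the actuators. The thresholds below are invented for the example.

    # Illustrative safety envelope; both limits are made up for the sketch.
    MAX_STEP_M = 0.10         # largest displacement allowed in a single step
    MIN_Z_M = -0.30           # e.g. the table surface the arm must not cross

    def is_safe(dx: float, dy: float, dz: float, z_now: float) -> bool:
        """Veto a model-proposed motion that leaves the safety envelope."""
        step = (dx ** 2 + dy ** 2 + dz ** 2) ** 0.5
        if step > MAX_STEP_M:
            return False      # model asked for an implausibly large jump
        if z_now + dz < MIN_Z_M:
            return False      # motion would drive the arm into the table
        return True

    assert is_safe(0.02, 0.0, -0.05, z_now=0.0)       # small, in-bounds move
    assert not is_safe(0.50, 0.0, 0.0, z_now=0.0)     # oversized step rejected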

Click HERE to read the full article. 
