New research demonstrated at Google’s AI event in New York City this morning proposes letting robotic systems effectively write their own code, saving human developers the hassle of reprogramming them as new information arises. The company notes that existing research and trained models can be effective in implementing the concept, and that this work can prove foundational for systems that continue to generate their own code based on objects and scenarios encountered in the real world. The new work on display today is Code as Policies (CaP). Google Research Intern Jacky Liang and Robotics Research Scientist Andy Zeng note in a blog post:
With CaP, we propose using language models to directly write robot code through few-shot prompting. Our experiments demonstrate that outputting code led to improved generalization and task performance over directly learning robot tasks and outputting natural language actions. CaP allows a single system to perform a variety of complex and varied robotic tasks without task-specific training.
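To make the few-shot prompting idea concrete, here is a minimal sketch of how it might look in practice. The prompt pairs natural-language instructions (as comments) with Python that calls robot control primitives; a language model is asked to complete the final entry, and the returned snippet is executed against those primitives. The primitive names (`pick_place`, `say`) and the prompt format are illustrative assumptions, not the actual CaP API:

```python
# Few-shot examples pairing instructions with robot code. These
# primitive names are hypothetical stand-ins for a real robot API.
FEW_SHOT_EXAMPLES = [
    ("stack the blue block on the red block",
     'pick_place("blue block", "red block")'),
    ("say hello",
     'say("hello")'),
]

def build_prompt(instruction):
    """Format the few-shot examples followed by the new instruction;
    a language model would be asked to complete the final line."""
    lines = []
    for nl, code in FEW_SHOT_EXAMPLES:
        lines.append(f"# {nl}\n{code}")
    lines.append(f"# {instruction}\n")  # the model completes this
    return "\n".join(lines)

# Hypothetical low-level control primitives the generated code may
# call; here they just record the actions they would perform.
log = []
def pick_place(obj, target):
    log.append(("pick_place", obj, target))
def say(text):
    log.append(("say", text))

prompt = build_prompt("put the green block on the blue block")
# Pretend the model returned this completion for the new instruction,
# then run it in a namespace that exposes only the robot primitives.
generated = 'pick_place("green block", "blue block")'
exec(generated, {"pick_place": pick_place, "say": say})
```

The key design point this illustrates is that the model's output is code rather than a natural-language action, so the same prompt structure generalizes to new instructions without task-specific training, as the researchers describe.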
The system, as described, also relies on third-party libraries and APIs to generate the code best suited to a specific scenario, along with support for other languages and (why not?) emojis. The information accessible through those APIs is one of the system's current limitations. The researchers note, “These limitations point to avenues for future work, including extending visual language models to describe low-level robot behaviors (e.g., trajectories) or combining CaP with exploration algorithms that can autonomously add to the set of control primitives.” As part of today’s announcement, Google is releasing an open-source version of the code, accessible through its GitHub site, to build on the research it has presented thus far.

Google wants robots to generate their own code by Brian Heater originally published on TechCrunch