Google is developing robots that write their own code

Google seeks to develop robots that write their own code from instructions given by humans. In a paper published on arXiv, researchers from Google’s Robotics team, including Jacky Liang and Andy Zeng, demonstrated that robots could generate the code for their own physical actions.

The code used for the experiments in the paper was made public on GitHub by the researchers, and a demo of the code-generation technique is now accessible on HuggingFace in an interactive format.


This achievement was made possible by the latest generation of large language models (LLMs). These models have been trained on millions of lines of code and are capable of synthesizing simple Python programs from docstrings. They can be repurposed to write robot policy code from natural language commands.
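To illustrate the idea, the sketch below shows a docstring/comment-driven few-shot prompt of the kind such models consume: each natural-language command appears as a comment, followed by the code that fulfils it, and the model is asked to complete the code for a new command. The `get_pos`/`set_pos` names are hypothetical robot-API stand-ins, not identifiers from the paper.

```python
# A minimal sketch of comment-driven code synthesis. Each command is a
# comment; the code beneath it is the behaviour the model should imitate.
# get_pos / set_pos are hypothetical robot-API names used for illustration.
FEW_SHOT_PROMPT = """\
# move the gripper 10 cm to the right.
pos = get_pos()
set_pos([pos[0] + 0.10, pos[1], pos[2]])

# move the gripper 20 cm to the left.
pos = get_pos()
set_pos([pos[0] - 0.20, pos[1], pos[2]])
"""

def build_prompt(command: str) -> str:
    """Append a new command as a comment; an LLM would then be asked to
    complete the policy code that should follow it."""
    return FEW_SHOT_PROMPT + f"\n# {command}.\n"

print(build_prompt("move the gripper 5 cm up"))
```

The completion the model returns for the final comment becomes the executable policy.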

Computers don’t inherently comprehend instructions given in human language. But Google researchers have developed a language-model-based system called Code as Policies (CaP) that interprets natural language instructions and turns them into code. CaP is a code-writing AI model that lets robots write their own policy code and perform complex tasks without being trained specifically for each one.
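As a toy sketch (not the paper’s code) of what such generated policy code can look like when perception and control primitives are exposed to it: here `get_obj_pos` and `put_first_on_second` are assumed helper names standing in for a real robot API, operating on a simulated tabletop.

```python
import numpy as np

# Toy tabletop world: object name -> 2-D position (metres).
world = {
    "red block": np.array([0.10, 0.20]),
    "blue block": np.array([0.40, 0.20]),
}

def get_obj_pos(name):
    """Perception stub: return an object's position."""
    return world[name]

def put_first_on_second(obj_name, target_name):
    """Control stub: 'move' obj_name onto target_name."""
    world[obj_name] = get_obj_pos(target_name).copy()

# Policy code of the kind an LLM might emit for the command
# "put the red block on the blue block":
put_first_on_second("red block", "blue block")
```

In the real system the primitives drive an actual arm; the generated code stays the same shape.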

This approach is called hierarchical code generation. The language models can build up their own libraries and generate code dynamically, using arithmetic operations, logic structures, loops, and even third-party libraries such as NumPy. Moreover, they can translate imprecise language such as “faster”, “slower”, or “to the left” into precise values like velocities.
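A minimal sketch of the hierarchical idea, with assumed names rather than the repository’s code: the model first emits high-level code that calls a not-yet-defined helper, and a follow-up generation step fills in that helper’s body; a small lookup table stands in for grounding imprecise words into numeric values.

```python
import numpy as np

# Second-level generation: in CaP-style hierarchical generation, a helper
# like this would be written by a follow-up LLM call once its name appears
# undefined in the first pass.
def is_left_of(pos_a, pos_b):
    return pos_a[0] < pos_b[0]

# First-level generated code for "which blocks are left of the bowl?":
block_positions = {"red": np.array([0.10, 0.30]), "green": np.array([0.60, 0.30])}
bowl_position = np.array([0.40, 0.30])
left_blocks = [n for n, p in block_positions.items() if is_left_of(p, bowl_position)]

# Grounding imprecise language into precise values (assumed scale factors):
SPEED_FACTORS = {"faster": 1.5, "slower": 0.5}

def adjust_speed(current, word):
    """Scale a velocity according to a qualitative instruction."""
    return current * SPEED_FACTORS.get(word, 1.0)
```

The recursion bottoms out when generated code calls only primitives that already exist.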

Image Source: Code as Policies

This is a new step towards robots that can control and modify their behavior independently. Despite the risks these new capabilities raise, the approach should reduce the time developers spend in this field, and it could also make it easier for people to interact with robots by offloading part of the development work to the AI.
