I got a glimpse of that progress during a private demonstration of Google’s latest robotics model, called RT-2. The model, which was being unveiled Friday, amounts to a first step toward what Google executives described as a major leap in the way robots are built and programmed.

[Caption: Google’s new robotics model, RT-2, is prompted to pick up the extinct animal at Google’s robotics division in Mountain View, Calif., July 26, 2023.]

Google has recently begun plugging state-of-the-art language models into its robots, giving them the equivalent of artificial brains. “We’ve had to reconsider our entire research program as a result of this change,” said Vincent Vanhoucke, Google DeepMind’s head of robotics. “A lot of the things that we were working on before have been entirely invalidated.”

Robots still fall short of human-level dexterity and fail at some basic tasks, but Google’s use of AI language models to give robots new skills of reasoning and improvisation represents a promising breakthrough, said Ken Goldberg, a robotics professor at the University of California, Berkeley. “What’s very impressive is how it links semantics with robots,” he said.

To understand the magnitude of this, it helps to know a little about how robots have conventionally been built. For years, the way engineers at Google and other companies trained robots to do a mechanical task - flipping a burger, for example - was by programming them with a specific list of instructions.