Robot program learns, adapts, compensates. Slowly but surely, it's learning.
What’s interesting about this story from Physorg.com is not that we’ve got robots programmed to adapt to their surroundings while carrying out their programmed directives. Programmers and robotic engineers have been working on that for quite some time. What’s interesting is how Cornell University researchers have gone about it: through an “underlying algorithm” that also serves as a promising model for programming future robots.
Just as long as that underlying algorithm doesn’t decree that we taste like bacon, decide we’re a threat to its existence, and try to exterminate us in the process.
Normally, robotic control programs tend to be built on a “rigid” set of commands: move this or that part, react to this or that condition. Having a robot anticipate every random condition it might meet in the field would require an enormous amount of code. Needless to say: heavy, unwieldy. Blue screen of death, anyone?
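To make the contrast concrete, here’s a caricature of that rigid style in Python. This is a hypothetical sketch, not anyone’s actual control code: one hand-written branch per situation the programmer managed to anticipate.

```python
# A caricature of the "rigid" approach: every condition the robot might
# meet in the field needs its own hand-written rule. (Hypothetical sketch,
# not the Cornell team's code.)

def rigid_controller(sensors: dict) -> str:
    if sensors.get("obstacle_ahead"):
        return "turn_left"
    if sensors.get("leg_stuck"):
        return "reverse"
    if sensors.get("on_slope"):
        return "shorten_stride"
    # ...and so on, one branch per situation the programmer thought of.
    # Anything unanticipated falls through to a default the robot may not
    # survive.
    return "walk_forward"
```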
The “philosophy” behind the Cornell robot is different. Instead of rigid models, the researchers give the robot they built an “image or picture” of how it is put together, what it is made of, and only one prime directive: move forward. From there, the robot’s silicon brain figures out the rest, building experimental models of itself to test, gathering data, and, in short, learning.
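Based on that description, the loop might look something like the Python sketch below. To be clear, this is a guess at the shape of the idea, not the Cornell code; every class and method name here (simulate, mutate, candidate_gaits, and so on) is made up for illustration.

```python
# Hedged sketch of the self-modeling idea described above: the robot keeps a
# population of candidate models of its own body, acts, and keeps whichever
# model best predicts what its sensors actually reported. All names here are
# hypothetical.

def self_modeling_loop(robot, candidate_models, steps=20):
    for _ in range(steps):
        action = robot.random_action()          # experiment on itself
        observed = robot.execute(action)        # real sensor readings

        # Score each candidate self-model by how well it predicts reality.
        def prediction_error(model):
            predicted = model.simulate(action)
            return sum((p - o) ** 2 for p, o in zip(predicted, observed))

        candidate_models.sort(key=prediction_error)
        # Keep the best models, mutate them to generate fresh hypotheses.
        survivors = candidate_models[: len(candidate_models) // 2]
        candidate_models = survivors + [m.mutate() for m in survivors]

    return candidate_models[0]                  # best current self-image

def plan_gait(best_model):
    # With a trusted self-model, search for the gait that moves the robot's
    # simulated body furthest forward -- the one prime directive.
    return max(best_model.candidate_gaits(),
               key=lambda gait: best_model.simulate_distance(gait))
```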
It learns how to manipulate itself through experimentation, comparing the results as it goes. It learns how to use its legs to move forward in accordance with its prime directive. When one of its legs goes down (deliberately triggered by the researchers), it learns how to compensate for the injury and keep moving forward. If it finds a better way to move forward, it adopts that instead. It learns, and learns, and learns. Is it aware of itself?
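Continuing the hypothetical sketch above, the compensation behavior falls out naturally: when the real body stops matching the self-model (say, a leg is damaged), prediction error spikes, and the robot simply relearns its body and replans.

```python
# Hypothetical continuation of the sketch above: monitor how well the
# self-model predicts the real body, and relearn when they diverge.

def run(robot, models, error_threshold=0.5):
    best = self_modeling_loop(robot, models)
    gait = plan_gait(best)
    while True:
        action = gait.next_action()
        observed = robot.execute(action)
        predicted = best.simulate(action)
        error = sum((p - o) ** 2 for p, o in zip(predicted, observed))
        if error > error_threshold:
            # The body no longer behaves as modeled (e.g. a damaged leg):
            # rebuild the self-image and derive a new gait from it.
            best = self_modeling_loop(robot, models)
            gait = plan_gait(best)
```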
It’s not personal self-awareness, which means no Matrix scenarios here. It is, however, conscious on a primitive level, as Cornell researcher Josh Bongard asserts. What’s so interesting is that AI programs for robots can be built along similar lines, allowing for greater autonomy and adaptability. That would be useful in applications like space exploration: no need to bounce commands to faraway probes to gather data, or even to ask if something’s wrong.