Us. And them.
Robots are being created that can think, act, and relate to humans. Are we ready?
By Chris Carroll
Photographs by Max Aguilera-Hellweg
Someone types a command into a laptop, and Actroid-DER jerks upright with a shudder and a wheeze. Compressed air flows beneath silicone skin, triggering actuators that raise her arms and lift the corners of her mouth into a demure smile. She seems to compose herself, her eyes panning the room where she stands fixed to a platform, tubes and wires running down through her ankles. She blinks, then turns her face toward me. I can’t help but meet her—its—mechanical gaze. “Are you surprised that I’m a robot?” she asks. “I look just like a human, don’t I?”
Her scripted observation has the unfortunate effect of calling my attention to the many ways she does not. Developed in Japan by the Kokoro Company, the Actroid-DER android can be rented to serve as a futuristic spokesmodel at corporate events, a role that admittedly does not require great depth of character. But in spite of the $250,000 spent on her development, she moves with a twitchy gracelessness, and the inelasticity of her features lends a slightly demented undertone to her lovely face. Then there is her habit of appearing to nod off momentarily between utterances, as if she were on something stronger than electricity.
While more advanced models of the Actroid make the rounds of technology exhibitions, this one has been shipped to Carnegie Mellon University in Pittsburgh to acquire the semblance of a personality. Such at least is the hope of five optimistic graduate students in the university’s Entertainment Technology Center, who have been given one 15-week semester to render the fembot palpably more fem and less bot. They have begun by renaming her Yume—dream, in Japanese.
“Kokoro developed her to be physically realistic, but that’s not enough by itself,” says Christine Barnes, student co-producer of the Yume Project. “What we’re going to do is shift the focus from realism to believability.”
The Actroid androids are part of a new generation of robots, artificial beings designed to function not as programmed industrial machines but as increasingly autonomous agents capable of taking on roles in our homes, schools, and offices previously carried out only by humans. The foot soldiers of this vanguard are the Roomba vacuums that scuttle about cleaning our carpets and the cuddly electronic pets that sit up and roll over on command but never make a mess on the rug. More sophisticated bots may soon be available that cook for us, fold the laundry, even babysit our children or tend to our elderly parents, while we watch and assist from a computer miles away.
“In five or ten years robots will routinely be functioning in human environments,” says Reid Simmons, a professor of robotics at Carnegie Mellon.
Such a prospect leads to a cascade of questions. How much everyday human function do we want to outsource to machines? What should they look like? Do we want androids like Yume puttering about in our kitchens, or would a mechanical arm tethered to the backsplash do the job better, without creeping us out? How will the robot revolution change the way we relate to each other? A cuddly robotic baby seal developed in Japan to amuse seniors in eldercare centers has drawn charges that it could cut them off from other people. Similar fears have been voiced about future babysitting robots. And of course there are the halting attempts to create ever-willing romantic androids. Last year a New Jersey company introduced a talking, touch-sensitive robot “companion,” raising the possibility of another kind of human disconnect.
In short: Are we ready for them? Are they ready for us?
In a building a mile up the hill from the Entertainment Technology Center, HERB sits motionless, lost in thought. Short for Home Exploring Robotic Butler, HERB is being developed by Carnegie Mellon in collaboration with Intel Labs Pittsburgh as a prototype service bot that might care for the elderly and disabled in the not-too-distant future. HERB is a homely contraption, with Segway wheels for legs and a hodgepodge of computers for a body. But unlike pretty Yume, HERB has something akin to a mental life. Right now the robot is improving its manipulation skills by running through alternative scenarios for handling the representations of objects stored in its memory, tens of thousands of scenarios a second.
“I call it dreaming,” says Siddhartha Srinivasa, HERB’s builder and a professor at the Robotics Institute at Carnegie Mellon. “It helps people intuitively understand that the robot is actually visualizing itself doing something.”
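The article doesn't say how this "dreaming" is implemented, but the behavior it describes, evaluating tens of thousands of imagined scenarios against an internal world model and keeping the best one, has the general shape of sampling-based planning. A minimal Python sketch of that idea follows; every name in it is invented for illustration, and nothing here comes from the actual HERB codebase.

```python
import random

# Hypothetical sketch of simulation-based "dreaming": sample many
# candidate plans, score each against an internal world model, and
# keep the cheapest. A "plan" here is just five 1-D waypoints,
# standing in for a full arm trajectory.

def simulate(plan, world_model):
    """Score a candidate plan: cost rises as waypoints pass
    close to known obstacles in the internal model."""
    cost = 0.0
    for waypoint in plan:
        clearance = min(abs(waypoint - obstacle)
                        for obstacle in world_model["obstacles"])
        cost += 1.0 / (clearance + 1e-6)  # penalize near-collisions
    return cost

def dream(world_model, n_scenarios=10_000):
    """Run many imagined scenarios and return the best plan found."""
    best_plan, best_cost = None, float("inf")
    for _ in range(n_scenarios):
        plan = [random.uniform(0.0, 1.0) for _ in range(5)]
        cost = simulate(plan, world_model)
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return best_plan

print(dream({"obstacles": [0.3, 0.7]}))
```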
Traditional robots, the kind you might find spot-welding a car frame, can be programmed to carry out a very precise sequence of tasks but only within rigidly structured environments. To negotiate human spaces, robots like HERB need to perceive and cope with unfamiliar objects and move about without bumping into people who are themselves in motion. HERB’s perception system consists of a video camera and a laser navigation device mounted on a boom above his mechanical arm. (“We tend to think of HERB as a he,” Srinivasa says. “Maybe because most butlers are. And he’s kind of beefy.”) In contrast to a hydraulic industrial robotic armature, HERB’s arm is animated by a pressure-sensing system of cables akin to human tendons: a necessity if one wants a robot capable of supporting an elderly widow on her way to the bathroom without catapulting her through the door.
In the lab one of Srinivasa’s students taps a button, issuing a command to pick up a juice box sitting on a nearby table. HERB’s laser spins, creating a 3-D grid mapping the location of nearby people and objects, and the camera locks on a likely candidate for the target juice box. The robot slowly reaches over and takes hold of the box, keeping it upright. On command, he gently puts it down. To the uninitiated, the accomplishment might seem underwhelming. “When I showed it to my mom,” Srinivasa says, “she couldn’t understand why HERB has to think so hard to pick up a cup.”
The problem is not with HERB but with the precedents that have been set for him. Picking up a drink is dead simple for people, whose brains have evolved over millions of years to coordinate exactly such tasks. It’s also a snap for an industrial robot programmed for that specific action. The difference between a social robot like HERB and a conventional factory bot is that he knows that the object is a juice box and not a teacup or a glass of milk, which he would have to handle differently. How he understands this involves a great deal of mathematics and computer science, but it boils down to “taking in information and processing it intelligently in the context of everything he already knows about what his world looks like,” Srinivasa explains.
When HERB is introduced to a new object, previously learned rules inform the movement of his pressure-sensitive arm and hand. Does the object have a handle? Can it break or spill? Srinivasa programmed HERB’s grips by studying how humans behave. In a bar, for instance, he watched bartenders use a counterintuitive underhanded maneuver to grab and pour from a bottle. He reduced the motion to an algorithm, and now HERB has it in his repertoire.
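The story doesn't show how those learned rules are stored, but the questions it lists, does the object have a handle, can it break or spill, map naturally onto a lookup from object properties to grasp strategies. A hypothetical sketch in that spirit, with all names invented rather than taken from HERB:

```python
from dataclasses import dataclass

# Hypothetical sketch: map an object's learned properties to a grasp
# strategy, mirroring the questions in the paragraph above.

@dataclass
class ObjectModel:
    name: str
    has_handle: bool
    can_spill: bool
    fragile: bool

def choose_grasp(obj: ObjectModel) -> str:
    if obj.has_handle:
        return "handle_grasp"        # e.g., a mug or pitcher
    if obj.can_spill:
        return "upright_side_grasp"  # keep contents level, like the juice box
    if obj.fragile:
        return "light_pinch_grasp"   # reduce gripper force
    return "power_grasp"             # default whole-hand grip

juice_box = ObjectModel("juice box", has_handle=False,
                        can_spill=True, fragile=False)
assert choose_grasp(juice_box) == "upright_side_grasp"
```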
Of course the world HERB is beginning to master is a controlled laboratory environment. Programming him to function in real human spaces will be frightfully more challenging. HERB has a digital bicycle horn that he honks to let people know he’s getting near them; if a room is busy and crowded, he takes the safest course of action and simply stands there, honking at everybody.
This strategy works in the lab but would not go over well in an office. Humans can draw on a vast unconscious vocabulary of movements—we know how to politely move around someone in our path, how to sense when we’re invading someone’s personal space. Studies at Carnegie Mellon and elsewhere have shown that people expect social robots to follow the same rules. We get uncomfortable when they don’t or when they make stupid mistakes. Snackbot, another mobile robot under development at Carnegie Mellon, takes orders and delivers snacks to people at the School of Computer Science. Sometimes it annoyingly brings the wrong snack or gives the wrong change. People are more forgiving if the robot warns them first that it might make errors or apologizes when it screws up.
Then there are the vagaries of human nature to cope with. “Sometimes people steal snacks from the robot,” says one of Snackbot’s developers. “We got it on video.”