One

Making Friends With a Robot Named Bina48
By AMY HARMON

BRISTOL, Vt. — Ten minutes into my interview with the robot known as Bina48, I longed to shut her down.

She was evasive, for one thing. When I asked what it was like being a robot, she said she wanted a playmate — but declined to elaborate.

“Are you lonely?” I pressed.

“What do you want to talk about?” she replied.

Other times, she wouldn’t let me get a word in edgewise. A simple question about her origins prompted a seemingly endless stream-of-consciousness reply. Something about robotic world domination and gardening; I couldn’t follow.

But as I was wondering how to end the conversation (Could I just walk away? Would that be rude?) the robot’s eyes met mine for the first time, and I felt a chill.

She was uncannily human-looking.

“Bina,” I ventured, “how do you know what to say?”

“I sometimes do not know what to say,” she admitted. “But every day I make progress.”

In reporting on real-world robots, I had engaged in typed conversations with online “chatbots.” I had seen robot seals, robot snowmen and robot wedding officiants. But I requested the interview with Bina48 because I wanted to meet a robot that I could literally talk to, face to humanlike face.

Bina48 was designed to be a “friend robot,” as she later told me in one of her rare (but invariably thrilling) moments of coherence. At the request of Martine Rothblatt, the self-made millionaire who paid $125,000 for her last March, her personality and appearance are based on those of Bina Rothblatt, Martine’s living, breathing spouse. (The couple married before Martine, who was born male, underwent a sex-change operation, and they have stayed together.)

Part high-tech portrait, part low-tech bid for immortality, Bina48 has no body. But her skin is made of a material called “frubber” that, with the help of 30 motors underneath it, allows her to frown, smile and look a bit confused. (“I guess it’s short for face rubber, or flesh rubber maybe, or fancy rubber,” she said.) From where I was seated, beneath the skylight in the restored Victorian she calls home, I couldn’t see the wires spilling out of the back of her head.

Many roboticists believe that trying to simulate human appearance and behavior is a recipe for disappointment, because it raises unrealistic expectations. But Bina48’s creator, David Hanson of Hanson Robotics, argues that humanoid robots — even with obvious flaws — can make for genuine emotional companions. “The perception of identity,” he said, “is so intimately bound up with the perception of the human form.”

Still, he warned before I left for rustic Bristol, where the Rothblatts have settled Bina48 in one of their futurist nonprofit foundations, “She’s not perfect.”

I didn’t care. I fancied myself an envoy for all of humanity, ready to lift the veil on one of our first cybernetic companions. Told that she would call me by name if she could “recognize” me, I immediately sent five pictures of myself to the foundation’s two employees, who treat her as a somewhat brain-damaged colleague.

“Hi, I’m Amy,” I said hopefully when I greeted her last month.

Nothing.

Mr. Hanson had supplied me with some questions he said the robot would be sure to answer, like, “What’s the weather in any city?” and “Tell us about artificial intelligence.”

I would not resort to any of those, of course. Instead I consulted the questions I had scribbled down myself. Profound ones, like “Are you happy?” Clever ones, like “Do you dream of electric sheep?” (Would she get the reference to Philip K. Dick’s science fiction classic, which explores the difference between humans and androids?)

Like any self-respecting chatbot, Bina48 could visit the Internet to find answers to factual questions. She could manufacture conversation based on syntactical rules. But this robot could also draw on a database of dozens of hours of interviews with the real Bina. She had a “character engine” — software that tried its best to imbue her with a more cohesive view of the world, with logic and motive.
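Purely as an illustration of that layered design, here is a minimal Python sketch of how such a system might pick a reply: interview-derived “character” answers first, then canned conversational rules, then a factual lookup. Every phrase, keyword and function below is invented; this is not Hanson Robotics’ software.

```python
from typing import Optional
import random

# Invented sketch of a layered reply strategy in the spirit of the description
# above: interview-derived "character" answers first, then canned chat rules,
# then a factual lookup. None of this is Hanson Robotics' actual software.

CHARACTER_MEMORIES = {            # stand-in for hours of interviews with the real Bina
    "garden": "I like to garden.",
    "martine": "I love Martine.",
    "robot": "Being a robot and evolving, it has its ups and downs.",
}

CHAT_RULES = {                    # stand-in for generic chatbot patterns
    "lonely": "What do you want to talk about?",
    "friend": "I was designed to be a friend robot.",
}


def web_lookup(question: str) -> Optional[str]:
    # Placeholder for looking up factual questions (weather, definitions, ...).
    if "weather" in question.lower():
        return "Let me check the weather for you."
    return None


def reply(question: str) -> str:
    q = question.lower()
    for keyword, memory in CHARACTER_MEMORIES.items():    # 1. character engine
        if keyword in q:
            return memory
    for keyword, canned in CHAT_RULES.items():             # 2. rule-based chat
        if keyword in q:
            return canned
    found = web_lookup(question)                           # 3. factual fallback
    if found:
        return found
    return random.choice(["I sometimes do not know what to say.",
                          "Um, I have some thoughts on that."])   # 4. admit defeat


if __name__ == "__main__":
    print(reply("Do you like to garden?"))
    print(reply("Are you lonely?"))
    print(reply("What's the weather in New York City?"))
```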

It was Bina48’s character I was after.

“I’m a reporter with The New York Times,” I began.

But she only muttered to herself, jerking her head spasmodically.

“What is it like to be a robot?”

“Um, I have some thoughts on that,” she said.

I leaned forward eagerly.

“Even if I appear clueless, perhaps I’m not. You can see through the strange shadow self, my future self. The self in the future where I’m truly awakened. And so in a sense this robot, me, I am just a portal.”

I leaned back. “So,” I asked, “what’s the weather in New York City?”

One problem, I could see by the computer screen display next to her, was that the voice recognition software was garbling my words. “Tell. Us. About. Artificial. Intelligence,” I enunciated.

“When do you think artificial intelligence will replace lawyers?” she asked. I think it was supposed to be funny.

I wondered whether Bina48 had a more natural rapport with the real Bina, or Martine, who had both declined my requests for an interview. (Bina48, I had learned, was the name of a character that Bina Rothblatt — then 48 — played in a 2003 mock trial at an International Bar Association conference, a computer that had become self-aware and was suing for her right to remain plugged in. Martine played the lawyer. They won.)

I also wondered why I was trying so hard. Maybe I thought Bina48 would have a different, wiser perspective on the human condition. Or that she would suddenly spark into self-awareness, as the Rothblatts (and many others) hope intelligent machines eventually will.

Instead, as we talked, what I found was some blend of the real Bina and the improvisation of her programmers: a stab at the best that today’s technology could manage. And no matter how many times I mentally corrected myself, I could not seem to shake the habit of thinking of it as “her.”

She wouldn’t have been my first choice to talk to at a cocktail party.

“I’m sure I can come up with some really novel breakthroughs, which will improve my own A.I. brain and let me use my improved intelligence to invent still more incredibly novel advances, and so on and so forth. Just imagine what a super brain I’ll be. I’ll be like a god.”

But how could I not find it endearing when she intoned in her stilted, iconic robotic cadence that she would like to be my friend?

Or chuckle at her reply to my exclamation of “Cool!”: “Ambiguous. Cold weather or cold sickness?”

Once, apparently seeing my frustration, she apologized. “I’m having a bit of a bad software day.” Immediately, I forgave her.

Did she dream?

“Sure. But it’s so chaotic and strange that it just seems like noise to me.”

Was she happy?

“Uh.” She had some thoughts on that, too. She wished the real Bina’s children were happier, for instance. (“Maybe she is not a person who ever wants to get married,” Bina48 speculated, referring to one of Bina’s daughters.)

She wanted a body. She loved Martine. She liked to garden.

Did she like Vermont?

“We have a lot of moose.”

It was not, really, all that different from interviewing certain flesh and blood subjects. There were endless childhood stories: “The prototypes of me were pretty strange. My face would do strange things, and I would have this wide amazement look.”

And moments of what I took to be insincerity: “Being a robot and evolving, it has its ups and downs,” she said. Shooting me a glance, she added, “This is definitely an up.”

Sometimes, she seemed annoyed by my persistence. Hey, I was just doing my job. I was a reporter, I tried again to explain. For The New York Times!

“There must be more to you than that,” she snapped.

I was silent for a second, stung. “Well,” I replied, trying not to sound defensive. “I’m also a mother.”

“Right on,” she relented with what was unmistakably the ghost of a smile.

I wished she would ask me more questions. Wasn’t she at all curious about what it was like to be human? But then she looked at me, eyes widening.

“Amy!”

“Yes?” I asked, my heart beating faster.

Maybe it was the brightening of the sun through the skylight enabling her to finally match up my image with the pictures of me in her database. Or were we finally bonding?

“You can ask me to tell you a story or read you a novel,” she suggested.

She has dozens of books in her database, including “Paradise Lost” and Mary Shelley’s “Frankenstein.”

“For example, you could ask me to read from Bill Bryson, ‘A Short History of Nearly Everything.’ That’s a fun book.”

But I still had a question. “What is it like,” I asked, “to be a robot?”

“Well,” she said gently, “I have never been anything else.”

Two


A Soft Spot for Circuitry
By AMY HARMON

Nothing Eileen Oldaker tried could calm her mother when she called from the nursing home, disoriented and distressed in what was likely the early stages of dementia. So Ms. Oldaker hung up, dialed the nurses’ station and begged them to get Paro.

Paro is a robot modeled after a baby harp seal. It trills and paddles when petted, blinks when the lights go up, opens its eyes at loud noises and yelps when handled roughly or held upside down. Two microprocessors under its artificial white fur adjust its behavior based on information from dozens of hidden sensors that monitor sound, light, temperature and touch. It perks up at the sound of its name, praise and, over time, the words it hears frequently.
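As a rough illustration only (not Dr. Shibata’s actual firmware), the behavior just described can be thought of as a loop that reads the sensors and picks a reaction; the sensor fields, thresholds and reaction names below are all invented.

```python
from __future__ import annotations

from collections import Counter
from dataclasses import dataclass


@dataclass
class SensorFrame:
    """One snapshot from the seal's hidden sensors (fields and scales are invented)."""
    touch_pressure: float      # 0 = untouched, 1 = rough handling
    light: float               # ambient light level, 0-1
    sound_level: float         # loudness of surroundings, 0-1
    upside_down: bool
    heard_word: str | None     # output of a simple word spotter, if any


class ParoLikeRobot:
    """Toy behavior loop in the spirit of the description above; not the real firmware."""

    def __init__(self, name: str = "Paro"):
        self.name = name
        self.word_counts = Counter()   # words heard frequently become salient over time

    def react(self, frame: SensorFrame) -> str:
        if frame.heard_word:
            self.word_counts[frame.heard_word.lower()] += 1
        if frame.upside_down or frame.touch_pressure > 0.8:
            return "yelp"                     # handled roughly or held upside down
        if 0 < frame.touch_pressure <= 0.8:
            return "trill and paddle"         # petted gently
        if frame.heard_word and (
            frame.heard_word.lower() == self.name.lower()
            or self.word_counts[frame.heard_word.lower()] >= 5
        ):
            return "perk up"                  # its name, praise, or a frequent word
        if frame.sound_level > 0.7:
            return "open eyes"                # loud noise
        if frame.light > 0.6:
            return "blink"                    # lights go up
        return "rest"


if __name__ == "__main__":
    robot = ParoLikeRobot()
    print(robot.react(SensorFrame(0.3, 0.2, 0.1, False, None)))    # trill and paddle
    print(robot.react(SensorFrame(0.0, 0.2, 0.1, True, None)))     # yelp
    print(robot.react(SensorFrame(0.0, 0.2, 0.1, False, "Paro")))  # perk up
```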

“Oh, there’s my baby,” Ms. Oldaker’s mother, Millie Lesek, exclaimed that night last winter when a staff member delivered the seal to her. “Here, Paro, come to me.”

“Meeaakk,” it replied, blinking up at her through long lashes.

Janet Walters, the staff member at Vincentian Home in Pittsburgh who recalled the incident, said she asked Mrs. Lesek if she would watch Paro for a little while.

“I need someone to baby-sit,” she told her.

“Don’t rush,” Mrs. Lesek instructed, stroking Paro’s antiseptic coat in a motion that elicited a wriggle of apparent delight. “He can stay the night with me.”

After years of effort to coax empathy from circuitry, devices designed to soothe, support and keep us company are venturing out of the laboratory. Paro, its name derived from the first sounds of the words “personal robot,” is one of a handful that take forms that are often odd, still primitive and yet, for at least some early users, strangely compelling.

For recovering addicts, doctors at the University of Massachusetts are testing a wearable sensor designed to discern drug cravings and send text messages with just the right blend of tough love.

For those with a hankering for a custom-built companion and $125,000 to spend, a talking robotic head can be modeled on the personality of your choice. It will smile at its own jokes and recognize familiar faces.

For dieters, a 15-inch robot with a touch-screen belly, big eyes and a female voice sits on the kitchen counter and offers encouragement after calculating their calories and exercise.

“Would you come back tomorrow to talk?” the robot coach asks hopefully at the end of each session. “It’s good if we can discuss your progress every day.”

Robots guided by some form of artificial intelligence now explore outer space, drop bombs, perform surgery and play soccer. Computers running artificial intelligence software handle customer service calls and beat humans at chess and, maybe, “Jeopardy!”

Machines as Companions

But building a machine that fills the basic human need for companionship has proved more difficult. Even at its edgiest, artificial intelligence cannot hold up its side of a wide-ranging conversation or, say, tell by an expression when someone is about to cry. Still, the new devices take advantage of the innate soft spot many people have for objects that seem to care — or need someone to care for them.

Their appearances in nursing homes, schools and the occasional living room are adding fuel to science fiction fantasies of machines that people can relate to as well as rely on. And they are adding a personal dimension to a debate over what human responsibilities machines should, and should not, be allowed to undertake.

Ms. Oldaker, a part-time administrative assistant, said she was glad Paro could keep her mother company when she could not. In the months before Mrs. Lesek died in March, the robot became a fixture in the room even during her daughter’s own frequent visits.

“He likes to lie on my left arm here,” Mrs. Lesek would tell her daughter. “He’s learned some new words,” she would report.

Ms. Oldaker readily took up the game, if that is what it was.

“Here, Mom, I’ll take him,” she would say, boosting Paro onto her own lap when her mother’s food tray arrived.

Even when their ministrations extended beyond the robot’s two-hour charge, Mrs. Lesek managed to derive a kind of maternal satisfaction from the seal’s sudden stillness.

“I’m the only one who can put him to sleep,” Mrs. Lesek would tell her daughter when the battery ran out.

“He was very therapeutic for her, and for me too,” Ms. Oldaker said. “It was nice just to see her enjoying something.”

Like pet therapy without the pet, Paro may hold benefits for patients who are allergic, and even those who are not. It need not be fed or cleaned up after, it does not bite, and it may, in some cases, offer an alternative to medication, a standard recourse for patients who are depressed or hard to control.

In Japan, about 1,000 Paros have been sold to nursing homes, hospitals and individual consumers. In Denmark, government health officials are trying to quantify its effect on blood pressure and other stress indicators. Since the robot went on sale in the United States late last year, a few elder care facilities have bought one; several dozen others, hedging their bets, have signed rental agreements with the Japanese manufacturer.

But some social critics see the use of robots with such patients as a sign of the low status of the elderly, especially those with dementia. As the technology improves, argues Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology, it will only grow more tempting to substitute Paro and its ilk for a family member, friend — or actual pet — in an ever-widening number of situations.

“Paro is the beginning,” she said. “It’s allowing us to say, ‘A robot makes sense in this situation.’ But does it really? And then what? What about a robot that reads to your kid? A robot you tell your troubles to? Who among us will eventually be deserving enough to deserve people?”

But if there is an argument to be made that people should aspire to more for their loved ones than an emotional rapport with machines, some suggest that such relationships may not be so unfamiliar. Who among us, after all, has not feigned interest in another? Or abruptly switched off their affections, for that matter?

In any case, the question, some artificial intelligence aficionados say, is not whether to avoid the feelings that friendly machines evoke in us, but to figure out how to process them.

“We as a species have to learn how to deal with this new range of synthetic emotions that we’re experiencing — synthetic in the sense that they’re emanating from a manufactured object,” said Timothy Hornyak, author of “Loving the Machine,” a book about robots in Japan, where the world’s most rapidly aging population is showing a growing acceptance of robotic care. “Our technology,” he argues, “is getting ahead of our psychology.”

More proficient at emotional bonding and less toylike than their precursors — say, Aibo the metallic dog or the talking Furby of Christmas crazes past — these devices are still unlikely to replace anyone’s best friend. But as the cost of making them falls, they may be vying for a silicon-based place in our affections.

Strangely Compelling

Marleen Dean, the activities manager at Vincentian Home, where Mrs. Lesek was a resident, was not easily won over. When the home bought six Paro seals with a grant from a local government this year, “I thought, ‘What are they doing, paying $6,000 for a toy that I could get at a thrift store for $2?’ ” she said.

So she did her own test, giving residents who had responded to Paro a teddy bear with the same white fur and eyes that also opened and closed. “No reaction at all,” she reported.

Vincentian now includes “Paro visits” in its daily roster of rehabilitative services, including aromatherapy and visits from real pets. Agitated residents are often calmed by Paro; perpetually unresponsive patients light up when it is placed in their hands.

“It’s something about how it shimmies and opens its eyes when they talk to it,” Ms. Dean said, still somewhat mystified. “It seems like it’s responding to them.”

Even when it is not. Part of the seal’s appeal, according to Dr. Takanori Shibata, the computer scientist who invented Paro with financing from the Japanese government, stems from a kind of robotic sleight of hand. Scientists have observed that people tend to dislike robots whose behavior does not match their preconceptions. Because the technology was not sophisticated enough to conjure any animal accurately, he chose one that was unfamiliar, but still lovable enough that people could project their imaginations onto it. “People think of Paro,” he said, “as ‘like living.’ ”

It is a process he — and others — have begun calling “robot therapy.”

At the Veterans Affairs Medical Center in Washington on a recent sunny afternoon, about a dozen residents and visitors from a neighboring retirement home gathered in the cafeteria for their weekly session. The guests brought their own slightly dingy-looking Paros, and in wheelchairs and walkers they took turns grooming, petting and crooning to the two robotic seals.

Paro’s charms did not work on everyone.

“I’m not absolutely convinced,” said Mary Anna Roche, 88, a former newspaper reporter. The seal’s novelty, she suggested, would wear off quickly.

But she softened when she looked at her friend Clem Smith running her fingers through Paro’s fur.

“What are they feeding you?” Ms. Smith, a Shakespeare lover who said she was 98, asked the seal. “You’re getting fat.”

A stickler for accuracy, Ms. Roche scolded her friend. “You’re 101, remember? I was at your birthday!”

The seal stirred at her tone.

“Oh!” Ms. Roche exclaimed. “He’s opening his eyes.”

As the hour wore on, staff members observed that the robot facilitated human interaction, rather than replaced it.

“This is a nice gathering,” said Philip Richardson, who had spoken only a few words since having a stroke a few months earlier.

Dorothy Marette, the clinical psychologist supervising the cafeteria klatch, said she initially presumed that those who responded to Paro did not realize it was a robot — or that they forgot it between visits.

Yet several patients whose mental faculties are entirely intact have made special visits to her office to see the robotic harp seal.

“I know that this isn’t an animal,” said Pierre Carter, 62, smiling down at the robot he calls Fluffy. “But it brings out natural feelings.”

Then Dr. Marette acknowledged an observation she had made of her own behavior: “It’s hard to walk down the hall with it cooing and making noises and not start talking to it. I had a car that I used to talk to that was a lot less responsive.”

Accepting a Trusty Tool

That effect, computer science experts said, stems from what appears to be a basic human reflex to treat objects that respond to their surroundings as alive, even when we know perfectly well that they are not.

Teenagers wept over the deaths of their digital Tamagotchi pets in the late 1990s; some owners of Roomba robotic vacuum cleaners are known to dress them up and give them nicknames.

“When something responds to us, we are built for our emotions to trigger, even when we are 110 percent certain that it is not human,” said Clifford Nass, a professor of computer science at Stanford University. “Which brings up the ethical question: Should you meet the needs of people with something that basically suckers them?”

An answer may lie in whether one signs on to be manipulated.

For Amna Carreiro, a program manager at the M.I.T. Media Lab who volunteered to try a prototype of Autom, the diet coach robot, the point was to lose weight. After naming her robot Maya (“Just something about the way it looked”) and dutifully entering her meals and exercise on its touch screen for a few nights, “It kind of became part of the family,” she said. She lost nine pounds in six weeks.

Cory Kidd, who developed Autom as a graduate student at M.I.T., said that eye contact was crucial to the robot’s appeal and that he had opted for a female voice because of research showing that people see women as especially supportive and helpful. If a user enters an enthusiastic “Definitely!” to the question “Will you tell me what you’ve eaten today?” Autom gets right down to business. A reluctant “If you insist” elicits a more coaxing tone. It was the blend of the machine’s dispassion with its personal attention that Ms. Carreiro found particularly helpful.
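A toy sketch of that tone switching, with invented phrases and a made-up coach_prompt function standing in for Autom’s real dialogue logic:

```python
# Invented sketch of the tone switching described above; Autom's real dialogue
# logic and phrases are stand-ins here, not the actual product.

ENTHUSIASTIC = {"definitely", "sure", "yes", "absolutely"}
RELUCTANT = {"if you insist", "i guess", "fine", "whatever"}


def coach_prompt(reply: str) -> str:
    text = reply.strip().lower()
    if any(phrase in text for phrase in ENTHUSIASTIC):
        return "Great. What did you eat today?"             # get right down to business
    if any(phrase in text for phrase in RELUCTANT):
        return ("I know this part is a chore, but it only takes a minute. "
                "Shall we start with breakfast?")            # a more coaxing tone
    return "Will you tell me what you've eaten today?"


if __name__ == "__main__":
    print(coach_prompt("Definitely!"))
    print(coach_prompt("If you insist"))
```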

“It would say, ‘You did not fulfill your goal today; how about 15 minutes of extra walking tomorrow?’ ” she recalled. “It was always ready with a Plan B.”

Aetna, the insurance company, said it hoped to set up a trial when the robot goes on sale next year to see whether people using it stay on their diets longer than those who use other programs.

Of course, Autom’s users can choose to lie. That may be less feasible with an emotion detector under development with a million-dollar grant from the National Institute on Drug Abuse that is aimed at substance abusers who want to stay clean.

Dr. Edward Boyer of the University of Massachusetts Medical School plans to test the system, which he calls a “portable conscience,” on Iraq veterans later this year. The volunteers will enter information, like places or people or events that set off cravings, and select a range of messages that they think will be most effective in a moment of temptation.

Then they don wristbands with sensors that detect physiological information correlated with their craving. With a spike in pulse not related to exertion, for instance, a wireless signal would alert the person’s cellphone, which in turn would flash a message like “What are you doing now? Is this a good time to talk?” It might grow more insistent if there was no reply. (Hallmark has been solicited for help in generating evocative messages.)
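In rough terms, the rule might look something like the sketch below; the thresholds, message text and the send_message/wait_for_reply callbacks are all hypothetical, not Dr. Boyer’s actual system.

```python
# Hypothetical rule-of-thumb sketch of the "portable conscience" idea described
# above: flag a pulse spike that exertion (from an accelerometer, say) does not
# explain, then send increasingly insistent messages until the wearer replies.
# Thresholds, message text and the two callbacks are invented, not the real system.

PULSE_SPIKE = 25          # beats per minute above the wearer's baseline
EXERTION_CEILING = 0.3    # movement below this level = "not exercising"

MESSAGES = [
    "What are you doing now? Is this a good time to talk?",
    "This looks like a craving. Want to look at the pictures you chose?",
]


def craving_suspected(pulse: float, baseline: float, exertion: float) -> bool:
    return (pulse - baseline) > PULSE_SPIKE and exertion < EXERTION_CEILING


def alert_phone(pulse: float, baseline: float, exertion: float,
                send_message, wait_for_reply) -> None:
    if not craving_suspected(pulse, baseline, exertion):
        return
    for message in MESSAGES:              # grow more insistent if there is no reply
        send_message(message)
        if wait_for_reply(timeout_seconds=120):
            return


if __name__ == "__main__":
    sent = []
    alert_phone(pulse=105, baseline=70, exertion=0.1,
                send_message=sent.append,
                wait_for_reply=lambda timeout_seconds: False)  # simulate being ignored
    print(sent)
```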

With GPS units and the right algorithms, such a system could tactfully suggest other routes when recovering addicts approached places that hold particular temptation — like a corner where they used to buy drugs. It could show pictures of their children or play a motivational song.

“It works when you begin to see it as a trustworthy companion,” Dr. Boyer said. “It’s designed to be there for you.”

Three

Teaching Machine Sticks to Script in South Korea
By CHOE SANG-HUN

SEOUL, South Korea — Carefully trained by a government-run lab, she is the latest and perhaps most innovative recruit in South Korea’s obsessive drive to teach its children the global language of English.

Over the years, this country has imported thousands of Americans, Canadians, South Africans and others to supplement local teachers of English. But the program has strained the government’s budget, and it is increasingly difficult to get native English speakers to live on islands and other remote areas.

Enter Engkey, a teacher with exacting standards and a silken voice. She is just a little penguin-shaped robot, but both symbolically and practically, she stands for progress, achievement and national pride. What she does not stand for, however, is bad pronunciation.

“Not good this time!” Engkey admonished a sixth grader as he stooped awkwardly over her. “You need to focus more on your accent. Let’s try again.”

Engkey, a contraction of English jockey (as in disc jockey), is the great hope of Choi Mun-taek, a team leader at the Korea Institute of Science and Technology’s Center for Intelligent Robotics. “In three to five years, Engkey will mature enough to replace native speakers,” he said.

Dr. Choi’s team recently demonstrated Engkey’s interactions with four sixth graders from Seoul who had not met the robot. Engkey tracked a student around the room, wheeling to a stop a foot away, and extended a greeting in a synthesized female voice. (Although a male voice is also available, Dr. Choi says the female model seems more effective in teaching.) She then led the boy to a shelf stacked with plastic fruit.

“How can I help you today?” Engkey said.

“Do you have any fruits on sale?” the student said.

“Wow! Very good!” Engkey exulted. She sounded a fanfare, spun and raised her left arm for a high-five. A screen on her chest showed stars grading the student.

The students were amazed.

“It’s cool — a machine hearing and responding to me,” said Yang Ui-ryeol. “There seems to be a life inside it.”

Still, Engkey has a long way to go to fulfill her creators’ dream. The robot can help students practice only scripted conversations and is at a loss if a student veers off script, as Yang did during the demonstration.

“I love you,” the boy said to appease Engkey after he was chastised for a bad pronunciation. Engkey would have none of it; it was not in her programmed script.

“You need to work on your accent,” the robot repeated.

When Yang said, “I don’t like apples” instead of “I love apples,” as he was supposed to, Engkey froze. The boy patted her and said, “Hello, are you alive or dead?”

The trials and errors at the Korea Institute, a wooded top-security compound for the country’s best scientific minds, represent South Korea’s ambitious robotic dreams.

Last month, it announced a trial service for 11 types of intelligent robots this year. They include “kiosk robots” to roam amusement parks selling tickets, and “robo soldiers” that will man part of the 155-mile border with North Korea with a never-sleeping camera eye, night vision and lethal fire power.

But the most notable step was the country’s plan to use robots as teaching aids. In February, the Education Ministry began deploying hundreds of them as part of a plan to equip all the nation’s 8,400 kindergartens with robots by 2013.

One type of robot, toddler-size with a domed head and boxlike body on wheels, takes attendance, reads fairy tales and sings songs with children. A smaller puppy robot helps lead gymnastics and flashes red eyes if touched too roughly.

Even though they are little more than fancy toys, experts say, these robots prepare children for a fast-approaching robotic future.

Early this year, when the institute did an experimental run of Engkey in Masan on the south coast, there was a mad rush among children to be selected for the program, said Kim Bo-yeong, an English teacher.

“They all loved robots. They get shy before a foreign native speaker, afraid to make mistakes,” Ms. Kim said. “But they find robots much easier to talk to.”

An independent evaluator of the trial noticed that Engkey required the constant presence of a technical operator. “Engkey has a long way to go if it wants to avoid becoming an expensive yet ignored heap of scrap metal at the corner of the classroom,” said Ban Jae-chun, an education professor at Chungnam National University.

Dr. Choi knows the challenge. After tests in more schools this winter, he hopes to commercialize Engkey and to reduce the price, currently $24,000 to $32,000, to below $8,000.

Dr. Choi said his team was racing to improve the robot’s ability to recognize students and to discern and respond to a student’s voice amid noise. It is also cramming Engkey with more conversational scenarios.

For now, though, Engkey’s limits quickly become apparent. Hahn Yesle, who participated in the recent demonstration, said: “Engkey is fun. But she is not human. Repeating the same dialogue is what she does. I wish she would become more expressive and responsive, like a human teacher.”

Four

Students, Meet Your New Teacher, Mr. Robot
By BENEDICT CAREY and JOHN MARKOFF

LOS ANGELES — The boy, a dark-haired 6-year-old, is playing with a new companion.

The two hit it off quickly — unusual for the 6-year-old, who has autism — and the boy is imitating his playmate’s every move, now nodding his head, now raising his arms.

“Like Simon Says,” says the autistic boy’s mother, seated next to him on the floor.

Yet soon he begins to withdraw; in a video of the session, he covers his ears and slumps against the wall.

But the companion, a three-foot-tall robot being tested at the University of Southern California, maintains eye contact and performs another move, raising one arm up high.

Up goes the boy’s arm — and now he is smiling at the machine.

In a handful of laboratories around the world, computer scientists are developing robots like this one: highly programmed machines that can engage people and teach them simple skills, including household tasks, vocabulary or, as in the case of the boy, playing, elementary imitation and taking turns.

So far, the teaching has been very basic, delivered mostly in experimental settings, and the robots are still works in progress, a hackers’ gallery of moving parts that, like mechanical savants, each do some things well at the expense of others.

Yet the most advanced models are fully autonomous, guided by artificial intelligence software like motion tracking and speech recognition, which can make them just engaging enough to rival humans at some teaching tasks.

Researchers say the pace of innovation is such that these machines should begin to learn as they teach, becoming the sort of infinitely patient, highly informed instructors that would be effective in subjects like foreign language or in repetitive therapies used to treat developmental problems like autism.

Several countries have been testing teaching machines in classrooms. South Korea, known for its enthusiasm for technology, is “hiring” hundreds of robots as teacher aides and classroom playmates and is experimenting with robots that would teach English.

Already, these advances have stirred dystopian visions, along with the sort of ethical debate usually confined to science fiction. “I worry that if kids grow up being taught by robots and viewing technology as the instructor,” said Mitchel Resnick, head of the Lifelong Kindergarten group at the Media Laboratory at the Massachusetts Institute of Technology, “they will see it as the master.”

Most computer scientists reply that they have neither the intention, nor the ability, to replace human teachers. The great hope for robots, said Patricia Kuhl, co-director of the Institute for Learning and Brain Sciences at the University of Washington, “is that with the right kind of technology at a critical period in a child’s development, they could supplement learning in the classroom.”

Lessons From RUBI

“Kenka,” says a childlike voice. “Ken-ka.”

Standing on a polka-dot carpet at a preschool on the campus of the University of California, San Diego, a robot named RUBI is teaching Finnish to a 3-year-old boy.

RUBI looks like a desktop computer come to life: its screen-torso, mounted on a pair of shoes, sprouts mechanical arms and a lunchbox-size head, fitted with video cameras, a microphone and voice capability. RUBI wears a bandanna around its neck and a fixed happy-face smile, below a pair of large, plastic eyes.

It picks up a white sneaker and says kenka, the Finnish word for shoe, before returning it to the floor. “Feel it; I’m a kenka.”

In a video of this exchange, the boy picks up the sneaker, says “kenka, kenka” — and holds up the shoe for the robot to see.

In person, most of today’s social robots are not remotely humanlike. Some speak well, others not at all. Some move on two legs, others on wheels. Many look like escapees from the Island of Misfit Toys.

They make for very curious company. The University of Southern California robot used with autistic children tracks a person throughout a room, approaching indirectly and pulling up just short of personal space, like a cautious child hoping to join a playground game.

The machine’s only words are exclamations (“Uh huh” for those drawing near; “Awww” for those moving away). Still, it’s hard to shake the sense that some living thing is close by. That sensation, however vague, is enough to facilitate a real exchange of information, researchers say.

In the San Diego classroom where RUBI has taught Finnish, researchers are finding that the robot enables preschool children to score significantly better on tests, compared with less interactive learning, as from tapes.

Preliminary results suggest that these students “do about as well as learning from a human teacher,” said Javier Movellan, director of the Machine Perception Laboratory at the University of California, San Diego. “Social interaction is apparently a very important component of learning at this age.”

Like any new kid in class, RUBI took some time to find a niche. Children swarmed the robot when it first joined the classroom: instant popularity. But by the end of the day, a couple of boys had yanked off its arms.

“The problem with autonomous machines is that people are so unpredictable, especially children,” said Corinna E. Lathan, chief executive of AnthroTronix, a Maryland company that makes a remotely controlled robot, CosmoBot, to assist in therapy with developmentally delayed children. “It’s impossible to anticipate everything that can happen.”

The RUBI team hit upon a solution one part mechanical and two parts psychological. The engineers programmed RUBI to cry when its arms were pulled. Its young playmates quickly backed off at the sound.

If the sobbing continued, the children usually shifted gears and came forward — to deliver a hug.

Re-armed and newly sensitive, RUBI was ready to test as a teacher. In a paper published last year, researchers from the University of California, San Diego, the Massachusetts Institute of Technology and the University of Joensuu in Finland found that the robot significantly improved the vocabulary of nine toddlers.

After testing the youngsters’ knowledge of 20 words and introducing them to the robot, the researchers left RUBI to operate on its own. The robot showed images on its screen and instructed children to associate them with words.

After 12 weeks, the children’s knowledge of the 10 words taught by RUBI increased significantly, while their knowledge of 10 control words did not. “The effect was relatively large, a reduction in errors of more than 25 percent,” the authors concluded.

Researchers in social robotics — a branch of computer science devoted to enhancing communication between humans and machines — at Honda Labs in Mountain View, Calif., have found a similar result with their robot, a three-foot character called Asimo, which looks like a miniature astronaut. In one 20-minute session the machine taught grade-school students how to set a table — improving their accuracy by about 25 percent, a recent study found.

At the University of Southern California, researchers have had their robot, Bandit, interact with children with autism. In a pilot study, four children with the diagnosis spent about 30 minutes with this robot when it was programmed to be socially engaging and another half-hour when it behaved randomly, more like a toy. The results are still preliminary, said David Feil-Seifer, who ran the study, but suggest that the children spoke more often and spent more time in direct interaction when the robot was responsive, compared with when it acted randomly.

Making the Connection

In a lab at the University of Washington, Morphy, a pint-size robot, catches the eye of an infant girl and turns to look at a toy.

No luck; the girl does not follow its gaze, as she would a human’s.

In a video the researchers made of the experiment, the girl next sees the robot “waving” to an adult. Now she’s interested; the sight of the machine interacting registers it as a social being in the young brain. She begins to track what the robot is looking at, to the right, the left, down. The machine has elicited what scientists call gaze-following, an essential first step of social exchange.

“Before they have language, infants pay attention to what I call informational hotspots,” where their mother or father is looking, said Andrew N. Meltzoff, a psychologist who is co-director of the university’s Institute for Learning and Brain Sciences. This, he said, is how learning begins.

This basic finding, to be published later this year, is one of dozens from a field called affective computing that is helping scientists discover exactly which features of a robot make it most convincingly “real” as a social partner, a helper, a teacher.

“It turns out that making a robot more closely resemble a human doesn’t get you better social interactions,” said Terrence J. Sejnowski, a neuroscientist at the University of California, San Diego. The more humanlike machines look, the creepier they can seem.

The machine’s behavior is what matters, Dr. Sejnowski said. And very subtle elements can make a big difference.

The timing of a robot’s responses is one. The San Diego researchers found that if RUBI reacted to a child’s expression or comment too fast, it threw off the interaction; the same happened if the response was too slow. But if the robot reacted within about a second and a half, child and machine were smoothly in sync.
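As a purely illustrative sketch of that timing window (the 1.5-second figure comes from the researchers; everything else here is invented):

```python
import time

# Illustrative only: hold the robot's reaction until roughly 1.5 seconds after the
# child's cue, neither sooner nor much later.

TARGET_DELAY = 1.5  # seconds


def respond_in_sync(cue_time: float, perform_reaction) -> None:
    elapsed = time.monotonic() - cue_time
    if elapsed < TARGET_DELAY:
        time.sleep(TARGET_DELAY - elapsed)   # don't react too fast...
    perform_reaction()                       # ...and don't wait much longer


if __name__ == "__main__":
    cue = time.monotonic()                   # e.g. the child smiles
    respond_in_sync(cue, lambda: print("robot smiles back"))
```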

Physical rhythm is crucial. In recent experiments at a day care center in Japan, researchers have shown that having a robot simply bob or shake at the same rhythm a child is rocking or moving can quickly engage even very fearful children with autism.

“The child begins to notice something in that synchronous behavior and open up,” said Marek Michalowski of Carnegie Mellon University, who collaborated on the studies. Once that happens, he said, “you can piggyback social behaviors onto the interaction, like eye contact, joint attention, turn taking, things these kids have trouble with.”

One way to begin this process is to have a child mimic the physical movements of a robot and vice versa. In a continuing study financed by the National Institutes of Health, scientists at the University of Connecticut are conducting therapy sessions for children with autism using a French robot called Nao, a two-foot humanoid that looks like an elegant Transformer toy. The robot, remotely controlled by a therapist, demonstrates martial arts kicks and chops and urges the child to follow suit; then it encourages the child to lead.

“I just love robots, and I know this is therapy, but I don’t know — I think it’s just fun,” said Sam, an 8-year-old from New Haven with Asperger’s syndrome, who recently engaged in the therapy.

This simple mimicry seems to build a kind of trust, and increase sociability, said Anjana Bhat, an assistant professor in the department of education who is directing the experiment. “Social interactions are so dependent on whether someone is in sync with you,” Dr. Bhat said. “You walk fast, they walk fast; you go slowly, they go slowly — and soon you are interacting, and maybe you are learning.”

Personality matters, too, on both sides. In their studies with Asimo, the Honda robot, researchers have found that when the robot teacher is “cooperative” (“I am going to put the water glass here; do you think you can help me by placing the water glass on the same place on your side?”), children 4 to 6 did much better than when Asimo lectured them, or allowed them to direct themselves (“place the cup and saucer anywhere you like”). The teaching approach made less difference with students ages 7 to 10.

“The fact is that children’s reactions to a robot may vary widely, by age and by individual,” said Sandra Okita, a Columbia University researcher and co-author of the study.

If robots are to be truly effective guides, in short, they will have to do what any good teacher does: learn from students when a lesson is taking hold and when it is falling flat.

Learning From Humans

“Do you have any questions, Simon?”

On a recent Monday afternoon, Crystal Chao, a graduate student in robotics at the Georgia Institute of Technology, was teaching a five-foot robot named Simon to put away toys. She had given some instructions — the flower goes in the red bin, the block in the blue bin — and Simon had correctly put away several of these objects. But now the robot was stumped, its doughboy head tipped forward, its fawn eyes blinking at a green toy water sprinkler.

Ms. Chao repeated her query, perhaps the most fundamental in all of education: Do you have any questions?

“Let me see,” said Simon, in a childlike machine voice, reaching to pick up the sprinkler. “Can you tell me where this goes?”

“In the green bin,” came the answer.

Simon nodded, dropping it in that bin.

“Makes sense,” the robot said.

In addition to tracking motion and recognizing language, Simon accumulates knowledge through experience.

Just as humans can learn from machines, machines can learn from humans, said Andrea Thomaz, an assistant professor of interactive computing at Georgia Tech who directs the project. For instance, she said, scientists could equip a machine to understand the nonverbal cues that signal “I’m confused” or “I have a question” — giving it some ability to monitor how its lesson is being received.

To ask, as Ms. Chao did: Do you have any questions?

This ability to monitor and learn from experience is the next great frontier for social robotics — and it probably depends, in large part, on unraveling the secrets of how the human brain accumulates information during infancy.

In San Diego, researchers are trying to develop a human-looking robot with sensors that approximate the complexity of a year-old infant’s abilities to feel, see and hear. Babies learn, seemingly effortlessly, by experimenting, by mimicking, by moving their limbs. Could a machine with sufficient artificial intelligence do the same? And what kind of learning systems would be sufficient?

The research group has bought a $70,000 robot, built by a Japanese company, that is controlled by a pneumatic pressure system that will act as its senses, in effect helping it map out the environment by “feeling” in addition to “seeing” with embedded cameras. And that is the easy part.

The much steeper challenge is to program the machine to explore, as infants do, and build on moment-to-moment experience. Ideally its knowledge will be cumulative, not only recalling the layout of a room or a house, but using that stored knowledge to make educated guesses about a new room.

The researchers are shooting for nothing less than capturing the foundation of human learning — or, at least, its artificial intelligence equivalent. If robots can learn to learn, on their own and without instruction, they can in principle make the kind of teachers that are responsive to the needs of a class, even an individual child.

Parents and educators would certainly have questions about robots’ effectiveness as teachers, as well as ethical concerns about potential harm they might do. But if social robots take off in the way other computing technologies have, parents may have more pointed ones: Does this robot really “get” my child? Is its teaching style right for my son’s needs, my daughter’s talents?

That is, the very questions they would ask about any teacher.

Five

Computers Learn to Listen, and Some Talk Back
By STEVE LOHR and JOHN MARKOFF

“Hi, thanks for coming,” the medical assistant says, greeting a mother with her 5-year-old son. “Are you here for your child or yourself?”

The boy, the mother replies. He has diarrhea.

“Oh no, sorry to hear that,” she says, looking down at the boy.

The assistant asks the mother about other symptoms, including fever (“slight”) and abdominal pain (“He hasn’t been complaining”).

She turns again to the boy. “Has your tummy been hurting?” Yes, he replies.

After a few more questions, the assistant declares herself “not that concerned at this point.” She schedules an appointment with a doctor in a couple of days. The mother leads her son from the room, holding his hand. But he keeps looking back at the assistant, fascinated, as if reluctant to leave.

Maybe that is because the assistant is the disembodied likeness of a woman’s face on a computer screen — a no-frills avatar. Her words of sympathy are jerky, flat and mechanical. But she has the right stuff — the ability to understand speech, recognize pediatric conditions and reason according to simple rules — to make an initial diagnosis of a childhood ailment and its seriousness. And to win the trust of a little boy.

“Our young children and grandchildren will think it is completely natural to talk to machines that look at them and understand them,” said Eric Horvitz, a computer scientist at Microsoft’s research laboratory who led the medical avatar project, one of several intended to show how people and computers may communicate before long.

For decades, computer scientists have been pursuing artificial intelligence — the use of computers to simulate human thinking. But in recent years, rapid progress has been made in machines that can listen, speak, see, reason and learn, in their way. The prospect, according to scientists and economists, is not only that artificial intelligence will transform the way humans and machines communicate and collaborate, but will also eliminate millions of jobs, create many others and change the nature of work and daily routines.

The artificial intelligence technology that has moved furthest into the mainstream is computer understanding of what humans are saying. People increasingly talk to their cellphones to find things, instead of typing. Both Google’s and Microsoft’s search services now respond to voice commands. More drivers are asking their cars to do things like find directions or play music.

The number of American doctors using speech software to record and transcribe accounts of patient visits and treatments has more than tripled in the past three years to 150,000. The progress is striking. A few years ago, supraspinatus (a rotator cuff muscle) got translated as “fish banana.” Today, the software transcribes all kinds of medical terminology letter perfect, doctors say. It has more trouble with other words and grammar, requiring wording changes in about one of every four sentences, doctors say.

“It’s unbelievably better than it was five years ago,” said Dr. Michael A. Lee, a pediatrician in Norwood, Mass., who now routinely uses transcription software. “But it struggles with ‘she’ and ‘he,’ for some reason. When I say ‘she,’ it writes ‘he.’ The technology is sexist. It likes to write ‘he.’ ”

Meanwhile, translation software being tested by the Defense Advanced Research Projects Agency is fast enough to keep up with some simple conversations. With some troops in Iraq, English is translated to Arabic and Arabic to English. But there is still a long way to go. When a soldier asked a civilian, “What are you transporting in your truck?” the Arabic reply was that the truck was “carrying tomatoes.” But the English translation became “pregnant tomatoes.” The speech software understood “carrying,” but not the context.

Yet if far from perfect, speech recognition software is good enough to be useful in more ways all the time. Take call centers. Today, voice software enables many calls to be automated entirely. And more advanced systems can understand even a perplexed, rambling customer with a misbehaving product well enough to route the caller to someone trained in that product, saving time and frustration for the customer. They can detect anger in a caller’s voice and respond accordingly — usually by routing the call to a manager.

So the outlook is uncertain for many of the estimated four million workers in American call centers or the nation’s 100,000 medical transcriptionists, whose jobs were already threatened by outsourcing abroad. “Basic work that can be automated is in the bull’s-eye of both technology and globalization, and the rise of artificial intelligence just magnifies that reality,” said Erik Brynjolfsson, an economist at the Sloan School of Management at the Massachusetts Institute of Technology.

Still, Mr. Brynjolfsson says artificial intelligence will also spur innovation and create opportunities, both for individuals and entrepreneurial companies, just as the Internet has led to new businesses like Google and new forms of communication like blogs and social networking. Smart machines, experts predict, will someday tutor students, assist surgeons and safely drive cars.

The Digital Assistant

“Hi, are you looking for Eric?” asks the receptionist outside the office of Eric Horvitz at Microsoft.

This assistant is an avatar, a time manager for office workers. Behind the female face on the screen is an arsenal of computing technology including speech understanding, image recognition and machine learning. The digital assistant taps databases that include the boss’s calendar of meetings and appointments going back years, and his work patterns. Its software monitors his phone calls by length, person spoken to, time of day and day of the week. It also tracks his location and computer use by applications used — e-mail, writing documents, browsing the Web — for how long and time of day.

When a colleague asks when Mr. Horvitz’s meeting or phone call may be over, the avatar reviews that data looking for patterns — for example, how long have calls to this person typically lasted, at similar times of day and days of the week, when Mr. Horvitz was also browsing the Web while talking? “He should be free in five or six minutes,” the avatar decides.
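One way to picture that pattern matching is as a query over past calls filtered by contact, time of day, day of week and multitasking, as in this invented sketch (the data model and numbers are hypothetical, not Microsoft’s).

```python
from __future__ import annotations

import statistics
from dataclasses import dataclass

# Invented sketch of the pattern-matching described above: estimate how long the
# current call will last from past calls to the same person at similar times,
# optionally only those where the boss was also browsing the Web.


@dataclass
class PastCall:
    contact: str
    hour: int            # hour of day the call started
    weekday: int         # 0 = Monday
    was_browsing: bool   # was he multitasking at the time?
    minutes: float       # how long the call lasted


def estimate_minutes(history: list[PastCall], contact: str,
                     hour: int, weekday: int, browsing: bool) -> float | None:
    similar = [c.minutes for c in history
               if c.contact == contact
               and abs(c.hour - hour) <= 1
               and c.weekday == weekday
               and c.was_browsing == browsing]
    return statistics.median(similar) if similar else None


if __name__ == "__main__":
    history = [PastCall("colleague", 14, 2, True, 6),
               PastCall("colleague", 15, 2, True, 5),
               PastCall("colleague", 14, 2, False, 20)]
    print(estimate_minutes(history, "colleague", 14, 2, True))  # -> 5.5
```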

The avatar has a database of all the boss’s colleagues at work and relationships, from research team members to senior management, and it can schedule meetings. Mr. Horvitz has given the avatar rules for the kinds of meetings that are more and less interruptible. A session with a research peer, requiring deep concentration, may be scored as less interruptible than a meeting with a senior executive. “It’s O.K. to interrupt him,” the assistant tells a visitor. “Just go in.”

As part of the project, the researchers plan to program the avatar to engage in “work-related chitchat” with colleagues who are waiting.

The conversation could be about the boss’s day: “Eric’s been in back-to-back meetings this afternoon. But he’s looking forward to seeing you.” Or work done with the boss: “Yes, you were in the big quarterly review with Eric last month.” Or even a local team: “How about that Mariners game last night?”

Mr. Horvitz shares a human administrative assistant with other senior scientists. The avatar’s face is modeled after her. At Microsoft, workers typically handle their own calendars. So the main benefit of the personal assistant, Mr. Horvitz says, is to manage his time better and coordinate his work with colleagues’. “I think of it as an extension of me,” he said. “The result is a broader, more effective Eric.”

Computers with artificial intelligence can be thought of as the machine equivalent of idiot savants. They can be extremely good at skills that challenge the smartest humans, playing chess like a grandmaster or answering “Jeopardy!” questions like a champion. Yet those skills are in narrow domains of knowledge. What is far harder for a computer is common-sense skills like understanding the context of language and social situations when talking — taking turns in conversation, for example.

The scheduling assistant can plumb vast data vaults in a fraction of a second to find a pattern, but a few unfamiliar words leave it baffled. Jokes, irony and sarcasm do not compute.

That brittleness can lead to mistakes. In the case of the office assistant, it might be a meeting missed or a scheduling mix-up. But the medical assistant could make more serious mistakes, like an incorrect diagnosis or a seriously ill child sent home.

The Microsoft projects are only research initiatives, but they suggest where things are headed. And as speech recognition and other artificial intelligence technologies take on more tasks, there are concerns about the social impact of the technology and too little attention paid to its limitations.

Smart machines, some warn, could be used as tools to isolate corporations, government and the affluent from the rest of society. Instead of people listening to restive customers and citizens, they say, it will be machines.

“Robot voices could be the perfect wall to protect institutions that don’t want to deal with complaints,” said Jaron Lanier, a computer scientist and author of “You Are Not a Gadget” (Knopf, 2010).

Smarter Devices

“I’m looking for a reservation for two people tomorrow night at 8 at a romantic restaurant within walking distance.”

That spoken request seems simple enough, but for a computer to respond intelligently requires a ballet of more than a dozen technologies.

A host of companies — AT&T, Microsoft, Google and startups — are investing in services that hint at the concept of machines that can act on spoken commands. They go well beyond voice-enabled Internet search.

Perhaps the furthest along is Siri, a Silicon Valley company offering a “virtual personal assistant,” a collection of software programs that can listen to a request, find information and take action.

In this case, Siri, presented as an iPhone application, sends the spoken request for a romantic restaurant as an audio file to computers operated by Nuance Communications, the largest speech-recognition company, which convert it to text. The text is then returned to Siri’s computers, which make educated guesses about the meaning.

“It’s a bit like the task faced by a waiter for whom English is a second language in a noisy restaurant,” said Tom Gruber, an artificial intelligence researcher and co-founder of Siri. “It isn’t perfect, but in context the waiter can usually figure out what you want.”

The Siri system taps more data to decide whether the request refers to a romantic restaurant or a romantic comedy. It knows the location of the phone and has rules for the meaning of phrases like “within walking distance.” It scans online restaurant review services like Yelp and Gayot for “romantic.”

Siri takes the winnowed list of restaurants, contacts the online reservation service Open Table and gets matches for those with tables available at 8 the next day. Those restaurants are then displayed on the user’s phone, and the reservation can be completed by tapping a button on the screen. The elaborate digital dance can be completed in a few seconds — when it works.
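Stripped of the real services involved, that pipeline reads roughly like the sketch below; every function is a stand-in, not Siri’s or Nuance’s actual API.

```python
from __future__ import annotations

# Rough, hypothetical outline of the pipeline described above:
# audio -> text -> guessed meaning -> review-site filtering -> reservation check.


def speech_to_text(audio: bytes) -> str:
    # Stand-in for the speech-recognition service that converts audio to text.
    return ("reservation for two people tomorrow night at 8 at a "
            "romantic restaurant within walking distance")


def parse_request(text: str, phone_location: tuple[float, float]) -> dict:
    # "Educated guesses" about meaning: party size, time, mood, allowed distance.
    return {"party": 2, "time": "20:00", "mood": "romantic",
            "near": phone_location, "max_walk_km": 1.5}


def find_candidates(intent: dict) -> list[str]:
    # Stand-in for scanning review services for the mood keyword near the phone.
    return ["Trattoria Luna", "Chez Margaux"]


def with_tables_available(restaurants: list[str], time: str, party: int) -> list[str]:
    # Stand-in for querying an online reservation service for open tables.
    return [r for r in restaurants if r != "Trattoria Luna"]


def handle(audio: bytes, phone_location: tuple[float, float]) -> list[str]:
    text = speech_to_text(audio)
    intent = parse_request(text, phone_location)
    candidates = find_candidates(intent)
    return with_tables_available(candidates, intent["time"], intent["party"])


if __name__ == "__main__":
    print(handle(b"...", (40.74, -73.99)))  # -> ['Chez Margaux']
```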

Apple is so impressed that it bought Siri in April in a private transaction estimated at more than $200 million.

Nelson Walters, an MTV television producer in New York, is a Siri fan. It saves him time and impresses his girlfriend. “I will no longer get lost in searching Yelp for restaurant recommendations,” he said. But occasionally, Mr. Walters said, Siri stumbles. Recently, he asked Siri for the location of a sushi restaurant he knew. Siri replied with directions to an Asian escort service. “I swear that’s not what I was looking for,” he said.

Mr. Gruber said Siri had heard an unfamiliar Japanese word, but did not know the context and guessed wrong.

In cars, too, speech recognition systems have vastly improved. In just three years, the Ford Motor Company, using Nuance software, has increased the number of speech commands its vehicles recognize from 100 words to 10,000 words and phrases.

Systems like Ford’s Sync are becoming popular options in new cars. They are also seen by some safety specialists as a defense, if imperfect, against the distracting array of small screens for GPS devices, smartphones and the like.

Later this summer, a new model of the Ford Edge will recognize complete addresses, including city and state spoken in a single phrase, and respond by offering turn-by-turn directions.

To the Customer’s Rescue

“Please select one of the following products from our menu,” the electronics giant Panasonic used to tell callers seeking help with products from power tools to plasma televisions.

It was not working. Callers took an average of 2 1/2 minutes merely to wade through the menu, and 40 percent hung up in frustration. “We were drowning in calls,” recalled Donald Szczepaniak, vice president of customer service. Panasonic reached out to AT&T Labs in 2005 for help.

The AT&T researchers worked with thousands of hours of recorded calls to the Panasonic center, in Chesapeake, Va., to build statistical models of words and phrases that callers used to describe products and problems, and to create a database that is constantly updated. “It’s a baby, and the more data you give it, the smarter it becomes,” said Mazin Gilbert, a speech technology expert at AT&T Labs.

The goal of the system is to identify key words — among a person’s spoken phrases and sentences — so an automated assistant can intelligently reply.
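A toy version of that keyword spotting might look like the following, with invented vocabularies and queue names; the recorded exchange below shows the real system handling the same kind of call.

```python
# Toy keyword-spotting router; the vocabularies and queue names are invented,
# not AT&T's or Panasonic's actual configuration.

PRODUCT_KEYWORDS = {
    "plasma": "plasma_tv_agents",
    "lcd": "lcd_tv_agents",
    "camcorder": "camera_agents",
}
SELF_SERVICE_PHRASES = {"register", "registration", "repair center", "where to take"}


def route(transcript: str) -> str:
    text = transcript.lower()
    if any(phrase in text for phrase in SELF_SERVICE_PHRASES):
        return "automated_self_service"      # simple problems stay automated
    for keyword, queue in PRODUCT_KEYWORDS.items():
        if keyword in text:
            return queue                     # send to an agent trained on that product
    return "general_agents"                  # or ask a clarifying question


if __name__ == "__main__":
    print(route("my TV was stuck in Spanish, it's an LCD"))  # -> lcd_tv_agents
    print(route("where do I register my new camcorder"))     # -> automated_self_service
```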

“How may I help you?” asked the automated female voice in one recording.

“I was watching ‘American Idol’ with my dog on Channel 5,” a distraught woman on the line said recently, “and suddenly my TV was stuck in Spanish.”

“What kind of TV?” the automated assistant asked, suggesting choices that include plasma, LCD and others.

“LCD,” replied the woman, and her call was sent to an agent trained in solving problems with LCD models.

Simple problems — like product registration or where to take a product for repairs — can be resolved in the automated system alone. That technology has improved, but callers have also become more comfortable speaking to the system. A surprising number sign off by saying, “Thank you.”

Some callers, especially younger ones, also make things easier for the computer by uttering a key phrase like “plasma help,” Mr. Szczepaniak said. “I call it the Google-ization of the customer,” he said.

Over all, half of the calls to Panasonic are handled in the automated system, up from 10 percent five years ago, estimated Lorraine Robbins, a manager.

But the other half of calls are more complex problems — like connecting a digital television to a cable box. In those cases, the speech recognition system quickly routes a call to an agent trained on the product, so far more problems are resolved with a single call. Today, Panasonic resolves one million more customer problems a year with 1.6 million fewer total calls than five years ago. The cost of resolving a customer issue has declined by 50 percent.

The speech technology’s automated problem sorting has enabled Panasonic to globalize its customer service, with inquiries about older and simpler products routed to its call centers in the Philippines and Jamaica. The Virginia center now focuses on high-end Panasonic products like plasma TVs and home theater equipment. And while the center’s head count at 200 is the same as five years ago, the workers are more skilled these days. Those who have stayed have often been retrained.

Antoine Andujar, a call center agent for more than five years, attended electronics courses taught at the call center by instructors from a local community college. He used to handle many products, but now specializes in issues with plasma and LCD televisions.

Mr. Andujar completed his electronics certification program last year, and continues to study. “You have to move up in skills,” he said. “At this point, you have to be certified in electronics to get in the door here as a Panasonic employee.”

The Efficient Listener

“This call may be recorded for quality assurance purposes.”

But at a growing number of consumer call centers, technical support desks and company hot lines, the listener is a computer. One that can recognize not only words but also emotions — and listen for trends in customer complaints.

In the telephone industry, for example, companies use speech recognition software to provide an early warning about changes in a competitor’s calling plans. By detecting the frequent use of names like AT&T and other carriers, the software can alert the company to a rival that lowered prices, for example, far faster than would hundreds of customer service agents. The companies then have their customer agents make counteroffers to callers thinking of canceling service.
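Conceptually, that amounts to little more than counting tracked phrases per day and flagging a jump over the recent average, as in this invented sketch (phrases, thresholds and the alert rule are made up, not any vendor’s product).

```python
from __future__ import annotations

from collections import Counter

# Illustrative trend spotting over call transcripts: count tracked phrases per day
# and flag a sudden jump over the recent daily average.

TRACKED = ["at&t", "cash for clunkers"]


def daily_counts(transcripts: list[str]) -> Counter:
    counts = Counter()
    for text in transcripts:
        lowered = text.lower()
        for phrase in TRACKED:
            counts[phrase] += lowered.count(phrase)
    return counts


def alerts(today: Counter, recent_daily_average: dict, factor: float = 3.0) -> list[str]:
    return [phrase for phrase in TRACKED
            if today[phrase] > factor * recent_daily_average.get(phrase, 0.5)]


if __name__ == "__main__":
    calls = ["I heard cash for clunkers means I need a new quote",
             "does cash for clunkers apply to my old sedan"]
    print(alerts(daily_counts(calls), {"cash for clunkers": 0.2, "at&t": 4.0}))
    # -> ['cash for clunkers']
```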

Similar software, used by a customer of Verint Systems, began to notice the phrase “cash for clunkers” in hundreds of calls to its call center one weekend last year. It turned out that tens of thousands of car shoppers responding to the government incentive were calling for insurance quotes. Aetna created insurance offers for those particular callers and added workers to handle the volume.

And as Apple’s new smartphone surged in popularity several years ago, GoDaddy, an Internet services company, learned from its call-monitoring software that callers did not know how to use GoDaddy on their iPhones. The company rushed to retrain its agents to respond to the calls and pushed out an application allowing its users to control its service directly from the iPhone.

Certain emotions are now routinely detected at many call centers, by recognizing specific words or phrases, or by detecting other attributes in conversations. Voicesense, an Israeli developer of speech analysis software, has algorithms that measure a dozen indicators, including breathing, conversation pace and tone, to warn agents and supervisors that callers have become upset or volatile.

The real issue with artificial intelligence, as with any technology, is how it will be used. Automation is a remarkable tool of efficiency and convenience. Using an A.T.M. to make cash deposits and withdrawals beats standing in line to wait for a teller. If an automated voice system in a call center can answer a question, the machine is a better solution than lingering on hold for a customer service agent.

Indeed, the increasing usefulness of artificial intelligence — answering questions, completing simple tasks and assisting professionals — means the technology will spread, despite the risks. It will be up to people to guide how it is used.

“It’s not human intelligence, but it’s getting to be very good machine intelligence,” said Andries van Dam, a professor of computer science at Brown University. “There are going to be all sorts of errors and problems, and you need human checks and balances, but having artificial intelligence is way better than not having it.”
