
Math may teach computers to get sarcasm

STANFORD (US) — A new mathematical model may eventually help computers to think like people—and understand what we mean—not just what we say.

Language is so much more than a string of words. To understand what someone means, you need context.

Consider the phrase, “Man on first.” It doesn’t make much sense unless you’re at a baseball game. Or imagine a sign outside a children’s boutique that reads, “Baby sale—one week only.” You easily infer from the situation that the store isn’t selling babies but advertising bargains on gear for them.

But present these scenarios to a computer and there may be a communication breakdown. Computers aren’t very good at pragmatics—how language is used in social situations.

But in a new paper published in the journal Science, a pair of psychologists from Stanford University describe how they have taken steps toward changing that. Assistant Professors Michael Frank and Noah Goodman describe a quantitative theory of pragmatics that promises to help open the door to more human-like computer systems, ones that use language as flexibly as we do.

The work could help researchers understand language better and treat people with language disorders. It might even make speaking to a computerized customer service attendant a little less frustrating.

“If you’ve ever called an airline, you know the computer voice recognizes words but it doesn’t necessarily understand what you mean,” Frank says. “That’s the key feature of human language. In some sense it’s all about what the other person is trying to tell you, not what they’re actually saying.”

Frank and Goodman’s work is part of a broader trend to try to understand language using mathematical tools. That trend has led to technologies like Siri, the iPhone’s speech recognition personal assistant.

But turning speech and language into numbers has its obstacles, mainly the difficulty of formalizing notions such as “common knowledge” or “informativeness.” That is what Frank and Goodman sought to address.

The researchers recruited 745 participants for an online experiment. The participants saw a set of objects and were asked to bet on which one a particular word referred to.

For example, one group of participants saw a blue square, a blue circle and a red square. The question for that group was: Imagine you are talking to someone and you want to refer to the middle object. Which word would you use, “blue” or “circle”?

The other group was asked: Imagine someone is talking to you and uses the word “blue” to refer to one of these objects. Which object are they talking about?

“We modeled how a listener understands a speaker and how a speaker decides what to say,” Goodman explains. The results allowed Frank and Goodman to create a mathematical equation to predict human behavior and determine the likelihood of referring to a particular object.
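The flavor of that calculation can be sketched in a few lines of code. The snippet below is an illustrative reconstruction, not the equation from the paper: it assumes a speaker who prefers more specific words and a listener who reasons backward from that preference, using the blue square, blue circle, and red square from the experiment.

    # Illustrative sketch only: a toy speaker/listener model for the
    # blue square / blue circle / red square example described above.
    LEXICON = {
        "blue":   {"blue square", "blue circle"},
        "red":    {"red square"},
        "square": {"blue square", "red square"},
        "circle": {"blue circle"},
    }
    OBJECTS = ["blue square", "blue circle", "red square"]

    def speaker(obj):
        # The speaker picks among words that truthfully describe obj,
        # favoring words that apply to fewer objects (more informative).
        scores = {w: 1.0 / len(objs) for w, objs in LEXICON.items() if obj in objs}
        total = sum(scores.values())
        return {w: s / total for w, s in scores.items()}

    def listener(word):
        # The listener bets on each object in proportion to how likely a
        # speaker meaning that object would have been to choose this word.
        scores = {o: speaker(o).get(word, 0.0) for o in OBJECTS}
        total = sum(scores.values())
        return {o: s / total for o, s in scores.items()}

    print(speaker("blue circle"))   # prefers "circle" over "blue" (2/3 vs. 1/3)
    print(listener("blue"))         # leans toward the blue square (0.6 vs. 0.4)

In this toy setup, a listener who hears “blue” bets on the blue square for the same reason the participants did: a speaker who meant the circle would probably have said “circle.”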

“Before, you couldn’t take these informal theories of linguistics and put them into a computer. Now we’re starting to be able to do that,” Goodman says.

The researchers are already applying the model to studies on hyperbole, sarcasm, and other aspects of language, Frank says.

“It will take years of work but the dream is of a computer that really is thinking about what you want and what you mean rather than just what you said.”

More news from Stanford University: http://news.stanford.edu/


You are free to share this article under the Creative Commons Attribution-NoDerivs 3.0 Unported license.

2 Comments

  1. walt

    I guess someone is really trying to create machines that learn to learn. Maybe the clever algorithms in programs like this will provide ready-made chunks of relationships for those machines, like the stories in books a three-year-old explores? What a chasm there is between learning language and consulting rules of speech!
    For a while I thought we were on the brink of intelligent machines. Now they seem very far from our grasp.

  2. Aardman

    I think the whole program to make computers think like humans might be a very informative academic enterprise but, in practice, basically misguided.

    I do not want automated systems that are run by computers that think like humans; they will end up making the same mistakes that humans commit — mistakes of prejudice and errors in judgement based on making the wrong choices as to what underlying contextual assumptions are appropriate for the computing problem at hand. If computers will just make the same mistakes that we do, then of what use are they except to accommodate mental laziness? (I’m too lazy to think, let the computer do it.) Let’s not even talk about the legal nightmare of liability when a computer makes a wrong decision and drives an airliner into the ground.

    One of the big unrecognized benefits of current ‘dumb’ computers is that the process of writing code to transform a real-world problem into a computing problem forces human beings to systematically break down and analyse said problem. I.e. it demands rigorous logical and mental discipline that ultimately leads to deeper understanding of problems and thus expands the stock of human knowledge. I’d rather keep things that way: let the humans do the cognitive heavy lifting, then have the computers do the grunt computational work. It’s a bicycle for the mind, not an autopilot.
