U. PENN (US)—Computer scientist Ben Taskar says one of the biggest stumbling blocks in artificial intelligence is that computers learn far more slowly than children.
A toddler learns what a car is by someone pointing to an automobile and saying “car” several times. Computers learn what a car is by a person inputting thousands of images of a car from different viewpoints and of varying shapes.
As part of the recent trend toward bridging perception and language through machine learning, Taskar and his colleagues at the University of Pennsylvania are attempting to teach a computer to look at a video and answer the Five Ws: Who? What? When? Where? Why?
“The five questions reporters ask,” says Taskar, the Magerman Term Assistant Professor in the Department of Computer and Information Science, “we want to ask that of an image or a video, and then get the computer to answer.”
Using novel learning algorithms that combine audio, video, and text streams, Taskar and his research team are teaching computers to recognize faces and voices in videos. Their system recognizes when someone in the video or audio mentions a name, and whether the speaker is referring to himself or herself or to a third party. It then maps the correspondences between names and faces and between names and voices.
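The core idea, pairing names heard in the audio or text stream with the anonymous face tracks seen in the video, can be illustrated with a toy co-occurrence count. This is a hypothetical sketch, not Taskar's actual learning algorithm; the scenes, names, and face-track IDs are all invented:

```python
from collections import defaultdict

# Hypothetical data: each "scene" pairs the names mentioned in the
# audio/text stream with the unlabeled face tracks detected in the video.
scenes = [
    (("Kate", "Jack"),   ("face_A", "face_B")),
    (("Kate", "Sawyer"), ("face_A", "face_C")),
    (("Jack", "Sawyer"), ("face_B", "face_C")),
]

# Count how often each name co-occurs with each face track.
cooccur = defaultdict(int)
for names, faces in scenes:
    for name in names:
        for face in faces:
            cooccur[(name, face)] += 1

# Greedily assign each face track the name it co-occurs with most often,
# never reusing a name (a crude one-to-one matching).
assignments = {}
used_names = set()
for (name, face), count in sorted(cooccur.items(), key=lambda kv: -kv[1]):
    if face not in assignments and name not in used_names:
        assignments[face] = name
        used_names.add(name)

print(assignments)  # each face track ends up paired with one name
```

Real systems replace the raw counts with learned audio, visual, and textual features, but the underlying matching intuition is similar: names and faces that keep appearing together probably belong together.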
“An intelligent system needs to understand more than just visual input, and more than just language input or audio or speech. It needs to integrate everything in order to really make any progress,” Taskar says.
The information Taskar’s team feeds into the system is free training data harvested from the Internet. Attempts to teach computers visual recognition in the pre-Internet age were hampered in large part by a lack of training content. Today, Taskar says, the Internet provides a “massive digitization of knowledge.” People post videos, comments, blogs, music, and critiques about their favorite things and interests.
Take, for example, the ABC show Lost. Fans of the show flock to Web sites like Lostpedia or Lost.com and write reviews about the show, post comments, or play games. Some fanatics post scripts from the show online.
As Taskar’s team feeds the computer more data about Lost, such as video clips, scripts, and blogs, the system gets better at identifying the people in the videos. If, for example, a clip contains footage of the characters Kate and Ana Lucia, the trained system will recognize their faces.
“The algorithm is learning this from what people say, or from screenplays as well,” Taskar adds. “The screenplay doesn’t tell you who is who, but it tells you there’s a scene with [two characters] talking to each other.”
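This kind of indirect supervision, knowing which characters appear in a scene without knowing which face is whose, can be sketched as a simple set-elimination puzzle. The example below is a hypothetical illustration in Python; the face tracks and candidate sets are invented, and the real system learns from features rather than exact set logic:

```python
# Hypothetical: the screenplay says which characters appear in each scene,
# but not which detected face is which. Intersecting a face track's
# candidate names across scenes, then eliminating names already pinned to
# another track, resolves the ambiguity.
candidates = {
    "face_A": [{"Kate", "Jack"}, {"Kate", "Sawyer"}],  # appears in two scenes
    "face_B": [{"Kate", "Jack"}],                      # appears in one scene
}

# Step 1: intersect the candidate sets for each face track.
resolved = {face: set.intersection(*sets) for face, sets in candidates.items()}

# Step 2: a name pinned to one track cannot belong to another.
known = {next(iter(s)) for s in resolved.values() if len(s) == 1}
for face, names in resolved.items():
    if len(names) > 1:
        names -= known

print(resolved)  # face_A narrows to Kate, which leaves Jack for face_B
```

Seeing a face track in more scenes shrinks its candidate set, which mirrors Taskar's observation that feeding in more clips and scripts improves identification.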
Taskar says the information the research has produced can be helpful in many ways, particularly in searching videos for content. Currently, a father searching the gigabytes of photos and videos on his hard drive for a shot of his daughter playing with the family dog is unlikely to find it unless the file happens to be tagged “daughter playing with dog.”
The system does not yet function in real time and, Taskar says, computers are still quite far from recognizing common objects. Although computers have proved capable of recognizing people and detecting a small number of actions, Taskar’s team would like to get to the point where computers can identify 10,000 different actions and 10,000 different common objects.
University of Pennsylvania news: www.upenn.edu/pennnews/