There’s room to improve AI news coverage


A recent analysis of how journalists deal with the ethics of artificial intelligence suggests that reporters are doing a good job of grappling with a complex set of questions—but there’s room for improvement.

In a new paper in AI & Society, researchers dig into how news outlets are thinking (and writing) about new technologies in order to understand how people are thinking (and feeling) about artificial intelligence.

“Journalists may benefit from reaching out to AI technology experts and ethicists to get the relevant facts and values straightened out,” says corresponding author Veljko Dubljević, an assistant professor of philosophy at North Carolina State University.

“The discussion may be relatively sophisticated, but there is certainly room for improvement,” he says.

Dubljević cowrote the new paper with first author Leila Ouchchy, a former undergraduate student, and coauthor Allen Coin, a graduate student.

Here, the three researchers explain their work, why they did it, and why it’s important:


Q

This paper focuses, in part, on ethical issues related to AI technologies that people would use in their daily lives. Could you give me one or two examples?

A

Allen Coin: Probably the most well-known application of AI with very real ethical implications would be self-driving cars. If an autonomous car is in a situation where it has, for instance, lost control of its brakes and must either crash into a child or an adult, what should it do? If you are “driving” an autonomous car and you become unconscious, and the car careens out of control and has the choice between crashing into a pedestrian, thus saving your life, or driving off a cliff, thus sacrificing your life, what should it do? What would you want your car to do in that situation?

These are real-world “Trolley Problems” that even human beings would struggle to make moral and ethical decisions about in the heat of the moment.

Another, slightly more insidious, example is the way human biases and prejudices tend to crop up in the AI applications that humans develop. Machine-learning algorithms, for instance, are touching more and more areas of human life, but those algorithms must be “trained” on real-world datasets. If a dataset reflects prejudiced human behavior, even in ways that are not immediately apparent or that you attempt to screen out, the resulting software may not be capable of acting objectively.

An example would be HR software for large corporations that screens job applicants based on traits they share with previously successful applicants: if there was a gender or race bias when humans were making the hiring decisions, the “robot” may reproduce those biases when selecting candidates.
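To make that mechanism concrete, here is a minimal sketch in Python using entirely synthetic, hypothetical data (the “skill” and “group” features and all coefficients are assumptions for illustration, not anything from the paper). A model trained on biased historical hiring decisions ends up scoring two equally skilled candidates differently:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)          # job-relevant ability (hypothetical feature)
group = rng.integers(0, 2, size=n)  # 0 or 1, standing in for a protected attribute

# Biased historical labels: past human decisions favored group 1
# over and above skill, so the training data itself encodes prejudice.
logits = 1.5 * skill + 1.0 * group - 0.5
hired = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different group membership
# receive different scores from the trained model.
probe = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(probe)[:, 1])  # group 1 scores noticeably higher

Note that simply dropping the group column would not necessarily fix this: any feature correlated with it can act as a proxy and carry the same signal, which is one reason screening bias out of training data, as mentioned above, is harder than it sounds.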

Q

For this research, you looked specifically at how news media covered these ethical considerations. Why?

A

Leila Ouchchy: We specifically looked at news media because of the effect news can have on public opinion regarding new technology. The media coverage of the ethics of AI has the potential to impact how AI is implemented in our society, from the kinds of AI that are produced by companies to the way AI is regulated by the government.

Q

Why not also look at the ways in which the ethics of AI are treated in popular culture, such as TV or film? Don’t those also inform and reflect public concerns?

A

Veljko Dubljević: The public representation of ethical issues in AI in fiction has already received more attention in the academic community, and that is both good and bad.

The good side is that we have some analyses out there, which can be useful.

The bad side, however, is that fiction is not firmly grounded in scientific fact or in already available technologies, which makes it more prone to utopian and dystopian exaggerations.

Q

What did your analysis of news outlets find? And what does that mean?

A

Ouchchy: We found a sharp increase in the number of articles published in recent years, and the amount of media content on the ethics of AI will likely continue to grow as the issue becomes more prevalent.

Additionally, there was little discussion of ethical frameworks and principles based on them, which suggests a lack of participation or influence from ethicists in the media discussion.

Finally, we found that the articles had mostly neutral tones, and focused on practical and relevant issues, although their recommendations were not very specific. This shows that the media discussion on the ethics of AI is relatively sophisticated, but still in its early stages.

Q

What could or should reporters be doing differently in their coverage of ethics and AI technologies?

A

Dubljević: Journalists may benefit from reaching out to AI technology experts and ethicists to get the relevant facts and values straightened out. The discussion may be relatively sophisticated, but there is certainly room for improvement.

Coin: To add to this point, one thing that struck me about the findings of this research is that one of the most commonly cited “ethical frameworks” in articles about AI ethics is Isaac Asimov’s “Three Laws of Robotics.” Now, I am certainly a fan of Asimov’s science fiction, but his “Three Laws” are not a formal ethical framework, and they are not very useful in the context of real-world discussions about how AI can and should behave ethically.

Utilitarianism was also commonly discussed in news articles about AI ethics, so I think there is an opportunity to modernize how we are talking about AI ethics. Specifically, we could—or should—be using more modern ethical frameworks that are capable of tackling the intricacies and complexity of real-world applications of artificial moral agents. I also agree with Veljko that journalists should reach out to AI researchers and ethicists more often; I think there are a lot of experts who would happily talk the ear off of any journalist wishing to get a detailed ethical perspective on any AI-related news item.

Source: NC State