Faculty members earn NSF grant for training machines to detect fake news

UNIVERSITY PARK, Pa. — With so much fake news floating around, wouldn’t it be nice if our computers and mobile phones told us which news stories are real and which ones are fake? That is what two Penn State faculty members hope to achieve with a $300,000 grant from the National Science Foundation (NSF).

Dongwon Lee, an associate professor in the College of Information Sciences and Technology, and S. Shyam Sundar, Distinguished Professor in the Donald P. Bellisario College of Communications, designed the research proposal to examine misinformation and ways to detect it.

Officially titled “Training Computers and Humans to Detect Misinformation by Combining Computational and Theoretical Analysis,” the proposal earned support from NSF’s EArly-concept Grants for Exploratory Research (EAGER) program. EAGER grants are dedicated to potentially transformative ideas that are high risk and high payoff, according to the National Science Foundation. In March, Lee and Sundar received a seed grant from the Penn State Institute for CyberScience to study fake news, and that support provided the basis for the NSF proposal. 

The researchers plan to investigate various characteristic indicators of fake news, develop algorithms that allow machines to detect fake news, train human coders, and then test whether machines can do a better job than humans at classifying fake and real news.

Lee, the principal investigator on the project with a background in computer science, leads the Penn State Information, Knowledge, and Web research group, which studies the management and mining of data in diverse forms, including multimedia, social media, structured records, text and the web. He regularly conducts research applying data mining and machine learning techniques to find hidden patterns and build models.

Sundar, who is the founder and co-director of the Media Effects Research Laboratory and co-principal investigator for the grant, has been studying the psychology of online news consumption for more than 20 years. His experiments have identified several important elements of digital media technologies that shape the perceived credibility of internet-based information.

“Fake news has been around for decades. But now, the problem has been exacerbated because it’s much easier to share fake news in cyberspace and on social platforms. We want to understand fake news better to build machine-based detection methods,” Lee said. “Suppose a machine-based tool could tell you, when you look at a Facebook post, for instance, whether it is likely to be suspicious, with an accompanying certainty score. Maybe you will think about it one more time before you share it with your peers. If you aren’t blindly sharing information with your peers, the impact of fake news will decrease sharply.”
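To make the idea of a certainty score concrete, here is a minimal, purely illustrative sketch of the kind of tool Lee describes. The indicator words, weights, and scoring formula below are invented for this example and are not the researchers' actual methods; the point is simply that surface indicators of a post can be combined into a single suspicion score between 0 and 1.

```python
import re

# Hypothetical indicator list for illustration only; the project's real
# indicators of fake news are the subject of the research itself.
SENSATIONAL_WORDS = {"shocking", "unbelievable", "miracle", "secret", "exposed"}

def suspicion_score(text: str) -> float:
    """Return a toy suspicion score in [0, 1); higher means more suspicious."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    # Count simple surface indicators: sensational vocabulary,
    # shouting (all-caps words), and excessive exclamation marks.
    sensational = sum(1 for w in words if w.lower() in SENSATIONAL_WORDS)
    all_caps = sum(1 for w in words if len(w) > 2 and w.isupper())
    exclaims = text.count("!")
    # Arbitrary weights, squashed into [0, 1) so the result reads as a score.
    raw = 0.5 * sensational + 0.3 * all_caps + 0.2 * exclaims
    return raw / (raw + 1.0)

print(round(suspicion_score("SHOCKING miracle cure EXPOSED!!!"), 2))  # high score
print(round(suspicion_score("The committee met on Tuesday."), 2))     # low score
```

A real system would replace these hand-picked rules with models learned from labeled data, but the user-facing output, a single certainty score attached to a post, would look much the same.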

“The fake news phenomenon is not simply about the information being false,” said Sundar. “It’s also about false sources, deceptive language, sensational content, gullibility of online news consumers and interactivity of the medium. Therefore, a fundamental challenge for the project is to capture this complexity through theoretical analyses in a way that permits computational analyses and training protocols for detecting fake news.”

The project begins Sept. 1 and is expected to take two years.

Last Updated August 24, 2017