Can trust in social media news be improved?

College of IST researchers to combine big data and psychology theory to determine how people become susceptible to what they read online

More than half who read news on social media expect it to be inaccurate. In a new project, researchers from the College of Information Sciences and Technology are working to provide reliable information and improve people’s trust in what they read online. Credit: © Groenning, Adobe Stock. All Rights Reserved.

UNIVERSITY PARK, Pa. — More than two-thirds of Americans get their news from social media sites, according to a 2018 Pew Research Center study. But more than half who read news on social media expect it to be inaccurate.

Penn State researchers are working to improve the prediction of people’s trust in what they read online. In a new project, they’ll advance state-of-the-art machine learning methods to model the psychological phenomenon known as memory illusion, the memory errors that individuals make when remembering, interpreting and making inferences from past experiences and knowledge. The researchers will use these models to determine why some people are susceptible to false information.

“When we read a news story, we recode the news story to understand it based on our prior experience, which may activate other things that are associated with the news,” said Aiping Xiong, assistant professor in the College of Information Sciences and Technology and principal investigator on the project. “Later, when we see other news that is false but presented in a way similar to what people previously inferred, people will more easily believe that information (as being true). That’s the illusion we are talking about here.”

“Memory illusion has been proven in psychology labs at a small scale, but few have studied it in the real world,” added Dongwon Lee, associate professor in the College of Information Sciences and Technology and collaborator on the project. “We realized that the computational approach to this issue of trust is being actively studied by many data scientists, but the human side is less studied. Because of the accessibility of social media data, we thought that we could model the phenomenon of memory illusion, specifically associative inferences, and then see if that phenomenon can help explain how some people become gullible (to what they read on social media).”

The researchers will use Twitter data and data-driven machine learning models to characterize the conclusions that people reach and to understand how social media posts contribute to individuals’ trust in what they read. Then, they plan to conduct laboratory and online user studies to determine whether there is a causal relationship between those inferences and that trust.

Ultimately, they hope to incorporate associative inferences into existing machine learning approaches, better measuring trust in information by adding a human information-processing perspective.

“The computational solution is basically that given the news in question, you collect a lot of clues, and a mathematical model will collectively tell you whether it is likely to be false or not,” said Lee. “People use various approaches to collect these clues: they look at the content, they look at who wrote it, or they look at how it got propagated to them. But none of them use this particular clue of associative inferences.”

He added, “So if this memory illusion checks out, then we can incorporate it into the existing computational models and help better detect false information.”
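Lee’s description maps naturally onto a standard feature-based classifier. Below is a minimal sketch in Python, using NumPy and scikit-learn, of that idea: content, source and propagation clues form feature columns, and an associative-inference score is appended as one more column. The feature names, the synthetic data and the model choice here are hypothetical illustrations, not the researchers’ actual system.

```python
# A minimal sketch of the clue-based detection approach Lee describes:
# several feature groups (content, source, propagation) feed one classifier,
# and a memory-illusion signal could be added as one more feature column.
# All feature names, scores and the toy data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy feature matrix: one row per news post, one column per "clue".
# Columns: [content_sensationalism, source_credibility, retweet_velocity]
n = 200
X_base = rng.random((n, 3))

# Hypothetical extra clue: how strongly the post resembles material the
# reader previously inferred to be true (an associative-inference score).
assoc_inference = rng.random((n, 1))
X = np.hstack([X_base, assoc_inference])

# Synthetic labels for illustration only: 1 = false news, 0 = reliable.
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.5 * X[:, 3]
     + 0.2 * rng.standard_normal(n) > 0.35).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "mathematical model" that weighs all clues collectively.
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("learned clue weights:", clf.coef_.round(2))
```

In this framing, the payoff of the memory-illusion hypothesis, if it checks out, is simply one more informative column: the classifier weighs the associative-inference signal alongside the content, source and propagation clues it already uses.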

The researchers are blending their interdisciplinary backgrounds in the project. Lee has expertise in data science, while Xiong will draw on her human factors and psychology background.

“We want to have a combination from both sides to address this problem,” said Xiong. “That makes our project unique compared with work that focuses only on data science and automatic detection of false information on social media platforms.”

“We are at the forefront of using this collaborative interdisciplinary method to approach a very complicated yet socially impactful phenomenon,” added Lee.

The researchers hope to help counter the widespread false information on social media that has affected conversations around the globe, from the 2016 U.S. presidential election and the Brexit referendum to climate change and vaccines.

“The impacts are across the whole spectrum of people’s daily lives,” Xiong concluded. “We want to devote time and effort and, hopefully, make improvements that help solve this.”

Their work is funded by recent grants from the Penn State Social Science Research Institute and the National Science Foundation.

Last Updated June 25, 2019