Research

NSF-funded study to examine human-punishment interaction in online communities

A new study will examine how users experience punishments in various forms, such as chat restriction and account suspension, and what can be done to help those punished users, with the aim of better understanding human-punishment interaction (HPI). Credit: Adobe Stock / DENYS KURBATOV. All Rights Reserved.

UNIVERSITY PARK, Pa. — Most users of online community platforms have likely witnessed unfavorable behavior from other users in some form — such as offensive language, harassment or racial slurs. Many times, those disruptive users will have their content removed or accounts suspended.

But not every instance of online disruptive behavior is motivated by malicious intent, and users can be unfairly punished, according to Yubo Kou, assistant professor of information sciences and technology.

In a new study funded by the National Science Foundation, Kou will examine how users experience punishments in various forms, such as chat restriction and account suspension, and what can be done to help those punished users, with the aim of better understanding human-punishment interaction (HPI).

“The design of punishment as of now is fairly simple across many platforms,” said Kou. “The user receives a punishment, such as post removal, and sometimes there is a brief explanation of the punishment.”

But this approach leaves users with many open questions about the outcomes of their actions.

He added, “What if the punishment is unfair? What if the users don’t agree with the punishment? What if the explanation doesn’t make sense? What if the user wants to know more about how to improve behavior in the future? These questions relate to various aspects of punishment and converge at the notion of HPI that I’m planning to investigate.”

In the study, Kou will examine a highly populated online community to document and describe human-punishment interaction: how users experience punishment, what actions they take after being penalized, and what support resources they use to better understand community behavioral standards.

“Online platforms have become a significant part of our daily life, and their social well-being directly affects us,” said Kou. “The point of understanding and helping punished users could be thought of as analogous to the issue of criminal justice, albeit to a much lesser degree.”

The new study builds on Kou’s past work that explores the role of artificial intelligence techniques in structuring how online users interact, including automating punishments. These findings, in collaboration with Xinning Gui, assistant professor of information sciences and technology, will appear in the proceedings of the ACM on Human-Computer Interaction (October 2020). Kou and Gui have also explored emotion regulation in eSports gaming (in proceedings of the ACM on Human-Computer Interaction, October 2020) and toxic behaviors in team-based competitive gaming (to be presented at the ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play in November). 

“My main research methodology is ethnography, which emphasizes researchers’ participation and immersion in the sites that they study,” concluded Kou. “It is a critical way for researchers to understand a community’s language, culture, reasoning styles and more, which helps them tremendously to explain certain behaviors that take place in the particular community, however unreasonable or outlandish they appear to outsiders.”

Last Updated September 14, 2020