
Building better human-bot cybersecurity teams

Penn State students and professor help win U.S. Department of Defense grant

Patrick McDaniel, William L. Weiss Chair in Information and Communications Technology in the School of Electrical Engineering and Computer Science. Credit: Penn State College of Engineering / Penn State Creative Commons

UNIVERSITY PARK, Pa. — A multi-university team has won a U.S. Department of Defense (DoD) Multidisciplinary University Research Initiative (MURI) Award. In addition to Penn State, six other universities make up the group: the University of Melbourne, Macquarie University and the University of Newcastle, all in Australia, and the University of Wisconsin-Madison, Carnegie Mellon University and the University of California San Diego.

The Penn State participants are Patrick McDaniel, William L. Weiss Chair in Information and Communications Technology in the School of Electrical Engineering and Computer Science; Ryan Sheatsley, doctoral student in computer science and engineering; Blaine Hoak, a master’s student in computer science and engineering; and Ahmed Abdou, a master’s student in computer science and engineering. The principal investigator is Somesh Jha, Lubar Professor of Computer Science at the University of Wisconsin-Madison. Benjamin Rubinstein, professor of computing and information systems at the University of Melbourne, will lead the Australian team, or AUSMURI, funded by the Australian government. 

The winning project, "Cohesive and Robust Human-Bot Cybersecurity Teams," aims to develop a rigorous understanding of team science for human-bot cybersecurity teams (HBCT), with the goal of developing a cohesive team that is robust against active human and machine learning (ML) adversaries.

Cybersecurity is one of the most challenging tasks the DoD faces today, according to the researchers. A typical human analyst in a cybersecurity task must deal with a plethora of information, such as intrusion logs, network flows, executables and provenance information for files. Real-time cybersecurity scenarios are even more challenging: they involve an active adversarial environment and volumes of information and techniques that neither humans nor machines can handle alone.

In addition to human analysts, ML bots have become part of these cybersecurity teams. ML bots reduce the burden on human analysts by filtering information, thus freeing up cognitive resources for tasks related to the high-level mission.

While a lot is known about how humans use tools to work in teams, less is known about how to manage, observe and improve hybrid teams that are made up of humans and autonomous machines. In particular, researchers plan to study how to coordinate HBCT in the presence of active adversaries that are also adapting to changing conditions.

The research team will first focus on ways to build trust within the HBCT by investigating techniques to produce explanations for the human analysts of how the ML bots work. These explanations will be presented in a vocabulary appropriate for the analysts and will be specific to the task of the HBCT, providing insight that helps the analysts trust the ML model and reduces manual effort.

Adversaries in mission-critical DoD scenarios can be very sophisticated, such as nation-state attackers. Existing work on designing ML models focuses on modalities such as images and audio, rather than addressing the overall task: an attacker trying to thwart the mission of the human-bot team. The project's second approach investigates robust ML techniques that focus on modalities relevant to cybersecurity, such as malware and network logs. While investigating these task-aware techniques, the research team will factor in the high-level mission of the cybersecurity team.

The group will also research how analysts integrate information to arrive at decisions, as well as their mental models of how bots operate. This will allow them to take a step toward automating the decision-making process. The mental model also can help design robust ML models that are more specific to the task of the HBCT.

Adaptability will be key as well, the researchers said. Human-bot team dynamics change as new adversaries with different capabilities arise, and adversaries adapt in response to new team strategies. The adversaries' interactive learning must be taken into account to develop methods for the entire team to adapt to them in an interactive manner.

“The science and engineering challenges we face today are highly complex and often intersect more than one scientific discipline,” said Bindu Nair, director of the Basic Research Office at the DoD. “MURIs acknowledge these complexities by supporting teams whose members have diverse sets of expertise as well as creative and different approaches to tackling problems. This cross-fertilization of ideas can accelerate research progress to enable more rapid R&D breakthroughs and hasten the transition of basic research findings to practical application. It’s a program that embodies DoD’s legacy of scientific impact.”

A version of this article first appeared on the University of Wisconsin-Madison School of Computer, Data and Information Sciences website.


Last Updated March 19, 2021
