IST professor launches effort to defeat cyber attackers at their own game

One fundamental reason why today’s computer networks and security systems face an increasing number of cyber attacks, according to Peng Liu, a professor at Penn State’s College of Information Sciences and Technology (IST), is information asymmetry: the attackers know much more about their targets than vice versa. With the aid of a major grant from the Army Research Office (ARO), Liu and his fellow researchers are undertaking an initiative to outsmart cyber attackers at their own game by developing technologies that will level the playing field.

“Through this interdisciplinary approach, our primary goal is to develop the scientific foundations for Adaptive Cyber Defense,” said Liu, director of the Center for Cyber-Security, Information Privacy, and Trust (LIONS Center).

Liu and his fellow researchers at Penn State, along with Sushil Jajodia, a professor at George Mason University and the principal investigator on the project, and researchers at Dartmouth College and the University of Michigan, were recently awarded a grant totaling $6,244,194 over a five-year period to support the project “Adversarial and Uncertain Reasoning for Adaptive Cyber Defense: Building the Scientific Foundations.” The first increment of $700,000 was distributed in early fall 2013. The project aims to develop a new class of technologies, called Adaptive Cyber Defense (ACD), that forces adversaries to continually re-assess, re-engineer and re-launch their cyber attacks. ACD presents adversaries with optimized, dynamically changing attack surfaces and system configurations, thereby significantly increasing the attacker’s workload and decreasing their probability of success.

The researchers aim to produce scientific and engineering principles that enable effective Adaptive Cyber Defense, as well as prototypes and demonstrations of technologies embodying these principles in defense-based scenarios, possibly in national cyber test beds. A main benefit of the project is that deploying and operating ACD methods will become easier and more reliable, resulting in the U.S. Department of Defense’s networks and network-centric capabilities becoming more secure, resilient, survivable, and stable.

According to Liu, computer networks are increasingly vulnerable to attacks from botnets. A botnet (also known as a zombie army) is a large number of Internet computers that, although their owners are unaware of it, have been set up to send messages and requests (including malicious payloads) to other computers on the Internet. Any such computer is referred to as a zombie: in effect, a computer "robot" or "bot" that serves the wishes of a master (e.g., a criminal organization). Bots often spread themselves across the Internet by searching for vulnerable, unprotected computers to infect. When they find an exposed computer, they quickly infect the machine and then report back to their master. Their goal is then to stay hidden until they are instructed to carry out a task. Criminals use botnets to send out spam e-mail messages, spread viruses, attack computers and servers, and commit other kinds of crime and fraud.

“Today’s cyber attacks are not isolated, they’re not individual,” Liu said. “They’re very sophisticated; they coordinate, and are organized by a remote attacker. They are persistent. They can create an ‘apartment’ in your laptop, live comfortably on your disk.”

One major reason why botnet attacks can succeed, Liu said, is information asymmetry. The attacker typically has the capability to know much more about the target network than the target can know about the attacker. The defender usually has very little information regarding the motivation and ultimate goal of its attacker, who can inflict serious damage through malicious activities such as stealing a user’s identity or compromising a military mission.

“There’s a spectrum regarding the objectives of the attack,” he said.

Through the Internet, Liu said, an attacker located anywhere on the planet can send “malicious robots” to invade a computer network. For example, an adversary can figure out the topology of a large network such as the Penn State system and send e-mails containing viruses to individuals in the system. If people open the e-mail, he said, “their computers can become a robot home.” The attacker can gain access to a large network by initially compromising a very small portion of it, such as a student’s laptop.

“That computer can serve as a listening post, a surveillance facility, and a stepping stone for the attacker,” Liu said.

In developing the ACD technology, Liu said, he and his fellow researchers have two main goals. The first goal is to enable network defenders to learn more about their attackers through a methodological approach called Adversarial Reasoning (AR). AR combines machine learning (a branch of artificial intelligence concerned with the construction and study of systems that can learn from data), behavioral science, control theory (an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems with inputs), and game theory (the study of strategic decision making) to compute effective strategies in dynamic, adversarial environments.
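To give a flavor of the game-theoretic ingredient, the sketch below is a toy attacker-versus-defender matrix game, with a made-up payoff matrix and the classic fictitious-play iteration; it is an illustrative assumption, not the project's actual method. Each side repeatedly best-responds to the other's observed action frequencies, and the frequencies converge toward a mixed-strategy equilibrium — the kind of "effective strategy under uncertainty" AR is after.

```python
# Toy zero-sum game (hypothetical numbers): rows are defender
# configurations, columns are attacker exploits; each entry is the
# attacker's probability of success against that pairing.
PAYOFF = [[0.8, 0.2],   # defender keeps configuration A
          [0.1, 0.9]]   # defender switches to configuration B

def fictitious_play(payoff, rounds=20000):
    """Approximate each side's equilibrium mixed strategy by having
    both players best-respond to the opponent's empirical frequencies."""
    d_counts = [1, 0]  # seed: defender has played row 0 once
    a_counts = [1, 0]  # seed: attacker has played column 0 once
    for _ in range(rounds):
        # Defender picks the row minimizing expected attacker success.
        d_move = min(range(2),
                     key=lambda r: sum(payoff[r][c] * a_counts[c]
                                       for c in range(2)))
        # Attacker picks the column maximizing expected success.
        a_move = max(range(2),
                     key=lambda c: sum(payoff[r][c] * d_counts[r]
                                       for r in range(2)))
        d_counts[d_move] += 1
        a_counts[a_move] += 1
    d_total, a_total = sum(d_counts), sum(a_counts)
    return ([n / d_total for n in d_counts],
            [n / a_total for n in a_counts])

d_strategy, a_strategy = fictitious_play(PAYOFF)
```

For this particular matrix the equilibrium defender mix is (4/7, 3/7) and the attacker mix is (1/2, 1/2); the point of randomizing is that a defender who commits to either single configuration does strictly worse against a best-responding attacker.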

“If we know more about the attacker, then asymmetric information will become more symmetric,” Liu said.

The second goal of the project is to give systems (including the attacker’s target) the capability to hide themselves in some way so that attackers have decreased observability, a technology known as Moving Target Defense (MTD). For instance, if applications move to different machines across the Penn State campus, that mobility could confuse an attacker.
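One simple way to picture an MTD mechanism is address hopping: a service's location is recomputed each time epoch as a keyed hash, so legitimate clients holding the key can always find it while an attacker's reconnaissance goes stale. The host list, port range, key, and five-minute epoch below are all illustrative assumptions, not details of the ACD project.

```python
import hashlib
import time

# Hypothetical pool of machines and ports the service can hop among.
HOSTS = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]
PORTS = range(20000, 21000)
SECRET = b"shared-only-with-legitimate-clients"  # illustrative key

def service_location(epoch):
    """Deterministically map a time epoch to a (host, port) pair.

    Clients that know SECRET compute the same location; an attacker
    without it sees the service jump unpredictably each epoch."""
    digest = hashlib.sha256(SECRET + epoch.to_bytes(8, "big")).digest()
    host = HOSTS[digest[0] % len(HOSTS)]
    port = PORTS[int.from_bytes(digest[1:3], "big") % len(PORTS)]
    return host, port

def current_epoch(period_s=300):
    """Advance the location every period_s seconds (5-minute epochs)."""
    return int(time.time()) // period_s

host, port = service_location(current_epoch())
```

A scan result captured in one epoch is worthless in the next, which is exactly the "continually re-assess and re-launch" burden ACD aims to impose.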

“The effect is the attacker will know less about (the target),” he said.

While substantial research on MTD has been conducted, Liu said, the science behind the MTD mechanisms has yet to be developed. The conclusions drawn from lab experiments, he added, are relative to the type of attack launched. Researchers can’t try out every type of attack and may not be familiar with new attacks, so they will usually try typical attacks in experiments. There are almost no general laws to quantitatively govern or predict the effectiveness of MTD mechanisms outside experimental settings.

“Hackers are developing new weapons every day,” Liu said. “This is a limitation of existing research.”

To address these challenges, Liu and his colleagues are seeking to expand cyber security research to “see if there is some natural relationship between MTD and control theory, between MTD and game theory, and between MTD and machine learning.”

“Basically, we want to develop the science behind MTD,” Liu said. “We would be very happy if we could peer into this part of nature. Are there some general laws which can allow us to assess the effectiveness of MTD in terms of some new (type of) attack?”

Last Updated November 25, 2013