
The World of Machine Intelligence

"Those people understood the idea of what machines need to know," says Jim Stover. To him, 2001: A Space Odysseyis "one of the greatest movies ever." He says, "For the real identification of the problem, you can't beat Hal."

The problem—Stover's and the movie's—is to make an intelligent machine, a machine that thinks. And of 2001, he says, "That's exactly the way you'd want to design a machine. I saw it recently again, and I was amazed. It was right down the center of what we think machines can and should do. Hal had a role to do, he was given priorities to achieve the mission. Then there's the human factor. Are you going to let your machine kill humans to finish its mission? The answer in 2001 was yes."


When Stover joined Penn State's Applied Research Lab (ARL) in 1974, the intelligent machines he was designing were torpedoes. Now, with the Cold War over, the thinking system he and colleague Ron Gibson have developed is being applied to medical monitors. Virtual surgery. A smart car to make driving safer in rain or fog. According to ARL director Ray Hettche, Stover and Gibson's work "is in almost every one of our proposals. Everything we're doing has an intelligent controller."

"Once you try to make a torpedo do something other than run in a straight line," Stover explains, "you open up the world."

"Now, I'm going to dunk this thing in the water," Stover says, "and it's going to figure out what to do.

"It's a deadly game, essentially a machine against a human. The operator of the target ship tries to confuse the torpedo. The machine has the advantage of speed, the human has the advantage of intelligence."

ARL's Navy-appointed task was to narrow that gap, to make an intelligent machine. How? According to Hellions of the Deep, a history of early torpedo design written by Penn State English professor Rob Gannon, "American torpedoes at the beginning of World War II . . . were primitive dullards with the intelligence of a garden hose." By 1945, Gannon writes, they were "sophisticated mechanisms crammed with technological innovations, outfitted with organs of voice and hearing, reliable, trustworthy, awesome in their capability . . ." But were they intelligent machines?

Or, to ask the question roundabout, what is intelligence? Is it the ability to reason? (And how do you define that?) Or, as Stover believes, the ability to find patterns?

Since John von Neumann laid out the design of the stored-program computer in 1945, research into machine intelligence, or artificial intelligence (AI), has been dominated by the quest for a machine that can reason. This "classical" AI approach sees the human brain, Stover writes in ARL's 1994 Review, as "a processor that receives queries from a (human) operator and produces a response." That is, as something separate from the rest of the organism. "Since the implication is that it is interacting with a human, its inputs and outputs must be compatible with this environment, which is, in essence, the abstract world of human concepts. Thus fundamental intelligence issues are identified as being logic, reasoning, knowledge representation, and language. No attempt at defining intelligence is made."

Proof of success is the Turing test. The familiar and entertaining machine-versus-grandmaster chess contests are Turing tests, as are the long-distance conversations, on such topics as dogs, gardening, or sports, in which one of the conversants might (or might not) be a computer. An intelligent machine, proposed Alan Turing in 1950, is one whose answers are indistinguishable from a person's. In his ARL report, Stover demurs: "There is little likelihood that every human would have the same opinion in a Turing test. Furthermore, conditions and controls on the test are not stated with sufficient rigor; in fact, they cannot be. For example, if the test consisted of adding numbers, any computer could be programmed to imitate the speed and accuracy of a human, but no person would be willing to claim it was of human intelligence (or had any intelligence) after seeing its program."

Now, sitting in a bare conference room deep in the heart of the Applied Research Lab (Stover can't take me up to his office, since it's in the top-secret, restricted-access wing of the lab), he shrugs off his criticism of classical AI. "I have a different view of the world," he says simply. "They're immersed in their view of what the world is, and we're immersed in ours."

Large in Stover's world—larger even than torpedoes—is the bat. "We're familiar with sonar," he notes (it's how both bats and torpedoes "see"), "but bats can beat us hands down. Look at a bat and we know we need to do a lot more research."

A physicist first, Stover worked at Marshall Space Flight Center in the 1960s—"the glory days"—designing control systems for rocket boosters. He left to get a graduate degree in mathematics. After a teaching stint at Memphis State, he came to Penn State's Applied Research Lab. The ARL, which is now "recognized as the corporate memory and center of expertise in acoustically guided torpedoes," according to Fred Saalfeld, deputy chief of research for the U.S. Navy, had been founded in 1945 when its Navy sponsors moved 100 scientists and engineers from Harvard's Underwater Sound Laboratory (or HUSL, which, according to historian Gannon, was pronounced hustle) to Penn State. The move, as ARL director Hettche explains, followed Harvard's "decision to discontinue classified work after the war and the decision of the Navy to maintain its efforts in academic research." When Stover arrived in 1974, the lab had recently changed its acronym from ORL: Its trademark water tunnel had become the focus of anti-war demonstrations, so the O—for Ordnance, or military weapons—went undercover. But the major source of funding for the new Applied Research Lab (the Navy) and its essential mission were unchanged. Stover was hired to work on the mathematical algorithms that then directed torpedoes.

In 1985, ARL associate director Richard Stern approached Stover and his colleague, Ron Gibson, who'd been at the lab since the '60s. "'Can you use artificial intelligence to make these things work better?'" Stover remembers him asking. Stern and division head Frank Symons, says Stover, then "gave us the freedom to follow our ideas." (And the Navy provided continuous funding for ten years.) "We've demonstrated a generic concept for such a system," Stover continues, "just this year."

The system has much for which to thank the bat. "There's lots of ideas of what intelligence is, and how you'd want to model it," Stover begins.

"People who work in science think mathematical modeling is the key—it's natural for them to assume that if you're going to model intelligence that mathematical reasoning is most important. But what do they really mean by reasoning? We don't know what that is." As he writes in the ARL Review, "Except for researchers in mathematics and science, human day-to-day operations have little use for logical processes. We drive cars, go shopping, write communications, talk about politics, all without requiring the use of reason interpreted as a process of logical deduction."

Even were it useful to have a reasoning machine, Stover continues, "there is a much deeper problem associated with efforts to endow machines with an ability to reason. Reasoning requires understanding and understanding requires consciousness." He elaborates now, "Unless the entity is conscious, it can't reason, and I think you can argue that I can infer consciousness in another human only because I experience it. We can observe behavior, but we can't observe consciousness: We have to infer consciousness from behavior."

Which means, essentially, that one could succeed in making a reasoning machine—and never know it.

The mistake classical AI researchers make, says Stover, "is that they assume we can jump right in and model human intelligence. We can't even achieve insect level now—we're on the threshold, we can see it ahead on the horizon . . ."

And, he believes, they are "ignoring one fundamental thing: Any organism responds to sensory information. You can't ignore that bottom-level process."

"Let's go back and look at the problem again," Stover invites.

"All biological systems get information about an external world through sensors, process information in the brain, and issue command signals to effector subsystems, such as motor control and voice, that produce responses in the external world. Thus the brain must be considered in its totality, from sensors to response."

Imagine the bat. How does it decide what's leaf, bird, toxin, mate, prey?

"We start with fundamental signals coming from the organism's sensors. We process those—this is a Double-E"—(electrical engineering, Stover means)—"area that has a lot of theory behind it. The extraction of information from background noise, say, is a technology that is well worked on. From there, to include higher-level intelligence involves inferring about things that you can't get from a measuring system."

How does a bat distinguish between a toxic Monarch butterfly, say, and its tasty look-alike, the Viceroy? How do you find a friend in a crowd?

"Sensors never gather complete information about external-world objects or events," Stover writes. "This incompleteness is inherent and unavoidable. In fact, it may be the reason intelligence is required in a system; if sensors provided complete information, there would be no need for further interpretive processing."

"What you're doing," he adds now, in the looking-for-a-friend scenario, "is categorizing the patterns coming out of your signal processors. You see a person in the distance who looks similar. You can see the color of his jacket—and you know your friend is wearing a red jacket today. Your friend is six-foot tall and bald. At this distance, you don't know if the person you see is six foot, but he looks tall. If you have some way of estimating the distance, you can estimate his height as five-foot-ten, within, say, an error of three inches.

"Now we have to transition from this sensory data to a confidence factor. We do something called fuzzy logic. How does this number, five-foot-ten-plus-or-minus-three, correspond to tall'? I have a graph something like this--" He goes to the board on the conference room wall and draws an upward-slanting S-curve; on one axis he marks heights, from 5'5" to 6'2", on the other he makes a zero-to-one confidence scale. "Now if your height estimate came in at five-foot-five, you'd have zero confidence you were seeing a tall person. For a six-foot person, you'd generate a confidence factor of about 0.9.

"This is the initial translation from measured variables, physical variables, into inferencing about vague things. Instead of having things black and white, zero or one, as in binary logic, we're going to have shades of gray—shades of trueness.

"Now the person you're looking for has three of these vague variables: tall, bald, and red jacket. Three variables on an AND node"—he draws it on the board as a circled AND radiating spokes marked "tall," "bald," and "red jacket"—"implies a required merging of the information. If the person you see has a full head of hair, then it's not your friend. AND is only true when all that's coming into the node is true. It's the necessity condition. There's also the sufficiency condition: OR. For example, a human is either a male or a female." Then there's NOT: If your confidence factors come in high on tall, bald, red jacket, and female, than it's not the friend you were looking for. If male, you've found him.

A bat could be said to "think" the same basic way, using AND and OR and calculating "confidence factors" that a moving object in the air is the right size and fluttering with the right period and frequency to be a meal. "It's using a pattern-recognition structure to categorize objects," Stover says. "So can a machine.

"Once you have modeled AND, OR, and NOT you have a capability for directing human knowledge—its vagueness, generalities, and such—into the machine."

In April 1995, Stover and Gibson demonstrated a machine directed by their "Fuzzy Logic Architecture for Autonomous Multisensor Data Fusion" to their Navy sponsors. "This approach," Stover writes, "does not assume 'conscious' processing on the part of the machine (although it doesn't preclude it, either)."

Would such an intelligent, pattern-recognizing machine see the world we humans see?

"No," says Stover, "it won't. But I don't see the world that you see, either. Everyone is living in a parallel universe. Because we can talk, because we can communicate with each other, we can agree on some features of it. What we want to build is a parallel machine that doesn't have severe conflicts with the world we humans see."

It's a world, according to Stover, that is, in essence, constructed by the mind's act of recognizing patterns. "When one of these nodes turns on"—an AND node or an OR node—"the pattern or the concept it recognizes is there in the physical world.

"Let me give you an example: I was sitting on the porch one evening, and I saw this bird in the woods. I figured it was a flicker, a yellow-shafted flicker. Then I refocused—and it was actually a little brown moth near the lilac bush. But in my world, in that instant, I saw that bird, that flicker. Then it switched to a moth as I reprocessed the data."

James A. Stover is senior research associate in the Information Sciences and Intelligent Systems Division of the Applied Research Lab, 460 Applied Science Bldg, University Park, PA 16802; 814-863-4104. R. E. Gibson is research engineer in the division; 863-4110. Their research is funded by the Navy. Hellions of the Deep by Rob Gannon was published in 1996 by the Penn State Press.
