Thinking Like a Mathematician

Nancy Marie Brown
March 01, 1985

"The power of logic and mathematics to surprise us," wrote Alfred Jules Ayer in his 1935 book Language, Truth and Logic, "depends, like their usefulness, on the limitations of our reason. A being whose intellect was infinitely powerful would take no interest in logic and mathematics. For he would be able to see at a glance everything that his definitions implied, and, accordingly, could never learn anything from logical inference which he was not fully conscious of already."

"When Cayley invented matrices in the mid-19th century," Steven Krantz tells me, "he was proud of the fact that they were absolutely useless.

"Now, of course, they're vital for algebra, statistics, engineering. Function theory of several complex variables is sort of like that. Even most mathematicians know next to nothing about it, and no one can foresee what it might be used for. The most dramatic use to come up in my lifetime is what Roger Penrose at Cambridge is doing with the theory of relativity. He's as close to getting a unified field theory as anyone, and he's using complex analysis to get there.

"Most people think math is a fixed subject. It's not. Most phenomena in nature we don't understand. Most equations we can't solve."

Krantz, in his thirties, is a mathematician at Penn State. He studies function theory of several complex variables, or, to be more exact, "automorphism groups of strongly pseudo-convex domains in n-dimensional complex Euclidean space," and he thinks he can explain to me what that means. He admits that mathematicians (he pronounces the word with five full syllables) hide behind jargon, but asserts that they are trying to reform this behavior, for practical and selfish reasons. A mathematician Krantz knows was branded with Senator Proxmire's Golden Fleece award for wanting $100,000 to study a phenomenon in several complex variables—Proxmire offered him $50,000 to study several simple variables.

"In order to tell you what function theory of several complex variables is about," Krantz says, "I have to back up a bit. You know what a function is, right?"

"I'm not entirely sure."

"Well, suppose you have two sets of objects. A function is a rule that associates the elements of one set to elements of the other set. For example, suppose one set is a set of numbers and you assign to each boy his age—that's a function. Or you assign to each boy his weight—that's a function, okay?"
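Krantz's rule can be written down directly. The sketch below, with invented names and ages, shows the defining property of a function: each element of the first set is assigned exactly one element of the second.

```python
# A function is a rule associating to each element of one set an
# element of another. Krantz's example: assign to each boy his age.
# (The names and ages here are invented for illustration.)
ages = {"Tom": 9, "Dick": 11, "Harry": 10}

def f(boy):
    """The function f from the set of boys to the set of numbers."""
    return ages[boy]

print(f("Dick"))  # every input is assigned exactly one number: 11
```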

"Function theory is a basic language of mathematics. You see, there's a psychological element to mathematics that most people aren't aware of. Have you ever heard of 'Occam's Razor'? It's named for the 14th-century English philosopher William of Occam. In plain language, Occam's Razor means that you try to reduce your thinking to a few principles, the idea being that if you have your principles laid out clearly and you're logical, then if you reach a conclusion you don't want to reach, the only possible explanation is that there was something wrong with your original principles. Mathematics supposedly follows Occam's Razor more closely than almost any other subject. There's nothing ad hoc in mathematics. The problem is figuring out where to start. You have to start with ideas that you can't really define, because everything is defined in terms of something else. One of the things you start with is function. I've defined it—and you were satisfied with the definition I gave you, but you had only two minutes to think about it. The trouble is, I used the word 'rule,' and rule is not very precise. For instance, I could create the following function: It assigns to all women the number '1,' provided that there is life as we know it on Mars. And it assigns them the number '0' if there is not life on Mars. That is a well-defined rule, but it's not very satisfying because we don't know if there is life on Mars.

"There is a more precise way to define 'function' but that reduces to another undefinable, called 'set,' which we have to somehow accept. So when you start in mathematics, you either have to take 'function' or 'set' as the undefinable. The point I am trying to make is that for your undefinables you want something that is so logical, so intuitively appealing, that not many people are going to argue with it. Function and set seem to be surviving. Everything else is based logically on these.

"In practice, it's very rare that a mathematical question gets pushed all the way back to the definition of function or set. I guess that's a sign that the system is healthy.

"Sometime you should go talk to Steve Simpson, one of our logicians. He makes his living worrying about what mathematicians should be assuming, what our axioms ought to be. Many of his papers deal with questions like the following: Suppose we change our axioms to this, how will it change mathematics?

"When I studied philosophy, one of the most compelling things I read was Language, Truth and Logic by Alfred Jules Ayer. It was an amazing piece of work, because if you believed it, it wiped out whole areas of philosophy. The response of the working philosophers at the time was, 'This is terrible. It puts us out of business. We have to ignore it.' You don't expect to see that in mathematics. In mathematics, everybody has accepted the axiom system—with a few notable exceptions—everybody has accepted the way we do mathematics.

"That's not to say there aren't other psychological problems in mathematics. There is a famous list of questions posed in 1900 by Hilbert—there's Hilbert." He points to a poster behind him, next to ones of Einstein and King Kong, below the four-foot-long slide-rule. "Hilbert was sort of an elder statesman of mathematics. In 1900, he gave a speech to an international body of mathematicians in which he charted what he felt were the 20 or 25 most important problems that mathematics ought to consider in the 20th century. People get promotions and tenure just for solving Hilbert problems. However, it turns out that some of the most intuitively appealing ones, the ones you most feel you would like to have answered, have no answer. They can't be answered within the framework of mathematics as we know it."

"Aren't you always expanding the framework?"

"No, we're not. The axiom system that we work in—the Zermelo-Fraenkel set theory—people are not inclined to change. Your average working mathematician is not inclined to change it.

"Some of these Hilbert problems I could explain to you in five or 10 minutes."

"Go ahead."

"Well, one of them has to do with polynomials—you know what a polynomial equation is, right? Well, suppose you have a polynomial equation with integer coefficients. If you stare at the equation, is there some way to tell if it has integer solutions? That's a simple-minded version of a Hilbert question. Ideally, what you would like to do is to take a polynomial—here's a polynomial." He pulls out a piece of scratch paper and writes 3x³ - 5x + 7. "Okay? You set it equal to zero and solve for x, or more generally, it looks like this." Below the first equation, he writes (abcdefgh…). "What you would like to do is simply to take these coefficients, these integers, plug them into a computer, and have the computer spit out 'yes,' it has integer solutions, or 'no,' it doesn't. The answer is, there is no way to do this. That answer would have been inconceivable in the 19th century. I don't mean, we can't do it or it's too hard or we don't know the right theory yet, I mean it can't be done. That's one of the psychological features of mathematics."
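What the computer cannot do in general, it can still attempt within a fixed range. The sketch below is a brute-force search, not the impossible general decision procedure: it tries every integer in a bounded interval, and finding nothing proves nothing, which is exactly the point of the unsolvability result (Hilbert's tenth problem, settled by Matiyasevich in 1970).

```python
# A bounded brute-force search for integer roots of a polynomial
# with integer coefficients. Silence does not prove there are no
# roots -- no algorithm can decide that in general.
def integer_roots(poly, bound=1000):
    """Return integer roots of poly(x) = 0 with |x| <= bound.

    poly is a list of coefficients, highest degree first.
    """
    roots = []
    for x in range(-bound, bound + 1):
        value = 0
        for c in poly:
            value = value * x + c  # Horner's rule
        if value == 0:
            roots.append(x)
    return roots

print(integer_roots([3, 0, -5, 7]))  # Krantz's 3x^3 - 5x + 7: none found
print(integer_roots([1, 0, -4]))     # x^2 - 4 = 0: roots -2 and 2
```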

"Why did you rewrite the polynomial? The first way looked so simple, and the second—"

"Because the first one was a specific example of a polynomial and the second one represents any old polynomial."

"And it's easier to solve if you rewrite it?"

"Oh, no. I was just falling into the trap of being a mathematician then." He laughs. "Another psychological feature of mathematics—I don't know if you want to talk about this, but since I've gotten started—I'm fascinated by the fact that mathematics is not nearly as cut and dried as people like to think, and there are many different ways to expound upon that theme. I've mentioned some of them. Another one is that some proofs now are so complicated that no one can understand them.

"The extreme example is—did you ever hear of the four-color problem? Everybody loves this problem. Suppose you are Rand McNally and your job is to make a map; you don't want two adjacent countries to have the same color. Now, you are going to be printing maps from now to the end of time, and you don't know what kind of maps are going to come up, but you want to have the right number of colors in supply. How many colors do you need?

"This question was posed around 1850 by a student to his professor. The professor couldn't answer it. People fiddled and fiddled with the question and after a while they proved that five colors would always do. They thought that four would do, but they couldn't prove it. One solution was published and believed for 20 years until somebody found a mistake. Finally about 10 years ago, two mathematicians at the University of Illinois, Appel and Haken, came up with a solution. They proved that four colors will suffice—the University of Illinois is so proud of this that when you get a letter from their math department, the cancellation mark says 'Four colors suffice.' Appel and Haken used the ILLIAC, the supercomputer at Illinois, and the upsetting thing is, they used 2,000 hours of computer time. Nobody can check their proof in the usual way. It's kind of a crazy thing. Since there are infinitely many different kinds of maps you can draw, there are infinitely many things to check. Appel and Haken came up with an algorithm that they thought would reduce the number to only finitely many things to check. They put it on the computer and hoped the computer would stop. If it did stop, they knew everything had been checked and if it never stopped, well…"

"How did they decide when 'never' had arrived? Did they let the computer run for a day? for a month? for a year?"

"That was part of the problem. It was such a crazy idea that they couldn't get funding for it. They had to arrange to use the computer when nobody else needed it. If it never stopped—say, during their lifetimes—then they wouldn't have known anything. But after 2,000 hours, it did stop. The point is, even one hour of time on the ILLIAC represents more calculations than I could do in a lifetime."
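For maps small enough to check by hand, the finite search Krantz describes can be imitated in a few lines. The sketch below tries every assignment of colors to a toy map of four mutually adjacent countries; it illustrates only the brute-force idea, not Appel and Haken's algorithm.

```python
from itertools import product

# Countries are numbered vertices; an edge means two countries
# share a border and so must get different colors.
def colorable(edges, n_countries, n_colors):
    """True if some assignment of n_colors properly colors the map."""
    for coloring in product(range(n_colors), repeat=n_countries):
        if all(coloring[a] != coloring[b] for a, b in edges):
            return True
    return False

# Four mutually adjacent countries (they do occur on real maps):
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(colorable(k4, 4, 3))  # False: three colors are not enough
print(colorable(k4, 4, 4))  # True: four colors suffice here
```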

When Krantz was an undergraduate, he remembers, he couldn't decide what he wanted to be. He took philosophy courses and found that he could impress his teachers by bringing a little mathematics into his essays, "by bringing in Russell's Paradox or something." After a short while, he decided that he was kidding himself or kidding them. He didn't feel the same satisfaction he felt by taking a hard math problem and solving it.

"Ever since I was small, I've had an aptitude for mathematics. People who know me well—these are mathematicians—often comment, 'You think like a mathematician even when you're not doing mathematics.' I've always been that way; I don't think like a philosopher. I don't have a good insight into the way people think. It's probably a character flaw, but I really seem to think like a mathematician. I guess that's why I never became a philosopher, or a social scientist, or a geneticist, or a lawyer, or any of the other things I thought about being."

Krantz is a product of the University of California at Santa Cruz, an experimental school which at that time gave no grades (the professors wrote short essays evaluating each student) except in certain science and math courses, where grades were optional. The system didn't always work. Krantz tells of some of the "bright young people" of his generation who, unable to get into graduate or medical school, became machinists or postal clerks. Krantz was more fortunate. He opted for grades, and the Princeton admissions office took the time to read his professors' evaluations, which they interpreted as letters of recommendation. At Princeton, he chose as thesis adviser a harmonic analyst named Elias M. Stein. Stein gave Krantz a problem to solve, and Krantz wrote his dissertation on it. Krantz has remained close to Stein; Stein recently took a coterie of mathematicians, Krantz included, to China "to help the Chinese catch up after the Cultural Revolution."

When he went off to his first job, Krantz remembers, he wondered where he was going to find another problem to work on. Since then, the problems have found him. After publishing a new elementary proof of a well-known theorem, he received a letter asking if he would come up with a new proof for a theorem the letter-writer was particularly fond of. He began collaborating with Robert Greene, a differential geometer at UCLA, after Greene asked, "Do you suppose you could prove…?" And when Krantz answered affirmatively, he added, "Well, if you can prove that, then I can prove this, and look what we have then!" A paper Krantz wrote with former Penn State mathematician Torrence Parsons sparked the interest of Paul Erdős, a famous Hungarian thinker who has been called the world's only itinerant mathematician. Erdős added a piece, and a fourth man was called in to complete the structure.

"Sometimes a good theory," Krantz adds, "consists only of a different way of writing mathematics. Sometimes you don't contribute a fundamentally new idea, but you think of a different way of rendering a problem. You just write out the problem in a new way and everything becomes clear. There's a famous mathematics book, written by a real mathematician, that actually became popular. It's called How To Solve It, by George Polya. He has general principles and they are good principles, one of which is that if you have a problem you can't solve, write it a different way."

"Most non-mathematicians don't realize that there could be another way to write a problem."

"That's true. This is one of the reasons that employers are finally realizing that mathematicians have some redeeming social value. Mathematicians are trained to solve problems. It's my stock-in-trade to know devices for solving problems. There are a lot of things I don't know, but I know how to solve problems."

In 1975, the Jet Propulsion Laboratory at Caltech called Krantz with a problem. Pictures of Venus and Mars transmitted by the Voyager spacecraft had been taken in the dark while the craft was moving, and were, as was expected, horribly blurred. The Jet Propulsion Laboratory had a deblurring technique using an on-board computer that logged the spacecraft's motion, but the technique took a month and Congress wanted to see the pictures now. A researcher discovered a way to unblur the pictures more quickly if he could factor polynomials of several variables in a certain way. He called Krantz and asked him how to do it. "I thought about it for a while, and then I said, 'It can't be done.' Like most people, he just assumed that he had called the wrong person.

"Well, I sent him a letter and explained the problem, and put him in touch with somebody who actually helped him out. He was a former student of mine, Don Marshall, who's now a mathematician at the University of Washington. The technique that grew out of what Marshall did for them is now world-famous. The funny thing is, he didn't really solve a math problem, all he did was introduce them to an existing technique. I mean, he went to the library and found a book with a technique that he thought would suit the problem, and it worked like a charm. This happens quite often.

"The thing that makes a mathematician a mathematician is that he knows a lot of mathematics, whereas most other people really don't. I got a phone call from an engineering firm recently, and the man said something like, 'We've been kicking this problem around for a while and we don't know how to solve it. We thought that there may be a slight chance that you would know the answer.' He told me his problem and I instantly knew the answer. It was very basic mathematics. There wasn't any reason why he should have known it, but the important fact was that he didn't think instantly of calling a mathematician. It was a simple basic math problem. It wasn't engineering, it wasn't physics, it wasn't astronomy, it was math, and he didn't recognize it was math. He called me as a last resort. This exemplifies how misunderstood we are."

"Part of the problem is that people don't know what math is."

"Hmm. You're right. Do you want me to explain one of my theorems? I mean, I can only do it in the most general sort of way, but I'd like to try just to show you that I can do it."


"One of the things I study is symmetries. If you want to study a geometric object, you can study its symmetries—is it okay if I draw you a picture?" He pulls out a piece of scratch paper, draws a circle and a square. "A basic way to understand a geometric object is to understand its symmetries. For example, what's the difference between a circle and a square? You could say that one is round and one is not, but the trouble with that answer is that you don't quite know what 'round' means. There are more rigorous ways to see differences.

"For instance, a square doesn't have too many symmetries." He takes out a pair of scissors, cuts out the square, then traces a new one around its edges, leaving the cutout on top of the tracing. "Here's my square; I want to know what kind of symmetries a square has. Well, we notice that if I go like this"—he picks up the cutout and turns it over—"I get the same square. It matches up with the original one. If I go like this"—he flops the cutout diagonally—"or I go like this"—he turns it 90 degrees—"I still get the same square. Right? There are only finitely many things that I can do. You can write a list of them. A square has only four rotations—you can't rotate it through any angle. If I only rotated it 45 degrees, the corners would stick out and it wouldn't match up with the original square."

"So it doesn't simply have to be a square, it has to be a square in a particular relation to the edges of the paper?"

"Right. That's the definition of a symmetry: After I move the figure, I want it to exactly match the figure I started with. Okay?"

"With a circle, now, it's a different matter. I can rotate a circle through any angle, and there are infinitely many angles, so we can say that a circle has infinitely many symmetries.

"This is the kind of thing mathematicians do. You have an intuitive perception that a circle and a square are different, but you want something that's more concrete, some way to measure how different the two are.
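The square's symmetries can be counted by machine in the same spirit as Krantz's cutout: try a collection of candidate motions and keep the ones that carry the corners back onto themselves. The sketch below finds eight, four rotations each with and without a flip.

```python
import math

# The corners of a square centered at the origin.
corners = {(1, 1), (1, -1), (-1, -1), (-1, 1)}

def transform(point, angle_deg, flip):
    """Optionally flip the point over, then rotate it."""
    x, y = point
    if flip:
        x = -x  # reflect across the vertical axis (turning the cutout over)
    a = math.radians(angle_deg)
    return (round(x * math.cos(a) - y * math.sin(a)),
            round(x * math.sin(a) + y * math.cos(a)))

# Try rotations through multiples of 45 degrees, with and without a flip,
# and keep those that carry the set of corners onto itself.
symmetries = []
for flip in (False, True):
    for angle in range(0, 360, 45):
        image = {transform(p, angle, flip) for p in corners}
        if image == corners:
            symmetries.append((angle, flip))

print(len(symmetries))  # 8: rotating 45 degrees makes the corners stick out
```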

"Now, one of the things that I do in complex function theory is study questions like this, where the symmetries are represented by holomorphic mappings, holomorphic functions—the kind of function a physicist would use to describe an event in cosmology, for example.

"With functions like these, you're always working in at least four dimensions. That's one of the features of several complex variables that drives people crazy. If you open up a typical book in several complex variables, there are never pictures. In my book, there are 50 or 60 pictures because I really believe in pictures.

"Anyway, you're always working in at least four dimensions, so that one of the challenging features of this subject is to learn to see the geometry of four dimensions."

"Which is the fourth?"

"Oh. I was afraid you were going to ask that. Which is my fourth? Well, you probably think I'm going to say 'time' or something. Well, no, it's not like that.

"Ever since Einstein, people have been asking, 'What is a fourth dimension?' and being told, 'It's kind of like the other three dimensions.' That's correct, but if you don't understand the theory, it probably doesn't tell you anything. I prefer to think about four, five, or six dimensions in the following way—I have to give you a two-minute lecture." He pulls out a new piece of paper and draws a line. "Here's one dimension. Every point on this line has a number associated with it. This point"—he marks a dot on the line—"may have the number one associated with it and this point"—he marks another dot to the left of the first one—"the number one half. If you pick any other point, you can figure out what the number is. So one dimension is thought of in terms of each point having a number, x, associated with it.

"What about two dimensions? Here's a two-dimensional arrangement." He draws a vertical line crossing the first at right angles. "We locate a point in two dimensions with two numbers because you measure everything from where the two lines cross. You measure how far sideways and how far up you have to go. This point"—he makes a dot in the lower left-hand quadrant—"this time we moved to the left and down, so we say negative one and negative three-halves. Every point in the plane is located with two numbers.

"If you go to three-dimensional space, it's a little harder to see, but you can locate every point with three numbers." He adds a diagonal line to the grid and marks a new point. "Now if I want to locate this point, I go over and over and up. Think of it as if every point were the corner of a box, and the box has a length, a width, and a height.

"So, everything in one dimension is located by a single number, everything in two dimensions is located with two numbers, and everything in three dimensions is located with three numbers.

"Now I'm going to tell you about four-space. How am I going to do this? Well, I'm going to forget about these pictures." He crumples up the paper, laughing. "This is what mathematicians always do. They convince you they're talking about the right thing and when they've got you hooked, they tell you to forget the pictures and just concentrate on the numbers. At some point, you have to be willing to do that.

"If we want to talk about four dimensions, let's imagine—this is very popular in economics these days—let's imagine that a society's economic system has four products, let's say Ping-Pong balls, Hula Hoops, Betamaxes, and Michael Jackson records. Everybody in the society owns some of these, so every person can be represented by four numbers. The first number is how many Ping-Pong balls he owns, the second number is how many Hula Hoops he owns, and so on. If there are 5 billion people on earth, the world is represented by 5 billion arrays of four numbers. This economic system, then, should be thought about in four-space because there are four parameters. We've taken a radical departure from the first three dimensions because in the first three, the mathematics was based on a picture, but now the mathematics comes first and the picture is forgotten. But this is really how a mathematician thinks about four dimensions, or five or six dimensions. Because I do things geometrically, I try to picture four dimensions, but I'm really doing it by analogy with things that I can draw. Some people will say to you, 'I have been thinking about four-space for 20 years and I can really see it.' Well, I don't know. Maybe they can, but I kind of doubt it. Mainly you think about it by analogy, and if you have to prove anything, you do it with numbers. You can't use pictures because nobody would believe your pictures."
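Once the picture is forgotten and only the numbers remain, the familiar formulas extend mechanically. The sketch below, with invented holdings, represents two people as four-number arrays and measures the distance between them by the same Pythagorean rule that works in two and three dimensions.

```python
import math

# A person in the four-product economy is just four numbers:
# (ping_pong_balls, hula_hoops, betamaxes, michael_jackson_records).
# The holdings below are invented for illustration.
alice = (12, 1, 0, 3)
bob = (10, 1, 1, 5)

def distance(p, q):
    """Euclidean distance, by analogy: sum the squared differences
    of the coordinates, however many there are, and take the root."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(distance(alice, bob))  # the formula works the same in four-space
```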

"What do you prove?"

"Well, let's see. What do we prove? One thing we prove is . . ." He draws an oval. "Suppose you take this domain, this subset of complex space. Let's call it D, and let's call its collection of symmetries S. Now suppose you perturb its boundary a little bit." He draws a new, misshapen oval with a flattened projection like a nose. "You get a new domain, called D-prime, which has a new collection of symmetries, S-prime. My collaborator Greene and I can prove that there is a very natural way in which the new symmetries form a subcollection, a subset, of the old symmetries. This tells you something about the interplay between geometry and complex functions. It's considered to be one of my better theorems."

"However, what does the picture have to do with it?"

"Oh, nothing. As I said, the picture's just an analogy—"

"—because the second domain doesn't have any symmetries."

"Right. In that case, it's very simple. S-prime has nothing in it and the empty set is always a subset of something else."


"It's a bad picture. If I'd drawn a better picture . . . shall I draw a better picture? Suppose I start out with a circle, which we've agreed has a lot of symmetries." He draws a circle. "And suppose my perturbation of it consists of flattening it a little there and widening it a little here." He draws an oval over the top of the circle. "Now the circle is my D, and this is my D-prime. Well, D has a lot of symmetries and D-prime only has a few because it's fat here and thin there. It turns out that in a natural group theoretic sense, the symmetries S-prime of D-prime are a subcollection of the symmetries S of D. Every symmetry of the second figure has a corresponding symmetry in the original. By perturbing a domain you don't get more symmetry, you get less symmetry.

"Now, before you say 'however,' let me point out that you may not be impressed by this, but you have to remember that the theorem handles any situation. I mean, you could write down 50 examples and say, 'I can handle these 50 examples,' but I can handle every example, even ones you can't think of."
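The circle-and-oval picture can itself be put into coordinates. In the sketch below, a simplified stand-in for the theorem's setting, the oval x²/4 + y² = 1 is tested against a sample of the circle's rotations, the 360 whole-degree ones; only two survive, so the perturbed figure keeps just a subcollection of the original's symmetries.

```python
import math

def on_oval(x, y):
    """Is the point (within rounding) on the oval x^2/4 + y^2 = 1?"""
    return abs(x * x / 4 + y * y - 1) < 1e-9

def rotate(x, y, a):
    """Rotate the point through angle a (radians) about the origin."""
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Sample points around the oval, rotate the whole sample, and see
# whether it lands back on the oval. Every rotation preserves the
# circle; almost none preserve the oval.
points = [(2 * math.cos(t), math.sin(t))
          for t in [k * 2 * math.pi / 360 for k in range(360)]]

surviving = [deg for deg in range(360)
             if all(on_oval(*rotate(x, y, math.radians(deg)))
                    for x, y in points)]
print(surviving)  # [0, 180]: only two of the sampled rotations survive
```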

"However. There's still something I don't understand. Suppose the oval is your original, and the circle is . . ."

"Oh. Right. That's a very good point. You have good insight, that's an excellent question. That's the standard question I'm asked when I lecture on this. You see, the perturbation has to be sufficiently small. D-prime is a sufficiently small perturbation of D, but D is not a sufficiently small perturbation of D-prime."

"But they're the same thing."

"No, they're not."

"The perturbation is just going in a different direction."

"I know, but you see, the definition of 'sufficiently small' depends on which domain you start with. If I start with the circle, I can say that if I perturb it a sufficiently small amount, the theorem is true. Or, if I start with the oval, I can say that if I perturb it a sufficiently small amount, the theorem is true. But the perturbation needed to make an oval into a circle is too big."

"Wait a minute. You take a circle and squeeze it a little bit, and you get an oval."


"And you take an oval and squeeze it with the same amount of force in the other direction, and you get a circle."


"How come it's not the same perturbation?"

"It is the same perturbation, but whether it's sufficiently small or not depends on which domain you start with."

"Then you can say your theorem is true whenever you want it to be true—if the amount of perturbation is subjective for each example."

"Subjective isn't the right word. It's calculable. You can calculate it in advance and say how much it is. You have to understand what the theorem depends on. In the pictures I've drawn for you, the theorem depends on the curvature of the domain, and a circle has different curvatures than an oval does. Curvature is a mathematical concept that can be calculated. You can take the curvature of a circle, do a little calculation, and say 'I can perturb this domain this much, and Greene and Krantz's theorem will be true.' Or you could take the oval with its curvatures and do the calculation, and you'd get a different answer because the curvatures are different. And you could say, 'I can perturb this domain this much, and the theorem will be true.'

"I realize this is a difficult point to understand, because I've had mathematicians 20 years older than me ask the same question. It's the right question to ask." He leans back in his chair, looks around the room. "Let's see," he says very softly, "can I tell you about another theorem that I could perhaps explain?" He straightens up, raises his voice. "I always liked that one."

"It was nice and understandable until you—"

"Until you ask yourself too many questions about it."

"Yes. You see, to understand it, I have to take something concrete, something I can feel."

"Sure, I agree."

"And so I have trouble understanding you when you talk about curvatures. You can figure out the equations. I can't. That's a problem."

"Hmm. Right. Let me see if I can, by analogy, make you feel better about this." Long pause. He opens his desk drawer, looks, and produces a matchbook. "Maybe this will help. Suppose I balance this matchbook on the tip of my pen. How much do I have to disturb it before it falls off? Not very much. If I disturb it a micron, that would probably make the difference—it falls off. Little perturbations upset the system. Now, suppose I start with the matchbook tilted, like this, and I ask myself, now that the system is upset, how much do I have to disturb it to rectify it again? The answer is, I have to disturb it a whole lot. If one perturbation undoes the system, how come the opposite perturbation doesn't fix it up again? Well, in this case the answer is gravity, right? But the point is that this is an ostensibly symmetric situation that really isn't symmetric at all—you can upset the system with an arbitrarily small perturbation, but to get it back again, you really have to perturb it a lot." He lights a match and applies it to the edge of the piece of scratch paper on his desk, the paper with the D and D-prime on it, picks up the unburned end, and holds the paper out over the floor. "This is another ostensibly symmetric situation that isn't symmetric. You can easily perturb the system, but you can't get it back. Each particle has an equivalent particle on the floor or in the air—it's a one-to-one function, but you can't get it back."
