too big a number to think about. Yet a short bit of rational deduction, such as combining an estimate of the mileage you can drive comfortably in a day on an interstate (about 500 miles) with an estimate of the number of days it would take to drive across the country (about 5–6 days), tells me this distance is closer to 2,500–3,000 miles than it is to, say, 10,000 miles.
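This kind of back-of-the-envelope arithmetic is easy to make explicit. Here is a minimal sketch in Python of the same estimate; the figures of 500 miles per day and 5 to 6 days are the rough guesses from the text, not measured data:

```python
# Rough estimate of the driving distance across the United States.
miles_per_day = 500         # comfortable interstate driving in one day (rough guess)
days_low, days_high = 5, 6  # rough guess at the number of days coast to coast

low = miles_per_day * days_low    # 2,500 miles
high = miles_per_day * days_high  # 3,000 miles
print(f"Roughly {low:,} to {high:,} miles across the country")
```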
Thinking about numbers in terms of what they represent takes most of the mystery out of the whole business. It is also what physicists specialize in. I don’t want to pretend that mathematical thinking is something that everyone can feel comfortable with, or that there is some magic palliative for math anxiety. But it is not so difficult—often even amusing and, indeed, essential for understanding the way physicists think—to appreciate what numbers represent, to play with them in your head a little. At the very least, one should learn to appreciate the grand utility of numbers, even without necessarily being able to carry out detailed quantitative analyses oneself. In this chapter, I will depart briefly from Stephen Hawking’s maxim (I hope, of course, that you and all the buying public will prove him wrong!) and show you how physicists approach numerical reasoning, in a way that should make clear why we want to use it and what we gain from the process. The central object lesson can be simply stated: We use numbers to make things never more difficult than they have to be.
In the first place, because physics deals with a wide variety of scales, very large or very small numbers can occur in even the simplest problems. The most difficult thing about dealing with such quantities, as anyone who has ever tried to multiply two 8-digit numbers will attest, is to account properly for all the digits. Unfortunately, this most difficult thing is often also the most important one, because the number of digits determines the overall scale of a number. If you multiply 40 by 40, which is a better answer: 160 or 2,000? Neither is precise, but the latter is much closer to the actual answer of 1,600. If this were the pay you were receiving for 40 hours of work, getting the 16 right would not be much consolation for having lost over $1,400 by getting the magnitude wrong.
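To see how much more the magnitude matters than the leading digits, here is a small Python check of the two wrong answers above:

```python
# Two wrong answers for 40 * 40: one gets the leading digits right,
# the other gets the order of magnitude right.
exact = 40 * 40            # 1,600
digits_right = 160         # the "16" is right, but it is 10 times too small
magnitude_right = 2_000    # the digits are wrong, but the scale is right

for guess in (digits_right, magnitude_right):
    print(f"{guess:>5,}: off by ${abs(exact - guess):,}")
# 160 is off by $1,440; 2,000 is off by only $400.
```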
To help avoid such mistakes, physicists have invented a way to split numbers up into two pieces, one of which tells you immediately the overall scale or magnitude of the number—is it big or small?—to within a factor of 10, while the other tells you the precise value within this range. Moreover, this splitting makes it possible to specify the magnitude without having to display all the digits explicitly, in other words, without having to write a lot of zeros, as one would if one were writing the size of the visible universe in centimeters: about 1,000,000,000,000,000,000,000,000,000. Displayed this way, about all we can easily take in is that the number is big!
Both of these goals are achieved through a way of writing numbers called scientific notation. (It should be called sensible notation.) Begin by writing 10ⁿ to represent the number 1 followed by n zeros, so that 100 is written as 10², for example, while 10⁶ represents the number 1 followed by 6 zeros (1 million), and so on. The key to appreciating the size of such numbers is to remember that a number like 10⁶ has one more zero than 10⁵, and is therefore 10 times bigger. For very small numbers, like the size of an atom in centimeters, about 0.000000001 cm, we can write 10⁻ⁿ to represent the number 1 divided by 10ⁿ, which is a number with a 1 in the nth place after the decimal point. Thus, one-tenth would be 10⁻¹, one billionth would be 10⁻⁹, and so on.
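Python’s exponentiation operator makes these definitions easy to check; a quick sketch:

```python
# 10**n is 1 followed by n zeros; 10**-n has a 1 in the nth
# place after the decimal point.
print(10**2)    # 100
print(10**6)    # 1000000 (one million)
print(10**-1)   # 0.1 (one tenth)
print(10**-9)   # 1e-09 (one billionth; Python itself prints this in scientific notation)
```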
This not only gets rid of the zeros but also achieves everything we want, because any number can be written simply as a number between 1 and 10 multiplied by the appropriate power of 10. The number 100 is 10², while the number 135 is 1.35 × 10², for example.
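The split into a number between 1 and 10 and a power of 10 is also easy to compute. A minimal sketch, where the helper name scientific is my own illustrative choice, not a standard library function:

```python
import math

def scientific(x: float) -> tuple[float, int]:
    """Split x into (mantissa, exponent) with 1 <= mantissa < 10."""
    exponent = math.floor(math.log10(abs(x)))
    return x / 10**exponent, exponent

print(scientific(135))   # (1.35, 2), i.e. 1.35 x 10^2
print(scientific(100))   # (1.0, 2), i.e. 10^2
print(f"{135:e}")        # 1.350000e+02, Python's built-in scientific notation
```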