The world doesn’t have to be this way; it’s an empirical fact that the objects around us obey the laws of arithmetic, and in fact they don’t obey it as well as you might at first think. Some subatomic particles run counter to our arithmetic intuitions - they can appear out of empty space and vanish back into nothingness, so quantity isn’t conserved. And according to quantum mechanics, if two particles are identical, they cease to be fully distinct from each other; this doesn’t violate arithmetic per se, but it sure is weird. It’s not clear how appropriate naive arithmetic is for describing the universe at its most fundamental level.

Even on the macro scale, something as simple as counting isn’t entirely clear-cut. Say you’re counting the human beings in a crowd; it seems like there should be one precise answer, and any proposed number is either right or wrong. But how do you count conjoined twins? A pregnant woman? And what about the really fascinating case of people who have had the corpus callosum severed in their brains? These individuals have no direct connection between the left and right hemispheres of their brain, and in many ways they function as two separate minds sharing one body. It is possible to have a legitimate disagreement about how many people are in a room, so good old arithmetic only describes humans if you take the trouble to resolve these ambiguities. For most applications, though, arithmetic is so good that we never have to fret about the edge cases - we should count ourselves lucky, because other areas of math don’t line up so cleanly.

The first computers were finicky things, prone to problems in the hardware. The word “bug” now generally refers to a mistake made by a computer programmer, but the terminology came about because on the earliest machines a wrong output was likely caused by an insect crawling into the circuitry and getting burned to a crisp. Computers aimed to conform perfectly to mathematical idealizations, but they missed the mark. As manufacturing technology improved, computers got closer and closer to matching their idealizations, so that today a PC or laptop all but perfectly mirrors the theory behind it; the idealizations we use are as accurate as arithmetic.

On the other hand, as computers have become networked, the ambiguity has crept back in. The time it takes for a webpage to load, for example, depends on factors like network congestion and the quality of your wireless connection; sometimes the load will fail entirely for reasons like this and you’ll have to refresh the page. I often pay my bills by programming clusters of computers to coordinate in solving problems, and it’s a perennial headache that one computer will fail for some non-idealized reason and throw off the entire cluster. A friend of mine at Google has told me that this is the most fascinating part of his job: the fact that the computers he’s working with don’t conform to idealizations.
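The standard workaround for that headache is to build the non-ideal behavior into the program itself, by resubmitting work when a node fails. Here is a minimal sketch of the idea; the function names (`run_on_node`, `run_with_retries`) and the random-failure model are hypothetical stand-ins, not any real cluster API:

```python
import random

random.seed(0)  # deterministic failures, for the sake of the example

def run_on_node(task, fail_prob=0.3):
    """Hypothetical stand-in for dispatching a task to one machine.
    In the idealized model it always succeeds; here it fails at random,
    the way real nodes fail for non-idealized reasons."""
    if random.random() < fail_prob:
        raise RuntimeError(f"node running task {task!r} failed")
    return task * task  # the "work": squaring a number

def run_with_retries(task, max_attempts=10):
    """Resubmit a failed task (as if to another node) rather than
    letting one flaky machine throw off the whole computation."""
    for _ in range(max_attempts):
        try:
            return run_on_node(task)
        except RuntimeError:
            continue  # try again elsewhere
    raise RuntimeError(f"task {task!r} failed on {max_attempts} nodes")

results = [run_with_retries(n) for n in range(10)]
```

The point isn’t the retry loop itself, which is trivial; it’s that the program must model the failure, which the pure mathematical idealization of a computer never required.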

I think I’ve belabored this point enough. Any model - whether expressed in English, math, or something else - is always an idealization. In some areas, like statistics, you need to be very mindful of the pathologies of the model. In others, the model is so good that you can pretend it’s perfect. But in principle they’re all idealizations.