I’m in more of a mathematical mood right now, so I’m going to cover a piece of abstract mathematics: the St. Petersburg Paradox. It’s a famous problem, and you can look it up on Wikipedia for more detail if you like, but here’s a short summary. Imagine a coin-flipping game. The pot starts at $1, and every time the coin lands heads, the pot doubles. When the coin eventually lands tails, you win whatever the pot holds. How much should it cost to play?
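To make the setup concrete, here is a minimal sketch of the game in Python. The function name and the number of rounds are my own choices for illustration, not anything from the original problem statement:

```python
import random

def play_st_petersburg():
    """Play one round: the pot starts at $1 and doubles on each
    heads; the first tails ends the round and pays out the pot."""
    pot = 1
    while random.random() < 0.5:  # heads with probability 1/2
        pot *= 2
    return pot

# The sample mean drifts around as rounds accumulate, because rare
# long runs of heads contribute enormous payouts.
rounds = 100_000
average = sum(play_st_petersburg() for _ in range(rounds)) / rounds
print(f"average payout over {rounds} rounds: ${average:.2f}")
```

Run it a few times and the average jumps wildly from one run to the next; that instability is the paradox showing up empirically.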

Now I very much enjoy this problem in a pure mathematical sense, but Daniel Bernoulli, who gave the problem its famous analysis (it was actually posed by his cousin Nicolas Bernoulli), apparently took the mathematics of it rather too far. Bernoulli noticed, as the more astute among you probably either deduced or already knew, that the game’s expected value is in fact infinite. This means that no matter what the cost to play, you should always accept. However, most people wouldn’t pay even $50 to play this game. Bernoulli deduced from mathematical bases a utility function which would explain this behavior using a logarithmic idea of value. He supposed that people’s valuation of each additional dollar decreases as the amount of money they possess increases, or to use another term, he proposed a diminishing marginal utility function for money. While this approach, I guess, works, the even more astute among you will have noticed that it doesn’t actually solve the paradox. You can just define a game whose payoff function is the inverse of whatever utility function you pick, and still end up with an infinite expected utility that nobody will pay much to chase. Other mathematicians have wrestled with this problem, and so far the conclusion, as far as I am aware, is that utility must be bounded in order to resolve this type of paradox.
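A quick numerical sketch of both claims. Using the convention that you win $2^(k−1) when the first tails comes on flip k (which matches the summary above), each term of the expected-value sum contributes exactly 1/2, so the sum diverges; Bernoulli’s logarithmic valuation makes the same sum converge. The function names here are mine:

```python
import math

def truncated_ev(n):
    """Expected dollar value over the first n outcomes:
    win 2**(k-1) dollars with probability 2**-k. Each term
    contributes exactly 0.5, so this grows without bound."""
    return sum(2**-k * 2**(k - 1) for k in range(1, n + 1))

def truncated_log_utility(n):
    """Bernoulli's fix: value log(payoff) instead of payoff.
    The same sum now converges (to log 2)."""
    return sum(2**-k * math.log(2**(k - 1)) for k in range(1, n + 1))
```

`truncated_ev(n)` is just n/2, so it diverges, while `truncated_log_utility(n)` settles near 0.693. The counter-move from the text is visible here too: replace the payoff `2**(k - 1)` with `math.exp(2**(k - 1))` and even the log-utility sum diverges again.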

Now, I am not a professional mathematician, but I believe that I have solved this paradox. Simply put, all these mathematicians have been assuming that people share the conception of reality they themselves are working with: a mathematical one. These mathematicians have assumed that people think of money as a number. That seems obvious, right? Money is measured numerically. Well, yes, but the fact that different people value money and other commodities differently means that it isn’t a number. Numbers are inherently objective. Two people must categorically agree that a 7 is a 7; it always was, is, and will be 7, and 7 = 7, which also equals 6 + 1 and an infinitude of other identities. However, we all know that two people might have differing opinions of various exchanges, such as $3 for a mango. Someone who loves mangoes might buy at that price; someone who doesn’t, won’t. So we can’t say that $3 = 1 mango in the same way that we can say that 7 = 7, even if every mango in the world were always bought and sold at that price.

The issue here is that these mathematicians, while brilliant deductive thinkers, think of the universe in a flatly rational way. While this is probably the best single perspective through which to view the universe, it fails when dealing with people who lack a similar rational strictness. Have you ever been beaten at a game you were clearly better at, simply because the other player refused to play “properly”? This happens all the time in poker and numerous other gambling and card games. In games like chess it rarely happens, because in a game of perfect information “proper” play can be categorically proven superior during the game itself: if a move results in a bad situation, then it wasn’t proper play. Where information is limited, “proper” play might land you in situations you couldn’t predict or prevent. Anyway, a more textured view would allow for nonlinear and unconventional conceptual modes of perceiving the universe. For example, perhaps a certain subsection of people conceive of money as power. The actual number isn’t as relevant as the power it holds to create exchanges; the numbers are negotiable based on the situation and on the value sets of the parties involved. So the St. Petersburg Paradox could equally be resolved by saying that power doesn’t scale the way money does. If you offered someone a utility function of power, it would mean nothing. Power is not infinitely divisible: the ability to do one thing doesn’t blend seamlessly into the ability to do another. The atomic unit of power is much larger than the infinitely fine divisions between any two numbers. Having ten very small amounts of additional power is also not the same thing as one very large new executive power.

People can link together abstractions and concepts in many, many different ways. For example, some successful investors say that instead of looking at your money like your fruit, you should look at it like your bag of seed with which to grow more seed. True, you’re going to have to sell some of those seeds to get what you need, but its purpose is to grow. As you accumulate more and more, the amount you can draw off increases while still maintaining a useful volume. This gives a completely different outlook on money, and will generate different decision behavior than looking at money as something to be spent as it is earned. This same principle can apply anywhere at all, because in order for something to exist in your perceptual map, you have to think about it somehow. You might think of movies like books that have been converted, like picture books, like snatches of real-life experience, like a sequence of scenes tied together end to end, or like a strip that runs through its full length in only one direction, the same way every time. There are other possibilities, of course, but that’s as many as I could think of while typing this post. This is only a small slice of the possibilities of conceptual remapping (analogues and analogies, specifically); other forms would require a great deal more explanation. I think you get the point, though.

Back to mathematicians and the St. Petersburg Paradox. The paradox only exists if you look at utility in the mathematical sense. There exist models, such as the one that “common sense” seems to indicate, in which there is no paradox. These models instead see a game with a sliding scale of value, where beyond a certain point the value is zero (or negligible). This gradual fading of value explains why different people would decide to play the game at different prices. I don’t think even the most hardcore mathematician would play the game for $1 million a round, even though in expectation it would eventually pay for itself. The utility solution fails to take into account the common-sense evaluation of time and effort as factors in any given activity. You could factor in such an evaluation, but you would probably then be missing something else, and so on, until you had built up a complete map of the common sense and shared perceptual map of the most common conceptual space. But then you would have duplicated the entire structure you were attempting to model, and created a simulation instead of a simplification.
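One crude way to sketch that sliding scale is to cap the game’s value: treat any payoff above some ceiling as worth only the ceiling. The ceiling here is a hypothetical number of my own choosing, standing in for “amounts nobody takes seriously,” but the effect is that the fair price collapses from infinity to something close to everyday intuition:

```python
def capped_ev(cap):
    """Expected value when any payoff above `cap` is valued at `cap`:
    you win min(2**(k-1), cap) dollars with probability 2**-k."""
    ev, k = 0.0, 1
    while 2**(k - 1) < cap:
        ev += 2**-k * 2**(k - 1)  # each uncapped term adds exactly 0.5
        k += 1
    ev += 2**-(k - 1) * cap       # all remaining outcomes valued at the cap
    return ev

# Valuing nothing above about a trillion dollars (2**40) gives a
# fair price of $21 per round.
print(capped_ev(2**40))
```

Even an absurdly generous ceiling only nudges the fair price up by a few dollars, which fits the observation that ordinary people balk well below $50.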

On simulations and conventional models: we currently use both. Our simulations, however, tend to be based in the real world, and we refer to them as experiments. This is how we collect evidence. The problem with the natural universe is that there is such an unimaginable profusion of activity and information that we can’t pick out any particular aspect to study. An experiment controls all those extraneous factors, removing or minimizing them, so that we can focus on a single test in an otherwise confusing universe. Once we have our results from that test, we can move on to test another part of reality. Eventually we will have built up a complete picture of what’s going on. Simulations are data overkill from which we can draw inductive conclusions when we don’t understand all the underlying mechanics. Models are streamlined flows, as simple and spare as possible, which we can use to draw deductive conclusions. For example, the equation for the displacement of a falling object [d = v0*t + (1/2)*a*t^2] is a simplified model, subtracting all factors other than the ones being considered, allowing us to deductively conclude the displacement for any values of v0, t, and a. Mathematical conclusions are sequences of deductive operations, both in constructing proofs and in solving or applying any given instance of an equation, expression, or situation.
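As a sketch of what “model as deduction machine” means here, the displacement formula above is a one-liner whose variable names mirror the equation:

```python
def displacement(v0, a, t):
    """Simplified kinematics model: d = v0*t + (1/2)*a*t**2.
    Air resistance, wind, and every other factor are deliberately
    thrown away; what remains is a pure deduction from three inputs."""
    return v0 * t + 0.5 * a * t**2
```

Feed in any v0, a, and t and the answer follows mechanically. That is exactly the trade a model makes: less fidelity, total deducibility.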

Our minds operate, at the most basic level, primarily on models and only secondarily on simulations. This is because most of the time a model is close enough: you don’t need to include every factor to get an answer at sufficient precision. You don’t have to factor in the time, the temperature, or the quantum wobble of each atom in a baseball to figure out where it’s going to land. If you wanted a perfect answer you could simulate it, but you can get an extremely precise one by simply ignoring all those marginal factors. They are not worth computing. Here we begin to connect with the distinction I’ve brought up before between algorithms and heuristics. Models are often heuristics, and simulations are often algorithms. Models can include algorithms and simulations can include heuristics, but on the whole a simulation (given correct laws and good starting conditions) will algorithmically compute exactly what is going to happen. A model, on the other hand, is a much more efficient process that throws away data in order to make calculation simpler. Usually a lot simpler.

Now I am willing to bet that some readers will be confused. I just said that simulations need the right laws and starting conditions; isn’t that the same thing as a deductive process needing the right logical framework and initial premises? Well, yes. That’s because a logical construct is a simulation. However, it is a simulation built from information already stripped of extraneous detail by modeling it. The line between model and simulation is not black and white; they are approximate labels for the extremes of a spectrum with conflicting ideals. The perfect model is one law that determines everything. The perfect simulation is a colossal data stream that represents everything, down to the spin of the last electron. This is also where we get a fundamental division between philosophers: rationalism versus empiricism. The rationalists believe the model to be the “one true philosophical medium,” and the empiricists believe it’s better to use simulations. The tricky part is that in order to construct a simulation, you have to have models to run each of its laws and each of its elements, and in order to have a model, you have to have a simulation to draw patterns from. So we have an infinite recursion in which rationalists and empiricists chase one another’s coattails for all eternity. Fortunately, most people who have thought about this much have come to more or less the same conclusion: rationalism and empiricism go hand in hand quite nicely. However, there remains a preference for understanding the world through one mode or the other.

How does all this apply to the original issue of the St. Petersburg Paradox? We have mathematicians who are definitely rationalists; I imagine there aren’t many professional mathematicians who are empiricists. These mathematicians construct a model that represents a certain behavioral set. Their problem is that reality doesn’t actually support the conclusion they claim is most rational. So they change the model, as they should, to better reflect reality. All well and good. But they are doing their job backwards in one concealed respect: implicit in their model is the assumption that the population they are describing shares the conceptual map of the people who built the model. I am aware that I could have simply said we have some ivory tower mathematicians who are out of touch with reality, but I wanted to cover in depth what the disconnect with reality is. They are correcting their model to better reflect empirical reality in one respect, but in so doing they are simultaneously working in reverse, projecting assumptions from their meta-models onto reality. We have rationalism and empiricism, simulations and models, inductive and deductive thinking, all chasing their dance partners around. But the vital point is that the process must only go one way. You must always push forward by correcting each side to better fit the other as found in reality, rather than working backwards and assuming onto reality things which are not the case. If you do the latter, and then entrench your position with a rationale, you are corrupting your meta-model of reality. And, like a monkey with its hand caught in a banana trap, the tighter you squeeze your fist the more surely you stay stuck. With every ratchet backwards on the progress ladder, you get more firmly stuck in place, and it even gets harder to keep going backwards.
The wheel spins one way, it grinds to a halt in the other.