The St. Petersburg Paradox

I’m in more of a mathematical mood right now, so I’m going to cover a piece of abstract mathematics: the St. Petersburg Paradox.  It’s a famous problem, and you can look it up on Wikipedia for more detail, but here’s a short summary.  Imagine a game of flipping a coin.  The pot starts at $1, and every time the coin lands heads, the pot doubles.  When the coin eventually lands tails, you win whatever is in the pot.  How much should it cost to play?
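To make the game concrete, here’s a minimal sketch of one round in Python (the function name and trial count are my own, purely for illustration):

```python
import random

def play_round() -> int:
    """One round of the St. Petersburg game: the pot starts at $1,
    doubles on every heads, and pays out on the first tails."""
    pot = 1
    while random.random() < 0.5:  # heads with probability 1/2
        pot *= 2
    return pot

# The sample average keeps creeping upward as you add more trials,
# a first hint that the expected value diverges.
trials = [play_round() for _ in range(1_000_000)]
print(sum(trials) / len(trials))
```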

Now I very much enjoy this problem in a pure mathematical sense, but Daniel Bernoulli, who gave the problem its classic treatment (it was actually posed by his cousin Nicolas), apparently took its mathematics rather too far.  Bernoulli noticed, as the more astute among you probably either deduced or already knew, that the game’s expected value is in fact infinite.  This means that no matter the cost to play, you should always accept.  However, most people wouldn’t pay even $50 to play this game.  Bernoulli derived, on mathematical grounds, a utility function that would explain this behavior using a logarithmic notion of value.  He supposed that people’s valuation of money decreases as the amount of money they possess increases, or to use another term, he proposed a diminishing marginal utility function for money.  While this approach, I guess, works, the even more astute among you will have noticed that it doesn’t actually solve the paradox.  You can simply define a new game whose payoffs grow as the inverse of whatever utility function was chosen, and you end up once again with an infinite expected utility that nobody will pay for.  Other mathematicians have wrestled with this problem, and so far the conclusion, as far as I am aware, is that utility must be bounded in order to resolve this type of paradox.
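For the record, here’s the calculation for the $1-pot version of the game described above.  If H is the number of heads before the first tails, the payoff is 2^H, and:

```latex
E[X] = \sum_{n=0}^{\infty} \left(\tfrac{1}{2}\right)^{n+1} 2^{n}
     = \sum_{n=0}^{\infty} \tfrac{1}{2} = \infty,
\qquad
E[\log_2 X] = \sum_{n=0}^{\infty} \left(\tfrac{1}{2}\right)^{n+1} n = 1.
```

The first sum diverges, which is the paradox; under logarithmic utility the second sum is finite, making the game worth about the same as a sure $2 to a player with negligible outside wealth (Bernoulli’s own treatment folds existing wealth into the logarithm, which shifts the exact figure).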

Now, I am not a professional mathematician, but I believe that I have solved this paradox.  Simply put, all these mathematicians have been assuming that people share the conception of reality they themselves work with: a mathematical one.  These mathematicians have assumed that people think of money as a number.  That seems obvious, right?  Money is measured numerically.  Well, yes, but the fact that different people value money and other commodities differently means that it isn’t behaving like a number.  Numbers are inherently objective.  Two people must categorically agree that a 7 is a 7; it always was, is, and will be 7, and 7 = 7, which also equals 6 + 1 and an infinitude of other identities.  Yet we all know that two people might value a given exchange differently, say $3 for a mango.  Someone who loves mangoes might buy at that price; someone who doesn’t, won’t.  So we can’t say that $3 = 1 mango in the same way that we can say that 7 = 7, even if every mango in the world were always bought and sold at that price.

The issue here is that these mathematicians, while brilliant deductive thinkers, think of the universe in a flatly rational way.  While this is probably the best single perspective through which to view the universe, it fails when dealing with people who lack a similar rational strictness.  Have you ever been beaten at a game you were clearly better at, simply because the other player refused to play “properly”?  This happens all the time in poker and countless other gambling and card games.  In games like chess it rarely happens, because in a game of perfect information “proper” play can be categorically proven superior during the game itself: if it would result in a bad situation, then it wasn’t proper play.  Where information is limited, “proper” play might land you in situations you couldn’t predict or prevent.  Anyway, a more textured theory of perception would allow for nonlinear and unconventional conceptual modes of viewing the universe.  For example, perhaps a certain subsection of people conceive of money as power.  The actual number isn’t as relevant as the power it holds to create exchanges.  The numbers are negotiable based on the situation and on the value sets of the parties involved.  So the St. Petersburg Paradox could equally be resolved by saying that power doesn’t scale the way money does.  If you offered someone a utility function of power, it would mean nothing.  Power is not infinitely divisible: the ability to do one thing doesn’t blend seamlessly into the ability to do another.  The atomic unit of power is much larger than the infinitely fine divisions between any given numbers.  Having ten very small amounts of additional power is likewise not the same thing as one large new executive power.

People can link together abstractions and concepts in many, many different ways.  For example, some successful investors say that instead of looking at your money as fruit to be eaten, you should look at it as a bag of seed with which to grow more.  True, you’re going to have to sell some of those seeds to get what you need, but its purpose is to grow.  As you accumulate more and more, the amount you can draw off increases while the stock still maintains a useful volume.  This gives a completely different outlook on money, and will generate different decision behavior than looking at money as something to be spent as it is earned.  This same principle can apply anywhere at all, because in order for something to exist in your perceptual map, you have to think about it somehow.  You might think of movies as books that have been converted, as picture books, as snatches of real-life experience, as a sequence of scenes tied together like lengths of string, or as a strip that runs through its full length in only one direction, the same way every time.  There are other possibilities, of course, but that’s as many as I could think of while typing this post.  This is only a small slice of the possibilities of conceptual remapping (analogues and analogies, specifically); other forms would require a great deal more explanation.  I think you get the point, though.

Back to mathematicians and the St. Petersburg Paradox.  The paradox only exists if you look at utility in the mathematical sense.  There exist models, such as the one “common sense” seems to indicate, that see no paradox at all.  These models instead see a game with a sliding scale of value, where beyond a certain point the value is zero (or negligible).  This gradual fading of value explains why different people would agree to play the game at different prices.  I don’t think even the most hardcore mathematician would play the game at $1 million a round, even though in expectation it would still eventually pay for itself.  The utility solution fails to take into account the common-sense evaluation of time and effort as factors in any given activity.  You could factor in such an evaluation, but you would probably then be missing something else, and so on, until you had built up a complete map of common sense and the shared perceptual map of the most common conceptual space.  But then you would have duplicated the entire structure you were attempting to model, and created a simulation instead of a simplification.
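To put a number on that common-sense reaction, here’s a quick back-of-the-envelope sketch (my own, not anyone’s published model).  At $1,000,000 a round, a round only pays for itself if the coin comes up heads 20 times in a row:

```python
import math

cost = 1_000_000

# The payoff is 2**h, where h is the number of heads before the first
# tails, so a round covers its cost only when 2**h >= cost.
heads_needed = math.ceil(math.log2(cost))   # 20 heads in a row
p_break_even = 0.5 ** heads_needed          # P(h >= 20) = 2**-20
print(f"P(a round at least breaks even): {p_break_even:.2e}")
print(f"Expected rounds until one does:  {1 / p_break_even:,.0f}")
```

You would expect to sink roughly a million rounds, about a trillion dollars, before a single round so much as breaks even; that waiting time is exactly the sort of factor the bare utility model throws away.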

On simulations and conventional models: we currently use both.  Our simulations, however, tend to be based in the real world, and we refer to them as experiments.  This is how we collect evidence.  The problem with the natural universe is that there is such an unimaginable profusion of activity and information that we can’t otherwise pick out any particular aspect to study.  An experiment controls all those extraneous factors, removing or minimizing them from a confusing universe so we can focus on a single test.  Once we have our results from that test, we can move on to test another part of reality.  Eventually we will have built up a complete picture of what’s going on.  Simulations are data overkill from which we can draw inductive conclusions when we don’t understand all the underlying mechanics.  Models are streamlined flows, as simple and spare as possible, which we can use to draw deductive conclusions.  For example, the equation for the displacement of a falling object [d = v0*t + (1/2)*a*t^2] is a simplified model, subtracting all factors other than the ones being considered, allowing us to deductively conclude the displacement for any values of v0, t, and a.  Mathematical conclusions are a sequence of deductive operations, both in constructing proofs and in solving or applying any given instance of an equation, expression, or situation.
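As a trivial illustration of what I mean by a model (the function is mine, just to make the point concrete):

```python
def displacement(v0: float, a: float, t: float) -> float:
    """Displacement under constant acceleration: d = v0*t + (1/2)*a*t^2.
    Drag, wind, and everything else have been deliberately thrown away;
    that discarding is precisely what makes this a model."""
    return v0 * t + 0.5 * a * t**2

# An object dropped from rest (v0 = 0) falling for 3 seconds:
print(displacement(v0=0.0, a=9.81, t=3.0))  # ~44.1 meters
```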

At the most basic level our minds operate primarily on models, and only secondarily on simulations.  This is because, most of the time, a model is close enough.  You don’t need to include every factor in order to get an answer of sufficient precision.  You don’t have to factor in the time, the temperature, or the quantum wobble of each atom in a baseball to figure out where it’s going to land.  If you wanted a perfect answer you could simulate it, but you can get an extremely precise one by simply ignoring all those marginal factors.  They are not worth computing.  Here the distinction I’ve brought up before between algorithms and heuristics comes into play.  Models are often heuristics, and simulations are often algorithms.  Models can include algorithms and simulations can include heuristics, but on the whole a simulation (given correct laws and good starting conditions) will algorithmically compute exactly what is going to happen.  A model, on the other hand, is a much more efficient process that throws away data in order to make calculation simpler.  Usually a lot simpler.
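The contrast is easy to see in code.  Here is the same where-does-the-ball-land question answered both ways; the drag constant and step size are made-up illustrative values:

```python
import math

# The model: a closed-form range formula with drag ignored.
# Cheap, heuristic, and close enough most of the time.
def range_model(v0: float, angle: float, g: float = 9.81) -> float:
    return v0**2 * math.sin(2 * angle) / g

# The simulation: step the laws forward in tiny increments.
# Expensive, but it algorithmically computes what happens,
# given correct laws and good starting conditions.
def range_simulated(v0: float, angle: float, g: float = 9.81,
                    drag: float = 0.002, dt: float = 1e-4) -> float:
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= drag * speed * vx * dt          # quadratic air drag
        vy -= (g + drag * speed * vy) * dt    # gravity plus drag
        x += vx * dt
        y += vy * dt
    return x

print(range_model(40.0, math.radians(35)))      # the model's quick answer
print(range_simulated(40.0, math.radians(35)))  # the simulation's answer
```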

Now I am willing to bet that some readers will be confused.  I just said that simulations need the right laws and starting conditions- isn’t that the same thing as a deductive process needing the right logical framework and initial premises?  Well, yes.  That’s because a logical construct is a simulation.  However, it is a simulation constructed from information that has already been stripped of extraneous detail by modeling it.  The line between model and simulation is not black and white- they are approximate labels for the extremes of a spectrum with conflicting ideals.  The perfect model is one law that determines everything.  The perfect simulation is a colossal data stream that represents everything, down to the spin of the last electron.  This is also where we get the fundamental division between philosophers: the conflict of rationalism versus empiricism.  The rationalists believe the model to be the “one true philosophical medium,” and the empiricists believe it’s better to use simulations.  The tricky part is that in order to construct a simulation, you have to have models to run each of its laws and each of its elements, and in order to have a model, you have to have a simulation to draw patterns from.  So we have an infinite recursion, with rationalists and empiricists chasing one another’s coattails for all eternity.  Fortunately, most people who have thought about this much have come to more or less the same conclusion: rationalism and empiricism go hand in hand quite nicely.  However, there remains a preference for understanding the world through one mode or the other.

How does all this apply to the original issue of the St. Petersburg Paradox?  We have mathematicians who are definitely rationalists- I imagine there aren’t many professional mathematicians who are empiricists.  These mathematicians construct a model that represents a certain behavioral set.  Their problem, however, is that reality doesn’t actually support the conclusion they are calling most rational.  So they change the model, as they should, to better reflect reality.  All well and good.  The trouble is that they are doing their job backwards in one concealed respect.  Implicit in their model is the assumption that the population they are describing shares the conceptual map of the people who created the model.  I am aware that I could have simply said we have some ivory-tower mathematicians who are out of touch with reality, but I wanted to cover in depth what the disconnect with reality is.  They are correcting their model to better reflect empirical reality in one respect, but in so doing they are simultaneously working in reverse, projecting assumptions from their meta-models onto reality.  We have rationalism and empiricism, simulations and models, inductive and deductive thinking, all chasing their dance partners around.  But the vital point is that the process must only go one way.  You must always push forward by correcting each to better fit the other as found in reality, rather than working backwards and assuming things onto reality which are not the case.  If you do the latter, and then entrench your position with a rationale, you are corrupting your meta-model of reality.  And, like a monkey with its hand caught in a banana trap, the tighter you squeeze your fist the more surely you get stuck.  With every ratchet backwards on the progress ladder you get more firmly stuck in place, and it even gets harder to continue going backwards.  The wheel spins one way, and grinds to a halt in the other.

The Rationality of Man

I hate that I have to say this, but no, the title of this post does not exclude women.  Anyone who claims it does is hunting for semantic ambiguities they can fill themselves to satisfy their political or socio-psychological agenda.  It’s rather like a child asking its parents “can’t I have a candy?” and, when they say “no,” happily munching away- as if the “no” had answered the literal question “can’t I?”

Now, I’ve had an interesting email conversation about whether or not man is actually rational.  It’s an extremely difficult proposition to prove either way, because you can cite evidence on both sides in the form of anecdotes about people who acted rationally or irrationally, or construct hypothetical situations in which the “natural” response is similarly ambiguous.  You can hem and haw all day and still not get anywhere decisive about the true nature of man.  The biggest obstacle to the argument that humans are rational is simply that humans sometimes act irrationally.  Conversely, the argument that humans are irrational is sunk because humans sometimes make rational decisions.  According to the popular relativist mode of thinking, we now reach an impasse, a compromise, a non-answer such as “humans are neither rational nor irrational,” or “some humans are, and some aren’t,” or worse, “we can’t prove it, therefore it cannot be determined.”

Such a situation seems to indicate that we’re missing something, as I have often repeated.  To resolve this issue, let’s instead look at what exactly we mean by rationality in this context.  Do we mean that humans are like calculators with mouths, capable of maximizing every last erg of their efficiency and output in order to acquire the most material prosperity?  Obviously not- life is so much richer than cold monetarism.  I use the term “personal economics,” which includes subjective value relative to each person.  Family, psychological needs, preferences, and so on all sum together into a complex mass which we use to make choices, different for each person.  Now, I would argue that someone’s choices are always, always going to be rational relative to this standard they carry around in their heads.  Otherwise they would have done something else; and if they deviate from this model for a reason they consider rational, the act of contradicting the model changes it to accommodate the behavior.  This is cognitive dissonance at work.  The issue we’re really after, then, is whether the model itself is rational, not whether the decision-making process applied to it is rational.  My argument thus runs that the content of the data operated upon is not what makes a being rational or not.  If you punch into your calculator “what is the opposite of a hippo?” it’s going to go “ERROR,” but that’s not because the calculator isn’t rational (bad example; calculators aren’t sentient… yet!)- it obeys perfectly objective, rational laws, and does so perfectly every time.  In human terms, if your brain were plugged into a computer that simulated reality exactly, with the small exception that you were in the body of a hippo, your rationality would not be affected by the irrational data being fed to it.  In all probability you’d figure out how to do your hippo thing and live life as a hippo with a very large IQ, at least until you were unplugged.
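Here’s that distinction in sketch form (a toy framing of my own): the chooser below is the same rational machinery for everyone, and only the subjective model it operates on differs.

```python
def choose(options, personal_value):
    """The 'universal' decision procedure: pick whichever option
    scores highest against the chooser's own model of value."""
    return max(options, key=personal_value)

options = ["buy mango for $3", "keep the $3"]

# Two different internal models, one identical decision process.
mango_lover = {"buy mango for $3": 5, "keep the $3": 3}
mango_hater = {"buy mango for $3": 1, "keep the $3": 3}

print(choose(options, mango_lover.get))  # buy mango for $3
print(choose(options, mango_hater.get))  # keep the $3
```

Both outputs are rational; they differ only because the models differ.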

The opposite argument basically claims that the information contained within your thought-entity (i.e., your mind) is an inseparable and fundamental building block of rationality.  Claiming that rationality depends on your actual thoughts, sense data, and ideas is not that strange, considering that you have to learn how to be rational.  Otherwise you would be born just as rational as any scholar, and education would be just claptrap.  We know for a fact that teaching- that is, specific sensory data designed to produce specific (usually useful) thought patterns- makes people more rational, so rationality is learned, and is therefore dependent upon what memories and information are actually in your head.  If a human mind were utterly deprived of sensory input, it would hardly become a rational entity.  I actually agree with this analysis, believe it or not.  However, I will assert that by teaching, you aren’t modifying the fundamental operations used to determine choices or actions; you’re actually modifying the model in the person’s head so that the same intrinsic operations will produce a more desirable result relative to objective reality.

Let’s look at an example.  You have to learn that 2 + 2 = 4.  If you haven’t been taught addition, you don’t know that.  I would argue, rather, that you did already know it, but you didn’t know the significance of the symbols 2, +, =, and 4 until you had been taught addition.  If you understand only quantities, you can certainly see that you have two of something, and if two more are added, you can simply count again and reach the result of 4.  Indeed, you don’t even need counting to recognize the concept of number- that three of something is different from four of something in a specific, definite capacity- and formalizing that capacity for difference semantically, then transmitting it through teaching, is how we arrive at counting.  In fact, just by understanding numbers on the level of a four-year-old, you fundamentally understand mathematics up to basic algebra.  If you count two, and you count four, what operation will transform this into that?  We have just divined 2 + x = 4.  Furthermore, if you’re really that much smarter than the average bear, you might even develop your own formulaic system of mathematics with notation such as u%4/u.  I just made that notation up, and don’t claim it’s as flexible or useful as the standard one, but the point is that for someone else to understand it, I would have to teach it to them.  Of course, this same reasoning applies to words and concepts as well.  An actual apple is completely different from the semantic identity of the word representing it, in just the same way that a pre-verbal child can understand quantities without knowing how to manipulate or communicate them as concepts.  This seems a trivial point, but the fact that multiple languages exist proves that semantic identity is not equivalent to reality; if there were such an equivalence, there could exist only one “true” language.  And let’s not even get into the idea of sensory data transmission of the nature of language in general, not just a particular language.
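In the spirit of that divining step, here’s a toy sketch (mine) of recovering the unknown in 2 + x = 4 purely by counting:

```python
# Count up from two until we reach four, tallying the steps taken.
# The tally is the x that was 'divined' in 2 + x = 4.
start, target = 2, 4
x = 0
count = start
while count != target:
    count += 1
    x += 1
print(x)  # 2, so 2 + 2 = 4
```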

Back to the topic: we can all agree that at some point there existed a time when humans did not understand basic principles in rational terms.  Even if that requires going back to before there was life on earth, we can do it.  So at some point, beginning with nothing, we developed every last one of those principles which we now teach.  The fact that they were discerned from out of the fabric of objective reality proves that the faculty needed to conceptualize those principles is separate from them- we needed something to start with, some tool we used to derive all our other tools.  Now, it is possible that we have a wide variety of tools genetically ingrained in us, such as an understanding of Newtonian physics derived from our monkey days of jumping between trees, and an implicit grasp of inductive reasoning ingrained in our behaviorist psychologies.  My only assertion is that the basic, universal, master decision-making system is one of those tools.  All the other tools, inductive reasoning included, are servile to your decision-making algorithm, whether in a sensory or an enabling capacity.  They either provide you with (presumably accurate) information, give you the power to act on your environment, or offer some other form of utility.  The faculty of memory is a very significant, but still servile, form of information storage for recall when useful.

This is a huge topic, actually, and I’ve only touched on it a little bit.  I’ll almost certainly do another post on this.  At least one.