Impulsiveness

Is impulsiveness a desirable characteristic?  I am a categorical thinker: I like to think about things before I do them.  However, part of that thought process is being able to suspend thought when necessary.  As such, whether impulsiveness has a place in the repertoire of the contemporary rationalist is an interesting question.  First, we need to look at where impulsiveness is typically used.  Impulsiveness is often associated with interpersonal exchanges, with social people and people who enjoy parties.  It is strongly dissociated from business or financial decisions, with some exceptions such as small purchases and gambling.  So while common sense acknowledges that impulsive action is improper for weighty decisions, for more trivial matters it helps a great deal.

Before we get into the topic, we need to make some distinctions.  There is impulsiveness and then there is recklessness.  The way I conceive of the terms, impulsiveness is thinking of an action and allowing it to proceed into reality without too much analysis.  Recklessness, on the other hand, implies full knowledge of the action beforehand, but doing it in spite of your analysis that it is foolhardy.  I will talk about both, but first let’s cover the less complex issue of impulsiveness.

In social situations, impulsiveness is a great aid because you can’t think too much about what you’re going to say.  There are a large number of very smart people who have difficulty in social situations because they don’t realize that their strategy for dealing with reality is not universally applicable- it needs to be changed to fit the needs of the moment.  When I was a kid I was like this.  I have since learned to apply rationality pragmatically and completely, and can piece together the solution to such puzzles.  Basically, if you think too much about what you’re going to say, you give an unnatural amount of weight to the moments when you do speak.  So unless you’re able to spout endless amounts of deep, profound thoughts, you’re invariably going to be putting a lot of weight behind fairly trivial statements, and the inconsistency comes across as awkward.  Impulsiveness decreases the weight of what you’re saying and gives it a throwaway quality, which helps you in a number of ways.  Firstly, if a remark doesn’t work out, nobody really notices, and you can keep going with whatever suits you.  Secondly, it puts you in the more dominant position of just saying whatever you feel like saying.  You aren’t vetting your thoughts to check whether the rest of the group will approve.  This brings us to the second flaw in the introverted thinker’s social rut: he is attempting to apply thought to the situation in order to do better, and it shows very obviously to the rest of the group.  This is a complex point that I can’t encapsulate in one post, but basically any attempt to earn approval guarantees denial of it in direct proportion to the effort spent.  The introverted thinker’s goal is to earn approval, and his model for deciding what to say is, logically, fixed upon achieving that goal.  While his intentions are good, his entire approach rests on so many incorrect assumptions that he isn’t even capable of recognizing that his whole paradigm is nonfunctional.  He just dives right back in with an “it must work” attitude instead of reworking from first principles.

Impulsiveness is also a pragmatic tool to be used liberally in situations of doubt.  When it is clear that hesitation will cost more than immediate action, you have to go.  When I was younger I had a model of “going for help” which essentially contained the idea that help was distant.  “Going for help” would take a long time, and there was a significant chance that the window for whatever the situation was would close.  So my primary course would have been to just go do it myself.  That is an incorrect application of impulsiveness, driven by incorrect information.  A proper application of impulsiveness might be this: you are handed a test with 100 four-answer multiple-choice questions and 100 seconds to complete it.  There is no way you could conceivably cover even 25% of the questions if you legitimately tried to answer them.  However, if you guess randomly you have a 1 in 4 chance on each question, so over 100 questions you should get about 25 correct.  This is clearly your best strategy given the rules of the game.  You have concluded that the best strategy is to suspend rational inquiry into each question because it is simply not worthwhile.  You wouldn’t work for an hour to earn a penny, and you wouldn’t spend X seconds thinking about each question.
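A minimal sketch of that arithmetic, using the 100-question, four-choice setup from above (the simulation is only there to show the spread around the expectation):

```python
import random

QUESTIONS = 100
CHOICES = 4

# Expected score from pure guessing: each question is an independent
# 1-in-4 chance, so the expectation is QUESTIONS / CHOICES = 25.
expected = QUESTIONS / CHOICES

def guess_score():
    # One test-taker guessing blindly against a random answer key.
    answers = [random.randrange(CHOICES) for _ in range(QUESTIONS)]
    key = [random.randrange(CHOICES) for _ in range(QUESTIONS)]
    return sum(a == k for a, k in zip(answers, key))

scores = [guess_score() for _ in range(10_000)]
print(f"expected: {expected}, simulated mean: {sum(scores) / len(scores):.2f}")
```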

The other fallacy that makes impulsiveness distasteful to many is the idea that the answer actually matters.  In our test example, you don’t actually care what the answer to any given question is; you have all the information needed to create a sufficient strategy.  For social impulsiveness, the simple fact of the matter is that your actions really don’t matter that much- provided you don’t do anything truly inappropriate, at least.  The, and I use this term very reluctantly, “antisocial nerds” ascribe a great deal of value to their interactions and to what each party says.  This is a misunderstanding of the nature of the communication.  The actual content is unimportant.  Nobody cares whether you’re talking about the weather, cars, or anything else.  True, this doesn’t make logical sense, and in a perfect world people would communicate usefully instead of feeding their egos with the fact that they’re talking to people.  Most “extroverts” are pleased by the fact that they’re talking to people, and are anxious when seen by themselves.  This mentality is communicated to introverts and affects them quite adversely, because they prefer to be alone for some part of their day and may come to believe that there is something wrong with them.  Don’t buy it, please.  The people who *need* to be around others to validate themselves are the unstable ones.

It’s similar to the way men and women treat sex.  Men are usually sexually insensitive and are more pleased by the fact that they are having sex than by the sex itself.  They are usually seeking validation from society instead of their own enjoyment.  Of course, most women can pick this up immediately, and they would prefer not to be some boy’s tool for self-validation.  Women, you aren’t off the hook- you do the same thing, just not with sex.  Instead, you get validation from men paying attention to you while others are watching.  Don’t get me wrong, it goes both ways.  Some women perceive that they get validation from having lots of sex, and some men get validation from attention from women; they’re just not as common as the other way around.  Impulsiveness as a concept is often bundled with these behaviors which, although nobody really knows why, are widely believed to be “creepy.”  That’s just not the case.

Now, recklessness is a whole ’nother can of worms.  Doing something that you know to be crazy, or doing something because it’s crazy, has a completely different backing behind it.  Most reckless people act that way because the cost of the reckless action is balanced or outweighed by the enjoyment or rush they get from it.  This is the same mechanism that makes skydiving fun, even though skydiving is actually reasonably safe.  If it carried a significant chance of dying, you wouldn’t be able to sell it to people as a recreational activity without some serious social pressure backing it up.  Ziplining is another example- deaths are vanishingly rare, yet we perceive it to be dangerous and enjoy a rush from it.  There is, however, a time when outright reckless behavior can be a rational course of action.  These circumstances usually fall into two categories: 1) you’re trying to make other people/agents believe you’re reckless, or 2) direct and/or thought-out strategies can be expected or countered easily, or are otherwise rendered ineffective.

Category 1 is the more common of the two and can potentially occur in any game or strategic situation.  Essentially your strategy is to do something stupid in the hope that your enemy will misjudge your tactics or your capabilities, enabling you to take greater advantage later on, or in the long run.  In poker, it is sometimes a good thing to get caught bluffing.  That way, next time you have a monster hand your opponent might believe you’re actually bluffing.  If you’ve never been caught bluffing before, they would be much more likely to believe you actually have a hand and fold.  Obviously, if you get caught bluffing enough times that it seriously impacts your pile of chips, you’re just bad at poker, but a single tactical loss can be later utilized to strategic advantage.
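To put toy numbers on that (a hypothetical $100 pot and $100 bet; the call probabilities are invented for illustration), here is a sketch of the expected value of betting a monster hand under two table images:

```python
# How one caught bluff can pay for itself later: a looser image means
# opponents call your value bets more often.  All figures are made up.
POT, BET = 100, 100

def ev_value_bet(call_prob):
    # If called, we win pot + bet; if they fold, we win just the pot.
    return call_prob * (POT + BET) + (1 - call_prob) * POT

tight_image = ev_value_bet(0.30)  # never caught bluffing: they usually fold
loose_image = ev_value_bet(0.70)  # caught once before: they pay you off
print(f"EV with tight image: ${tight_image:.0f}, with loose image: ${loose_image:.0f}")
print(f"extra EV per monster hand: ${loose_image - tight_image:.0f}")
```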

Category 2 is much more interesting.  Let’s take a game like Total Annihilation.  By the way, TA: Spring is totally free and open source, and it’s easily a contender for the greatest strategy game ever made.  Although it’s not fundamentally that complicated, there is no in-game help, so it can be very confusing for new players.  Feel free to log in to the multiplayer server and just ask for a training game- after one or two you should be up to speed and ready to play for real.  Anyway, in Total Annihilation, at least in the more standard-fare mods (there are dozens if not hundreds), there are huge weapons that deal massive damage and can pose a serious threat to the opposition in and of themselves.  Things like nukes, long-range artillery, giant experimental robots (and you can FPS any unit, bwahaha!!), etc.  The construction of one such piece can actually end the game if it stands uncountered or undestroyed for too long.  However, each has a counter, and the counters range in effectiveness.  For example, an anti-nuke protects a fairly large area, but if you throw two nukes at it, it can only handle one.  Shields protect against long-range artillery, but they have a small area and cost a lot to run, and so on.  Now, a calculating player can probably figure out the ideal choice for the opponent in a given situation.  If the opponent is focusing all his forces in one place, he may as well get both shields and anti-nuke, but the other player(s) could then steal the whole map.  If he goes for the whole map himself, the other player would probably get air units to attack his sparsely defended holdings.  If he consolidates in a few carefully chosen locations, nukes might be in order, and so on.

This is where we get to the recklessness-as-tool element.  Potentially the greatest advantage in complex games of strategy is surprise: doing something that the enemy did not expect and must react to, ideally with limited ability to reorganize to counter the new threat.  This is true of real-world military action- communication problems, chaos, and a host of other issues make reacting quickly difficult.  The more resources sunk into the threat, the more resources will be necessary to counter it (assuming the attacker isn’t just stupid).  There would have been no point in the Manhattan Project, for example, if the enemy could put horseshoes on all their doors to render nuclear weapons impotent; it would never have been started.  Now let’s say we have a game of TA where it would be obvious that hitting the enemy with a nuke is the best course of action.  Of course, this same idea will have occurred to the person about to get nuked.  OK, so then big guns are the best strategy.  Except that your opponent can think of that too, because he might guess you’re not going to use nukes precisely because it’s too obvious.  And so on through all the possible options: whatever one player can think of, the other can too.  Whatever strategy you might use to maximize your utility can equally be thought of by the enemy.  We are dealing with a perfectly constrained system.

But what if we de-constrained the system just a little bit?  We remove the rule that says we must maximize value.  Now we could feasibly do anything, up to and including nuking ourselves.  So we need a different rule in its place, because now we’re working with a screwed up and dysfunctional model.  This is where the trick is: you might still have a meta-model of maximizing value in your selection of an alternate strategy, meaning you will be just as predictable, albeit through a much more complex algorithm.  No, you have to truly discard the value-maximizing paradigm in order to get the additional value from surprise, and the trick is not to lose so much that you end up behind even after your surprise factor is added in.
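One way to make that concrete, as a sketch only: inject randomness whose amount you can tune, trading a little expected value for unpredictability.  The option names and payoff numbers below are invented, and softmax-with-a-temperature-knob is just one of several ways to randomize without discarding value entirely:

```python
import math
import random

# Hypothetical surprise-free payoffs for each strategic option.
options = {"nuke": 10.0, "artillery": 8.0, "air raid": 7.0, "shield rush": 4.0}

def pick(temperature):
    # temperature -> 0: always the "best" move (perfectly predictable).
    # temperature -> infinity: uniform random (maximal surprise, ignores value).
    weights = {k: math.exp(v / temperature) for k, v in options.items()}
    total = sum(weights.values())
    r = random.random() * total
    for k, w in weights.items():
        r -= w
        if r <= 0:
            return k
    return k  # numerical safety net

print([pick(2.0) for _ in range(10)])  # mostly strong moves, occasionally odd ones
```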

My problem here is that I’m trying to reduce a complex and multi-dimensional strategic game to a single aspect under consideration.  My other problem is that many of you will never have heard of Total Annihilation.  The same idea applies to more or less any other sufficiently complex game, such as Starcraft, but in most modern games value converts too directly into force for such meta-strategies to be significant.  If you have more troops, or the right kind of troops, you win.  If you’re behind, you’re behind, and there’s not a lot you can do about it other than try harder at what you were doing before.  So while surprise might give you some advantage, it’s probably not worth enough to justify falling behind to get it.  Careful application of force certainly helps, but it’s not as vital as in Supreme Commander or Total Annihilation.  No, I’m not harping on the games in question, and I’m not demanding that you play them; I’m just sharing my particular taste in video games.

Impulsiveness once again.  I seem to be digressing more and more these days.  Basically, what I’m trying to communicate is that in some situations (games, to use the theoretical term) the act of analysis must itself be taken into consideration in your planning.  How much time can you spend analyzing, what should you be analyzing, how is the enemy thinking, and so on.  Once you bring the act of thinking into the purview of strategic considerations, impulsiveness becomes one viable strategy- an option that just does not occur to someone who cannot conceive of thinking as a strategic concern.  Such people implicitly believe that life is a game of perfect information with unlimited time for a given move.  The truth is, you’re acting while you decide what to do, and that act will have an effect on the world and on the results you get.  There are lots of proverbs about hesitation, but they don’t seem to extend to when to think and when to just act.  On the whole, I think most people have an implicit understanding of this type of decision making- it comes pre-packaged with the HBrain OS- but they haven’t really considered exactly what it is they’re doing on a consistent basis.  I’m just here to point it out, so those who haven’t considered it can read about it and be provoked into doing so.

The St. Petersburg Paradox

I’m in more of a mathematical mood right now, so I’m going to cover a piece of abstract mathematics: the St. Petersburg Paradox.  It’s a famous problem- you can look it up on Wikipedia for more information if you like- but here’s a short summary.  Imagine a game of flipping a coin.  Starting at $1, every time the coin lands heads, your winnings double.  When it eventually lands tails, you win however much you have accumulated so far.  How much should it cost to play?
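Here is a minimal simulation of the game (the sample size is arbitrary); it shows the gap between the infinite expectation and what a typical player actually walks away with:

```python
import random

def play():
    # Double a $1 pot for every heads; pay out on the first tails.
    payout = 1
    while random.random() < 0.5:
        payout *= 2
    return payout

# Expected value: sum over k >= 0 of (1/2)^(k+1) * 2^k = 1/2 + 1/2 + ... -> infinite.
# The typical game, though, pays almost nothing: half of all games pay exactly $1.
results = sorted(play() for _ in range(100_000))
print("median payout:", results[len(results) // 2])
print("mean payout:  ", sum(results) / len(results))  # unstable; creeps up with sample size
```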

Now, I very much enjoy this problem in a pure mathematical sense, but Daniel Bernoulli- who popularized it; the problem was originally posed by his cousin Nicolas Bernoulli- apparently took the mathematics of it rather too far.  Bernoulli noticed, as the more astute among you either deduced or already knew, that the game’s expected value is in fact infinite.  This means that no matter the cost to play, you should always accept.  However, most people wouldn’t pay even $50 to play this game.  Bernoulli deduced, on mathematical grounds, a utility function which would explain this behavior using a logarithmic idea of value.  He supposed that people’s valuation of each additional dollar decreases as the amount of money they possess increases- in other words, he proposed a diminishing marginal utility function for money.  While this approach, I guess, works, the even more astute among you will have noticed that it doesn’t actually solve the paradox.  You can just construct a game whose payoffs grow as the inverse of whatever utility function is proposed, and you still end up with an infinite expected utility that nobody will pay for.  Other mathematicians have wrestled with this problem, and so far the conclusion, as far as I am aware, is that utility must be bounded in order to resolve this type of paradox.
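A sketch of both halves of that argument- Bernoulli’s logarithmic fix, and why an inverse-payoff game defeats it (the 200-term cutoff merely stands in for the infinite sum):

```python
import math

# Bernoulli's fix: value log(payout) instead of payout.
# E[log payout] = sum over k >= 0 of (1/2)^(k+1) * log(2^k), which converges to ln 2.
expected_log = sum((0.5 ** (k + 1)) * math.log(2.0 ** k) for k in range(200))
print(f"expected log-utility:  {expected_log:.4f}")            # ~0.6931 = ln 2
print(f"certainty equivalent:  ${math.exp(expected_log):.2f}")  # ~$2: why nobody pays $50

# The counter from above: make the payout 2^(2^k) instead, so log(payout) = 2^k * ln 2
# and the expected *utility* diverges all over again.  Only bounding utility stops this.
```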

Now, I am not a professional mathematician, but I believe that I have solved this paradox.  Simply put, all these mathematicians have been assuming that people share the conception of reality that they are working with: a mathematical one.  These mathematicians have assumed that people think of money as a number.  That seems obvious, right?  Money is measured numerically.  Well, yes, but the fact that different people value money and other commodities differently means that it isn’t a number.  Numbers are inherently objective.  Two people must categorically agree that a 7 is a 7- it always was, is, and will be 7- and that 7 = 7, which also equals 6 + 1 and an infinitude of other identities.  However, we all know that two people might have differing opinions of a given exchange, such as $3 for a mango.  Someone who loves mangoes might buy at that price; someone who doesn’t, won’t.  So we can’t say that $3 = 1 mango in the same way that we can say that 7 = 7, even if every mango in the world were always bought and sold at that price.

The issue here is that these mathematicians, while brilliant direct deductive thinkers, think of the universe in a flatly rational way.  While this is probably the best single perspective through which to view the universe, it fails when dealing with people who lack a similar rational strictness.  Have you ever been beaten by someone at a game you were clearly better at, simply because the other player refused to play “properly”?  This happens all the time in poker and numerous gambling or card games.  In games like chess this rarely happens, because in a game of perfect information “proper” play can be categorically proven superior during the game itself- if it would result in a bad situation, then it isn’t proper play.  Where information is limited, “proper” play might land you in situations you couldn’t predict or prevent.  Anyway, a more textured view of perception would allow for nonlinear and unconventional conceptual modes for perceiving the universe.  For example, perhaps a certain subsection of people conceive of money as power.  The actual number isn’t as relevant as the power it holds to create exchanges.  The numbers are negotiable based on the situation and on the value sets of the parties involved.  So the St. Petersburg Paradox could equally be resolved by saying that power doesn’t scale the way money does.  If you offered someone a utility function of power, it would mean nothing.  Power is not infinitely reducible: the ability to do one thing doesn’t blend seamlessly into the ability to do something else.  The atomic unit of power is much larger than the infinitely fine divisions between any given numbers.  Having ten very small amounts of additional power is also not the same thing as one very large new executive power.

People can link together abstractions and concepts in many, many different ways.  For example, some successful investors say that instead of looking at your money like fruit, look at it like a bag of seed with which to grow more seed.  True, you’re going to have to sell some of those seeds to get what you need, but its purpose is to grow.  As you accumulate more and more, the amount you can draw off increases while still maintaining useful volume.  This gives a completely different outlook on money, and will generate different decision behavior than looking at money as something to be spent as it is earned.  This same principle can apply anywhere at all, because in order for something to exist in your perceptual map, you have to think about it.  You might think of movies as books that have been converted, as picture books, as snatches of real-life experience, as a sequence of scenes tied together like knotted string, or as a strip that runs through its full length in only one direction, the same way every time.  There are other possibilities, of course, but that’s as many as I could think of while typing this post.  This is only a small slice of the possibilities of conceptual remapping (analogues and analogies, specifically)- other forms would require a great deal more explanation.  I think you get the point though.

Back to the mathematicians and the St. Petersburg Paradox.  The paradox only exists if you look at utility in the mathematical sense.  There exist models, such as the one “common sense” seems to indicate, that don’t see a paradox.  These models instead see a game with a sliding scale of value, where beyond a certain point the value is zero (or negligible).  This gradual fading of value explains why different people will agree to play the game at different prices.  I don’t think even the most hardcore mathematician would play the game for $1 million a round, even though in expectation it would eventually pay for itself.  The utility solution fails to take into account the common sense evaluation of time and effort as factors in any given activity.  You could factor in such an evaluation, but you would then probably be missing something else, and so on, until you have built up a complete map of the common sense and shared perceptual map of the most common conceptual space.  But then you have duplicated the entire structure you’re attempting to model, and created a simulation instead of a simplification.
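As a sketch of that sliding scale (the caps below are arbitrary choices, standing in for “winnings beyond this point stop being real to anyone”):

```python
# Fair price for the St. Petersburg game if payouts above some cap count
# only as the cap itself.  Every doubling below the cap contributes $0.50
# to the expectation; everything past it collapses into a small tail.
def willingness_to_pay(cap_dollars):
    ev, k = 0.0, 0
    while 2 ** k <= cap_dollars:
        ev += (0.5 ** (k + 1)) * (2 ** k)   # $0.50 per attainable doubling
        k += 1
    ev += (0.5 ** k) * cap_dollars          # geometric tail, all valued at the cap
    return ev

for cap in (100, 1_000_000, 10 ** 12):
    print(f"cap ${cap:,}: fair price about ${willingness_to_pay(cap):.2f}")
# Even a trillion-dollar cap yields a fair price of roughly $21.
```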

On simulations and conventional models: we currently use both.  Our simulations, however, tend to be based in the real world, and we refer to them as experiments.  This is how we collect evidence.  The problem with the natural universe is that there is such an unimaginable profusion of activity and information that we can’t study any particular aspect in isolation.  An experiment controls all those extraneous factors, removing or minimizing them, so we can focus on a single test in an otherwise confusing universe.  Once we have our results from that test, we can move on to test another part of reality.  Eventually we will have built up a complete picture of what’s going on.  Simulations are data overkill from which we can draw inductive conclusions when we don’t understand all the underlying mechanics.  Models are streamlined flows, as simple and spare as possible, which we can use to draw deductive conclusions.  For example, the equation for the displacement of a falling object [d = v0*t + (1/2)*a*t^2] is a simplified model, subtracting all factors other than the ones being considered, allowing us to deductively conclude the displacement for any values of v0, t, and a.  Mathematical conclusions are a sequence of deductive operations, both to make mathematical proofs and to solve/apply any given instance of an equation/expression/situation/etc.
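To make the contrast concrete, here is a minimal sketch answering the same falling-object question two ways- once with the closed-form model above, once by brute-force stepping through time (the numbers are arbitrary):

```python
# Initial velocity (m/s), acceleration (m/s^2), and elapsed time (s).
V0, A, T = 0.0, 9.8, 3.0

# Model: d = v0*t + (1/2)*a*t^2 -- everything else thrown away.
model_d = V0 * T + 0.5 * A * T ** 2

# Simulation: integrate step by step, as if we didn't know the formula.
dt = 0.0001
d, v = 0.0, V0
for _ in range(int(T / dt)):
    d += v * dt + 0.5 * A * dt ** 2
    v += A * dt

print(f"model: {model_d:.2f} m, simulation: {d:.2f} m")  # both ~44.10 m
```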

Our minds operate on the most basic level using models primarily, and simulations second.  This is because most of the time, a model is close enough.  You don’t need to include every factor in order to get an answer at sufficient precision.  You don’t have to factor in the time, the temperature, or the quantum wobble of each atom in a baseball to figure out where it’s going to land.  If you wanted a perfect answer you could simulate it, but you can get it to an extremely high level of precision by simply ignoring all those marginal factors.  They are not worth computing.  Now we are beginning to factor in the distinction I’ve brought up before between algorithms and heuristics.  Models are often heuristics, and simulations are often algorithms.  Models can include algorithms and simulations can include heuristics, but on the whole a simulation (given correct laws and good starting conditions) will algorithmically compute exactly what is going to happen.  A model, on the other hand, is a much more efficient process that throws away data in order to make calculation simpler.  Usually a lot simpler.

Now I am willing to bet that some readers will be confused.  I just said that simulations need the right laws and starting conditions- isn’t that the same thing as a deductive process needing the right logical framework and initial premises?  Well, yes.  That’s because a logical construct is a simulation.  However, it is a simulation constructed from information already stripped of extraneous detail by modeling it first.  The line between model and simulation is not black and white- they are simply approximate labels for the extremes of a spectrum with conflicting ideals.  The perfect model is one law that determines everything.  The perfect simulation is a colossal data stream that represents everything, down to the spin on the last electron.  This is also where we get the fundamental distinction between philosophers: the conflict of rationalism versus empiricism.  The rationalists believe the model to be the “one true philosophical medium,” and the empiricists believe it’s better to use simulations.  The tricky part is that in order to construct a simulation, you have to have models to run each of its laws and each of its elements; in order to have a model, you have to have a simulation to draw patterns from.  So we have an infinite recursion in which rationalists and empiricists chase one another’s coattails for all eternity.  Fortunately, most people who have thought about this much have come to more or less the same conclusion and figured out that rationalism and empiricism go hand in hand quite nicely.  However, there is still a preference for understanding the world through one mode or the other.

How does all this apply to the original issue of the St. Petersburg Paradox?  We have mathematicians who are definitely rationalists- I imagine there aren’t many professional mathematicians who are empiricists.  These mathematicians construct a model that represents a certain behavioral set.  Their problem, however, is that reality doesn’t actually support the conclusion they are calling the most rational.  So they change the model, as they should, to better reflect reality.  All well and good.  Their deeper problem, though, is that they are doing their job backwards in one concealed respect: implicit in their model is the assumption that the population they are describing has the same conceptual map as the people who created the model.  I am aware that I could have simply said we have some ivory tower mathematicians who are out of touch with reality, but I wanted to cover in depth what the disconnect with reality is.  They are correcting their model to better reflect empirical reality in one respect, but in so doing they are simultaneously doing the reverse: projecting their meta-models onto reality.  We have rationalism and empiricism, simulations and models, inductive and deductive thinking, all chasing their dance partners around.  But the most vital thought is that the process must only go one way.  You must always push forward by correcting both to better fit reality, rather than working backwards and assuming things onto reality which are not the case.  If you do this, and then entrench your position with a rationale, you are screwing up your meta-model of reality.  And, like a monkey with its hand caught in a banana trap, the tighter you squeeze your fist the more surely you stay stuck.  With every ratchet backwards on the progress ladder, you get more firmly stuck in place, and it even gets harder to continue going backwards.  The wheel spins one way; it grinds to a halt in the other.

The Fundamentals of Reason

I realize that I talk about reason and rationality a great deal, but I haven’t done much to explain exactly what I mean by those words.  In fact, through a great part of history it was perfectly acceptable to treat divine inspiration, or the product of a drug-induced hallucination, as a basis for decision-making.  That is clearly not rational by today’s standards.  I want to stay away from the philosophy of science, though, since that sort of discussion will not have meaning for many people.  What I want to get across is that we are all fundamentally rational beings, because rationality is a prerequisite of survival.  If we did fundamentally insane things on a regular basis, our species would be long extinct, making room for those that react to reality instead of a fantasy world.

Everyone, even the craziest of the crazies, is fundamentally rational.  They know how rationality works, even if it has never been formalized for them.  They know how to apply it to make the right decisions and to sort truth from falsehood.  The trouble comes because rationality is so flexible.  As a meta-rational strategy, it may be wise to ignore rationality; it may be proper to do any conceivable action in the right circumstances.  If you live in a society where those who don’t jump up and down and make monkey sounds when the man in the absurdly tall green feathered hat says “mookly!” are killed, then you damn well better jump and make monkey sounds.  If you live in a society where your interests are served by neglecting strict basic rationality in favor of a unified community perspective, even if that perspective is clearly ridiculous, it may be a reasonable choice.

Rationality, for those who have experienced it in formal form, is a very seductive thing because it lets you know things.  Truly know- not just “think” or “suppose” but actually know, and prove, to a specific and known degree of uncertainty and ambiguity.  The first step is to establish that all propositions may be false given certain future evidence.  If we discovered a rock that fell up, that would be a vital piece of information.  It wouldn’t actually prove that gravity is false, though, as the stereotypical example says: clearly there’s some value in the model of gravity, because it’s been right so often in the past.  If it needs to be extended to cover a more general field of circumstances, so much the better.  This is how knowledge is advanced.  Once you acknowledge that you can never be absolutely sure (and I mean in the sense of absolutes) of anything, there is a ceiling on the strength of propositions.  This ceiling is, put succinctly, “To the extent that it is possible to know anything, I know that ______.”  Now, a lot of postmodernists take this to mean that nothing means anything.  Ridiculous!  What it means is that if you observe something, you don’t get to say “that didn’t just happen, because I know X.”  Conversely, if you fail to observe something, you can’t say “I know it is so anyway, because X.”  This one is trickier, because it may actually be valid in certain circumstances: you can put a weaker proposition in the position of being negatively tested against.

OK, this is getting a little confusing.  I shall rephrase.  If a devout Christian fundamentalist who believes that the Bible is literally true, word for word, were presented with a real-life situation which clearly contradicted the Bible, and continued to believe in the Bible, that would be a problem.  The fundy is assuming that the Bible is true in absolute terms- its contents are so true that even reality cannot touch them.  This is, of course, living in a fantasy world.  It is a common example of someone attributing far too much strength to a proposition: more confidence in a specific statement than you can possibly have while still keeping an objective view of the world.  The fundy faced with a contradiction has basically three alternatives.  First, he might conclude that the Bible isn’t literally true and that reality is, well, real.  Second, he can come up with an explanation of some kind for why the contradiction can exist, or for how it isn’t really a contradiction after all.  Third, he can shatter his thinking faculty by believing that contradictions are admissible in reality.  There is a fourth option: ignore the problem.  While countless problems are given this treatment at any given time, the invasive nature of religion invariably fills the victim’s life and worldview until they are forced to take one of the aforementioned options.  Modern religions dislike dabblers- they prefer converts, and their vector mechanics are selected for accordingly.

The second pillar of rationality is deduction: the ability to conclude things.  Now, some would say that premises are more important than deductive ability, and they would probably be right.  It is possible to be a hardcore rationalist operating from very bad premises, arriving at awe-inspiringly terrible conclusions with great certitude.  However, your premises are only subject to rational analysis once you have established the ability to measure them, which requires deductive, abstractive, and meta-analytical faculties.  So I place deduction higher.  Anyway, most people understand how this works.  Socrates is a man.  All men are mortal.  Therefore, Socrates is mortal.  Actually, this is rather a more complex statement than necessary to prove deductive faculty; 7 = 7, and therefore 7 = 7, will do nicely.  The basic laws are available on Wikipedia, but the structure you use isn’t as important as the ability to follow one.  True, single-order binary logics using only true and false statements have very strict and well-understood laws for operating on and maintaining truth values.  But what if you want a system with three states, or n states, or a paradigm specifically designed to deal with ethical choices?  The ability to understand, follow, manipulate, formulate, and eventually innovate in thought forms is important.
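For instance, here is a minimal sketch of a three-state logic- this happens to be Kleene’s strong logic, one of several ways to do it- with “unknown” sitting between true and false:

```python
# True, unknown, and false encoded as 1.0, 0.5, and 0.0.
T, U, F = 1.0, 0.5, 0.0

def k_not(a):
    return 1.0 - a

def k_and(a, b):
    # A conjunction is only as certain as its least certain part.
    return min(a, b)

def k_or(a, b):
    # One known-true disjunct settles a disjunction.
    return max(a, b)

print(k_and(T, U))  # 0.5: true AND unknown is unknown
print(k_or(T, U))   # 1.0: true OR unknown is true
print(k_not(U))     # 0.5: the negation of unknown is still unknown
print(k_and(F, U))  # 0.0: false AND anything is false
```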

Thirdly, your premises.  This is where a lot of people screw up.  If you start from bad premises, there is nothing you can do to arrive at a reasonable conclusion, even if it happens to be factually true.  In fact, it’s worse if you arrive at a correct conclusion through flawed reasoning, because you will then apply that reasoning elsewhere with undeserved confidence.  This is how we get Creationists on TV talking about how the banana is perfectly shaped for the human hand, and therefore there must be a God who designed both the banana and the human hand.  They are operating from some extremely bad premises, but if you admit their premises for the sake of argument, you arrive at a relatively strong hypothetical conclusion.

This type of thing happens a lot in religion.  It’s like a compression algorithm in the religion virus’s DNA that also increases its rate of spreading.  It reduces the amount of information that must be transferred (only the premises, not the whole structure), it makes it easier to bypass the pseudo-rational approximating functions naturally embedded in the brain, and it enables the subversion of those very faculties once the premises are accepted.  Religion has evolved to be an amazingly effective virus for transmission between minds.  That’s what makes discussion about it so fascinating; I’m always finding new things that the religion virus has capitalized upon and been selected for.  You see the same type of thing in a lot of famous books and movies- they appeal to a wide diversity of people and have been selected for among a large population.  The “classics” then provide the seeds upon which new diversity is created.  Of course, in this case the metric by which we measure a species’ utility is entirely subjective and changes with the times, so it’s less of a purist evolutionary system, but it’s still an interesting thought.  It’s also important to point out that each “species” has essentially one organism: the contents of the book.  In olden days this wasn’t so- every bard and performer had their own version, which they performed for specific results.  This is probably why a lot of the very old tales, including fairy tales, have an almost mystical amount of power in them.  They were naturally selected in a much more proper fashion, with more than a single set of genes in the pool.  Modern books are all carbon copies of one another because we’re so precise in our exchange of their information contents.

Anyway, now that I’m thoroughly off topic from the basics of rationality, it’s time to return.  If you start from good premises and use proper rational tools, then you must arrive at a valid solution.  It’s important to note that at one point in time, given a certain set of information, a set of premises may be proper and produce the right results; later, you may encounter a result which contradicts your original construction.  This is OK- it just means that your premises weren’t perfect, they only covered certain cases.  In reality, it’s more or less impossible to create a model that covers all cases without creating a model as complex as reality itself, thus defeating the point of using a model in the first place.  This is the difference between a rational model and a pure simulation.  A pure simulation would duplicate exactly the information content of the subject matter being considered, and is not necessarily a tool or vessel of intelligence.  If it were, we could say the universe as a whole is a vast intelligent being, because it contains so many atoms, all representable as information patterns in constant exchange, that we are contained within and thus may never understand.  The second we “understood” the whole picture, our minds would contain a new piece of information which we hadn’t accounted for, and so on forever in infinite recursion.

Anyway, I started this post off in fairly short and focused form, but now my mind is all over the place.  It’s a pleasant way to be, but it isn’t conducive to great writing in a linear mode like a text stream.  I hope I’ve given you some food for thought to chew on, and of course the basis for the tools to do it with.

The Bailout

Dammit.  I usually find the reserve to just keep on talking about timeless issues, the human condition, and problems that persist and grow.  But this is just too much.  While I’m still going to avoid talking about the election and such, I just have to talk about the bailout.  It can no longer be avoided.

The source of my agitation is this article (http://www.thesmokinggun.com/archive/years/2008/1007083aig1.html) from the Smoking Gun.  Essentially, the executives of AIG have, days after the $85 billion bailout, thrown a massive executive party at a resort in California.  Now, let’s be honest, this article does slant the situation in the obvious direction.  But don’t they just bloody well deserve it?  I mean, seriously.  These people are threatening even my steely impartiality and objectivity, if I do say so myself.  What on earth are these executives thinking?  Was the whole thing a scam, and they simply no longer care because we’re all going to economic hell anyway?

OK, first, let’s be fair.  The invoice tells us that the vast majority of the funds were spent on rooms and food, not on luxury services like spa treatments, exploding cakes, urinating ice statues, and the like.  Although there were sizable figures under those categories, to say that they spent a fortune on them (relative to the $450k price tag) would be improper.  I don’t want to pore through their invoice, because it’s a limited source of information anyway; you can only extract so much from it.  It seems to me that the exorbitant cost of their retreat resulted from their choice of location rather than from any particular absurd excess.  Whether they might simply have gotten a convention hall for an executive summit is debatable, but any argument we might have lacks all basis because we have no context.  It’s highly probable there is a reason that location was chosen- possibly to secure ties to some other company, or maybe something as small as associates of the company being involved with the resort.

All that said, dammit, do you guys care that little about our money?  That $85 billion is supposed to be saving your asses, not letting you party like it’s 1929.  They must have known how this would look, which only strengthens the possibility that there is a good reason we aren’t seeing.  Of course, Occam’s Razor says they’re just partying because they feel like it and don’t give a damn if we despise them for it.

The basic issue at hand here, though, is not corporate irresponsibility.  In a perfect world, CEOs and executives can do whatever they want with their money.  The critical point, however, is that it must be *their* money.  The second they control money they don’t actually own or have legitimate authority to manage, we have a problem.  Of course, that’s exactly what the government has done.  The government extracted that $85 billion from us against our will, with the vapid promise that it was *still ours.*  That’s nonsense, of course, because we have absolutely no control over how it’s spent.  So they give it to big corporations, because those big corporations can incentivize the people in power however they need to in order to get a piece of that pie.  They’ll figure it out if it’s possible.  You can’t build a system that will get the job done properly without leaving it open to willful subversion to the same degree.  If the system relies on intent, then intent will be its weakness.  If it relies on structure and checks, then structure and checks will be its weakness.  If it relies on open violence, then open violence will be its weakness, and so on.  It doesn’t matter by what motive power the government is managed; it can be subverted by an appropriate strategy, because the only way it can’t give money to the wrong hands is if it can’t give money at all.

Anyway, I’m getting off topic.  My point is that those bankers now control a great deal of “our” money, so it pisses us off when they do stuff like this, and rightly so.  We’ve been robbed on the promise that we would get something in return, and later we were deprived of even that (as usual, I might add).  If there were no public money floating around, why should it irritate you if these CEOs throw flagrantly irresponsible parties?  You already got your goods or services rendered.  Just as they don’t care if you burn that plasma TV after you buy it, you shouldn’t care if they burn your money once you’ve paid for the TV.  It’s not yours anymore- you willfully parted with it in exchange for your new TV.

There are so many unfortunate fools who will blame the companies for this type of fiasco.  What bullshit!  Imagine if you were handed $85 billion on a silver platter.  Well, that’s not strictly true.  Imagine that you could spend all of your assets for some probability of pulling down $85 billion from the government’s funds, and it pays off.  You are A) going to party like a wild animal because you’re set, and B) going to keep on using that money to try to get more.  It is obviously a very effective strategy.  Not only that, but there comes a time when they don’t even need to provide the same level of services, because when things are in the shitter, they get free money.  Does this sound like a good plan to fix the economy?  No.  These banks are going to do their damnedest to keep that free money flowing.  The only way they would stop is if they could make significantly more money by actually working- the difference must be large enough that their profits from being honest exceed the free funds they collect while cutting expenses at the same time.  Which is probably never going to happen.

My point is that these executives are being completely reasonable given their environment, even if they’re just being 100% wasteful.  Imagine if you were paid to spend money.  The more money you spent, the more you earned.  In fact, if you were so bad that you were in a constant state of poverty, you would get even more.  I’m not trying to argue that the poor don’t deserve aid, but I am trying to say that this incentive structure is just insane.  It will literally incentivize insanity.  Doing the exact opposite of the preferred behavior is rewarded.  Total madness.  “We want people to be wealthy, so we should give money to people who are poor.”  What the hell?  The banks are actually in this situation.  The worse they are at managing their business, the more money they get from the government, on the grounds that if they fail it will be bad for the economy.  Haha!  It would be hilarious if it wasn’t so sad.

Now, a lot of people think that incentives are a crude way to look at humanity.  I would agree.  However, incentives provide a direct model upon which complications can be built.  It’s like how we discovered algebra before we discovered complex numbers- the discovery of complex numbers doesn’t invalidate the process of algebra.  Deeper structure in the human mind and personality doesn’t change the fact that people judge by the application and comparison of incentives or utility functions.  If the deeper structure would predict that someone would pick a free $5 over a free $10, then your model is broken, no matter how complex or interesting or otherwise intuitive.

In fact, models tend to be constructed to formalize specific inconsistencies in the way the world is filtered.  Fallacies are easily communicated through analogy, models, and other constructions.  For example, it seems perfectly logical that gun ownership would cause crimes involving guns.  Very intuitive, right?  The more guns there are, the more likely people will be to use them, because they will be more available.  Actually, this is not necessarily the case.  I would wager that gun crime goes down whether you decrease or increase the supply of guns; the moderately gun-controlled middle ground is where the most crime involving guns takes place.  There, those who really want guns can still acquire them, and they can be reasonably sure that nobody else will have one.  If you make guns harder to acquire, gun crime goes down a little- though crime overall might remain equally high; it’s just crime with guns that would probably go down.  However, if you increase the supply of guns, the thugs no longer have the same certitude that they will not be met with lethal force.  Even if the probability that the person they’re mugging will be armed and belligerent is small, they only have to repeat the act enough times before the statistics kill them.  Knowing this, a whole slew of crimes- violent crime, muggings, robberies, etc.- would be expected to drop.  This is, of course, a theory only moderately supported by evidence, but it’s a reasonable hypothesis that deserves further testing.
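A back-of-envelope sketch of that “statistics kill them” point (the probabilities are invented for illustration, not drawn from any data):

```python
# If each mugging independently has probability p of meeting an armed,
# willing-to-shoot victim, the number of muggings before that happens
# follows a geometric distribution with mean 1/p.
for p in (0.01, 0.05, 0.10):
    expected = 1 / p             # mean muggings before an armed victim
    survive_50 = (1 - p) ** 50   # chance of 50 muggings with no armed victim
    print(f"p={p:.2f}: expected muggings before an armed victim = {expected:.0f}, "
          f"P(no armed victim in 50 muggings) = {survive_50:.2f}")
```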

Before I end this post, I want to talk about positive obligation.  There is no such thing.  Who cares?  Well, how come it’s possible to take out a loan which your children are then responsible for, even though they never spent a penny of it?  Irresponsible and morally reprehensible for the borrower, sure, but how come it’s even possible?  Simply put, it’s because the people lending the money are the people with the money, and they get to decide how those loans are paid back.  And it is true that if some loans are still paid back after the client dies, capital becomes cheaper and more available.  This somewhat stimulates the economy, at the expense of only a little moral questionability.  This is acceptable to the people lending the money, and they have a legitimate case to make.  However, the day my government can spend money it doesn’t have on the good faith that I’ll pay interest on it?  Not a chance.  I’m not old enough to have a stake in the vast national debt, so I just won’t take one.  If they make me pay for it, I will, but not because I want to.  I’m not going to put myself in a difficult position or enter direct conflict with the government- that’s just foolhardy.  You can’t fight them- they’ve got everything from the army up to and including the nuclear option.  So don’t.  Just do what they say, but keep to your principles.  They can threaten you.  Let them, and accede to their demands.  Very simple.  It’s the mob: you don’t pay, your life gets difficult.

The Fallacy of Composition

The fallacy of composition is an especially effective and insidious mental tic that affects many decisions made in society.  To go over the basic nature of the fallacy quickly: it means ascribing properties to a group as a logical result of the composition of that group.  Described that way, it seems perfectly logical.  However, we arrive at such propositions as “I shall get all the strongest men in my army, and they will form my strongest unit” (an example originally used by Madsen Pirie in How to Win Every Argument).  Now, in a sense this is true- if you are looking to make a military unit that is adept at moving large amounts of freight.  But military units aren’t strong in the same sense that men are strong, and this misapplication of semantic significance leads to the fallacy.  If you want a strong military unit, you need things like discipline, competence, efficiency, morale, the ability to survive in tough conditions, and so on.  If you were to convert the desired properties appropriately- specifying that you want to select men for their ability to work together, keep morale up, survive, or whatever else you’re looking for- and those properties actually carry over from members to the group, then you might be getting somewhere.  If you get 100 men who are each adept at lifting things, the group of 100 will be adept at lifting things, because every member within it is, and that sort of direct action does carry over.  Similarly, if you have 100 people playing ping-pong, it is correct to say the group of 100 are all in the act of playing ping-pong.  Note, however, that I didn’t specify whether they were playing each other or other people, or whether there are only 100 people playing ping-pong (they could be 100 among many more).

This seems like an obvious fallacy, used as above.  How could anyone fail to notice it?  Well, this same logic, or illogic, is used in countless places in modern public discourse.  For example, whenever anyone argues that it is moral for the government to give money to group X, they are probably utilizing it.  Take charity.  There is a soup kitchen that feeds homeless people and needs money, or some other program to help the homeless, the needy, the hungry in foreign countries, etc.  Its advocates say, or otherwise imply, something along the lines of “it’s a kind act to give your time or money to help other people, therefore it’s moral for us to help them.”  Consider the actual significance of the statement: because it’s moral for an individual to give money to charity, it’s moral for society.  Now, while that might (arguably) be a proper application of individual-group semantic conversion, consider that “society” as a semantic identity is not a decision-maker.  “Society” cannot actually do anything, because it is just a vaguely specified conglomerate of individuals.  Things can happen to a society, in the same way that I as an agent can drop a ball or eat a sandwich- but the sandwich cannot act in such a way as to determine whether or not I eat it.  In order for “society” to do anything, there must be some agent controlling the group, implying the existence of a government or controlling body.  So what you’re really asking is whether it’s moral for the government to give money to charity.

This is a sticky issue for many people, but consider where the government gets its money.  Taxes are involuntary.  If taxes were optional, almost nobody would pay them.  If you presented people with the option of A) taxes with complete government services, or B) no taxes and no government services, a great many would choose to live independently.  This is unacceptable to governments because it puts actual competitive pressure on them.  They would have to offer value to get people to stay; they would have to somehow convince recalcitrant customers that their product will help them.  Every company would really rather have a guaranteed income backed by threats of prosecution.  Now, if people gave their money voluntarily, knowing the mechanism through which it would be filtered before eventually being spent, I would have absolutely no issue with that money being spent on anything at all.  I could still take issue with particular things done with it, of course- if they used that money to buy tanks and attack people, we would have a big problem- but I have no issue with the basic operation of such an entity.  However, government taxes are basically bald-faced theft.  Worse, they’ll try to convince you they’re doing it for your own good.  If it really were for my own good, you wouldn’t have any issue with me choosing or not choosing your service.  If it’s really going to help me, I would choose it anyway, wouldn’t I?  Even the Mafia at least has the decency to be honest with you: they want your money, and they’ll beat you up if you don’t give it to them.  The greatest subtlety of the mob is calling it “protection money.”  The government actually believes it is protection money- it’s called National Security and Homeland Defense.  I’ll be totally honest with you: I really don’t see any significant threat that isn’t actually created by the government itself.

While it is true that there are terrorists, who may or may not hate America, it is certainly true that they are ascribing specific characteristics to Americans that are based on actions taken by the US government.  By the same token, many Americans are ascribing characteristics to Muslims or Middle Easterners based on the actions of a few extremists.  While some would call this simple generalization and stop there, I think it’s more detailed than that.  The thought process is a back-and-forth interplay between the individuals in the group and the conception of the group itself, a sort of repeat fallacy of composition, over and over again, getting worse and worse each time on both sides.  Like telephone played with abstract sketches in a kindergarten art class.

I’m getting a little off topic, having just found another instance of the fallacy of composition somewhere in the tangent sea.  Anyway, the government is not subject to the same type of moral analysis as an individual.  Neither are corporations.  They are groups, not individuals.  Moral laws as applied to individuals will apply to each individual in a group; moral laws for groups will apply to the entire group.  So the government, like any group, would be virtuous in giving to charity if the money belonged to it in the first place.  Using charity as a justification for theft is just ridiculous.  However, that’s exactly what the “generous” politicians are asking you to accept.  Let’s say they convince some people that it’s a good thing for people, and therefore for the government, to give to charity.  Fine.  Then why aren’t the politician, and the individuals so convinced, going and donating their own money to charity, instead of voting to force others to do so against their will?  And why isn’t the politician simply asking people to donate to a particular charity, as opposed to asking for taxes to be spent in that fashion?