Macroscopic Decoherence

Macroscopic decoherence is a fancy name for the “many worlds” theory in physics, a resolution to the dilemma presented by quantum physics that, to some, makes a lot of sense.  Before I discuss what it is and what it means if it is true, I’ll first go over the more commonly accepted modern viewpoint- specifically, its aspect labelled the Copenhagen interpretation.  OK, here’s the dilemma.  Heisenberg’s Uncertainty Principle, a verifiable precondition of any theory of quantum physics, states that you cannot simultaneously determine both the position and the momentum of a particle with arbitrary precision.  The usual practical explanation is that, for objects as small as particles, the act of measuring their properties significantly changes those properties.  For macroscopic objects such as a table, the photons bouncing off the table into our eyes don’t change the position or velocity of the table, and therefore we can ascertain both.  However, no yet-discovered tool can probe a particle without changing it in some respect, thus preserving its condition for a second measurement.  Hypothetically, I guess you could measure both properties simultaneously- within the exact same Planck time- but this is utterly impossible with current technology, which is totally incapable of operating on anything close to that time scale with simultaneity, and there may be other limitations I am not aware of.

Now, strictly speaking, this measurement-disturbance story isn’t an accurate model of quantum uncertainty.  Actually, particles behaving like waves exhibit a reciprocal trade-off between linked variables such as, say, position and momentum: the more certain an agent is about one property, the less precisely the correspondingly linked property can be known.  So it’s possible to have a continuum of accuracy about both properties.  This seems like a mad system, but it falls out of the nature of waves.  I think I should stop and leave it at that before I get sidetracked from the main point- I haven’t even gotten to the standard interpretation yet.
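For reference, the standard textbook statement of this trade-off (a well-known result, not my own formulation) is the Heisenberg inequality:

\[ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} \]

Shrink the position spread and the momentum spread must grow to compensate- which is exactly the continuum of accuracy just described.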
This gives modern physicists a dilemma- it would appear that our universe is a fickle beast.  Let’s say that we ascertain a given particle’s position with perfect accuracy- doesn’t that mean that it is categorically impossible for us to make any statements at all about its momentum, due to total uncertainty?  With the caveat that perfect accuracy is impossible, yes.  So what happens to the velocity?  Or, more importantly, what happens to all the other places it could have been if we hadn’t measured it?
The Copenhagen interpretation of quantum physics claims that the other possibilities do not exist at all.  This more closely parallels the way we think about the macroscopic world in practical terms, because even if we don’t know where a table is, we know the table has a given location that is not subject to change unless someone or something moved it.  The act of measuring the position of the table only puts the information about the table’s position into our heads, and does not change any fundamental properties of the table.  So the Copenhagen model concludes that the act of measuring where the particle is collapses its wavefunction into one possible state.  It actually changes the wavefunction by nailing down one of the variables to a certain degree, leaving the other one free to flap around to a corresponding degree.  This collapse model makes particles behave similarly to macroscopic objects in one sense.  However, in order to reach this conclusion, the Copenhagen interpretation has to violate several major precepts of modern science- I won’t go into all of them (it’s a laundry list if you want to look it up), but the universality and objectivity of the universe for one.  The fact that there are observers begins to matter, because it appears that we can change the fundamental nature of reality by observing it.  This raises the question of what exactly constitutes an observation- perhaps one particle bumping into another counts as an “observation”?  But the uncertainty principle still stands, relative to us, for both particles, so on this view there really is something intrinsically different about being an observer.  This is the most serious flaw in an otherwise excellent model- the other being that collapse makes particles on a small scale behave in a fundamentally different way than larger objects- and it is to address these flaws that I add my thoughts to the camp of macroscopic decoherence.

Macroscopic decoherence does not require a theoretically sticky collapse, hence its appeal.  Instead, the theory goes that the other possibilities exist too, in parallel universes.  Each possible position, momentum, etc. exists in an independent parallel universe.  Of course, given the number of permutations for each particle, and the number of particles in the universe, this requires us to postulate the existence of an indescribably large number of infinities of universes.  But if you accept that postulate, it allows a theory that explains particles in the same terms as macroscopic objects; you only have to accept that this same permutation mechanism applies to any and all groupings of particles as well as to individual particles.  So there exists a parallel universe for every possible version of you, every choice you have made, and so on into infinity.  This is something of a whopper to accept in common-sense terms, but it does create a more manageable theory, in theory.  The linchpin is that the act of observing does not mystically destroy the other probabilistic components of a particle’s wavefunction- it only pins down what those properties are relative to the observer in question.
In other words, the act of observing only tells the observer which parallel world they happen to be in.  Each parallel world has only one possible interpretation in physical terms- one position and velocity for every particle.  Unfortunately, there is an endless infinity of future parallel worlds, so you can’t pin down all properties of the universe; if you could, a distinct set of physical laws would necessitate a single universe derived from that one.  The flaw in this theory is that the same approach can be taken to a variety of other phenomena, with silly results.  Basically, there is no reason to postulate the existence of parallel worlds beyond the beauty of the theory.  The same data are consistent with both the Copenhagen interpretation and macroscopic decoherence- that is why both theories exist.  Both produce the same experimental predictions because they’re explaining the same phenomena in the first place.  We can’t go backwards into a parallel universe, and similarly we can’t go back in time and recover information that was destroyed by the act of observation.  It appears to me that, given current understanding, the two theories are unfalsifiable relative to each other.  Overcoming Bias makes a fascinating case as to why decoherence should be testable using the general wavefunction equations, but the problem I see is that the Copenhagen model could, in principle, follow the same rules.  True, this lends serious weight to macroscopic decoherence, because it systematically requires that those equations apply, whereas they would apply only incidentally to the Copenhagen model.  Or some souped-up version of the Copenhagen model could take this into account without serious revisions- it’s difficult to say.  I do disagree with the idea that macroscopic decoherence must be false because postulating the existence of multiple universes violates Occam’s Razor.  This is a misapplication of the razor.  Occam’s Razor doesn’t refer to the number of entities a theory implies, but to the complexity of the theory itself- how much the hypothesis has to specify, and how improbable that specification makes it.  It just so happens that you have two options: either there is some mechanism by which observers collapse a wave into only one possible result, or there exist many possibilities of which we are observing one.  It is not a question of “well, he’s postulating one function of collapse, versus the existence of an endless infinity of universes.  1 vs. infinite infinities infinitely…  Occam’s razor says smaller is better, so collapse is right.”  That is not correct by any stretch.  True, currently there is no way to verify which theory is correct, but a rational scientist should consider them equally probable and work towards whichever theory seems more testable.
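To make the “complexity, not entity-count” point concrete, here is a toy sketch of my own (an illustration, not a proof): a program that generates every branch can be algorithmically simpler than one that must additionally specify which single branch survives a collapse.

```python
# Toy illustration: "all outcomes" can be algorithmically simpler than
# "one particular outcome."  Generating every n-bit branch takes a couple
# of lines; singling out one branch costs n extra bits of specification.
import itertools

n = 4
all_branches = [''.join(bits) for bits in itertools.product("01", repeat=n)]
one_branch = "1011"  # a collapse story must encode this choice somewhere

print(len(all_branches), "branches from the same few lines of code")
print("picking out", one_branch, "costs", n, "additional bits")
```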

Well, let’s consider the ramifications if this theory of macroscopic decoherence happens to be correct.  It means that every possible universe, ever, exists.  Every possible motion of every single particle.  According to quantum physics as we know it now, there exists some possibility that the Statue of Liberty will get up and take a stroll through New York.  It is a…  shall we say… exceedingly small… probability.  I won’t even attempt to calculate it, but I bet it would be 10 to the 10 to the 10 to the 10… so many times that you couldn’t fit all the exponents into a book.  It could easily be improbable enough that you couldn’t write that many exponents on all the paper ever produced on Earth, but I won’t presume I have any goddamn clue.  However, according to macroscopic decoherence, there actually exists a very large number of infinities of universes where this occurs- one for each possible stroll, one for each particle’s individual motion inside the Statue of Liberty for each possible stroll, etc. etc. etc.  And this is for events so unlikely as to be practically impossible- let alone for events as likely as intelligent choices between reasonable alternatives, such as what to order at a restaurant, or what to say every time you open your mouth, and then every minor permutation of each… gah!  Any attempt to describe how many possible universes there are is doomed to fail.  I will leave to your imagination the task of diagramming, on the grand scale, the possible life courses each person might take.
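Just to give a feel for why I won’t attempt the calculation, here’s a toy illustration (rough figures only) of how fast exponent towers outgrow anything writable:

```python
# Rough figures only: counting the DIGITS of a power tower, since the
# numbers themselves could never be written down.
digits_level_1 = 10 + 1        # 10^10 has 11 digits
digits_level_2 = 10**10 + 1    # 10^(10^10) has ~10,000,000,001 digits
print(f"10^10:      {digits_level_1} digits")
print(f"10^(10^10): {digits_level_2:,} digits")
# one floor higher and the digit COUNT itself has ten billion digits
```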

So now we get to the interesting bit- the reason why I am writing this post.  In all of these parallel universes there exists a version of you doing all of these different things.  So the question I have is: are they really you?  Seriously, there are versions of you out there that are exactly, exactly the same in every respect, living exactly the same lives in exactly the same universes, except that a single particle is moving in an infinitesimally different way elsewhere in the universe, in a way that does not and could not possibly affect you.  However, because of this schism of universes, you are separate consciousnesses inhabiting different parallel universes.  Now, these universes are probably not totally discrete- rather, they inhabit a concept-space that, while isotropic, could be conceived of as having contours describing the similarity of the universes, with very similar universes close together and very different universes far apart, in a space with an infinite infinity of dimensions.  As a result, even with respect to these parallel universes, these versions of you will be infinitely close to you and could be said to inhabit the exact same space, with some versions splitting off into that space while remaining identical, and other versions experiencing physical changes on the same spot (some of them infinitesimal, and others rather drastic, such as turning into a snake, a werewolf, or anything else you can conceive of).
So which of them is the “real” you?  Or have you figured out that the concept doesn’t have any significant meaning in this context?  If we narrow this infinite schisming down to a single binary split, then both sides can be said to be equally “original” based on the preceding frame.  By the same token, an exact copy of someone in the same universe should be treated as synonymous with the “original.”  Please note, for those who are unfamiliar with this territory- I get this a lot- I am NOT referring to cloning.  A clone is genetically the same, but so utterly disparate from its progenitor that this level of identity is not even approached.  I am referring to two entities so identical that there is no test you could perform to tell them apart.  Obviously, with any time spent in different physical locations in the universe they will diverge after their initial point of creation, but it is that critical instant of creation where the distinction matters.  If the two are synonymous, there is no “original” and no “copy”- indeed, the original is merely existing in two places at once.  If they could somehow be artificially kept identical, by factoring out particle randomness and their environment, then they would continue to act in perfect synchrony until something caused a change, such as a minute aspect of their environment or a tiny change in their bodies’ physical makeup- a nerve firing, or even a single particle moving differently (which probably wouldn’t change much at first, though somewhere down the line it might, due to chaos theory).
So now we get to the difficult bit.  What about alternate encodings of the same information, represented in a different format?  Are the two synonymous?  I argue that they are, but only under certain circumstances: 1) a rigorous and perfectly accurate transcoding method is used to encode one into the other, 2) the timespan of the encoding is fast enough that significant changes in the source material are minimized, if not completely eliminated, and 3) the encoding can, theoretically, be converted back into the original form with zero loss or error.  The first requirement is the only ironclad one- if you make an error in the encoding then the result will not be representative of the original.  The second and third are more complicated, but easy to assume in an ideal case.  The reason for this is that there is a continuum of identity, and a certain degree of change is acceptable, producing results that are “similar enough” to meet identity criteria.  If it’s the “you” from a year ago, it’s still the you from a year ago even if it isn’t identical to you now.  So if the encoding takes a year, it does preserve identity- it just doesn’t preserve identity with changes into the future, which is an utterly impossible task, because even a perfect copy will diverge into the future due to uncontrollable factors.  Third, if there is no method to convert the new encoding back, then it cannot be verified that it is indeed synonymous with the original.  It is possible to produce an identical representation without this clause, but if for some reason it is impossible to convert it back, then you can’t know that the process is indeed a perfect one that preserves material identity absolutely.  This is the test of a given process.  Now, for digital conversion, reconversion back into physical media may be impossible, but simulation in a proven physics simulation that produces the same results is synonymous with re-creation in the physical world.  I am aware that this appears to be a circular argument, depending upon the identity of a simulation to prove the identity of digital simulation as a medium.  However, it is not, because I am referring to a test of a specific conversion method.  In order to create a proven physics simulation, other provable methods might be used to compare the simulation’s results with the physical world.  Once the simulation has been shown to produce the same results as the physical world given the same input, a given instance of simulation can be added and compared with the exact same situation in the physical world, using the simulation as the calibrated meter stick by which to judge the accuracy of the newly simulated person or other digitized entity.
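As a heavily simplified illustration of requirements 1 and 3, here is the shape of the round-trip test I have in mind- applied to byte strings rather than people, obviously, and using base64 purely as a stand-in encoding:

```python
# A toy certification of a transcoding method: encode, decode back, and
# verify the result is bit-identical to the original.  It is the method
# that gets certified by many such round trips, not any single copy.
import base64

def certify(encode, decode, samples):
    return all(decode(encode(s)) == s for s in samples)

samples = [b"some source material", b"", b"\x00\xff" * 100]
print(certify(base64.b64encode, base64.b64decode, samples))  # True: lossless
```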


The Contradiction of Freedom

Freedom appears to be the favored subject among my readers, so here we go into greater detail.  First of all, we need to establish what I mean when I use the word.  By “freedom” I am referring to unencumbrance in the transformation from desire to reality.  This is distinct from the idea of “liberty,” or the fulfillment of all intrinsic rights to the satisfaction of the individual being considered.  I believe the issue of maintaining liberty to be a solved one- the issue of freedom certainly is not.  The fact that there are no slaves, no wanton executions, etc. in the developed world indicates to me that the fulfillment of basic liberty is not even particularly difficult if the conditions are right.  Freedom, on the other hand, is more difficult to work with, because reality itself necessarily impinges on our freedom.  I want to be able to fly around, but gravity says I am not free to do that.  In my common definition of “freedom” I don’t consider such possibilities, on the grounds that they are physically impossible.  It is a childlike idea that we should have absolutely everything we want in a direct transmission from wanting to having.  However, it is not at all a childish idea of freedom that you should be able to make any choice you wish, accepting both the costs and the gains from that choice.  For example, I could choose to invest millions of dollars in inventing a sleek, compact jetpack that would enable me to fly around to my satisfaction- there is a considerable cost to this venture, and no certainty of success (risk is itself a cost), but I am free to try and free to succeed if that’s how the dice fall.

In this line of thinking, a direct transition from desire to actualization should be the default state of reality.  If an item I want has a cost associated with it, then I can pay that cost and have it without qualms.  This is not the situation of “I want, therefore I should have”- I cannot stress this enough.  Too many people are walking around in that sort of entitlement-based fantasy world.  However, if the demand is reasonable and I am prepared to deal with whatever costs, risks, or other consequences arise from my decision, then the only thing standing in my way is a bunch of unnecessary human barriers.  If I want an apple and am prepared to endure the cost, given the circumstances, then I should have one.  Now, the circumstances can cause the cost to vary tremendously.  If there’s a grocery store, then I only have to pony up the dollar or so required to buy an apple.  However, if I’m in the middle of nowhere, then the desire to eat an apple requires a more complex plan involving obtaining an apple seed, growing the tree, and then harvesting the apple and eating it.  It just so happens that this is a great deal of cost and effort for quite a small reward, which is why it is much more efficient to have consolidated apple farms which grow apples efficiently in large numbers and sell them to distributors.  Rather than making the large investment of personal energy needed to acquire a tree’s worth of apples, I only have to pay for a fraction of that effort, due to the scale of apples being produced.  If I’m an apple grower, this system is also to my advantage: if I grow a lot of apples, each apple costs me less to produce, and because I make a profit on every apple (or else I wouldn’t sell them), the more apples I sell the more money I make.
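The arithmetic behind that last point is just fixed costs amortizing over volume- a toy model with invented numbers:

```python
# Toy model: unit cost = fixed costs spread over volume + marginal cost.
# Growing more apples makes each one cheaper, which is the whole case
# for consolidated farms over backyard trees.
def unit_cost(n_apples, fixed=10_000.0, marginal=0.10):
    return fixed / n_apples + marginal

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} apples -> ${unit_cost(n):.2f} each")
# 100 apples -> $100.10 each; 1,000,000 apples -> $0.11 each
```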

This is all fairly typical free-market capitalist thinking so far.  However, the crunch comes when we consider that the government must necessarily introduce barriers to this system in order to do, well, anything at all.  Let’s suppose the existence of a government that has no barrier-producing authority.  Nobody has to take it seriously, because it has no money, since it can’t institute taxes- and even if it did institute taxes, nobody would have to pay them, because it has no power to enforce compliance.  The only type of action such an agency is useful for is advising, and concerned parties can listen and take its advice when it is to their advantage to do so.  If this government started a campaign using volunteers to spread awareness about brushing your teeth, and it worked because brushing demonstrably improves your dental hygiene and health- that’s the sort of thing it’s good for, and all it’s good for.  However, my usual case is that this is all government should be good for, because this isn’t actually a government- it’s a very weak and ineffectual DRO choosing to occupy the nonprofit niche instead of actively pursuing customers.  The idea that government should somehow be fundamentally nonprofit is just laughable.  Most people say that if you have a for-profit government, well, that’s just loosing the dogs for corruption the likes of which has never before been seen.  They actually have a point, but the tricky bit is- that’s my point.  No company has a police force with the authority to arrest you if you don’t comply with that company’s policy.  If one did, it would be in exactly the same position as any typical government, minus the checks and balances that most modern governments have.  However, checks and balances are like band-aids on a gangrenous wound- government just fundamentally will not be ethical, non-corrupt, balanced, fair, what have you, because it has the authority to seize as much money and power as it can grab.  It may have to disguise its efforts, but under the guise of national security or some other necessity it will do what it pleases.

So now we arrive at the contradiction of freedom that political scientists agonize over so much.  People want freedom, but they appear to need a government to secure those freedoms.  At the same time, in the act of securing their freedoms, the government itself must necessarily impinge upon those freedoms.  I understand the difficulty of wrestling with such a dilemma, but you’re wasting your brain cycles.  What you’ve got there is a conundrum of the first order- totally unsolvable with the same type of thinking that created it.

Here is the logical analysis of the argument in question: 1) People want to be free.  2) Freedoms are insecure in a state of nature.  3) Governments secure freedoms.  Conclusion: we should have a government.  The solution is brutally simple: reject the premise that governments somehow improve on a state of nature- that governments act to secure freedoms at all.  Indeed, governments have only ever acted to reduce the freedoms of the individuals beneath them.  Perhaps at times those citizens were under the impression that they were being aided in some fashion; at times perhaps a large majority of them were so deceived.  However, the simple fact of the matter is that if what a government offered were so valuable, then rational individuals would sign up voluntarily.

The proof that individuals can create extremely complex systems that are able to fulfill their needs is evident in government itself.  Government’s methodology is fine, with the single vital exception that participation is mandatory and backed up by force.  In return, however, the government promises not to take everything you have- only a fraction, such as one quarter or one third, which will be put toward projects you have essentially no control over.  Once again, I have no issue with any of these projects in and of themselves.  There may even be circumstances where actions as severe as the war in Iraq become necessary (they definitely were not in this case, but government idiocy is a side effect of the fact that the government retains power no matter what, even if the parties in it change).  Governments should offer services at a fair price, in a manner that their citizens will be prepared to pay for.  One possible strategy is a single subscription model, requiring a third of your income, to which you must subscribe in order to legally inhabit land that the government in question owns.  As a subset of this government’s ownership, it is possible to own land.  We are approaching a “fixed” model of the US government- essentially the same as now, with the critical exception that participation is voluntary.  Granted, the costs involved depend on your circumstances.  If the (rather impractical) subscribe-or-leave policy were instituted, then you would probably stay just to keep what property you have, such as a house.  However, this solution presumes the existence of a government with the power to simply lay claim to your property as desired, which can use that threat to coerce you to subscribe- one final death throe, stabbing its superior and would-be-ethical successor in the gut.

So we arrive at the same contradiction for iteration round two.  In order to create a free society it is necessary for people already living under governments to somehow act as though they were not, at exactly the same moment that the government decides to relieve itself of its coercive power in favor of a voluntary or contractual model.  This is never going to happen.  So, the statist theorizes, in order to make a free society, you have to use coercive force to make them free, yes?  So we need a government to, not secure our freedoms, but force us to participate in our free society.  No.  Absolutely, definitely not.

The whole issue here is the idea of power- the idea that a problem requires power to solve it, or that power is ever a solution worth choosing.  By power I mean the exercise of coercive force.  This is to distinguish it from freedom, which is the ability, or the facility, to accomplish something.  Using the definition from earlier, technology very clearly extends our freedom by enabling new courses of action that were previously physically impossible.  However, actions are morally neutral.  By creating new actions that were previously physically impossible, new crimes and new options for the use of power are created as well.  It’s a cliche, but the invention of the blade gives us both kitchen knives and swords.  The same holds true for everything up to and including F-22s, although it’s hard to see how some of the more elaborate and expensive pieces of military hardware have any use at all beyond blowing stuff up, if that.  I digress here, but I am actually referring to the fundamental technological components in each case.  Technologies such as the avionics systems in advanced fighter jets can be used in civilian planes and other places as well.  Objecting that the F-22 and civilian planes are superficially different just takes advantage of the fact that, unlike primitive tools like kitchen knives and swords, they look and act very differently.  Although, if you looked into it, you would likely find that the design of cookware and the blacksmithing of military edged weapons were, and are, extremely different, even though the fundamental technologies were the same.  Anyway, my point is that an increased availability of facility and options doesn’t actually get you anywhere in terms of the freedom-versus-power conflict- it only allows the scale to tip farther in either direction, irrespective of which way it is currently tipping.

I am aware that framing the discussion as “freedom versus power” seems to present a foregone conclusion, but keep in mind that I am referring to freedom as the ability to do subjective work, whereas power is the ability to have others do subjective work on your behalf.  While it is highly likely that the subjective work you have them do will not serve their own interests, there is no reason why it could not.  I believe the origin of centralized authority lies in the fact that disparate forces united to a common purpose can accomplish far more than they could individually, even though this means subsuming the individual’s judgment to whatever authority is making the decision about what must be done.  So when the scale tips toward freedom, by this logic, it appears that we are being modest in our desires- we can’t accomplish as much in total.  I suspect this is why, in times of distress such as World War II, nations bond together.  States tighten up and hunker down, and the civilians set to work for the greater good, for fear of annihilation through defeat in global war- but the result is still a unified and powerful force.  It appears to me that this outcome is simply a result of economy of scale.

The issue, though, is that people are not cogs in machines, and we don’t necessarily respond well to economy of scale on the human level.  We don’t all want to eat the same food, even though it would be most efficient in the grand scheme of things to consolidate all the vast sprawling food industries into a single entity (if we utterly disregard politicking, management inefficiency, loss of parallelism and competition, and a ridiculous number of other factors) and have everyone eat well-designed vitamin and carbohydrate supplements with tap water.  It would cost virtually nothing, and free up so much human capital, labor, and time for other pursuits.  Unfortunately, as a side effect, everyone would have to live on vitamins and carb pills, which is clearly an undesirable situation.  On the other side, it’s clear that if we consolidate power too much, then human error becomes magnified.  If we consolidate absolute power in one leader, then there will be fluctuations not only in that leader’s mood and ability, but also in the variation between leaders, where one person’s thought and personality can have profoundly different effects than another’s.  We get the good-king/bad-king effect: the good kings working steadfastly for the good of the people, in huge contrast with the bad kings merrily chopping everyone’s heads off, starting wars and economic crises, and putting a pall of fear over the whole country.  So we see a continuum: power creates efficiency in terms of economy of scale, but inefficiency in terms of the magnification of human error.  Freedom, by contrast, limits the absolute utility available to the group in question as a whole, but also limits the effects of human error to the bounds of the party concerned.  If you want to smoke crack until you overdose- feel free.  You’ll probably be dead, but that will be the total extent of the damage you cause.

The issue with this description is that it isn’t entirely accurate.  In the freedom scenario, people still form groups and organizations- they just do so voluntarily.  As a result, people in control of those large groups might still have a significant amount of power to direct and affect a large number of people.  However- and here is the critical difference- every single one of those people is free to leave at any time.  As a result, we get both the benefits of applying centralized power and the benefits of freedom’s damage control.  If the leader is being totally ridiculous and irrational, he will either be replaced by those sensible enough to recognize it, or everyone the crazy bastard has power over will jump ship and do business with someone else.  This creates a huge incentive for leaders to be effective, but also limits the damage if they are not.  It is left to the judgment of each person whom they become involved with, whom they permit to have power over them, and to what degree.

Mandatory participation in which each person has significant involvement and power, such as democracy in small communities, approaches this situation- but unlike mandatory democracy, the voluntary model scales to societies of any size, with the possible exception of small groups in isolation.  In a small isolated group, though, voluntary association is simply assumed, so complex contracts are not worthwhile to make, resulting in stereotypical independent anarchy- the desert island scenario that statists like to employ so much.  That objection fails too, because the same system could be applied, and in fact would be if the situation became dire enough.  The Lord of the Flies scenario is unrealistic for rational beings (though there is some possibility that the circumstances caused them to become irrational), because when a problem arose, a solution, whether systemic or responsive, would be created even if there were only one individual to implement it.  This only fails when the rest of the group is behaving similarly but treating each other as problems to be solved, resulting in never-ending conflict.  Eventually they’ll figure out how to trust one another, or kill one another first, just as barbarians of old did.  However, the idea that appointing a leader prevents this type of worst-case scenario from playing out is shortsighted, because the leader could easily be the cause of it, if he tries to direct people in ways their own reason tells them are bad and they have the independence to resist.  Anyway, this whole paragraph addresses an edge case which is increasingly rare in modern society, and irrelevant at any community, city, state, national, or global scale.

Is There a True, True Self?

I have compared the “true self” to the “false self” before, and while I will still stand behind the claim that the distinction can be made usefully within a certain semantic realm, I’m going to go the other direction in this post because in a different, more general realm, there is no “true self.”  As a matter of fact, if you look at it in the most general, explicit sense, you have no self at all apart from the information that constitutes your decision-making and thinking matrix.  What I’m trying to say is that when someone says that they act a certain way and that’s their “true self” and all other ways of acting are them doing something other than being their true self, they are misleading themselves.  No matter what they do, they cannot escape the fact that the same decision-making matrix, no matter how intricate or complex, caused them to act that way in each of those situations.  Now, if they mean to say that they have a preferred mode of behavior, but are forced to use a different mode of behavior in varying circumstances, well of course.  I have preferred modes of behavior, too, like I prefer to sleep or go out or play video games to doing actual work.  That doesn’t mean that I’m my true self only when I’m in the process of a preferred mode of behavior.  But that’s exactly how a lot of people reason out their reactions to, most commonly, certain other people.

I’m getting into material identity again, but since it is, I suppose, my preferred philosophical specialty, I may as well.  Because there is no single piece of information you can subtract from a person to make them not-that-person, the person as a whole (considered as a contiguous entity) only has meaning as far as perception will take it.  Relative to someone else, it’s their perception.  Relative to the person themselves, it’s their own perception that matters.  Imagine that you woke up and you were a different person!  Now, because of the nature of logic, this sentence has no true parseable non-tautological meaning.  I have included in the sentence that “you” are a different person, meaning you are still you.  So the English way to handle this issue is to change the meaning to “you wake up with a different body, probably one that once belonged to someone else,” or something similar.  No matter how you parse it in English, it isn’t handled in a logically rigorous way- in the same sense that we don’t answer the question “Would you like tea or coffee?” with “Yes.”  While logical, that answer conveys little useful conversational meaning.  Bear in mind, though, that if we spoke a truly logical language, you would answer in a way that did convey conversational meaning, the same way you don’t say “Yes” in English (although the framework for asking the question would probably receive more semantic-structural changes than the affirmative/negative response structure).

But I digress, seriously this time- we nearly had a terminal digression there into the land of logical languages.  Back to the issue of having one identity.  The truth is that we have an assumption here that we haven’t questioned: is it necessary to treat identities in the same way that we treat physical objects?  Once again this is a conceptual piece of English- we like to treat concepts like objects.  We can pick up drawing, have an idea, find an answer, and so on.  I’m not going too far into this as a topic- I would recommend Steven Pinker’s The Stuff of Thought for more on the subject.  Anyway, the assumption that identity is an object has numerous flawed bases.  The first is that there is one “person” per body, and we can count bodies- ergo, there must be one and only one identity per person, because that person has one and exactly one body.  The next flawed idea is that identity is immutable and does not change- that there could ever be a “one true” identity.  This isn’t even true for the lowest-level aspect of identity, the physical body, so how anyone can formalize the idea that identity must be fixed is beyond me, but it does happen.  It should be completely obvious that the body of a child is different from the body of an adult, so assuming any relation beyond material continuity is a flagrant violation of logic.  Now, it is not an error to say that there may exist similarities between these two identities/bodies/people, especially considering how causally connected the latter stage is to the former.  But to say that there is a fixed identity from which changes may be noted as deviations is just plain wrong.  People change a lot, and people change very quickly.  Through the course of a day each of us goes through periods of high and low energy, moods, thought patterns, and who knows what else.

However, there are people who are guilty of the next identity fallacy, which is that somehow those aspects aren’t significant pieces of your identity- that they are passing and trivial and should be ignored, because in the grand scheme of the human identity they are categorically different.  Well, this is wrong, but it’s less obvious to most people because it has some deep religious roots.  The idea that the body is distinct from the soul, and that the soul is much more important than the body can ever hope to be, is an old religious idea with tendrils all over the place.  The idea that something like a state of hunger contributes to your identity in any significant way is perhaps odd.  But look at it this way.  Suppose there were a teleportation machine that destroyed your body and created one exactly like it at a different location (I have used this example before)- and suppose it re-created your body perfectly in every detail, except that it omitted the information needed to compute and recreate your state of hunger (somewhere between total satiety and death by starvation).  Is it a valid teleportation machine?  I’ll tell you what, I wouldn’t step through that bastard for a billion dollars, and not because I might be a starved corpse on the other side- it’s because I have no idea what information went into the complex computation of my own state of hunger/satiety.  Probably all kinds of things, from the contents of my intestinal tract to the levels of certain hormones and neurotransmitters.  If the machine omits all that information, I don’t come out the other side of that teleporter.  Someone else does.

So I am aware that I have a difficult position to defend here.  I’m saying, at the same time, that there is an immense degree of flexibility in what constitutes a person- that you can still be “you” in the sense that counts from the time you’re a child until the day you die- but also that the standard for building a teleporter must be absolutely flawless in order to preserve material identity.  The reason is that I’m making the two comparisons based on different criteria.  I’m a strict materialist: everything can be reduced to an arrangement of matter and energy if a sufficient level of detail and fidelity is used.  However, matter and energy in and of themselves are just rocks and colored lights- they have to be organized into information patterns to be interesting.  So in the case of a standard human life, without teleportation, the information pattern persists in direct fashion through space and time and can be identified perfectly as materially continuous.  However, once you introduce the ability to jump around in space and time, you have to get a little smarter than that in order to maintain material continuity.  To think about material continuity, I’ll call it the Where’s Waldo? effect.  If it’s possible to look into the universe like a giant, four-dimensional Where’s Waldo? book (including all periods of time) and find you, or any given person, then you have material continuity.  When you introduce the ability to jump around in space, then the end of one string and the beginning of another need to match to a sufficient level of detail that the four-dimensionally-conscious being looking into the Where’s Waldo? universe can put the pieces together.  The same is true if you’re jumping through time, of course, but most conceptualizations of time travel assume perfect material transport as a matter of course, so it’s not as interesting to talk about.  Still, if you have a time machine then you have necessarily created a teleportation device, because you could teleport back in time by exactly enough time to travel wherever you’re going, then go there, arriving exactly when you left.  Not a super elegant mode of teleportation, but quite effective in physical and relativistic terms.

In fact, to be even more technically precise, it’s impossible to build an instantaneous teleporter without somehow cheating relativity.  The idea usually floated is to take advantage of quantum entanglement to transfer information anywhere in the universe- though, strictly speaking, entanglement alone cannot carry a signal (quantum teleportation protocols still require a classical, light-speed channel), so a genuine cheat would need something more exotic, like some form of tachyon.  It’s an important idea that material identity is both time- and space-independent, because even if you could transfer the totality of your information instantaneously, I find it unlikely that a new body could be created for you on demand the instant it arrived.  As long as a more or less perfect copy gets made (ideally before you get “re-activated”), it makes no difference if you lost some time in the middle.  The real question is: how perfect does this copy have to be?  That is an extraordinarily difficult question to answer, and I have no idea how you would go about answering it in a mathematical sense.  As long as you have material continuity to fall back on, you have nearly endless flexibility; the second that gets taken away, it really becomes a question of what you believe the limit is- a strange sort of “are you feeling lucky, punk?” kind of attitude.  It’s the same operation either way, because material continuity is just the super-perfect teleport trick performed over impossibly small distances and the smallest possible time lengths (the Planck time, approximately 10^-44 seconds), using the same medium that the stability of the information pattern itself is composed of, so the accuracy is so absolute as to be perfect.  Sure, particles jitter and all sorts of other stuff is going on, but that’s the nature of the pattern you’re made of anyway.  Even the most rapid change you can conceive of is glacial relative to the length of a single Planck time- I mean, come on.

I don’t think that 10^-44 seconds will even fit into the human mind as a workable unit of time.  It means you would need 1 followed by 44 zeroes of them in order to get one single second.  To put that into perspective: if you had that many nanoseconds, the total length would be about 3×10^27 years, enough to contain the entire history of the universe (15 billion years) over 200,000,000,000,000,000 times.  A Planck time is small.  There is no practical way that sufficient change to break material identity could happen on a timescale that small.  So I just say that no matter what, material continuity equals material identity.  It’s not strictly true, but if you’re seriously in doubt then you must be talking about some thought-experiment edge case like “what if we had a particle accelerator that could destroy n brain cells in exactly 1 Planck time- how many would we have to destroy…”  Thought experiments are awesome, and I run them all the time, but as a rule of thumb I think material continuity = material identity works quite well.
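The arithmetic is easy to sanity-check with rough figures:

```python
# Sanity-checking the numbers above with approximate figures.
planck_times_per_second = 1e44            # 1 s is ~10^44 Planck times
seconds = planck_times_per_second * 1e-9  # that many NANOseconds, in seconds
years = seconds / 3.15e7                  # ~seconds per year
universe_ages = years / 1.5e10            # vs. a 15-billion-year history
print(f"{years:.1e} years = {universe_ages:.1e} ages of the universe")
# ~3.2e27 years, ~2.1e17 (about 200,000,000,000,000,000) universe ages
```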

Strategy, Tactics, and Games

First of all, read this post.  Now.  http://www.ribbonfarm.com/2007/09/24/strategy-tactics/  It is pure genius.

After you’ve done that, I have analysis to do.  I’m not going to regurgitate a single shred of the information in the above article because I have too much to say.

First of all, the author, Venkatesh Rao, is absolutely correct- not only did this idea never occur to me, I never even thought to question the fundamental assumptions used in the creation of strategies and tactics.  It adds a level of meta-strategic formulation that is essentially lacking in most decision-making.  More specifically: the idea that tactics are general while strategic thinking is unique to individual situations appears to be generally true, and it’s a much better approximation than the old model in which strategy is simply more all-encompassing than tactics.  But it falls victim to the same kind of thinking the old model did.

What do I mean by this?  Well, strategy by this definition does necessarily include tactics.  Because it’s constructed for an individual circumstance, it must be built up from the different tactical options available to the agent.  However, tactics do not have to be part of a grander or lesser strategy.  A tactic can be described in pure game-theoretical terms without any real-world interaction.  This is accomplished by building a tactic up from axioms, in a way that strategies derived from doctrines aren’t.  A doctrine is an assumption about the world for practical purposes, derived from experience in an inductive fashion- a practical assumption which is most often true, or otherwise useful to assume.  Tactics derived from axioms are arrived at deductively.  For example, in a military situation, we know that we want to destroy as much enemy materiel as possible while incurring as few losses as we can.  This is not a doctrine- this is an axiom.  Similar axioms are assumptions like “guns have range” or “guns are highly lethal to humans.”  If we build up a number of axioms like this, we can arrive at a situation where we have whatever weapons in whatever known situation, and we can compute tactics: have troops use cover, use infantry with anti-armor weapons to engage enemy tanks, use tanks to engage enemy assault infantry, etc.  So maybe we arrive at an effective tactic of creating a formation with the tanks in front and a large number of infantry in a supporting role, to be brought forward when the enemy fields their tanks.  It’s important to note that we can change these parameters however we like and arrive at different tactical results.  For example, if we changed the situation to include the axiom that all infantry are highly effective at killing tanks, then it may not be worthwhile to field tanks at all, because they would be destroyed too easily- and it certainly wouldn’t be a good idea to have them go first if they were all you had.
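As a toy sketch of what “computing a tactic from axioms” might look like (all numbers invented, the scoring model deliberately crude):

```python
# A toy deduction of a tactic from axioms: encode matchup effectiveness
# as parameters, score each formation by expected exchange, pick the best.
# Changing one axiom changes the deduced tactic.

def best_formation(tank_vs_infantry, infantry_vs_tank):
    # axiom: destroy enemy materiel while incurring as few losses as possible
    scores = {
        "tanks lead, infantry support": tank_vs_infantry - infantry_vs_tank,
        "infantry lead, tanks support": infantry_vs_tank - tank_vs_infantry,
    }
    return max(scores, key=scores.get)

print(best_formation(tank_vs_infantry=0.8, infantry_vs_tank=0.2))
# -> tanks lead; now add the axiom "all infantry kill tanks easily":
print(best_formation(tank_vs_infantry=0.8, infantry_vs_tank=0.9))
# -> infantry lead: fielding tanks first is no longer worthwhile
```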

In a strategic sense, we have a different way of looking at our available units.  We could talk about units in the same abstract sense as before and still come up with concepts of strategic interest, but in order to formulate a valid strategy we would really need to know the specifics of what we’re dealing with.  Do we have 122 tanks and 300,000 troops to call upon?  What’s the supply situation, what about morale, training, enemy targets available, etc. etc.  From this we might formulate a diverse array of potential strategies to maximize the effectiveness of the resources available.  However, in order to do that we need to have both good doctrine, or practical assumptions about the nature of the world, and good intel, or exact specifics about the situation at hand.  The difference is fairly easy to handle.  If we know that setting the tempo of the military engagement is critical, that’s a doctrine.  It has direct strategic significance by reducing the infinite field of possible strategies down to a more manageable number of probably useful ones very quickly.  Intel would be “the enemy has 513,889 soldiers located in that city” or “the enemy is going to attack in three days.”  Intel is necessary for making operational decisions, or low-level instance decisions.  I suppose it could be said that operations are simply a lower-level form of strategy, but they’re low enough level that it is practical to consider them fundamentally different.  Strategic thinking is necessary to make them work, as opposed to abstract tactical deduction, but the strategy selected is known and an implementation is all that is required.

Strategic thinking is not, as I and many others once thought, “higher level” than tactical thinking- though I would argue that it requires more experience and more intelligence to think strategically in a given field than to analyze it tactically.  With strategy, you are necessarily dealing with imperfect information and chance.  Chess is a game of pure tactics, with very little true strategy.  I would argue that more complex games like Go actually do include levels of strategic thinking, because you have to address the board at hand and your opponent in a unique fashion.  In chess, by contrast, you don’t care who your opponent is or what the individual situation is: given a sufficiently advanced derivational procedure, you could compute the ideal move in any given position.  The same could be said of Go, of course, but the computational capacity required is so immense that it is utterly impossible with the resources of a human brain.  Chess masters, however, do make this sort of derivational analysis when deciding what to do.  Ah, who cares about individual games.
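For the curious, the kind of derivational procedure I mean is just minimax search- here is a minimal sketch on a toy game tree (the tree itself is made up):

```python
# A minimal minimax sketch: the game tree is a nested list, leaves are
# payoffs to the maximizing player, and the "ideal move" is a deduction.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: a final payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

toy_tree = [[3, 5], [2, [9, 1]]]  # an invented position
print(minimax(toy_tree))  # 3: the best payoff achievable with perfect play
```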

Real-time strategy games tend to contain strategy, with a fairly sparse diversity of individual tactics.  Tactics common to nearly all RTS games include rushing, turtling, spamming, and so on.  Strategically, however, you have to look at the terrain and what units your opponent is fielding and make a decision that will only hold for this specific situation.  One of the main flaws in RTS games, in my book, is that maps tend to play out the same way each time because the terrain has too little effect.  This sounds like I’ve got it backwards, but bear with me.  Two armies meeting in a field with no terrain at all have very few factors on which to base strategic decisions.  Barring some really different logistical or technological factor, the battle will probably play out much the same way every time you run such a simulation.  Now, if you added in a little terrain, just enough to create a few areas of strategic significance, then the nature of the game changes.  Both sides try to hold the same strategic areas, and succeed to the degree of the resources available and the ease with which they can hold a specific area (if it’s closer to them, etc.).  However, these battles will also play out the same way every time, because there aren’t enough options.  If you’ve only got a few points of obvious interest to both sides, then they’ll fight over them every time.  The tactics utilized to obtain them may differ, but the strategic objectives are not up for negotiation.

In order to have a strategically interesting game, there must be a greater number of possible strategic choices than a given side can hope to capitalize on.  What do I mean by this?  If we increase the number of points of strategic significance, up to the point where it is no longer an option to simply take them all, then the game starts to become strategically interesting, in the sense that different players will make different strategic choices on the grand scale.  I have to mention here that it is also important to have multiple dimensions of possible choice.  If you have a wide selection of areas which all give you resources, then the strategy doesn’t actually change- you just have to grab as many of them as possible, the order you take them in becomes the whole of the individual strategy, and that doesn’t make an interesting strategic setting.  Perhaps the best way to create strategic significance is to give the players the ability to create strategic weapons, where the course of the battle changes depending on where they place them.  The issue with this method, though, is that a given setup will lend itself to specific places to put such weapons.  So if you put these choices in the players’ hands, they’ll quickly settle on the best choice and just place their weapons there every time.

What I am trying to bring to light is the principle of strategic consolidation- in game theory terms, convergence on a Nash equilibrium, a combination of strategies from which no player can profitably deviate.  Ideally, in order to create a strategically interesting situation, you would make it so that there are no Nash equilibria in your setup.  However, this is almost an impossible task.  So instead you can set about creating as many of them as possible, in as complex a formulation as possible, so that the game doesn’t play out the same way too often.  I would posit that there must be a way to create a game which, from its fundamental structure, will be strategically interesting every time.
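As a minimal sketch of what consolidation means (payoff numbers invented): check every strategy pair of a toy rush-versus-turtle game for pure-strategy Nash equilibria.  If only one exists, every competent match settles onto it.

```python
# Find pure-strategy Nash equilibria in a tiny two-player game.
# payoffs[(row, col)] = (row player's payoff, col player's payoff)
import itertools

payoffs = {
    ("rush", "rush"): (1, 1), ("rush", "turtle"): (0, 3),
    ("turtle", "rush"): (3, 0), ("turtle", "turtle"): (2, 2),
}
strategies = ["rush", "turtle"]

def is_nash(r, c):
    ur, uc = payoffs[(r, c)]
    no_row_deviation = all(payoffs[(r2, c)][0] <= ur for r2 in strategies)
    no_col_deviation = all(payoffs[(r, c2)][1] <= uc for c2 in strategies)
    return no_row_deviation and no_col_deviation

for r, c in itertools.product(strategies, strategies):
    if is_nash(r, c):
        print("pure equilibrium:", r, c)  # only (turtle, turtle) here
```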

Now how would we go about doing this?  The first point is that we must somehow factor in the right level of extra-structural and intra-structural factors- meaning the map, player choices, and other circumstantial factors must have a variable level of influence, but not so variable that any one of them can ever break the game.  Of course, it would always be possible to create a map which breaks strategic interest, or for a player to play abysmally.  However, we as the hypothetical game designers get to put certain parameters on these things.  For example, maps should be between X and Y size with properties A, B, and C, yada yada yada.  We will only claim a game that is always strategically interesting if our input parameters are followed.  We will also assume that all players are trying to win, although we have to allow for disparate skill levels.  That said, because we’re trying to make a strategic game, if we’re doing our job right then better players will straight up destroy worse players.  This is acceptable, because we can keep the game strategically interesting by ensuring that any given strategy has a flaw the other player might exploit- even if they aren’t skilled enough to find it.

Alright, now we begin in earnest.  Because we want our game to be strategically interesting, we need a large diversity of points of interest, which necessarily entails a map of a certain size.  As a result, we will have to scale our unit balance accordingly.  Ideally, bigger maps would simply be better, but then we run into the issue of time limitations.  Games need to be limited to a certain time frame, or nobody will ever finish them and they won’t be fun.  We could get around this in a number of ways, such as having games run in phases, having a perpetual game, running it in turns, etc.- but all of these would curtail the structure of the game in a significant way.  So instead we’re just not going to worry about time being an issue.  Our theoretical game won’t account for the players having fun in any realm outside of the actual strategy of the game.  For example, we will not concern ourselves with the processing power required to run it, the graphics, the cost of the computer, or the market share of people who might be interested in buying such a game.  So we will have maps that are exceedingly large, with lots of different points of interest such as geographic features, resources, and perhaps even significant locations such as cities.

Regarding our resource model: we want it to be simple enough that the player doesn’t have to break their brain in order to get units to play around with, but we also need it to be extremely important, because the ability to reduce the opponent’s capacity to fight is a fundamental and necessary strategic concern.  As an aside, in order to have a diverse array of points of interest, we might cheat and have a massive variety of resources.  This is effective to a point.  I don’t know what the ideal number would be, but certainly 100 is far too many.  I would be leery of anything upwards of 10 or 20, and for numbers that high it would need to be possible to convert resources conveniently (at a price, possibly substantial).

The other important issue is logistics.  Most modern strategy games ignore logistics because they are something of a pain.  However, I am confident that it is possible to implement a logistics system that the player doesn’t have to worry about, except in the sense that they keenly feel the need to protect their own and to attack the enemy’s.  The player should never have to give orders to manually maximize the efficiency of their logistics systems.  The player is for making strategic and tactical decisions, not daily maintenance.  If they were so inclined, they should be able to change whatever they wanted, but a liberal dose of heavily customizable helper AI would do RTS games a great deal of good.  Similarly, the player should be in a position to decide what gets produced, but should not have to manually queue up individual buildings and units.  A flexible template system complemented with artificial intelligence would be fantastic: the player says “I want a firebase built here,” and the servitor AI summoned sees to it that the location in question gets whatever buildings the player has associated with a firebase.
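A minimal sketch of the template idea (all names hypothetical): the player defines what a “firebase” means once, and a servitor AI expands it into a build queue for any location.

```python
# Hypothetical template system: define a composite base once, then let a
# servitor AI expand it into concrete construction orders anywhere.
from dataclasses import dataclass, field

@dataclass
class Template:
    name: str
    buildings: list  # building types the player associates with this template

@dataclass
class ServitorAI:
    build_queue: list = field(default_factory=list)

    def build(self, template: Template, location: tuple):
        # expand the template into one order per building at the location
        for b in template.buildings:
            self.build_queue.append((b, location))

firebase = Template("firebase", ["bunker", "artillery", "supply depot", "radar"])
ai = ServitorAI()
ai.build(firebase, location=(420, 137))  # "I want a firebase built here."
print(ai.build_queue)
```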

In a similar vein, the player should never be called upon to give orders to individual units.  This is a critical point.  The UI built on top of the basic unit level should be sophisticated enough that the player can quickly and easily pick out whatever units they want, organize them automatically into squads, order squads, companies, battalions, armies, whatever to be built and assembled automatically, and have those units automatically organized for them.  If iTunes can do it with massive libraries of mp3 files, then an RTS game can do it with units.  Complex reports and commands should be routine.  The player should be able to get a complete breakdown of whatever subsection of units they like, according to whatever criteria they like.  For example, I might ask my war machine AI for a complete breakdown of my air force.  It will show me a page saying I have a total of 344,000 planes, then a breakdown by grouping and role, and a further breakdown by type, with individual conditions and orders should I ask.  I should be able to look at a procedurally generated map showing what I have where and what it’s currently doing.

Regarding complex commands, the game should understand more complex elements than “move” and “fire.”  For example, if I want to mount a sustained bombing run on an enemy base, that’s not a complex task.  I just want to get a whole lot of bombers and have them kill everything in this here area, returning to base or aircraft carrier for fuel and ammo when necessary.  The player absolutely should not be required to designate every single target for every single bomber, and then manually order each one to return.  It should still be an option to order specific units to destroy a specific target, but a more abstracted and powerful UI solution would be much better.  For example, I might designate a specific area as an enemy base and label it “southwestern air staging base” or whatever.  Having the game automatically divide the map into sectors would be handy too, and being able to draw symbols and regions on that fabric, then order units around by referring to them, would be fantastic.  I can then designate specific enemy targets within that area, with different values depending on how badly I want those targets destroyed.  I might even define a rule that automatically decides which targets I want destroyed more, such as always favoring factories or artillery pieces or whatever else.  Then when I order a sustained bombing run, my bombers do what I want them to do even when I didn’t specifically order it.  I can go do something else without having to micromanage.  I guess that’s the whole point of this paragraph: the age of micromanagement is over.  Hopefully future RTS games will realize this, and we will look back on the RTS games of today as basically RPG games with more units.
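
The target-valuation rule from the bombing-run example might look like this- a hypothetical sketch, with made-up unit names and values:

```python
# A hypothetical sketch of the "which targets do I value more" rule:
# the player supplies a scoring table once, and the bombers consult it
# instead of waiting for per-target orders. All names are invented.
TARGET_VALUES = {"factory": 10, "artillery": 8, "turret": 3, "wall": 1}

def pick_targets(enemy_units_in_area):
    # Highest-value targets first; unknown unit types default to zero.
    return sorted(enemy_units_in_area,
                  key=lambda u: TARGET_VALUES.get(u, 0),
                  reverse=True)

southwestern_staging_base = ["wall", "factory", "turret", "artillery", "factory"]
print(pick_targets(southwestern_staging_base))
# ['factory', 'factory', 'artillery', 'turret', 'wall']
```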

To go further into what abstraction might do for our strategy game, RTS games need to start having operations.  By operations, I mean a large, coordinated plan with many active elements all working together, which the player could give specific names if they wanted to.  Including specific objectives as conditionals would be fantastic.  For example, if a player defined an objective as “blow this up,” then the AI understands that the statement returns true once the offending enemy is destroyed.  The player could then get a breakdown by operation to see how they’re doing in all their operations at once.  Your operation readout might be:

Operation Firestorm – In Progress
• 5:11 of planned 14:00 elapsed
• 4 of 11 objectives completed
• General force strength 87%
• Notes: massed assault eastward on sectors B65 through B88

Operation Lightning Spear (covert) – In Progress
• Jammers operational
• Cloaking operational
• Believed to be undetected
• 1:30 of planned 7:35 elapsed
• 1 of 5 objectives completed
• General force strength 100%
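
A readout like that implies some machinery underneath it.  Here’s a minimal sketch, with invented names, of how objectives-as-conditionals might be wired up: each objective is just a predicate over the game state, so “blow this up” compiles into a check that the target no longer exists.

```python
from dataclasses import dataclass
from typing import Callable, List

class GameState:  # stand-in for the real game state
    def __init__(self):
        self.live_targets = {"enemy_radar"}

@dataclass
class Objective:
    name: str
    is_complete: Callable[[GameState], bool]  # predicate over game state

@dataclass
class Operation:
    name: str
    planned_seconds: int
    objectives: List[Objective]
    covert: bool = False

    def readout(self, state, elapsed_seconds):
        done = sum(o.is_complete(state) for o in self.objectives)
        return (f"Operation {self.name} - In Progress\n"
                f"  {elapsed_seconds}s of planned {self.planned_seconds}s elapsed\n"
                f"  {done} of {len(self.objectives)} objectives completed")

state = GameState()
firestorm = Operation("Firestorm", 14 * 60, [
    Objective("blow up the radar",
              lambda s: "enemy_radar" not in s.live_targets),
])
print(firestorm.readout(state, 311))   # 0 of 1 objectives completed
state.live_targets.clear()             # the bombers got through
print(firestorm.readout(state, 400))   # 1 of 1 objectives completed
```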

I am aware that none of this seems like it has any bearing on how to make a game that stays strategically interesting.  But it seems to me that the main stumbling block for RTS games today is the user interface.  Current interfaces are just not suited to a really strategy-oriented game; the player has to do too much.  While this increases the twitch factor- not necessarily a bad thing- it detracts from the ability to create large, sweeping, grand strategies.  Using groupings to combine individuals into squads, squads into companies, companies into battalions, and battalions into armies would be a huge improvement.  Building things up atomically allows the computer to easily construct the desired units based on input from the player.  For example, I design a squad of 20 soldiers, give 2 of them machine guns, and give everyone grenades.  I then say: give me a company with 13 of those squads, 3 units of 3 tanks apiece, 1 unit of 3 anti-air vehicles, 2 units of snipers, and 1 command squad unit.  I’ll put 30 of those companies into a battalion, of which I would like you to build one at this base, one at this base way over here, and another at this third base.  Automation is the name of the game, to free the player up for making the decisions that really count.
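
That kind of atomic composition is easy to sketch; here’s a minimal, hypothetical version of the squad-into-company arithmetic:

```python
# A sketch of atomic composition: templates nest, so a company is defined
# in terms of squads, and a battalion in terms of companies. Invented names.
TEMPLATE_DEFS = {
    "squad": {"rifleman": 18, "machine_gunner": 2},  # everyone gets grenades
    "tank_unit": {"tank": 3},
    "aa_unit": {"aa_vehicle": 3},
    "sniper_unit": {"sniper": 2},
    "command_squad": {"officer": 1, "radio_operator": 2},
}

def combine(counts_by_template):
    # Flatten a "give me N of each template" request into raw unit totals.
    totals = {}
    for template, count in counts_by_template.items():
        for unit, n in TEMPLATE_DEFS[template].items():
            totals[unit] = totals.get(unit, 0) + n * count
    return totals

company = combine({"squad": 13, "tank_unit": 3, "aa_unit": 1,
                   "sniper_unit": 2, "command_squad": 1})
print(company)
# {'rifleman': 234, 'machine_gunner': 26, 'tank': 9, 'aa_vehicle': 3,
#  'sniper': 4, 'officer': 1, 'radio_operator': 2}
```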

Impulsiveness

Is impulsiveness a desirable characteristic?  I am a categorical thinker- I like to think about things before I do them.  However, as part of that thought process, it’s important to be able to suspend thought when necessary.  As such, whether or not impulsiveness has a place in the repertoire of the contemporary rationalist is an interesting question.  Firstly, we need to look at where impulsiveness is typically used.  Impulsiveness is often associated with interpersonal exchanges, with social people, and with people who enjoy parties.  It is strongly dissociated from business or financial decisions, with some exceptions such as small purchases and gambling.  So common sense already acknowledges that impulsive action is improper for weighty decisions, but holds that for more trivial matters it helps a great deal.

Before we get into the topic, we need to make some distinctions.  There is impulsiveness and then there is recklessness.  The way I conceive of the terms, impulsiveness is thinking of an action and allowing it to proceed into reality without too much analysis.  Recklessness, on the other hand, implies a full knowledge of the action beforehand, but doing it in spite of your analysis that it is foolhardy.  I will talk about both, but first let’s cover the less complex issue of impulsiveness.  In social situations, impulsiveness is a great aid because you can’t think too much about what you’re going to say.  There are a large number of very smart people who have difficulty in social situations because they don’t realize that their usual strategy for dealing with reality is not universally applicable- it needs to be changed to fit the needs of the moment.  When I was a kid I was like this.  I have since learned to apply rationality pragmatically and completely, and can piece together the solution to such puzzles.  Basically, if you think too much about what you’re going to say, you give an unnatural amount of weight to whatever you do say.  So unless you’re able to spout endless amounts of deep, profound thoughts, you’re invariably going to be putting a lot of weight behind fairly trivial statements, and the inconsistency comes across as awkward.  Impulsiveness decreases the weight of what you’re saying and gives it a sort of throwaway character, which helps you in a number of ways.  Firstly, if a remark doesn’t work out, nobody really notices, and you can keep going with whatever suits you.  Secondly, it puts you in the more dominant position of just saying whatever you feel like saying- you aren’t vetting your thoughts to check whether the rest of the group will approve.  This brings us to the second flaw in the introverted thinker’s social rut: they are visibly applying thought to the situation in order to do better, and it shows to the rest of the group.  This is a complex point that I can’t encapsulate in one post, but basically, any attempt to earn approval guarantees denial of it in direct proportion to the effort spent.  The introverted thinker’s goal is to earn approval, and their model for deciding what to say is, logically, fixed upon achieving that goal.  While their intentions are good, their approach rests on so many incorrect assumptions that they aren’t even capable of recognizing that their whole paradigm is nonfunctional.  They just dive right back in with an “it must work” attitude instead of reworking from first principles.

Impulsiveness is also a pragmatic tool to be used liberally in situations of doubt.  When it is clear that hesitation will cost more than immediate action, you have to go.  When I was younger I had a model of “going for help” which essentially assumed that help was distant- “going for help” would take a long time, with a significant chance that the window for action would close in the meantime.  So my primary course would have been to just go do it myself.  That was an incorrect application of impulsiveness, because it rested on incorrect information.  A proper application of impulsiveness might go like this: you are handed a test with 100 four-answer multiple choice questions, and you have 100 seconds.  There is no way you could conceivably cover even 25% of the questions if you legitimately tried to answer them.  However, if you guess randomly you have a 1 in 4 chance on each question, so over 100 questions you should expect about 25 correct.  This is clearly your best strategy given the rules of the game.  You have concluded that the best strategy is to suspend rational inquiry into each question because it is simply not worthwhile.  You wouldn’t work for an hour to earn a penny, and you wouldn’t think for X seconds per question.
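
The arithmetic is just E = 100 × 1/4 = 25, and a throwaway simulation agrees:

```python
import random

# A throwaway sketch: guess uniformly at random on 100 four-answer
# questions and average the score over many trials.
def average_guessing_score(n_questions=100, n_choices=4, trials=10_000):
    scores = (sum(random.randrange(n_choices) == 0   # slot 0 = right answer
                  for _ in range(n_questions))
              for _ in range(trials))
    return sum(scores) / trials

print(average_guessing_score())  # ~25.0, matching E = 100 * (1/4)
```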

The other fallacy that makes impulsiveness distasteful to many is the idea that the answer actually matters.  With our test example, you don’t actually care what the answer to any given question is; you have all the information needed to create a sufficient strategy.  For social impulsiveness, the simple fact of the matter is that your actions really don’t matter that much- provided you don’t do anything truly inappropriate, at least.  The, and I use this term very reluctantly, “antisocial nerds” ascribe a great deal of value to their interactions and to what each party says.  This is a misunderstanding of the nature of the communication.  The actual content is unimportant.  Nobody cares if you’re talking about the weather, cars, or anything else.  True, this doesn’t make logical sense, and in a perfect world people would communicate usefully instead of feeding their egos with the fact that they’re talking to people.  Most “extroverts” are pleased by the fact that they’re talking to people, and are anxious when seen by themselves.  This mentality is communicated to introverts and affects them quite adversely, because they prefer to be alone for some part of their day and may come to believe that there is something wrong with them.  Don’t buy it, please.  The people who *need* to be around others to validate themselves are the unstable ones.

It’s similar to the way men and women treat sex.  Men are often sexually insensitive, more pleased by the fact that they are having sex than by the sex itself- seeking validation from society instead of their own enjoyment.  Of course, most women can pick this up immediately, and they would prefer not to be some boy’s tool for self-validation.  Women, you aren’t off the hook; you do the same thing, just not with sex.  Instead, you get validation from men paying attention to you while others are watching.  Don’t get me wrong, it goes both ways- some women perceive that they get validation from having lots of sex, and some men get validation from attention from women- they’re just not as common as the other way around.  Impulsiveness as a concept is often bundled with these validation-seeking behaviors, which are widely (if vaguely) regarded as creepy.  That bundling is unfair- impulsiveness itself isn’t the problem.

Now, recklessness is a whole ‘nother can of worms.  Doing something that you know to be crazy, or doing something because it’s crazy, has a completely different backing behind it.  Most reckless people act recklessly because the cost of the reckless action is balanced or outweighed by the enjoyment or rush they get from it.  This is the same mechanism that makes skydiving fun, even though skydiving is actually reasonably safe.  If you had a significant chance of dying, you wouldn’t be able to sell it to people as a recreational activity without some serious social pressure backing it up.  Ziplining is another example- actual zipline deaths are vanishingly rare- but we perceive it to be dangerous and enjoy a rush from it.  There is, however, a time when outright reckless behavior can be a rational course of action.  These circumstances usually fall into two categories: 1) you’re trying to make other people or agents believe you’re reckless, or 2) direct and thought-out strategies can be expected, countered easily, or are otherwise rendered ineffective.

Category 1 is the more common of the two and can potentially occur in any game or strategic situation.  Essentially your strategy is to do something stupid in the hope that your enemy will misjudge your tactics or your capabilities, enabling you to take greater advantage later on, or in the long run.  In poker, it is sometimes a good thing to get caught bluffing.  That way, next time you have a monster hand your opponent might believe you’re actually bluffing.  If you’ve never been caught bluffing before, they would be much more likely to believe you actually have a hand and fold.  Obviously, if you get caught bluffing enough times that it seriously impacts your pile of chips, you’re just bad at poker, but a single tactical loss can be later utilized to strategic advantage.

Category 2 is much more interesting.  Let’s take a game like Total Annihilation.  (By the way, TA: Spring is totally free and open source, and it’s easily a contender for the greatest strategy game ever made.  Although it’s not fundamentally that complicated, there is no in-game help, so it can be very confusing for new players.  Feel free to log in to the multiplayer server and just ask for a training game- after one or two you should be up to speed and ready to play for real.)  Anyway, in Total Annihilation- at least in the more standard-fare mods, of which there are dozens if not hundreds- there are huge weapons that deal death massively and can pose a serious threat in and of themselves to the opposition: nukes, long range artillery, giant experimental robots (and you can FPS any unit, bwahaha!!), and so on.  The construction of one such piece can actually end the game if it stands uncountered or undestroyed for too long.  However, each has a counter, and the counters range in effectiveness.  For example, an antinuke protects a fairly large area, but if you throw two nukes at it, it can only handle one.  Shields protect against long range artillery, but they have a small area and cost a lot to run, and so on.  Now, a calculating player can probably figure out the ideal choice for the opponent in a given situation.  If the opponent is focusing all his stuff in one place, he may as well get both shields and antinuke, but the other player(s) could then steal the whole map.  If he goes for the whole map himself, the other player would probably get air units to attack his sparsely defended holdings.  If he consolidates in a few carefully chosen locations, nukes might be in order, and so on.

This is where we get to the recklessness-as-tool element.  Potentially the greatest advantage in complex games of strategy is surprise: doing something that the enemy did not expect and must react to, ideally with limited ability to reorganize against the new threat.  This is true of real-world military action- communication problems, chaos, and a host of other issues make reacting quickly difficult.  The more resources sunk into the threat, the more resources will be necessary to counter it (assuming the attacker isn’t just stupid).  There would have been no point in the Manhattan Project, for example, if the enemy could put horseshoes on all their doors to render nuclear weapons impotent- it would never have been started.  Now let’s say we have a game of TA where it would be obvious that hitting the enemy with a nuke is the best course of action.  Of course, this same idea will have occurred to the person about to get nuked.  OK, so then big guns are the best strategy.  Except that your opponent can think of that too, because he might guess you’re not going to use nukes precisely because it’s too obvious.  And so on through all the possible options: whatever one can think of, the other can too.  Whatever strategy you might use to maximize your utility can be equally thought of by the enemy.  We are dealing with a perfectly constrained system.

But what if we de-constrained the system just a little bit?  We remove the rule that says we must maximize value.  Now we could feasibly do anything, up to and including nuking ourselves.  So we need a different rule in its place, because now we’re working with a screwed up and dysfunctional model.  This is where the trick is.  If you still have a meta-model of maximizing value in your selection of an alternate strategy, you will be just as predictable, merely through the use of a much more complex algorithm.  No, you have to truly discard the value-maximizing paradigm in order to get the additional value from surprise, and the trick is not to lose so much in the process that you’re still behind once your surprise factor is added in.
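
Game theory formalizes a mild version of this trick as mixed strategies.  Here’s a toy sketch- the payoffs are entirely made up, not TA’s actual numbers- showing that the perfectly predictable attacker gets nothing, while a randomizing one extracts guaranteed value:

```python
# Toy payoffs, entirely invented: rows are the attacker's choice, columns
# the defender's counter. A correct counter zeroes the attack; an
# uncountered nuke is worth more than uncountered artillery.
PAYOFF = {
    ("nuke", "antinuke"): 0,      ("nuke", "shield"): 10,
    ("artillery", "antinuke"): 6, ("artillery", "shield"): 0,
}

def best_counter(attacker_choice):
    # A defender who can predict you always plays the right counter.
    return "antinuke" if attacker_choice == "nuke" else "shield"

# Predictable attacker: always nukes, always gets countered, always gets 0.
print(PAYOFF[("nuke", best_counter("nuke"))])  # 0

# Mixed attacker: nuke with probability p chosen so the expected payoff is
# the same whichever single counter the defender commits to.
p = 6 / 16
for defense in ("antinuke", "shield"):
    ev = (p * PAYOFF[("nuke", defense)]
          + (1 - p) * PAYOFF[("artillery", defense)])
    print(defense, ev)  # 3.75 either way: unpredictability has cash value
```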

My problem here is that I’m trying to reduce a complex and multi-dimensional strategic game to a single aspect under consideration.  My other problem is that many of you will never have heard of Total Annihilation.  The same idea applies to more or less any other sufficiently complex game, such as Starcraft, but in most modern games value is converted too directly into winning for such meta-strategies to be significant.  If you have more troops, or the right kind of troops, you win.  If you’re behind, you’re behind, and there’s not a lot you can do about it other than try harder at what you were doing before.  So while surprise might give you some advantage, it’s probably not worth falling behind to get it.  Careful application of force certainly helps, but it’s not as vital as in Supreme Commander or Total Annihilation.  No, I’m not harping on the games in question, and I’m not demanding that you play them- I’m just sharing my particular taste in video games.

Impulsiveness once again.  I seem to be digressing more and more these days.  Basically, what I’m trying to communicate is that in some situations (games, to use the theoretical term) the act of analysis must itself be taken into consideration in your planning.  How much time can you spend analyzing?  What should you be analyzing?  How is the enemy thinking?  Once you bring the act of thinking into the purview of strategic considerations, impulsiveness becomes one viable strategy- an option that just does not occur to someone who cannot conceive of thinking as a strategic concern.  They implicitly believe that life is a game of perfect information with unlimited time for a given move.  The truth is, deciding what to do is itself an act, and that act has an effect on the world and on the results you get.  There are lots of proverbs about hesitation, but they don’t seem to extend to when to think and when to just act.  On the whole, I think most people have an implicit understanding of this type of decision making- it comes pre-packaged with the HBrain OS- but they haven’t really considered exactly what it is they’re doing on a consistent basis.  I’m just here to point it out so those who haven’t can read about it and be provoked into it.

The St. Petersburg Paradox

I’m in more of a mathematical mood right now, so I’m going to cover a piece of abstract mathematics: the St. Petersburg Paradox.  It’s a famous problem- you can Wikipedia it for more information if you like- but here’s a short summary.  Imagine we have a game of flipping a coin.  The pot starts at $1, and every time the coin lands heads, the pot doubles.  When the coin eventually lands tails, you win whatever is in the pot.  How much should it cost to play?
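
Before the analysis, a throwaway simulation of the game itself, just to see the behavior:

```python
import random

def play_once():
    # The pot starts at $1 and doubles on every heads; tails ends the game.
    pot = 1
    while random.random() < 0.5:  # heads
        pot *= 2
    return pot

# The sample mean never settles down- rare enormous pots keep dragging
# it upward as the number of plays grows.
for n in (10**3, 10**5, 10**7):
    print(n, sum(play_once() for _ in range(n)) / n)
```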

Now, I very much enjoy this problem in a pure mathematical sense, but Daniel Bernoulli (the problem was actually first posed by his cousin Nicolas) took the mathematics of it rather too far.  Bernoulli noticed, as the more astute among you probably either deduced or already knew, that the game’s expected value is infinite.  This means that no matter what the cost to play, you should always accept.  However, most people wouldn’t pay even $50 to play this game.  Bernoulli deduced on mathematical grounds a utility function that would explain this behavior using a logarithmic idea of value.  He supposed that people’s valuation of each additional dollar decreases as the amount of money they possess increases- in other words, he proposed a diminishing marginal utility function for money.  While this approach works, I guess, the even more astute among you will have noticed that it doesn’t actually solve the paradox.  You can construct a new game whose payoffs grow as the inverse of whatever utility function you choose, and you still end up with an infinite expected utility that nobody will pay for.  Other mathematicians have wrestled with this problem, and so far the conclusion, as far as I am aware, is that utility must be bounded in order to resolve this type of paradox.
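
For concreteness, here’s the standard arithmetic, writing k for the flip on which tails first appears (so the payoff is 2^(k-1) dollars):

```latex
\mathbb{E}[X] = \sum_{k=1}^{\infty} \left(\tfrac{1}{2}\right)^{k} 2^{k-1}
             = \sum_{k=1}^{\infty} \tfrac{1}{2} = \infty,
\qquad
\mathbb{E}[\ln X] = \sum_{k=1}^{\infty} \left(\tfrac{1}{2}\right)^{k} \ln 2^{k-1}
                  = \ln 2 \sum_{k=1}^{\infty} \frac{k-1}{2^{k}} = \ln 2.
```

Under log utility the certainty equivalent is e^(ln 2) = $2, which is why Bernoulli’s move seems to work; but swap the payoff 2^(k-1) for e^(2^(k-1)) and the expected log payoff diverges again- that’s the rebuttal above.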

Now, I am not a professional mathematician, but I believe that I have solved this paradox.  Simply put, all these mathematicians have been assuming that people have the same conception of reality that they are working with: a mathematical one.  These mathematicians have assumed that people think of money as a number.  That seems obvious, right?  Money is measured numerically.  Well, yes, but the fact that different people value money and other commodities differently means that it isn’t a number.  Numbers are inherently objective.  Two people must categorically agree that a 7 is a 7- it always was, is, and will be 7, and 7 = 7, which also equals 6 + 1 and an infinitude of other identities.  However, we all know that two people might have differing opinions of a given exchange- $3 for a mango, say.  Someone who loves mangoes might buy at that price; someone who doesn’t, won’t.  So we can’t say that $3 = 1 mango in the same way that we can say that 7 = 7, even if every mango in the world were always bought and sold at that price.

The issue here is that these mathematicians, while brilliant direct deductive thinkers, think of the universe in a flatly rational way.  While this is probably the best single perspective through which to view the universe, it fails when dealing with people who lack a similar rational strictness.  Have you ever been beaten by someone at a game you were clearly better at, simply because the other player refused to play “properly”?  This happens all the time in poker and numerous gambling and card games.  In games like chess it rarely happens, because in a game of perfect information “proper” play can be categorically proven superior during the game itself- if it would result in a bad situation, then it isn’t proper play.  Where information is limited, “proper” play might land you in situations you couldn’t predict or prevent.  Anyway, a more textured view of perception would allow for nonlinear and unconventional conceptual modes of seeing the universe.  For example, perhaps a certain subsection of people conceive of money as power.  The actual number isn’t as relevant as the power it holds to create exchanges; the numbers are negotiable based on the situation and on the value sets of the parties involved.  So the St. Petersburg Paradox could equally be resolved by saying that power doesn’t scale the way money does.  If you offered someone a utility function of power, it would mean nothing.  Power is not infinitely divisible: the ability to do one thing doesn’t blend seamlessly into the ability to do another.  The atomic unit of power is much larger than the infinitely fine divisions between any given numbers, and having ten very small amounts of additional power is not the same thing as one very large new executive power.

People can link together abstractions and concepts in many, many different ways.  For example, some successful investors say that instead of looking at your money like it’s your fruit, look at it like your bag of seed with which to grow more seeds.  True, you’re going to have to sell some of those seeds to get what you need, but its purpose is to grow.  As you accumulate more and more, the amount you can draw off increases while still maintaining useful volume.  This gives a completely different outlook on money, and will generate different decision behavior than looking at money as something to be spent as it is earned.  This same principle can apply anywhere at all, because in order for something to exist in your perceptual map, you have to think about it.  You might think of movies like books that have been converted, like picture books, like snatches of real-life experience, like a sequence of scenes strung together like string being tied together, or like a strip that runs through its full length in only one direction the same way every time.  There are other possibilities of course, but that’s as many as I could think of while I was in the process of typing this post.  This is only looking at a small slice of the possibilities of conceptual remapping (analogues and analogies, specifically) but other forms would require a great deal more explanation.  I think you get the point though.

Back to mathematicians and the St. Petersburg Paradox.  The paradox only exists if you look at utility in the mathematical sense.  There exist models, such as the one that “common sense” seems to indicate, that don’t see a paradox at all.  These models instead see a game with a sliding scale of value, where beyond a certain point the value is zero (or negligible).  This gradual fading of value explains why different people will agree to play the game at different prices.  I don’t think even the most hardcore mathematician would play the game for $1 million a round, even though in expectation it would eventually pay for itself.  The utility solution fails to take into account the common sense evaluation of time and effort as factors in any given activity.  You could factor in such an evaluation, but you would probably then be missing something else, and so on, until you had built up a complete map of the common sense and shared perceptual map of the most common conceptual space.  But then you would have duplicated the entire structure you’re attempting to model, and created a simulation instead of a simplification.

On simulations and conventional models: we currently use both.  Our simulations, however, tend to be based in the real world, and we refer to them as experiments.  This is how we collect evidence.  The problem with the natural universe is that there is such an unimaginable profusion of activity and information that we can’t isolate any particular aspect to study.  An experiment controls all those extraneous factors, removing or minimizing them, so we can focus on a single test.  Once we have our results from that test, we can move on to test another part of reality.  Eventually we will have built up a complete picture of what’s going on.  Simulations are data overkill from which we can draw inductive conclusions when we don’t understand all the underlying mechanics.  Models are streamlined flows, as simple and spare as possible, which we can use to draw deductive conclusions.  For example, the equation for the displacement of a falling object [d = v0*t + (1/2)*a*t^2] is a simplified model, subtracting all factors other than the ones being considered, allowing us to deductively conclude the displacement for any values of v0, t, and a.  Mathematical conclusions are a sequence of deductive operations, both in making mathematical proofs and in solving or applying any given instance of an equation, expression, or situation.

Our minds operate on the most basic level using models primarily, and simulations second.  This is because most of the time, a model is close enough.  You don’t need to include every factor in order to get an answer at sufficient precision.  You don’t have to factor in the time, the temperature, or the quantum wobble of each atom in a baseball to figure out where it’s going to land.  If you wanted a perfect answer you could simulate it, but you can get it to an extremely high level of precision by simply ignoring all those marginal factors.  They are not worth computing.  Now we are beginning to factor in the distinction I’ve brought up before between algorithms and heuristics.  Models are often heuristics, and simulations are often algorithms.  Models can include algorithms and simulations can include heuristics, but on the whole a simulation (given correct laws and good starting conditions) will algorithmically compute exactly what is going to happen.  A model, on the other hand, is a much more efficient process that throws away data in order to make calculation simpler.  Usually a lot simpler.
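
To pin the distinction down with the falling-object example from above, here’s a throwaway sketch contrasting the two modes; the simulation could absorb extra factors like drag that the model deliberately throws away:

```python
# Model: one closed-form law, everything extraneous thrown away.
def displacement_model(v0, a, t):
    return v0 * t + 0.5 * a * t**2

# Simulation: just step the state forward and let the dynamics run.
# Extra factors (drag, wind...) could be bolted on here at will; the
# model above would need re-deriving instead.
def displacement_sim(v0, a, t, dt=1e-4):
    x, v = 0.0, v0
    for _ in range(int(t / dt)):
        x += v * dt
        v += a * dt
    return x

print(displacement_model(3.0, 9.8, 2.0))  # 25.6 exactly, instantly
print(displacement_sim(3.0, 9.8, 2.0))    # ~25.6, after 20,000 steps
```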

Now I am willing to bet that some readers will be confused.  I just said that simulations need the right laws and starting conditions- isn’t that the same thing as a deductive process needing the right logical framework and initial premises?  Well, yes.  That’s because a logical construct is a simulation.  However, it is a simulation constructed from information already stripped of extraneous detail by modeling it.  The line between model and simulation is not black and white- they are approximate labels for the extremes of a spectrum with conflicting ideals.  The perfect model is one law that determines everything.  The perfect simulation is a colossal data stream that represents everything, down to the spin of the last electron.  This is also where we get the fundamental distinction between philosophers: the conflict of rationalism versus empiricism.  The rationalists believe the model to be the “one true philosophical medium,” and the empiricists believe it’s better to use simulations.  The tricky part is that in order to construct a simulation, you have to have models to run each of its laws and each of its elements, and in order to have a model, you have to have a simulation to draw patterns from.  So we have an infinite recursion, with rationalists and empiricists chasing one another’s coattails for all eternity.  Fortunately, most people who have thought about this much have come to more or less the same conclusion: rationalism and empiricism go hand in hand quite nicely.  However, there is still a preference for understanding the world through one mode or the other.

How does all this apply to the original issue of the St. Petersburg Paradox?  We have mathematicians who are definitely rationalists- I imagine there aren’t many professional mathematicians who are empiricists- and these mathematicians construct a model that represents a certain behavioral set.  Their problem, however, is that reality doesn’t actually support the conclusion they are calling the most rational.  So they change the model, as they should, to better reflect reality.  All well and good.  Their problem, though, is that they are doing their job backwards in one concealed respect.  Implicit in their model is the assumption that the population being described shares the conceptual map of the people who created the model.  I am aware that I could have simply said we have some ivory tower mathematicians who are out of touch with reality, but I wanted to cover in depth what the disconnect with reality is.  They are correcting their model by making it better reflect empirical reality in one respect, but in so doing they are simultaneously working in reverse by projecting assumptions from their meta-models onto reality.  We have rationalism and empiricism, simulations and models, inductive and deductive thinking, all chasing their dance partners around.  But the most vital thought is that the process must only go one way.  You must always push forward by correcting the model to better fit reality, rather than working backwards and assuming things onto reality that are not the case.  If you do the latter, and then entrench your position with a rationale, you are screwing up your meta-model of reality.  And, like a monkey with its hand caught in a banana trap, the tighter you squeeze your fist, the more surely you get stuck.  For every ratchet backwards on the progress ladder, you get more firmly stuck in place- and it even gets harder to keep going backwards.  The wheel spins one way; it grinds to a halt in the other.

The Fundamentals of Reason

I realize that I talk about reason and rationality a great deal, but I haven’t done much to explain exactly what I mean by those words.  In fact, through a great part of history it was perfectly acceptable to treat divine inspiration, or the product of a drug-induced hallucination, as a basis for decision-making- though that is clearly not rational by today’s standards.  I want to stay away from the philosophy of science, since that sort of discussion won’t mean much to many people.  What I want to get across is that we are all fundamentally rational beings, because rationality is a prerequisite of survival.  If we did fundamentally insane things on a regular basis, our species would be long extinct, having made room for those that react to reality instead of a fantasy world.

Everyone, even the craziest of the crazies, is fundamentally rational.  They know how rationality works, even if it has never been formalized for them.  They know how to apply it to make the right decisions and to sort truth from falsehood.  The trouble comes because rationality is so flexible.  As a meta-rational strategy, it may be wise to ignore rationality; it may be proper to take any conceivable action in the right circumstances.  If you live in a society where those who don’t jump up and down and make monkey sounds when the man in the absurdly tall green feathered hat says “mookly!” are killed, then you damn well better jump and make monkey sounds.  If you live in a society where your interests are served by neglecting strict basic rationality in favor of a unified community perspective, even if that perspective is clearly ridiculous, going along may be a reasonable choice.

Rationality, for those who have experienced it in formal form, is a very seductive thing, because it lets you know things.  Truly know- not just “think” or “suppose” but actually know, and prove, to a specific and known degree of uncertainty and ambiguity.  The first step is to establish that any proposition may be falsified by future evidence.  If we discovered a rock that fell up, that would be a vital piece of information.  It wouldn’t actually prove that gravity is false, though, as the stereotypical example claims, because there is clearly value in the model of gravity- it has been right so often in the past.  If it needs to be extended to cover a more general field of circumstances, so much the better.  This is how knowledge advances.  Once you acknowledge that you can never be absolutely sure of anything (and I mean in the sense of absolutes), there is a ceiling on the strength of propositions.  This ceiling is, put succinctly: “To the extent that it is possible to know anything, I know that ______.”  Now, a lot of postmodernists take this to mean that nothing means anything.  Ridiculous!  What it means is that if you observe something, you don’t get to say “that didn’t just happen, because I know X.”  Conversely, if you fail to observe something, you can’t say “I know it is so anyway, because X.”  This second case is trickier, because it may actually be valid in certain circumstances, where you can put a weaker proposition in the position of being negatively tested against.

OK, this is getting a little confusing.  I shall rephrase.  If a devout Christian fundamentalist who believes that the Bible is literally true, word for word, were presented with a real-life situation which clearly contradicted the Bible, and continued to believe in the Bible, that’s a problem.  The fundy is assuming that the Bible is true in absolute terms- its contents so true that even reality cannot touch them.  This is, of course, living in a fantasy world.  It is a common example of attributing far too much strength to a proposition: more confidence in a specific statement than you can possibly have while still keeping an objective view of the world.  Faced with a contradiction, the fundy has basically three alternatives.  First, they might conclude that the Bible isn’t literally true and that reality is, well, real.  Second, they can come up with an explanation of some kind for why the contradiction can exist, or for how it isn’t really a contradiction after all.  Third, they can shatter their thinking faculty by accepting that contradictions are admissible in reality.  There is a fourth option: ignore the problem.  While countless problems are given this treatment at any given time, the invasive nature of religion invariably fills the victim’s life and worldview until they are forced to take one of the other options.  Modern religions dislike dabblers- they prefer converts, and their transmission vectors are selected accordingly.

The second pillar of rationality is deduction: the ability to conclude things.  Now, some would say that premises are more important than deductive ability, and they would probably be right.  It is possible to be a hardcore rationalist operating from very bad premises, arriving at awe-inspiringly terrible conclusions with great certitude.  However, your premises are only subject to rational analysis once you have established the ability to measure them- which requires deductive, abstractive, and meta-analytical faculties.  So I place deduction higher.  Anyway, most people understand how this works.  Socrates is a man.  All men are mortal.  Therefore, Socrates is mortal.  Actually, this is a more complex statement than necessary to prove deductive faculty; 7 = 7, and therefore 7 = 7, will do nicely.  The basic laws are available on Wikipedia, but the particular structure you use isn’t as important as the ability to follow one.  True, two-valued logics using only true and false statements have very strict and well-understood laws for operating on and maintaining truth values.  But what if you want a system with three states, or n states, or a paradigm specifically designed to deal with ethical choices?  The ability to understand, follow, manipulate, formulate, and eventually innovate in thought forms is what matters.
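
As a taste of what a three-state system looks like, here’s a minimal sketch of Kleene’s strong three-valued logic (a real, standard system), using None for the third value, “unknown”:

```python
# A minimal sketch of Kleene's strong three-valued logic, with None
# standing in for the third truth value, "unknown."
def k_not(a):
    return None if a is None else not a

def k_and(a, b):
    if a is False or b is False:
        return False            # one definite False settles a conjunction
    if a is None or b is None:
        return None             # otherwise any unknown keeps it unknown
    return True

def k_or(a, b):
    # De Morgan's law still holds, so OR can be defined from AND and NOT.
    return k_not(k_and(k_not(a), k_not(b)))

print(k_and(True, None))  # None- can't conclude anything yet
print(k_or(True, None))   # True- one True suffices, unknown or not
```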

Thirdly, your premises.  This is where a lot of people screw up.  If you start from bad premises, there is nothing you can do to arrive at a reasonable conclusion- even if that conclusion happens to be factually true.  In fact, it’s worse to arrive at a correct conclusion through flawed reasoning, because you will then apply that reasoning elsewhere with undeserved confidence.  This is how we get Creationists on TV talking about how the banana is perfectly shaped for the human hand, therefore there must be a God who designed both the banana and the human hand.  They are operating from some extremely bad premises, but if you admit their premises for the sake of argument, you actually arrive at a relatively strong hypothetical conclusion.

This type of thing happens a lot in religion.  It’s like a compression algorithm in the religion virus’ DNA that also increases its rate of spreading: it reduces the amount of information that must be transferred (only the premises, not the whole structure), it makes it easier to bypass the pseudo-rational approximant functions naturally embedded in the brain, and it enables the subversion of those very faculties once the premises are accepted.  Religion has evolved to be an amazingly effective virus for transmission between minds.  That’s what makes discussion about it so fascinating; I’m always finding new things that the religion virus has capitalized upon and been selected for.  You see the same type of thing in a lot of famous books and movies- something that appeals to a wide diversity of people has been selected for among a large population, and the “classics” then provide the seeds upon which new diversity is created.  Of course, in this case the metric by which we measure a species’ utility is entirely subjective and changes with the times, so it’s less of a purist evolutionary system, but it’s still an interesting thought.  It’s also important to point out that each “species” here has essentially one organism: the contents of the book.  In olden days this wasn’t so- every bard and performer had their own version, which they performed for specific results.  This is probably why a lot of the very old tales, including fairy tales, have an almost mystical amount of power in them: they were naturally selected in a much more proper fashion, with more than a single set of genes in the pool.  Modern books are all carbon copies of one another because we’re so precise in our exchange of their information contents.

Anyway, now that I’m thoroughly off topic from the basis of rationality, it’s time to return.  If you start from good premises, and use proper rational tools, then you must arrive at a valid conclusion.  Note that at one point in time, given a certain set of information, a set of premises may be proper and produce the right results; later, you may encounter a result which contradicts your original construction.  This is OK- it just means that your premises weren’t perfect, they only covered certain cases.  In reality, it’s more or less impossible to create a model that covers all cases without creating a model as complex as reality itself, thus defeating the point of using a model in the first place.  This is the difference between a rational model and a pure simulation.  A pure simulation would duplicate exactly the information content of the subject matter being considered, and is not necessarily a tool or vessel of intelligence.  If it were, we could say the universe as a whole is a vast intelligent being, since its atoms can all be represented as information patterns in constant exchange- patterns we are contained within and thus may never fully understand.  The second we “understood” the whole picture, our minds would contain a new piece of information not yet accounted for, and so on forever in infinite recursion.

Anyway, I started this post off in fairly short and focused form, but now my mind is all over the place.  It’s a pleasant way to be, but it isn’t conducive to great writing in a linear mode like a text stream.  I hope I’ve given you some food for thought to chew on, and of course the basis for the tools to do it with.