The Morality of Socialism

For some reason there has been a great deal of discussion in the news recently about socialism, most often with respect to the Obama health care plan. Before I start ripping into socialism as an idea, I think it’s important to point out that I see virtually nothing in any Obama policy that smacks of socialism.

Socialism as a theoretical framework is quite simple to disprove on moral grounds- by any classical argument in favor of the inalienable right of property. However, the people who advocate for welfare programs tend to disagree on the grounds that property is not an inalienable right. Moreover, they will argue that there are people who need help, who are unable to help themselves, and that the agency to help them may as well be the government, especially since the government is already in the business of taking responsibility for its citizens. So the issue is not as clear-cut as many conservatives claim it is. In fact, I would say pretty much everything is more complex and nuanced than any conservative in the media has the neurons to understand. On the other hand, conservatives tend to be conservative because deductions from moral frameworks make sense to them, where a liberal instead prefers arguments from emotion, relativism, and pragmatism amid chaos. This is not to say that the political strategies they use reflect these paradigms- in fact it tends to be the opposite, where conservatives use smear campaigns, evocative language, and outright lies, and liberals use deliberate logical arguments from effect, which are principally arguments from pragmatism. It is somewhat sad that nobody seems able to reconcile theory with pragmatism- it’s not terribly difficult as long as the theory is sufficiently complete and the points where it is flexible are known.

Anyway, modern “socialism” is really a question of whether liberal democratic welfare programs are morally justified. The conservatives throw hissy fits and cry socialism, and the liberals claim it will address the issues. The conservatives claim the government is going to increase taxes to finance wasteful programs, the liberals claim big business is screwing everyone over and Big Daddy government must step in to save us.

First of all, I would like to point out that both sides of the argument are intrinsically linked, like two sides of a coin. Capitalism allows owners and shareholders to profit from their businesses and holdings, which can through some wrangling be framed as waste. Conversely, the government can take some of the money circulating through commerce and salaries via sales and income taxes, and that too can be framed as waste. There is a finite amount of money in circulation, and claiming that it is a waste when party X acquires it is erroneous. My reason for this is that it is the nature of money to be spent. Government taxes, in large part, recirculate back into the economy because the government pays for services, in very large part from parties in its own country. Similarly, big business takes its money and either reinvests it in itself, pays off its suppliers, or deposits it in its employees’ and executives’ bank accounts. It could be said that overseas commerce and outsourcing “leak” money, but that is absurd. In the act of paying for labor, a service or act of production is purchased in return, which presumably is worth more than the cost of the labor or it wouldn’t be worth making. If this product is then sold, a profit is made, and the worker now has a little cash to spend, which will recirculate. This process in economics is called the multiplier effect, where one dollar actually does a great deal more than one dollar’s work in the course of a year because it changes hands many times. So this issue of “it’s a waste if X acquires money” is really a question over who has control of that money. The one who controls that money has just that measure of extra power. So, which entity would you vest that power in? That is the fundamental question of welfare programs.
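The multiplier effect described above reduces to simple arithmetic. Here is a minimal toy sketch of it- the re-spending fraction and round count are my own illustrative assumptions, not figures from the text- showing how one dollar injected into the economy, with each recipient re-spending a fixed fraction of what they receive, generates a geometric series of total spending that approaches 1/(1 - fraction).

```python
# Toy illustration of the multiplier effect: one injection of money
# changes hands repeatedly, with each recipient re-spending a fixed
# fraction (the "marginal propensity to consume", mpc) of it.

def total_spending(initial: float, mpc: float, rounds: int) -> float:
    """Sum the spending generated as the money changes hands."""
    total = 0.0
    amount = initial
    for _ in range(rounds):
        total += amount   # this round's spending
        amount *= mpc     # the next recipient re-spends a fraction
    return total

# One dollar, 80% re-spent each time it changes hands,
# approaches 1 / (1 - 0.8) = 5 dollars of total spending:
print(round(total_spending(1.0, 0.8, 100), 2))  # 5.0
```

So a dollar taxed or a dollar of profit is not “wasted”- it keeps working- which is exactly why the real question is who steers it.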

Now, as much as it pains the anarcho-capitalist in me to say this, you don’t necessarily want companies as they exist today to handle some concerns. Development of civilization proceeds in many dimensions, not just technological. The invention of the check caused a revolution in the “web of trust” between people and financial institutions. Before that network existed, credit as we know it was inconceivable- it was a recipe for being ripped off, and the economy was locked into a coin-or-barter mode, except between friends. In truth, the process was not as clean as the development of a technology- there were laws against usury, distrust of Jewish moneylenders, and all this nonsense. Anyway, my point is that the social development of a society enables things which could not have happened before, in a similar way that technological development does- it’s just as absolute as “the invention of the airplane- 1903- now we can fly!” It is my belief that government is one of those features that has evolved over time, and whose evolution is not yet finished. At some time in the future we will not need it anymore, but given our current level of societal development and technological capability, it is most likely a necessary evil. This is not to say we should not try to develop past it as quickly as possible.

Karl Marx was unquestionably a brilliant man, although his theories are not exactly the fount of human social development. Nevertheless I think he contributed at least one very important idea to the body of human knowledge: when the power of production drastically outstrips the wants and needs of an entire society, we will have a utopia, materially at least, where everyone has everything they want. The social side is a separate issue, and is in my opinion infinitely more important to creating the sort of utopia that all theoretical political science is predicated upon producing. Now the question is, what is the best method of reaching a stage where we have that sort of productive power at our fingertips? Is it welfare programs, or technological innovation? My favorite new and upcoming technology is rapid prototyping- check out RepRap. This one technology has the power to obviate scarcity of material products at a stroke, by putting a ubiquitous machine that can produce nearly anything in everyone’s hands. More advanced versions will follow quickly, built using that very device, and we may well have a true make-anything machine very soon after that. Now, Marx believed that this world would be communist in nature. I would counter that communism is essentially capitalism where money is no longer relevant in day-to-day life. The best explanation for this is that goods and services change hands so easily that the monetary system is not worth its upkeep.

Those who argue that the poor and destitute need to be helped by government welfare programs are reacting instinctively- their conscience is grating against the injustice. To some extent that’s fine, although it gets a little out of hand when you see righteous indignation that some people are fabulously wealthy while others are poor. In any reasonable world there will be a set of choices which anyone can choose from, some of which will result in poverty. I don’t mean to say that all poverty is controllable- there are many, many unfortunates who had no opportunity to do anything else. The mentally ill, the handicapped, the people saddled with unexpected medical bills- there are all kinds of ways of being poor beyond all control. One stance is that the problem then becomes to differentiate between the deserving and the undeserving. My issue with this position is that any judgment on who is deserving and who is not is made by an agent who lacks a clear and objective metric. So whoever chooses to help one or more of these people is excluding others for subjective reasons. The only way this could possibly work is if it is entirely acceptable for those subjective reasons to be valid, and subjectivity is not something a government should EVER mix itself up in, because then corruption and misuse of public resources will run rampant. So private organizations should pick up the slack, offering resources where they can or choose to, and if they exclude someone for subjective or arbitrary or even completely bone-deep-corrupt reasons, it’s not morally nice but it is entirely within their purview. The government, on the other hand, by reserving the use of force, restricts itself to a much higher moral standard- one that is virtually impossible to meet for beings with human-level intelligence, much less a conglomerate of them.
A corruption of the use of force is a terrible, terrible moral crime, while a refusal to give alms to a beggar, however deserving, is not a big deal. Any policy the government might use to help the poor is subject to a host of issues stemming from this problem. But then, so is everything the government does, so it’s not like this will deter them.

My central point is that pragmatism at the expense of ethics is a bad idea in the long run, no matter how good your intentions. The poor and the underprivileged are much better served by advancing technology and social progress than by any attempt to simply hand them their daily bread. Now, I would be open to an argument that instituting government health care is itself a push towards social progress, but that is a very different type of argument from nearly all the arguments being put forth in its defense, which tend to run something along the lines of “evil insurance companies! government good! Simple solution!”- with the other side pretty much barking the reverse, and decrying that the solution is just as simple. It is not simple, and I hope to hear some real arguments for a change, instead of catering to the reptilian brain of people too stupid to think their way out of a wet cardboard box.

Axiomatic Human Properties

In any philosophy of human nature there are certain parameters of the human condition which are inserted axiomatically. These properties are extremely significant to the formulation of any philosophy governing people, namely ethics and politics, but they usually aren’t addressed in a uniform and clear manner. The following elements are single pieces that might be composed together to create complex ethical theories or political philosophies. Simply rattling off a list of beliefs about human nature being one way or the other, in reactionary mode, is pretty much a waste of time. Connecting them together to create a model that accurately reflects the world, or some piece of it, can be very important to the advancement of human knowledge. Big names in political philosophy like Hobbes, Locke, and Nietzsche built their ideas up from the same basic elements, but they did it in a creative, novel, and useful way that reflects how many people see and interact with the world. I believe that spreading a little understanding about what exactly the building blocks of such thinking are can improve the quality of thinking in the US and around the world.

The first and most commonly addressed one is whether people are fundamentally good or evil. This question has ramifications for every aspect of any philosophy. If people are inherently evil then it is necessary to use some form of philosophical machinery to control, alter, or ameliorate the evil nature of humanity. This is a totally different viewpoint from that of someone who believes people are fundamentally good, who doesn’t need their philosophy to do much to control human behavior. Instead, the entire realm of philosophy, particularly ethics, becomes focused on what individuals decide virtue is; each person can have their own philosophy and you can trust them to be virtuous anyway. Their virtue is a given, and the philosophy is a result instead of the other way around. If human nature is evil, however, then philosophy must come before human virtue can be achieved, and it is necessary to identify the philosophy most conducive to society and then enforce that point of view on everyone. If they can’t be forced to accept it, they must be forced to at least obey it through the application of laws and punishments. Most political philosophers of sufficient import are in the camp of humans being evil, and most of the governments derived from their philosophies depend upon coercive application of laws, police, and courts in order to control their populations. Whether people or philosophy comes first is the ultimate chicken-or-the-egg question, and its primary embodiment is the debate over whether human nature is good or evil.

There is also the question of whether one man is competent or not- whether one man has great powers available to him, or one man is nothing by himself. It is reasonable to have a point of view where human nature is good, but naturally stupid. This is more akin to the Stoic idea, where everyone has virtue as a driving force. Every murderer has a justification for why they saw fit to commit murder (assuming they aren’t innocent), and they really believe their justification. If they were fundamentally evil, they couldn’t care less about virtue. They may still be trying to dress up their actions as virtuous to cynically escape punishment, and then we arrive at a Chinese Room dilemma of having to verify whether someone “really believes” something or is just pretending. In almost all cases, however, they truly believe their rationale, despite the fact that it is highly irrational. Murder and other crimes, viewed in a broader context by a rational being, are all stupid, even discounting the additional punishments inflicted by laws. If you lie for your own benefit, then nobody has an incentive to trust you. In the extreme short term, perhaps you don’t care, but if such a person were actually rational they would realize that a perfect reputation and a rock-solid name can yield far greater dividends for their own success than simply cheating and running. The law is an attempt to make this choice “more obvious” by putting a direct penalty on undesirable actions, making the line of reasoning a little easier for the less rational in the populace.
It is also possible to have a worldview, and this is the particularly sinister Hobbesian or Machiavellian view, that people are both cunning and malevolent. If this is the case, the only recourse is to make people act outside of their nature. Indeed, not only is distrust of everyone to be expected, but there’s no authority to look to for protection who isn’t subject to the same rule- they can’t be trusted, they will seize power and abuse it. Hobbes is the more primitive philosopher, and his answer to the cunning-and-evil dilemma is to put the most cunning and evil of them all in charge, the better to protect the people under the power of the ruler. Obviously he didn’t phrase it like that, but in effect creating a single all-powerful ruler in such an environment will only magnify the problem. Machiavelli addresses the issue more accurately by saying yes, it is the most cunning and evil who will be in charge, and the more cunning and evil he is the better a ruler he will make because cunning and dirty tricks are the best way to get ahead. An extremely pessimistic view, but at least it’s internally consistent. It’s actually very difficult to disprove that argument because it contains within itself its own genesis, but I believe it fails on the grounds that people would shy away from a world like that and attempt to make it a more pleasant place to live in for themselves and others.

Whether people are rational, whether people are social, whether people are natural leaders or natural followers, and so on- there is always a huge debate over which properties we can ascribe as natural to humans, which ones are learned or inculcated, and by whom they are or should be conditioned: the parents, the community, the government, the religion, etc. Different philosophers have proposed different traits as being innate, and I imagine that at some point some thinker has claimed each and every imaginable aspect under the sun must be natural and innate. The oldest claim of this type is that humans are innately social beings, and indeed this is backed up by recent discoveries in biology, anthropology, and genetics. If we are innately social creatures, then we will congregate into groups, and there is no modification you can make to the human condition that will overcome this. You can compensate for it by conditioning behaviors, but the natural tendency will still exist. The idea of human nature is actually a special case of the naturalness argument: it posits that people have a natural ethical decision-making faculty, and it makes a statement about the tendencies of that faculty. The argument that there is no such faculty can be used to construct nihilism, pragmatism, and numerous other theoretical frameworks. The same can be said of any given property that you wish to ascribe as natural to humans.

What properties are innate to a person, and what properties can change through the course of their life? This is similar to, but quite distinct from, the question of whether a person has the capability to change themselves- to what extent such willed self-change is possible, and which properties or aspects can be changed this way. The same question applies to other vectors such as parents, the state, etc. Innateness is distinct from natural emergence in that a property that is innate depends entirely on physical (or other immutable) composition. A naturally emergent property is merely said to exist, with no particular emphasis on how or why it is that way. If a property is innate then it is a product of the human physical (possibly soul or spiritual) existence. If it’s not innate then it is acquired at some point over the course of your life. Note that non-innate properties can still be natural. For example, humans lack the capability to walk at birth, so walking is not truly innate (I apologize for using a philosophically difficult and highly debatable example, but there is no example of something that is obviously not innate yet is natural), but it is natural because it is a naturally emergent behavior. A better example may be language: it could be argued that a natural faculty for language in general exists, though perhaps not an innate one, while the faculty for any particular language such as English is definitely not innate (and it also probably isn’t natural, because saying “humans naturally speak English” is obviously wrong. We can get around this by citing a particular unspecified instantiation, such as “humans naturally speak some language,” but this is rapidly becoming too complicated to use as an example).
An argument for extreme nativism puts total emphasis on innateness: the entire course of your development is preprogrammed into you as a baby, and is fully contained within your existence at any point in time. Extreme nativism is a more or less extinct line of reasoning. The opposite end, what has been called “tabula rasa” or “blank slate,” is the idea that you have zero internal programming at birth- you are totally blank, and you acquire a mind over the course of your life. While this seems a lot more reasonable, purist tabula rasa thinking is also more or less extinct. It’s clear that there is some mixture of the two going on, but exactly how much of each is present is not entirely clear. This debate has been called “Nature vs. Nurture,” a phrasing I dislike because nurturing is itself a natural process- indeed, humans have certain parameters for raising children encoded into our genes (praying mantises have different ones…).

Part and parcel of the human nature debate is what is mutable about human nature and what is immutable, which of course forms a continuum between hard wiring and total flux. A certain trait might be imparted at birth but still be changeable, such as through changes in gene expression. My hair color is different than it was when I was eight (I was blond, now I have brown hair), and this is a property that is usually associated with genes and assumed to be immutable. We usually assume that the Nature side of the debate implies immutability, and the Nurture side likes mutable traits. There is no requirement that this be the case, but nevertheless they tend that way. It makes intuitive sense because, after all, if you were born without a certain trait, it must have been installed at a later time and must therefore be reversible, right? Wrong. Conditioning received as a young child is often highly resistant to change, and mental models touching core beliefs are often very difficult to change as well, even when they are destructive.

The reason why these human properties are axiomatic is that for the most part you can come to any conclusion you like and have it result in an internally consistent model. These are fundamental building blocks from which you can construct any theory you like. While someone may disagree with you on axiomatic grounds, a direct proof of their position will not be sufficient to disprove or otherwise dislodge yours. As it should be: an argument made from such axioms can still be incorrect in its premises or improper in its logic, but merely pushing an alternate position will not diminish an argument made by someone else. There is an immense composite-theory space that can be created just from the extremely few basic axioms I have chosen to mention here, and there are many, many more that can be used reasonably.

On Antisocial Stoics

I would like to address a claim that is sometimes made against stoics, particularly against some of the ideas of Marcus Aurelius, who said, among other things, “Permit nothing to cleave to you that is not your own, and nothing to grow upon you that will give you agony when it is torn away.”  Given the extremely elevated status of friends and interpersonal relationships in our society, this concept doesn’t jibe well with the idea that we all have to form deep bonds with one another.  The idea of being stoic, of keeping your emotions subservient to your mind, seems to conflict with the idea that we’re supposed to share our feelings with others.  Why the belief persists that someone else being aware of the factual state of your existence creates a bond is beyond me, but it is implicitly assumed in our interactions with one another.  The canonical example is when you encounter someone you know and ask them how they’re doing, what’s going on with them, or the like.  Both of you probably know, if you thought about it, that the other person’s answer is irrelevant.  Neither of you could give a damn.  But it’s the greeting you use because it is a sharing of information of a moderately personal nature, or at least a question requesting that information, which implies a certain closeness.  Whether you’re doing it to provoke that sense of intimacy in the other person, in the impressions of people listening in, or to convince yourself, I don’t know.  However, I do know that very little of what is commonly thought of as conversation is an actual sharing of empathic significance or deep thoughts.  What is commonly accepted as “small talk” is the norm of human interaction, and it is accepted as having zero functionality.

Now, I am of course being a little over-literal here.  The purpose of small talk is to fill the time when everyone concerned might be uncomfortable having a real conversation; it allows people to get comfortable with one another.  However, it is not and will never be the goal or endpoint.  It is vital that just “being with” other people is never something you’re setting out to do, because standing next to other humanoid figures and flapping your vocal folds is, in and of itself, not really a worthwhile activity.  If you’re interacting on an empathic, mental, philosophical, or whatever medium in a way that gives you genuine enjoyment- such that you would actively choose that person’s presence over some other activity you enjoy- then of course it’s a good thing: that’s just a basic pursuit of your own satisfaction.  This is obvious and a trivial point, but I think I need to inject it here so I’m not scaring off exactly the people who need to hear this.

The best corollary to this whole mess is our modern conception of sex, especially among men.  Men tend to be in a position of weakness and insecurity, with conflicting internal models and programming and all manner of other nonsense going on in their heads, leaving them a little lost and confused.  One of the dominant themes that results is a pursuit of sex that is driven more by social power than by actual personal satisfaction.  Many men are more gratified by the fact that they are having sex than by the sex itself.  They’ll brag to their buddies about it and allow themselves that extra iota of self-respect because they “got laid.”  The self-destructive side of this thinking is that they honestly believe they aren’t worth anything unless they can convince a woman that they are worthwhile enough to sleep with.  I am unsure of how many women have this problem, but it is widespread among men.  I suspect that because women are dealing with this population of men, they live in sexual abundance and don’t develop the same complex- attractive women at least, if not all women.  I am speculating now, but I find it probable that women have a similar complex revolving around marriage, gratified more by the fact of being married than by the marriage itself, resulting in the “must get married” effect at a certain age.  Many, many people of both sexes are gratified more by the presence of other people than by actually being with them.

The simple fact of the matter is that if you go out seeking deep bonds, what you will find is the most superficial of relations with people as desperate for companionship as yourself.  Deep bonds, described as such, don’t actually exist as we conceive of them.  It’s not that you spend a lot of time with someone, or that you have known them for a long time, or even that you know a great deal about them and their personal preferences, such as their favorite flavor of ice cream.  In fact, I would go so far as to say that knowing a huge amount about their preferential minutiae actually subtracts significantly from the goal that most people are seeking.  If there’s a woman I like, I couldn’t care less what her favorite flavor of ice cream is.  The question is whether or not she is fun to be around.  If I were to feverishly try to get her to like me or memorize her personal preferences, that’s work- stupid, counterproductive, and manipulative work, at that.  That’s all.  Perhaps we have deep empathy, perhaps we’re alike, maybe we have good discussions or great sex- it makes no difference (OK, I lie); the question is only whether she’s a positive presence in some- preferably many- ways.

Part of the problem is the widespread perspective of the “personality.”  And for the love of life, NEVER evaluate someone’s “personality” as ‘good’ or ‘bad.’  Those two words are the most abused semantic identities ever created; they can mean nearly anything while being very specific about one thing and one thing only- and by hiding the implementation of that judgment, there is no way to argue with it.  There is no such thing as a personality- a person is composed of the sum of their mind and the actions derived from it.  You cannot ascribe someone a personality such that, if they do something that is “not like them,” they’re being fake or somehow not being themselves.  Whatever the circumstances, they are merely exhibiting a decision-making pattern you haven’t previously observed or were otherwise unaware of.  It is the same person, ergo they are the same person.  This idea that we can understand someone else, ascribe them a simplified model that will predict their behavior, and then expect that behavior from them is disgusting.  People are very complex- one person is far more complex than the sum of all of their understandings of other people, much less someone else’s understanding of them.  It can’t be your personality that you like coffee, such that you’re doing something bad when you don’t drink coffee.  The drive to be consistent is not a natural one- it’s a societal stamp mark on the inside of your brain that tells you to be simple so that others can understand you better.  But who gives a flying shit whether other people understand you?  Do what you want!  If you wake up and wonder whether eggs scrambled with cocoa and baking soda taste good with ketchup, then go right ahead and try it!  It doesn’t have to be your personality that you eat weird things- it’s just something you want to do, so you do it.  That’s a bit of a weird example, but it holds.
Why we don’t expect one another to simply do what we want is just beyond me, especially in our day and age with so many options available.  There are all manner of stigmas against jocks, nerds, cheerleaders, sluts- you name it, there’s a stereotype that someone wants to slot you into.  So, how about, just to screw with them, you completely break their model of the world by totally not fitting into it.  Just for fun.

So here’s the question.  “Permit nothing to cleave to you that is not your own, and nothing to grow upon you that will give you agony when it is torn away.”  The idea here is that you are your own pursuits, and you permit no external people or objects to control you or your goals.  This is both a warning against addictions of all forms, perhaps especially social ones, and a caveat emptor for everything you allow into your life.  You control your personal sphere- to the best of your ability, at least.  It is your responsibility and nobody else’s to make sure that only elements you want are a part of your life, and it’s your duty to yourself to safeguard the vaults against the thieves that would seek to plunder your wealth.

I have something to say about victimization here.  Blaming the victim for a crime committed against them is the original scam.  It is the classical attempt to cheat and then get away with it, and the more serious the crime, the more potent a tactic it becomes.  The idea that you control your person means that yes, to a degree, you are responsible if something bad happens to you.  There are precautions you could have taken, etc.  No matter the event, there are always choices you could have made to avoid the outcome that you deem makes you a victim.  However, part of the idea of being actually in control means that you are never a “victim” of other people’s choices or actions, because the very idea implies that you aren’t actually in control.  So you are only actually a victim when the aggressor has actively applied intelligence to disable, short-circuit, or otherwise evade whatever defenses or precautions you have taken against being taken advantage of.  Think of it like this: if you’re on a desert island and a bear comes and steals your food, then you’re a victim.  But you could have done any number of things to prevent your food from being stolen, such as hanging it from a tree, out of reach.  The bear is fundamentally at fault here (I don’t believe in the conventional idea of “blame” either, so this explanation might be a little awkward without background, but I’ll have to go on anyway), but that doesn’t mean you can sit there and rage about how that damn bear has made you a victim.  Your actions, to the degree that you invested resources to prevent an undesirable outcome, resulted in some probability of that undesirable outcome occurring- a risk.  Now, there are obviously far too many *possible* risks to address them all, but we can exercise our reason to determine which ones we need to address, which ones are worthwhile to address, and which ones we can safely ignore.
If you ignore a risk you should not have, then you are responsible for that mistake, even if you aren’t the acting agent of the aggression committed.  A bear is too animate, though.  Let’s go with physics.  You leave your food outside for a long time, and it rots.  Well?  You are responsible because you misjudged the risk of it rotting, didn’t take sufficient precautions, and now your food is gone.  In this case there is no aggressor at all- it’s you against the laws of physics- but the situation is exactly identical.  You can mope around claiming to be a victim, perhaps go to the government and demand that your food be replaced, and so on.  Now, I absolutely do not want this concept of judging and addressing risk to be confused with actually blaming the victim as the active agent in their own victimization.  These are entirely different concepts.  An agent acting in a way that exploits another agent does so because their incentives line up to make that a course of action they find acceptable.  The point of punishing them is to tip these scales enough that it is no longer economical to exploit others.  There is of course the problem of deciding who, exactly, gets the power of retribution, which I won’t go into here because this isn’t a post about anarchism.  The reason why you can’t have the punishment be merely equal to the crime (strip away connotations of law or government) is that the risk of capture is never 100%.  Say a thief steals purses.  If he gets caught 50% of the time, but each time he’s caught he only has to return the amount he stole, then getting caught barely changes the thief’s decision-making circumstances.  However, if the cost is losing a hand, then the thief will think twice before stealing that purse, because there would need to be a lot of money in there to justify a 50% chance, or even a 1% chance, of losing a hand.
Now, the funny thing about punishment is that you also have to account for a certain probability of false positives.  If an innocent man is accused of stealing that purse and gets his hand cut off, well, that’s pretty damn unjust, isn’t it?  So we have to scale back the punishment until it is enough to stop thieves while remaining acceptable to the innocents, given the risk of being hit with that false positive- keeping in mind that we are assuming the populace has a say in what the punishments are.  If you’re a totalitarian government, you couldn’t give a damn what the civvies say, and drastic punishments make sense because there is less crime to deal with, freeing up resources to put towards your own ends.  Draconian methods of control are, pound for pound, more efficient in terms of resources spent versus results achieved.  Their main problem, in fact, is that they are so efficient that they make life a living hell for nearly everyone.
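The deterrence arithmetic above can be sketched numerically.  The numbers here are illustrative assumptions, not claims about real crime or real justice systems- the point is only that a rational thief steals when the expected value is positive, so the penalty has to be scaled up as the capture probability drops:

```python
# Illustrative deterrence arithmetic.  All numbers are made up.

def expected_value(loot, p_caught, extra_penalty):
    """Expected gain for the thief: keep the loot when not caught;
    when caught, return the loot (net zero) and pay any extra penalty."""
    return (1 - p_caught) * loot - p_caught * extra_penalty

# Pure restitution never deters: theft stays profitable in expectation.
print(expected_value(loot=100, p_caught=0.5, extra_penalty=0))    # 50.0

# A stiff penalty flips the sign of the gamble.
print(expected_value(loot=100, p_caught=0.5, extra_penalty=500))  # -200.0

# Break-even extra penalty at capture rate p:
#   (1 - p) * loot = p * penalty  =>  penalty = loot * (1 - p) / p
for p in (0.5, 0.1, 0.01):
    print(p, 100 * (1 - p) / p)   # 100.0, 900.0, 9900.0
```

The last loop is the whole argument in three lines: as capture becomes rarer, the penalty required to deter grows without bound, which is exactly where the false-positive problem starts to bite.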

After that long digression, back to the main issue.  If you’re simply enjoying another person’s presence, then there’s no further expectation in the matter.  If they leave, you’re no longer enjoying their presence.  You start to run into problems when you ascribe ultimate value to people or objects, because you can’t unlink ultimate value as long as you actually perceive it as “the ultimate good in the whole universe.”  Now we run into a very controversial edge case when dealing with the loss of loved ones.  I say it’s an edge case because it doesn’t happen very often relative to our lifetimes- we’re not losing loved ones every other week.  Any model of loss, then, should focus primarily on dealing with the death of the most intimate friends (I will not say “and family” because if your family are not your close friends, then why are you with them?).  You know what, I’m going to elaborate on that parenthetical thought.  Your family, especially your nuclear family such as parents and immediate siblings, are people.  You know them for longer and have more opportunity to become very good friends with them, and when you’re a child there is a certain amount of not-having-a-choice in the matter that forces you to make friends or make war, and rational individuals choose the former in all but the most extreme circumstances.  So family reduces to very close friends.  The fact that you’re biologically related is of no philosophical significance whatsoever.  Medical significance, yes, but only because knowledge of your family’s genes can be used to deduce your own.  Social significance, of course not.  So I will treat the death of family as the death of friends who were as close as family members.
Now, to be honest, this is a topic that I’m reluctant to exercise my usual methods of beating to death, because there may be readers whose subjective experience of the matter is so powerful that I will waste my time if I try to dismiss the bits that require dismissal, focus in on what is significant, and use it to build up a new model that more accurately fits reality and rationality.  We have arrived at the idea that being with people is something you do for yourself, but it seems like lunacy to say that the death of a loved one shouldn’t hurt because you aren’t able to enjoy their presence any more.  That’s just not strong enough, right?  But isn’t that exactly what mourning is?  You won’t speak to that person again, or see them, or talk to them, or whatever else.  If you could do those things, then you wouldn’t care if they were technically dead- that’s just a cessation of some bodily functions.  If they could die and leave the person intact, wouldn’t that be a wonderful thing- you wouldn’t have to worry about death.  This is actually a fairly direct deduction for most people, but the idea that the physical death itself isn’t the source of their trouble is not.  It is the result of the event of death that they’re mourning.  Many religions exploit this weakness in thinking to interject “But life does continue after death!”, followed by the explanations, the fairy tales, and the bullshit.  They are careful, however, to always exclude the very functionality that death precludes, because they are unable to provide it.  They can’t help you talk to your dead loved ones, so they hide them away somewhere as ghosts, or in heaven where you will go, too, once you die.  The intuitive universality of the death process makes this nearly logical, except that a slight elaboration can add a significant degree of control over the behavior of the people who want to believe.
And some of the crueler religions take advantage of exactly these people, making this death process conditional upon how you live your life- upon exactly prescribed behaviors.  The most common trick is to exploit vague semantic identities such as “good” and “bad,” which enable retroactively changing what exactly those conditions are, for live updating of the believers’ behavior based on what is expedient at the time.  I’m always amazed and fascinated by the complexity of religion as an organism, and by the huge potential that religion proves memes have as a life form.

I am not suggesting that you shouldn’t feel pain- what a ridiculous assertion for a stoic.  The idea is that pain, like other sensations and emotions, is there to help you, not govern you.  If you felt fear and were unable to do anything but freeze up, curl into the fetal position, and pray, then what use is that?  For animals like the possum, it is an irresistible instinctive reaction programmed into them because in 99% of cases (at least in the genes’ experience) this is an effective defense mechanism, and giving the possum control over the matter would just screw up the system.  This isn’t strictly accurate, because possums evolved their primary featureset in the time before memetic delegation had been “invented” by evolutionary processes.  The application of reason is itself a major feature of humanity, and quite novel in genetic terms.  If you wanted to be truly biological about it, you could look at memetic evolution as the ultimate genetic trick, except that it is so effective it makes genes obsolete.  Intelligence is also so effective that genetic evolution can’t keep up with the rate of change.  For the pertinent example, we have invented cars and now they’re everywhere.  And now the possum’s very effective defense mechanism of freezing up when afraid causes it to get run over by speeding cars, and the genes can’t un-wire that feature for the new environment, because they aren’t able to perceive and judge.  I would like to say, though, that genes are definitely alive.  Not just in the sense that a person is alive- the gene pool of HUMANS is alive in a strange information amalgamation of the genes in every person, in a way that we really can’t quite comprehend because there are too many people, too much noise, and too much uncertainty about genes themselves.
The day that we truly understand genes completely, we won’t need them anymore, because we’ll be able to construct our own biological machines to any specification or design we like.  They’re just like any other machine, only far more complicated and sophisticated- especially in the organic ability to reproduce.  Interestingly, though, the body is itself one of the few things that we are currently unable to separate our selves from.  Some can conceive of what that might be like, and most of them have it wrong (I guarantee that I do too, but my conception is more complete than most, at least).  Note that the objective is to separate your self from as much as possible of what you don’t want- of that which subtracts from your good or your happiness.  I would argue that, for as long as it works, your body adds immensely to that happiness.  And insofar as it doesn’t, it subtracts immensely.  So the ability to perfectly fix the human body- a hypothetical perfect medicine- would eliminate the need for mechanical bodies unless their features were so far beyond those of a human body (which is the case) that you could get even more out of one.  Probably the main advantages are the ability to add processing power and memory, and the ability to have direct inputs.  Anyway, permit nothing to cleave to you that is not your own.  I am not my body, but insofar as I use it, rely upon it, and wish to keep it, it is mine.

So if I don’t even value my own body enough to want to keep it, what does that mean?  Well, I never said that I didn’t value my body, just that the value it provides is of the material sort, similar to eating a burrito, except that instead of the satisfaction of the burrito, my body contains the hardware necessary to eat the burrito, and without it any sort of gustatory satisfaction would be impossible (not strictly true- a perfect simulation of the experience is an identity).  This is similar to having a computer.  The computer in and of itself doesn’t actually provide a whole lot of satisfaction, but the things you can do with it will.  Perhaps the computer hardware hobbyists who make it a point of pride to have the best possible machine wired up in the best possible configuration get significant enjoyment out of simply possessing the hardware itself.  However, even with that example, we see parallels with the human body, such as with fitness junkies who make it a point of pride to have bodies sculpted out of steel, and enjoy simply having it.  Important note: most of these “fitness junkies” are doing it because of other people, not because they genuinely enjoy it, or because they even want the results.  And they get further conflicted by the fact that they are causing a change, which might conflict with their perception of themselves, or with others’ perceptions, and for some reason they’re anxious to step outside of that box.

Anyway, my entire point is quite simple, as usual, but it’s dressed up with many trimmings like mirrors in every corner of the room to show off the gleam on the little gem in the middle.  The idea that you should be dependent on others, the idea that that constitutes good social practices, the concept of a social personality, all of these things are foisted upon us because others had them foisted upon them.  We are the monkeys conditioned not to reach for the bananas within our reach because someone, at some point in the past, was punished for trying.  So now we have to live with everyone else.  But the most vital point is this: they don’t matter.  If you want to reach for that banana, they could physically stop you, but if they do then you have a clear and objective obstacle in your way, which can be overcome, instead of the hazy, confusing aimlessness of contradiction.

Macroscopic Decoherence

Macroscopic decoherence is a fancy name for the “many worlds” theory in physics, a resolution to the dilemma presented by quantum physics that, to some, makes a lot of sense.  Before I discuss what it is and what it means if it is true, I’ll first go over the more commonly accepted modern viewpoint- more specifically, its aspect labelled the Copenhagen interpretation.  OK, here’s the dilemma.  Heisenberg’s Uncertainty Principle, a verifiable precondition of any theory of quantum physics, states that you cannot simultaneously determine both the position and the momentum of a particle with arbitrary precision.  The practical reason usually given is that, for objects as small as particles, the act of measuring their properties has a significant effect in changing those properties.  For macroscopic objects such as a table, the photons bouncing off the table into our eyes don’t change the position or velocity of the table, and therefore we can ascertain both.  However, no tool has yet been discovered which can probe a particle without changing it in any respect, thus preserving its condition for a second measurement.  Hypothetically, I guess you could measure both properties simultaneously- within the exact same Planck time- but this is utterly impossible with current technology, which is totally incapable of operating with simultaneity on anything close to that time scale, and there may be other limitations I am not aware of.  Now, strictly speaking, this measurement-disturbance picture isn’t an accurate model of quantum uncertainty.  Rather, particles behaving like waves exhibit an inverse relationship of definition between conjugate variables such as, say, position and momentum: the more certain an agent is about one property, the less precisely the linked property can be known.  So it’s possible to have a continuum of accuracy about both properties.  This seems like a mad system, but it is due to the nature of waves.
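The tradeoff just described has an exact form.  For position and momentum, the product of the two uncertainties (standard deviations) is bounded below:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

Because only the product is constrained, the continuum mentioned above falls out directly: you can know both quantities to moderate, complementary precision, but sharpening \(\Delta x\) toward zero forces \(\Delta p\) to blow up, and vice versa.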
I think I should stop and leave it at that before I get sidetracked from the main point- I haven’t even gotten to the standard model yet.
This gives modern physicists a dilemma- it would appear that our universe is a fickle beast.  Let’s say that we ascertain a given particle’s position with perfect accuracy- doesn’t that mean that it is categorically impossible for us to make any statements at all about its momentum, due to total uncertainty?  With the caveat that perfect accuracy is impossible, yes.  So what happens to the momentum?  Or, more importantly, what happens to all the other places the particle could have been if we hadn’t measured it?
The Copenhagen interpretation of quantum physics claims that the other possibilities do not exist in any case.  This more closely parallels the way we think about the macroscopic world in practical terms, because even if we don’t know where a table is, we know the table has a given location that is not subject to change unless someone or something moves it.  The act of measuring the position of the table only puts the information about the table’s position into our heads; it does not change any fundamental properties of the table.  So the Copenhagen model concludes that the act of measuring where the particle is collapses its wave function into one possible state.  The measurement actually changes the wave function by nailing down one of the variables to a certain degree, leaving the other one free to flap around to a corresponding degree.  This collapse model causes particles to behave similarly to macroscopic objects in one sense.  However, in order to reach this conclusion, the Copenhagen interpretation has to violate several major precepts of modern science- I won’t go into all of them, although it is a laundry list if you want to look it up, but the universality and objectivity of the universe is one.  The fact that there are observers begins to matter, because it appears that we can change the fundamental nature of reality by observing it.  This raises the question of what exactly constitutes an observation- perhaps one particle bumping into another counts as an “observation”?  But relative to us, the uncertainty principle still stands for both particles, so there really must be something intrinsically different about being an observer.  This is the most serious flaw in an otherwise excellent model, and it is to address this flaw- along with a second one, that collapse makes particles on a small scale behave in a fundamentally different way than larger objects- that I add my thoughts to the camp of macroscopic decoherence.

Macroscopic decoherence does not require a theoretically sticky collapse, hence its appeal.  Instead, the theory goes that the other possibilities exist too, in parallel universes.  Each possible position, momentum, etc. exists in an independent parallel universe.  Of course, given the number of permutations for each particle and the number of particles in the universe, this forces us to postulate an indescribably large number of infinities of universes.  But if you accept that postulate, it allows a theory that explains particles in the same terms as macroscopic objects; you only have to accept that this same permutation mechanism applies to any and all groupings of particles as well as to individual particles.  So there exists a parallel universe for every possible version of you, every choice you have made, and so on into infinity.  This is something of a whopper to accept in common-sense terms, but it does create a more manageable theory, in theory.  The linchpin is that, rather than the act of observing causing the mystical destruction of the other probabilistic components of a particle’s waveform, it only pins down what those properties are relative to the observer in question.
In other words, the act of observing only tells the observer which parallel world they happen to be in.  Each parallel world has only one possible interpretation in physical terms- one position and momentum for every particle.  Unfortunately, there is an endless infinity of future parallel worlds, so you can’t pin down all properties of the universe, or a distinct set of physical laws would necessitate the existence of a single universe derived from that one.  The flaw in this theory is that the same approach can be taken to a variety of other phenomena, with silly results.  Basically, there is no reason to postulate the existence of parallel worlds beyond the beauty of the theory.  The same data are consistent with both the Copenhagen interpretation and macroscopic decoherence, which is why both theories exist.  Both produce the same experimental predictions because they’re explaining the same phenomena in the first place.  We can’t go backwards into a parallel universe, and similarly we can’t go back in time and find information that was destroyed by the act of observing the information we observed then.  It appears to me that, given current understanding, the two theories are unfalsifiable relative to each other.  Overcoming Bias makes a fascinating case as to why decoherence should be testable using the general waveform equations, but the problem I see is that theoretically the Copenhagen model could follow the same rules.  True, this lends serious weight to macroscopic decoherence, because decoherence systemically requires that those equations apply, whereas they would only incidentally apply to the Copenhagen model.  Or some souped-up version of the Copenhagen model could take this into account without serious revisions- it’s difficult to say.  I do disagree with the idea that macroscopic decoherence must be false because postulating the existence of multiple universes violates Occam’s Razor.  This is a misapplication of the razor.
Occam’s Razor doesn’t refer to the number of entities in question, but to the overall improbability implied by the complexity of the concept or argument being considered.  It just so happens that you have two options- either there is some mechanism by which observers collapse a wave into only one possible result, or there exist many possibilities of which we are observing one.  It is not a question of “well, he’s postulating one function of collapse, versus the existence of an endless infinity of universes.  1 vs. infinite infinities…  Occam’s razor says smaller is better, so collapse is right.”  That is not correct by any stretch.  True, currently there is no way to verify which theory is correct, but a rational scientist should consider them comparably probable and work towards whichever theory seems more testable.
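One standard way to make this reading of the razor precise- my gloss, not something the interpretational debate itself settles- is the complexity prior from algorithmic information theory.  A hypothesis \(H\) is penalized by the length \(K(H)\) of its shortest complete description, not by how many objects it entails:

```latex
P(H) \;\propto\; 2^{-K(H)}
```

By this measure, “the wave equations, applied uniformly everywhere” can be a shorter program than “the wave equations, plus a collapse mechanism triggered by observers,” even though the former implies vastly more universes.  Counting universes and counting bits of description are simply different quantities, and the razor cuts on the latter.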

Well, let’s consider the ramifications if this theory of macroscopic decoherence happens to be correct.  It means that every possible universe, ever, exists- every possible motion of every single particle.  According to quantum physics as we know it now, there exists some possibility that the Statue of Liberty will get up and take a stroll through New York.  It is a…  shall we say… exceedingly small… probability.  I won’t even attempt to calculate it, but I bet the denominator would be a tower of 10 to the 10 to the 10 to the 10… with so many exponents you couldn’t fit them all into a book.  It could easily be improbable enough that you couldn’t write that many exponents on all the paper ever produced on Earth, but I won’t presume I have any goddamn clue.  However, according to macroscopic decoherence, there actually exists a very large number of infinities of universes where this occurs- one for each possible stroll, one for each particle’s individual motion inside the Statue of Liberty for each possible stroll, and so on.  And this is for events so unlikely as to be effectively impossible, let alone for events as likely as intelligent choices between reasonable alternatives, such as what to order at a restaurant, or what to say every time you open your mouth, and then every minor permutation of each…. gah!  Any attempt to describe how many possible universes there are is doomed to fail.  Diagramming, on the grand scale, the possible life courses each person might take, I will leave to your imagination.

So now we get to the interesting bit- the reason why I am writing this post.  So in all of these parallel universes there exists a version of you that is doing all of these different things.  So the question I have is, are they really you?  Seriously, there are versions of you out there that are exactly, exactly the same in every respect and living exactly the same lives in exactly the same universes, with a single particle moving in an infinitely small way elsewhere in the universe in a way that does not and could not possibly affect you.  However, because of this schism of universes, you are separate consciousnesses inhabiting different parallel universes.  Now there is a high probability that these universes are not totally discrete- rather they inhabit a concept-space that, while isotropic, could be conceived of as having contours that describe the similarity of the universes, with very similar universes being close together and very different universes very far apart, in a space with an infinite infinity of dimensions.  As a result, even with respect to these parallel universes, these versions of you will be infinitely close to you and could be said to inhabit the exact same space, with versions splitting off into space while remaining identical, and other versions experiencing physical changes on the same spot (some of them infinitesimal, and others rather drastic, such as turning into a snake, a werewolf, or anything else you can conceive of).
So which of them is the “real” you?  Or have you figured out that the concept doesn’t have any significant meaning in this context?  If we narrow down this infinite schisming into a single binary split, then both sides can be said to be equally “original” based on the preceding frame.  By the same token, an exact copy of someone in the same universe should be treated as synonymous with the “original.”  Please note, for those who are unfamiliar with this territory (I get this a lot): I am NOT referring to cloning.  A clone is genetically the same, but so utterly disparate from its progenitor that this level of identity is not even approached.  I am referring to two entities that are so identical that there is no test you could perform to tell them apart.  Obviously, with any time spent in different physical locations in the universe they will diverge after their initial point of creation, but it is that critical instant of creation where the distinction matters.  If the two are synonymous, there is no “original” and no “copy”- indeed, the original is merely existing in two places at once.  If they could somehow be artificially kept identical by factoring out particle randomness and their environment, then they would continue to act in perfect synchrony until something caused a change, such as a minute aspect of their environment or a tiny change in their body’s physical makeup- a nerve firing, or even a single particle moving differently (which probably wouldn’t change much at first, but somewhere down the line it might, due to chaos theory).
So now we get to the difficult bit.  What about alternate encodings of the same information, represented in a different format?  Are the two synonymous?  I argue that they are, but only under certain circumstances: 1) a rigorous and perfectly accurate transcoding method is used to encode one into the other, 2) the timespan of the encoding is fast enough that significant changes in the source material are minimized, if not completely eliminated, and 3) the encoding can, theoretically, be converted back into the original form with zero loss or error.  The first requirement is the only ironclad one- if you make an error in the encoding, then the result will not be representative of the original.  The second and third are more complicated, but easy to assume in an ideal case.  The reason for the second is that there is a continuum of identity, and a certain degree of change is acceptable and will produce results that are “similar enough” to meet identity criteria.  If it’s the “you” from a year ago, it’s still the you from a year ago, even if it isn’t identical to you now.  So if the encoding takes a year, it does preserve identity; it just doesn’t preserve identity through changes into the future, which is an utterly impossible task anyway, because even a perfect copy will diverge into the future due to uncontrollable factors.  Thirdly, if there is no method to convert the new encoding back, then it cannot be verified that it is indeed synonymous with the original.  It is possible to produce an identical representation without this clause, but if for some reason it is impossible to convert it back, then you can’t know that the process is indeed a perfect one that preserves material identity absolutely.  This is the test of a given process.  Now, for digital conversion, reconversion back into physical media is impossible, but simulation in a perfect physics simulation that produces the same results is synonymous with re-creation in the physical world.
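Condition 3 is exactly the round-trip test used to verify lossless codecs.  As a toy illustration (assuming nothing beyond the Python standard library, and standing in for the far harder physical case), here are two alternate encodings of the same information passing the test, and a lossy one that cannot:

```python
# Toy illustration of condition 3: an encoding preserves identity
# only if decoding it reproduces the original bit-for-bit.
import base64
import zlib

original = b"the same information, represented in a different format"

# Two alternate encodings of the same information.
as_base64 = base64.b64encode(original)
as_compressed = zlib.compress(original)

# The round-trip test: decode and compare against the source.
assert base64.b64decode(as_base64) == original      # lossless
assert zlib.decompress(as_compressed) == original   # lossless

# A lossy "encoding" (dropping every other byte) can never pass:
# the discarded bytes are gone, and no decoder can recover them.
lossy = original[::2]
assert len(lossy) < len(original)
```

The same logic scales up in the post’s terms: a teleporter or digitization process is trusted only insofar as some round trip, actual or simulated, can demonstrate zero loss.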
I am aware that this appears to be a figure-eight argument, depending upon the identity of a simulation to prove the identity of digital simulation as a medium.  However, this is false because I am referring to a test of a specific conversion method.  In order to create a proven physics simulation, other provable methods might be used to compare the simulation’s results with the physical world.  Once the simulation has been proven to produce the same results as the physical world, given the same input, then a given instance of simulation can be added and compared with the exact same situation in the physical world, using the simulation as the calibrated meter stick by which to judge the newly simulated person or other digitized entity’s accuracy.

Is There a True, True Self?

I have compared the “true self” to the “false self” before, and while I will still stand behind the claim that the distinction can be made usefully within a certain semantic realm, I’m going to go the other direction in this post because in a different, more general realm, there is no “true self.”  As a matter of fact, if you look at it in the most general, explicit sense, you have no self at all apart from the information that constitutes your decision-making and thinking matrix.  What I’m trying to say is that when someone says that they act a certain way and that’s their “true self” and all other ways of acting are them doing something other than being their true self, they are misleading themselves.  No matter what they do, they cannot escape the fact that the same decision-making matrix, no matter how intricate or complex, caused them to act that way in each of those situations.  Now, if they mean to say that they have a preferred mode of behavior, but are forced to use a different mode of behavior in varying circumstances, well of course.  I have preferred modes of behavior, too, like I prefer to sleep or go out or play video games to doing actual work.  That doesn’t mean that I’m my true self only when I’m in the process of a preferred mode of behavior.  But that’s exactly how a lot of people reason out their reactions to, most commonly, certain other people.

I’m getting into material identity again, but since it is, I suppose, my preferred philosophical specialty, I may as well.  Because there is no single piece of information you can subtract from a person to make them not-that-person, the person as a whole (considered as a contiguous entity) only has meaning as far as perception will take it.  Relative to someone else, it’s their perception.  Relative to the person themselves, it’s their own perception that matters.  Imagine that you woke up and you were a different person!  Now, because of the nature of logic, this sentence has no true parseable non-tautological meaning.  I have included in the sentence that “you” are a different person, meaning you are still you.  So the English way to handle this issue is to change the meaning to “you wake up with a different body, probably one that once belonged to someone else,” or something similar.  No matter how you parse it in English, it isn’t handled in a logically rigorous way, in the same way that we don’t answer the question “Would you like tea or coffee?” with “Yes.”  While logical, that answer conveys little useful conversational meaning.  Bear in mind, though, that if we spoke a truly logical language, you would answer in a way that did convey conversational meaning, the same way you don’t say “Yes” in English (although the framework for asking the question would probably receive more semantic-structural changes than the affirmative/negative response structure).

But I digress, seriously this time.  We nearly had a terminal digression there into the land of logical languages.  Back to the issue of having one identity.  The truth is that we have an assumption here that we haven’t questioned: is it necessary to treat identities in the same way that we treat physical objects?  Once again this is a conceptual piece of English- we like to treat concepts like objects.  We can pick up drawing, have an idea, find an answer, and so on.  I’m not going too far into this as a topic- I would recommend Steven Pinker’s The Stuff of Thought for more on the subject.  Anyway, the assumption that identity is an object has numerous flawed bases.  The first is that there is one “person” per body, and we can count bodies; ergo, there must be one and only one identity per person, because that person has exactly one body.  The next flawed idea is that identity is immutable and does not change- that there could ever be a “one true” identity.  This isn’t even true for the lowest-level aspect of identity, the physical body, so how anyone can formalize the idea that identity must be fixed is beyond me, but it does happen.  It should be completely obvious that the body of a child is different from the body of an adult, so assuming any relation beyond material continuity is a flagrant violation of logic.  Now, it is not an error to say that there may exist similarities between these two identities/bodies/people, especially considering how causally connected the latter stage is to the former.  But to say that there is a fixed identity from which changes may be noted as deviations is just plain wrong.  People change a lot- people change very quickly.  Through the course of a day, each of us goes through periods of high and low energy, moods, thought patterns, and who knows what else.  However, there are people who are guilty of the next identity fallacy, which is the idea that somehow those aspects aren’t significant pieces of your identity.
They are passing and trivial and should be ignored because, in the grand scheme of the human identity, they are categorically different.  Well, this is wrong, but it’s less obvious to most people because it has some deep religious roots.  The idea that the body is distinct from the soul, and that the soul is much more important than the body can ever hope to be, is an old religious idea with tendrils all over the place.  The idea that something like a state of hunger contributes to your identity in any significant way is perhaps odd.  But look at it this way.  Imagine a teleportation machine that destroys your body and creates one exactly like it at a different location- I have used this example before.  If such a machine re-created your body perfectly in every detail, except that it omitted recording the information needed to compute and recreate your state of hunger (somewhere between total satiety and death by starvation), would it be a valid teleportation machine?  I’ll tell you what, I wouldn’t step through that bastard for a billion dollars, and not because I might be a starved corpse on the other side- it’s because I have no idea what information went into the complex computation of my own state of hunger/satiety.  Probably all kinds of things, from the contents of my intestinal tract to the levels of certain hormones and neurotransmitters.  If the machine omits all that information, I don’t come out the other side of that teleporter.  Someone else does.

So I am aware that I have a difficult position to defend here.  I’m saying, at the same time, that there is an immense degree of flexibility in what constitutes a person- that you can still be “you” in the sense that counts from the time that you’re a child until the day you die- but also that the standard for building a teleporter must be absolutely flawless in order to preserve material identity.  The reason is that I’m making the two comparisons based on different criteria.  I’m a strict materialist- everything can be reduced to an arrangement of matter and energy if a sufficient level of detail and fidelity is used.  However, matter and energy in and of themselves are just rocks and colored lights- they have to be organized into information patterns to be interesting.  So in the case of a standard human life, without teleportation, the information pattern persists in direct fashion through space and time and can be identified perfectly as being materially continuous.  However, once you introduce the ability to jump around in space and time, you have to get a little bit smarter than that in order to maintain material continuity.  As a way to think about material continuity, I’ll call it the Where’s Waldo? effect.  If it’s possible to look into the universe like a giant, four-dimensional Where’s Waldo book (including all periods of time) and find you, or any given person, then you have material continuity.  When you introduce the ability to jump around in space, then you need the end of one string and the beginning of another to match to a sufficient level of detail that the four-dimensionally-conscious being looking into the Where’s Waldo universe can put together the pieces.  The same thing is true if you’re jumping through time, of course, but most conceptualizations of time travel account for perfect material transport as a matter of course, so it’s not as interesting to talk about.
Still, if you have a time machine then you have necessarily created a teleportation device, because you could travel back in time just far enough to make the journey to wherever you’re going, then travel there, arriving at exactly the moment you left.  Not a super elegant mode of teleportation, but quite effective in physical and relativistic terms.

In fact, to be even more technically precise, it’s impossible to build a true teleporter without somehow cheating relativity.  The idea most often proposed is taking advantage of quantum entanglement to transfer information instantaneously to anywhere in the universe (though, strictly speaking, the no-communication theorem says entanglement alone can’t carry information)- it might also be done with some form of tachyon particle, but entanglement gets far more attention.  It’s something of an important idea that material identity is both time and space independent, because even if you could transfer the totality of your information instantaneously anywhere, I find it unlikely that it’s possible to instantly create a new body for you on demand.  As long as a more or less perfect copy gets made (ideally before you get “re-activated”) it makes no difference if you lost some time in the middle.  The real question is: how perfect does this copy have to be?  That is an extraordinarily difficult question to answer.  I have no idea how you would go about answering it in a mathematical sense.  As long as you have material continuity to fall back on, you have nearly endless flexibility, but the second that gets taken away it becomes a question of what you believe the limit is- a strange sort of “are you feeling lucky, punk?” attitude.  It’s the same operation, because material continuity is just using the super-perfect teleport trick over impossibly small distances and over the smallest possible time lengths (the Planck time, approximately 10^-44 seconds), using the same medium the information pattern itself is composed of, so the accuracy is so absolute as to be perfect.  Sure, particles jitter and all sorts of other stuff is going on, but that’s the nature of the pattern that you’re made of anyway.  Even in periods of the most rapid change you can conceive of, relative to the length of a single Planck time- I mean, come on, nothing happens that fast.

I don’t think that 10^-44 seconds will even fit into the human mind as a workable unit of time.  You would need 1 followed by 44 zeroes of them to get one single second.  To put that into perspective, if you had that many nanoseconds, the total length would be about 3×10^27 years- enough to contain the entire history of the universe (15 billion years) over 200,000,000,000,000,000 times.  A Planck time is small.  There is no practical way that sufficient change to break material identity could happen on a timescale so small.  So I just say that no matter what, material continuity equals material identity.  It’s not strictly true, but if you’re seriously in doubt then you must be talking about some thought-experiment edge case like “what if we had a particle accelerator that could destroy n brain cells in exactly 1 Planck time, how many would we have to destroy…”.  Thought experiments like that are awesome, and I run them all the time, but as a rule of thumb I think the idea that material continuity = material identity works quite well.
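The arithmetic above is easy to verify.  A minimal Python sanity check, using the same rounded constants as the text:

```python
# Sanity-checking the Planck-time numbers above (constants rounded as in the text).
PLANCK_TIME = 1e-44          # seconds - the text's rounded figure
SECONDS_PER_YEAR = 3.156e7   # ~365.25 days
UNIVERSE_AGE_YEARS = 15e9    # the 15-billion-year figure used above

planck_times_per_second = 1 / PLANCK_TIME      # 1 followed by 44 zeroes
total_seconds = 1e44 * 1e-9                    # that many *nanoseconds*, in seconds
total_years = total_seconds / SECONDS_PER_YEAR
universe_ages = total_years / UNIVERSE_AGE_YEARS

print(f"{total_years:.1e} years")                   # ~3.2e27 years
print(f"{universe_ages:.1e} ages of the universe")  # ~2.1e17
```

Both figures land right where the paragraph claims: about 3×10^27 years, or roughly 2×10^17 lifetimes of the universe.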

Strategy, Tactics, and Games

First of all, read this post.  Now.  http://www.ribbonfarm.com/2007/09/24/strategy-tactics/  It is pure genius.

After you’ve done that, I have analysis to do.  I’m not going to regurgitate a single shred of the information in the above article because I have too much to say.

First of all, the author, Venkatesh Rao, is absolutely correct.  Not only did this idea never occur to me, I never thought to question the fundamental assumptions used in the creation of strategies and tactics in the first place- his post adds a level of meta-tactical formulation that is essentially lacking in most decision-making.  More specifically: the idea that tactics are general while strategic thinking is unique to situations appears to be generally true, and it’s a much better approximation than the old model in which strategy is somehow more all-encompassing than tactics, but it falls victim to the same kind of thinking the old model did.

What do I mean by this?  Well, strategy by this definition does actually include tactics, necessarily.  Because it’s constructed for an individual circumstance, it must be built up from the different tactical options available to the agent.  However, tactics do not have to be part of a grander or lesser strategy.  A tactic can be described in pure game-theoretical terms without any real-world interaction.  This is accomplished by building a tactic up from axioms, in a way that strategies derived from doctrines aren’t.  A doctrine is an assumption about the world for practical purposes and is therefore derived from experience in an inductive fashion- a practical assumption which is most often true, or otherwise useful to assume.  Tactics derived from axioms are arrived at deductively.  For example, in a military situation, we know that we want to destroy as much enemy materiel as possible while incurring as few losses as we can.  This is not a doctrine- this is an axiom.  Similar axioms are assumptions like “guns have range” or “guns are highly lethal to humans.”  If we build up a number of axioms like this, we can describe whatever weapons in whatever known situation, and compute tactics such as having troops use cover, using infantry with anti-armor weapons to engage enemy tanks, using tanks to engage enemy assault infantry, and so on.  So maybe we arrive at an effective tactic of creating a formation with the tanks in front and a large number of infantry in a supporting role, to be brought forward when the enemy fields their tanks.  It’s important to note that we can change these parameters however we like and we’ll arrive at different tactical results.
For example, if we changed the situation to include the axiom that all infantry are highly effective at killing tanks, then it may not be worthwhile to field tanks at all because they would be destroyed too easily, and it certainly wouldn’t be a good idea to have them go first if they were all you had.
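To make the axioms-to-tactics point concrete, here is a toy sketch in Python.  The unit types and effectiveness numbers are invented for illustration: the effectiveness table plays the role of the axiom set, the recommended counter falls out of it mechanically, and flipping one axiom flips the tactic, just as described above.

```python
# Toy deduction of tactics from axioms.  The effectiveness table is the
# "axiom set"; the tactic is deduced from it rather than assumed.
# effectiveness[attacker][defender] = relative kill efficiency (made-up numbers)
effectiveness = {
    "infantry": {"infantry": 1.0, "tank": 0.2},
    "tank":     {"infantry": 1.5, "tank": 1.0},
}

def best_counter(enemy_unit, table):
    """Pick the unit type our axioms say is most efficient against enemy_unit."""
    return max(table, key=lambda unit: table[unit][enemy_unit])

print(best_counter("infantry", effectiveness))  # tanks lead under these axioms

# Change one axiom: all infantry become highly effective at killing tanks...
effectiveness["infantry"]["tank"] = 3.0
print(best_counter("tank", effectiveness))      # ...and the deduced tactic changes
```

With the original axioms, tanks go first; once the anti-tank-infantry axiom is added, infantry become the deduced counter to tanks, exactly the reversal the paragraph describes.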

In a strategic sense, we have a different way of looking at our available units.  We could talk about units in the same abstract sense as before and still come up with concepts of strategic interest, but in order to formulate a valid strategy we would really need to know the specifics of what we’re dealing with.  Do we have 122 tanks and 300,000 troops to call upon?  What’s the supply situation, what about morale, training, enemy targets available, etc. etc.  From this we might formulate a diverse array of potential strategies to maximize the effectiveness of the resources available.  However, in order to do that we need to have both good doctrine, or practical assumptions about the nature of the world, and good intel, or exact specifics about the situation at hand.  The difference is fairly easy to handle.  If we know that setting the tempo of the military engagement is critical, that’s a doctrine.  It has direct strategic significance by reducing the infinite field of possible strategies down to a more manageable number of probably useful ones very quickly.  Intel would be “the enemy has 513,889 soldiers located in that city” or “the enemy is going to attack in three days.”  Intel is necessary for making operational decisions, or low-level instance decisions.  I suppose it could be said that operations are simply a lower-level form of strategy, but they’re low enough level that it is practical to consider them fundamentally different.  Strategic thinking is necessary to make them work, as opposed to abstract tactical deduction, but the strategy selected is known and an implementation is all that is required.

Strategic thinking is not, as I and many others once thought, “higher level” than tactical thinking.  I would argue that it requires more experience and more intelligence to think strategically in a given field than to analyze it tactically.  With strategy, you are necessarily dealing with imperfect information and chance.  Chess is a game of pure tactics, with very little true strategy.  I would argue that more complex games like Go actually do include levels of strategic thinking because you have to address the board at hand and your opponent in a unique fashion.  However, in chess, you don’t care who your opponent is or what the individual situation is.  Given a sufficiently advanced derivational strategy you could compute the ideal move in a given situation.  The same thing could be said for Go, of course, but the computational capacity required is so immense that it is utterly impossible with the resources of a human brain.  However, chess masters make this sort of analysis when deciding what to do.  Ah, who cares about individual games.

Real-time strategy games tend to contain strategy, with a fairly sparse diversity of individual tactics.  Some tactics that are common to essentially all RTS games are things like rushing, turtling, spamming, and so on.  Strategically, however, you have to look at the terrain and what units your opponent is fielding and make a decision that will only hold for this specific situation.  One of the main flaws in RTS games, in my book, is that maps tend to play out the same way each time because the terrain has too little effect.  This sounds like I’ve got it backwards, but bear with me.  Two armies meeting in a field with no terrain at all have very few factors on which to make strategic decisions.  Barring some really different logistical or technological factor, the battle will probably play out much the same way every time you run such a simulation.  Now, if you added in a little terrain, just enough to create a few areas of strategic significance, then the nature of the game changes.  Both sides try to hold the same strategic areas, and succeed in proportion to the resources available and the ease with which they can hold a specific area (if it’s closer to them, etc).  However, these battles will also play out the same way every time because there aren’t enough options.  If you’ve only got a few points of obvious interest to both sides then they’ll fight over them every time.  The tactics used to obtain them may differ, but the strategic objectives are not up for negotiation.  In order to have a strategically interesting game there must be a greater number of possible strategic choices than a given side can hope to capitalize on.  What do I mean by this?  If we increase the number of points of strategic significance, up to the point where it is no longer an option to simply take them all, then the game starts to become strategically interesting in the sense that different players will make different strategic choices on the grand scale.
Now, I have to mention here, that it is also important to have multiple dimensions of possible choice.  If you have a wide selection of areas which will all give you resources, then the strategy doesn’t actually change.  You just have to get as many of them as possible- and the order that you take them becomes the individual strategy and doesn’t make an interesting strategic setting.  Perhaps the best way to create strategic significance is to give the players the ability to create strategic weapons, and depending on where they place them, the course of the battle changes.  The issue with this method though is that a given setup will lend itself to specific places to put such weapons.  So if you put these choices in the players’ hands, they’ll quickly settle on where the best choice is and just repeatedly place there.

I am trying to bring to light the principle of strategic consolidation- in game-theoretic terms, convergence on Nash equilibria.  Ideally, to create a strategically interesting situation, you would make it so that there are no Nash equilibria for your setup.  However, this is an almost impossible task.  So instead you can set about creating as many of them, in as complex a formulation as possible, so that the game doesn’t play out the same way too often.  I would posit that there must be a way to create a game which, from its fundamental structure, will be strategically interesting every time.
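For small games, checking whether a setup has pure-strategy Nash equilibria is entirely mechanical.  A sketch, with toy payoff numbers:

```python
# Brute-force search for pure-strategy Nash equilibria in a two-player game.
# An equilibrium is a pair of choices where neither player gains by deviating alone.

def pure_nash_equilibria(payoff_a, payoff_b):
    """Return all (row, col) cells where neither player can improve unilaterally."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            best_row = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
            best_col = all(payoff_b[i][j] >= payoff_b[i][k] for k in range(cols))
            if best_row and best_col:
                equilibria.append((i, j))
    return equilibria

# A coordination game: two stable outcomes, so it consolidates every time.
a = [[3, 0], [0, 2]]
b = [[3, 0], [0, 2]]
print(pure_nash_equilibria(a, b))  # [(0, 0), (1, 1)]

# Matching pennies: no pure equilibrium - closer to the "always interesting" ideal.
a = [[1, -1], [-1, 1]]
b = [[-1, 1], [1, -1]]
print(pure_nash_equilibria(a, b))  # []
```

The second game is the kind of structure the paragraph is after: no cell is stable, so play can never settle into one repeated script (though in practice players mix rather than stay unpredictable for free).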

Now how would we go about doing this?  The first point is that we must somehow factor in the right level of extra-structural and intra-structural factors.  Meaning: the map, player choices, and other circumstantial factors must have a variable level of influence, but not so variable that any one of them can ever break the game.  Of course, it would always be possible to create a map which breaks strategic interest, or for a player to play outright irrationally.  However, we as the hypothetical game designers get to put certain parameters on these things.  For example, maps should be between X and Y size with properties A, B, and C, yada yada yada.  We will only promise a game that is always strategically interesting if our input parameters are followed.  We will also assume that all players will be trying to win, although we have to allow for disparate skill levels.  That said, because we’re trying to make a strategic game, if we’re doing our job right then better players will straight up destroy worse players.  This is acceptable because we can keep the game strategically interesting by ensuring that any given strategy has a flaw the other player might exploit- even if they aren’t skilled enough to do so.

Alright, now we begin in earnest.  Because we want our game to be strategically interesting, we need a large diversity of points of interest, which necessarily entails a map of a certain size.  As a result, we will have to scale our unit balance accordingly.  Ideally, bigger maps would simply be better, but then we run into the issue of time limitations.  Games need to be limited to a certain time frame, or nobody will ever finish them and they won’t be fun.  We could get around this in a number of ways, such as having games run in phases, having a perpetual game, running it in turns, and so on.  However, all of these would curtail the structure of the game in a significant way.  So instead we’re just not going to worry about time being an issue.  Our theoretical game won’t account for the players having fun in any realm outside of the actual strategy of the game.  For example, we will not concern ourselves with the processing power required to run it, the graphics, the cost of the computer, or the market share of people who might be interested in buying such a game.  So we will have maps that are exceedingly large, with lots of different points of interest such as geographic features, resources, and perhaps even significant locations such as cities.  Regarding our resource model: we want it to be simple enough that the player doesn’t have to break their brain to get units to play around with, but we also need it to be extremely important, because the ability to reduce the opponent’s capacity to fight is a fundamental and necessary strategic concern.  As an aside, in order to have a diverse array of points of interest, we might cheat and have a massive variety of resources.  This is effective to a point.  I don’t know what the ideal number would be, but certainly 100 is far too many.
I would be leery of anything upwards of 10 or 20, and to have numbers that high, players would need to be able to convert resources conveniently (at a price, possibly a substantial one).  The other important issue is logistics.  Most modern strategy games ignore them because they are something of a pain.  However, I am confident that it is possible to implement a logistics system that the player doesn’t have to worry about except in the sense that they keenly feel the need to protect it, and to attack the enemy’s.  The player should never have to give orders to manually maximize the efficiency of their logistics systems.  The player is there to make strategic and tactical decisions, not to do daily maintenance.  If they were so inclined they should be able to change whatever they wanted, but a liberal dose of heavily customizable helper AI would do RTS games a great deal of good.  Similarly, the player should be in a position to decide what gets produced, but should not have to manually queue up individual buildings and units.  A flexible template system complemented with artificial intelligence would be fantastic.  The player can say “I want a firebase built here,” and the servitor AI summoned will see to it that whatever buildings the player has associated with a firebase are built at the location in question.

In a similar vein, the player should never be called upon to give orders to individual units.  This is a critical point.  The UI built on top of the basic unit level should be sophisticated enough that the player can quickly and easily pick out whatever units they want, organize them automatically into squads, order squads or companies, battalions, armies, whatever to be built and assembled automatically, and have those units automatically organized for them.  If iTunes can do it with massive libraries of mp3 files then an RTS game can do it with units.  Complex reports and commands should be routine.  The player should be able to get a complete breakdown of whatever subsection of units they like, according to whatever criteria they like.  For example, I might ask my war machine AI to give me a complete breakdown of my air force.  It will show me a page saying I have a total of 344,000 planes and then a breakdown by grouping, role, and further breakdown by type, with individual conditions and orders should I ask.  I should be able to look at a procedurally generated map showing what I have where and what they’re currently doing.  Regarding complex commands, it should be possible for the game to understand more complex elements than “move” and “fire.”  For example, if I want to mount a sustained bombing run on an enemy base, it’s not a complex task.  I just want to get a whole lot of bombers and have them kill everything in this here area while returning to base/aircraft carrier for fuel and ammo when necessary.  The player absolutely should not be required to designate every single target for every single bomber, and then manually order them to return.  It should definitely be an option to order specific units to destroy a specific target, but a more abstracted and powerful UI solution would be much better.  For example, I might designate a specific area as an enemy base which I label “southwestern air staging base” or whatever.  
Having the game automatically divide the map into sectors would be handy too, as would being able to draw symbols and regions on this map fabric and order units around with them.  Anyway, I can then designate specific enemy targets within that area with different values depending on how badly I want those targets destroyed.  I might even create an algorithm describing a way to automatically determine which targets I want destroyed more, such as always prioritizing factories or artillery pieces or whatever else.  Then, when I order a sustained bombing run, my bombers do what I want them to even when I didn’t specifically order them to.  I can go do something else without having to micromanage.  I guess that’s the whole point of this paragraph: the age of micromanagement is over.  Hopefully future RTS games will realize this, and we will look back on the RTS games of today as basically RPG games with more units.
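The target-value idea is simple to sketch.  In this hypothetical snippet (all names, sectors, and weights invented), the player assigns weights by target type, and the bombing AI sorts the targets in a designated area by weight instead of waiting for unit-by-unit orders:

```python
# Sketch of a player-defined target-priority algorithm (hypothetical names/weights).
TARGET_WEIGHTS = {"factory": 10, "artillery": 8, "barracks": 4, "wall": 1}

def plan_strikes(targets, area, weights=TARGET_WEIGHTS):
    """Order the targets inside `area` by descending player-assigned value."""
    in_area = [t for t in targets if t["sector"] == area]
    return sorted(in_area, key=lambda t: weights.get(t["type"], 0), reverse=True)

enemy_base = [
    {"type": "wall", "sector": "SW"},
    {"type": "factory", "sector": "SW"},
    {"type": "artillery", "sector": "SW"},
    {"type": "factory", "sector": "NE"},   # outside the designated area - ignored
]
strikes = plan_strikes(enemy_base, area="SW")
print([t["type"] for t in strikes])  # ['factory', 'artillery', 'wall']
```

Once a rule like this exists, “sustained bombing run on the southwestern air staging base” reduces to re-running the sort as targets are destroyed or discovered- no micromanagement required.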

To go further into what abstraction might do for our strategy game, RTS games need to start having operations.  By operations, I mean a large, coordinated plan with many active elements all working together, which the player could give specific names if they wanted to.  Including specific objectives as conditionals would be fantastic.  For example, if a player defined an objective as “blow this up,” the AI will understand that once the offending enemy is destroyed, that condition returns true.  The player could then see a breakdown by operation of how they’re doing in all their operations at once.  Your operation readout might be:

Operation Firestorm – In Progress
• 5:11 of planned 14:00 elapsed
• 4 of 11 objectives completed
• General force strength 87%
• Notes: massed assault eastward on sectors B65 through B88
Operation Lightning Spear (covert) – In Progress
• Jammers operational
• Cloaking operational
• Believed to be undetected
• 1:30 of planned 7:35 elapsed
• 1 of 5 objectives completed
• General force strength 100%

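A readout like that could be generated rather than hand-maintained if objectives really are conditionals- predicates over game state.  A hypothetical sketch (all class names, fields, and game-state keys are invented for illustration):

```python
# Sketch of "objectives as conditionals": each objective is a predicate over
# game state, so the operation readout is computed, never manually tracked.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Operation:
    name: str
    planned_minutes: float
    # objective name -> condition evaluated against the current game state
    objectives: Dict[str, Callable[[dict], bool]]

    def readout(self, state: dict, elapsed_minutes: float) -> str:
        done = sum(1 for cond in self.objectives.values() if cond(state))
        status = "Complete" if done == len(self.objectives) else "In Progress"
        return (f"Operation {self.name} - {status}\n"
                f"  {elapsed_minutes:.1f} of planned {self.planned_minutes:.1f} minutes elapsed\n"
                f"  {done} of {len(self.objectives)} objectives completed")

firestorm = Operation(
    name="Firestorm",
    planned_minutes=14,
    objectives={
        "destroy refinery": lambda s: "refinery" not in s["enemy_buildings"],
        "hold sector B65":  lambda s: s["sectors"]["B65"] == "ours",
    },
)

state = {"enemy_buildings": {"refinery", "barracks"}, "sectors": {"B65": "ours"}}
print(firestorm.readout(state, elapsed_minutes=5.2))  # 1 of 2 objectives completed
```

The “blow this up” objective from the text is just a lambda that returns true once its target is gone; the per-operation breakdown is then a loop over all active operations.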
I am aware that none of this seems to bear on how to make a game that stays strategically interesting.  But it seems to me that the main stumbling block for RTS games today is the user interface: they are just not suited to a really strategy-oriented game.  The player has to do too much.  While this increases the twitch factor- not necessarily a bad thing- it detracts from the ability to create large, sweeping, grand strategies.  Using groupings to combine individuals into squads, squads into companies, companies into battalions, and battalions into armies would be a huge improvement.  Doing it atomically allows a computer to easily construct the desired units based on input from the player.  For example, I design a squad of 20 soldiers, give 2 of them machine guns, and everyone has grenades.  I then say: give me a company with 13 of those squads, 3 units of 3 tanks apiece, 1 unit of 3 anti-air vehicles, 2 units of snipers, and 1 command squad unit.  I’ll put 30 of those companies into a battalion, of which I would like one built at this base, one at this base way over here, and another at this third base.  Automation is the name of the game, to free the player up for making the decisions that really count.
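That atomic grouping is naturally expressed as nested templates that the game expands automatically.  A trimmed sketch of the example above (snipers and the command squad omitted for brevity; the counts follow the text):

```python
# Atomic unit templates, composed upward: squad -> company -> battalion.
squad = {"rifleman": 18, "machine_gunner": 2}   # 20 soldiers, 2 with machine guns
tank_unit = {"tank": 3}
aa_unit = {"aa_vehicle": 3}

def combine(*groups):
    """Merge (template, count) pairs into one flat unit requisition."""
    total = {}
    for template, count in groups:
        for unit, n in template.items():
            total[unit] = total.get(unit, 0) + n * count
    return total

company = combine((squad, 13), (tank_unit, 3), (aa_unit, 1))
battalion = combine((company, 30))
print(company)            # {'rifleman': 234, 'machine_gunner': 26, 'tank': 9, 'aa_vehicle': 3}
print(battalion["tank"])  # 270 tanks across the battalion
```

Because a company is itself just a template, the “build one battalion at each of these three bases” order is three calls to the same expansion- the computer does the counting, the player does the deciding.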

Impulsiveness

Is impulsiveness a desirable characteristic?  I am a categorical thinker- I like to think about things before I do them.  However, as part of that thought process, it’s important to be able to suspend thought when necessary.  As such, whether or not impulsiveness has a place in the repertoire of the contemporary rationalist is an interesting question.  Firstly, we need to look at where impulsiveness is typically used.  Impulsiveness is often associated with interpersonal exchanges, with social people and people who enjoy parties.  It is strongly dissociated from business or financial decisions, with some exceptions such as small purchases and gambling.  So while common-sense thinking acknowledges that impulsive action is improper for weighty decisions, for more trivial matters it helps a great deal.

Before we get into the topic, we need to make some distinctions.  There is impulsiveness and then there is recklessness.  The way I conceive of the terms, impulsiveness is thinking of an action and allowing it to proceed into reality without too much analysis.  Recklessness, on the other hand, implies a full knowledge of the action beforehand, but doing it in spite of your analysis that it is foolhardy.  I will talk about both, but first let’s cover the less complex issue of impulsiveness.  In social situations, impulsiveness is a great aid because you can’t think too much about what you’re going to say.  There are a large number of very smart people who have difficulty in social situations because they don’t realize that their strategy for dealing with reality is not universally applicable- it needs to be changed to fit their needs of the moment.  When I was a kid I was like this.  I have since learned to pragmatically and completely apply rationality and can piece together the solution to such puzzles.  Basically, if you think too much about what you’re going to say, you give an unnatural amount of weight to when you do speak.  So unless you’re able to spout endless amounts of deep, profound thoughts, invariably you’re going to be putting a lot of weight behind fairly trivial statements, and the inconsistency comes across as awkward.  Impulsiveness will decrease the weight of what you’re saying and give it a sort of throwaway characteristic which helps you in a number of ways.  Firstly, if it doesn’t work out, nobody really notices, and you can keep going with whatever suits you.  Secondly, it puts you in a more dominant position of just saying whatever you feel like saying.  You aren’t vetting your thoughts to check if the rest of the group will approve.  This brings us to the second flaw in the introverted thinker’s social rut, the fact that they are attempting to apply thought to the situation to do better and it shows very obviously to the rest of the group.  
This is a complex point that I can’t encapsulate in one post, but basically any attempt to earn approval guarantees denial of it in direct proportion to the effort spent.  The introverted thinker’s goal is to earn approval, and their model for deciding what to say is, logically, fixed upon achieving that goal.  While their intentions are good, their entire approach rests on so many incorrect assumptions that they aren’t even capable of recognizing that their whole paradigm is nonfunctional.  They just dive right back in with an “it must work” attitude instead of reworking from first principles.

Impulsiveness is also a pragmatic tool to be used liberally in situations of doubt.  When it is clear that hesitation will cost more than immediate action, you have to go.  When I was younger I had a model of “going for help” which essentially contained the idea that help was distant.  So “going for help” would take a long time, and there was a significant chance that the window would close for whatever the situation was.  So my primary course would have been to just go do it myself.  That is an incorrect application of impulsiveness, because it rests on incorrect information.  A proper application of impulsiveness might be this: you are handed a test with 100 four-answer multiple-choice questions, and you have 100 seconds.  There is no way you could conceivably cover even 25% of the questions if you legitimately tried to answer them.  However, if you guess randomly you have a 1 in 4 chance on each question, so over 100 questions you should get about 25 correct.  This is clearly your best strategy given the rules of the game.  You concluded that the best strategy is to suspend rational inquiry into each question because it is simply not worthwhile.  You wouldn’t work for an hour to earn a penny, and you wouldn’t think for X seconds per question.
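The expected value of blind guessing checks out by simulation- a quick sketch:

```python
# Simulate blind guessing on a 100-question, 4-choice test.
# Expected score: 100 * (1/4) = 25 correct.
import random

def guess_score(rng, n_questions=100, n_choices=4):
    """Score from answering every question with a uniformly random guess."""
    return sum(rng.randrange(n_choices) == rng.randrange(n_choices)
               for _ in range(n_questions))

rng = random.Random(0)  # seeded so the run is reproducible
average = sum(guess_score(rng) for _ in range(10_000)) / 10_000
print(average)  # hovers around the expected 25 correct out of 100
```

Any attempt to actually read the questions in 100 seconds yields far fewer attempts than answers, so the impulsive strategy dominates- the rare case where not thinking is provably optimal.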

The other fallacy that makes impulsiveness distasteful to many is the idea that the answer actually matters.  With our test example, you don’t actually care what the answer to any given question is; you have all the information needed to create a sufficient strategy.  For social impulsiveness, the simple fact of the matter is that your actions really don’t matter that much- provided you don’t do anything truly inappropriate, at least.  The, and I use this term very reluctantly, “antisocial nerds” ascribe a great deal of value to their interactions and to what each party says.  This is a misunderstanding of the nature of the communication.  The actual content is unimportant.  Nobody cares if you’re talking about the weather, cars, or anything else.  True, this doesn’t make logical sense, and in a perfect world people would communicate usefully instead of feeding their egos with the fact that they’re talking to people.  Most “extroverts” are pleased by the fact that they’re talking to people, and are anxious when seen by themselves- this mentality is communicated to introverts and affects them quite adversely, because introverts prefer to be alone for some part of their day and may come to believe that there is something wrong with them.  Don’t buy it, please.  The people who *need* to be around others to validate themselves are the unstable ones.  It’s similar to the way men and women treat sex.  Men are often sexually insensitive and more pleased by the fact that they are having sex than by the sex itself.  They are usually seeking validation from society instead of their own enjoyment.  Of course, most women can pick this up immediately, and they would prefer not to be some boy’s tool for self-validation.  Women, you aren’t off the hook- you do the same thing, but not with sex.  Instead, you get validation from men paying attention to you while others are watching.  Don’t get me wrong, it goes both ways.
Some women perceive that they get validation from having lots of sex, and some men get validation by attention from women, they’re just not as common as the other way around.  Impulsiveness as a concept is often bundled with these behaviors which, although nobody really knows why, are widely believed to be “creepy.”  That’s just not the case.

Now, recklessness is a whole ‘nother can of worms.  Doing something that you know to be crazy, or doing something because it’s crazy, has a completely different backing behind it.  Most reckless people do it because the cost of the reckless action is balanced or outweighed by the enjoyment or rush they get from it.  This is the same mechanism that makes skydiving fun, even though skydiving is actually reasonably safe.  If you had a significant chance of dying, you wouldn’t be able to sell it to people as a recreational activity without some serious social pressure backing it up.  Ziplining is another example- deaths are vanishingly rare, but we perceive it to be dangerous and enjoy a rush from it.  There is, however, a time when outright reckless behavior can be a rational course of action.  Usually these circumstances fall into two categories: 1) you’re trying to make other people/agents believe you’re reckless, or 2) direct and/or thought-out strategies can be expected or countered easily, or are otherwise rendered ineffective.

Category 1 is the more common of the two and can potentially occur in any game or strategic situation.  Essentially, your strategy is to do something stupid in the hope that your enemy will misjudge your tactics or your capabilities, enabling you to take greater advantage later on, or in the long run.  In poker, it is sometimes a good thing to get caught bluffing.  That way, the next time you have a monster hand your opponent might believe you’re actually bluffing.  If you’ve never been caught bluffing before, they would be much more likely to believe you actually have a hand and fold.  Obviously, if you get caught bluffing enough times that it seriously impacts your pile of chips, you’re just bad at poker, but a single tactical loss can later be turned to strategic advantage.
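
You can put toy numbers on the bluffing example (all of them invented, just to show the shape of the tradeoff):

```python
def value_of_monster(pot=10.0, showdown=30.0, p_call=0.2):
    """Expected value of betting a monster hand: the opponent either
    folds (you take the current pot) or calls (you win the larger
    showdown pot, since by assumption you hold the best hand)."""
    return (1 - p_call) * pot + p_call * showdown

# A player never caught bluffing gets called rarely; one who was
# caught bluffing earlier gets called much more often.
tight_image = value_of_monster(p_call=0.2)
loose_image = value_of_monster(p_call=0.6)
```

With these made-up numbers the loose image is worth noticeably more per monster hand; that difference is the budget the earlier tactical loss has to stay under for the play to come out ahead.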

Category 2 is much more interesting.  Let’s take a game like Total Annihilation.  By the way, TA: Spring is totally free and open source, and it’s easily a contender for the greatest strategy game ever made.  Although it’s not fundamentally that complicated, there is no in-game help, so it can be very confusing for new players.  Feel free to log in to the multiplayer server and just ask for a training game- after one or two you should be up to speed and ready to play for real.  Anyway, in Total Annihilation, at least in the more standard-fare mods (there are dozens, if not hundreds), there are huge weapons that deal death massively and can pose a serious threat to the opposition in and of themselves.  Things like nukes, long-range artillery, giant experimental robots (and you can FPS any unit, bwahaha!!), etc.  The construction of one such piece can actually end the game if it stands uncountered or undestroyed for too long.  However, each has a counter, and the counters range in effectiveness.  For example, antinuke protects a fairly large area, but if you throw two nukes at it, it can only handle one.  Shields protect against long-range artillery, but they have a small area and cost a lot to run, and so on.  Now, a calculating player can probably figure out the ideal choice for the opponent in a given situation.  If he’s focusing all his stuff in one place, he may as well get both shields and antinuke, but the other player(s) could then steal the whole map.  If he goes for the whole map himself, the other player would probably get air units to attack his sparsely defended holdings.  If he consolidates in a few carefully chosen locations, nukes might be in order, and so on.

This is where we get to the recklessness-as-tool element.  Potentially the greatest advantage in complex games of strategy is surprise: doing something that the enemy did not expect and must react to.  Ideally the enemy has limited ability to reorganize to counter the new threat.  This is true of real-world military action- there are issues with communication, chaos, and a host of other factors that make reacting quickly difficult.  The more resources sunk into the threat, the more resources will be necessary to counter it (assuming that the attacker isn’t just stupid).  The Manhattan Project, for example, would never have been started, and there would have been no point to it, if the enemy could render nuclear weapons impotent by putting horseshoes on all their doors.  Now let’s say we have a game of TA where it would be obvious that hitting the enemy with a nuke would be the best course of action.  Of course, this same idea will have occurred to the person about to get nuked.  OK, so then big guns are the best strategy.  Except that your opponent can think of that too, because he might guess you’re not going to use nukes precisely because it’s too obvious.  And so on through all the possible options: whatever one can think of, the other can too.  Whatever strategy you might use to maximize your utility can equally be thought of by the enemy.  We are dealing with a perfectly constrained system.

But what if we de-constrained the system just a little bit?  We remove the rule that says we must maximize value.  Now we could feasibly do anything, up to and including nuking ourselves.  So we need a different rule in its place, because now we’re working with a screwed-up and dysfunctional model.  This is where the trick is.  You might still have a meta-model of maximizing value in your selection of an alternate strategy, meaning you will be just as predictable, albeit through a much more complex algorithm.  No, you have to truly discard the value-maximizing paradigm in order to get the additional value from surprise, and the trick is not to lose so much that you’re still behind after your surprise factor is added in.
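
One way to genuinely discard the maximizing paradigm, without becoming uniformly random, is to sample your strategy from a distribution instead of computing the argmax.  Here’s a sketch; the option names and payoff numbers are mine, invented for illustration, not from any real TA balance:

```python
import math
import random

# Hypothetical strategic options with rough expected-value estimates.
options = {"nuke": 10.0, "artillery": 8.0, "air_raid": 6.0, "map_grab": 4.0}

def sample_strategy(payoffs, temperature=3.0, rng=random):
    """Sample a strategy from a softmax over estimated payoffs.

    A pure maximizer (temperature near zero) is perfectly predictable.
    Raising the temperature deliberately trades expected value for
    surprise: even an opponent who knows this exact algorithm cannot
    know which option the dice will pick this game.
    """
    names = list(payoffs)
    weights = [math.exp(payoffs[n] / temperature) for n in names]
    return rng.choices(names, weights=weights, k=1)[0]
```

The point of the construction is that the unpredictability lives in the dice roll, not in a cleverer deterministic rule; a deterministic fallback, however convoluted, is still a perfectly constrained system.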

My problem here is that I’m trying to reduce a complex and multi-dimensional strategic game to a single aspect under consideration.  My other problem is that many of you will never have heard of Total Annihilation.  The same idea applies to more or less any other sufficiently complex game, such as Starcraft, but force converts too directly into victory in most modern games for such meta-strategies to be significant.  If you have more troops, or the right kind of troops, you win.  If you’re behind, you’re behind, and there’s not a lot you can do about it other than try harder at what you were doing before.  So while surprise might give you some advantage, it’s probably not worth falling behind to get it.  Careful application of force certainly helps, but it’s not as vital as in Supreme Commander or Total Annihilation.  No, I’m not harping on the games in question or demanding that you play them, I’m just sharing my particular taste in video games.

Impulsiveness once again.  I seem to be digressing more and more these days.  Basically, what I’m trying to communicate is that in some situations (games, to use the theoretical term) the act of analysis must be taken into consideration in your planning.  How much time can you spend analyzing, what should you be analyzing, how is the enemy thinking, and so on.  Once you bring the act of thinking into the purview of strategic considerations, impulsiveness becomes a viable strategy that simply does not occur to someone who cannot conceive of thinking as a strategic concern.  Such people implicitly believe that life is a game of perfect information with unlimited time for a given move.  The truth is, you’re acting when you decide what to do, and that act will have an effect on the world and on the results you get.  There are lots of proverbs about hesitation, but they don’t seem to extend to when to think and when to just act.  On the whole, I think most people have an implicit understanding of this type of decision making- it comes pre-packaged with the HBrain OS- but they haven’t really considered exactly what it is they’re doing on a consistent basis.  I’m just here to point it out, so those who haven’t can read about it and be provoked into doing so.
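
Returning to the timed-test example, the “thinking is itself a move” point can be modeled directly.  A toy version, where every number is an illustrative assumption of mine (30 seconds of honest thought per question, one second just to mark an answer, 90% accuracy when you think, 25% when you guess):

```python
def expected_score(n_think, *, n_questions=100, budget=100.0,
                   think_time=30.0, mark_time=1.0,
                   p_think=0.9, p_guess=0.25):
    """Expected test score if you think through n_think questions and
    spend whatever time remains guessing.  Marking any answer costs
    mark_time; a thought-out answer costs think_time on top of that."""
    time_left = budget - n_think * (think_time + mark_time)
    if time_left < 0:
        return 0.0  # ran out of time before marking anything else
    n_guess = min(n_questions - n_think, int(time_left // mark_time))
    return n_think * p_think + n_guess * p_guess

# Thinking is an action with a cost: every question you reason through
# crowds out dozens of cheap guesses.
best = max(range(4), key=expected_score)
print(best, expected_score(best))  # 0 25.0
```

Under these numbers, pure guessing dominates; with a faster thinker or a longer budget the optimum shifts, which is exactly the kind of analysis-of-analysis the paragraph is describing.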