Axiomatic Human Properties

In any philosophy of human nature there are certain parameters of the human condition which are inserted axiomatically. These properties are extremely significant to the formulation of any philosophy governing people, namely ethics and politics, but usually aren't addressed in a uniform and clear manner. The following elements are single pieces that might be composed together to create complex ethical theories or political philosophies. Simply rattling off a list of beliefs about human nature being one way or the other in reactionary mode is pretty much a waste of time. Connecting them together to create a model that accurately reflects the world, or some piece of it, can be very important to the advancement of human knowledge. Big names in political philosophy like Hobbes, Locke, and Nietzsche have built their ideas up from the same basic elements, but they've done it in a creative, novel, and useful way that reflects the way many people see and interact with the world. I believe that spreading a little understanding of what exactly the building blocks of such thinking are can improve the quality of thinking in the US and around the world.

The first and most commonly addressed one is whether people are fundamentally good or evil. This question has so many ramifications for all aspects of any philosophy. If people are inherently evil then it is necessary to use some form of philosophical machinery to control, alter, or ameliorate the evil nature of humanity. This is a totally different viewpoint from someone who believes people are fundamentally good, who doesn’t need their philosophy to do much to control human behavior. Indeed, the entire realm of philosophy, particularly ethics, is more focused on what individuals decide virtue is, and each person can have their own philosophy and you can trust them to be virtuous anyway. Their virtue is given, the philosophy is a result instead of the other way around. If human nature is evil, however, then philosophy must come before human virtue can be achieved, and it is necessary to identify the philosophy most conducive to society and then enforce that point of view on everyone. If they can’t be forced to accept it, they must be forced to at least obey it through the application of laws and punishments. Most political philosophers of sufficient import are in the camp of humans being evil, and most of the governments derived from their philosophy depend upon coercive application of laws and police and courts in order to control their population. Whether people or philosophy come first is the ultimate chicken-or-the-egg question, and its primary embodiment is the debate over whether human nature is good or evil.

There is also the question of whether one man is competent or not: whether a single person has great powers available to him or is nothing by himself. It is reasonable to have a point of view where human nature is good, but naturally stupid. This is more akin to the Stoic idea, where everyone has virtue as a driving force. Every murderer has a justification for why they saw fit to commit murder (assuming they aren't innocent), and they really believe their justification. If they were fundamentally evil, they couldn't care less about virtue. They may still be trying to dress up their actions as virtuous to cynically try to escape punishment, and we arrive at a Chinese Room dilemma of having to verify whether or not someone "really believes" something or if they're just pretending. In almost all cases, however, they truly believe their rationale, despite the fact that it is highly irrational. Murder and other crimes, viewed in a broader context by a rational being, are all stupid, even discounting the additional punishments inflicted by laws. If you lie for your own benefit, then nobody has the incentive to trust you. In the extreme short term, perhaps you don't care, but if such a person were actually rational they would realize that a perfect reputation and a rock-solid name can yield far greater dividends for their own success than simply cheating and running. The law is an attempt to make this choice "more obvious" by putting a direct penalty on undesirable actions, making the line of reasoning a little easier for the less rational in the populace.
It is also possible to have a worldview, and this is the particularly sinister Hobbesian or Machiavellian view, that people are both cunning and malevolent. If this is the case, the only recourse is to make people act outside of their nature. Indeed, not only is distrust of everyone to be expected, but there’s no authority to look to for protection who isn’t subject to the same rule- they can’t be trusted, they will seize power and abuse it. Hobbes is the more primitive philosopher, and his answer to the cunning-and-evil dilemma is to put the most cunning and evil of them all in charge, the better to protect the people under the power of the ruler. Obviously he didn’t phrase it like that, but in effect creating a single all-powerful ruler in such an environment will only magnify the problem. Machiavelli addresses the issue more accurately by saying yes, it is the most cunning and evil who will be in charge, and the more cunning and evil he is the better a ruler he will make because cunning and dirty tricks are the best way to get ahead. An extremely pessimistic view, but at least it’s internally consistent. It’s actually very difficult to disprove that argument because it contains within itself its own genesis, but I believe it fails on the grounds that people would shy away from a world like that and attempt to make it a more pleasant place to live in for themselves and others.

Whether people are rational, whether people are social, whether people are natural leaders, natural followers, etc. Indeed, there is always a huge debate over what properties we can ascribe as natural to humans, and which ones are learned or inculcated, and by whom they are or should be conditioned, whether it's the parents, the community, the government, the religion, etc. Different philosophers have proposed different traits as being innate, and I imagine that at some point some thinker has claimed each and every imaginable aspect under the sun must be natural and innate. The oldest assertion of this type is that humans are innately social beings, and indeed this is backed up by recent discoveries in biology, anthropology, and genetics. If we are innately social creatures, then we will congregate into groups and there is no modification you can make to the human condition that will overcome this. You can compensate for it by conditioning behaviors, but the natural tendency will still exist. The idea of human nature is actually a special case of the naturalness argument: it claims that people have a natural ethical decision-making faculty and also makes a statement about the tendencies of that faculty. The argument that there is no such faculty can be used to construct nihilism, pragmatism, and numerous other theoretical frameworks. The same can be said of any given property that you wish to ascribe as natural to humans.

What properties are innate to a person, and what properties can change through the course of their lives? This is similar to, but quite distinct from, the question of whether a person has the capability to change themselves, to what extent such willed self-change is possible, and what properties or aspects can be changed this way. The same question applies to other vectors such as parents, the state, etc. Innateness is distinct from natural emergence in that a property that is innate depends entirely on physical (or other immutable) composition. A naturally emergent property is merely said to exist, with no particular emphasis on how or why it is that way. If it's innate then it is a product of the human physical (possibly soul or spiritual) existence. If it's not innate then it is acquired at some point over the course of your life. Note that non-innate properties can still be natural. For example, humans lack the capability to walk at birth, so walking is not truly innate (I use a philosophically difficult example because this is highly debatable, I apologize, but there is no example of something that is obviously not innate but is natural), but it is natural because it is a naturally emergent behavior. A better example may be language, where it could be argued that a natural faculty for languages in general exists, though perhaps not an innate one, but the faculty for any particular language such as English is definitely not innate (although it also probably isn't natural, because saying "humans naturally speak English" is obviously wrong. We can get around this by citing a particular unspecified instantiation, such as "humans naturally speak some language," but this is rapidly becoming too complicated to use as an example).
An argument for extreme nativism puts total emphasis on innateness. The entire course of your development is preprogrammed into you as a baby, and is fully contained within your existence at any point in time. Extreme nativism is a more or less extinct line of reasoning. The opposite end, what has been called "tabula rasa" or "blank slate," is the idea that you have zero internal programming at birth- you are totally blank, and you acquire a mind and life over the course of your life. While this seems a lot more reasonable, purist tabula rasa thinking is also more or less extinct. It's clear that there is some mixture of the two going on, but exactly how much of each is present is not entirely clear. I dislike this phrasing of the issue, but this debate has been called "Nature vs Nurture." I hate saying that because nurturing is a natural process- indeed humans have certain parameters for raising children encoded into our genes (praying mantises have different ones…).

Part and parcel of the natural human condition debate is what is mutable about human nature, and what is immutable, which of course form a continuum between hard wiring and total flux. A certain trait might be imparted at birth, but still be changeable such as through changes in gene expression. My hair color is different than it was when I was eight (I was blonde, now I have brown hair) and this is a property that is usually associated with genes and assumed to be immutable. We usually assume that the Nature side of the debate assumes immutability, and the Nurture side likes mutable traits. There is no requirement that these assumptions be the case, but nevertheless they tend that way. It makes intuitive sense because after all, if you were born without a certain trait, it must have been installed at a later time and must therefore be reversible, right? Wrong. Conditioning received as a young child is often highly immutable and tough to change, and mental models touching core beliefs are often very difficult to change as well, even if they are destructive.

The reason why these human properties are axiomatic is that for the most part you can come to any conclusion you like and have it result in an internally consistent model. These are fundamental building blocks from which you can construct any theory you like. While someone may disagree with you on axiomatic grounds, a direct proof of their argument will not be sufficient to disprove or otherwise dislodge your position. As it should be: an argument made from such axiomatic points can be incorrect in its premises or improper in its logic, but merely pushing an alternate position will not undermine an argument made by someone else. There is an immense possible composite-theory space that can be created just from the extremely few basic axioms I have chosen to mention here, and there are many, many, many more that can be used reasonably.

On Antisocial Stoics

I would like to address a claim that is sometimes made against stoics, particularly against some of the ideas of Marcus Aurelius, who said, among other things, "Permit nothing to cleave to you that is not your own, and nothing to grow upon you that will give you agony when it is torn away." Given the extremely elevated status of friends and interpersonal relationships in our society, this concept doesn't jibe well with the idea that we all have to form deep bonds with one another. The idea of being stoic and of keeping your emotions subservient to your mind seems to conflict with the idea that we're supposed to share our feelings with others. Why it is believed that someone else being aware of the factual state of your existence creates a bond is beyond me, but it is implicitly assumed in our interactions with one another. The most canonical example is when you encounter someone you know and ask them how they're doing, what's going on with them, or the like. Both of you probably know, if you thought about it, that the other person's answer is irrelevant. Neither of you really gives a damn. But it's the greeting you use because it is a sharing of information of a moderately personal nature, or at least it's a question requesting that information, which implies a certain closeness. Whether you're doing it to provoke that sense of intimacy in the other person, in the impressions of people listening in, or to convince yourself, I don't know. However I do know that very little of what is commonly thought of as conversation is an actual sharing of empathic significance or deep thoughts. What is commonly accepted as "small talk" is the norm of human interaction, and it is accepted as having zero functionality.

Now, I am of course being a little over-literal here. The purpose of small talk is to fill the time when everyone concerned might be uncomfortable having a real conversation; it allows people to get comfortable with one another. However it is not and will never be the goal or endpoint. It is vital that just "being with" other people is never something you're setting out to do, because standing next to other humanoid figures and flapping your vocal folds is, in and of itself, not really a worthwhile activity. If you're interacting on an empathic, mental, philosophical, or whatever medium in a way that gives you genuine enjoyment, such that you would actively choose that person's presence over some other activity you enjoy, then of course it's a good thing- that's just a basic pursuit of your own satisfaction. This is obvious and almost trivial, but I think I need to inject it here so I don't scare off exactly the people who need to hear this.

The best corollary to this whole mess is our modern conception of sex, especially among men. Men tend to be in a position of weakness and insecurity due to having conflicting internal models and programming and all manner of other nonsense going on in their heads, leaving them a little lost and confused. One of the dominant themes that results is a pursuit of sex that is driven more by social power than actual personal satisfaction. Many men are more gratified by the fact that they are having sex than by the sex itself. They'll brag to their buddies about it and allow themselves that extra iota of self-respect because they "got laid." The self-destructive side of this thinking is that they honestly believe they aren't worth anything unless they can convince a woman that they are worthwhile enough to sleep with. I am unsure of how many women have this problem, but it is widespread among men. I suspect that because women are dealing with this population of men, they live in sexual abundance and don't develop the same complex- attractive women at least, if not all women. I am speculating now, but I find it probable that women have a similar complex revolving around marriage, gratified more by the fact of being married than by the marriage itself, resulting in the "must get married" effect at a certain age. Many, many people of both sexes are gratified more by the presence of other people than they actually enjoy being with them.

The simple fact of the matter is that if you go out seeking deep bonds, what you will find is the most superficial of relations with people as desperate for companionship as yourself. Deep bonds, described as such, actually don't exist as we conceive of them. It's not that you spend a lot of time with someone or that you have known them for a long time, or even that you know a great deal about them and their personal preferences such as their favorite flavor of ice cream. In fact, I would go so far as to say that knowing a huge amount about their preferential minutiae actually subtracts significantly from the goal that most people are seeking. If there's a woman I like, I couldn't care less what her favorite flavor of ice cream is. The question is whether or not she is fun to be around. If I were to feverishly try to get her to like me or memorize her personal preferences, that's work. Stupid, counterproductive, and manipulative work, at that. That's all. Perhaps we have deep empathy, perhaps we're alike, maybe we have good discussions or great sex; it makes no difference (OK, I lie)- the question is only whether she's a positive presence in some- preferably many- ways.

Part of the problem is the widespread perspective of the "personality." And for the love of life NEVER evaluate someone's "personality" as 'good' or 'bad.' Both those words are the most abused semantic identities ever created, and they both can mean nearly anything while being very specific about one thing and one thing only- and by hiding the implementation of that judgment there is no way to argue with it. There is no such thing as a personality- a person is composed of the sum of their mind and the actions derived from it. You cannot ascribe someone a personality such that if they do something that is "not like them" they're being fake or somehow not being themselves. Whatever the circumstances, they are merely exhibiting a decision-making pattern you haven't previously observed or were otherwise unaware of. It is the same person, ergo they are the same person. This idea that we can understand someone else, ascribe them a simplified model that will predict their behavior, and then expect that behavior from them is disgusting. People are very complex- one person is far more complex than the sum of all of their understandings of other people, much less someone else's understanding of them. It can't be your personality that you like coffee, such that you're doing something bad when you don't drink coffee. The drive to be consistent is not a natural one- it's a societal stamp mark on the inside of your brain that tells you to be simple so that others can understand you better. But who gives a flying shit about whether other people understand you? Do what you want! If you wake up and wonder if eggs scrambled with cocoa and baking soda taste good with ketchup, then go right ahead and try it! It doesn't have to be your personality that you eat weird things- it's just something you want to do, so you do it. That's a bit of a weird example, but it holds. Why we don't expect one another to do what we want is just beyond me, especially in our day and age with so many options available. There are all manner of stigmas against jocks, nerds, cheerleaders, sluts- you name it, there's a stereotype that someone wants to slot you into. So, how about, just to screw with them, you completely break their model of the world by totally not fitting into the model they would like you to fit. Just for fun.

So here’s the question.  “Permit nothing to cleave to you that is not your own, and nothing to grow upon you that will give you agony when it is torn away.”  The idea here is that you are your own pursuits and not permitting external people or objects to influence you or your goals.  This is both a warning against addictions of all forms, perhaps especially social ones, and a caveat emptor for everything you allow into your life.  You control your personal sphere- to the best of your ability at least.  It is your responsibility and nobody else’s to make sure that only elements you want are a part of your life, and it’s your duty to yourself to safeguard the vaults against the thieves that would seek to plunder your wealth.

I have something to say about victimization here. Blaming the victim for a crime committed against them is the original scam. It is the classical attempt to cheat and then get away with it, and the more serious the crime, the more potent a tactic it becomes. The idea that you control your person means that yes, to a degree, you are responsible if something bad happens to you. There are precautions you could have taken, etc. etc. No matter the event, there are always choices you could have made to avoid that outcome you deem makes you a victim. However, part of the idea of being actually in control means that you are never a "victim" of other people's choices or actions, because the very idea implies that you aren't actually in control. So you are only actually a victim when the aggressor has actively applied intelligence to disable, short-circuit, or otherwise evade whatever defenses or precautions you have taken against being taken advantage of. Think of it like this: if you're on a desert island and a bear comes and steals your food, then you're a victim. But you could have done any number of things to prevent your food from being stolen, such as hanging your food from a tree, out of reach. The bear is fundamentally at fault here (I don't believe in the conventional idea of "blame" either, so this explanation might be a little awkward without a background, but I'll have to go on anyway), but that doesn't mean you can sit there and rage about how that damn bear has made you a victim. Your actions, to the degree that you invested resources to prevent an undesirable outcome, resulted in some probability of that undesirable outcome occurring- a risk. Now, there are obviously far too many *possible* risks to address, but we can exercise our reason to determine which ones we need to address, which ones are worthwhile to address, and which ones we can safely ignore. If you ignore a risk you should not have, then you are responsible for that mistake, even if you aren't the acting agent of the aggression committed. A bear is too animate. Let's go with physics. You leave your food outside for a long time, and it rots. Well? You are responsible because you misjudged the risk of it rotting, didn't take sufficient precautions, and now your food is gone. In this case, there is no aggressor at all- it's you against the laws of physics, but the situation is exactly identical. You can mope around claiming to be a victim, perhaps go to the government and demand that your food be replaced… yada yada. Now, I absolutely do not want this concept of judgment and addressing of risk to be confused with actually blaming the victim as the active agent in their own victimization. These are entirely different concepts. An agent acting in a way that is exploitative of another agent is doing so because their incentives line up appropriately to make that a course of action they find acceptable. The idea of punishing them is to tip these scales enough that it is no longer economical to exploit others. There is of course the problem of giving the power of retribution to whom, exactly, which I won't go into here because this isn't a post about anarchism. The reason why you can't have the punishment be equal to the crime (remove connotations of law or government) committed is that the risk of capture is never 100%. Let's say a thief steals purses.
If he gets caught 50% of the time, but each time he's caught he only has to return the amount he stole, then it doesn't really change the thief's decision-making circumstances that much. However, if the cost is losing a hand then the thief will think twice before stealing that purse, because there would need to be a lot of money in there to justify a 50% chance, or even a 1% chance, of losing a hand. Now, the funny thing about punishment is that you also have to account for a certain probability of false positives. So if an innocent man is accused of stealing that purse and gets his hand cut off, well, that's pretty damn unjust, isn't it? So we have to scale back the punishment until it is enough to stop thieves while being acceptable to the innocents based on the risk of being hit with that false positive. Keep in mind that we are assuming the populace has a say in what the punishments are. If you're a totalitarian government, you couldn't give a damn what the civvies say, and drastic punishments make sense because it's less crime you have to deal with, freeing up resources for you to put towards your own ends. Draconian methods of control are, pound for pound, more efficient in terms of resources spent versus results achieved. Their main problem, in fact, is that they are so efficient that they make life a living hell for nearly everyone.
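To make the arithmetic concrete, here is a minimal sketch in Python. The purse value, catch probabilities, and the dollar figure put on a hand are all made-up numbers for illustration; nothing here comes from the argument above beyond the basic expected-value logic.

```python
def expected_gain(purse_value, p_caught, penalty):
    """Thief's expected profit from one theft: he takes the purse either way,
    and pays the penalty (expressed in dollar-equivalent terms) if caught."""
    return purse_value - p_caught * penalty

# Penalty = merely handing the purse back: the theft still pays on average.
print(expected_gain(100, 0.5, penalty=100))        # 100 - 50 = +50

# Penalty = losing a hand, priced (arbitrarily) at $500,000:
print(expected_gain(100, 0.5, penalty=500_000))    # about -250,000: deterred
print(expected_gain(100, 0.01, penalty=500_000))   # still about -4,900 even at a 1% catch rate

# The counterweight: an innocent person with, say, a 0.1% chance of being hit
# by a false accusation carries 0.001 * 500,000 = $500 of expected harm, which
# is why the punishment gets scaled back to what the innocents will tolerate.
print(0.001 * 500_000)
```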

After that long digression, back to the main issue. If you're simply enjoying another person's presence, then there's no further expectation in the matter. If they leave, you're no longer enjoying their presence. You start to run into problems when you ascribe ultimate value to people or objects, because you can't unlink ultimate value as long as you actually perceive it as "the ultimate good in the whole universe." Now we run into a very controversial edge case when dealing with the loss of loved ones. I say it's an edge case because it doesn't happen very often relative to our lifetimes. We're not losing loved ones every other week. A model of loss would therefore be focused primarily on dealing with the death of the most intimate friends (I will not say "and family" because if your family are not your close friends then why are you with them?). You know what, I'm going to elaborate on that parenthetical thought. Your family, especially your nuclear family such as parents and immediate siblings, are people. You know them for longer, and have more opportunity to become very good friends with them, and when you're a child there is a certain amount of not-having-a-choice in the matter that forces you to make friends or make war, and rational individuals choose the former in all but the most extreme circumstances. So there are just very close friends. The fact that you're biologically related is of no philosophical significance whatsoever. Medical significance, yes, but only because knowledge of your family's genes can be used to deduce your genes. Social significance, of course not. So I will treat the death of family as the death of friends who were just as close. Now, to be honest, this is a topic that I'm reluctant to exercise my usual methods of beating to death, because there may be readers who have such a powerful subjective experience of the matter that I will waste my time if I try to dismiss the bits that require dismissal, focus in on what is significant, and use it to build up a new model that more accurately fits reality and rationality. We have arrived at the idea that being with people is something you do for yourself, but it seems like lunacy to say that the death of a loved one shouldn't hurt because you aren't able to enjoy their presence any more. That's just not strong enough, right? But isn't that exactly what mourning is? You won't speak to that person again, or see them, or talk to them, or whatever else. If you could do those things then you wouldn't care if they were technically dead- that's just a cessation of some bodily functions. If they could die and leave the person intact, now wouldn't that be a wonderful thing- you wouldn't have to worry about death. This is actually a fairly direct deduction for most people, but the idea that the physical death itself isn't the source of their trouble is not so readily grasped. It is the result of the event of death that they're mourning. Many religions exploit this weakness in thinking to interject "But life does continue after death!" followed by the explanations, the fairy tales, and the bullshit. They are careful, however, to always exclude the very functionality that death precludes, because they are unable to provide it. They can't help you talk to your dead loved ones, so they hide them away somewhere as ghosts or in heaven where you will go, too, once you die.
The intuitive universality of the death process makes this nearly logical, except that a slight elaboration can add a significant degree of control over the behavior of the people who want to believe. And some of the crueler religions take advantage of exactly these people, and make this continuation after death conditional upon your life, upon exactly prescribed behaviors. The most common trick is to exploit vague semantic identities such as "good" and "bad," which enable retroactively changing what exactly those conditions are, allowing live updating of the behavior of believers based on what is expedient at the time. I'm always amazed and fascinated at the complexity of religion as an organism, and the huge potential that religion proves memes have as a life form.

I am not suggesting that you shouldn't feel pain- what a ridiculous assertion for a stoic. The idea is that pain, like other sensations or emotions, is there to help you, not govern you. If you felt fear and were unable to do anything else but freeze up, curl up into the fetal position, and pray, then what use is that? For animals like the possum, it is an irresistible instinctive reaction programmed into them because in 99% of cases (at least in the genes' experience) this is an effective defense mechanism, and giving the possum control over the matter would just screw up the system. This isn't strictly accurate because possums evolved their primary featureset in the time before memetic delegation had been "invented" by evolutionary processes. The application of reason is itself a major feature of humanity, and quite novel in genetic terms. If you wanted to be truly biological about it, you can look at memetic evolution as the ultimate genetic trick, but the problem is that it is so effective it makes genes obsolete. Also, intelligence is so effective that genetic evolution can't keep up with the rate of change. For the pertinent example, we have invented cars and now they're everywhere. And now possums, with their very effective defense mechanism of freezing up when afraid, get run over by speeding cars, and the genes can't un-wire that feature given the new environment because they aren't able to perceive and judge. I would like to say, though, that genes are definitely alive. Not just in the sense that a person is alive, but the gene of HUMANS is alive in a strange information amalgamation of the genes in every person, in a way that we really can't quite comprehend because there are too many people, too much noise, and too much uncertainty about genes themselves. The day that we truly understand genes completely, we won't need them anymore because we'll be able to construct our own biological machines to any specification or design we like. They're just like any other machine, but far more complicated and sophisticated. Especially the organic ability to reproduce. Interestingly, though, the body is itself one of the few things that we are currently unable to separate our selves from. Some can conceive of what that might be like, and most of them have it wrong (I guarantee that I do, but mine is more complete than most, at least). Note that the objective is to separate your self from as much as possible of what you don't want, of that which subtracts from your good or your happiness. I would argue that, for as long as it works, your body adds immensely to that happiness. And as far as it doesn't, it subtracts immensely. So an ability to perfectly fix the human body, a hypothetical perfect medicine, would obsolete the need for mechanical bodies unless their features were so far beyond those of a human body (which is the case) that you could get even more out of one. Probably the main advantage is the ability to add processing power and memory, and the ability to have direct inputs. Anyway, permit nothing to cleave to you that is not your own. I am not my body, but insofar as I use it, rely upon it, and wish to keep it, it is mine.

So if I don’t even value my own body enough to want to keep it, what does that mean?  Well, I never said that I didn’t value my body, just that the value it provides is of the material sort, similar to eating a burrito, except that instead of the satisfaction of the burrito, my body contains the hardware necessary to eat the burrito, and without it any sort of gustatory satisfaction would be impossible (not strictly true- a perfect simulation of the experience is an identity).  This is similar to having a computer.  The computer in and of itself doesn’t actually provide a whole lot of satisfaction, but the things you can do with it will.  Perhaps the computer hardware hobbyists who make it a point of pride to have the best possible machine wired up in the best possible configuration get significant enjoyment out of simply possessing the hardware itself.  However, even with that example, we see parallels with the human body, such as with fitness junkies who make it a point of pride to have bodies sculpted out of steel, and enjoy simply having it.  Important note: most of these “fitness junkies” are doing it because of other people, not because they genuinely enjoy it, or because they even want the results.  And they get further conflicted by the fact that they are causing a change, which might conflict with their perception of themselves, or with others’ perceptions, and for some reason they’re anxious to step outside of that box.

Anyway, my entire point is quite simple, as usual, but it’s dressed up with many trimmings like mirrors in every corner of the room to show off the gleam on the little gem in the middle.  The idea that you should be dependent on others, the idea that that constitutes good social practices, the concept of a social personality, all of these things are foisted upon us because others had them foisted upon them.  We are the monkeys conditioned not to reach for the bananas within our reach because someone, at some point in the past, was punished for trying.  So now we have to live with everyone else.  But the most vital point is this: they don’t matter.  If you want to reach for that banana, they could physically stop you, but if they do then you have a clear and objective obstacle in your way, which can be overcome, instead of the hazy, confusing aimlessness of contradiction.

The St. Petersburg Paradox

I'm in more of a mathematical mood right now, so I'm going to cover a piece of abstract mathematics: the St. Petersburg Paradox. It's a famous problem (you can look it up on Wikipedia for more detail), but here's a short summary. Imagine we have a game of flipping a coin. Starting at $1, every time the coin lands heads, you double that amount. When it eventually lands tails you win however much you have earned so far. How much should it cost to play?
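As a minimal sketch (my own illustration in Python, assuming the $1-starting, double-on-heads version just described), you can play the game over and over and watch the running average refuse to settle down:

```python
import random

def play_once():
    """One round: the pot starts at $1 and doubles on every head;
    you collect the pot the moment tails finally lands."""
    pot = 1
    while random.random() < 0.5:  # heads with probability 1/2
        pot *= 2
    return pot

# The sample average keeps drifting upward as rare, astronomically large pots
# land, which is the empirical face of the infinite expected value.
for n in (10**3, 10**5, 10**7):
    average = sum(play_once() for _ in range(n)) / n
    print(f"average payout over {n:>10,} games: ${average:,.2f}")
```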

Now I very much enjoy this problem in a pure mathematical sense, but Daniel Bernoulli, the man who gave it its classic treatment (his cousin Nicolas actually posed it), apparently took the mathematics of this problem rather too far. Bernoulli noticed, as the more astute among you probably either deduced or already knew, that the game's expected value is in fact infinite. This means that no matter what the cost to play, you should always accept. However most common people wouldn't pay even $50 to play this game. Bernoulli deduced from mathematical bases a utility function which would explain this behavior using a logarithmic idea of value. He supposed that people's valuation of money decreases as the amount of money they possess increases, or to use another term, he proposed a diminishing marginal utility function for money. While this approach, I guess, works, the even more astute among you will have noticed that it doesn't actually solve the paradox. You can just construct a game whose payoff function is the inverse of whatever utility function is proposed and still end up with an infinite expected utility that nobody will pay for. Other mathematicians have wrestled with this problem, and so far the conclusion, as far as I am aware, is that utility must be bounded in order to resolve this type of paradox.
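For what it's worth, the two calculations can be written out directly. This is just a sketch of the standard sums for the $1-starting version above, not anything Bernoulli wrote:

```python
import math

# Tails first appears on flip k+1 with probability 2**-(k+1), paying 2**k dollars.
def expected_value(terms=200):
    return sum(2**-(k + 1) * 2**k for k in range(terms))           # every term is 1/2

def expected_log_utility(terms=200):
    return sum(2**-(k + 1) * math.log(2**k) for k in range(terms)) # converges to ln 2

print(expected_value(200))                  # 100.0 -- grows without bound as you add terms
print(expected_log_utility(200))            # ~0.693
print(math.exp(expected_log_utility(200)))  # ~$2: the log-utility "fair price" for the game

# The catch noted above: pay exp(2**k) instead of 2**k and the log-utility sum is
# right back to the divergent series, so a merely concave utility doesn't close
# the loophole -- hence the conclusion that utility has to be bounded.
```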

Now, I am not a professional mathematician, but I believe that I have solved this paradox. Simply put, all these mathematicians have been assuming that people have the same conception of reality that they are working with: a mathematical one. These mathematicians have assumed that people think of money as a number. That seems obvious, right? Money is measured numerically. Well, yes, but the fact that different people have different ideas of what money or other commodities are valued at means that it isn't a number. Numbers are objective, inherently. Two people must categorically agree that a 7 is a 7, that it always was, is, and will be 7, and that 7 = 7, which also equals 6 + 1 and an infinitude of other identities. However we all know that two people might have differing opinions of various exchanges, such as $3 for a mango, for example. Someone who loves mangoes might buy at that price; someone who doesn't, won't. So we can't say that $3 = 1 mango in the same way that we can say that 7 = 7, even if all mangoes in the world were always bought and sold for that price.

The issue here is that these mathematicians, while brilliant direct deductive thinkers, think of the universe in a flatly rational way. While this is probably the best single perspective through which to view the universe, it fails when dealing with people who lack a similar rational strictness. Have you ever been beaten by someone at a game you were clearly better at, simply because the other player just refused to play "properly"? This happens all the time in poker and numerous gambling or card games. In games like chess this rarely happens because in a game of perfect information, "proper" play can be categorically proven to be superior during the game itself. If it would result in a bad situation, then it isn't proper play. Where information is limited, "proper" play might land you in situations you couldn't predict or prevent. Anyway, a more textured view of the perception of the universe would allow for nonlinear and unconventional conceptual modes for perceiving the universe. For example, perhaps a certain subsection of people conceive of money like power. The actual number isn't as relevant as the power it holds to create exchanges. The numbers are negotiable based on the situation and on the value sets of the parties involved. So the St. Petersburg Paradox could be equally resolved by saying that power doesn't scale in the same way that money does. If you offered someone a utility function of power, it would mean nothing. Power is not infinitely reducible: the ability to do something doesn't blend seamlessly into the ability to do something else. The atomic unit of power is much larger than the infinitely fine divisions between any given numbers. Having ten very small amounts of additional power is also not the same thing as one very large new executive power.

People can link together abstractions and concepts in many, many different ways.  For example, some successful investors say that instead of looking at your money like it’s your fruit, look at it like your bag of seed with which to grow more seeds.  True, you’re going to have to sell some of those seeds to get what you need, but its purpose is to grow.  As you accumulate more and more, the amount you can draw off increases while still maintaining useful volume.  This gives a completely different outlook on money, and will generate different decision behavior than looking at money as something to be spent as it is earned.  This same principle can apply anywhere at all, because in order for something to exist in your perceptual map, you have to think about it.  You might think of movies like books that have been converted, like picture books, like snatches of real-life experience, like a sequence of scenes strung together like string being tied together, or like a strip that runs through its full length in only one direction the same way every time.  There are other possibilities of course, but that’s as many as I could think of while I was in the process of typing this post.  This is only looking at a small slice of the possibilities of conceptual remapping (analogues and analogies, specifically) but other forms would require a great deal more explanation.  I think you get the point though.

Back to mathematicians and the St. Petersburg Paradox. The paradox only exists if you look at utility in the mathematical sense. There exist models, such as the one that "common sense" seems to indicate, that don't see a paradox. These models instead see a game with a sliding scale of value, where beyond a certain point the value is zero (or negligible). This gradual fading of value is why different people would decide to play the game at different prices. I don't think even the most hardcore mathematician would play the game for $1 million a round, even though it will eventually pay for itself. The utility solution fails to take into account the common sense evaluation of time and effort as factors in any given activity. You could factor in such an evaluation, but you would probably then be missing something else, and so on until you have built up a complete map of the common sense and shared perceptual map of the most common conceptual space. But then you have duplicated the entire structure you're attempting to model and created a simulation instead of a simplification.

On simulations and conventional models, we currently use both. Our simulations, however, tend to be based in the real world, and we refer to them as experiments. This is how we collect evidence. The problem with the natural universe is that there is such an unimaginable profusion of activity and information that we can't pick out any particular aspect to study. An experiment controls all those other extraneous factors, removing or minimizing them from a confusing universe so we can focus on a single test. Once we have our results from that test we can move on to test another part of reality. Eventually we will have built up a complete picture of what's going on. Simulations are data overkill from which we can draw inductive conclusions because we don't understand all the underlying mechanics. Models are streamlined flows, as simple and spare as possible, which we can use to draw deductive conclusions. For example, the equation for displacement of a falling object [d = v0*t + (1/2)*a*t^2] is a simplified model, subtracting all factors other than the ones being considered, allowing us to deductively conclude the displacement for any values of v0, t, and a. Mathematical conclusions are a sequence of deductive operations, both to make mathematical proofs and to solve/apply any given instance of an equation/expression/situation/etc.
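As a toy illustration of the distinction (my own example, with arbitrary numbers, not anything from the text), here is the same falling-object question answered once by the model and once by brute-force simulation:

```python
def displacement_model(v0, a, t):
    """The streamlined model: deduce the answer in one step from the law."""
    return v0 * t + 0.5 * a * t**2

def displacement_simulation(v0, a, t, dt=1e-5):
    """The simulation: grind through every little slice of time."""
    x, v, elapsed = 0.0, v0, 0.0
    while elapsed < t:
        x += v * dt
        v += a * dt
        elapsed += dt
    return x

print(displacement_model(3.0, 9.8, 2.0))       # 25.6, exactly, in one step
print(displacement_simulation(3.0, 9.8, 2.0))  # ~25.6, after 200,000 tiny steps
```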

Our minds operate on the most basic level using models primarily, and simulations second.  This is because most of the time, a model is close enough.  You don’t need to include every factor in order to get an answer at sufficient precision.  You don’t have to factor in the time, the temperature, or the quantum wobble of each atom in a baseball to figure out where it’s going to land.  If you wanted a perfect answer you could simulate it, but you can get it to an extremely high level of precision by simply ignoring all those marginal factors.  They are not worth computing.  Now we are beginning to factor in the distinction I’ve brought up before between algorithms and heuristics.  Models are often heuristics, and simulations are often algorithms.  Models can include algorithms and simulations can include heuristics, but on the whole a simulation (given correct laws and good starting conditions) will algorithmically compute exactly what is going to happen.  A model, on the other hand, is a much more efficient process that throws away data in order to make calculation simpler.  Usually a lot simpler.

Now I am willing to bet that some readers will be confused. I just said that simulations need the right laws and starting conditions- isn't that the same thing as a deductive process needing the right logical framework and initial premises? Well, yes. That's because a logical construct is a simulation. However, it is a simulation constructed using information already stripped of extraneous detail by creating a model of it. The line between model and simulation is not black and white- they are simply approximate labels for the extremes of a spectrum, with conflicting ideals. The perfect model is one law that determines everything. The perfect simulation is a colossal, gigantically massive data stream that represents everything, down to the last spin on the last electron. This is also where we get the fundamental distinction between philosophers: the conflict of rationalism versus empiricism. The rationalists believe the model to be the "one true philosophical medium" and the empiricists believe it's better to use simulations. The tricky part is that in order to construct a simulation, you have to have models to run each of its laws and each of its elements. In order to have a model, you have to have a simulation to draw patterns from. So we have an infinite recursion where rationalists and empiricists are chasing one another's coattails for all eternity. Fortunately, most people who have thought about this much have come to more or less the same conclusion, and figured out that rationalism and empiricism go hand in hand quite nicely. However there is still a preference for choosing to understand the world through one mode or the other.

How does all this apply to the original issue of the St. Petersburg Paradox? So we have mathematicians who are definitely rationalists- I imagine there aren't many professional mathematicians who are empiricists. And these mathematicians construct a model that represents a certain behavioral set. Their problem, however, is that reality doesn't actually support the conclusion they are saying is the most rational. So they change the model, as they should, to better reflect reality. All well and good. Their problem, though, is that they are actually doing their job backwards in one concealed respect. Implicit in their model is the assumption that the population they are describing has the same conceptual map as the people who created the model. I am aware that I could have simply said we have some ivory tower mathematicians who are out of touch with reality, but I wanted to cover in depth what the disconnect with reality is. They are correcting their model by making it better reflect empirical reality in one respect, but in so doing they are simultaneously doing the same in reverse by assuming things from their meta-models onto reality. We have rationalism and empiricism, simulations and models, inductive and deductive thinking, all chasing their dance partners around. But the most vital thought is that the process must only go one way. You must always push forward by correcting both to better fit the other in reality, rather than working backwards and assuming things onto reality which are not the case. If you do work backwards, and then entrench your position with a rationale, you are screwing up your meta-model of reality. And, like a monkey with its hand caught in a banana trap, the tighter you squeeze your fist the more surely you get stuck. For every ratchet backwards on the progress ladder, you get more and more firmly stuck in place, and it even gets harder to continue to go backwards. The wheel spins one way; it grinds to a halt in the other.

The Good Stuff

After watching a bad movie, which shall remain nameless, that several people I know recommended highly, I have been prompted to write a post about semantic ambiguity and why we like the things we do.

Firstly, it's actually a difficult proposition, semantically, to say that anything is "good." What are you actually saying? The definition of "good" changes according to the properties being measured for each object, the intention of the speaker, the understanding of the listener, and the appropriateness of the environment. It might be "good" for something negative to happen, given the right circumstances. Or, it might be the understanding of the speaker that a certain property is good.

To circumlocute my semantically ridiculous point, I'll approach it from Rene Descartes' proof of the existence of God. Yes, he did that. Descartes said that within the world there must exist some maximally perfect thing in all respects, and that whatever that thing was must necessarily possess all the properties commonly ascribed to God. Therefore, whatever that thing is, he would call God. There, God proved, now let's get on with Cartesian mathematics. I'm not sure if he actually believed this; he may have been agnostic and just included it in order to avoid being suppressed and possibly persecuted. Anyway, this idea of "maximally perfect" is the crux of my point. How is it possible for something to be absolutely perfect in all respects? Included in the definition is that perfection's characteristics are relative to the object being described. A perfect pen has absolutely different properties from a perfect refrigerator, or solution to a problem, or book. You can even ascribe perfection to a specific property of an entity, like "this book is the perfect length." To say "this steel member is the perfect length for this bridge" is essentially the same statement, but the circumstances change the meaning entirely. So are we to think, Descartes, that God must be a perfect-length book and steel beam, a perfect-weight human, elephant, and field mouse, etc. etc.? Or are we talking about a perfect being, which would have consciousness and perfection in such fields as omniscience, omnipotence, etc., etc., in which case we are presuming the existence of a perfect being and haven't actually concluded anything.

On a more practical matter, what do we mean when we say that a movie or book is good? We actually mean that we enjoyed it or were pleased by it in some manner. If we said some thriller was good, it presumably was exciting and had a good plot, possibly with enough other elements to keep it interesting. A good stand-up comedy show was funny, a good academic paper introduced some new idea into the world, and a good philosopher makes other people think. So why have the word "good"? Spanish, while having a word for "good" (buen@), actually uses "me gusta(n)" for something you like- literally, "it pleases me." But we English speakers love the word "good" to the point of distraction. I believe that abstracting together the concepts of caliber ("a good/well-made X"), scalar judgment ("that X was/is good"), and virtue, along with a few other meanings, serves a useful purpose. It's like we've taken all things that exist or could possibly exist and created a scale of meaning with two axes, "good for its purpose" and "I like it," and thrown them together. Virtue can only be applied to people, and it's a little improper to say that people are good for their purpose. For entertainment content like movies, there is no function, so we are left to judge how much we like them and what it is we like about them.

The issue with this is that both of those scales are quite subjective. One person might have a different perspective on what needs to be done, and judge the way certain things need to be done differently from someone else. However there are some widely agreed-upon factors, namely that regarding primary functions, more is better. A pen that never runs out of ink is much closer to a perfect pen than a pen that is really comfortable. Keep in mind that this applies to specifically desired properties as well. As to fridges, colder is not better, because there is some ideal temperature desired within that fridge. The faster and more robustly the fridge can maintain that desired temperature, the more perfect the fridge is.

However as to liking, anything goes. The frameworks by which we make such subjective judgements are so complex, and sometimes tenuous, that someone else has only limited power to predict others' opinions. Now we can finally get to the interesting stuff. OK, let's presuppose that different people with different frameworks derive different levels of enjoyment from different media. Let's say Bob really enjoys horror movies, and Jill loves reading classic literature. Now let's give them a machine that allows them to temporarily superimpose someone else's subjective framework over their own perception, not damaging their own in any way except that they remember the experience. Would Bob enjoy using this machine to adopt Jill's framework and then read classic literature, and vice versa?

If it is true that there is a certain set of information that makes you enjoy certain things and be disgusted by others, then it should be possible to figure out why people like things and appreciate them for the same reasons. We could theoretically appreciate and enjoy pretty much every flavor of media anyone ever put energy into producing. Does that make sense, or is there some objective standard of subjective judgment by which we should dismiss certain works as just outright bad?

The Intelligence Process

I have generalized the scientific method, at least for my own use, because while the scientific method works perfectly for science there is as yet no model which ideally describes the application of intelligence against objective reality. Now, this basically is the scientific method, but factors in a number of elements which are useful to exclude in scientific discourse.

1.) Assumptions: Intelligent agents always begin from assumptions, and although there is nothing we can do about it, it’s not a bad place to start unless you use poor assumptions and do not recognize them as assumptions. Also includes circumstantial evidence about surroundings, self, etc. The initial information set at any reference point you choose.

2.) Deficit: Any set of assumptions will find a case or situation where information is lacking, possibly a method to do a certain thing, maybe a rule about the world, or perhaps unknown circumstantial information. Formally phrased this would end up something like a question, spurring the creation of a solution. This step is also significant in providing us with the drive to seek stimuli.

3.) Hypothesis: A solution/guess is derived based on the assumptions, utilizing rational, predictive, and imaginative abilities. Given accurate starting information and sound methods, this result will be useful. Otherwise, it is suspect (although it may still be useful or accurate- by the "the moon is made of cheese, therefore the sky is blue" effect- it's just not reliable).

4.) Ecology Check: The hypothesis is actually cross-checked with the assumptions before being tested against reality. This is, for example, why people who don’t like broccoli may decide not to eat broccoli. Without this step, there would be no reason to assume that you wouldn’t like broccoli now, regardless of how you thought it tasted yesterday. While not a strictly logical approach, this is usually an immensely useful heuristic process.

5.) Test: I have actually combined a number of the scientific method's steps here- steps like "prepare" and "procedure" are somewhat pointlessly specific, so I just rolled them into this step. The objective of the test is to analyze the effectiveness of a piece of information you have put into "sandbox mode" as a hypothesis. The reason for this is that you cannot test a deficit; you can only test positive information, which can be disproved. Statements like "there is no such thing as a goose" are disprovable- they are simply about the nonexistence of something rather than its existence, and all you need to do is find a goose. A statement that can't be disproven might be "a goose can transform into an elephant under some conditions." [sarcasm] Wow, that's helpful. [/sarcasm] Now, here's the rub. Testing is the most important part of intelligence, but at the same time it is the most liable to fail. It is inherently an inductive process, as I have said before. So statements like "all swans are white" cannot be proven authoritatively. They can, however, be disproven, by finding a swan that is not white (as there indeed are). Yet if you have seen a million swans and they were all white, and you have no reason to believe your field of swan observation to have been constricted by some other factor, then you may conclude that all swans are white, and you would be quite rational in doing so. Provided that you recognize that you are making an assumption for practical purposes.

6.) Inference: The test only provides you with the data to make an analysis. Deciding what the test means is a whole ‘nother can of worms. In the case of our swan example, perhaps I’m a swan breeder who wants to breed a better swan. This is a subjective and situational step, so I’m just going to make something up here, but let’s say this swan breeder is of the entrepreneurial variety and decides that because there are only white swans, if he could produce multicolor swans he would make a killing. Swan show spectators around the world would be shocked into buying spectrum swans at exorbitant prices. Now, even in this extremely short example, look at all the other factors and assumptions I brought to bear to determine what the meaning behind “there are only white swans” was. I needed some ideas about the nature of the world, the economy, my own experiences and tendencies, all these things which are a complete construction on top of the conclusion “all swans are white.”

7.) Compression: Another step which, while being illogical most of the time, is highly useful. Concept compression takes a number of forms, usually dependent on someone’s learning style. There are auditory learners, visual learners, kinesthetic learners, etc. etc. My experience is that each of these labels is an oversimplification. When I’m learning a method or a set of information I mainly gauge how familiar any given piece feels. This is extremely effective for nonlinear processes like abstraction, but extremely poor for rigorous linear processes, or arbitrary elements like rote memorization. If I have to give a presentation, I cannot memorize a script, and memorizing bullet points is even tougher. I can, however, just learn holistically about the entire topic to be covered and then just stream of consciousness about it and do quite well. Now, I have other methods for lots of different things, as do we all, but I’m reasonably sure that’s my main label. I have my own theories about how we label thoughts and sensory data, but that’s probably for another time. For now, I think we can agree that we don’t encode in memory the actual sensory data or concepts or ideas received/conceived/whatever, but actually a compressed interpretation of that information.

8.) Association: The issue with putting this step at number 8 is that association is the sole purpose of the neuron in the brain, so this is actually going on all the time, at every step along the way. Whenever you string two bits together in your brain you’re making an association, so the entire process itself is associating one step with the next. Also, anything that happens to be going on might be associated with the thoughts you had at the time, or maybe you’re connecting together two similar things, maybe tests you’ve made or hypotheses from different times, whatever. However I think this is the best place to put it because in the strictest sense, you can’t associate anything that you don’t remember, and you can’t remember something until it has been compressed. If you’ve ever done that experiment where you have to count the number of R’s in a sentence, but the question afterward asks about the number of H’s, or similar, you know what I’m talking about. You didn’t encode how many H’s there were. (Actually, to be proper, you didn’t encode the number of R’s either, you created a program on-the-fly that would increment a number whenever you saw R as you scanned the line, encoding a single number which is much more efficient. Encoding the number of R’s would be memorizing “there are 7 R’s in the sentence [blah]” which you probably didn’t do because it’s stupid and wasteful.)

9.) State Hook: This step has the same issue as association in that you are experiencing some sort of state all the time; however, it goes after association because it is used as a sort of meta-tag on top of any inter-idea associations you may have made. Once you have the association press button -> get candy conceptualized and ready to go, and you realize that you can now have candy if you want it, your current state- perhaps happiness, sadness, hunger, or other conditions not necessarily related to your body- gets applied on top of that association. If you want candy, for example, you’ll get a state change, some different associations, and a different resulting behavior than if you have just eaten. You might, say, be more inclined to find that candy tasty.

10.) Framing: I’m wrapping up all the higher-level thinking into one big category, because you’re basically just repeating this step over and over again to go from beliefs to values to paradigms or whatever else. Ascribing synthetic meaning to things is framing. Rearranging models, or performing manipulations on your conceptions, is performing operations by using synthetic meaning to delete, replace, or augment pieces. Naming something is a framing operation. Grouping things is a framing operation. Note the distinction between associating two things and grouping two things. When two things are associated, one might lead you to the other. However, a fir and a poplar can both be trees without the mention of firs causing you to think of poplars. There are also a number of interesting oddities in people’s histories of associating groups with individual members, or maybe something else entirely. In free association, if you say “tree” and they say “larch,” then that’s one model they have of the standard tree, perhaps representative of trees as a class to them.

11.) Confirmation: Any given piece of information has several stages to go through before it is really accepted, and some will always be more respected than others. This level of trust or integration is a full spectrum extending from violent opposition to devil’s advocate thought experiment to skepticism to acceptance to total faith. Your belief may increase due to emotional reaction, resonance, application, utility, or any of a number of other reasons. Healthy systems of thought will tend to eradicate false beliefs in one shot once they are disproved- systems that are unhealthy may have a tug-of-war with emotional reactions, etc. pulling in both or (god forbid) more than two directions. Persisting beliefs will tend to gradually increase in acceptance due to increased association and exposure, and extinct beliefs are just not even in your head anymore. I’m going to use this step as a placeholder for several significant levels of acceptance, to the point that a given piece of information is trusted/believed to 1. the same degree as your original thoughts, 2. the same as your perceptions, and 3. on the level with your beliefs.

12.) Utility: The function of intelligence to maximize its utility given a specific information set, defined by the previous steps.

13.) Morality: The function of intelligence to deduce and follow morality. The reason why this is a product of intelligence is that morality is simply the application of reciprocity in society to utility. Morality is doing what is best for everyone, an abstraction out from doing what is best for you, with the significant difference that morality is a higher level, and therefore guides and supersedes personal utility. On a slightly related note, arbitrary social laws are a hijacking of this function to no real benefit- or more commonly, to an impossibly small benefit at the expense of a potentially massive gain. If they did blatant harm they would be abandoned as corrupt and pointless by the lower-level and more powerful utility principles.

14.) Creation: Intelligence seeks to produce. Artistically, socially, culturally, whatever. We’re seeking to stimulate others’ perceptions and minds, satisfying the sensory deficit with the richest material we can produce because we want to experience also. This works even if nobody else existed in the world because the act of creation is a bottomless supply of auto-stimulation.

15.) Self-Actualization: Realizing your potential, from Maslow. Drives artists to be artists and accountants to suicide. Just kidding. Not everyone’s greatest potential is in direct creation of memetically or mentally stimulating material.

16.) Philosophy: Understanding and wisdom. The drive to understand ourselves, our world, our thoughts, everything. The problem is that we are like computers seeking to describe their own code. We can’t do it because every line of code used to help the computer understand its other code…. is one other line of code requiring explanation. What makes me happy? What do I want, really? What should I do? If you had everything you could conceivably want- infinite utility, morality, etc. then is life pointless? Why or why not? What would you do?
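
As promised above, here is a minimal sketch of how the first several steps might be strung together as a loop. It is not an implementation of anything real- every name in it (Agent, find_deficit, ecology_check, the swan data) is an invented placeholder, and the whole thing is only meant to make the ordering of the steps concrete.

class Agent:
    def __init__(self, assumptions):
        # 1) Assumptions: the initial information set at our chosen reference point.
        self.assumptions = dict(assumptions)
        # 7) Compression: we only keep compact summaries, never the raw observations.
        self.memory = []

    def find_deficit(self):
        # 2) Deficit: a question the current assumptions cannot answer.
        return "are all swans white?"

    def hypothesize(self, question):
        # 3) Hypothesis: a guess derived from the assumptions.
        return {"question": question, "claim": "all swans are white"}

    def ecology_check(self, hypothesis):
        # 4) Ecology check: does the guess contradict anything we already hold?
        return hypothesis["claim"] not in self.assumptions.get("rejected", [])

    def test(self, hypothesis, observations):
        # 5) Test: induction can only look for a counterexample, never prove the claim.
        return all(color == "white" for color in observations)

    def infer_and_store(self, hypothesis, survived):
        # 6) Inference + 7) Compression: store a compact, explicitly provisional summary.
        status = "provisional" if survived else "rejected"
        self.memory.append((hypothesis["claim"], status))


agent = Agent({"rejected": []})
question = agent.find_deficit()
guess = agent.hypothesize(question)
if agent.ecology_check(guess):
    survived = agent.test(guess, observations=["white"] * 10)  # a small, biased sample
    agent.infer_and_store(guess, survived)
print(agent.memory)  # [('all swans are white', 'provisional')]

The later steps- association, state hooks, framing, confirmation, and so on- would sit on top of that memory list, which is exactly why they come after compression in the ordering above.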

More Simulated Realities

Before I get into this post I have an announcement. Although this site lacks any donation mechanism, if you like what I write and want to reward me as its author I have created an account with istuff- if that’s what they’re called. Basically it’s a pyramid scheme, I’m not going to lie. However, because there’s no money involved I don’t see any problem with it. I’m supposed to obtain referrals in order to get my free iPod. Signing up takes about 5 minutes, and then doing one of their offers takes 5 to 10 more. Please use my promo link here so I’ll get credit towards my iPod. Thanks!

Now, more in-depth on your brain, simulations, and the computability of the universe. Asking if the universe is computable is basically asking if all aspects of the universe’s functioning are a) universal, b) consistent, c) predictable, and d) functionally limited in scope to our own universe. If the laws of physics are not universal then one part of the universe might follow different laws of physics than another. If they are not consistent then they may be subject to change over time. If they are not predictable then mathematics cannot duplicate them- although randomness and like phenomena are duplicable in a probabilistic fashion. Lastly, if the universe is not limited in scope, then we’re just sunk. Basically what I’m saying by the scope of the universe is that there cannot be some other non-observable otherworld that affects our own universe. Although that outside influence may itself be subject to universal, consistent, and predictable laws, if we can’t discern its workings from within our own universe then we cannot simulate our own universe because we can’t simulate its effect. Although the most complicated of our 4 contingencies, it’s probably the one we have least to worry about. Most physicists or scientists would agree that all four of these are well established to be true of our universe.

If the universe is computable- and there are those who say it isn’t, although they are completely wrong- then it is physically possible to create a simulation matching our own universe in complexity, size, or resolution, but never all three at the same time, or our entire universe would necessarily have been subsumed into creating such a simulation. We can shave off a massive amount of unnecessary computing power by limiting our simulation to salient details only. For example, we can use macroscopic heuristics to make objects behave like objects without needing to simulate the position, energy disposition, etc. of every atom within said object. Unless someone within the simulation is actively perusing each atom of that object, nobody will notice the difference. And if anyone should examine those atoms, our simulation can simply render those atoms for them, like the light turning on in the refrigerator. So in a conceptual sense, it wouldn’t be very hard to make a simulation that was extremely believable to someone within it.

There are several different models of simulation we might have, and each has its advantages.  A brain interface simulation like the Matrix means that you get to keep your body, and you don’t face any of the weird issues associated with copying your mind from one place to another.  However, you also don’t get to play the simulation at whatever speed you would like, because it can only operate at the same speed that your extra-simulation brain can handle.  If you still want to keep your body, maybe you can go for a half-and-halfer arrangement, where you plug in your brain and a temporary copy of it is loaded up into the simulation as a virtual self, strongly typed back to your original brain, which must be temporarily disabled so the “real” you doesn’t walk off.  This is weird because there must necessarily be two copies of you existing at the same time, one in reality (unconscious?) and one in the simulation.  But this method gets you the in-simulation advantages of scaling with the simulation’s speed, etc.

Of course, the best way in my opinion is to just be a virtual self completely.  This means you are governed by the simulation’s physics, and so on and so forth.  Probably the most effective way to manage this situation is for your virtual self to exist in a meta-simulation-connected computer that you own.  So you still have a body- it’s just a computer connected to the Internet, basically.  If you want to create a simulation for yourself, you can do it within your computer, like imagination with a sensory supercomputer.  You might even opt to purchase or rent additional processing power for your property if you so desired.  Or you can place your processor into another simulation governed by someone else, somewhat like interfacing with a game over the internet.  Your mind would of course be kept discrete and secure from all the other workings, but functioning within the simulation.
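
Going back to the refrigerator-light trick for a moment: the optimization is essentially lazy evaluation- keep cheap bulk properties for every object and only compute atom-level detail at the instant something inside the simulation looks. Here is a minimal sketch of that idea; the class and its fields are invented for illustration, not a claim about how any real physics engine works.

class SimObject:
    """A simulated object that only materializes atom-level detail on inspection."""

    def __init__(self, name, mass_kg):
        self.name = name
        self.mass_kg = mass_kg   # macroscopic state: enough for everyday physics
        self._atoms = None       # fine-grained state is deliberately not computed yet

    def macroscopic_update(self, dt):
        # Cheap bulk-physics step; no per-atom work is ever done here.
        pass

    def atoms(self):
        # Only when an observer "opens the refrigerator" is the detail rendered,
        # and from then on it has to stay consistent with what was shown.
        if self._atoms is None:
            self._atoms = self._render_atoms()
        return self._atoms

    def _render_atoms(self):
        # Toy stand-in: generate per-atom records consistent with the bulk properties.
        return [{"id": i, "state": "consistent-with-bulk"} for i in range(1000)]


rock = SimObject("rock", mass_kg=2.0)
rock.macroscopic_update(dt=0.01)   # the cheap path, used almost all of the time
print(len(rock.atoms()))           # the expensive path, only triggered by observation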

Now things get interesting. Once we have a simulation that is indistinguishable from reality, why would you want to live in actual reality? There’s no reason whatsoever why there should arbitrarily be only one reality- that’s absurd. But your body as you know it could be exactly recreated as an arrangement of virtual atoms within the simulation. If that was all you did then there would be no effective difference between being in a simulation and being in “real life.” But why stop there? Carefully crafted algorithms to alter the content of the simulation would effectively give you magic powers. Absolute control over material reality, mind reading perhaps, whatever floats your boat. If you owned your own private simulation you would be as a god among NPCs. While you could play one hell of a game of Sims or Civilization or whatever you wanted, I imagine playing with only bots would get tiresome very quickly. What you need are some real intelligences to sink your teeth into. Of course in such an advanced simulation, you have lots of options. Option A is to simply arrange some virtual atoms into intelligent agents. There’s nothing stopping you from having a legitimate human opponent whenever you wanted. Or even a super genius opponent. Hell, you could hand-craft a genius expert at anything you wanted by setting up a smart person in a situation where they just practice practice practice at whatever you want to challenge them at. You then set the simulation on maximum speed and step back for a millisecond. When you return your opponent will have perhaps thousands of years of experience, and will destroy you. You could simulate the Matrix universe which contains within it another simulation, or perhaps the Firefly ‘verse, or whatever other fiction world you pleased. Full Metal Alchemist anyone?

Option B is probably even more fun: other simulation gods. PVP takes on a whole new meaning. Highlander is just the beginning. World of Warcraft is the tip of the iceberg. Try KAOS in a simulated real-world environment, with each player being assigned some other player to kill, somewhere in the world.  I for one would particularly look forward to some genius coding up a Halo-like universe where a player commands armies in RPG/RTS format, and each of your characters is essentially a real person. You start off solo and may eventually build up armies of millions if you so desire, and if you can. Each side would be headed up by one Player. Maybe they can respawn, but that’s kinda pathetic. If you die, you should be dead. In a game like that any hardware you may have obtained could be easily gotten back in a new character if you were so inclined. Randomly generated authentic characters, on the other hand, would be priceless.

Which raises an interesting and vital question- if you’ve created a real person in a virtual world, do they have rights? Are they entitled to better than a gameworld of eternal war? We have no problem blowing away humanoid models in modern shooters, but when those models are atom-for-atom replicas of real people with fully functioning brains and the works, then what? I’m not really sure of this point, to be honest. I do believe that they would be people in every sense, and that in truth their reality is just as “real” as ours, despite the fact that they live in a simulation stemming from ours. Nevertheless, I am disinclined to believe that it is unethical to create such a world. It is unethical to kill people within that world, but the creation of a world with the intent of waging bloody mayhem within it is not unethical. The distinction here is that by the act of creating the world, you have not killed anyone. In fact you have given life to everyone created within that world. The fact that you did so in order for other people to wage war within it is irrelevant. Intent is never significant: only action matters. However, even if you were to go inside that world and kill everyone within it, have you really taken anything from them? After creating your world, are you morally obligated to keep it running on behalf of those within it? No, you can cancel your simulation whenever you like and you have given those within it life for a certain period. Is it better than never having existed at all? Of course. In fact, I would go so far as to say that any existence whatsoever is superior to nonexistence. Yet, while some people will continue to enjoy war games with perfectly realistic human beings, I’m not sure I would find it enjoyable for long. The people running the simulation would obviously sanitize the battlefield to make it enjoyable because nobody would pay to participate in the hell of war as we know it. Perhaps some of the more hardcore people would want a somewhat realistic experience, but I’m not one of them.

I suspect that we would see many more peaceful video games with much improved realism.  Current games are trying to capitalize on the visceral immersion factor they can acquire through violence.  If they’re indistinguishable from reality, that gut reaction is no longer necessary.  Simulating a poker room, open ocean, or even a farm (where you only decide what gets done- it fast-forwards through the actual farm labor, if you’re even the one doing it) makes a lot more sense.  Interestingly though, anything you learned to do in such a simulation would be fully applicable in real life.  If you learned to swordfight in your pirate game world then if you picked up a sword in real life, the skills would be the same.  This is ignoring the fact that if we have developed sufficient technology to interface your brain with a computer to that degree, you could probably just download whatever knowledge you desired and it would be available to you in both cases.  The possibilities of creating simulations for ourselves are just endless.  I want to be a cyborg.

Thanks again to those who signed up using my promo code here.  Every one helps!

Unorthodox Determinism

I have recently encountered a massive conflict between the proponents of free will and determinism, and to me both sides seem a little shortsighted.  The free will crew believes they have free will more or less because they want to, or they argue that if the universe is deterministic then things like moral responsibility or experience become worthless.  That is clearly false, because the only thing the determinist side claims is that the universe follows universal causal rules and there are no miracles that violate those rules.  The determinists can counter the free will arguments with the house-building argument: “you start building the house because if you just sit on your ass then it won’t get built.”  Saying that it is predestined that the house be built, and then doing nothing, is an incorrect and fraudulent corruption of deterministic thinking.

Though a fascinating debate, you’re both wrong.  And you’re both right.  Free will is a direct result of a causal, deterministic universe to the point that without such a universe then free will would be meaningless.  Time for an example; let’s take a deck of cards and mix it up randomly.  Clearly, while the deck is just sitting there, the order of the cards is fixed, unchanging, and predetermined.  The fact that this is true does not mean that the contents of the deck are somehow irrelevant.  In fact, the knowledge that they aren’t changing doesn’t actually help you at all because you don’t know what they are.  If you were playing a game like Texas Hold ‘Em Poker then you have to allow for the fact that any of the unknown cards could be any of the cards you haven’t accounted for.  In reality the identity of those cards is completely fixed.  Another player can be looking at some of those cards and be presented with exactly the same situation but with a different context containing differing information.  By the logic of the free will corps, the fact that the cards are predetermined somehow makes the game irrelevant, boring, and useless.  This is clearly false due to the interplay of information and unknowns.  There is a case to be laid against my example because I introduce a second layer of free will in the players’ responses to their predetermined cards, but we’re talking imprecise examples right now and I’ll lay out my true and complete argument shortly.  So with our deck of cards, you can draw a card and then its position is locked in in a past-historical sense, but its position was equally predetermined beforehand.  Your knowledge has changed, and that’s all.  It is a significant and common fallacy, however, to then assume that the cards could not have been ordered in any other way.  The fact that they could have been drawn in any other logically possible way means that you are forced to allow for it on equal terms with the way they actually were drawn.  Notice the quantum zippering effect of multiple strings of possible futures being reduced to one single past as you draw each card.  Also note the interesting effects of inference as you go through the deck.  If all the clubs are gone then you know that the next card will not be a club, for example.  Saying that the future is predetermined is really an extremely short step from the obvious truth that the past is predetermined, or more accurately that it is unchangeable after the fact.
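
The card-deck point can be made concrete in a few lines of code. This is only an illustration of the information argument, with made-up hole cards: once the shuffle happens the order is completely fixed, yet each player faces a different set of unknowns over the very same fixed deck, and drawing a card changes only their knowledge.

import random

deck = [rank + suit for rank in "23456789TJQKA" for suit in "shdc"]
random.shuffle(deck)                 # from this point on, the order is completely fixed

player_a_sees = set(deck[0:2])       # player A's hole cards
player_b_sees = set(deck[2:4])       # player B's hole cards

# The deck is one predetermined sequence, but each player's "possible worlds"
# are defined by the cards they personally haven't accounted for.
unknown_to_a = [card for card in deck if card not in player_a_sees]
unknown_to_b = [card for card in deck if card not in player_b_sees]
print(len(unknown_to_a), len(unknown_to_b))   # 50 and 50, but two different sets of 50

next_card = deck[4]                  # already fixed before anyone looks at it
print("The next card was always going to be", next_card)

Nothing about the order being “already decided” makes the game trivial; the whole game lives in the gap between the fixed sequence and each player’s limited knowledge of it.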

The fundamental principle in question is emergent behavior.  Our universe exhibits emergent predictability based on inherently random subunits.  The most elementary particles behave extraordinarily erratically, but macroscopic objects exhibit stability, and extremely large conglomerates of matter such as stars or galaxies are materially determined into the future; the fluctuations on the lowest level aren’t going to affect entities of such a massive scale.  The weight of probability is just too large at high scales.  The basic organizing principle of the universe is therefore that, probabilistically speaking, it follows the path of least resistance.  The universe resolves itself into the most probable stable arrangement based upon the input of all its particles.  Humans inhabit the scale at which the world around us is stable, but still able to fluctuate enough for small systems’ outputs to produce differing results as conditions require.  Life is the self-organization of matter, and as life becomes more sophisticated in its organization techniques, its ability to convert more matter into animate matter increases.  Once upon a time the chaos event horizon was on the microbial level; random fluctuations in the primordial soup produced the first RNA capable of duplicating itself purely by “chance.”  Statistically, under earth’s conditions and given the vast volumes and time scales we’re talking about, it wasn’t really “chance.”  Especially so because of the anthropic principle: if we hadn’t appeared, we wouldn’t be here to talk about it, and if we had appeared somewhere else, we’d be talking about it wherever the conditions were suitable for us to appear.  So it’s really not randomness.  In the inexorable way that life does, it proceeded to duplicate itself and divide into more complex lifeforms.  Eventually, the chaos event horizon broadened into macroscopic lifeforms with the development of the cell- particularly those of the eukaryotic variety, which allowed organisms like us to overcome the problems of osmosis and diffusion.  A giant, human-sized amoeba (or even a non-microscopic one) is impossible because substances absorbed through the membrane wouldn’t diffuse to the nucleus and other structures.  So lifeforms like us are composed of trillions of little cooperating microbes which don’t violate those rules.

How does this relate to determinism?  Well, it could be said that the development of life exactly as it was, including down to the individual organism level, was predetermined.  Does that change the fact that, beforehand, it could not have been determined how the future would unroll?  Asking what would have happened had the universe proceeded in a slightly different manner is exactly the same as asking what would have happened if one of those cards in the deck had been a different card.  Guess what?  The answer is very simple.  The card you had drawn would simply be different, leaving you to ask the same question.
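
The scale argument above can be illustrated numerically. The sketch below is pure statistics rather than physics: each “particle” contributes a completely erratic value, yet the aggregate of many of them barely moves, which is the sense in which stable macroscopic behavior emerges from random subunits.

import random
import statistics

def aggregate(n_particles):
    # Each subunit contributes an erratic value; the aggregate is their mean.
    return statistics.mean(random.uniform(-1, 1) for _ in range(n_particles))

for n in (10, 1_000, 100_000):
    runs = [round(aggregate(n), 4) for _ in range(5)]
    print(n, runs)
# With 10 particles the aggregate jumps around noticeably; with 100,000 it hugs
# zero so tightly that, at that scale, the system is effectively determined.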

So now we’re ready to address the true issue on determinism.  We live in a causal reality where effect follows cause all the time.  We can formulate models and simulations to meaningfully represent the world around us and make predictions about our world.  Let’s do an experiment.  What happens if you throw a rock up?  It falls down.  Well, I’m sorry to tell you this, but you’ve just proved that we live in a deterministic universe.  The fact that our universe is composed of immutable, consistent laws acting on a consistent basis means that it is possible to predict the future.

Let’s take a more useful example.  You’re walking along some mountain trail, and you come upon a gorge.  Across this gorge are three bridges.  One of them is extremely rickety, and if you try to cross it then you will fall.  The second is very stable, but on the other side are some soldiers with guns, and you of course have no papers!  If you try to cross there, you will be shot.  The third bridge is a small townie bridge that looks safe.  Which bridge do you cross?  If you answered bridge #3 then I’m glad we can agree that we live in a deterministic universe compatible with free will.  Due to the deck-of-cards effect, whatever happens to occur was probabilistically certain.  However, we live in a causal universe, so if you choose to cross the rickety bridge and you fall to your death, you were predestined to arrive at the choice, choose the first bridge, and fall to your death.  If you choose the second bridge, the same concept holds for you being shot.  And if you choose the third bridge, your fate is to make it across and go on your merry way.  If this sounds like I’m ignoring the deterministic aspect of my argument, that’s because your perspective on determinism is fundamentally flawed.  You seem to think that the fact that something is predetermined has meaningful import on what is predetermined.  You seem to think that if determinism is true, that makes it possible to say things like “your destiny is to take the first bridge and die.”  This is ridiculous.

Let’s modify our situation so that, back in Phuket, some mystic told you that you would be faced with this choice and that you would choose the first bridge and die.  When you arrive at that situation, you choose the safe way, you live, and then you laugh at the insanity of the mystic.  Or perhaps you’re of the religious bent and you decide to run headlong down the rickety bridge, and fall to your death because the mystic said you would.  Obviously any sort of mystic divination is impossible.  Unless that mystic is blessed with an absolutely unbelievable amount of brainpower, their prediction is futile- more on this shortly.  And even if the prediction were effective, the fact that they said it (actually, just the fact that they predicted it) changed the conditions and thus invalidated the prediction.  Lots of time travel fiction has all sorts of weird, twisted, self-referential paradoxes.  For example, later on in your quest you come upon another bridge which looks perfectly sound, but then as you’re crossing it gets hit with a meteor and you fall to your death while the mystic laughs over your corpse.  Or maybe whichever bridge you choose turns out to be the rickety one and you fall to your death.  Or maybe something even more bizarre.  Such paradoxes/improbabilities/insanities are entertaining, but they embody a truly stupid way of understanding the world if they are pushed as truth.

Now we’re at an interesting understanding of fate.  We can make useful predictions about stuff like rocks flying, but not about the nature of the universe.  Why are our simulations good in some circumstances, but not in others?  Simple.  Imperfect models will produce imperfect results.  It turns out that our model of the rock flying is more than sufficient to predict something so simple.  It’s a solved system.  However, if you wanted to be perfectly accurate in describing the nature of the rock’s motion, down to the last particle, you would still require a massive amount of processing capability.  That’s unnecessary because a simplified model is good enough for our practical purposes.  Tic-tac-toe, young children eventually figure out, is a solved game.  It’s possible to at least tie every single time.  Theoretically, a sufficiently powerful intelligence can represent any information set or solve any such problem.  If we can predict the way the rock will fall, a vastly more intelligent agent might predict the chemistry of a microbe and thus its activity.  An even more intelligent agent might work on an organism as complicated as a human.  An even more intelligent one might “solve” the planet and its ecosystem.  We can’t play chess with that type of knowledge because the game is so fantastically complicated relative to our mental faculties that we cannot just solve it.  In fact, we haven’t even determined whether perfect play from the starting position is a win or a draw.  I would bet chess can be solved as long as you don’t employ “infinite intelligences” in your proof, but now we’re getting off topic.

Back to the real world: if you thought chess was complicated, then how on earth would you even begin to go about solving the behavior of, say, a squirrel?  The task boggles the mind.  However, that wouldn’t even require that much processing capability- you only need all the data about the squirrel and its surroundings out to the limit of the squirrel’s perceptual ability, plus an exact model of the squirrel’s behavior.  Now consider doing the same thing with the earth as a whole.  Simply impossible by any modern standard.  As we expand our simulation’s purview to a galaxy, a cluster, and so on, the amount of processing power required expands to insane levels.  Eventually we reach the edge of the universe, but probably long before then we’ll have run out of real estate with which to run a simulation.  In order to create processing capability, you have to store information somehow.  Fundamentally, all our information storage methods involve the placement, polarization, or other modification or use of some form of the universe’s substance.  It therefore follows that it is impossible to simulate the complete universe, because in order to do so you would need one bit of information for every bit in the universe.  Basically, you would have to represent the universe with itself, which gets us nowhere as to predicting it.  However, more efficient but imperfect models can probably make fairly accurate assertions about the future, as in the case of the rock.  The use of heuristic models in place of pure simulations is what gives intelligence its power.
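
The tic-tac-toe claim can literally be run: a plain minimax search visits the whole game tree and reports the value of perfect play. This is a standard textbook sketch rather than anything from the argument above, and it takes a few seconds in pure Python.

def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for i, j, k in lines:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    # Value of the position for X: +1 win, 0 draw, -1 loss, assuming perfect play.
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    values = [minimax(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == " "]
    return max(values) if player == "X" else min(values)

print(minimax(" " * 9, "X"))   # 0: with perfect play on both sides, the game is a draw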

Now I need to close the loop- free will and determinism.  So we live in a predetermined universe because the universe follows causality, in the form of consistent laws and a consistent representation of itself.  Yet at the same time the fact that it is predetermined alone gives us absolutely no information about its nature, and just like the deck of cards which is predetermined but at the same time unknown, the universe’s causality is exactly what makes it useful to us as organisms.  You choose to cross the safe bridge because you know you’re going to get across, and you can make that prediction because you implicitly understand and respect the causality of the universe.  Yet at the same time, because your intelligence allows you to do that, you are forced to acknowledge the fact that a more intelligent predictor could make more powerful predictions than you, and so on and so forth up until all solvable problems are, or can be, solved.  However, it is the fact that these predictions can be abstracted that gives us the foundation upon which free will is built: choice.  Without the power to abstract features of the universe into utility and options, there can be no choice.  If you were unable to predict in the simulation sense, then trading money for food would have no meaning because food would have no meaning for you.  In fact, the continuity of your existence would have no meaning, time itself would have no meaning.  When you make a choice, it is implicitly assumed that there is a positive action being taken- “I choose this over that.”  But in order to do that, you first have to know what this and that are, and you can only do that by extrapolating into the future.  In fact, consciousness itself cannot exist without extrapolation into the future.  It’s what processing power does that distinguishes it from the rock at the core of the earth with random electrical impulses flashing through it.  Abstraction is an extrapolation into the future by creating, combining, refining, or modifying concepts derived from the past on the basis that such extrapolation will have utility later, even if it’s a split second later.  Without “If I do this then this will happen” free will is completely worthless.  A simulation takes data from the past and computes the future, and a hypothetical takes data that perhaps hasn’t happened (yet) and computes the potential future.  The inference I was talking about back with the deck of cards is your mind rearranging and making manageable the objective world around it, in this case the deck of cards.  You were simulating a few known conditions of the remainder of the deck when all the clubs, or all the kings were drawn.  And clearly you can handle a hypothetical under the same conditions because you’re reading this right now and thinking about what would happen if all the kings or all the clubs were drawn.
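
To tie the choice-requires-extrapolation point back to the three bridges, here is the smallest possible version of it in code. The option names and probabilities are invented for the example; the only point is that “choosing this over that” presupposes some forward model of what each option leads to.

def predicted_outcome(option):
    # A crude, made-up forward model: estimated probability of getting across alive.
    model = {
        "rickety bridge": 0.05,
        "guarded bridge": 0.10,
        "townie bridge": 0.95,
    }
    return model[option]

def choose(options):
    # Without predicted_outcome there is nothing here to maximize- no extrapolation,
    # no choice; the "decision" collapses into picking blindly.
    return max(options, key=predicted_outcome)

print(choose(["rickety bridge", "guarded bridge", "townie bridge"]))  # townie bridge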

So you really can’t get away from the conclusion: while the universe is predetermined, the fact that it cannot be simulated perfectly means that your experience right now is the best shot you’re going to get at it.  You have free will because exactly what’s going to happen cannot be known, and must necessarily be unknown.  It’s an endless deck containing an infinite variety of cards.  We have an endlessly cascading moment of probabilistic chaos, and while we can throw imperfect simulations at it until we’re blue in the face, nobody can know with absolute certainty exactly what’s going to happen.  The universe is predetermined, but each and every one of us is blessed with limited perspective.  Enjoy it.

Of course, I have one more caveat.  If we were somehow to have total perspective on our universe, it would be conclusive proof that there existed at least one other, more grandiose universe encompassing it that we couldn’t have total perspective on.