Axiomatic Human Properties

In any philosophy of human nature there are certain parameters of the human condition which are inserted axiomatically. These properties are extremely significant to the formulation of any philosophy governing people, namely ethics and politics, but they usually aren't addressed in a uniform and clear manner. The following elements are single pieces that might be composed together to create complex ethical theories or political philosophies. Simply rattling off a list of beliefs about human nature being one way or the other, in reactionary mode, is pretty much a waste of time. Connecting them together to create a model that accurately reflects the world, or some piece of it, can be very important to the advancement of human knowledge. Big names in political philosophy like Hobbes, Locke, and Nietzsche built their ideas up from the same basic elements, but they did it in such a creative, novel, and useful way that it reflects how many people see and interact with the world. I believe that spreading a little understanding of what exactly the building blocks of such thinking are can improve the quality of thinking in the US and around the world.

The first and most commonly addressed one is whether people are fundamentally good or evil. This question has ramifications for every aspect of any philosophy. If people are inherently evil then it is necessary to use some form of philosophical machinery to control, alter, or ameliorate the evil nature of humanity. This is a totally different viewpoint from that of someone who believes people are fundamentally good, who doesn't need their philosophy to do much to control human behavior. Under that view, the entire realm of philosophy, particularly ethics, becomes focused on what individuals decide virtue is; each person can have their own philosophy, and you can trust them to be virtuous anyway. Their virtue is given, and the philosophy is a result rather than the other way around. If human nature is evil, however, then philosophy must come before human virtue can be achieved, and it is necessary to identify the philosophy most conducive to society and then enforce that point of view on everyone. If they can't be forced to accept it, they must be forced to at least obey it through the application of laws and punishments. Most political philosophers of sufficient import are in the camp of humans being evil, and most of the governments derived from their philosophy depend upon coercive application of laws, police, and courts in order to control their population. Whether people or philosophy come first is the ultimate chicken-or-the-egg question, and its primary embodiment is the debate over whether human nature is good or evil.

There is also the question of human competence: whether one man has great powers available to him, or whether one man is nothing by himself. It is reasonable to hold the view that human nature is good, but naturally stupid. This is more akin to the Stoic idea, where everyone has virtue as a driving force. Every murderer has a justification for why they saw fit to commit murder (assuming they aren't innocent), and they really believe their justification. If they were fundamentally evil, they couldn't care less about virtue. They may still be trying to dress up their actions as virtuous to cynically try to escape punishment, and we arrive at a Chinese Room dilemma of having to verify whether someone "really believes" something or is just pretending. In almost all cases, however, they truly believe their rationale, despite the fact that it is highly irrational. Murder and other crimes, viewed in a broader context by a rational being, are all stupid, even discounting the additional punishments inflicted by laws. If you lie for your own benefit, then nobody has an incentive to trust you. In the extreme short term, perhaps you don't care, but if such a person were actually rational they would realize that a perfect reputation and a rock-solid name can yield far greater dividends for their own success than simply cheating and running. The law is an attempt to make this choice "more obvious" by putting a direct penalty on undesirable actions, making the line of reasoning a little easier for the less rational in the populace.
It is also possible to have a worldview, and this is the particularly sinister Hobbesian or Machiavellian view, that people are both cunning and malevolent. If this is the case, the only recourse is to make people act outside of their nature. Indeed, not only is distrust of everyone to be expected, but there’s no authority to look to for protection who isn’t subject to the same rule- they can’t be trusted, they will seize power and abuse it. Hobbes is the more primitive philosopher, and his answer to the cunning-and-evil dilemma is to put the most cunning and evil of them all in charge, the better to protect the people under the power of the ruler. Obviously he didn’t phrase it like that, but in effect creating a single all-powerful ruler in such an environment will only magnify the problem. Machiavelli addresses the issue more accurately by saying yes, it is the most cunning and evil who will be in charge, and the more cunning and evil he is the better a ruler he will make because cunning and dirty tricks are the best way to get ahead. An extremely pessimistic view, but at least it’s internally consistent. It’s actually very difficult to disprove that argument because it contains within itself its own genesis, but I believe it fails on the grounds that people would shy away from a world like that and attempt to make it a more pleasant place to live in for themselves and others.

There are further questions: whether people are rational, whether people are social, whether people are natural leaders or natural followers, and so on. Indeed, there is always a huge debate over which properties we can ascribe as natural to humans, which ones are learned or inculcated, and by whom they are or should be conditioned, whether it's the parents, the community, the government, the religion, etc. Different philosophers have proposed different traits as being innate, and I imagine that at some point some thinker has claimed each and every imaginable aspect under the sun must be natural and innate. The oldest claim of this type is that humans are innately social beings, and indeed this is backed up by recent discoveries in biology, anthropology, and genetics. If we are innately social creatures, then we will congregate into groups and there is no modification you can make to the human condition that will overcome this. You can compensate for it by conditioning behaviors, but the natural tendency will still exist. The good-or-evil idea of human nature is actually a special case of this naturalness argument: it asserts that people have a natural ethical decision-making faculty, and it also makes a statement about the tendencies of that faculty. The argument that there is no such faculty can be used to construct nihilism, pragmatism, and numerous other theoretical frameworks. The same can be said of any given property that you wish to ascribe as natural to humans.

Next: what properties are innate to a person, and what properties can change through the course of their lives? This is similar to, but quite distinct from, the question of whether a person has the capability to change themselves, to what extent such willed self-change is possible, and what properties or aspects can be changed this way. The same question applies to other vectors such as parents, the state, etc. Innateness is distinct from natural emergence in that a property that is innate depends entirely on physical (or other immutable) composition. A naturally emergent property is merely said to exist, with no particular emphasis on how or why it is that way. If it's innate then it is a product of the human physical (possibly soul or spiritual) existence. If it's not innate then it is acquired at some point over the course of your life. Note that non-innate properties can still be natural. For example, humans lack the capability to walk at birth, so walking is not truly innate (I use a philosophically difficult example because this is highly debatable, I apologize, but there is no example of something that is obviously not innate but is natural), yet it is natural because it is a naturally emergent behavior. A better example may be language, where it could be argued that a natural faculty for language in general exists, though perhaps not an innate one, but the faculty for any particular language such as English is definitely not innate (it also probably isn't natural, because saying "humans naturally speak English" is obviously wrong; we can get around this by citing a particular unspecified instantiation, such as "humans naturally speak some language," but this is rapidly becoming too complicated to use as an example).
An argument for extreme nativism puts total emphasis on innateness. The entire course of your development is preprogrammed into you as a baby, and is fully contained within your existence at any point in time. Extreme nativism is a more or less extinct line of reasoning. The opposite end, what has been called "tabula rasa" or "blank slate," is the idea that you have zero internal programming at birth: you are totally blank, and you acquire a mind over the course of your life. While this seems a lot more reasonable, purist tabula rasa thinking is also more or less extinct. It's clear that there is some mixture of the two going on, but exactly how much of each is present is not entirely clear. I dislike this phrasing of the issue, but this debate has been called "Nature vs. Nurture." I hate saying that because nurturing is a natural process; indeed humans have certain parameters for raising children encoded into our genes (praying mantises have different ones…).

Part and parcel of the natural human condition debate is what is mutable about human nature and what is immutable, which of course forms a continuum between hard wiring and total flux. A certain trait might be imparted at birth, but still be changeable, such as through changes in gene expression. My hair color is different than it was when I was eight (I was blond, now I have brown hair), and this is a property that is usually associated with genes and assumed to be immutable. We usually assume that the Nature side of the debate assumes immutability, and the Nurture side likes mutable traits. There is no requirement that these assumptions be the case, but nevertheless they tend that way. It makes intuitive sense because after all, if you were born without a certain trait, it must have been installed at a later time and must therefore be reversible, right? Wrong. Conditioning received as a young child is often extremely tough to change, and mental models touching core beliefs are often very difficult to change as well, even if they are destructive.

The reason why these human properties are axiomatic is that for the most part you can come to any conclusion you like and have it result in an internally consistent model. These are fundamental building blocks from which you can construct any theory you like. While someone may disagree with you on axiomatic grounds, a direct proof of their argument will not be sufficient to disprove or otherwise dislodge your position. As it should be: an argument made from such axiomatic points can be attacked as incorrect in its premises or improper in its logic, but simply pushing an alternate set of axioms will not blunt the impact of an argument someone else has built on their own. There is an immense composite-theory space that can be created just from the extremely few basic axioms I have chosen to mention here, and there are many, many more that can be used reasonably.

Psychic Phenomena

I am going to make some statements in this post that are going to shock most of my readership, but I expect that you'll consider me sensible if you read it the rest of the way through. I believe that psychic phenomena are real. However, I do not believe that they are physical manifestations of any sort; they are purely in the minds of the people who "experience" them. It is important to note, however, that being experienced is the only criterion required for these phenomena to count as real. Let's say that someone believes they communicate with ghosts: they have what might be called visions, or might be called visceral hallucinations. My question is, is there really a difference between these two phenomena, or is it simply a matter of the connotation of the words used to describe them? True, there is no "ghost" existing in objective reality, this much is obvious. But does it necessarily follow that hallucinations of this type indicate insanity?

Consider the emotions felt by normal, healthy people. An emotional reaction is a complex sequence of chemicals and neural firings that produces a sensation or a response, and the mechanisms used are significantly different from other systems in the brain such as those used for memory, spatial or linguistic manipulation, reasoning, and others. They are of course intimately linked because they're all in the same brain. Consider the fact that there exist drugs that can be administered to produce a "religious experience," which is essentially a complex of emotions, sensations, and thoughts that is more complicated but not fundamentally different from more primitive emotions like contentedness. Does this mean that religious experiences don't exist? Of course not. Indeed I would say this is conclusive proof that religious experiences are a fact. Whether a religious experience means what most users of the idea think it does is a separate question entirely. The hardcore religious who passionately believe their religion because of a personal religious experience, perhaps of connecting with their god or something along those lines, are justified in their sensation, but fatally in error about what that sensation means. Their religion has told them that if certain protocols are followed, a certain religious euphoria will follow, and has provided a very intricate framework of religious scripture and ideology which backs this up. When someone experiments with the religion, they might truly surrender to the experience or do whatever else is required, and when they get exactly the reaction promised to them, they take it as visceral emotional proof that everything else they were told must be true as well. This is, when phrased this way in words, fairly obvious, but it's actually a very easy mistake to make, even for the highly rational.

There is a specific emotion that most people don't name expressly which I call the "convincement" feeling. It's that feeling you get when you read or hear something and become convinced by it. This can powerfully bias your view of the matter that convinced you, of the author or speaker, and of your future thought on the subject. Indeed, I am actually in quite serious doubt over whether a significant body of my reasoning has been tainted by this convincement on the subject of anarcho-capitalism, among other areas. It happens to me all the time reading articles on the internet, but I'm well accustomed to dealing with such things; it just requires fact checking and an appropriate degree of due diligence. The reason the "I'm convinced" feeling is so tricky is that it is the tool you use to gauge whether or not you actually are convinced. In the vast majority of situations, it's an incredibly useful tool. However, when squared off against an act which is carefully designed to fire off that convinced feeling and thereby sway your reasoning, extra care must be taken. There should be a fancy Latin name for this fallacy, like "argumentum ad convincem" or something. Latin being a dead language, though, coining new Latin phrases is something of a pointless exercise. The point I want to communicate is that just because a reaction only exists in one person's perceptions, that doesn't make it non-real, only non-objective. What types of dreams someone has, and what ghosts or voices they hear, might be very useful for psychoanalyzing that person.

Instead of turning this on religious phenomena only, I want to discuss a broad range of paranormal issues. Those that are obviously nonexistent in reality, and are products of mere superstition, are relatively easy to pick on, and that has been done by many other thinkers to great effect. I propose a new category of paranormal phenomena that are real, but only because people experience them, and the fact that they are experienced is the totality of their existence. Psychokinesis is obviously impossible, but is telepathy possible by building on intuition and body language? Mind-to-mind communication is also obviously impossible, but consider the fact that you can look at someone's face and identify their emotional state. To what degree is that communication, and to what degree is that divination of information that lies in the other person's mind? A polygraph is a technological attempt to "mind read" using subtle cues. Is it feasible that one person might understand enough of someone else's thoughts and mannerisms to deduce what they are thinking? To a degree this is a trivial question; people have been guessing what others are thinking since time immemorial. My question is how much information is actually available, being broadcast continuously by each of us, for sufficiently observant people to effectively read our minds. Consider that poker players, especially very good ones, can often deduce exactly what hand the other player is holding. They aren't using some sort of pineal gland to probe the other person's brain; they're studying the other person's face and behavior, as well as the strategies that they choose to play, and have played in the past. Is this obvious, or is this telepathy? My argument is that the distinction between "duh" and telepathy is meaningless. The fact that it is easy for us to figure out, to some degree, what other people are thinking proves that "telepathic" phenomena are real; it's just that they're, well, normal. The reason the idea of "super-telepathy," which allows complete observation of another person's mind, persists so strongly in culture is that it's easy for us to extrapolate the abilities we have to their logical extremes. We can easily conceive of super-strong, super-intelligent, or super-anything people, and indeed all of these caricatures persist in culture as well. These characteristics are treated differently because less subtle human abilities are much easier to verify. If there were a super-strong human, we could just say "lift that bus." A super-intelligent human should be able to perform similar feats, but of a mental nature. A flying human (an extrapolation of walking, with additional freedom of motion into the third terran dimension) could just lift off. Abilities like telepathy are difficult to prove or disprove, and so someone could posit "hey, I can mind-read" and get some attention out of it. People like Uri Geller who claim to bend spoons have a carefully constructed magic trick to accompany their act, which in a way works like the religious experience: because he claims to bend spoons, and appears to do so on film, everything else he says about telepathy and such must be true as well. Fallacious on exactly the same grounds, but convincing to many.

Clairvoyance and precognition fall into exactly the same mold as telepathy, and they can be treated in more or less exactly the same way. These are faculties that all humans have: the ability to deduce what is happening at a different location in space or time, respectively. When someone tells you that twenty minutes ago the lights were out, you can picture in your mind the room in exactly the same state, or with whatever other alterations are supplied to you or fabricated to order by your mind, with the lights off. This isn't some superhuman power, although the ability to do it with impeccable accuracy certainly would be. The fact that you can conceive of what winter in Russia might be like, even if you've never been there, is proof of the power of so-called "clairvoyance," although its accuracy is highly questionable, and you naturally treat it with the appropriate level of confidence (virtually zero). Truth be told, there are actually very few "new" superpowers being coined in a cultural sense, and all superpowers as we know them stem from some natural faculty, trait, or principle carried to a logical, or illogical, extreme. Even very weird powers such as being half-man half-something are formed in the same way, by combining concepts of man and something else, usually an animal. The adventures of man-onion don't sound particularly entertaining because the suite of powers available to an onion is hilarious but boring, and those available to a man, while tremendous, are common to everyone and are dismissed as merely normal.

Now on to UFO encounters. First off, I am quite possibly the most convinced human being on the planet about the existence of extraterrestrials. The Drake Equation (sketched just after this paragraph) is a tough argument to beat. However, the reason the Drake Equation is so powerful is that space is BIG. As a result, the odds that the aliens are anywhere within a million light years of Earth are… extremely small, let's just leave it at that. Also, the odds are dramatically in favor of alien sea sponges as opposed to interstellar civilizations (and finding sea sponges, or even alien bacteria, would be badass). Even in the event that they developed some form of faster-than-light or warp travel, why would aliens have any interest in a society as primitive as ours, relative to their own? Human civilization is at a sub-Class-I state. We haven't even gained control over the energy of our home planet yet, much less our home system. A civilization with both the capability and the need to build an interstellar drive would dramatically outstrip our own in terms of size, population, resources available, culture, etc. Plus, such a society would necessarily have evolved a very intricate social form as well, in a similar way that human societies have evolved modern governments and social conventions to better preserve human life, well-being, property, industry, self-esteem, etc. Iain Banks' Culture novels present an amazingly accurate view of the type of interactions interstellar civilizations might have (I've only read The Player of Games, but it was awesome on so many levels). Such a society "studying" us would be somewhat like humans studying an ant colony. There are plenty of methods by which they would never need to interfere in any detectable way, and there are a plethora of methods by which they could just step in full-force and there's not a bloody thing the ants can do about it. Anyway, enough of my geek-out analysis of why the picture painted by UFO fanatics is absurd. The ultimate proof is that there has been no objective verification of claims made on objective reality, namely the detection of UFOs. This isn't strictly true, because UFO stands for Unidentified Flying Object, and there have been many, many incidents of objects detected on radar which could not be identified, perhaps through a refusal to transmit or a lack of IFF or digital uplink technologies. Enemy planes aren't going to identify themselves, perhaps buying a few seconds before the interceptors are scrambled to engage them. Proposing the existence of aliens in flying saucers is completely separate from the UFO case, even though for some reason they have become synonymous. Show me a crashed UFO, wreckage of a self-destructed one, conclusive photos, or a depth of proof sufficient to confirm the existence of a new species of monkey, and I'll believe you.
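For readers who haven't seen it, here is a minimal sketch of the Drake Equation. The factor values below are purely illustrative placeholders of my own choosing, not measurements and not figures I'm endorsing; the point is only to show how the product works and why the sheer size of the galaxy dominates the conclusion.

```python
# Drake Equation sketch -- every value here is an illustrative placeholder,
# chosen only to demonstrate how the product behaves.
R_star = 1.5      # average rate of star formation in our galaxy (stars/year)
f_p    = 0.9      # fraction of those stars that have planets
n_e    = 0.5      # habitable planets per star that has planets
f_l    = 0.1      # fraction of habitable planets that develop life
f_i    = 0.1      # fraction of life-bearing planets that develop intelligence
f_c    = 0.5      # fraction of intelligent species that become detectable
L      = 100_000  # years a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Illustrative number of detectable civilizations: {N:.0f}")  # ~340

# Spread even a few hundred civilizations across a disk roughly 100,000
# light years wide and the average separation between them is still hundreds
# or thousands of light years -- the "space is BIG" point above.
```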

Now, faith healing is an issue I have a very hard time with, because the weird thing is that it works. Of course, placebos also work, and it is quite obviously the same principle at work in both faith healing and placebos. Considering that sugar pills are cheaper than real drugs, I can imagine great value in being able to identify where a placebo is sufficient and where real medication is required. Now, sugar pills are dirt cheap, so whether faith healing is cheaper is doubtful. However, faith healing does obviate the medical issue with giving out dud medicine. In order for the placebo to work, it is necessary that the patient not be aware that it is a placebo, and this type of treatment is totally unworkable for a reasonably managed medical establishment. One case of prescribing a placebo and having it fail will draw malpractice flak like a giant kite carrying a metal box and trailing a NUKE sign in enemy airspace. A doctor could refer a patient to a faith healer and avoid this type of legal insanity because the faith healer is a separate agent whom a pissed-off patient could sue independently of the original doctor. Using faith healers as a litigation scarecrow is actually quite an elegant solution to the overly litigious medical establishment, and it also puts people's ridiculous beliefs to good use, quite neatly killing two birds with one stone. (Interesting saying, because I would be quite content to kill one bird with one stone: it's just a rock after all, and reusable at that, but I digress.) Faith healing is sticky because while it is totally bogus, it actually does work, and verifiably so. I am amazed at the amount of garbage these industries can churn out; take a look at homeopathy, which is just water. Nevertheless people swear by it, citing some assumption about a new property of water which is totally unsupported by chemistry. Damn good thing too, because the water that's in your body has probably been in contact with all kinds of stuff, and I find it rather comforting that water is just water, no complications; it's just H2O regardless of history.

There are a lot of gullible people out there. They're gullible because they want to believe in something, or because there's an engine of social acceptance or consistency behind the choice to "believe" which drives them into accepting irrational precepts without looking at them too closely. This is the secret of getting anyone to believe anything: provide an incentive for them to agree with you that is irrespective of the argument at hand. Then, get them to publicly confirm their belief to someone else, or even just say it aloud, and a commitment to consistency or self-simplification will push them to actually fully accept and integrate it, a process which, when divorced from its rationale, is known as cognitive dissonance. Cults use very extreme persuasion and conditioning tactics, and it's part of the structure of cults to hit each member as hard as their "belief" can handle. To acquire new members, they use subtle tricks which appear reasonable. If those are accepted, they move on to more intensive material. The more extreme and unreasonable the material that they can make an individual confirm to themselves and others, the more deeply ingrained the ideology of the cult becomes, allowing even more extreme material to be put to them. Scientology is remarkable because it has such a rigorous methodology for maximum conversion effectiveness, even going so far as to explicitly call its stages "levels." They need to keep their higher-level material secret because if it leaks (as it has) it exposes them as ridiculous frauds spouting utter insanity. If we only knew their outermost material, designed to pull in the unwary using relatively reasonable methods, we might suppose them an acceptable religion. I have a lot to talk about on this subject and I intend to go over it some other time, particularly as it relates to social conditioning by degrees.

I bring it up here because psychic phenomena function in the same way. Small topics like palmistry or graphology lead up to more intensive material like full-blown astral phenomena and UFO sightings. Because there is no centralized purveyor of material, there is no controlling agent to make sure that each person receives only material they are ready for; the sheer volume of information acts like a smokescreen instead. As a result, only people actively searching for a certain subset of information are going to discover a full set of details. The rest of us are left with a stereotypical picture which we recognize as clearly simplified and inaccurate. This means that if at some point we become activated to seek out such phenomena, we can uncover additional information and naturally "refine" our perceptions with the new information, resulting in a new convert to paranormal phenomena. Effectively, you persuade yourself when you are ready to find out. Once again, there is no authority that causes this, there is no conspiracy; this just happens. Fiction writers make extensive use of this faculty, particularly science fiction and fantasy writers. They can easily concoct an alternate explanation which is equally fictitious but fulfills the same "why I used to think that" criteria as the explanation that true believers in the real-world phenomena ascribe to. For example, a writer tackling a common myth such as vampires or dragons has a well-known set of properties to address, such as blood-drinking or fire-breathing, and these act as an interface that any science fiction or fantasy writer can implement with whatever explanation they like.
Myths like this are so powerful because the explanation can be easily adapted to new information or discoveries. Note that the explanation can actually short-circuit the properties of the myth as long as it produces a "common misconception" situation. For example, maybe vampires can go out in the sunlight if the writer desires, but there has to be a reason why everyone thinks they shrivel up in the sun. Writers like Terry Pratchett are so good because they can create a compelling and internally consistent world, and this same principle applies to the real world. People will believe models of the real world that are compelling and internally consistent relative to their own framework. Note that a model can be internally consistent and still be fraught with contradiction; the contradiction in such a case is external, between the model and the world, and it can be resolved by placing the model above the actual world, as is commonly done with the Bible. If the real world and the Bible disagree, people so conditioned will side with the Bible, because otherwise their world won't make any sense, which is why such evangelicals are impossible to convince with reason. They have swallowed so much of the religious conditioning, and publicly acknowledged it, that they base their identity on it and cannot stop. A religion simply takes the most potent aspects of a collection of stories, myths, phenomena, etc., often based on what phenomena people believed a long time ago, and crafts them into one grand model which can be passed out in pieces, the way Scientology does, to maximize communicability. The Ten Commandments are an excellent example. Due to the decentralized nature of paranormal beliefs, they aren't a "real religion"; they're piecemeal, and people who use them as their only belief system are "pagans." There are no Ten Commandments of UFO sightings because no such centralized and widely accepted document could ever be agreed upon, or even created in the first place.

On Antisocial Stoics

I would like to address a claim that is sometimes made against stoics, particularly against some of the ideas of Marcus Aurelius, who said, among other things, "Permit nothing to cleave to you that is not your own, and nothing to grow upon you that will give you agony when it is torn away."  Given the extremely elevated status of friends and interpersonal relationships in our society, this concept doesn't jibe well with the idea that we all have to form deep bonds with one another.  The idea of being stoic, of treating your emotions as subservient to your mind, seems to conflict with the idea that we're supposed to share our feelings with others.  Why we believe that someone else being aware of the factual state of your existence creates a bond is beyond me, but it is implicitly assumed in our interactions with one another.  The most canonical example is when you encounter someone you know and ask them how they're doing, what's going on with them, or the like.  Both of you probably know, if you thought about it, that the other person's answer is irrelevant.  Neither of you could give a damn.  But it's the greeting you use because it is a sharing of information of a moderately personal nature, or at least it's a question requesting that information, which implies a certain closeness.  Whether you're doing it to provoke that sense of intimacy in the other person, in the impressions of people listening in, or to convince yourself, I don't know.  However I do know that very little of what is commonly thought of as conversation is an actual sharing of empathic significance or deep thoughts.  What is commonly accepted as "small talk" is the norm of human interaction, and it is accepted as having zero functionality.

Now, I am of course being a little over-literal here.  The purpose of small talk is to fill the time in situations where everyone concerned might be uncomfortable having a real conversation; it allows people to get comfortable with one another.  However it is not and will never be the goal or endpoint.  It is vital that just "being with" other people is never something you're setting out to do, because standing next to other humanoid figures and flapping your vocal folds is, in and of itself, not really a worthwhile activity.  If you're interacting on an empathic, mental, philosophical, or whatever medium in a way that gives you genuine enjoyment, such that you would actively choose that person's presence over some other activity you enjoy, then of course it's a good thing: that's just a basic pursuit of your own satisfaction.  This is obvious and a trivial point, but I think I need to inject it here so I'm not scaring off exactly the people who need to hear this.

The best corollary to this whole mess is our modern conception of sex, especially among men.  Men tend to be in a position of weakness and insecurity, due to conflicting internal models and programming and all manner of other nonsense going on in their heads, leaving them a little lost and confused.  One of the dominant themes that results is a pursuit of sex that is driven more by social power than by actual personal satisfaction.  Many men are more gratified by the fact that they are having sex than they are by the sex itself.  They'll brag to their buddies about it and allow themselves that extra iota of self-respect because they "got laid."  The self-destructive side of this thinking is that they honestly believe they aren't worth anything unless they can convince a woman that they are worthwhile enough to sleep with.  I am unsure of how many women have this problem, but it is widespread among men.  I suspect that because women are dealing with this population of men, they live in sexual abundance and don't develop the same complex- attractive women at least, if not all women.  I am speculating now, but I find it probable that women have a similar complex revolving around marriage, gratified more by the fact of being married than by the marriage itself, resulting in the "must get married" effect at a certain age.  Many, many people of both sexes are gratified more by the presence of other people than by actually enjoying being with them.

The simple fact of the matter is that if you go out seeking deep bonds, what you will find is the most superficial of relations with people as desperate for companionship as yourself.  Deep bonds, described as such, actually don't exist as we conceive of them.  It's not that you spend a lot of time with someone or that you have known them for a long time, or even that you know a great deal about them and their personal preferences such as their favorite flavor of ice cream.  In fact, I would go so far as to say that knowing a huge amount about their preferential minutiae actually subtracts significantly from the goal that most people are seeking.  If there's a woman I like, I couldn't care less what her favorite flavor of ice cream is.  The question is whether or not she is fun to be around.  If I were to feverishly try to get her to like me or memorize her personal preferences, that's work.  Stupid, counterproductive, and manipulative work, at that.  That's all.  Perhaps we have deep empathy, perhaps we're alike, maybe we have good discussions or great sex; it makes no difference (OK, I lie).  The question is only whether she's a positive presence in some- preferably many- ways.

Part of the problem is the widespread perspective of the "personality."  And for the love of life NEVER evaluate someone's "personality" as 'good' or 'bad.'  Those two words are the most abused semantic identities ever created; they can mean nearly anything while appearing very specific about one thing and one thing only, and by hiding the implementation of that judgment they leave no way to argue with it.  There is no such thing as a personality: a person is the sum of their mind and the actions derived from it.  You cannot ascribe someone a personality such that, if they do something that is "not like them," they're being fake or somehow not being themselves.  Whatever the circumstances, they are merely exhibiting a decision-making pattern you haven't previously observed or were otherwise unaware of.  It is the same person, ergo they are the same person.  This idea that we can understand someone else, ascribe them a simplified model that will predict their behavior, and then expect that behavior from them is disgusting.  People are very complex- one person is far more complex than the sum of all of their understandings of other people, much less someone else's understanding of them.  It can't be your "personality" that you like coffee, such that you're doing something bad when you don't drink coffee.  The drive to be consistent is not a natural one- it's a societal stamp mark on the inside of your brain that tells you to be simple so that others can understand you better.  But who gives a flying shit about whether other people understand you?  Do what you want!  If you wake up and wonder whether eggs scrambled with cocoa and baking soda taste good with ketchup, then go right ahead and try it!  It doesn't have to be your personality that you eat weird things- it's just something you want to do, so you do it.  That's a bit of a weird example, but it holds.  Why we don't expect one another to do what we want is just beyond me, especially in our day and age with so many options available.  There are all manner of stigmas against jocks, nerds, cheerleaders, sluts, you name it; there's a stereotype that someone wants to slot you into.  So, how about, just to screw with them, you completely break their model of the world by totally not fitting into the mold they would like you to fit.  Just for fun.

So here’s the question.  “Permit nothing to cleave to you that is not your own, and nothing to grow upon you that will give you agony when it is torn away.”  The idea here is that you are your own pursuits and not permitting external people or objects to influence you or your goals.  This is both a warning against addictions of all forms, perhaps especially social ones, and a caveat emptor for everything you allow into your life.  You control your personal sphere- to the best of your ability at least.  It is your responsibility and nobody else’s to make sure that only elements you want are a part of your life, and it’s your duty to yourself to safeguard the vaults against the thieves that would seek to plunder your wealth.

I have something to say about victimization here.  Blaming the victim for a crime committed against them is the original scam.  It is the classical attempt to cheat and then get away with it, and the more serious the crime, the more potent a tactic it becomes.  The idea that you control your person means that yes, to a degree, you are responsible if something bad happens to you.  There are precautions you could have taken, etc. etc.  No matter the event, there are always choices you could have made to avoid the outcome that you deem makes you a victim.  However, part of actually being in control is that you are never a "victim" of other people's choices or actions, because the very idea implies that you aren't actually in control.  So you are only actually a victim when the aggressor has actively applied intelligence to disable, short-circuit, or otherwise evade whatever defenses or precautions you have taken against being taken advantage of.  Think of it like this: if you're on a desert island and a bear comes and steals your food, then you're a victim.  But you could have done any number of things to prevent your food from being stolen, such as hanging it from a tree, out of reach.  The bear is fundamentally at fault here (I don't believe in the conventional idea of "blame" either, so this explanation might be a little awkward without that background, but I'll have to go on anyway), but that doesn't mean you can sit there and rage about how that damn bear has made you a victim.  Your actions, to the degree that you invested resources to prevent an undesirable outcome, resulted in some probability of that undesirable outcome occurring- a risk.  Now, there are obviously far too many *possible* risks to address, but we can exercise our reason to determine which ones we need to address, which ones are worthwhile to address, and which ones we can safely ignore.  If you ignore a risk you should not have, then you are responsible for that mistake, even if you aren't the acting agent of the aggression committed.  A bear is too animate.  Let's go with physics.  You leave your food outside for a long time, and it rots.  Well?  You are responsible because you misjudged the risk of it rotting, didn't take sufficient precautions, and now your food is gone.  In this case, there is no aggressor at all- it's you against the laws of physics, but the situation is exactly identical.  You can mope around claiming to be a victim, perhaps go to the government and demand that your food be replaced…  yada yada.  Now, I absolutely do not want this concept of judgment and addressing of risk to be confused with actually blaming the victim as the active agent in their own victimization.  These are completely different concepts.  An agent acting in a way that is exploitative of another agent is doing so because their incentives line up appropriately to make that a course of action they find acceptable.  The idea of punishing them is to tip these scales enough that it is no longer economical to exploit others.  There is of course the problem of giving the power of retribution to whom, exactly, which I won't go into here because this isn't a post about anarchism.  The reason why you can't have the punishment be merely equal to the crime committed (remove connotations of law or government) is that the probability of capture is never 100%.  Let's say a thief steals purses.
If he gets caught 50% of the time, but each time he's caught he only has to return the amount he stole, then it doesn't really change the thief's decision-making circumstances much.  However, if the cost is losing a hand, then the thief will think twice before stealing that purse, because there would need to be a lot of money in there to justify a 50% chance, or even a 1% chance, of losing a hand.  Now, the funny thing about punishment is that you also have to account for a certain probability of false positives.  So if an innocent man is accused of stealing that purse and gets his hand cut off, well, that's pretty damn unjust, isn't it?  So we have to scale back the punishment until it is enough to stop thieves while being acceptable to the innocents based on the risk of being hit with that false positive.  Keep in mind that we are assuming the populace has a say in what the punishments are.  If you're a totalitarian government, you couldn't give a damn what the civvies say, and drastic punishments make sense because they mean less crime you have to deal with, freeing up resources to put towards your own ends.  Draconian methods of control are, pound for pound, more efficient in terms of resources spent versus results achieved.  Their main problem, in fact, is that they are so efficient that they make life a living hell for nearly everyone.
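To make the expected-value reasoning above concrete, here is a minimal sketch.  The 50% capture rate comes from the example; the loot value and the cost assigned to losing a hand are hypothetical placeholders, not claims about real penalties.

```python
# Expected value of stealing a purse under two punishment regimes.
# Numbers are hypothetical placeholders to illustrate the argument above.

p_caught = 0.5        # probability the thief is caught (from the example)
loot = 100.0          # value of the stolen purse (placeholder)

# Regime 1: if caught, the thief merely returns what he stole (nets 0).
ev_restitution = (1 - p_caught) * loot + p_caught * 0.0

# Regime 2: if caught, the thief loses a hand -- an enormous cost,
# represented here by an arbitrary -10,000.
hand_cost = 10_000.0
ev_draconian = (1 - p_caught) * loot + p_caught * (-hand_cost)

print(f"Restitution only: expected gain {ev_restitution:+.0f}")  # +50, theft still pays
print(f"Hand removal:     expected gain {ev_draconian:+.0f}")    # -4950, theft stops
```

The same structure also shows why false positives force the penalty back down: an innocent person faces the second regime's downside without ever having had the first regime's upside.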

After that long digression, back to the main issue.  If you're simply enjoying another person's presence, then there's no further expectation in the matter.  If they leave, you're no longer enjoying their presence.  You start to run into problems when you ascribe ultimate value to people or objects, because you can't unlink ultimate value as long as you actually perceive it as "the ultimate good in the whole universe."  Now we run into a very controversial edge case when dealing with the loss of loved ones.  I say it's an edge case because it doesn't happen very often relative to our lifetimes.  We're not losing loved ones every other week.  A model focused primarily on dealing with the death of the most intimate friends would be a model built around a rare event (I will not say "and family" because if your family are not your close friends then why are you with them?).  You know what, I'm going to elaborate on that parenthetical thought.  Your family, especially your nuclear family such as parents and immediate siblings, are people.  You know them for longer, and have more opportunity to become very good friends with them, and when you're a child there is a certain amount of not-having-a-choice in the matter that forces you to make friends or make war, and rational individuals choose the former in all but the most extreme circumstances.  So there are just very close friends.  The fact that you're biologically related is of no philosophical significance whatsoever.  Medical significance, yes, but only because knowledge of your family's genes can be used to deduce your genes.  Social significance, of course not.  So I will treat the death of family as the death of friends who were as close as family members.

Now, to be honest, this is a topic I'm reluctant to beat to death with my usual methods, because there may be readers who have such a powerful subjective experience of the matter that I will waste my time if I try to dismiss the bits that require dismissal, focus in on what is significant, and use it to build up a new model that more accurately fits reality and rationality.  We have arrived at the idea that being with people is something you do for yourself, but it seems like lunacy to say that the death of a loved one shouldn't hurt because you aren't able to enjoy their presence any more.  That's just not strong enough, right?  But isn't that exactly what mourning is?  You won't speak to that person again, or see them, or talk to them, or whatever else.  If you could do those things then you wouldn't care if they were technically dead- that's just a cessation of some bodily functions.  If they could die and leave the person intact, now wouldn't that be a wonderful thing- you wouldn't have to worry about death.  This is a fairly direct deduction for most people, but the realization that the physical death isn't the source of their trouble, that it is the results of the event of death they're mourning, is not.  Many religions exploit this weakness in thinking to interject "But life does continue after death!" and then come the explanations, the fairy tales, and the bullshit that follows.  They are careful, however, to always exclude the very functionality that death precludes, because they are unable to provide it.  They can't help you talk to your dead loved ones, so they hide them away somewhere as ghosts, or in heaven where you will go, too, once you die.
The intuitive universality of the death process makes this nearly logical, except that a slight elaboration can add a significant degree of control over the behavior of the people who want to believe.  And some of the crueler religions take advantage of exactly these people, making this continuation after death conditional upon how you lived your life, upon exactly prescribed behaviors.  The most common trick is to exploit vague semantic identities such as "good" and "bad," which enable retroactive changes to what exactly those conditions are, allowing live updating of the believers' behavior based on what is expedient at the time.  I'm always amazed and fascinated at the complexity of religion as an organism, and the huge potential that religion proves memes have as a life form.

I am not suggesting that you shouldn't feel pain- what a ridiculous assertion for a stoic.  The idea is that pain, like other sensations or emotions, is there to help you, not govern you.  If you felt fear and were unable to do anything else but freeze up, curl up into the fetal position, and pray, then what use is that?  For animals like the possum, it is an irresistible instinctive reaction programmed into them because in 99% of cases (at least in the genes' experience) this is an effective defense mechanism, and giving the possum control over the matter would just screw up the system.  This isn't strictly accurate, because possums evolved their primary featureset in the time before memetic delegation had been "invented" by evolutionary processes.  The application of reason is itself a major feature of humanity, and quite novel in genetic terms.  If you wanted to be truly biological about it, you can look at memetic evolution as the ultimate genetic trick, but the problem is that it is so effective it makes genes obsolete.  Also, intelligence is so effective that genetic evolution can't keep up with the rate of change.  For the pertinent example, we have invented cars and now they're everywhere.  And now possums, with their very effective defense mechanism of freezing up when afraid, get run over by speeding cars, and the genes can't un-wire that feature for the new environment because they aren't able to perceive and judge.

I would like to say, though, that genes are definitely alive.  Not just in the sense that a person is alive, but the genome of HUMANS as a whole is alive, in a strange informational amalgamation of the genes in every person, in a way that we really can't quite comprehend because there are too many people, too much noise, and too much uncertainty about genes themselves.  The day that we truly understand genes completely, we won't need them anymore, because we'll be able to construct our own biological machines to any specification or design we like.  They're just like any other machine, but far more complicated and sophisticated- especially in the organic ability to reproduce.

Interestingly, though, the body is itself one of the few things that we are currently unable to separate our selves from.  Some can conceive of what that might be like, and most of them have it wrong (I guarantee that I do too, but my conception is more complete than most, at least).  Note that the objective is to separate your self from as much as possible of what you don't want, of that which subtracts from your good or your happiness.  I would argue that, for as long as it works, your body adds immensely to that happiness.  And insofar as it doesn't, it subtracts immensely.  So an ability to perfectly fix the human body, a hypothetical perfect medicine, would obsolete the need for mechanical bodies unless their features were so far beyond those of a human body (which is the case) that you could get even more out of one.  Probably the main advantages are the ability to add processing power and memory, and the ability to have direct inputs.  Anyway, permit nothing to cleave to you that is not your own.  I am not my body, but insofar as I use it, rely upon it, and wish to keep it, it is mine.

So if I don’t even value my own body enough to want to keep it, what does that mean?  Well, I never said that I didn’t value my body, just that the value it provides is of the material sort, similar to eating a burrito, except that instead of the satisfaction of the burrito, my body contains the hardware necessary to eat the burrito, and without it any sort of gustatory satisfaction would be impossible (not strictly true- a perfect simulation of the experience is an identity).  This is similar to having a computer.  The computer in and of itself doesn’t actually provide a whole lot of satisfaction, but the things you can do with it will.  Perhaps the computer hardware hobbyists who make it a point of pride to have the best possible machine wired up in the best possible configuration get significant enjoyment out of simply possessing the hardware itself.  However, even with that example, we see parallels with the human body, such as with fitness junkies who make it a point of pride to have bodies sculpted out of steel, and enjoy simply having it.  Important note: most of these “fitness junkies” are doing it because of other people, not because they genuinely enjoy it, or because they even want the results.  And they get further conflicted by the fact that they are causing a change, which might conflict with their perception of themselves, or with others’ perceptions, and for some reason they’re anxious to step outside of that box.

Anyway, my entire point is quite simple, as usual, but it’s dressed up with many trimmings like mirrors in every corner of the room to show off the gleam on the little gem in the middle.  The idea that you should be dependent on others, the idea that that constitutes good social practices, the concept of a social personality, all of these things are foisted upon us because others had them foisted upon them.  We are the monkeys conditioned not to reach for the bananas within our reach because someone, at some point in the past, was punished for trying.  So now we have to live with everyone else.  But the most vital point is this: they don’t matter.  If you want to reach for that banana, they could physically stop you, but if they do then you have a clear and objective obstacle in your way, which can be overcome, instead of the hazy, confusing aimlessness of contradiction.

Impulsiveness

Is impulsiveness a desirable characteristic?  I am a categorical thinker- I like to think about things before I do them.  However, as part of that thought process it's important to be able to suspend thought when necessary.  As such, whether or not impulsiveness has a place in the repertoire of the contemporary rationalist is an interesting question.  Firstly, we need to look at where impulsiveness is typically used.  Impulsiveness is often associated with interpersonal exchanges, with social people and people who enjoy parties.  It is strongly dissociated from business or financial decisions, with some exceptions such as small purchases and gambling.  So while common-sense thought acknowledges that impulsive action is improper for weighty decisions, for more trivial matters it helps a great deal.

Before we get into the topic, we need to make some distinctions.  There is impulsiveness and then there is recklessness.  The way I conceive of the terms, impulsiveness is thinking of an action and allowing it to proceed into reality without too much analysis.  Recklessness, on the other hand, implies a full knowledge of the action beforehand, but doing it in spite of your analysis that it is foolhardy.  I will talk about both, but first let's cover the less complex issue of impulsiveness.  In social situations, impulsiveness is a great aid because you can't think too much about what you're going to say.  There are a large number of very smart people who have difficulty in social situations because they don't realize that their strategy for dealing with reality is not universally applicable- it needs to be changed to fit their needs of the moment.  When I was a kid I was like this.  I have since learned to apply rationality pragmatically and completely, and I can piece together the solution to such puzzles.  Basically, if you think too much about what you're going to say, you give an unnatural amount of weight to the moments when you do speak.  So unless you're able to spout endless amounts of deep, profound thoughts, invariably you're going to be putting a lot of weight behind fairly trivial statements, and the inconsistency comes across as awkward.  Impulsiveness decreases the weight of what you're saying and gives it a sort of throwaway character, which helps you in a number of ways.  Firstly, if something doesn't work out, nobody really notices, and you can keep going with whatever suits you.  Secondly, it puts you in a more dominant position of just saying whatever you feel like saying.  You aren't vetting your thoughts to check whether the rest of the group will approve.  This brings us to the second flaw in the introverted thinker's social rut: the fact that they are attempting to apply thought to the situation in order to do better, and it shows very obviously to the rest of the group.  This is a complex point that I can't encapsulate in one post, but basically any attempt to earn approval guarantees denial of it in direct proportion to the effort spent.  The introverted thinker's goal is to earn approval, and his model for deciding what to say is, logically, fixed upon achieving that goal.  While their intentions are good, their entire approach rests on so many incorrect assumptions that they aren't even capable of recognizing that their whole paradigm is nonfunctional.  They just dive right back in with an "it must work" attitude instead of reworking from first principles.

Impulsiveness is also a pragmatic tool to be used liberally in situations of doubt.  When it is clear that hesitation will cost more than immediate action, you have to go.  When I was younger I had this model of "going for help" which essentially contained the idea that help was distant.  So "going for help" would take a long time, and there was a significant chance that the window would close for whatever the situation was.  So my primary course would have been to just go do it myself.  This is an incorrect application of impulsiveness because of incorrect information.  A proper application of impulsiveness might be, for example: you are handed a test with 100 four-answer multiple choice questions, and you have 100 seconds.  There is no way you could conceivably cover even a quarter of the questions if you legitimately tried to answer them.  However, if you guess randomly you have a 1 in 4 chance on each question, so over 100 questions you should expect about 25 correct.  This is clearly your best strategy given the rules of the game.  You have concluded that the best strategy is to suspend rational inquiry into each question because it is simply not worthwhile.  You wouldn't work for an hour to earn a penny, and you wouldn't think for X seconds per question.
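
In case it helps to see the arithmetic laid out, here's a tiny Python sketch of the guessing strategy- the numbers are just the ones from the example above, nothing more.

```python
# Expected score from pure guessing on a four-option multiple choice test.
num_questions = 100
options_per_question = 4

p_correct = 1 / options_per_question        # 0.25 chance per question
expected_score = num_questions * p_correct  # linearity of expectation

print(f"Expected correct answers from blind guessing: {expected_score:.0f}")  # 25
```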

The other fallacy that makes impulsiveness distasteful to many is the idea that the answer actually matters.  With our test example, you don't actually care what the answer to any given question is; you have all the information needed to create a sufficient strategy.  For social impulsiveness, the simple fact of the matter is that your actions really don't matter that much.  Provided you don't do anything truly inappropriate, at least.  The, and I use this term very reluctantly, "antisocial nerds" ascribe a great deal of value to their interactions and to what each party says.  This is a misunderstanding of the nature of the communication.  The actual content is unimportant.  Nobody cares if you're talking about the weather, cars, or anything else.  True, this doesn't make logical sense, and in a perfect world people would communicate usefully instead of feeding their egos with the fact that they're talking to people.  Most of the "extroverts" are pleased by the fact that they're talking to people, and are anxious when seen by themselves- this mentality is communicated to introverts and affects them quite adversely, because introverts prefer to be alone for some part of their day and may come to believe that there is something wrong with them.  Don't buy it, please.  The people who *need* to be around others to validate themselves are the unstable ones.  It's similar to the way men and women treat sex.  Men are usually sexually insensitive and are more pleased by the fact that they are having sex than by the sex itself.  They are usually seeking validation from society instead of their own enjoyment.  Of course, most women can pick up on this immediately, and they would prefer not to be some boy's tool for self-validation.  Women, you aren't off the hook- you do the same thing, just not with sex.  Instead, you get validation from men paying attention to you while others are watching.  Don't get me wrong, it goes both ways.  Some women perceive that they get validation from having lots of sex, and some men get validation from attention from women; they're just not as common as the other way around.  Impulsiveness as a concept is often bundled together with these behaviors which, although nobody really knows why, are widely believed to be "creepy."  That's just not the case.

Now, recklessness is a whole 'nother can of worms.  Doing something that you know to be crazy, or doing something because it's crazy, has a completely different backing behind it.  Most reckless people do it because the cost of the reckless action is balanced or outweighed by the enjoyment or rush they get from it.  This is the same mechanism that makes skydiving fun, even though skydiving is actually reasonably safe.  If you had a significant chance of dying, you wouldn't be able to sell it to people as a recreational activity without some serious social pressure backing it up.  Ziplining is another example- deaths are vanishingly rare, yet we perceive it to be dangerous and enjoy a rush from it.  There is, however, a time when outright reckless behavior can be a rational course of action.  Usually these circumstances fall into two categories, though: 1) you're trying to make other people/agents believe you're reckless, or 2) direct and/or thought-out strategies can be expected or countered easily, or are otherwise rendered ineffective.

Category 1 is the more common of the two and can potentially occur in any game or strategic situation.  Essentially your strategy is to do something stupid in the hope that your enemy will misjudge your tactics or your capabilities, enabling you to take greater advantage later on, or in the long run.  In poker, it is sometimes a good thing to get caught bluffing.  That way, next time you have a monster hand your opponent might believe you’re actually bluffing.  If you’ve never been caught bluffing before, they would be much more likely to believe you actually have a hand and fold.  Obviously, if you get caught bluffing enough times that it seriously impacts your pile of chips, you’re just bad at poker, but a single tactical loss can be later utilized to strategic advantage.

Category 2 is much more interesting.  Let's take a game like Total Annihilation.  By the way, TA: Spring is totally free and open source, and it's easily a contender for the greatest strategy game ever made.  Although it's not fundamentally that complicated, there is no in-game help, so it can be very confusing for new players.  Feel free to log in to the multiplayer server and just ask for a training game- after one or two you should be up to speed and ready to play for real.  Anyway, in Total Annihilation, at least in the more standard-fare mods (there are dozens if not hundreds), there are huge weapons that deal death on a massive scale and can pose a serious threat to the opposition in and of themselves.  Things like nukes, long range artillery, giant experimental robots (and you can FPS any unit, bwahaha!!), etc. etc.  The construction of one such piece can actually end the game if it stands uncountered or undestroyed for too long.  However, each has a counter, and the counters range in effectiveness.  For example, anti-nuke protects a fairly large area, but if you throw two nukes at it, it can only handle one.  Shields protect against long range artillery, but they have a small area and cost a lot to run, and so on.  Now, a calculating player can probably figure out the ideal choice for the opponent in a given situation.  If he's focusing all his stuff in one place, he may as well get both shields and anti-nuke, but the other player(s) could then steal the whole map.  If he goes for the whole map himself, the other player would probably get air units to attack his sparsely defended holdings.  If he consolidates in a few carefully chosen locations, nukes might be in order, and so on.

This is where we get to the recklessness-as-tool element.  Potentially the greatest advantage in complex games of strategy is surprise, or doing something that the enemy did not expect and must react to.  Ideally the enemy has limited ability to reorganize to counter the new threat.  This is true of real-world military action- there are issues with communication, chaos, and a host of other factors that make reacting quickly difficult.  The more resources sunk into the threat, the more resources will be necessary to counter it (assuming that the attacker isn't just stupid).  The Manhattan Project, for example, would never have been started if the enemy could have rendered nuclear weapons impotent by putting horseshoes on all their doors.  Now let's say we have a game of TA where it would be obvious that hitting the enemy with a nuke would be the best course of action.  Of course, this same idea will have occurred to the person about to get nuked.  OK, so then big guns are the best strategy.  Except that your opponent can think of that, too, because he might guess you're not going to use nukes precisely because it's too obvious.  And so on through all the possible options: whatever one can think of, the other can too.  Whatever strategy you might use to maximize your utility can be equally thought of by the enemy.  We are dealing with a perfectly constrained system.

But what if we de-constrained the system just a little bit?  We remove the rule that says we must maximize value.  Now we could feasibly do anything, up to and including nuking ourselves.  So we need a different rule in its place, because now we're working with a screwed up and dysfunctional model.  This is where the trick is: you might still have a meta-model of maximizing value in your selection of an alternate strategy, meaning you will be just as predictable, albeit through a much more complex algorithm.  No, you have to truly discard the value-maximizing paradigm in order to get the additional value from surprise, and the trick is to not lose so much that you end up behind even after your surprise factor is added in.
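
One hedged way to make that tradeoff concrete is a mixed-strategy sketch: rather than always playing the single best option (which the opponent can predict), you sample from a weighted menu so that real uncertainty remains while the expected loss stays small.  The strategy names, point values, and weights below are entirely invented for illustration- nothing here comes from Total Annihilation itself.

```python
import random

# Invented payoffs for each option, in arbitrary "value" points.
strategies = {"nuke": 100, "artillery": 90, "air raid": 80, "nuke yourself": -100}

# Invented sampling weights: most of the mass stays on strong options,
# but the opponent can no longer be certain which one is coming.
weights = {"nuke": 0.5, "artillery": 0.3, "air raid": 0.2, "nuke yourself": 0.0}

choice = random.choices(list(weights), weights=list(weights.values()))[0]
expected_value = sum(weights[s] * strategies[s] for s in strategies)

print(choice)          # unpredictable from the opponent's point of view
print(expected_value)  # 93.0: only 7 points below always playing the "best" move
```

Whether the seven points you give up are cheaper than what surprise buys you is exactly the judgment call the paragraph above is describing.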

My problem here is that I'm trying to reduce a complex and multi-dimensional strategic game to a single aspect under consideration.  My other problem is that many of you will never have heard of Total Annihilation.  The same idea applies to more or less any other sufficiently complex game, such as Starcraft, but in most modern games value translates too directly into victory for such meta-strategies to matter much.  If you have more troops, or the right kind of troops, you win.  If you're behind, you're behind, and there's not a lot you can do about it other than try harder at what you were doing before.  So while surprise might give you some advantage, it's probably not worth falling behind to get it.  Careful application of force certainly helps, but it's not as vital as in Supreme Commander or Total Annihilation.  No, I'm not harping on about the games in question or demanding that you play them, I'm just sharing my particular taste in video games.

Impulsiveness once again.  I seem to be digressing more and more these days.  Basically what I'm trying to communicate is that in some situations (games, to use the theoretical term) the act of analysis must be taken into consideration in your planning.  How much time can you spend analyzing, what should you be analyzing, how is the enemy thinking, etc. etc.  Once you bring the act of thinking into the purview of strategic considerations, impulsiveness becomes one option for a viable strategy- an option that just does not occur to someone who cannot conceive of the act of thinking as a strategic concern.  They implicitly believe that life is a game of perfect information with unlimited time for a given move.  The truth is, you're acting when you decide what to do, and that act will have an effect on the world and on the results you get.  There are lots of proverbs about hesitation, but they don't seem to extend to when to think and when to just act.  On the whole, I think most people have an implicit understanding of this type of decision making- it comes pre-packaged with the HBrain OS- but they haven't really considered exactly what it is they're doing on a consistent basis.  I'm just here to point it out, so those who haven't can read about it and be provoked into considering it.

The St. Petersburg Paradox

I'm in more of a mathematical mood right now, so I'm going to cover a piece of abstract mathematics: the St. Petersburg Paradox.  It's a famous problem- you can look it up on Wikipedia for more detail- but here's a short summary.  Imagine we have a game of flipping a coin.  Starting at $1, every time the coin lands heads, you double that amount.  When it eventually lands tails, you win however much you have earned so far.  How much should it cost to play?

Now I very much enjoy this problem in a pure mathematical sense, but Daniel Bernoulli, who gave the paradox its classic treatment (the problem was first posed by his cousin Nicolas), apparently took the mathematics of it rather too far.  Bernoulli noticed, as the more astute among you probably either deduced or already knew, that the game's expected value is in fact infinite.  This means that no matter what the cost to play, you should always accept.  However, most people wouldn't pay even $50 to play this game.  Bernoulli's response was to derive, on mathematical grounds, a utility function that would explain this behavior using a logarithmic idea of value.  He supposed that people's valuation of money decreases as the amount of money they possess increases, or to use another term, he proposed a diminishing marginal utility function for money.  While this approach, I guess, works, the even more astute among you will have noticed that it doesn't actually solve the paradox.  You can just construct a new game whose payoffs grow as the inverse of whatever utility function you pick and still end up with an infinite expected value that nobody will pay for.  Other mathematicians have wrestled with this problem, and so far the conclusion, as far as I am aware, is that utility must be bounded in order to resolve this type of paradox.
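
For anyone who wants to poke at the numbers, here's a minimal Python sketch of the game, its diverging expectation, and Bernoulli's log-utility fix.  The function name, round count, and truncation point are my own arbitrary choices, not anything canonical.

```python
import math
import random

def play_once():
    """One round of the St. Petersburg game: the pot starts at $1 and doubles
    on every heads; the first tails ends the round and pays out the pot."""
    pot = 1
    while random.random() < 0.5:   # heads with probability 1/2
        pot *= 2
    return pot

# Term-by-term expected value: P(first tails on flip k) = 1/2^k with payout
# 2^(k-1), so every term contributes exactly 1/2 and the series diverges.
print([k * 0.5 for k in (10, 20, 40)])   # partial sums: 5.0, 10.0, 20.0, ...

# A Monte Carlo average keeps creeping upward instead of settling, which is
# what an infinite expectation looks like in practice.
rounds = 100_000
print(sum(play_once() for _ in range(rounds)) / rounds)

# Bernoulli's log-utility fix: the same series in log dollars converges.
print(sum((0.5 ** k) * math.log(2 ** (k - 1)) for k in range(1, 200)))  # ~0.693
```

The last line is the expected utility under his logarithmic valuation- a small, finite number, which is the whole point of his fix.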

Now, I am not a professional mathematician, but I believe that I have solved this paradox.  Simply put, all these mathematicians have been assuming that people share the same conception of reality that they themselves are working with: a mathematical one.  These mathematicians have assumed that people think of money as a number.  That seems obvious, right?  Money is measured numerically.  Well, yes, but the fact that different people value money and other commodities differently means that it isn't simply a number.  Numbers are objective, inherently.  Two people must categorically agree that a 7 is a 7- it always was, is, and will be 7- and that 7 = 7, which also equals 6 + 1 and an infinitude of other identities.  However, we all know that two people might have differing opinions of various exchanges, such as $3 for a mango, for example.  Someone who loves mangoes might buy at that price; someone who doesn't, won't.  So we can't say that $3 = 1 mango in the same way that we can say that 7 = 7, even if all mangoes in the world were always bought and sold for that price.

The issue here is that these mathematicians, while brilliant direct deductive thinkers, think of the universe in a flatly rational way.  While this is probably the best single perspective through which to view the universe, it fails when dealing with people who lack a similar rational strictness.  Have you ever been beaten by someone at a game you were clearly better at, simply because the other player just refused to play "properly"?  This happens all the time in poker and numerous gambling or card games.  In games like chess this rarely happens, because in a game of perfect information "proper" play can be categorically proven to be superior during the game itself.  If it would result in a bad situation, then it isn't proper play.  Where information is limited, "proper" play might land you in situations you couldn't predict or prevent.  Anyway, a more textured view would allow for nonlinear and unconventional conceptual modes for perceiving the universe.  For example, perhaps a certain subsection of people conceive of money as power.  The actual number isn't as relevant as the power it holds to create exchanges.  The numbers are negotiable based on the situation and on the value sets of the parties involved.  So the St. Petersburg Paradox could be equally resolved by saying that power doesn't scale in the same way that money does.  If you offered someone a utility function of power, it would mean nothing.  Power is not infinitely reducible: the ability to do something doesn't blend seamlessly into the ability to do something else.  The atomic unit of power is much larger than the infinitely fine divisions between any given numbers.  Having ten very small amounts of additional power is also not the same thing as one very large new executive power.

People can link together abstractions and concepts in many, many different ways.  For example, some successful investors say that instead of looking at your money like it’s your fruit, look at it like your bag of seed with which to grow more seeds.  True, you’re going to have to sell some of those seeds to get what you need, but its purpose is to grow.  As you accumulate more and more, the amount you can draw off increases while still maintaining useful volume.  This gives a completely different outlook on money, and will generate different decision behavior than looking at money as something to be spent as it is earned.  This same principle can apply anywhere at all, because in order for something to exist in your perceptual map, you have to think about it.  You might think of movies like books that have been converted, like picture books, like snatches of real-life experience, like a sequence of scenes strung together like string being tied together, or like a strip that runs through its full length in only one direction the same way every time.  There are other possibilities of course, but that’s as many as I could think of while I was in the process of typing this post.  This is only looking at a small slice of the possibilities of conceptual remapping (analogues and analogies, specifically) but other forms would require a great deal more explanation.  I think you get the point though.

Back to mathematicians and the St. Petersburg Paradox.  The paradox only exists if you look at utility in the mathematical sense.  There exist models, such as the one that "common sense" seems to indicate, that don't see a paradox.  These models instead see a game with a sliding scale of value, where beyond a certain point the value is zero (or negligible).  This gradual fading of value explains why different people would agree to play the game at different prices.  I don't think even the most hardcore mathematician would play the game for $1 million a round, even though in expectation it will eventually pay for itself.  The utility solution fails to take into account the common sense evaluation of time and effort as factors in any given activity.  You could factor in such an evaluation, but you would probably then be missing something else, and so on, until you have built up a complete map of common sense and the shared perceptual map of the most common conceptual space.  But then you have duplicated the entire structure you're attempting to model and created a simulation instead of a simplification.
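
To see what that sliding scale does to the price of the game, here's a toy Python calculation that treats any payout beyond some credibility cap as worth no more than the cap itself.  The cap values are arbitrary illustrations, not claims about how anyone actually discounts money.

```python
def capped_expected_value(cap):
    """Expected value of the St. Petersburg game if payouts above `cap`
    are treated as being worth only `cap`."""
    total, k = 0.0, 1
    while True:
        payout = 2 ** (k - 1)          # payout if the first tails is on flip k
        prob = 0.5 ** k                # probability of that outcome
        total += prob * min(payout, cap)
        if payout >= cap:
            # Every later outcome is also capped; that tail sums to cap * 2^-k.
            total += cap * (0.5 ** k)
            return total
        k += 1

for cap in (100, 1_000_000, 10 ** 12):
    print(f"cap ${cap:,}: game worth about ${capped_expected_value(cap):.2f}")
```

Even with a million-dollar cap the game is worth only about eleven dollars, and a trillion-dollar cap only gets you to about twenty-one- numbers that line up far better with what people will actually pay than the unbounded expectation does.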

On simulations and conventional models: we currently use both.  Our simulations, however, tend to be based in the real world, and we refer to them as experiments.  This is how we collect evidence.  The problem with the natural universe is that there is such an unimaginable profusion of activity and information that we can't pick out any particular aspect to study.  An experiment controls all those extraneous factors, removing or minimizing them from a confusing universe so we can focus on a single test.  Once we have our results from that test we can move on to test another part of reality.  Eventually we will have built up a complete picture of what's going on.  Simulations are data overkill from which we can draw inductive conclusions, because we don't understand all the underlying mechanics.  Models are streamlined flows, as simple and spare as possible, which we can use to draw deductive conclusions.  For example, the equation for the displacement of a falling object [d = v0*t + (1/2)*a*t^2] is a simplified model, stripping out every factor other than the ones being considered, allowing us to deductively conclude the displacement for any values of v0, t, and a.  Mathematical conclusions are a sequence of deductive operations, both to make mathematical proofs and to solve/apply any given instance of an equation/expression/situation/etc.
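
Here's a small Python illustration of that model/simulation split, using the falling-object equation above: the closed-form model gives the answer in one deduction, while the brute-force simulation grinds through thousands of tiny time steps to converge on the same number.  The function names and step size are just my choices for the sketch.

```python
def model_displacement(v0, a, t):
    """Closed-form model: d = v0*t + (1/2)*a*t^2, everything else thrown away."""
    return v0 * t + 0.5 * a * t ** 2

def simulate_displacement(v0, a, t, dt=1e-4):
    """Brute-force simulation: step position and velocity forward in time."""
    d, v, elapsed = 0.0, v0, 0.0
    while elapsed < t:
        d += v * dt        # advance position by the current velocity
        v += a * dt        # advance velocity by the acceleration
        elapsed += dt
    return d

print(model_displacement(0.0, 9.8, 2.0))     # 19.6 exactly, in one step
print(simulate_displacement(0.0, 9.8, 2.0))  # ~19.6, after thousands of tiny steps
```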

Our minds operate on the most basic level using models primarily, and simulations second.  This is because most of the time, a model is close enough.  You don’t need to include every factor in order to get an answer at sufficient precision.  You don’t have to factor in the time, the temperature, or the quantum wobble of each atom in a baseball to figure out where it’s going to land.  If you wanted a perfect answer you could simulate it, but you can get it to an extremely high level of precision by simply ignoring all those marginal factors.  They are not worth computing.  Now we are beginning to factor in the distinction I’ve brought up before between algorithms and heuristics.  Models are often heuristics, and simulations are often algorithms.  Models can include algorithms and simulations can include heuristics, but on the whole a simulation (given correct laws and good starting conditions) will algorithmically compute exactly what is going to happen.  A model, on the other hand, is a much more efficient process that throws away data in order to make calculation simpler.  Usually a lot simpler.

Now I am willing to bet that some readers will be confused.  I just said that simulations need the right laws and starting conditions- isn't that the same thing as a deductive process needing the right logical framework and initial premises?  Well, yes.  That's because a logical construct is a simulation.  However, it is a simulation constructed from information that has already been stripped of extraneous detail by making a model of it.  The line between model and simulation is not black and white- they are simply approximate labels for the extremes of a spectrum, with conflicting ideals.  The perfect model is one law that determines everything.  The perfect simulation is a colossal data stream that represents everything, down to the last spin on the last electron.  This is also where we get the fundamental distinction between philosophers: the conflict of rationalism versus empiricism.  The rationalists believe the model to be the "one true philosophical medium" and the empiricists believe it's better to use simulations.  The tricky part is that in order to construct a simulation, you have to have models to run each of its laws and each of its elements.  In order to have a model, you have to have a simulation to draw patterns from.  So we have an infinite recursion where rationalists and empiricists are chasing one another's coattails for all eternity.  Fortunately, most people who have thought about this much have come to more or less the same conclusion, and figured out that rationalism and empiricism go hand in hand quite nicely.  However, there is still a preference for choosing to understand the world through one mode or the other.

How does all this apply to the original issue of the St. Petersburg Paradox?  We have mathematicians who are definitely rationalists- I imagine there aren't many professional mathematicians who are empiricists.  And these mathematicians construct a model that represents a certain behavioral set.  Their problem, however, is that reality doesn't actually support the conclusion they are calling the most rational.  So they change the model, as they should, to better reflect reality.  All well and good.  Their problem, though, is that they are actually doing their job backwards in one concealed respect.  Implicit in their model is the assumption that the population they are describing shares the same conceptual map as the people who created the model.  I am aware that I could have simply said we have some ivory tower mathematicians who are out of touch with reality, but I wanted to cover in depth what the disconnect with reality is.  They are correcting their model by making it better reflect empirical reality in one respect, but in so doing they are simultaneously working in reverse by projecting assumptions from their meta-models onto reality.  We have rationalism and empiricism, simulations and models, inductive and deductive thinking, all chasing their dance partners around.  But the most vital thought is that the process must only go one way.  You must always push forward by correcting each to better fit the other as found in reality, rather than working backwards and assuming things onto reality which are not the case.  If you do this, and then entrench your position with a rationale, you are screwing up your meta-model of reality.  And, like a monkey with its hand caught in a banana trap, the tighter you squeeze your fist the more surely you get stuck.  For every ratchet backwards on the progress ladder, you get more and more firmly stuck in place, and it even gets harder to continue to go backwards.  The wheel spins freely one way and grinds to a halt in the other.

The Intelligence Process

I have generalized the scientific method, at least for my own use, because while the scientific method works perfectly for science there is as yet no model which ideally describes the application of intelligence against objective reality. Now, this basically is the scientific method, but factors in a number of elements which are useful to exclude in scientific discourse.

1.) Assumptions: Intelligent agents always begin from assumptions, and although there is nothing we can do about it, it’s not a bad place to start unless you use poor assumptions and do not recognize them as assumptions. Also includes circumstantial evidence about surroundings, self, etc. The initial information set at any reference point you choose.

2.) Deficit: Any set of assumptions will find a case or situation where information is lacking, possibly a method to do a certain thing, maybe a rule about the world, or perhaps unknown circumstantial information. Formally phrased this would end up something like a question, spurring the creation of a solution. This step is also significant in providing us with the drive to seek stimuli.

3.) Hypothesis: A solution/guess is derived based on assumptions, utilizes rational, predictive, and imaginative abilities. Given accurate starting information and sound methods, this result will be useful. Otherwise, it is suspect (although it may still be useful or accurate- by the “the moon is made of cheese, therefore the sky is blue” effect, it’s just not reliable).

4.) Ecology Check: The hypothesis is actually cross-checked with the assumptions before being tested against reality. This is, for example, why people who don’t like broccoli may decide not to eat broccoli. Without this step, there would be no reason to assume that you wouldn’t like broccoli now, regardless of how you thought it tasted yesterday. While not a strictly logical approach, this is usually an immensely useful heuristic process.

5.) Test: I have actually combined a number of the scientific method's steps here- steps like "prepare" and "procedure" are somewhat pointlessly specific, so I just rolled them into this step. The objective of the test is to analyze the effectiveness of a piece of information you have put into "sandbox mode" as a hypothesis. The reason for this is that you cannot test a deficit; you can only test positive information, which can be disproved. Statements like "there is no such thing as a goose" are disprovable- they are simply about the nonexistence of something rather than its existence, and all you need to do is find a goose. A statement that resists testing might be "a goose can transform into an elephant under some conditions," since you can never exhaust "some conditions." [sarcasm] Wow, that's helpful. [/sarcasm] Now, here's the rub. Testing is the most important part of intelligence, but at the same time it is the most liable to fail. It is inherently an inductive process, as I have said before. So statements like "all swans are white" cannot be proven authoritatively. They can, however, be disproven, by finding a swan that is not white (as there indeed are). Yet if you have seen a million swans and they were all white, and you have no reason to believe your field of swan observation has been constricted by some other factor, then you may conclude that all swans are white, and you would be quite rational in doing so. Provided that you recognize that you are making an assumption for practical purposes.

6.) Inference: The test only provides you with the data to make an analysis. Deciding what the test means is a whole ‘nother can of worms. In the case of our goose example, perhaps I’m a goose breeder who wants to grow a better goose. This is a subjective and situational step, so I’m just going to make something up here, but let’s say this here goose breeder is of the entrepreneurial variety and decides that because there are only white geese, if he could produce multicolor geese he would make a killing. Goose show spectators around the world would be shocked into buying spectrum geese at exorbitant prices. Now, even in this extremely short example, look at all the other factors and assumptions I brought to bear to determine what the meaning behind “there are only white geese” was. I needed some ideas about the nature of the world, the economy, my own experiences and tendencies, all these things which are a complete construction on top of the conclusion “all geese are white.”

7.) Compression: Another step which, while being illogical most of the time, is highly useful. Concept compression takes a number of forms, usually dependent on someone’s learning style. There are auditory learners, visual learners, kinesthetic learners, etc. etc. My experience is that each of these labels is an oversimplification. When I’m learning a method or a set of information I mainly gauge how familiar any given piece feels. This is extremely effective for nonlinear processes like abstraction, but extremely poor for rigorous linear processes, or arbitrary elements like rote memorization. If I have to give a presentation, I cannot memorize a script, and memorizing bullet points is even tougher. I can, however, just learn holistically about the entire topic to be covered and then just stream of consciousness about it and do quite well. Now, I have other methods for lots of different things, as do we all, but I’m reasonably sure that’s my main label. I have my own theories about how we label thoughts and sensory data, but that’s probably for another time. For now, I think we can agree that we don’t encode in memory the actual sensory data or concepts or ideas received/conceived/whatever, but actually a compressed interpretation of that information.

8.) Association: The issue with putting this step at number 8 is that association is the sole purpose of the neuron in the brain, so this is actually going on all the time, at every step along the way. Whenever you string two bits together in your brain you’re making an association, so the entire process itself is associating one step with the next. Also, anything that happens to be going on might be associated with the thoughts you had at the time, or maybe you’re connecting together two similar things, maybe tests you’ve made or hypotheses from different times, whatever. However I think this is the best place to put it because in the strictest sense, you can’t associate anything that you don’t remember, and you can’t remember something until it has been compressed. If you’ve ever done that experiment where you have to count the number of R’s in a sentence, but the question afterward asks about the number of H’s, or similar, you know what I’m talking about. You didn’t encode how many H’s there were. (Actually, to be proper, you didn’t encode the number of R’s either, you created a program on-the-fly that would increment a number whenever you saw R as you scanned the line, encoding a single number which is much more efficient. Encoding the number of R’s would be memorizing “there are 7 R’s in the sentence [blah]” which you probably didn’t do because it’s stupid and wasteful.)

9.) State Hook: This step has the same issue as association, in that you are experiencing some sort of state all the time; however, it goes after association because it is used as a sort of meta-tag on top of any inter-idea associations you may have made. If you have the association of press button->get candy conceptualized and ready to go- realizing that you can now have candy if you want it- then your current state, perhaps happiness, sadness, hunger, or other conditions (not necessarily related to your body), gets applied on top of it. If you wanted candy, for example, you'll get a state change, some different associations, and a different resulting behavior than if you had just eaten. For example, you might be more inclined to find that candy tasty.

10.) Framing: I'm wrapping up all the higher-level thinking into one big category, because you're basically just repeating this step over and over again to go from beliefs to values to paradigms or whatever else. Ascribing synthetic meaning to things is framing. Rearranging models or performing manipulations on your conceptions is performing operations by adding synthetic meaning to delete, replace, or augment bits. Naming something is a framing operation. Grouping things is a framing operation. Note the distinction between associating two things and grouping two things. When two things are associated, one might lead you to the other. However, a fir and a poplar can both be trees without the mention of firs causing you to think of poplars. There are also a number of interesting oddities in people's histories of associating groups with individual members, or maybe something else entirely. Free association: say "tree," and if they say "larch," then that's one model they have of the standard tree, perhaps representative of trees as a class to them.

11.) Confirmation: Any given piece of information has several stages to go through before it is really accepted, and some will always be more respected than others. This level of trust or integration is a full spectrum extending from violent opposition to devil’s advocate thought experiment to skepticism to acceptance to total faith. Your belief may increase due to emotional reaction, resonance, application, utility, or any of a number of other reasons. Healthy systems of thought will tend to eradicate false beliefs in one shot once they are disproved- systems that are unhealthy may have a tug-of-war with emotional reactions, etc. pulling in both or (god forbid) more than two directions. Persisting beliefs will tend to gradually increase in acceptance due to increased association and exposure, and extinct beliefs are just not even in your head anymore. I’m going to use this step as a placeholder for several significant levels of acceptance, to the point that a given piece of information is trusted/believed to 1. the same degree as your original thoughts, 2. the same as your perceptions, and 3. on the level with your beliefs.

12.) Utility: The function of intelligence to maximize its utility given a specific information set, defined by the previous steps.

13.) Morality: The function of intelligence to deduce and follow morality. The reason why this is a product of intelligence is that morality is simply the application of reciprocity in society to utility. Morality is doing what is best for everyone, an abstraction out from doing what is best for you, with the significant difference that morality is a higher level, and therefore guides and supersedes personal utility. On a slightly related note, arbitrary social laws are a hijacking of this function to no real benefit- or more commonly, to an impossibly small benefit at the expense of a potentially massive gain. If they did blatant harm they would be abandoned as corrupt and pointless by the lower-level and more powerful utility principles.

14.) Creation: Intelligence seeks to produce. Artistically, socially, culturally, whatever. We’re seeking to stimulate others’ perceptions and minds, satisfying the sensory deficit with the richest material we can produce because we want to experience also. This works even if nobody else existed in the world because the act of creation is a bottomless supply of auto-stimulation.

15.) Self-Actualization: Realizing your potential, from Maslow. Drives artists to be artists and accountants to suicide. Just kidding. Not everyone’s greatest potential is in direct creation of memetically or mentally stimulating material.

16.) Philosophy: Understanding and wisdom. The drive to understand ourselves, our world, our thoughts, everything. The problem is that we are like computers seeking to describe their own code. We can't do it, because every line of code used to help the computer understand its other code... is itself one more line of code requiring explanation. What makes me happy? What do I want, really? What should I do? If you had everything you could conceivably want- infinite utility, morality, etc.- then is life pointless? Why or why not? What would you do?

History Is Stranger Than Fiction

History is tricky business. Mostly because everyone has an angle, even if they don’t know it. Any source you can use to determine what exactly transpired must necessarily be suspect by hearsay procedures. Try this: get 10 people together and have someone walk into the group, say hi and some unusual phrase, then leave. Wait an hour, and then give each of the 10 people a questionnaire about the event asking questions like “what color shirt was the guy wearing” and “what was the first thing he said?” Depending on the impact each characteristic made on each person, you’ll get accounts that differ, perhaps significantly. Now factor in the telephone effect of one person telling someone else what happened, and them telling someone else, etc. etc… Plus, any source you use to verify or disprove another source is suspect under the same circumstances. Agreement does not necessarily constitute truth, in the same way that correlation does not imply causation. Determining objective history is a nightmare, as any court lawyer or historian will tell you.

However, we have reams of history books. You can go to any library and find volume after volume of books claiming to report "what happened" as far back as ancient Egypt, or even further. Hell, some books go so far as to claim truth based on evidence derived from oral history alone. We then apply our favorite "truthiness test" to determine if it's accurate or not. What I mean is that we go "does that sound realistic?" and if it does then we accept it as true, and if not then we disregard it. We read about oral history claiming that ancient Aborigines would play music on long wooden instruments and we go "yeah, OK, I can believe that." We read that their gods made it rain and today still control the future, and we go "OK, that's wrong." Basically we're using our own independent standards of truth to judge a historical artifact which may or may not be true, based upon evidence that may be faulty, derived from a telephone-game-obscured source, based on perceptions of people that cannot be trusted. History as we know it is flat-out fiction. I'm sure you already know that history changes as the world changes. We reinterpret events to suit our current situations. In World War II we didn't congratulate German princes for the original rise of secularity (albeit in the form of nationalism) in Europe; we transformed Nietzsche into a Nazi. China has a heavy hand in the interpretation of reality for its people, and American media has its tendrils deep into the ongoing perception of most Americans.

Now I'm not saying that "assisted perception" is necessarily evil- given the sheer volume of information in objective reality, we can't hope to handle even a minuscule fraction of it, so assistance is absolutely necessary. However, it is vital that we understand what we know, and differentiate it from what we think we know, what we don't know, and what we want to know. When you perceive something, there are a number of "mental protocols" that should be adopted, much like those for a patient being admitted into a hospital. You have to assume, based on the knowledge that some patients carry extremely contagious and horrific diseases, that everyone you admit does. Even though the precautions are often unnecessary, you can't know beforehand that they'll be unnecessary *this time*. Your dentist always wears gloves, always wraps everything in disposable plastic, etc. When you hear something, the process is similar. You put it in the clean room like a virologist with a sample and think "alright, this is the information presented." And, like a virologist, you need to confirm that this sample is based in objective reality, practical to adopt, accurate, useful, and consistent. In order to ascertain if this information is something you can use, you have to run a sizable battery of tests on it. There is an endless variety of tests- I'm sure you can easily think of more- but I'll just go over some critical, bare-bones examples here.

Test 1: Does this information set contradict itself, or contain internal inconsistencies, anywhere at all? This establishes that, insofar as and according to your current body of confirmed knowledge and powers of analysis, the information is logically consistent (although not necessarily sound, if an assumption is false).

Test 2: Does the information reduce to a set of organized principles, or is it derived from a predefined set of postulates? 2.1: If so, are these postulates consistent within themselves, 2.2: and with objective reality?

Test 3: When measured against objective reality as you currently conceive of it, is the set inconsistent or contradictory? Most people begin with this step, but use shoddy methods and internally contradictory conceptions of reality to begin with. Starting with nothing and rebuilding the world using strict, nearly medical levels of rigor, you can be assured that this analysis will yield useful results to the extent of your own faculties and intelligence, factoring in the possibility for error at all points.

Test 4: Does the information have any immediate conclusions? 4.1: Are these conclusions consistent along the previous lines? 4.2: If carried forward indefinitely, does any conclusion eventually lead to a result which is inconsistent, contradictory, or outright ridiculous?

Test 5: Assuming for the sake of argument that the information is true (after establishing that it's materially and logically feasible), what else must be true? Are these deductions consistent?

I could go on for a long time on these things, but as you can probably guess it makes for uninteresting reading after a little while. Besides, you don't even need to know exactly what you're doing. Allow me to give you an example. Let's say you hear on CNN that George Bush has claimed Iran already possesses nuclear weapons, you are shown a picture of a van, an administration aide claims they have confirmed the van contains weapons-grade plutonium, and the TV then shows you a picture of documents to that effect. What do you actually know?

Well, in a profound sense, you don't actually know anything, because CNN is the one telling you all of it. Most lose the battle at square one with the implicit assumption that CNN must be objectively correct. Actually, we have to acknowledge that CNN could be incorrect, that the information contained within that shell will be distorted to some unknown degree by the telephone effect, and that there is some probability of deliberate misdirection, to an unknown degree, serving unknown motives of unknown magnitude. So we're going to assume, for practical purposes of further analysis, that CNN is telling "the truth," never forgetting that that is a large assumption made for our sanity's sake. Next, we have been told by CNN that George Bush has claimed (oh, dammit, now we have to go through all that again…) that Iran has nuclear weapons. He does not claim to have been told this by Iran, and neither is he claiming that he personally saw nuclear weapons in Iran. So he's implicitly saying that some unknown person, removed from him by an unknown number of intermediaries, witnessed or otherwise inferred that Iran has nuclear weapons. By now you're probably screaming "why are you doing this to me, you vile torturer! I don't want to have to do this every time someone tells me anything!" A better response would be "show me the evidence already, damn you!" Direct observation, and inference if necessary, would remove all this mucking about. Of course, you're never going to see any direct evidence with your own two eyes about whether or not Iran has nuclear weapons. And even if you did, that evidence would be suspect as well: did someone prepare or otherwise modify the evidence? Has your perception been artificially altered to limit your natural observation of objective truth, such as with a prior analysis? Etc. etc.

My point is not that you need an IQ of 2 billion in order to understand anything with any degree of certainty. My point is, rather, that it is not possible to know anything with certainty. It is possible to be certain to the degree to which it is possible to know anything, which is pretty far if you ask me. But you always have to allow for the possibility that your model is wrong, that your evidence is faulty, that there are variables you haven't accounted for, etc. etc. ad infinitum. The combined probability of all these things is still fairly small, but allowing for it is absolutely vital nonetheless. Anyone who claims to know something, and that all the evidence can go to hell, is just outright flipping insane- very nearly by the definition of insanity: the inability to distinguish reality from fantasy.

So that leaves us with... what, exactly? I seem to be saying that we need to live in objective reality, but at the same time also claiming that objective reality is so mind-bogglingly complex and so fraught with unknowns that it's impossible for us to know anything. If that's where you are, congratulations. You are a philosopher. I would like to introduce you to my friend Descartes, to whom we are all indebted for a singular stroke of genius. "Cogito ergo sum." I think, therefore I am. The one thing we can know a priori, with absolutely no reliance on objective reality's unknown-riddled input, is that "I exist." The reason for this is simply that you are able to think that you exist. Maybe you're just a brain plugged into a computer, maybe a demon with a spear is prodding your soul, but in some way your consciousness exists, because you can formulate the thought needed to think you do. From this we can begin to deduce all that we know about reality. Now, keep in mind that while Descartes is the source of this idea for you, everything he says is actually suspect under the same unknowns of perception. However, I hope you can see that when you conceive of "I think therefore I am," you can verify its truth for yourself, independent of objective reality. In a way you are conducting an experiment and getting your own results right now. However, anything else you read about in philosophy must necessarily be subject to the same questions and unknowns as the rest of objective reality. If you can't prove it for yourself, don't believe it just because this or that person said so. Ideally you can formulate your own conclusions from that single solid core of absolute objective reality. Your first task is doubtless going to be to create some sort of process for proving other things- but not necessarily, because that suggestion is itself suspect.

Basically what I’m saying is that you have to find your own answers for yourself. Anyone who would get you to accept their perspective/views/reality because they believe it to be true is being inconsistent. And yes, I can respect the ironic ‘contradiction’ that I’m trying to tell you to think for yourself yet if you do as I say then you’re not thinking for yourself. While an amusing argument, it’s not truly a contradiction. It’s like telling a child to brush their teeth. You have basically two approaches. You can tell the child that they have to brush their teeth because you say so, because they’re a bad person if they don’t, because you’re providing incentives like toys for doing it and punishments for not brushing. This style of corrupt power is inconsistent when they’re trying to get you to adopt a specific perspective on reality because they believe it to be true. The alternative is to tell the child that you don’t care one way or the other if they brush their teeth. They’re free to do whatever they want. But, you provide them with the information needed to make that choice for themselves. Presumably you want them to brush their teeth for rational reasons, right? Otherwise you’re being a corrupt psychotic despot who wants your child to ritualistically do what you tell them and to worship arbitrary power irrespective of morality and objective truth. You can show them medical data about how their teeth will basically be taken over by germs, they’ll rot, and they’ll need expensive, invasive, and painful operations to fill cavities to keep their ability to eat functional, etc. etc. When presented with valid information for both sides, the child can choose either and you can have no issue with it. By assuming that there is an outcome that that person *must* take, you’re basically arguing that freedom is conditional on rationality. Haha! “In order to be free, you must do X, and nothing else.” How amusing. That said, I doubt that there’s a single person who would choose not to brush their teeth when they fully understand the implications of their choice.

Going on a medium tangent here- I will agree that young children tend not to be the most rational agents, and that as a parent you have something of an incentive to get your child to be rational in their own interest. Perfect. You're free to add incentives to their choice as much as you like. There's absolutely nothing wrong with altering the conditions of someone else's choice- it's the fundamental principle of the free market. Whoever can offer the most incentive (utility) for the least cost gets the most buyers. Simple. Doesn't impinge on anyone's freedom one whit. However, it's vital that you understand the difference between the corrupt, abusive form of incentivization (is that a word? It is now!) and the freedom-creating form, considering that you as a parent have unparalleled power, and equal ability to do either, so it can be a thin line. I have an easy way to make sure you never cross that line. Any and all contracts are optional. You offer the other person (not necessarily just a child) the choice of whether to accept your use of power at all, spelled out in advance as whatever power you're going to apply. So if you make the offer "whenever you don't brush your teeth I'll whip you 100 times, and if you do then I'll say 'good job.' Deal or no deal?" they're going to go "Pass!" However, if you offer them "whenever you brush your teeth, you can have an extra 10 cents put forward into an ice cream account, and when it reaches enough to pay for ice cream, we'll go get some," they'll probably agree to that, and they'll brush their teeth like you want them to. You also need to get them to agree to terms under which their contract can be enforced, of course. "So you're agreeing right now that if you void your responsibilities on this contract, I can do X?" Magic key to freedom: any and all agreements are always optional. Nobody makes an agreement that doesn't give them a net benefit in some sense, and that applies to both you and whoever the other party is.

Back to finding your own answers for yourself. History is worse than fiction. At least fiction doesn't pretend to be *true* in the purest sense. Fiction is true because it is exactly what it is, and you are free to draw whatever conclusions from it that you like without deception. That said, the process for analyzing either is basically the same. You have to assume the same level of doubt for both: total. But with fiction, the assumed truth level is already virtually zero, so you can chill out a little- you're already applying much of the needed doubt. Either way, triage your information intake. Otherwise you might read something online, or watch some speech, and just get sucked into believing it without really thinking about it. Information is organized to be transmitted from mind to mind, even if it's only been encoded into language. It's not much of a stretch to assume that some information might naturally be more contagious than the rest by its inherent nature. Your mind is a library, a laboratory, a temple, a hospital (and more, of course) all rolled into one, and you need to triage your information intake so you don't catch something nasty.