I just read a fascinating article about how ancient knowledge is, strictly speaking, younger than modern thought. Somewhere in the middle I was struck by the remarkable feeling of “why on earth didn’t I think of this?” I suppose to some degree I’ve always thought this, but never articulated it. When you read older texts the natural inclination is to pick bits out: “I like that, I like that, that’s just nonsense, and that’s freaky but there might be some truth to it…” and so on.
That the article is a response to Marcus Aurelius’ Meditations is actually a coincidence, but I suppose I need to address some of the ancient wisdom aspects of the stoic school of thought. At its core, stoicism rests on a single tenet, much like cynicism (philosophical Cynicism, not the colloquial use). Stoicism is the belief that the pursuit of truth, the application of rationality, will make you happy. Cynicism says that being virtuous will make you happy, and stoicism extends that by saying that being virtuous is the pursuit of truth. All evil is the product of ignorance, bias, error, or some other irrationality. To me this seems perfectly obvious, but I imagine most of the people reading this are puzzled by that assertion.
Well, let’s start with the basic question here: What is morality? Morality is adherence to a moral system. A moral system is a set of principles governing ideal or preferred behavior for human action or choice. Ethics is the art/science of determining what system of morality produces the most desirable behavior. So basically what we’re saying is that if everyone could conclusively figure out what the best behavior for them would be, they would do it because by definition that’s the behavior that is best for them. The basic argument here is that morality is not arbitrary: it has a very definite function of producing the most positive gain for you, as well as everyone else. Therefore, the application of reason to figure out the best course is essentially the basic building block of morality. Once you’ve figured it out, assuming you’re still sane, you really have only one course because it’s the one that gives you (and everyone else) the most utility.
The crux comes when we try to distinguish utility from utility, for example by saying that it’s more moral to help others than to help yourself- which is of course nonsense. If you have a job, you are under no moral obligation to give away all your money to the unemployed. In fact, such an action could be construed as evil (and I would construe it that way) because you are incentivizing unemployment for those most vulnerable to it. Now there is nothing wrong with giving them money, but do not attempt to argue that there is a moral obligation to do so. Reasoning such as “the greatest good to the greatest number” sounds very appealing on the surface because it is the heuristic that most people use to maximize utility. As a heuristic, it is dazzlingly effective. However, applying that principle without a firm grasp on what morality actually is, treating the heuristic as the primary model, produces some absolutely stunningly delusional behavior.
Let’s take an example. You have a month’s supply of food, and you go to Africa where there are millions of starving children. Your month’s supply of food is meant to feed 1 person for 30 days, or about 90 meals. That could easily be stretched out to feed 90 starving African children for 1 meal, or maybe even 180 kids for half a meal. But then of course you have no food for a month and nearly starve yourself. Not to mention that all those kids you fed are still in the same position: plus one meal, looking for the next one. What exactly have you accomplished? The greatest good to the greatest number is quite unequivocal here: um, yeah, 180 starving people versus 1 well-fed person? Hey, you should even volunteer to be killed and eaten (sarcastically speaking- a real utilitarian places an extremely high negative consequence on death, so they wouldn’t argue that). I would instead argue that you have, at best, accomplished nothing. At worst, you are actually helping to create more starving African children, making it impossible for African agriculture to get off the ground, etc. etc. etc., resulting in a net negative utility for both parties. I’m not saying you shouldn’t help, I’m just saying you picked a damn stupid way to help out.

How about this instead: employ 180 starving African children in such a way that you can pay them and make money at the same time. That way you feed them indefinitely, and turn a profit at the same time. Sound like a smarter plan? Just maybe? Perhaps big companies find it unprofitable to purchase African labor for one reason or another, perhaps Chinese labor is cheaper, or it’s riskier, or something. Well, that’s a perfect situation for charity to step in. Sure, we’ll accept paying slightly more for labor, or suffer some extra risk, because we want to employ starving African kids. We can charge just enough extra to cover it and slap a label on the front with a truly pitiful, heart-wrenching picture of African poverty to make up the difference.
People are nice- if they think you’re doing something worthwhile they might pay slightly more to help you out, as a kind of donation. Here’s the catch with pure donations: the more money you raise, the more resistance you meet in raising more (of course you also have more resources, so it may even out, but that’s more complicated). It’s harder to get a $100 donation than a $50 donation, basically. However, as you sell more African-labor products, you can expand and employ even more starving African children to make even more product, which even more people will buy, and so on. And on top of that, you have more money in profits, which you can turn around and use to do what conventional charities do as well.
Wow, that went hardcore off topic. Before we return from the wilderness, I want to address utilitarianism again. Utilitarianism is a technique, a mental tool, that we all instinctively use to maximize value for ourselves, and presumably for others as well. However, holding it up as the foundational moral rule is an error because it doesn’t actually maximize individual value- it’s a systemic attempt to justify social equilibrium and equal distribution. Consider: if everyone followed the principle of “the greatest good to the greatest number,” what would result? Well, for starters, it’s pretty obvious that poor people get more value out of money, so if you’re rich you would give them money until you were equal, after which giving them further money is a net subtraction, because you would have less and thus be losing more “value” than they would be gaining. What about relationships? Clearly people with fewer friends would get more value from an additional friend than people who have lots of friends, so you have to identify how many friends each person has and attempt to address that inequality to the extent of your ability (I’m not even going to go into romantic relationships). Regarding ability, clearly anybody with a skill derives more value from it than those who lack it. However, skill is difficult to transmit, so while it would be your moral duty to teach everyone your skill, during that process (continuous, due to new people being born) you would have to surrender all products of your specialized skills to the community as a whole, etc. etc. etc. We’re going to call it here. Utilitarianism basically says that because you are one, and the world is legion, you deserve to have nothing that everyone else doesn’t already have. You’re powerless and worthless, and must sacrifice for others. Of course, this thinking is then applied to everyone- so who exactly is gaining from all this sacrificing?
Now we arrive back at stoicism. Your life and freedom are preconditions unto themselves- the fact that you are alive justifies your life, and while you have a natural right to it, you most certainly do not have my guarantee of it. For me or anyone else (read: government) to make that guarantee, I would have to be prepared to defy legality, morality, perhaps even reality. Would I kill that guy over there to save this guy over here because this guy has my guarantee that he will live? No. The same holds true for your liberty. You’re free, but I have no responsibility to uphold that freedom. Even you have no responsibility to uphold your own freedom. Hey, maybe you enjoy being locked in a cell while prison guards deprive you of sleep and torture you with batons. Maybe you would pay good money to have that happen to you. And you’re thinking “uh uh, only if they’re strippers!” As you may have gathered, these aren’t fundamental tenets of stoicism: there is only one. These are corollaries that I have arrived at. Your mileage may vary, and that’s great. Marcus Aurelius is the same way- he’s got his answers, I have mine. Mine are the later draft, and I think I am more right than Aurelius was in his day, but in the future there will be other philosophers who have teased even more truth out of reality. However, there is a constant: the power to pursue that truth. Previous thinkers always help- if those future thinkers were born into today’s world they probably wouldn’t do any better than I can. I have the unbridled arrogance to say that my own musings in this blog are as valid as those of the most famous thinker you can name: who do you want- Socrates? Rand? Newton? Nietzsche? Anything you happen to write falls under the same heading.
While there exists some absolute truth, the simple fact is that we aren’t omniscient entities. None of us can discern reality so much better than another that one of us can dismiss the other’s line of thinking the way I can dismiss a squirrel’s. True, if you say something that is just hogwash, then I’ll call you on it. But this is just one squirrel talking to another. Albert Einstein is a squirrel. I’d love to see that sentence end up quoted on some other blog: “Today on the Zen Stoic: ‘Albert Einstein is a squirrel.’ What do you think?”
I’ve gone from Ancient Wisdom and starving children, to dominatrices and “Albert Einstein is a squirrel.” I think I’m done here.