Human I/O

The essential purpose of having a mind is to manipulate information. All minds do it incessantly. Input from the senses is filtered, perceived, categorized, and filed away in memory. Working memory is updated, scanned for significant thoughts, and cleared for the next cycle. A combination of select sensory data, accessed long-term memory, and the products of short-term memory's routines is used to actually make decisions: this element of our minds is called consciousness. Is there something that makes consciousness special as a form of information processing? Is there some reason why it is impossible to get human-level thought out of a computer we built ourselves?

Here's a thought: imagine that you were a computer. What would that be like? Many would say it would be inhuman, that you would be forever occupied with minute and nonsensical machine actions, like Windows 95 grinding through bytecode. But I suspect that conclusion is drawn from the stereotypes of our day. In the early days of computing, people insisted computers would never exceed [insert tiny statistic of choice here]. Now we know computers can be powerful, but we're still biased against their complexity, thinking they can only do mechanical things, can only live as linear binary minds. The human brain differs in several significant ways. First, it runs on neurons instead of fixed silicon circuits; this much can be simulated with analog neural-network hardware, or emulated entirely within a Turing machine given a few times more processing power. Second, the human brain is divided into specialized cortices with separate functions and limited communication between them, and that limitation of computational perception is one way consciousness takes form. In your computer brain, "you" (the part that makes you human) is the higher-order information and the higher-order information only. If you include the lower-level information, such as the neurochemical reactions within the brain, you can't really conclude that that's "you." I would argue that you can derive all the higher-order information from that raw data, but whether it's additional information or derived information isn't really relevant. What is relevant is that above a certain unknown threshold of abstraction you have "you," and below it you have data and simple processes and functions. If you're a programmer, you might have figured out where I'm going with this. Human thought is a very high-level programming language: higher than English (or any other spoken or written language), lower than paradigms, the systems and frameworks by which human thought is constructed.
More interestingly, the constant tendency in computer languages has been a general increase in abstraction in proportion to the processing power available. More interesting still, above a certain threshold of abstraction we see an emergent property: the ability of the system to understand and manipulate itself. Consciousness.

Well, obviously we could build a computer with just as much processing power as our brains have. So the question is instead: how do we program such a computer so that it will be human, instead of a machine? The existence of humans proves it can be done, because we are composed entirely of inorganic matter organized into organic forms. So, what does it really mean to be "human"? Do we mean a human body, or a human spirit? It is possible we may conclude that a prerequisite of being human is having a human structure, and so eliminate all except natural-born, carbon-based, earth-bound, and above all "normal" humans as being truly human. I, of course, utterly disagree with this assessment. We should be asking better questions. For example: is there some property of a human that you could subtract and be left with something non-human? I would say no; any "property" you could name would be a vague abstraction of something esoteric you simply couldn't define. Such a dilemma means we're approaching the problem the wrong way. Rather than asking what it means to be human, we should be asking: what is the most significant property of life-as-information that humans demonstrate? This is an immensely deep point, but I'll reduce it as far as it will go here. Suppose we transferred completely over to silicon-based consciousness, or some other form, and then discovered an alien race that had already made that shift. What would distinguish the human variety from the alien variety? Human and alien would be on the same substrate, but would clearly be different. Bear in mind that this scenario is flawed as an example, because there would be clearly delineated cultural differences simply because the two species did not share a geocultural area. Each would have its own culture, and by the improbability of parallel evolution those cultures must differ, but that does not mark a significant difference between the two.
By the same logic, any property that would differ between two human civilizations on different planets, separated for that much time, cannot be used to distinguish a human civilization from an alien one, because it cannot define a "human" property.

Life as a class is an example of self-organizing, self-reproducing, self-refining adaptive information. I have already covered my basic concept of life-as-information at some length, and this paragraph might not make sense if you haven't read it. You have been warned. Life is a pattern of exponential growth, driven by constant refinement that improves growth, which funds further refinement, and so on ad infinitum. Humans are merely the bleeding edge of evolution; we are causing change faster than change has ever been caused before, and technology is the new fount of evolution where our brains once were. We see a clear, direct, and unimpeded curve of evolution from microbes to humans, to the human mind, to the exponential increase of technology, from the hammer that augments the hand to the computer that augments the mind. So to say that it is a human trait to expand and push new horizons is not really accurate; that is a property of any and all life. Our alien civilization will have been programmed by evolution with a similar drive, through the combined effects of competition, limited space and resources, and the ease of access to new untapped resources. By the same token we can disqualify traits such as the drive to survive, perseverance, competition, cooperation, communication, and a bevy of other decidedly "human" abilities and traits. The truth is, there isn't a hell of a lot left after that, and none of it good. We are left with no value in our "human" identity, and so we are forced to conclude we should identify more with life than with being a particular species. Which makes sense, because what it means to be "human" has evolved dramatically and will keep evolving, while our status as living things will not. I am not a citizen of the USA; I am a citizen of the world. Neither am I a human; I am a citizen of the great ecology of Life.

My position is that after we grew accustomed to having a trillion times as much brainpower as we do now, and having worked out any irksome little psychological issues and base drives and refined our own processes sufficiently (it would be a never-ending process, but there comes a point where it's close enough), we would be indistinguishable from an alien race which made the same transformation independently. The reason is that we live in the same universe, with the same physical laws and the same mathematics. The same logic and reason governs any and all possible alien species. If A then B; if B then C. A, therefore C. I don't care where you are in the universe, what level of technology you have, or how many eyes, limbs, or sets of genitalia you have: reason will hold. The differences between aliens would arise from their physical forms, their environments' conditioning, and the unique and strange specializations, expansions, and limitations of their functioning. So irrespective of alienness, were you liberated from such a state and given time to sort yourself out, you would end up more or less the same as anyone else on the same path. I can hear the cries of "But I don't wanna be the same!" Well of course you don't, you shmuck; why would you? This is where those previously discarded details come back. The cultural differences, those unique attributes that cannot be created artificially and can never evolve the same way twice across a field of countless compounding infinities, make up your memory. There is no way to have a "more perfect" culture-memory, in the same way a memory of an apple isn't more perfect than a memory of an orange, provided both are pristinely accurate. Not a problem for entities such as these. Each would be incredibly unique, but each would have arrived at an internal perceptual set and a (comparatively) basic functional set that was highly idealized and comparable to the others'.
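The syllogism above really is substrate-independent, and that can be demonstrated mechanically: the chain of implications can be stated and machine-checked with A, B, and C left as arbitrary propositions (a small sketch in Lean, nothing more):

```lean
-- Chained modus ponens: from A → B, B → C, and A, conclude C.
-- A, B, C are arbitrary propositions; nothing about their content matters,
-- which is exactly the point about reason holding for any possible mind.
theorem chain {A B C : Prop} (hab : A → B) (hbc : B → C) (ha : A) : C :=
  hbc (hab ha)
```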

If you were one of these superbeings, you would probably dedicate your time to the creation of more such material. Why do people write books, create movies, pursue their passions? Because it's the quintessential human pursuit: the taking in of sensory data and other stimuli, and their unique reinterpretation by our own minds in our own way. And then, lo and behold, our minds create from the ether a work that has never before been seen and could never exist in any other way, had we neglected to create it. Every second not spent in its creation is wasted. Even the creation of better tools with which to create "cultural" (damn, that word is poor) information is part of the same pursuit: it is both creation in itself and a way to accelerate the rate at which more can be produced. The reason? Because that cultural information is alive in every sense of the word! Memes are the expansion of life into a substrate we can't even comprehend, and we have the power to expand that substrate by making more humans, by inventing ways to write memes down and communicate them, by making computers, and by countless other inventions to come. The day we have sentient memes, things start getting truly interesting. (I suppose we already do, in that one person is one meme bound to a specific brain; we don't currently have the means to communicate a complete person-meme.) And the day that another as-yet-unimagined and incomprehensible substrate, as expansive as the mental/informational one, is uncovered, life will expand into that too.


On Psychohistory

There is, in truth, no reason why psychohistory, as depicted by Isaac Asimov in his Foundation novels, can't work in real life. There are unquestionably observable patterns in human behavior that fall within the ken of statistical prediction, and outside the individual control commonly associated with historical processes.

The difficulty in the social sciences is that there is no such thing as a simple test. In a science like physics, it is possible to simply toss one ball vertically into the air. There are two forces, gravity and air resistance, and two masses, the earth's and the ball's. Results can be measured with precision, and data collected from repeatable experiments. Such an experiment is impossible when dealing with people. Even one person is far too complex a system to be captured in a single formula like the ball's. But it gets worse. The social sciences are not handed one person and told to make sense of how they work, oh no. Social scientists are presented with entire countries, based on as little as a few centuries of yellow-in-multiple-senses history. The kind of rigor associated with the physical sciences is not merely impractical, it is utterly impossible, unless the scientist happens to have a computer powerful enough to simulate every interaction taking place in the country, as well as every interaction in all the history relevant to that country. Which, essentially, means all history, period. The patterns are not merely unintuitive; they are buried in a sea of unreliable muck, and even the sea is much too small to be useful.
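For contrast, the ball experiment really is this simple. A toy simulation with just the two forces named above fits in a dozen lines; all constants here are illustrative placeholders, not measurements:

```python
# Toy simulation of the one-ball experiment: a ball tossed straight up
# under gravity and quadratic air resistance. All constants are illustrative.
G = 9.81    # m/s^2, gravitational acceleration
K = 0.005   # kg/m, drag constant (invented for illustration)
M = 0.145   # kg, roughly a baseball
DT = 0.001  # s, integration step

def toss(v0: float) -> tuple[float, float]:
    """Return (peak height in m, time aloft in s) for launch speed v0 in m/s."""
    y, v, t, peak = 0.0, v0, 0.0, 0.0
    while y >= 0.0:
        drag = -K * v * abs(v) / M  # drag always opposes the motion
        v += (-G + drag) * DT       # gravity always pulls down
        y += v * DT
        t += DT
        peak = max(peak, y)
    return peak, t

peak, t = toss(20.0)  # drag keeps both below their vacuum values
```

Two forces, one formula per step, and the whole experiment is repeatable at will; that is the luxury the social sciences lack.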

That being said, the basic principles still hold. Though the system is, by definition, more complicated than the sum of all human experience combined, its behavior can still be predicted axiomatically. And, through abstraction, we can reduce the complexity of a practical representation of that system to the point where it becomes useful. Let's consider a very simple person; call him Bob. Bob has a problem: he is obsessive about food. It's all he can think about, the consumption of vast quantities of food, the tastier the better. But Bob isn't stupid; he knows enough to actively work towards the acquisition of food. Bob's simplicity makes it easy to select strategies for him, and using that as a basis, we can predict his behavior with formidable accuracy. For example, Bob's complete obsession with food means he is prepared to pay any price he can actually afford for it. And although Bob is prepared to pay exorbitant prices, he knows that if food is cheaper he can buy more of it. He also knows full well that if the person selling him food knew of his obsession, the vendor's price would skyrocket, so Bob realizes he needs to keep his obsession secret. Further, Bob's need to consume food entails its acquisition and storage, probably in huge and ever-increasing quantities. Because Bob knows that money translates directly into food without the issue of spoilage, he begins to obsessively obtain and hoard money, siphoning it off to eat as much as he physically can. He could turn to a life of crime, either stealing money or just ripping off food from grocery stores and the like. This becomes a viable strategy if and only if Bob's expected gains exceed his expected penalties. (If you doubt this statement, would you take $1 million if you had a 5% chance of spending 2 weeks in jail?)
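Bob's crime calculus can be made concrete as a toy expected-utility rule. The dollar value assigned to a week in jail below is an invented assumption for illustration, not anything from the argument itself:

```python
# A toy version of Bob's decision rule: crime is viable if and only if the
# expected gain exceeds the expected penalty. The $20,000 "price" of a week
# of freedom is an invented number purely for illustration.
COST_PER_JAIL_WEEK = 20_000  # assumed dollar value of one week of freedom

def crime_pays(gain: float, p_caught: float, jail_weeks: float) -> bool:
    """Return True if the expected gain beats the expected penalty."""
    expected_penalty = p_caught * jail_weeks * COST_PER_JAIL_WEEK
    return gain > expected_penalty

# The parenthetical from the text: $1 million vs. a 5% chance of 2 weeks
# in jail. Expected penalty = 0.05 * 2 * 20,000 = $2,000, far below the gain.
print(crime_pays(1_000_000, 0.05, 2))  # → True
```

Change the stakes (say, $1,000 against a 50% chance of ten weeks) and the same rule flips to False, which is the entire point: a simple agent yields simple, predictable strategy.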

Psychohistory proper would be much more rigorous, and far larger in scale, than the caricature I have outlined. But we know enough of human behavior now to begin turning the social sciences into a formal discipline, instead of a wishy-washy, all-answers-are-correct morass of opinion and artistic misconception. The goal of the social sciences is to understand human behavior, society, government, and so on, and the logical consequence of that goal is to eventually describe human behavior rigorously. However, there is a problem. If knowledge of psychohistory becomes too common, it becomes useless, because people who know they are being sociologically guided change their behavior. More interestingly, any rigorous analysis would have to take into account the possible effects of other rigorous analyses. If half the population has heard your prediction and believes it to be true, the conditions under which you made that prediction no longer hold, rendering the prediction more or less void. This is somewhat akin to the measurement problem with quantum particles: you cannot know both position and momentum, because the act of measuring one disturbs the other, and there is no way to take both at the same instant.
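The self-voiding prediction can be sketched as a toy model: a forecaster announces the fraction of people expected to act, and everyone who hears and believes the forecast reacts to it. The reaction rule and every number here are invented purely to illustrate the feedback:

```python
# Toy model of a self-defeating prediction. A forecaster predicts the
# fraction of people who will buy a stock; a believer who would have bought
# is scared off in proportion to how crowded the predicted trade looks.
# The reaction rule and all numbers are illustrative inventions.
def realized_fraction(predicted: float, aware: float) -> float:
    """Fraction who actually buy, given the predicted fraction and the
    share of the population that has heard and believes the prediction."""
    return predicted * (1.0 - aware * predicted)

prediction = 0.6
for aware in (0.0, 0.5, 1.0):
    actual = realized_fraction(prediction, aware)
    # The more widely the prediction is believed, the more wrong it becomes.
    print(f"awareness {aware:.0%}: predicted 60%, actual {actual:.0%}")
```

With nobody aware, the prediction is exact; as awareness spreads, the error grows, which is precisely why a public psychohistory undermines itself.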

Psychohistory is, on all available evidence, feasible. It has its obstacles: it appears impossible as a knee-jerk reaction, it will be difficult to make usable, and it will probably have little application until it nears maturity. Worst of all, predictions would be limited in strength by the possibility that input information is incorrect, that unaccounted-for variables exist, or that unpredictable future events intrude. (As a side note, I speak here of natural events such as hurricanes; in similar fashion, those too should eventually be predictable, despite the insane complexity of the Earth's environmental system.) But in the face of all that, the possibilities of being able to predict the human future are enormous. More to the point, it becomes possible to push the compass towards a better future by subtle action in the present.

On Language

Language is the greatest example of how we have saddled ourselves with a ridiculous system which no rational entity could possibly concoct. English is by far the worst offender, and the poem The Chaos makes this point very well indeed on the count of spelling. It is incredibly irregular in pronunciation, with each letter having many potential pronunciations, some vowels as many as twenty. The methods of conjugation are random at best: see, seen, saw, but been, was, were? Go is to went as eat is to ate? Eaten? Eated? Every rule the language has is broken repeatedly. Then the lexical inventory is confusing and abstruse. Why does a ship carry a cargo, but a truck's load is called a shipment? And even if you can get past all that, the language is often not particularly clear. Some will say that this ambiguity is what enabled Shakespeare to write such masterpieces, but that is a fairly weak reason to saddle everyone with a ridiculous mode of communication. In any case, all these eccentricities make English enormously difficult to learn. And once you have learned it, you are wasting a huge amount of brain hardware that might be better spent actually thinking. All natural languages are like this to some extent, but at least the Romance languages have rigorous verb conjugations and are mostly phonetic.

The solution is to design a better language, and the bar is not high: all that is necessary is a language that is regular and clear. An excellent endeavor to this effect already exists; check out Lojban. But the possibilities for language run far beyond regularity. For an example of how far, check out Bogomol, though please ignore the fluffy material about alien races; that is just fiction the author wrote for fun. Unfortunately, creating languages has become associated with a special case of geekbrain syndrome involving orcs, elves, and fantasy worlds, and so the practice of improving how we communicate, and even how we think, has been ignored. Imagine a language precisely constructed to support the fastest, most logical and accurate thought possible. Imagine one that enables the most creative, associative, and innovative thought at great speed. These would be wonderful tools to apply all the time. Considering that you think more or less continuously for your entire life, a significant speed improvement (say, fifty times) yields fifty times more thought per person. Imagine the differences in society if 300 million Americans all did that.

On Methods of Thinking and Education

Thinking is something we all do all the time, yet everyone I talk to shows no concern for their own thinking processes. Forget the obvious issues, such as jumping to conclusions, implicit assumptions, illogical association, invalid cause and effect, fallacious reasoning, and the other formal thought misfires which are ubiquitous. I am also referring to the most basic functional processes which we take for granted, and the dramatic cumulative effect such small inefficiencies have when applied across an entire society.

Take remembering something. Has it occurred to anyone how prodigiously inefficient the process of recall is? First, there is no guarantee of success, which is a separate issue that needs addressing. Second, reliably recounting information at a later date with any accuracy requires a huge amount of time and energy spent memorizing, for a task that should not actually be very difficult. From what I have read on the subject of memory, the best tricks we have were created by the ancient Greeks. Are you telling me that our mode of thought has not advanced at all in over two thousand years? That we have spent so much effort inventing new gizmos that not a single erg of creative juice has been directed towards improving our ability to invent new gizmos?

This produces a host of insane problems, starting with the education system. It does not take twelve years to master the disciplines covered in American high schools. Barring serious learning disabilities, it does not take five years to learn how to read, write, add, subtract, multiply, and divide. The years when you are most able to continuously absorb new information are instead devoted to inane repetition and mindless drilling into the heads of students who are quite capable of understanding in one or two iterations. Complaints about students being unruly and wild are justified. But they are unruly and wild because they are bored out of their minds!

Consider that many universities follow the quarter system; that is, colleges cover in one quarter the material a high school covers in one year. So graduating high-school seniors are instantly sunk up to their necks in a system that proceeds four times as quickly as the one they are used to, with material far more complex than the "fundamental" material they had been working with for years. Yet college students are notable for their passion for the subjects they study, and they apply a special youthful energy to their jobs as well. They are hardly overwhelmed.

I am forced to agree with Benjamin Franklin, who originally posited that all the education in the fundamentals needed to be a functioning member of society can be covered in three years. Not twelve. Three. Franklin's idea outlined a "civic education" for those three years, with technical education extending indefinitely past that. I modify this to strongly resemble the current curriculum, only much faster, and I add one prequel year of kindergarten/preschool education to provide background in reading, writing, and the other tools needed for the remaining three years, such as study skills, mnemonic devices, and a general "how to use your brain" course. So: start school at age 3 or 4, and finish at approximately 7 or 8. After this, schooling becomes optional, provided the education requirements do not change. (They would, but that is a separate issue.) After completing this high-school-equivalent education, local programs offering an additional four years of schooling at the same pace, an undergraduate-level education, could be taken. Upon completing that at about age 12, students could theoretically be ready to head off to college. The difference is, they would already have completed a four-year college education.
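The timeline in this proposal is simple arithmetic; spelling it out with a start age of 4 (the text allows 3 or 4):

```python
# The proposed schedule, taken directly from the paragraph above.
start_age = 4        # the text says 3 or 4; 4 assumed here
prep_years = 1       # the added kindergarten / "how to use your brain" year
civic_years = 3      # the Franklin-style fundamentals
undergrad_years = 4  # optional, at the same accelerated pace

done_with_fundamentals = start_age + prep_years + civic_years
done_with_undergrad = done_with_fundamentals + undergrad_years
print(done_with_fundamentals, done_with_undergrad)  # → 8 12
```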

Does that sound crazy?

Why not begin college when the brain is at its maximum learning capacity? The brain's learning capabilities begin to decline at around age 20. Homo sapiens' extended juvenile period appears to have evolved precisely to let us learn at greater speeds for longer; many other large mammal species are fully mature in most senses within a year or two, and lose that enhanced malleability of mind.

Those who ask "are students mature enough to leave the house at a younger age?" are sorely misleading themselves. Youth are only "less mature" because of societal pressures and expectations that keep them that way. Youth among, say, the Bushmen are fully mature at around age 13, and are given the responsibilities of a man or woman in caring for the tribe. In modern culture, psychological neoteny is considered the norm. A century ago, children were allowed, even expected, to roam around half the countryside; now most children are forbidden from leaving the house or its immediate environs, a leash averaging thirty feet. There are disturbing parallels between a parent warning their child not to play with a chemistry set and the label on the side of a microwave dinner: "CAUTION: package may become hot when heated." Psychological neoteny is perpetuated on many fronts: the schools, the parents, the workplace, the commercial sector, and the government. Increasing dependence, increasing need for assistance, supersimplification, and ever-longer periods of education all go hand in hand. Memory difficulty leads to extended, inefficient schooling, which leads to psychological neoteny as students as old as eighteen are treated like children.

The root cause is a single fundamental issue: the difficulty of remembering, due to absurdly inefficient memorization processes. If memory methods, even ones as primitive and cumbersome as the ancient Grecian method of loci, were taught before students entered mainstream education, the time required to learn would decrease enormously. Even without them, schooling should be compressible by a factor of four; with them, or with the invention of newer, superior methods, it could be shortened still further. If you doubt this is possible, you sorely underestimate the power of the human mind. Look into memory techniques: you will be amazed.
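The method of loci mentioned above can be sketched in a few lines: bind each item to a fixed, familiar sequence of places, then recall by walking the route in order. The loci here are invented examples:

```python
# A minimal sketch of the Grecian method of loci: items are bound to a
# fixed, well-known route of places, and recalled by "walking" that route.
# The route itself is an invented example.
LOCI = ["front door", "hallway", "kitchen", "stairs", "bedroom"]

def memorize(items: list[str]) -> dict[str, str]:
    """Pair each item, in order, with a locus along the route."""
    if len(items) > len(LOCI):
        raise ValueError("route too short for this list")
    return dict(zip(LOCI, items))

def recall(palace: dict[str, str]) -> list[str]:
    """Walk the route in its fixed order and read the items back."""
    return [palace[locus] for locus in LOCI if locus in palace]

palace = memorize(["milk", "eggs", "bread"])
print(recall(palace))  # → ['milk', 'eggs', 'bread']
```

The trick works because the route's order is already overlearned, so only the item-to-place bindings are new; that is the kind of leverage the essay argues schooling never teaches.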