The Integration of Technology

Technology is a wonderful thing, but it has a serious problem. The only prerequisite for access to technology’s power is knowledge. Or money with which to pay for the use of others’ knowledge. In either case, though, you can’t take advantage of knowledge that nobody has (yet), so the two reduce to the same thing. Knowledge is power in the most direct sense, in the same way that a lever delivers work: force times distance. For thinking, the analogous equation is processing power times time. Knowledge applied over time produces results. While this is evidently true, few people notice it. When you get a job, you are being paid to apply knowledge over time in the production of value. If you had less knowledge, you would be paid less, because you would be less able to produce value, just as a lever with less force on the active end cannot lift as much mass. Hence the idea of property is inherently a part of being conscious; your thought and your time (the freedom to use that time as you wish) belong to you exclusively. If you trade that time and thought, you can expect to receive something of at least equal value in return. However, it turns out that our minds are not just information floating in the void; they come prepackaged with some very sophisticated hardware we call a “body”, including an advanced computer called a “brain”. So we can easily say you own your mind, therefore you own your body, and therefore you own the products of your time and the use of your mind and body.

So we arrive at efficiency. Two essentially identical people are told to move a hundred sacks of grain across the street. You give one a wheelbarrow. Who finishes first? They are identical, save that one used a more efficient method to do the work. Technology is the exploitation of natural laws to maximum advantage relative to the human perspective. We need food, so we make agricultural technology to maximize the production of food per unit of land. If we ate rocks instead, we wouldn’t have agricultural technology. Rather, our mining technology would probably be significantly more advanced, the agricultural time and thought having been redirected into mining advances. Finding the tastiest rocks, if you will. Why am I saying all this? Now we arrive at my point. We usually consider tricks like the physics behind a wheelbarrow to be part of the natural sciences, but the design of the wheelbarrow itself is an act of engineering. Our bodies are very complex machines, but we consider their maintenance to be “medicine”, not mechanics or engineering. Philosophy is the answering of questions we can’t answer authoritatively, and science is the answering of those we can. Our minds are very powerful computers running a fascinating piece of software called Man v. 1.4, so why, exactly, is psychology distinct from computer science? Where’s the boundary? These distinctions are imaginary. There is a small difference, which I will get into in a bit, but right now I want to get across that the boundaries are like those between nations: drawn on a map, but not actually there. Why do we split our knowledge into exclusive sections? Because that’s how we teach people, since job specialization is such a fundamental part of our economy and knowledge base. Why do we teach people that way? Because that’s how we define the different fields. “Can you say circular?” “I can say circular. Can you say circular?”… No, seriously, why is this the ideal model?

I did mention that there was a difference. If that corrupted your perception of my point as a whole, shame on you. Though there’s probably nothing you could do about it. Very few people put any serious effort into improving their thinking, despite the fact that they’re using it all the time. They’ll try to learn how to do things, but use cobbled-together, terrifically random and useless methods to do so. Imagine that you are faced with a massive library full of great books to read, but you have only a rudimentary knowledge of reading. Which is the better course: to grind through them all at 10 words per minute, or to first perfect your reading skills and then start reading? Sure, it’s a down payment of time and energy, but the result is that from then on you’ll go many times faster. As another important point, it is always critical to include the method with the result, because it is logically possible for any conceivable process to produce any given result. I’m not saying a bad process will always get the right result; it will probably be wrong under every other circumstance. For example, 16 x 4. You swap the 1 and the 6, and then swap the 1 and the 4. You get 64. Yeah? Well, you can’t prove that method false without using another example, because the conclusion is in fact true: 16 x 4 = 64 is a true statement. So we actually just proved that the mind that thinks is inseparable from that which is thought. We are forced to conclude that an awareness of your own process is necessary, irrespective of what you actually do with it. Engineer yourself a better mind. I need to do a post on just this topic. Some other time.
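
To make that concrete, here is a quick sketch in Python of my reading of the digit-swap trick (the exact swapping steps are my own interpretation of the example); it happens to land on the right answer for 16 x 4 and on wrong answers almost everywhere else:

    def swap_trick(a, b):
        """My reading of the digit-swap 'method' above: line up the digits of a and b,
        swap the first two, then swap the middle and last, and read the first two
        digits off as the answer. Assumes a has two digits and b has one."""
        d = list(str(a) + str(b))   # 16, 4 -> ['1', '6', '4']
        d[0], d[1] = d[1], d[0]     # swap the 1 and the 6 -> ['6', '1', '4']
        d[1], d[2] = d[2], d[1]     # swap the 1 and the 4 -> ['6', '4', '1']
        return int(d[0] + d[1])     # read off the first two digits: 64

    for a, b in [(16, 4), (13, 2), (24, 3)]:
        print(f"{a} x {b}: the trick says {swap_trick(a, b)}, the real product is {a * b}")
    # Only the 16 x 4 case happens to land on the right answer;
    # the method itself proves nothing.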

New paragraph for the difference. It’s called suspense. Not rambling. Certainly not. Medicine is different from engineering in the same way that constructive engineering is different from retroactive engineering. However, since we haven’t been faced with a large need for retroactive engineering, it is not its own discipline. What do I mean by retroactive engineering? If we found a device buried in the earth that performed magical acts, like making it rain when you pressed a button, that would be a perfect occasion for retroactive engineering. One field of it might be called “reverse engineering”: the decomposition of a complex machine or system into its functions and parts to figure out how it works. Under the conventional definition of engineering, we start from nothing and build a machine upward from laws we understand. Reverse engineering is taking a machine we don’t understand and figuring it out.

To broaden this idea to knowledge in general, all fields reduce to one of two stances toward a single contiguous mass called knowledge, or Truth: constructive or reductive. The natural sciences are reduction applied to the universe, the world around us. Conversely, if we start building virtual worlds by experimenting with fictitious natural laws, we start on the constructive side. The same principle applies to all knowledge, the intent of knowledge being, as stated above, to improve our own power to get done the things we want done. As an important note, neither stance is ever used alone. Whenever you construct something, you then have to figure out whether it works, how well it works, or why it doesn’t work, and these are reductive tasks. Conversely, whenever you figure out how something works, you have to construct something to prove it. The most common method is to construct an experiment which will produce specific results that can then be analyzed reductively. Can you say circular?… But that’s the point! Knowledge is a constant circular feedback loop in the same way that consciousness is. Construction and reduction can even be reduced to the simple perspective of action and reaction, respectively. You do something, analyze the results, do something based on the results, and so on. And as you advance around the loop you are continuously increasing your knowledge, your power, your leverage. So we see that the exponential increase of life proceeds clearly and continuously into technology. Where does biology end and technology begin? Biology is the study of already-evolved life; technology could be the creation of life from scratch, such as self-replicating, evolving robots, or genetically modified crops and animals.

The objective of learning should be to learn everything, not only to earn a living. This thought is a necessary corollary of my very Stoic ideal that Truth = virtue, the pair representing the only prerequisite to happiness. Of course, this also turns out to be a very profitable strategy, because someone who knows… a LOT… is going to earn a large amount of money. I am not saying that I want to go to law school and medical school and get every degree known to man, though I must say that would be damned cool. No, I expect that in a short while we’ll crack the secret of encoding information in a human brain and be able to convert between our binary computer language and analog neural language. When that happens, omniscience is fair game. Any knowledge that anyone, anywhere, has is up for sale. This removes the time factor from learning and reduces the cost of transmission, shall we say, dramatically? When this happens, I’ll just buy the knowledge I did not gain through schooling, using the vast fortune my limited schooling earned me. Though that may not even be necessary, because such a system would automatically drive the prices to zero. Whenever you sell knowledge, by definition you have just increased the supply by one and decreased the demand by one. All you have to do is wait; before too long the knowledge will be “worthless” anyway, since everyone will have it. As a matter of fact, I would be willing to bet that an engineering career spent bringing this situation about would be about as lucrative as they get. “I can offer you immortality, omniscience, and omnipotence (over perfectly realistic virtual worlds, anyway) for $100 million.” And the price drops as the rich phase out and the technology gets cheaper. By the very nature of industry, you will be making the most money exactly when you have the most capacity to capitalize on it, and sufficiently soon before the critical mass at which nobody cares about money anymore, because anything they want they can simply make.

We began with an integrated field of knowledge and specialized into ever-more-advanced subfields, until eventually our technology becomes advanced enough that we can increase our own capacity to understand it. Can you say circular?…


Speculations on Symbolic Language

I theorize that it is possible to create a language that is semantically unambiguous and, at the same time, far more flexible in terms of expressible meaning than any current language in existence. Better yet, increased precision lends itself well to continuously refining whatever metaphor you may be aiming at. I intend to actually build a true language along these lines at some point, but it’s still in the theory stage. Right now I’m just bouncing off a general concept, and explaining it is going to be tricky. I have a great deal of experience speaking English, so I’m going to use that as a jumping-off point, even though a symbolic language would be sufficiently alien to natural languages that perhaps the comparison is meaningless. Like trying to explain the alphabet in the language of mathematics: it doesn’t really get the point across.

Alright, in English we primarily use constructs of nouns, verbs, and adjectives. A symbolic language as I envision it would use no such distinctions. All units of meaning would be derived similarly from basic roots I will refer to as “atoms.” An atom refers to the strictest-sense semantic meaning, such as “run” being reduced to “the act or essence of running-ness.” Words would then be distinguished by formative types, or “typefiers.” Since there is no similar element in English, I’ll switch over to Lojban here. Lojban uses articles such as “la” or “lo” to change the significance of the word that follows; “la”, for example, essentially indicates that the following word is a proper name. An atomic language could use separate words as indicators, but I think it would be more convenient to merge the two into a new entity. The closest analogue I can describe in English would be folding an article such as “the” or “a” into the word that follows. Either way, the effect is a new unit of meaning, such as “some dog”, “all dogs”, or “a specific dog I have in mind”, rather than just “dog”, which would be more along the lines of an atom. Typefiers, once integrated like this, can begin to be used more flexibly. For example, new ones could mark the typical, the stereotypical, the connotative, the subjectively experienced, the perceived, the observed, and so on. Which ones end up being used is anyone’s guess.
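
To make the idea a little more concrete, here is a minimal sketch in Python; the class names and the particular typefiers are placeholders of my own, not part of the language itself:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Atom:
        """A bare root meaning, e.g. 'dog' as 'the essence of dog-ness'."""
        root: str

    @dataclass(frozen=True)
    class TypedWord:
        """An atom fused with a typefier, the way 'the' or 'a' would be
        folded into the word that follows it."""
        typefier: str  # e.g. 'some', 'all', 'specific-in-mind', 'stereotypical'
        atom: Atom

        def gloss(self) -> str:
            return f"({self.typefier} {self.atom.root})"

    dog = Atom("dog")
    print(TypedWord("some", dog).gloss())              # (some dog)
    print(TypedWord("all", dog).gloss())               # (all dog)
    print(TypedWord("specific-in-mind", dog).gloss())  # (specific-in-mind dog)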

The second important element is the relation of symbols. Once we have the bare-bones atom specified into something useful, we need ways to further elaborate on what we’re talking about. English tends to mash this area up badly, using “that” in very loosey-goosey fashion: “the dog that runs”, “the dog that is fat”, “that yappy dog”, and so on. A specific relation, such as possession, can instead be laid out in explicit logical terms, such that the first term applies to the second term. So if we have two specified atoms, we connect them with a relation. We could say “(some person)-possesses/owns-(that dog, that I have in mind)”, indicating that we don’t know who owns the dog we’re pointing at. Once again, many relations are possible, from traits or characteristics to associations to physical causation, and so on. Note that we’re still crafting a single entity, one we would commonly refer to as a noun.
Also note that pretty much all of the complexity is optional, used only if you have additional information to convey. You could just say “that dog” as a subject, or you could use endless relations and connections to end up with a single subject; either way it’s treated as a single symbol that happens to be composed of other symbols.
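
Continuing the sketch from above (and reusing its Atom and TypedWord classes), a relation might be modeled as one more node that joins two symbols into a new compound symbol, which is then handled exactly like any single word; again, the names here are my own placeholders:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Related:
        """Two symbols joined by a relation; the result is itself a single symbol."""
        relation: str  # e.g. 'possesses', 'has-trait', 'causes'
        left: object   # a TypedWord or another Related
        right: object

        def gloss(self) -> str:
            return f"[{self.left.gloss()}-{self.relation}-{self.right.gloss()}]"

    some_person = TypedWord("some", Atom("person"))
    that_dog = TypedWord("that", Atom("dog"))
    subject = Related("possesses", some_person, that_dog)
    print(subject.gloss())  # [(some person)-possesses-(that dog)]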

OK, now we’ve established how to talk about what we commonly refer to as nouns. Before starting in on verbs or descriptors, I need to make a distinction: all symbols would be treated essentially the same way, and atoms would be the fundamental building block of all forms of meaning, be they verbs, nouns, properties, whatever. Types would transform their use. So “dog” as an atom could become a singular noun, a group noun, a generalized dog, an unknown dog, a stereotypical dog, and so on. Or it could become the property of dog-ness, the act of being a dog, and so forth. Reducing this abstraction back to nouns, verbs, and adjectives glosses over a great deal of flexibility and subtlety.
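
In terms of the earlier sketch, that just means a typefier is free to push the same atom toward what English would call different parts of speech; the specific typefier names below are, once again, invented for illustration:

    dog = Atom("dog")
    print(TypedWord("group-of", dog).gloss())      # (group-of dog)      ~ a pack of dogs
    print(TypedWord("property-of", dog).gloss())   # (property-of dog)   ~ dog-ness
    print(TypedWord("act-of-being", dog).gloss())  # (act-of-being dog)  ~ being a dog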

Alright, let’s make a sentence. Each pair of parentheses encloses a single word-unit. “(some person)-possesses-(that dog) (the act of walking)-with-destination-(some specific location)-with-property-(the quality of speed/haste)”
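
Assembled with the sketch classes from above, that sentence might look something like this (the relation and typefier names are still my own stand-ins):

    subject = Related("possesses",
                      TypedWord("some", Atom("person")),
                      TypedWord("that", Atom("dog")))

    predicate = Related("with-property",
                        Related("with-destination",
                                TypedWord("act-of", Atom("walk")),
                                TypedWord("specific", Atom("location"))),
                        TypedWord("quality-of", Atom("haste")))

    print(subject.gloss())
    # [(some person)-possesses-(that dog)]
    print(predicate.gloss())
    # [[(act-of walk)-with-destination-(specific location)]-with-property-(quality-of haste)]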

When written out in English with all the parentheses and haberdashery, it’s obviously cumbrous. But consider what such a sentence would look like if atoms were bound to one syllable, enclosed on both sides by a consonant, and their types preceded them as a single syllable beginning with a consonant and ending with a vowel. Relations would be one or two syllables, beginning and ending with a vowel. Using C for a consonant sound and v for a vowel sound (noting that a vowel sound could consist of multiple vowels, such as “ou” in “bout”), the sentence would fit into something like this framework, at its lengthiest and most explicit:

CvCvC–vCv–CvCvC CvCvC–vCv–CvCvC–vCv–CvCvC
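
Just to see it with sounds in it, here is a rough rendering of that skeleton in Python, using arbitrary placeholder syllables that merely fit the C/v shapes (none of them are actual words of S):

    def typed_word(typefier: str, atom: str) -> str:
        """A Cv typefier fused onto a CvC atom gives a two-syllable CvCvC word."""
        return typefier + atom

    subject = "-".join([typed_word("ka", "pak"), "ira", typed_word("to", "don")])
    predicate = "-".join([typed_word("ka", "mit"), "ona", typed_word("to", "lok"),
                          "ira", typed_word("ka", "sur")])
    sentence = subject + " " + predicate

    print(sentence)  # kapak-ira-todon kamit-ona-tolok-ira-kasur

    # Each placeholder syllable carries exactly one vowel, so counting vowels counts syllables.
    print(sum(ch in "aeiou" for ch in sentence))  # 16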

I count exactly 16 syllables; that’s how long the whole sentence is. Impressive, eh? It’s important to note that I intend speech to be rather more interesting than the pointlessly simplistic example I just gave; that was baby talk. But baby talk is itself impressive considering I outlined it in about 1,000 words. Given more words, you could talk fluently, albeit boringly. And the words all share the structure of atoms, so they’re easy to learn. More significant, in my opinion anyway, is the immense expressiveness of such a free-range system. Basically, I think grammar should just get the hell out of the way and let you talk. In any case, the language (currently called S) is still heavily in progress.