I theorize that it is possible to create a language that is both semantically unambiguous and, at the same time, far more flexible in terms of expressible meaning than any current language in existence. Better yet, increased precision lends itself well to continuously refining whatever metaphor you may be aiming at. I intend to actually make a true language along these lines at some point, but it’s still in the theory stage. Right now I’m just bouncing off a general concept, and explaining it is going to be tricky. I have a great deal of experience speaking English, so I’m going to use that as a jumping-off point, even though a symbolic language would be sufficiently alien to natural languages that the comparison may be meaningless. It’s like trying to explain the alphabet in the language of mathematics: it doesn’t really get the point across.
Alright: in English, we primarily use constructs of nouns, verbs, and adjectives. A symbolic language as I envision it would use no such distinctions. All units of meaning would be derived similarly from basic roots I will refer to as “atoms.” An atom refers to the strictest-sense semantic meaning, such as “run” being reduced to “the act or essence of running-ness.” Words would then be distinguished by formative types. Since there is no similar element in English, I’ll switch over to Lojban here. Lojban uses articles such as “la” or “lo” to change the significance of the word that follows; “la”, for example, essentially indicates that the following word is a proper name. An atomic language could use separate words as indicators, but I think it would be more convenient to merge the two together into a new entity. The closest I can describe using English would be if an article such as “the” or “a” were fused into the word that follows. Either way, the effect is a new unit of meaning, such as “some dog”, “all dogs”, or “a specific dog I have in mind”, rather than just “dog”, which would be more along the lines of an atom. These type markers, call them typefiers, once integrated like this, can begin to be used more flexibly. For example, new ones could mark the typical, the stereotypical, the connotative, the subjectively experienced, the perceived, the observed, and so on. Which ones end up being used is anyone’s guess.
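Since S doesn’t exist yet, here’s a minimal sketch in Python of the atom-plus-typefier idea. All the names here (the `Word` class, the typefier strings) are my own illustrative stand-ins, not part of the language:

```python
# A sketch of atoms and typefiers: an atom is a bare semantic root, and a
# typefier fuses with it to produce a usable word-unit (the way Lojban's
# "la" marks what follows, but merged into a single entity).
from dataclasses import dataclass

@dataclass(frozen=True)
class Word:
    typefier: str  # e.g. "some", "all", "specific", "stereotypical"
    atom: str      # bare root of meaning, e.g. "dog"

    def __str__(self) -> str:
        return f"({self.typefier} {self.atom})"

print(Word("some", "dog"))           # -> (some dog)
print(Word("stereotypical", "dog"))  # -> (stereotypical dog)
```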
The second important element is the relation of symbols. Once we have the bare-bones atom specified enough to be useful, we need ways to further elaborate on what we’re talking about. English tends to mash this area up badly, using “that” in very loosey-goosey fashion: “the dog that runs”, “the dog that is fat”, “that yappy dog”, etc. A specific relation, such as possession, can instead be laid out in explicit logical terms, such that the first term applies to the second term. So if we have two specified atoms, we connect them with a relation: “(some person)-possesses/owns-(that dog, that I have in mind)”, indicating that we don’t know who owns the dog we’re pointing at. Once again, many relations are possible, from traits and characteristics to associations to physical causation. Note that we’re still crafting a single entity, one we would commonly refer to as a noun.
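Extending the earlier sketch (again, every name here is an illustrative placeholder of mine), a relation can be modeled as a node that binds two symbols into one larger symbol, which is itself still a single symbol:

```python
# Relations as explicit structure: "first term applies to second term."
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Word:
    typefier: str
    atom: str
    def __str__(self) -> str:
        return f"({self.typefier} {self.atom})"

@dataclass(frozen=True)
class Related:
    first: "Symbol"   # the term that applies...
    relation: str     # e.g. "possesses", "causes", "has-trait"
    second: "Symbol"  # ...to this term
    def __str__(self) -> str:
        return f"{self.first}-{self.relation}-{self.second}"

# A composite symbol is a Word or a Related; either way it acts as one unit.
Symbol = Union[Word, Related]

owner = Related(Word("some", "person"), "possesses", Word("that", "dog"))
print(owner)  # -> (some person)-possesses-(that dog)
```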
Also note that pretty much all of this complexity is optional, used only if you have additional information to convey. You could just say “that dog” as a subject, or you could chain endless relations and connections to end up with a single subject; either way, it’s treated as a single symbol that happens to be composed of other symbols.
OK, now we’ve established how to talk about what we commonly refer to as nouns. Before starting in on verbs or descriptors, I need to make a distinction: all symbols would be treated essentially the same way, and atoms would be the fundamental building block of all forms of meaning, be they verbs, nouns, properties, whatever. Types would transform their use. So “dog” as an atom could become a singular noun, a group noun, a generalized dog, an unknown dog, a stereotypical dog, and so on. Or it could become the property of dog-ness, the act of being a dog, etc. Reducing this abstraction back to nouns, verbs, and adjectives glosses over a great deal of flexibility and subtlety.
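To make the one-atom-many-types point concrete, here is a tiny Python sketch. The typefier names and the English glosses are purely my own illustrations of the idea:

```python
# One atom, many types: a single root yields what English would split
# into nouns, group nouns, properties, and acts.
atom = "dog"
typefiers = {
    "singular":   "a dog",
    "collective": "a group of dogs",
    "generic":    "dogs in general",
    "property":   "dog-ness",
    "act":        "being a dog",
}
lines = [f"({t} {atom}) ~ {gloss}" for t, gloss in typefiers.items()]
print("\n".join(lines))
```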
Alright, let’s make a sentence. The parentheses would enclose a single word-unit. “(some person)-possesses-(that dog) (the act of walking)-with-destination-(some specific location)-with-property-(the quality of speed/haste)”
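The same sentence can be built up mechanically. In this sketch the spellings are English stand-ins and the two helper functions are hypothetical; only the structure (word-units bound by relations, relations chaining onto composites) reflects the design:

```python
# Composing the example sentence from word-units and relations.
def unit(typefier: str, atom: str) -> str:
    # A typefier fused with an atom makes one word-unit.
    return f"({typefier} {atom})"

def relate(first: str, relation: str, second: str) -> str:
    # A relation binds two symbols into one larger symbol,
    # which can itself be related further.
    return f"{first}-{relation}-{second}"

subject = relate(unit("some", "person"), "possesses", unit("that", "dog"))
predicate = relate(
    relate(unit("act-of", "walking"), "with-destination", unit("specific", "location")),
    "with-property",
    unit("quality-of", "speed"),
)
print(subject, predicate)
# -> (some person)-possesses-(that dog) (act-of walking)-with-destination-(specific location)-with-property-(quality-of speed)
```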
When written out in English with all the parentheses and haberdashery, it’s obviously cumbrous. But consider what such a sentence would look like if atoms were bound to one syllable, enclosed on both sides by a consonant, with their types preceding them as a single syllable beginning with a consonant and ending with a vowel. Relations would be one or two syllables, beginning and ending with a vowel. Using C for a consonant sound and v for a vowel sound (noting that a vowel sound could consist of multiple vowels, such as “ou” in “bout”), the sentence would fit into something like this framework, at the lengthiest and most explicit: (CvCvC)-vCv-(CvCvC) (CvCvC)-vCv-(CvCvC)-vCv-(CvCvC).
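As a sanity check on the length, here is a quick Python tally of the sentence’s skeleton under those phonotactic rules, assuming every word-unit is a typefier (Cv) fused to an atom (CvC), each relation takes its longer two-syllable form (vCv), and each “v” is one vowel nucleus, hence one syllable:

```python
# Count syllables in the skeleton of the example sentence:
# (person) possesses (dog)  (walking) with-destination (location) with-property (speed)
WORD_UNIT = "Cv" + "CvC"  # typefier + atom: two syllables
RELATION = "vCv"          # two syllables: v + Cv

skeleton = [WORD_UNIT, RELATION, WORD_UNIT,
            WORD_UNIT, RELATION, WORD_UNIT, RELATION, WORD_UNIT]

syllables = sum(part.count("v") for part in skeleton)  # one "v" per nucleus
print(" ".join(skeleton))
print(syllables)  # -> 16
```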
I count exactly sixteen syllables for the entire sentence. Impressive, eh? It’s important to note that I intend speech to be rather more interesting than the pointlessly simplistic example I just gave; that was baby talk. But baby talk is itself impressive considering I outlined it in about 1,000 words. Given more words, you could talk fluently, albeit boringly. And the words are all in the structure of atoms, so they’re easy to learn. More significant, in my opinion anyway, is the immense expressiveness of such a free-range system. Basically, I think grammar should just get the hell out of the way and let you talk. In any case, the language (currently called S) is still very much in progress.