Macroscopic Decoherence

Macroscopic decoherence is a fancy name for the "many worlds" theory in physics, a resolution to the dilemma presented by quantum physics that, to some, makes a lot of sense. Before I discuss what it is and what it means if it is true, I'll first go over the more commonly accepted modern viewpoint, specifically its aspect labelled the Copenhagen interpretation.

OK, here's the dilemma. Heisenberg's uncertainty principle, a verifiable precondition of any theory of quantum physics, states that you cannot simultaneously determine both the position and the momentum (or, for a fixed mass, the velocity) of a particle with arbitrary precision. The popular, practical explanation is that for objects as small as particles, the act of measuring their properties significantly changes those properties. For macroscopic objects such as a table, the photons bouncing off the table into our eyes don't change the position or velocity of the table, so we can ascertain both. For a particle, however, no tool has yet been discovered that can probe it without changing it in some respect, preserving its condition for a second measurement.

Now, strictly speaking, that isn't an accurate model of quantum uncertainty; the limit is not a shortcoming of our instruments. Particles behave like waves, and for a wave there is an inverse relationship between how sharply conjugate variables such as position and momentum can be defined: the more certain an agent is about one property, the less precisely the linked property can be known. So it's possible to have a continuum of accuracy across both properties. This seems like a mad system, but it falls straight out of the nature of waves. I'll stop and leave it at that before I get sidetracked from the main point- I haven't even gotten to the standard interpretation yet.
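In standard notation (my addition for concreteness, not something from the original argument), the tradeoff reads:

$$ \Delta x \,\Delta p \;\geq\; \frac{\hbar}{2} $$

Squeeze the position uncertainty toward zero and the momentum uncertainty is forced to blow up, and vice versa; you can trade precision between the pair, but never win on both at once.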
This gives modern physicists a dilemma- it would appear that our universe is a fickle beast. Let's say that we ascertain a given particle's position with perfect accuracy. Doesn't that mean it is categorically impossible for us to make any statement at all about its momentum, due to total uncertainty? With the caveat that perfect accuracy is itself impossible, yes. So what happens to the momentum? Or, more importantly, what happens to all the other places the particle could have been if we hadn't measured it?
The Copenhagen interpretation of quantum physics claims that the other possibilities simply do not exist once the measurement is made. This closely parallels the way we think about the macroscopic world in practical terms: even if we don't know where a table is, we know the table has a definite location that is not subject to change unless someone or something moves it. The act of measuring the position of the table only puts information about the table's position into our heads; it does not change any fundamental property of the table. So the Copenhagen model concludes that the act of measuring where the particle is collapses its wavefunction into one possible state. The measurement actually changes the wavefunction, nailing down one of the variables to a certain degree and leaving the other free to flap around to a corresponding degree. This collapse model makes particles behave like macroscopic objects in one sense.

However, in order to reach this conclusion, the Copenhagen interpretation has to violate several major precepts of modern science. I won't go through the whole laundry list (look it up if you're curious), but the universality and observer-independence of reality are on it. The mere fact that there are observers begins to matter, because it appears we can change the fundamental nature of reality by observing it. This raises the question of what exactly constitutes an observation. Perhaps one particle bumping into another counts as an "observation"? But relative to us the uncertainty principle still stands for both particles, so there really would have to be something intrinsically different about being an observer. That is the most serious flaw in an otherwise excellent model, and it is to address it that I add my thoughts to the camp of macroscopic decoherence. The other flaw is that collapse makes particles on a small scale behave in a fundamentally different way than larger objects.
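As a sketch in standard notation (again my addition, not from the original post): if the particle's wavefunction is a superposition over possible positions, a Copenhagen-style position measurement keeps one term and discards all the rest.

$$ |\psi\rangle \;=\; \sum_i c_i\,|x_i\rangle \;\xrightarrow{\text{measure}}\; |x_k\rangle \quad\text{with probability } |c_k|^2 $$

The discarded terms are the "other places it could have been," and under collapse they are simply gone.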

Macroscopic decoherence does not require a theoretically sticky collapse, hence its appeal. Instead, the theory goes, the other possibilities exist too, in parallel universes. Each possible position, momentum, etc. exists in an independent parallel universe. Of course, given the number of permutations for each particle and the number of particles in the universe, this forces us to postulate an indescribably large number of infinities of universes. But if you accept that postulate, you get a theory that explains particles in the same terms as macroscopic objects; you only have to accept that this same permutation mechanism applies to any and every grouping of particles as well as to individual particles. So there exists a parallel universe for every possible version of you, every choice you have ever made, and so on into infinity. This is something of a whopper to accept in common-sense terms, but it does make for a more manageable theory, in theory. The linchpin is that the act of observing does not mystically destroy the other probabilistic components of a particle's wavefunction; it only pins down what those properties are relative to the observer in question.
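Again in sketch form (standard many-worlds notation, my addition): instead of discarding terms, observation entangles the observer with every branch, and each branch contains a version of the observer who sees one definite outcome.

$$ \Big(\sum_i c_i\,|x_i\rangle\Big)\otimes|O_{\text{ready}}\rangle \;\longrightarrow\; \sum_i c_i\,|x_i\rangle\otimes|O_{\text{sees } x_i}\rangle $$

Nothing is destroyed; the superposition just grows to include the observer.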
In other words, the act of observing only tells the observer which parallel world they happen to be in. Each parallel world has only one possible physical interpretation: one position and momentum for every particle. There is, however, an endless infinity of future parallel worlds, so you can't pin down all properties of the universe; if you could, a distinct set of physical laws would necessitate a single universe derived from that one.

The weakness of this theory is that the same move can be applied to a variety of other phenomena, with silly results. Basically, there is no reason to postulate the existence of parallel worlds beyond the beauty of the theory. The same data is explained equally well by the Copenhagen interpretation and by macroscopic decoherence, which is why both theories exist: they were built to explain the same phenomena in the first place, so both produce the same experimental predictions. We can't travel sideways into a parallel universe, and similarly we can't go back in time and recover information that was destroyed by the act of observation. It appears to me that, given current understanding, the two theories are unfalsifiable relative to each other. Overcoming Bias makes a fascinating case that decoherence should be testable because the general wavefunction equations must apply universally, but the problem I see is that, in principle, the Copenhagen model could follow the same rules. True, this lends serious weight to macroscopic decoherence, which systematically requires those equations to apply, whereas they would only incidentally apply to the Copenhagen model. Or some souped-up version of the Copenhagen model could take this into account without serious revision; it's difficult to say.

I do disagree with the idea that macroscopic decoherence must be false because postulating multiple universes violates Occam's Razor. This is a misapplication of the razor. Occam's Razor does not refer to the number of entities a theory generates, but to the complexity, and hence the improbability, of the hypothesis itself. It just so happens that you have two options: either there is some extra mechanism by which observers collapse a wave into only one possible result, or there exist many possibilities of which we are observing one. It is not a question of "well, he's postulating one collapse function versus the existence of an endless infinity of universes- 1 vs. infinite infinities, and Occam's razor says smaller is better, so collapse is right." That is not correct by any stretch. True, there is currently no way to verify which theory is correct, but a rational scientist should consider them comparably probable and work toward whichever theory seems more testable.
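One common way to formalize that reading of the razor (my gloss, borrowing the minimum description length idea; it isn't in the original post) is to score a hypothesis H against data D by total description length:

$$ \text{prefer the } H \text{ that minimizes } L(H) + L(D \mid H) $$

Here L(H) is the length of the laws themselves and L(D | H) the length of the data encoded with their help. Many-worlds keeps the bare wave equations and lets them generate enormous numbers of branches, so L(H) stays small; collapse adds an extra mechanism to the laws, which is what the razor actually penalizes.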

Well, let's consider the ramifications if this theory of macroscopic decoherence happens to be correct. It means that every possible universe, ever, exists. Every possible motion of every single particle. According to quantum physics as we know it now, there exists some possibility that the Statue of Liberty will get up and take a stroll through New York. It is a… shall we say… exceedingly small… probability. I won't even attempt to calculate it, but I'd bet the improbability is a tower of exponents- 10 to the 10 to the 10 to the 10… stacked so many times you couldn't fit all the exponents into a book. It could easily be improbable enough that you couldn't write that many exponents on all the paper ever produced on Earth, but I won't presume I have any goddamn clue. However, according to macroscopic decoherence, there actually exist a very large number of infinities of universes where this occurs: one for each possible stroll, one for each particle's individual motion inside the statue for each possible stroll, and so on. And that is for an event so unlikely as to be effectively impossible, let alone for events as likely as intelligent choices between reasonable alternatives, such as what to order at a restaurant, or what to say every time you open your mouth, and then every minor permutation of each… gah! Any attempt to describe how many possible universes there are is doomed to fail. I'll leave to your imagination the task of diagramming, on the grand scale, the possible life courses each person might take.
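To give a very rough feel for why the number comes out with stacked exponents (a back-of-the-envelope guess of mine, with every figure invented purely for illustration): the statue contains something like $10^{30}$ atoms, and each one would have to fluctuate in a coordinated way. If each atom independently did the right thing with probability $\varepsilon$, the joint probability would be on the order of

$$ P_{\text{stroll}} \sim \varepsilon^{10^{30}}, \qquad \text{e.g. } \varepsilon = 10^{-10} \;\Rightarrow\; P_{\text{stroll}} \sim 10^{-10^{31}} $$

so the exponent alone has more zeros than you could ever print. The real calculation would be far messier, but the doubly-exponential shape is the point.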

So now we get to the interesting bit, the reason I am writing this post. In all of these parallel universes there exists a version of you doing all of these different things. The question I have is: are they really you? Seriously, there are versions of you out there that are exactly, exactly the same in every respect, living exactly the same lives in exactly the same universes, except that a single particle elsewhere in the universe has moved some infinitesimally small amount in a way that does not and could not possibly affect you. And yet, because of this schism of universes, you are separate consciousnesses inhabiting different parallel universes. Now, there is a good chance these universes are not totally discrete. Rather, they may inhabit a concept-space that, while isotropic, could be conceived of as having contours describing the similarity of the universes: very similar universes close together, very different universes far apart, in a space with an infinite infinity of dimensions. In that picture, even with respect to these parallel universes, some versions of you will be infinitely close to you and could be said to inhabit the exact same space, with versions splitting off into that space while remaining identical, and other versions experiencing physical changes on the same spot- some infinitesimal, others rather drastic, such as turning into a snake, a werewolf, or anything else you can conceive of.
So which of them is the "real" you? Or have you figured out that the concept has no meaningful content in this context? If we narrow this infinite schisming down to a single binary split, then both sides can be said to be equally "original" under the preceding frame. By the same token, an exact copy of someone within the same universe should be treated as synonymous with the "original." Please note, for those unfamiliar with this territory (I get this a lot): I am NOT referring to cloning. A clone is genetically the same but so utterly disparate from its progenitor that this level of identity is not even approached. I am referring to two entities so identical that there is no test you could perform to tell them apart. Obviously, after any time spent in different physical locations they will diverge from their initial point of creation, but it is at that critical instant of creation that the distinction matters. If the two are synonymous, there is no "original" and no "copy"; the original is merely existing in two places at once. If they could somehow be artificially kept identical, by factoring out particle randomness and their environment, they would continue to act in perfect synchrony until something caused a change: a minute aspect of their environment, or a tiny difference in their bodies' physical makeup, such as a nerve firing or even a single particle moving differently. (A single particle probably wouldn't change much at first, but somewhere down the line it might, through chaos-style sensitivity.) A toy version of that synchrony claim is sketched below.
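Here is the sketch (mine, under idealized assumptions- real people are obviously not seeded random-number generators): two copies of the same deterministic process, fed identical inputs, stay in perfect lockstep until a single input differs, after which the gap compounds.

```python
import random

class Agent:
    """A toy deterministic 'person': state evolves from inputs plus seeded randomness."""
    def __init__(self, seed: int):
        self.rng = random.Random(seed)  # identical seed => identical internal randomness
        self.state = 0

    def step(self, stimulus: str) -> int:
        # Next state depends only on current state, the stimulus, and the RNG stream.
        self.state = hash((self.state, stimulus, self.rng.random()))
        return self.state

a = Agent(seed=42)
b = Agent(seed=42)  # the "copy", identical at the instant of creation

for _ in range(5):
    # Identical copies plus identical environments => perfect synchrony.
    assert a.step("same input") == b.step("same input")

# One minute environmental difference, and the two diverge for good.
a.step("same input")
b.step("slightly different input")
print(a.state == b.state)  # False; every subsequent step inherits the divergence
```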
So now we get to the difficult bit. What about alternate encodings of the same information, represented in a different format? Are the two synonymous? I argue that they are, but only under certain circumstances: (1) a rigorous and perfectly accurate transcoding method is used to encode one into the other; (2) the timespan of the encoding is short enough that significant changes in the source material are minimized, if not completely eliminated; and (3) the encoding can, at least in principle, be converted back into the original form with zero loss or error.

The first requirement is the only ironclad one: if you make an error in the encoding, the result will not be representative of the original. The second and third are more complicated, but easy to grant in an ideal case. The second is flexible because identity is a continuum, and a certain degree of change is acceptable, producing results "similar enough" to meet identity criteria. If it's the "you" from a year ago, it is still the you from a year ago even though it isn't identical to you now. So if the encoding takes a year, it still preserves identity; it just doesn't preserve identity with respect to changes into the future, which is an impossible demand anyway, since even a perfect copy will diverge into the future due to uncontrollable factors. As for the third, if there is no method to convert the new encoding back, then it cannot be verified that it is indeed synonymous with the original. It is possible to produce an identical representation without this clause, but if for some reason the encoding cannot be converted back, you cannot know that the process preserves material identity absolutely. This is the test of a given process.

Now, for digital conversion, reconversion back into physical media is currently impossible, but simulating the encoding in a proven physics simulation and producing the same results is tantamount to re-creation in the physical world. I am aware that this looks like a circular, figure-eight argument, relying on the identity of a simulation to prove the identity of digital simulation as a medium. But it isn't, because I am describing a test of a specific conversion method. To create a proven physics simulation in the first place, other provable methods can be used to compare the simulation's results with the physical world. Once the simulation has been shown to produce the same results as the physical world given the same input, a given instance can be run in it and compared against the exact same situation in the physical world, using the simulation as the calibrated meter stick by which to judge the newly digitized person or other entity's accuracy. For ordinary data, at least, criterion (3) is just a round-trip test, as the sketch below shows.
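A minimal sketch of that round-trip test (mine; the hex "format" is a stand-in for whatever alternate encoding you like, and serializing a person is left as an exercise for the far future):

```python
import hashlib

def encode(source: bytes) -> str:
    """Transcode into an alternate format (here, trivially, hex text)."""
    return source.hex()

def decode(encoded: str) -> bytes:
    """The inverse transcoding, back into the original format."""
    return bytes.fromhex(encoded)

def round_trip_preserves_identity(source: bytes) -> bool:
    """Criterion (3): the encoding must convert back with zero loss or error.
    Compare cryptographic hashes of the original and the reconstruction."""
    reconstructed = decode(encode(source))
    return hashlib.sha256(source).digest() == hashlib.sha256(reconstructed).digest()

print(round_trip_preserves_identity(b"any state you care to preserve"))  # True
```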
