The Magic of Information Tools
“When we discover something new, what we always find is that it expands in all directions rather than closes in to a point.”—Carver Mead, Caltech.
Confined to our homes by gangs of viral predators in the streets, we find ourselves considering the magic of information tools.
These paths out of a broken world are paradigms: the Microcosm of chips, leading to the teleputers of mobile computing; the Telecosm of the electromagnetic spectrum in fiber, wireless, and artificial intelligence; the Medicosm of bio-informatics, from CRISPR to protein folding; and the blockchain, bringing a new security architecture to the internet and a new monetary system to the world economy.
All these technologies, crucial to our lives in a plague year, assert the primacy of mind over matter. Even the virus itself gains its menace from its stripped-down vector of information, which seeps into human cells and reprograms them. They all defy the materialist superstition, the primitive academic idea that matter is primary and that ideas, consciousness, and imagination are emergent or derivative from it.
All of these technologies spring in important ways from the information theory developed throughout the 20th century, climaxing in 1948 with Claude Shannon's formulation. In a crucial breakthrough, Shannon defined information as disorder rather than order, as unexpected or surprising bits rather than predictable patterns. Information is bits and bytes, messages, not determined by the machine.
Breathing Life into Information Theory
This theory began in 1877 with Ludwig Boltzmann’s description of entropy as “missing information.” Boltzmann’s entropy is thermodynamic disorder, the tendency of all orderly systems to become disorderly over time unless shored up with new inputs of energy.
Beginning perhaps with the finding of Charles Sanders Peirce that reality is triadic — all symbols are linked to their objects, maps to their territories, by interpretant minds — information theory transcended mechanical determinism. Consciousness is not emergent; it is the source of meaning.
Using a model that we can now recognize as software, Kurt Gödel in 1930 extended this principle to all logical systems, from mathematics to Boolean logic. No logical scheme is both complete and consistent without reference to axioms outside itself, axioms that cannot be reduced to the system. He showed that logical systems become incoherent when they are self-referential loops.
Gödel’s proof led Alan Turing to invent an abstract universal computer architecture, the “Turing machine.” He used this concept in 1936 to demonstrate the limits of mechanical computation, and in his 1939 work on ordinal logics he showed that such machines depend upon an “oracle,” or programmer, outside the computer.
What is this “oracle”? Turing did not know, but he commented that “it cannot be a machine.” Once again, the futile circularity of self-reference dictates that all machines, including so-called artificial intelligence, depend on consciousness outside themselves.
Turing extended this finding to physics. He showed that quantum uncertainty really stems from the self-referential impasse of using electrons and photons to measure themselves. Thus, he anticipated the new field of quantum information theory. As explored by Christopher Fuchs and Gregory Chaitin, among others, quantum mechanics like any other logical scheme depends upon outside observers or oracles.
A New Model of Physical Reality
At the beginning, quantum theorists did not imagine that they were developing new forms of information theory. Back in the glory years of the mid-1920s, Erwin Schrödinger, an Austrian, and Werner Heisenberg, a German, each grandly delivered new models of physical reality, based on waves and particles respectively.
Inspired by the findings of Jean Baptiste Fourier, Schrödinger offered a wave theory and a canonical equation, which is still widely used whenever engineers seek to model quantum systems.
Inspired by the findings of Max Planck and Albert Einstein, Heisenberg offered a matrix mathematical theory based on vectors, which led to his famous “Uncertainty Principle.”
Schrödinger depicted reality as wave phenomena spreading infinitely through the world. Heisenberg depicted reality as particles in vector matrices fraught with enigmas and uncertainties.
But it turned out that integrating the product of two waves is equivalent to finding the inner product of vectors. Adding up the vectors repeats the Fourier series of waves. Vector projection and Fourier coefficients are the same thing. Finding an inner product is the same as integrating a product. Uncertainty, as Carver Mead shows in Collective Electrodynamics, is just an effect of Fourier transforms.
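The equivalence can be checked numerically. Below is a minimal sketch in Python with NumPy; the particular waves are illustrative choices of mine, not from the text. Integrating the product of two sampled waves is the same arithmetic as taking the inner product of their sample vectors, and a Fourier coefficient is exactly such a vector projection.

```python
import numpy as np

# Sample two waves over one full period [0, 2*pi).
n = 1024
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(3.0 * t) + 0.5 * np.cos(5.0 * t)   # a composite wave
g = np.sin(3.0 * t)                           # a basis wave

# Integrating the product of the two waves (Riemann sum)...
dt = 2.0 * np.pi / n
integral = np.sum(f * g) * dt

# ...is the same operation as the inner product of the sample vectors,
# scaled by the step size dt.
inner = np.dot(f, g) * dt
assert np.isclose(integral, inner)

# A Fourier coefficient is a vector projection: projecting f onto the
# basis wave sin(3t) recovers that component's amplitude, 1.0.
coeff = np.dot(f, g) / np.dot(g, g)
print(round(coeff, 6))
```

The cos(5t) term contributes nothing to the projection because distinct frequencies are orthogonal over a full period, which is the discrete form of the Fourier orthogonality the paragraph describes.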
As Walter Moore described in his Schrödinger biography: “Here then are two theories, one based on a clear conceptual wave model of atomic structure and the other based on a radical statement that any such model is meaningless [in the uncertainty principle], yet both lead to the same final results.”
Today, however, the uncertainty principle is becoming increasingly quaint, while wave theory moves from triumph to triumph. As Schrödinger pointed out: “Observation presents us with two kinds of structural linkage between events: Longitudinal linkage across time and space and transversal [or horizontal] linkage based on structural interlacing.”
Particles offer a causal model over time. One thing leads to another in a determinist line. But determinism means no information. “The transversal view” of wave functions said Schrödinger, “is a relation between simultaneous world points or… at a space like interval, so that there can be no causal relationship…”
The transversal view of waves corresponds to a parallel model of computation. The longitudinal time domain corresponds to serial processing.
The breakthroughs of neural networks, artificial intelligence, and machine learning all stem from parallel processing of multidimensional vectors. As more of the volume crowds toward the surface, proximity in the result suggests affinity in the phenomena. Things that are close together resonate as waves.
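A sketch of that proximity-as-affinity idea, with illustrative waves and a standard similarity measure (my choices, not the text's): treating sampled waves as vectors, cosine similarity reads nearness of direction as resonance.

```python
import numpy as np

# Treat each sampled wave as a vector (a point) in a high-dimensional space.
t = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
a = np.sin(2.0 * t)             # reference wave
b = np.sin(2.0 * t + 0.05)      # nearly the same phase: "resonant"
c = np.sin(7.0 * t)             # different frequency: unrelated

def cosine(u, v):
    """Cosine similarity: near 1.0 means aligned, near 0.0 means orthogonal."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(round(cosine(a, b), 3))   # close to 1: proximity signals affinity
print(round(cosine(a, c), 3))   # close to 0: no resonance
```

Vectors pointing the same way score near 1 regardless of amplitude, which is why nearness in such spaces can stand in for likeness of the underlying phenomena.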
Schrödinger offers an analogy: “Bertrand Russell sees this kind of structural interlacing to be at least as important a source of inductive inference as the causal relation of permanent succession.” The copies of a book are not the cause or the effect of each other, yet the similarity is undeniable. For the points on a wave surface, the structural similarity is their resonance, meaning that the phase or timing is the same.
Schrödinger believed that the particle approach triumphed by “describing minutely the [atom’s] so called ‘stationary states’…i.e. in the comparatively uninteresting periods when nothing happens, the theory was silent about the [dynamic] periods of transition or ‘quantum jumps.’”
Today all of academic economics should learn this lesson as it pores over meaningless static states while ignoring the surprising leaps of human creativity.
Schrödinger pointed out that the transitions themselves would usually use up the average interval between any two of them, leaving the quantum-jumping atom no time to be in those ‘stationary’ states. Now the whole “sustainability” craze seeks to reduce the entire economy to a stationary state, a self-referential materialist dead end.
In quantum information theory, the quantum wave function is often depicted as a map of probabilities that collapse in the act of measurement by an observer. But the quantum observer is just another instance of Peirce’s “interpretant” between symbol and object, Gödel’s outside axiom giver, Turing’s Oracle, and Shannon’s creative surprise, with the bandwidth of possibilities (the alphabet of symbols or possible messages) collapsing with the choices of the messenger.
Translating Boltzmann’s analog entropy into digital terms, Shannon defined his entropy as informational disorder, and the equations are the same. Information is not order, or “negative entropy,” as all too many analysts have said. Information is entropy: disorder, disequilibrium, and surprise.
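Shannon's measure can be computed directly: H = -Σ p·log₂(p) bits per symbol. A small sketch, with probability distributions of my own choosing as examples:

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)): the average surprise, in bits, per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A perfectly predictable source carries no information...
print(shannon_entropy([1.0]))             # 0.0 bits
# ...while maximum disorder over 8 equally likely symbols carries the most:
print(shannon_entropy([1 / 8] * 8))       # 3.0 bits
# A biased coin, mostly predictable, sits in between:
print(round(shannon_entropy([0.9, 0.1]), 3))
```

The predictable source scores zero and the uniform source scores the maximum, which is the sense in which surprise, not order, is what the measure counts.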
Behind Shannon’s model was the 19th century development of geometry from Euclidean flatlands and three-dimensional shapes to arbitrarily multiplied dimensions in vectors, which combine magnitudes with directions. In right triangles, Pythagoras gives the sum of the squares of the adjacent sides as equal to the square of the hypotenuse, which in a sphere can be mapped as the radius.
The multidimensional geometry used by Shannon generalizes Pythagoras, with the number of dimensions as the exponent. Thus, a cube’s volume is the length of one side cubed, and a 99-dimensional hypercube’s volume is the length of one side to the exponent 99: its multidimensionality.
All the “volume” in such a hypothetical hypercube migrates close to the surface. This clustering phenomenon enabled Shannon elegantly to calculate his channel capacity theorem for networks: How much information could be carried by a specific bandwidth under particular conditions of noise and interference.
The same multidimensional geometry of vectors is at the heart of the successes of artificial intelligence and machine learning. Guess, gauge the error, adjust the answer, feed it back up and down hierarchical ladders of abstraction. Once again multidimensional vectorial matrices produce clusters that enable the mapping of proximities into the recognition of affinities. Escaping all the labored linkages of causality, that’s what AI is all about.
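The guess, gauge, adjust loop can be shown in miniature. A sketch under stated assumptions, fitting a simple linear model by gradient descent; all data and parameters here are illustrative, not from the text:

```python
import numpy as np

# Guess, gauge the error, adjust the answer: fit weights w so that
# x @ w approximates y, by repeated error feedback (gradient descent).
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))          # 200 samples of 3-dimensional vectors
true_w = np.array([2.0, -1.0, 0.5])    # the hidden answer to recover
y = x @ true_w

w = np.zeros(3)                        # the initial guess
for _ in range(500):
    error = x @ w - y                  # gauge the error
    grad = x.T @ error / len(y)        # how the error changes with w
    w -= 0.1 * grad                    # adjust the answer, feed it back

print(np.round(w, 2))                  # recovers roughly [2.0, -1.0, 0.5]
```

A neural network repeats this same loop layer by layer, feeding the error back up and down the hierarchy; the single-layer case above is the smallest instance of the pattern.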
At the root of the technologies we follow in this prophecy are the triumphs of information theory, now evolving toward quantum information theory. Overthrowing determinist causality are wave theories of resonance and affinity.
In the end, these information tools will overcome the crude strategies of the virus with surgical strikes of intelligence.
Editor, Gilder’s Daily Prophecy