Old Book: Chapter 2
Anti-jump muscles flexed
"You keep samin' when you oughta be changin'." (These Boots are Made For Walking)
"Heaven / Heaven is a place / A place where nothing / Nothing ever happens." (From Talking Heads' Heaven)
The Cloud of Not-Doing
When I was 16, the National Lampoon magazine did a parody of the National Enquirer. The headline for one of the stories was something like "Family of Five Spared as Oven Fails to Explode" (I haven't been able to verify the actual wording). The doubly absurd concept of causation really appealed to me. It teeters on the brink of making sense. Surely if an exploding oven can cause a family's demise, then its failure to explode can be said to cause the family's survival.
Now, if we looked upon all of the things that could have caused something to happen and then used their non-occurrence as explanations for why things didn't happen (i.e., if we use one set of nonevents to explain another set of nonevents), then the world would be a much stranger place than we usually assume it to be. Duh! Think of all the nonevents that in the last second alone have caused me not to convert to Islam, or caused you not to get a headache, or not to stop reading this sentence. Are these causes? Take them away and what happens? The spontaneous exploding of ovens? Mass conversion to Islam? If these nonevents are causes, then there is literally an infinite number of them buzzing around everywhere, helping to hold the world as it is. We stand still because our anti-jump muscles are flexed. Conversely, things must happen by the suspension of these holding forces, by relaxing those anti-jump muscles. Obviously, this image really appeals to me. It has a spooky, quantum-reality feel to it. But does this cloud of not-doing have a place in a theory of causation? I would say that it does—as long as we keep it separate from our usual version of causation. It's bound to work just fine if we make a wholesale swap of this version of causation for that one. With that said, I feel compelled to point out that a full-fledged Cloud of Not-Doing, much like the infinite level hedge, is probably incomprehensible.
It's as if everything were trying to happen at once, but most of those things are held in check by the fact that their opposites are also trying to happen. There are no things that continue. The phenomena of the world are remade with every tick of the cosmic clock.
In the standard version of causation that emphasizes change, reasoning is essentially instrumental. We try to isolate causes so actions can be taken, so we can do stuff. This alternative version for nonevents is, on first glance, the opposite of isolating. The Cloud of Not-Doing multiplies causes; the non-occurrence of infinitely many things is causing me not to get a headache. This embarrassment of riches may even obscure the paths to getting stuff done. It may well be, therefore, that this particular Assumption Switch does not make a commodious fit with the action-oriented human variety of thought. It explains but to no avail. The cloud almost seems more appropriate to some kind of passive vegetable consciousness. The quest for a correct epistemological stance, however, runs deeper than our biological blinders. We are free to muddle along as best we can to understand the implications of this caused stasis if that's what the situation demands.
My preferred approach to causation is broader and less instrumental than the one whose point is to get things done. Reasoning has the fundamental role not of helping us do things but of helping us to understand the way of the world in as deep a way as possible. It ought to shed light on being as well as doing. Each foreground-background choice that we make in order to extract meaning from our experience represents an artificial orientation imposed on an egalitarian world by the inegalitarian structure of language. Changelessness is no more the natural condition of the deepest reality than flowing change or abrupt cessation. No orientation is the natural one. There are no privileged frames of reference. Arkansas's license plates assert it is "The Natural State," but nothing else can legitimately claim that title.
If Only
The fact is that people use some sorts of nonevents as causes and effects all the time, although such reasoning is mostly weeded out of formal arguments. Any act of omission rather than commission, any explanation about why what-might-have-been didn't come to pass, any subjunctive of the form "If only I had..." presents a nonevent as a cause and/or effect. Let's look at a few fairly typical cases:
•I never became a world famous author because no influential people ever discovered me.
•I didn't fall flat on my face because I missed stepping on the banana peel by half an inch.
(In other words, I didn't fall because I didn't step on the banana peel. The non-event of not stepping on the banana peel caused the non-event of my remaining standing.)
•The only reason Prince Charles has become King of England instead of me is a mere accident of birth.
(My non-royal birth caused my non-ascension to the throne.)
•We lost the game because our star player forgot to eat breakfast and played sluggishly.
(This one is a little trickier. I think of forgetting as a passive thing, an anti-jump rather than a jump -- mustn't one or the other of forgetting and remembering be a non-event? -- but one can clearly make the contrary argument. Hard-headed, rational thinkers might say that the player forgot to eat breakfast because some brain cells died or because she was preoccupied with the coming game, and those would stand as the real causes. I can't fault that argument, but I can offer this amusing if slightly weaker version in response: We lost the game because our star player didn't invent her fantastic spin move until a few days later.)
Granted, there is something strange about the above explanations, but each is a fairly ordinary expression of human thinking.
[The more familiar form these usually take is:
If only an influential person had discovered me, I would have been a world-famous author.
If I'd stepped on that banana peel, I would have fallen flat on my face.
etc.]
Each is something that our minds might formulate in response to our experience even if our logic has a hard time with it after that. That is, such thoughts are meaningful but ultimately rejected because they don't fit into a logical scheme. They sort of sound like lame excuses, don't they?
Survival
There is substance behind the whimsical examples above. The "occurrence" of a non-event is equivalent to the persistence of things as they are (and the persistence of non-things as they aren't). I want to be able to think about causes for the persistence of conditions and of mere being in general as well as the usual causes for change. That is, I want to focus on nonevents as effects. Why do things "survive" from moment to moment? This is a very broad question which can be taken in any of several ways. The deepest philosophical interpretation asks why existence doesn't just flicker out, why, in the parlance of metaphysics, there is Something rather than Nothing. It is not the kind of question scientists tend to ask, nor is it one we can in all likelihood resolve, but that doesn't mean it isn't an excellent question to ponder.
We can also interpret the question more empirically as asking what makes systems stable or what constitutes organization—why do things keep going rather than falling apart? Complementary theories developed by Assumption Switching require some concept of causation, even if it is not the usual one. Because the idea of nonevents as effects is so strikingly odd, at least at first, it may require an equally strange notion of causation to make sense of it—perhaps even including the non-explosion of ovens.
The whole venture might strike some readers as silly or pointless. Our intuition does not seem to demand an explanation for things staying the same. One of our most deeply held, if unacknowledged, assumptions is that things won't change (and thus things will go on) unless something happens to them. If we could exempt anything from the need for explanation, it would be the simple enduring or continuing of objects, physical space, and certain processes. You don't have to worry that the chair you are sitting in will suddenly cease to exist. We tend not to think that things spontaneously begin or end. Thus non-events cannot be the effect of any cause. I will argue that withdrawing this assumption undermines our basic Western ways of thinking about the world. From the point of view of the Fully Automatic Model, the idea of causing families to survive comes down to a confusion of the background for the foreground. Causation can operate only against a natural backdrop of nothingness/changelessness. Things are caused to change. Staying the same, in other words, is not something that happens but something that just is. And as something that just is, it is without cause or explanation. I have repeated a number of times now that, at the level we perceive as deepest, where foregrounds and backgrounds achieve a kind of equality, all events and objects have the just is quality. At this level there are no causes or explanations.
Contrary to our intuition about stasis, however, certain kinds of stability or staying-the-same are quite obviously caused. The so-called far-from-equilibrium steady states of rivers and hurricanes and people must be fed in order to be maintained. They very clearly will not stay the same unless special conditions prevail, unless they get their food. Supposing a human gets all of the food, air, and love she needs to maintain herself, however, we will expect all of her aspects to endure—her stubbornness, her ability at chess—even the ones for which it is impossible to say what feeds them.
A rock or a chair, as opposed to a person, seems to be thoroughly inert and self-contained—at least, that is, until you look to the stasis of its constituent parts. What keeps an atom or a proton or a quark, of which the chair is made, going? What binds these knots of energy together? Gluons? The closer we look at systems, at their structures and particles, the easier it is to accept or even demand caused stasis.
If we ask why things are the way they are, it seems to me, we must accept explanations both for how they got this way and how they stay this way. In the terms of the previous sections, changelessness seems to be an important aspect of the background against which we can reason or measure. But we have few intellectual tools for reasoning about changelessness. Our sharp cultural distinction between the active and the passive seems to contribute to our standard causal orientation, but there may also be a biological component. Apparently our attention responds to change. There are specific receptors in the eye that "see" movement but none that do the same for stasis. That is, when change happens our neurons fire, our brain works. When things stay the same we may even lose our ability to perceive them.
Big Tent Causation
The validity of this attitude toward stasis is, as I mentioned before, tied up with our standard notion of causation. Causation is an extremely troublesome and yet seemingly necessary concept. Philosophers have debated its meaning for centuries and will continue to do so. My take on the idea here is not very sophisticated. I have set up a straw man called Standard or Coercive Causation against which I will contrast Passive Causation, the kind that might account for changelessness. Standard causation has to do with the Fully Automatic Model. Things happen because of force, contact, pushing.
The Fully Automatic Model uses the simple formula "causation = force" as if nothing would happen unless it were physically coerced, as if the world were made of inert, even sluggish and resistant components. Although physics no longer officially views force this way, our thinking is still very much under this notion's influence. Gravity, electromagnetism, and the strong and weak nuclear forces push and pull the stuff of nature, controlling collisions and trajectories, orchestrating the entire show, including, some would say, your reading of this sentence. We can see the culmination of thinking in terms of forces in the sustained effort in the physics community over the last century to unite these forces in a Unified Field, one superforce that gives rise to them all. Once this unification is achieved we will have found the physical Prime Mover. It will embody the cause for all possible changes. Finding such a superforce has been called the ultimate goal of physics, even the end of physics.
These physical forces, however, can only be indirectly responsible for causing ovens not to explode, keeping people from becoming King of England or, I guess, maintaining the integrity of a proton. If caused stasis has any validity, there must be some analog of force that corresponds to Passive Causation which we cannot see. The physics of stasis needs to be much different from our current system. We can see right away one peculiar characteristic of the forces of not happening: they must act absolutely constantly, because any momentary cessation will allow change to occur. As soon as the nonexploding "force" subsides, the family perishes. As soon as I lose control of my appetite, no matter how good I have been every day for years, I gain weight. These holding forces "act" in the indeterminate spaces or rests between events, but, from the perspective of our switched view, they must constitute events in their own right. Gravitation, electromagnetism, et al. act more or less constantly as well, but we tend to associate them with fitful or punctuated action in a sea of stasis, as in the sudden impulse imparted in the collision of billiard balls. The friction and inertia that damp out that action and make it appear a mere punctuation must have some close association with Passive Causation.
[Another thing we may notice immediately about Passive Causation is that it must somehow be an act of a thing on itself. The rest of the world causes us to change. In some sense we must be the agents that cause ourselves to remain the same. Presumably there is no external agent keeping protons from decaying. Whatever it is, it must act from the inside out. Self-causation, of course, gives rise to a chicken and egg problem. We need to have a system going to produce the process that keeps the system going. Where can it begin? How can a system and its persistence arise mutually? In what sense does the oven keep itself from exploding?]
It is difficult to find appropriate words to describe this idea of holding forces or Passive Causation in English. Our language has developed to serve the opposing point of view and my borrowings require quotation marks. 'Happen,' 'action,' 'event' all imply change against a backdrop of changelessness. Each concept has an obvious analog in this alternative world, but English offers no words for them. 'Not happen,' 'inaction,' and 'non-event' carry the wrong connotation. In general, the words at my disposal, verbs like preserve, hold, maintain, control and keep sound weak and anthropomorphic. We must remember, however, that "force" has anthropomorphic roots itself which we have slowly managed to deaden. The inadequacy of our vocabulary in itself is not a reason to reject this alternative version of force, but it does make statements seem stilted.
It is pretty easy to find problems with Standard Causation. First of all, there is no clear procedure for selecting the "real" cause of an event. What level of explanation is satisfactory? If, for example, we want to find the causes for winning a baseball game, are the answers going to concern talent, practice, strategy or chemical reactions in the muscles and minds of the players? Even working with the limited notion of the standard model, it is virtually impossible to meaningfully trace the overlapping sequences of contact for any but the simplest events.
More importantly, the free use of Standard Causation quickly leads to an infinite regress. In reply to each proposed cause, we can always ask "And what caused that cause?" Unless at some point it is possible to respond "That's just the way it is" or attribute everything somehow to a final cause in the Aristotelian sense, we can never extricate ourselves from this web. When my son was at that certain stage of development he frequently engaged me in the WHY game. Even with hours of practice, I never managed to get past the fifth or sixth "why?" without creating a circularity in my causal structure. I came to appreciate more than ever that causation is really an infinitely complex web of interdependence, interaction, and feedback. The regress leads us straight back to assumptions. Since causation is presumably a property of "egalitarian" nature and not of our "inegalitarian" theories, we ought not to be permitted a stopping place.
Linear causation is a kind of useful fiction, a simplification. Mutual interaction, simultaneous co-causation is the way of the world. Unfortunately, as I've said before, rational discourse is confined by its sequential, narrative structure. No one has found a way to perfectly express simultaneous interaction within the context of language. Art may be able to communicate such subtlety by inference, but scientific description can't.
The impossibility of a linear description of a non-linear world is closely tied to the Both-and-Neither Model. The boundaries between things are fuzzy, and thus, among other things, the points of contact are blurred. Bart Kosko, a self-proclaimed "fuzzy thinker," calls the situation the Mismatch Problem. Standard causation and the standard logic it implies are inadequate. His way to mitigate the Mismatch Problem is to overturn Aristotle's law of the excluded middle and replace ordinary logic with so-called Fuzzy Logic. In the world of fuzzy logic we assign fractional truth values (between zero and one) to propositions. Two or more simultaneously partially true facts split lines of deduction into several paths, each of which will have limited or probabilistic validity. My problem with Fuzzy Logic is the assignment of values. Just as when we considered "averaging", competing ideas are not half true or 90% true-10% false. They are completely true in their appropriate contexts of assumptions and completely false under other assumptions. The Foreground-Background Switch is motivated by the same Mismatch Problem. Passive Causation, for example, can produce an alternative description and, in combination with the standard approach, overcome the mismatch. Both switching and fuzzing solve the problem by hedging one's bets, taking things simultaneously in two different ways.
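For readers who want to see the mechanics, fuzzy logic in its simplest textbook (Zadeh-style) form can be sketched in a few lines. The min/max connectives and the glass-of-water example below are standard illustrations from the fuzzy-logic literature, not taken from this chapter; the point is only to show how the law of the excluded middle fails once truth values become fractional.

```python
# A minimal sketch of fuzzy-logic connectives, where propositions carry
# truth values in the interval [0, 1] rather than the set {0, 1}.
# Names and example values are illustrative assumptions.

def fuzzy_not(a: float) -> float:
    """Negation: the complement 1 - a."""
    return 1.0 - a

def fuzzy_and(a: float, b: float) -> float:
    """Conjunction: the minimum of the two truth values."""
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    """Disjunction: the maximum of the two truth values."""
    return max(a, b)

# Two simultaneously, partially true "facts":
glass_is_full = 0.6                         # the glass is 60% full...
glass_is_empty = fuzzy_not(glass_is_full)   # ...and therefore 40% empty

# Classically, "A or not-A" is always fully true and "A and not-A" is
# always fully false. With fractional values, both come out in between:
print(fuzzy_or(glass_is_full, glass_is_empty))   # 0.6, not 1.0
print(fuzzy_and(glass_is_full, glass_is_empty))  # 0.4, not 0.0
```

Note how this differs from the Assumption Switch described above: the fuzzy approach blends the two answers into one fractional value, whereas switching keeps each answer fully true within its own context of assumptions.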
The Natural
I believe we have imposed an orientation on causation through our choice of changelessness as the natural state or default value of the world. The choice of a natural state colors everything we see in a most profound way. The natural state gives an absolute orientation to our explanations. It determines what is interesting, what is real, even what counts as an event. Consider, for example, the question of the underlying human nature. Are we naturally good but subject to temptation or inherently corrupt and selfish? The position one takes on such an issue has profound ramifications in every area of life from politics to child rearing.
The closer we look at naturalness, however, the more it recedes, especially in the realm to which we have the best access—human psychology. Time and again I find that when some side of a dichotomy seems to be natural, upon inspection it will turn out to be a matter of choice. Many women tend, for example, to feel that the default state of a toilet is with the seat down, since from their point of view there is rarely an occasion for it to be up. They find it unnatural, I think, rather than merely rude when the seat is left up. To their vilified male companions, however, neither position is any more natural than the other. Okay, so maybe this isn't the most compelling example. There are others.
Maleness, for example, is often given automatic priority as the default gender, especially in certain areas. God, the source of all things, is, of course, seen as a Father in the popular imagination. Our dominant creation myth has the male as the temporally prior and thus more basic human variety. Women are derived from men (Adam's rib) and are thus secondary. It is clear to most people nowadays, however, that maleness and femaleness must arise mutually. You can't have one without the other. It seems to me always the case that when one side of a duality is looked upon as the natural side, it turns out to be that the pair arises mutually, that each half can only be defined in terms of the other.
We tend to equate ontological depth with temporal precedence—that is, being first implies being more natural, but, as with the chicken and the egg, it may be meaningless to try to assign temporal precedence to things that arise mutually. The book of Genesis holds that Non-existence and Chaos come before creation, and we therefore see the Void as undergirding reality. When we strip away the superficial strata of reality, the Somethingness gives way to an underlying Emptiness that was there all along. But you can't really strip away layers of reality. Perhaps the idea of layers is our own invention. It could just as well be true that all of reality is of a piece and arises mutually and that there is no nothingness under all the somethings.
In the current paradigm of physics, the underlying and essential unity of the forces of nature (the Unified Field) was manifest before its inessential and overlaid separation that occurred a fraction of a second after the Big Bang. In the fantastic heat of that first instant, particles and the four forces had yet to "freeze out" of the products of creation. Again, being more natural and fundamental has been equated with coming first.
A number of years ago, it was noticed that the deeper, interior layers of the human brain were structurally very similar to the brains of our supposed distant ancestors, the reptiles. As we evolved, apparently, rather than changing those primitive structures, we merely overlaid newer, more sophisticated ones. This reptilian brain in the core of the human brain appears to be responsible for, among other things, our most basic emotions—fear, anger, lust, etc. Some writers have used the fact that this part of the brain is older to support the argument that these emotions and the instinctual behaviors that go with them embody the true human nature. Older = deeper = more natural. Our "higher" faculties and capabilities are tacked on afterward and are thus more artificial and less basic and essential.
As one last and more subtle example of temporal precedence implying ontological depth, let's look at Chomskian linguistic theory. It is proposed that the semantic level of thought, that is, its meaning, is deeper than and occurs prior to its syntactic expression. An idea first arises in a prelinguistic or metalinguistic form which must then be sent to the language organ of our brain in order to be transmuted into a particular grammatical and linguistic form. There are very good reasons for making such a split between semantics and syntax. It provides the beginning of an explanation for the universality of various syntactic structures found in all human languages. The translation to speech is the product of the same genetically inborn syntactic mechanism whether one is a speaker of Swahili or Hindi.
Indeed, we have probably all had the feeling that we had an idea which somehow we just could not express in words, so perhaps thoughts do, in a sense, precede their syntactic expression. On the other hand, the experience of "waiting to see what would come out of my mouth next," where expression almost seems to precede invention, is equally universal. From the point of view of a theory of mind, the problem with this semantic/syntactic duality as an explanation of linguistic behavior is that it begs the question of where the metalinguistic idea comes from to begin with. In solving the problem of universal grammars, it just pushes aside the more difficult and more interesting question of the origin of meaning in the mind in the first place. Supposing, on the other hand, that meaning and grammar arise mutually, that we can't have one without the other, we lose the ability to make hard and fast distinctions between the two that temporal sequencing provides. (By the way, in so doing we also appease two extreme wings of the philosophy of mind who make very strange bedfellows—the holistic mystics who see all things as part of one unitary process and the supporters of so-called "strong" artificial intelligence who believe that meaning is just a side-effect of the playing out of formal and syntactic rules.)
When we try again to approach the apparent language-like structure of the M-T relationship, we will further discuss how meaning and its expression can be simultaneous and how language organs may not be necessary.
Be Yourself
Anyone who has ever tried to just be natural in an awkward or difficult situation knows what a paradox that presents. The harder we try the more anxious we become and the more stiltedly we act. On the other hand, there is a sense in which it is impossible to be anything but natural. Isn't our anxiety and stiltedness part of our nature? Is it unnatural to try? Again, the idea of being natural seems to be tied up with the idea of a default way of being to which we revert when free of outside influences. But we are never free of influences. We just exchange one set of conditions for another.
For years I have heard various athletes implicitly belittled because it is supposed that their talent is natural while others have been given credit for making it through hard work. Does that mean that hard work does not come naturally to hard workers? Perhaps hard work does come naturally to some but others have to work at it!
We are presumed to have default values for our states of mind or body that occupy very deep parts of ourselves, to which we have access and to which we will naturally return when we are free of external or situational influences. These values will come out spontaneously when the unnatural or artificial is let go (or is it held back?). I think, on the contrary, that there are modes of behavior, habits and patterns to which we fall back when pressed, but no natural or deepest behavior. There is probably no such thing as operating without external influence, nor is there any inherent hierarchy to behavior that marks one behavior as deepest. We merely select which influence to become subject to, and simply behave differently under different conditions.
As mentioned a moment ago, there is a tendency to equate human nature with instinctual behavior. Does that mean the statistical tendency of men to seek sexual attention from women marks as unnatural or artificial (or "tacked on" or higher or lower) the behavior of men who sometimes behave toward women without regard to their sexuality? Are cooperation and sharing ontologically more or less deep than individualism and selfishness? The conservative wing of evolutionary thought has struggled mightily and with some success to explain apparent altruism in terms of a deeper selfishness in our genes, but I find their formulation of the question misguided. I see no particular distinction between the sides of these dichotomies which singles out one side as natural, no default value for what we call instinct. I can imagine that the vast list of human behaviors is somewhat circumscribed by our biology, but it is hard to imagine a net large enough to capture human nature. Explanations of human behavior based on instinct or on genes have a certain validity, but only after any claim of naturalness has been drained from them, only once they have been cast in complementary forms.
If mutual arising is the way the world works, then there are no hierarchies running from deepest to most superficial. We create the idea of deeper, natural states like changelessness precisely in order to linearize causation and communication and make them more clear cut. You can only begin arguments once the premises have been set. Ontological egalitarianism, on the other hand, obscures distinctions and leaves us with little to say. Mutual arising, once again, makes good philosophy but bad science. In order to make deductions we need an orientation. We need assumptions about temporal and logical priority. At the same time that we recognize the relativity of choices about natural states, we must appreciate that some idea of the natural must prevail, at least temporarily.
Sinking Back to Our Non-being
Over the course of history the boundary between what is considered natural and what is caused (and thus in need of explanation) has shifted considerably. Before Newton, for example, the fact that objects fell to Earth seemed to be merely natural, as simple as the distinction between up and down. Aristotle's version of gravity depicted objects as moving to their natural levels in the celestial sphere with earth the basest and lowest and stars the most refined and highest. It may be possible to cast all great scientific breakthroughs as the peeling back of the idea of the natural. It is clear, however, that we will never get to a stopping place in the process of peeling back. There is always some portion of reality which serves as the natural background which puts the foreground into relief.
At least since the dawn of the age of science, we have typically considered changelessness to be natural. Newton's calculus gave classical science the tools for studying the causes of change and change only. The Fully Automatic Model has steadily conditioned us to think in terms of the movement of the essentially inert parts of the Machine rather than concerning ourselves with the nature and permanence of those parts, except to assume that the parts are themselves made of even more natural and permanent parts.
Over the ages, however, there has been a great deal of debate about which side of the dichotomy, permanence or change, is the more real and thus the more natural background. Zeno devised his famous paradoxes to prove that the deepest reality is permanent and immutable and that true change is impossible. Plato too saw the ultimate reality as consisting of immutable Ideas with our world of impermanent forms consisting in nothing but the flickering shadows of those Ideas on the walls of a cave.
Heraclitus, among others, took the opposite stance. For him the true reality resided in the great flux and fire of change. You cannot step into the same river twice, as he said, because neither the river nor you have any permanent aspects.
In a remarkable formulation of the naturalness of change, the Kabbalist Meir ben Gabbai, as quoted by Robert Nozick, says that the enduring qualities of the world are a result of the continual writing and speaking of the Torah. "Were it to be interrupted, even for a moment, all creatures would sink back into their nonbeing." The natural nothingness, the cessation of existence, must be actively held off by the Word just as John's gospel claimed that the Word created the world to begin with. Once again the holding forces must act without letup. Later we will look closely at the notion that description of the world is the glue that holds it together.
Eastern thought in general and Hinduism in particular seem less bound by the dualism of foreground-background orientations. The Hindus have gods both for change and for persistence. Shiva (the destroyer) and Vishnu (the preserver) operate on equal terms, with neither taking priority over the other. They engage in mutual struggle, the tenuous balance of which is the phenomenal world. Brahma (the creator), the third part of the essential Hindu trinity resolves the actions of the two lesser deities. Creation embodies aspects of both persistence and change.
It is perhaps not too much of a simplification to say that this question about the natural background of reality concerns which of the pair -- things or processes -- has priority. Do things set processes in motion or are things nothing but the relatively stable patterns of pre-existing processes? Here is the Both-and-Neither problem again. Our theories seem to require a choice between these perspectives, but neither is adequate to understand reality in the deepest way.
The perspective of science has slowly been shifting away from essential priority of things and permanence toward the priority of processes (fields) and evolution. In many areas it is starting to make sense to ask how things stay the same. A succession of the most powerful and important scientific ideas in history have slowly eaten away at the naturalness of changelessness. A short list must include the notion of deep time in geology, biological evolution, entropy, radioactivity, relativity, quantum mechanics, the expanding universe, cybernetics, general systems theory, information theory and complexity theory. These developments show that it is sometimes in things' natures that they will change.
These ideas and discoveries have shown that evolution rather than equilibrium is the rule. The "sciences of stability" are taking their place among the sciences of change. In the midst of this slow shift, we see thinkers struggling with the new order. For example, Einstein, fixed on the idea of the essential permanence of the universe, could not accept the evolutionary aspects implied by his ideas. In one of the few blunders of his career (so it is said), he incorporated a "cosmological constant" into his General Relativity equations in order to maintain a static, unchanging universe.
Changelessness has retreated from large scales down to small. From the time of Democritus until the discovery of radioactivity, the atom was taken as a permanent entity without an interior, and it was thus impossible to frame the question about what held it together. Atoms were thought to be as inert and passive as a rock and indivisibly small. We now know there is something to hold together within atoms and other particles, and with the knowledge we have gained about the inner workings of atoms it has now become possible to ask for a scientific description of the processes that hold them together. For example, what makes a naked proton permanent but a naked neutron ephemeral? Still, as a holdover from the earlier thought structures, science continues to be dominated by ideas better suited to describe change than persistence. If we apply only the standard conception of causation, staying the same will always appear to be a non-event with no cause.
Old Book: Chapter 3
The Joy of GLOI
[In this section, I'm going to play at amateur physics in a way I really have no right to (having next to no training in the subject), but I can't help myself. Wise readers will question all my purported expressions of physical fact and theory.]
The ostensible naturalness of stasis to which I've been referring can be stated as a law: An object will tend to remain as it is unless acted upon. Persistence in time comes pretty close to defining what it means to be an object—something that naturally persists. I will refer to this law as the Generalized Law of Inertia (or GLOI) because of its resemblance to the Law of Inertia (LOI): An object at rest or in constant linear motion will tend to remain so unless acted upon by a force.
Newton saw LOI as natural, a brute fact for which he sought no explanation. With LOI as a starting point, he derived the mathematical definitions of force and energy as the "stuff" required to change things from this natural condition. GLOI, like LOI, has been a fundamental tenet of science not because its denial has never occurred to anyone, but because it too has seemed to be a brute fact, and there have been no reasonable sounding alternatives to it within the confines of the Fully Automatic Model. It has always seemed, in a Euclidean sense, to deserve the status of a "simple truth."
However, in science, and in mathematics as well, the notion of a simple truth has been steadily eroded during the last century or so. Many scientists no longer seek true theories (or at least so they say) but only formal constructs that are consistent with observations and can be used to make predictions. This modern view does not transcend the need for assumptions since any argument still must have a starting place, a foundation on which to build, but these assumptions become provisional or ad hoc; none has a special truth status. There is no longer the sense as there was in classical times that the world was made so as to be comprehensible, so as to conform to linguistic or mathematical forms or to logical arguments, so that people can come to understand it. The world just is and whatever comprehensibility it has now stands either as the supreme mystery of existence or as the crowning achievement of brain evolution.
Since its formulation, quantum mechanics and its implications for the nature of reality have perplexed all who have tried to understand them. So much of it seems to contradict common sense. Part of its strangeness, I feel, comes from our almost built-in expectations of a GLOI world. Quantum mechanics doesn't obey GLOI. Particles can no longer be thought of as perfectly isolated "billiard balls" but rather must be seen as stable processes affected in principle by every other particle in the universe. Within quantum mechanics we find that, for example, the decay of a particle is not caused by an outside force nor caused at all in the usual sense, but is seen as inevitable, spontaneous and probabilistic. There are no inert parts of matter whose enduring we can take for granted. In this case, it seems perfectly reasonable to ask of neutrons what we can of people: What are the conditions conducive to their enduring? Why do they persist in time? In fact particle physicists are interested in such questions, although the mathematics of the presumed stable solutions to the quantum equations is too complicated given our current state of knowledge. Well now, if we can ask these questions of electrons, then we can ask them of atoms, rocks and ovens.
If we are willing to suspend the presumption that only change ought to be explained, then we seem bound by logic to replace GLOI with some other postulate about the natural state of things, presumably with something deeper, something that yields persistence as a prediction. Perhaps, for example, the apparent fact of persistence can be subsumed by the principle of least energy. That is, patterns will remain stable if they reside at the bottom of an energy well (like a marble in a bowl) and require outside energy to come out of that well and settle in another. Better yet, maybe a more sophisticated assumption than GLOI would be to explicitly cite the 2nd law of thermodynamics, the law of entropy: Unless a system is acted upon, it will tend toward equilibrium, the state of maximal entropy.
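The marble-in-a-bowl picture can be sketched numerically (a toy illustration only; the quadratic energy landscape, the step size, and the function name are my own inventions, not anything drawn from physics proper):

```python
# Toy picture of the principle of least energy: a "marble" whose state
# drifts downhill on the invented energy landscape E(x) = (x - 3)**2
# settles into the well bottom at x = 3 and then stays there.

def settle(x, steps=200, step_size=0.1):
    """Follow the downhill gradient of E(x) = (x - 3)**2."""
    for _ in range(steps):
        grad = 2 * (x - 3)        # dE/dx at the current state
        x -= step_size * grad     # move a little way downhill
    return x

resting = settle(10.0)   # from any start, the state is drawn to x = 3
```

From any starting point the state is drawn to the bottom of the well and then stays put: persistence falls out as a prediction rather than an assumption.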
If states of least energy or maximal entropy are the source of all stability, then these principles are indeed deeper axioms than GLOI, but since they don't and can't ever transcend the need for some assumptions at the bottom of explanatory schemes, they must eventually suffer the same fate as GLOI. In an egalitarian world, every natural state must ultimately be discarded, the principle of least energy no less than GLOI. All possible patterns of matter, energy, and space are equally natural and equally subject to explanation.
If no axiomatic system is sufficient to epitomize nature, then to arrive at an understanding of nature we must admit the limitations of one-way approaches. We must find ways to combine different and even contradictory approaches. Rather than look for something deeper than GLOI, perhaps we should postulate some explicit reversal of it to generate a complementary theory that fills in the gaps left by the mismatch between GLOI and the world it describes. There are many such reversals to choose from. We have:
1. objects will naturally and spontaneously cease to exist
2. objects will naturally make a slow transition into something else (as in growth or aging)
3. objects will suddenly undergo transformation (as in radioactive decay)
and so on.
Suppose we take spontaneous cessation of existence as the natural tendency of systems in the manner of Meir Ben Gabbai. Inertia would then occur as a kind of action against that natural tendency. We'd have to imagine the world of persistent things being remade or at least held in existence at every moment. That hardly seems parsimonious, but Occam's Razor is about our ability to understand and not about nature. For one thing, since it requires energy to do anything (in the usual sense of doing), the rock that is persisting quite nicely in your backyard would seem to have an unlimited and constantly acting energy source at its disposal, though it would be energy of an odd, nonstandard sort.
Assumption Switching Revisited
Let me try to give a general form to the Assumption Switching process. Suppose we observe that system x sometimes has property p and sometimes property p' ("not p"). We can choose to say that p is a "natural" property of x and build up a theory from there that describes the causes of the occurrence of p', or we may choose to suppose that p' is natural and explain how p sometimes comes to hold.
It may seem odd or even silly to suppose, as I have here, that something which is not always the case is the natural state of things, but that's exactly what we always do. For example, when Newton framed LOI as a natural law, he said that constant motion was natural, but constant motion never really happens. One would be hard-pressed to find a projectile which keeps going without change. Everywhere things are speeding up, slowing down, colliding. Balls stop rolling, orbits slowly decay and photons are absorbed. Friction, as I will label the general heading for those things which act against the constant motion of an object, is as close to a universal as I can imagine, at least on the human scale of events, and yet from the point of view of LOI it is "epiphenomenal." Friction comes up immediately as a consequence of the existence of other things besides the projectile. Constant motion can only happen in perfect isolation. We act as if constant motion is the default value, and friction is glommed onto existence at some later stage. It must be clear that motion and friction arise mutually. Which one takes precedence in our thoughts depends only on a foreground-background orientation, a choice.
In fact, since cessation of one sort or another is inevitable for nearly all systems—decay, death, dissipation—an anti-GLOI assumption (law of decay? law of friction?) is as reasonable ultimately as GLOI as a beginning point for inquiry.
Using the facts of nature (whatever those might currently be) as a guide, we can take any given set of assumptions, especially ones about what is natural and what is caused, and generate a whole new and equally valid way of seeing the world. Assumption Switching is a very powerful and efficient way to produce insights very different from those of the original formulation of a theory. Some theories generated in this way will certainly seem absurd, but only, I believe, because our minds are habituated to the culturally (and biologically) perpetuated ways of seeing. Again, no foreground-background orientation is natural.
Won't the dual theories generated by Assumption Switching lead to contradictory conclusions about almost everything? How do you superimpose contradictions and arrive at anything like a single body of facts, the universe? There is a misconception lurking here. Only the axioms of a given theory would be reversed. The way the theory develops, the laws we conjure up, on the other hand, would still be constrained by the facts of nature. What will change after an Assumption Switch are the processes which require explanation. It would of course be silly to try to develop a theory based on an assumption that the helium atom really contains three protons rather than two or that 2 + 2 = 5. We ought to confine these switches to more abstract and general assumptions about foregrounds and backgrounds where both sides of the dualism are represented in the world (both p and p').
Alternative modes of explanation arise to bring the different theories into line with the facts and those differences will lead to different sorts of predictions, but I would not expect contradictory conclusions. In fact, I would expect that the predictions of one theory would largely overlap with those of its dual theory but that each theory will tend to encourage different sorts of questions and make certain types of derivations simpler.
An anti-GLOI theory, for example, will not look at all elegant in explaining the facts which a GLOI theory illuminates. The dual explanation is intended to illuminate the other half of the question, those areas in which the original version fails to shed light. A theory only seems parsimonious when we use it to look at the very questions it tends to lead us to ask. I imagine that all theories lead to messy explanations when investigations go in different directions from those implied by the assumptions behind the theories.
In the formative centuries of scientific thinking, there was the assumption that God created the world with a subtle perfection and that this perfection offered an absolute criterion for choosing between competing theories. The simplest and most elegant one to fit the facts is the truest one. I see no particular merit in that reasoning. I see no particular relationship between the simplicity of what is and the simplicity of our descriptions or theories. The world is rich enough to be described in infinitely many ways. Territories aren't maps.
We tend to think that the law of gravitation really exists, but laws are just maps for what really is. Yes, it's a little disconcerting to think that we can just conjure up alternative laws on a whim, but I believe we can do just that. The relationship between scientific laws and the world they describe (the MT relationship) is a very subtle one which requires new approaches to be understood. Ultimately, however, the world does not follow laws. The world simply is. Again, the real mystery that needs to be solved is to account for the apparent lawful behavior of the world.
The Main Attraction
Despite what I've said so far, ordinary causation is not diametrically opposed to the processes, whatever they may be, which hold things as they are. Dynamic open systems—hurricanes, people, ecologies—won't persist without being caused to do so in very nearly the usual sense. That is, we can imagine tracing their stability through chains of contact, through the application of coercive force. Hurricanes and people require constant upkeep in the form of information and energy—heat, food, oxygen, stop signs, heart surgery, etc.—which can be said to cause their enduring. Left to themselves in a perfect vacuum each would quickly dissipate or die (Pardon the gruesome image of a person being placed in a vacuum).
A kind of mathematics of stability is simple and well-understood, although it is not clear how to apply it in many cases. Quantitative stability can arise when repeated applications of a function to its own outputs are "attracted" to a single solution or set of solutions. The mathematical field of chaos (or "strange attraction") has overshadowed the more ubiquitous and fundamental process of simple attraction in the popular imagination. A later chapter will discuss the importance of mathematical attraction to any coherent theory about how things stay the same. Persistence can be seen as arising out of a kind of iterative feedback.
As a simple example of this process, let me suggest something to try on a calculator and with paper and pencil. Pick any number from 1 to 100 as a starting value and call this number X. We are going to make successive changes in the value of X by repetition of the following recipe. The new value of X is going to be the average of two quantities. The first quantity is just the current value of X. The other quantity is the result obtained by dividing the current value of X into 25.
Suppose 10 is the original value of X. The second value of X is the average of 10 and 25/10. The computation (10 + 2.5) / 2 yields 6.25.
The third value of X is the average of 6.25 and 25/6.25 (= 4), and we get 5.125. Continuing with this same process, the next few values for X are 5.001524, 5.0000002, 5.0000000, 5.0000000, etc.
The outputs get sucked toward or attracted to 5. Had we started with 78 as the original value of X, the sequence of values would have been 39.1602564, 19.8993294, 10.5778266, 6.4706305, 5.16712082, 5.00270260, 5.0000007, 5.0000000. In fact, any input whatsoever gets sucked toward either 5 or -5.
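The recipe takes only a few lines of code (a minimal sketch; the function name is mine, and the iteration happens to be the ancient Babylonian method for square roots, which is why the two attractors, 5 and -5, are exactly the square roots of 25):

```python
# Repeatedly replace x with the average of x and 25/x and watch the
# values get pulled toward an attractor (5 for positive starts,
# -5 for negative ones).

def attract(x, target=25.0, steps=10):
    """Iterate x -> (x + target/x) / 2 and return all visited values."""
    values = [x]
    for _ in range(steps):
        x = (x + target / x) / 2
        values.append(x)
    return values

print(attract(10.0)[:3])   # values move 10 -> 6.25 -> 5.125 -> ...
```

Trying other starting values on a calculator, as suggested above, gives the same inexorable pull toward 5 (or toward -5 from negative starts).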
To see how attraction relates to the persistence of systems, we will have to use a little imagination. Suppose that this attracting process, which can be expressed as
x_(n+1) = (x_n + 25/x_n)/2
or, in the general form,
x_(n+1) = p·x_n + (1 − p)·f(x_n)    (here with p = 1/2 and f(x) = 25/x),
is a mathematical representation of the dynamics of some system whose current state is X and whose steady-state is 5. To help picture what is going on we can use a guitar string as our system. The state of the string is the shape, tension and direction of movement of the string at any given moment, while its only steady-state is its straight line rest state. Generally, one can't specify the state of a system with one number as I'm doing here with my mathematical attractor. Theoretically, however, one could specify the state of even very complex systems with some huge array of numbers which might for example give the locations and momenta of its constituent particles. A reasonable one number summary of the state of a guitar string might be its maximum deviation from the straight line connecting its endpoints.
This means that its undisturbed steady-state is given by 0. Now, if we pluck the string and once every second record its "state" given by that maximal displacement, we will get a sequence of numbers not entirely different from the ones given above that will slowly head toward zero. The dynamics of the guitar string system uniquely determine the next measured state given the current one.
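If we are content with an invented decay rate, those once-a-second readings can be mimicked like so (the exponential form and all the numbers here are illustrative assumptions, not measurements of any real string):

```python
import math

# A hypothetical damped-string "state" sampled once per second: the
# maximum displacement decays exponentially, so the readings form a
# sequence attracted to 0, the string's straight-line steady-state.
# The decay rate 0.5 is an invented illustrative number.

def string_states(initial=1.0, decay=0.5, seconds=10):
    """Return the maximal displacement recorded once per second."""
    return [initial * math.exp(-decay * t) for t in range(seconds)]

readings = string_states()
# each reading is smaller than the last, heading toward zero
```

Each reading is smaller than the last, and the sequence heads toward the steady-state value of 0, just as the calculator sequence headed toward 5.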
Likewise, if we sneeze, our normal mode of being or "state" of mind may be disturbed for a few moments until some internal process "attracts" us back to that normal state. We might call it recovery time, but it is equally attraction time. Or if a community is shocked by a tragic event, there will be a time of alternative experience that we might call mourning that will replace the normal day-to-day "feel" of living in that community but that will ultimately be "damped" out. The new normal state may be somewhat different from the old (as in the case of epochal events like, for example, the assassination of JFK), but most of the specific mourning phenomena will be gone, and normality will continue to be an attractor; slow change will take place in the context of stability. Attraction is the mathematical reason there is such a thing as a normal state.
If there is some kind of iterative procedure or feedback responsible for the return to normal states, it is generally very difficult to specify it. Tension, friction and the unchanging physical properties of the materials involved produce the continuous feedback system of the plucked string. For the case of the sneeze or the tragedy, however, one can barely imagine the analogs for tension and friction. The physics of psychological activity in particular are hard to picture.
This sort of thinking does give a starting place for discussing the nature of stasis, but, as laid out here, we are explaining the stasis of a system in terms of a deeper stability, that of the iterative procedure itself, like the formula above, that creates the stability. That is, we have not accounted for the ongoing nature of the system itself but only for a particular state of that system. This can be a fairly subtle distinction. What system, for example, is my current self a steady state of? My real and permanent self? I have already rejected the idea that there is a deepest, most natural me.
There is also the problem of understanding what it means for the state of a system to be fed back into the stabilizing procedure. It's not as if the guitar string is checking at regular intervals to see how far from equilibrium it is. If there is feedback here, it is instantaneous and continuous rather than discrete and punctuated like the example equation.
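One way to picture instantaneous feedback is as the limit of ever-more-frequent discrete corrections. In this sketch (the relaxation rate, starting value, and function name are arbitrary choices of mine), the punctuated update approaches the continuous solution as the checking interval shrinks:

```python
import math

# The discrete map x -> x + dt * (-k*x) is a punctuated version of the
# continuous feedback dx/dt = -k*x. As the checking interval dt shrinks,
# the discrete updates converge to the continuous solution x0*exp(-k*t):
# "instantaneous" feedback as a limit of discrete corrections.

def discrete_relax(x0, k, t_end, dt):
    """Apply the feedback correction every dt seconds until t_end."""
    x = x0
    for _ in range(round(t_end / dt)):
        x += dt * (-k * x)
    return x

exact = 1.0 * math.exp(-1.0 * 2.0)          # continuous limit at t = 2
coarse = discrete_relax(1.0, 1.0, 2.0, 0.5)  # check only every 0.5 s
fine = discrete_relax(1.0, 1.0, 2.0, 0.001)  # check every millisecond
# fine lies much closer to the continuous value than coarse does
```

The guitar string, on this picture, is the dt-goes-to-zero case: the "checking" never pauses at all.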
Self-Facilitation
The odd twist on standard causation and the causes of stability for dynamic open systems like people and hurricanes is that each such system seems to take an active part in its persistence. They cause themselves to endure. There is no "outside" force involved. Humans, for example, seek the food, shelter, etc. necessary to survive. In fact, this seeking is among the most fundamental activities of people (which we generally chalk up to the pressure of natural selection or the survival instinct). In a lesser but real way, hurricanes also tend to maintain themselves—by wandering toward conditions conducive to their development. Systems of rain clouds consist of moisture but somehow seem able to replenish themselves as they move across the seascape.
Lancelot Law Whyte, an undeservedly forgotten polymath, put it succinctly: "Systems facilitate those processes which facilitate them." This deep and crystalline statement goes a long way toward expressing the two-way character of causation, the mutually supporting bootstrap between things and processes. It sounds like natural selection at the level of being rather than biology. People facilitate feeding and sex, and feeding and sex facilitate them. Bureaucracies create environments that make bureaucracies essential. There is feedback between a system and its environment, between a system and its products. Systems that fail to facilitate the processes that facilitate them will not persist for long. The survival instinct goes far deeper than our genes. Only the very special processes which possess this self-facilitating quality even get to play the evolution game.
Rocks, of course, take no part in maintaining their own existence, at least not in the standard sense. If rocks participate in their own continuation, it is in some much more subtle or invisible way. They are assemblages with little coherence, consisting in molecules and the geometry implied by the relationships between those molecules. Half a rock is still a rock, but half a hurricane isn't a hurricane. A rock's persistence in time is the persistence of these particles and relationships. Thus to consider the persistence of rocks largely boils down to questions about the persistence of their parts. These parts must be engaged in self-facilitation as well. But in what does self-facilitation consist? The mind boggles. We have a logical bootstrap problem. You need to have a system to begin the facilitation, but you need the facilitation to produce the system. Once again we seem to be faced with a case of mutual arising that is offensive to logical linearity.
Do Rocks Naturally Persist?
A system is said to be closed if it is completely isolated from outside influence and open otherwise. The law of entropy says that for closed systems, the total amount of disorder can only increase or hold steady but never decrease. Another way to say this is that the only stable or steady state of a closed system is equilibrium, with heat energy spread roughly evenly across the system. The ecological system of the Earth is extremely open because much of the energy that sustains it comes from outside, from the Sun. Thus the extraordinary fact that a bunch of chemicals can be drawn from highly disordered states to states as highly organized as clouds and pine trees and human beings is not a contradiction of the law of entropy. The ecology of the Earth maintains a steady-state, which is to say that various measurements, such as the amount of oxygen in the atmosphere, remain roughly steady even though oxygen is constantly being captured and released, used up and created. The complex structures on the Earth are such that the amounts being used and created will always be brought back into balance unless the levels are knocked too far out of whack (by cutting down the rain forests, perhaps). Things operate in much the same way a thermostat-heater system assures a constant level of heat. Some kind of mathematical attraction is holding sway. The idea that the Earth will find ways to maintain these levels through creative adaptation, as if It knows what's going on, is known as the Gaia Hypothesis. In the 1980s this was a hot and hip idea, but it has grown more and more politically tenuous with the rise of concern about climate change. If Gaia is assuring our steady-state, what have we to worry about from rising CO2 levels? Still, we can't throw the baby out with the bathwater; there may be something to Gaia.
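The thermostat-heater comparison can be made concrete with a toy on/off controller (all the rates and the setpoint are invented numbers; the point is only the shape of the feedback):

```python
# A toy thermostat-heater loop: the heater switches on below the
# setpoint and off above it, so after a warm-up the temperature is
# held in a narrow band around the setpoint -- a steady-state
# maintained by feedback rather than by any outside decree.

def run_thermostat(temp, setpoint=20.0, steps=200):
    """Simulate simple on/off control with constant heat loss."""
    history = []
    for _ in range(steps):
        heater_on = temp < setpoint
        temp += 1.0 if heater_on else 0.0   # heat added when on
        temp -= 0.5                          # constant loss outward
        history.append(temp)
    return history

trace = run_thermostat(5.0)
# late readings hover just below the 20-degree setpoint
```

Knock the temperature in either direction and the same loop pulls it back; that restoring pull is the "mathematical attraction holding sway" in the oxygen example.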
If there is an ecology on Pluto, on the other hand, it will be almost entirely closed since little energy of any kind comes in from the outside there. Much is often made of the distinction between open and closed systems and between equilibrium and steady-state, but we will not focus too much attention on these distinctions. There are no truly closed systems (except perhaps the universe as a whole) and equilibrium can be seen as nothing but a steady state that happens to be in agreement with the surroundings. That is, if the environment changes the equilibrium disappears. Open and closed systems and their stable forms are defined by their environments.
People, hurricanes, and rocks all rely on conditions conducive to their enduring. All maintain themselves through some range of conditions. Open systems in dynamic steady-state seem to be more sensitive to and dependent on conditions, but that's an illusion generated from our presumption of the naturalness of certain conditions—those approximating the human milieu. At extraordinarily high temperatures or at high velocity impact, for example, the rock will be very sensitive to conditions. It will melt or break into pieces. The commonplace nature of a rock's appropriate conditions for enduring and the relative rarity in our human environment of adverse conditions for rocks fool us into thinking there is a difference in kind here.
For rocks, the conducive conditions seem more like non-events, but once again this has to do with a Foreground-Background orientation. Is a conducive low temperature, for example, any less of an event than the very high temperature which will end the rock's existence (as a particular rock)? Only in the sense that heat is more real or more of an event than cold, an idea I will look at shortly. It comes down again to what one considers to be natural, what constitutes the background against which we see things "happening." Stable ovens are among the conditions conducive to the family's survival.
The rule "X endures under X-conducive conditions" seems to hold equally well for all things.
X-conducive conditions cause X to endure while those unconducive cause it to change. Is Passive Causation nothing but creating conducive conditions for persistence? The word "conducive" evokes the sense that the event takes part in its occurrence rather than being coerced to happen. The thing just "behaves" as it does, not by force or by decree or natural law. It gives causation the feeling of human communication and influence, with systems "deciding" to comply with wishes.
"X-conducive conditions cause X to endure" sounds rather bland, bordering on tautological, but it may be the price we have to pay to salvage a concept of causation inclusive enough to contain both our usual meaning and the passive, holding version I'm seeking. When we try, as mentioned before, to combine the two complementary Foreground-Background orientations in a single system, blandness may be an automatic consequence. We end up with a more complete but rather toothless model, too general to create definite statements. Good philosophy, bad science.
Euclidean and Non-Euclidean
In discussing Assumption Switching, I may have given the impression that scientific theories are generally given axiomatic treatments. This is almost never the case. Neither are mathematical theories for that matter. The important thing is that mathematicians and scientists act as if such a treatment were possible, in the faith that it is possible, as if there is an internal consistency to nature that can be reduced to a few simple principles. The less fundamental can be derived from the most fundamental, and the most fundamental must be accepted without proof. The only proof is consistency and induction.
One of the few systems of thought that has been thoroughly axiomatized is Euclid's geometry. It is the inspiration for a general belief that axiomatization of theoretical systems is possible, proper and desirable. Many of us first learned of the beauty and power of mathematics by building up ever more sophisticated geometric theorems beginning from the most trivial and "self-evident" facts. In a deep sense, the beauty of this treatment rested on the simplicity of the postulates. Since the time of Euclid, one of the five postulates has always seemed less simple, elegant and self-evident than the others and has compromised the beauty of geometry in the eyes of many mathematicians. Countless hours of work by hundreds of amateur and professional mathematicians were devoted over the centuries to proving this parallel postulate from the other four, but no one managed it. Finally in the first half of the 19th century it occurred to several mathematicians independently that this approach was doomed to failure. The parallel postulate must be independent of the others. This raises the possibility that the parallel postulate seems less obvious because it is untrue. At least two alternatives that contradicted the postulate were put forth as possible postulates. These were used to generate whole new bodies of theorems. As far as I know this is the only historical precedent we have for Assumption Switching, but it shows beautifully the benefits of the approach.
The theorems of these two non-Euclidean geometries and the old Euclidean one overlap, and the ones that differ do so in a subtle enough way that we cannot yet tell for sure which is the true geometry of Nature. The current wisdom says that one of these alternatives rather than the original Euclidean version is correct. I would submit that any such final absolute solution is meaningless. Each alternative will give insights under appropriate conditions. What I am seeking at the moment is a non-inertial or anti-GLOI physics analogous to non-Euclidean geometry.
Coldness
It would be nice to see an example of an anti-GLOI theory to test its feasibility. Of course that is a rather tall order. Such a theory is not about to spring fully developed from my brow. With that disclaimer, however, here's a glimpse at an anti-GLOI theory.
Until the scientific era gave us a kinetic theory of heat, the warmth of a fire and the chill of an Arctic blast of air would both undoubtedly have been treated as having equal ontological standing. They were opposing forces of equal but opposite power, and they were both equally real and both presented equal threats to the well-being of temperate human beings. I imagine that even today most people still routinely picture hot and cold as ontological equals despite the fact that science no longer supports such a view. Heat, we are told, is nothing but the motion of particles, and cold is simply the lack of heat. If we take away this overlaid kinetic energy from a system, its default temperature value will be absolute zero. In other words, we are now told that heat is a something and cold is a nothing, just the natural background temperature.
In the manner described above, the natural state of matter and space is the motionlessness of absolute zero, while deviations from that motionlessness (i.e. heat) are (caused by) infusions of energy. Absolute zero is the ultimate in stasis. It, by definition, means no movement, no change. Here there are no steady states, no vibrations, not even chaos—only equilibrium, stasis, perfect persistence! But persistence of what? Experimental findings seem to confirm a hypothesis of Bose and Einstein that at temperatures very near absolute zero, the whole idea of a particle-as-we-know-it is completely transformed, and so therefore is the very notion of temperature, since temperature is defined in terms of the kinetic energy of particles. All that remains at these temperatures is a condensate (a Bose-Einstein condensate, in modern terms) with its own new set of properties. That is, absolute zero cannot be reached because, aside from technical difficulties, before we get there temperature as a quality of nature ceases to exist. Whatever causes particles to persist is absent by the time we get to zero.
The sequential nature of our reasoning makes it appear as if heat is a layer of reality that comes temporally after the background coldness layer is laid down. In fact, however, heat and cold only make sense in respect to each other. They must arise mutually and simultaneously.
What if we try to flip the relationship between heat and cold? One major stumbling block is the asymmetry of the temperature scale: there are no generally recognized limits on how hot a substance can get, while there is an absolute zero at the other end. A zero on a scale can be seen as a reason to choose that side as the default value, the side that implies nothing is happening. For example, the fact that there is a zero on the jump scale and no obvious maximum may contribute to our feeling that a non-jump is more natural than a jump. In the next chapter, we will discuss the peculiar status of zero in some depth, but for now we can say that it is not always the hard stop that it appears to be. An instructive image is the single perspective point in a drawing of, for example, railroad tracks heading off into the distance. That single point is really a compressed infinity.
We assume that the linear scale we have chosen represents something real, but nature does not come with natural scales, and we are free to choose another one that stretches that last millionth of a degree out to any length whatsoever. We accept the naturalness of our linear temperature scale only because it works out well in several physical laws, like the ideal gas law. A compelling reason to accept that naturalness is that when you mix a cup of 40° water and a cup of 60° water, you get two cups of more or less 50° water. A point that has come up a number of times, however, is that the world has no tendency to choose one thing as natural over another. There is a simple mathematical demonstration that can help illustrate this essential point about scales. The diagram below shows that there is a simple mapping of any interval, like that between 0 degrees and 1 millionth of a degree, onto an infinitely long scale.
Bend the open interval into a half circle, place it above the real line, and mark the center of that implicit circle with the point P. Notice now that every ray from P that passes through the interval also passes through the line, and for every point on either the interval or the line such a ray exists. This simple association of points on one scale with points on the other turns one "linear" scale into another that has just as much right to be called linear but extends forever.
In my example, .1 degrees Kelvin might be mapped onto -10 modified Kelvin, and .01 degrees to -100.
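For the curious, both rescalings are easy to make concrete. The following sketch is my own illustration (the function names and the tangent form are my reconstruction of the half-circle projection; the reciprocal form matches the sample values given above):

```python
import math

def project(x):
    """Half-circle projection of the open interval (0, 1) onto the
    whole real line: bend the interval into a semicircle, then project
    each point from the circle's center P onto the line below.
    Worked out algebraically, this is a tangent function."""
    assert 0 < x < 1
    return math.tan(math.pi * (x - 0.5))

def modified_kelvin(t):
    """The reciprocal rescaling implied by the example in the text:
    0.1 K maps to -10 modified Kelvin, 0.01 K maps to -100, and the
    approach to absolute zero is stretched out toward negative
    infinity."""
    assert t > 0
    return -1.0 / t
```

The midpoint of the interval maps to 0 and the endpoints run off to infinity in either direction, which is the point of the demonstration: each scale is "linear" in its own terms, and nature does not tell us which to prefer.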
We can try to imagine that the natural state of space, far from being a simple void, is filled with some maximum quantity of (heat) energy, and that something, an as yet unrecognized process, causes deviations from that natural state. The something must have plenty of neg-energy at its disposal, because it brings this immense background energy way down the scale to about 3° Kelvin (the so-called background radiation left over from the Big Bang) in much of space, and to a paltry few million degrees even in the centers of stars. David Bohm, in a very different context, has said that empty space is full of such unexpressed energy.
"If one computes the total amount of energy...in one cubic centimetre of space,...it turns out to be very far beyond the energy of all the matter in the known universe.
"What is implied by this proposal is that what we call empty space contains an immense background of energy, and that matter as we know it is a small...excitation on top of that background, rather like a tiny ripple on a vast sea. In current physical theories, one avoids the explicit consideration of this background by calculating only the difference between the energy of empty space and that of space with matter in it. This difference is all that counts in the determination of the general properties of matter as they are presently accessible to observation. However, further developments in physics may make it possible to probe the above-mentioned background in a more direct way. Moreover, even at present, this vast sea of energy may play a key part in the understanding of the cosmos as a whole. In this connection it may be said that space, which has so much energy, is full rather than empty." (Wholeness and the Implicate Order, p. 191)
The existence of the vast quantities of neg-energy necessary to mask this equally vast reservoir of energy would go a long way toward explaining where rocks get the strength to go on. The fundamental characteristic of neg-energy, as I envision it, isn't doing, as it is for ordinary energy, but reining in, damping, not doing. Instead of asking what has caused a particle to move, as we would in the standard model of heat, we must now ask what is keeping it from going at its natural infinite speed. We want to study the nature of the anti-jump muscles, or friction. We can imagine, for example, that space has a kind of viscosity, an inherent and real chill, and that motion is the lack of this neg-energetic chill. Or we can imagine that a tiny piece of space is filled with innumerable little neg-energy vectors pointing independently and chaotically so that they cancel each other out. The trick will be to align them, as we'll illustrate shortly.
Brain Power
The passive versions of force and energy don't seem to make much sense in our usual physics, but they may be reflected in our everyday language. We speak of the "force of will," "mental energy" or "brain power" as if physical force and energy were involved, but in actuality these phenomena are about stability rather than expressed movement. They have more to do with discipline, control, concentration, resisting temptation and resisting influence. That is, they have to do with holding things together, maintaining a continuous, focused attention. They don't involve movement but rather keeping still. It is a truism that listening requires more mental energy than speaking or expressing oneself; on the other hand, I doubt that more calories are burned when listening than when speaking.
The standard model sees mental exertion as essentially computational, dealing with abstract information, and although computation necessarily requires some theoretically minimal expenditure of energy, it is not essentially energetic at all. It takes as much energy, for example, for a PC to continually refresh the screen as to compute the digits of pi. Likewise, mental concentration would require no more energy than spacing out or passively watching TV. I think neg-energy, the energy of holding, probably bears some relationship to information—in the sense of "forming within." Perhaps the term "influence" expresses the notion of something that is essentially information but has some power to induce change.
The concept of energy in physics comes out of GLOI. Energy is that which can cause change. In the world of passive, holding causation, this neg-energy or inertial energy would be that which causes stasis against a natural condition of change. And somehow, again, rocks and other stable, relatively closed systems seem to have plenty of it and seem to get it for free out of nowhere.
We have to allow ourselves to fully enter this strange neg-energy world where cold and heat are switched. We can take nothing for granted. Particles may be something else entirely around here. Perhaps the concepts of space and particle exchange places, background becoming foreground as in the face/vase picture. Particles may be like bubbles in the pudding rather than pebbles in the void. One similarity among all such speculations that reverse the places of cold and heat is that "empty" space takes on a more central role.
The unintuitive notion from relativity that the speed of light is an absolute limit could, in this model, result from the nature of that pudding. The speed of light would be a property of space rather than of photons, analogous to the terminal velocity of an object falling through the atmosphere. As with the classic finger-trap puzzle, the harder you pull, the tighter the space squeezes, and it squeezes infinitely hard at the speed of light.
The heat/cold reversal brings to mind a similar switch between light and darkness. Imagine once again that, underneath it all, space is pure energy, but that its points emit fantastic quantities of damping, inertial neg-energy. Space would thus be a veiled or masked fullness, a self-canceling process. In the next chapter, I discuss this image of emptiness as perfect cancellation. According to this view, what we see as sources of light, the Sun for example, have the capacity to affect these points of space so that the veils are partially lifted, perhaps by orienting or aligning those points in a particular way that inhibits the inertial neg-energy. Imagine that the points of space are discrete and fixed (despite the fact that General Relativity explicitly forbids such a formulation). The alignment process could work point by point, the way a magnet magnetizes a pin, which magnetizes another, and so on. [Insert photo here].
The points are like little magnetized pins or compass needles which get oriented toward the light source. The apparent outward movement of the photon is only virtual; the only real movement is the orienting inward. The apparent motion outward is the crest of an expanding wave. To help picture how inward movement can appear as outward movement, imagine that we position the nozzle of a vacuum cleaner over a dusty floor and turn it on. For a moment we get an expanding circle of clean floor, but the only motion is that of the dust toward the nozzle. The Sun, in this model, is a tremendous neg-energy inhibitor, a darkness vacuum. What the standard approach describes as more intense light is here a more effective subduing of the shades to reveal the background energy. What is described as a photon in the standard model is the ripple of the "suction" wave as it propagates through space. The speed of light is the speed of darkness suction. The points of space, being fixed, cannot move toward the source but are merely reoriented or aligned. Quantum vacuum fluctuations are like leaks in the points of space, the failure of these points to rein in their immense spontaneous creativity. The relative nothingness of space in the standard model is replaced with a plenum whose infinite potential is ready to burst out all over.
Is the Universe a Cellular Automaton?
Again, in this model there is no motion of photons away from the light source. On the contrary, all we have are the energy shades orienting toward the source. Despite the fact that General Relativity precludes such a possibility, we can think of each point of space as fixed, a tiny cell surrounded by other cells, able to take on a number of states -- naked, veiled, etc. The darkness attractor snatches darkness away from a surrounding cell of space, each cell in turn snatches its neighbor's darkness, and so the darkness wave spreads. Here is a wave that, because it advances cell by cell, can appear to be a particle. This cellular approach gives the fundamental reality to space rather than matter, with matter occurring as a specific state of space.
As these ideas begin to sound more and more absurd, I want to repeat in my defense how difficult it is to re-imagine a whole scientific theory in one fell swoop. The chance that any specific detail of this image will ultimately prove usable is small, but there still may be some insight or inspiration to be gained. It is clear to me that, with sufficient knowledge, creativity and contrivance, it must be possible to produce alternative pictures like these that are consistent with the known facts. Again, I have no desire to replace the particle approach, which renders space as the natural background, with a theory that has space as the foreground. All of this is proposed as an alternative description. Reality is too subtle to be captured by any one approach.
The computer scientist and physicist Edward Fredkin has developed what he calls digital physics, which similarly treats space and its contents as a vast collection of discrete, point-like "cellular automata." These are envisioned as the real-world counterparts of a genre of computer programs, the most famous of which is Conway's Game of Life. In that program, each square of a rectilinear grid, which we see represented on the computer screen, is thought of as a cell which is either dead or alive, turned off or turned on (much as the neg-energy model has them naked, veiled, etc.), as determined by applying a rule to the previous "generation" of cells. Living cells in the real world might reproduce if there is sufficient space and food available, yet perish if they are either too crowded or too isolated from other cells. The rules of the Game of Life reflect this idea in the following way.
1) Each cell has eight neighbor cells. If a "dead" cell in one generation has exactly three live neighbors, then in the next generation that cell will be alive. We can imagine that those surrounding cells have given birth to the new cell.
2) If a live cell from one generation has either two or three live neighbors, that cell will stay alive in the next generation. Otherwise it will die from loneliness or overcrowding.
(diagram)
Depending only on the configuration of live cells at the beginning of the run, the program creates generation after generation of configurations by applying these rules to each cell in each generation. The mathematical property of attraction is very important here. The initial arrangement of live cells may quickly be attracted to a stable dead state, or it may flicker in rather random-looking patterns, but, amazingly, it may also exhibit periodicities and stabilities and generate beautiful images on the screen. One phenomenon especially significant for us, known as a glider, looks like a stable object that moves across the screen square by square, but a particular cell X of the stable pattern does not continue in any normal sense. Rather, some group of cells (which may or may not include X) from the old generation creates the new cell corresponding to X. (diagram of simple glider). The iterative feedback and mathematical attraction at play here will be discussed in the next chapter.
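The two rules above, and the glider they give rise to, can be demonstrated in a few lines of Python. This is my own minimal sketch (not Fredkin's code or any standard library); after four generations the same five-cell pattern reappears shifted one cell diagonally, even though no individual cell has moved anywhere:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life.  `live` is a set of
    (x, y) coordinates of live cells on an unbounded grid."""
    # Count, for every cell, how many of its eight neighbors are alive.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Rule 1: a dead cell with exactly three live neighbors is born.
    # Rule 2: a live cell with two or three live neighbors survives.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider pattern:
#   .X.
#   ..X
#   XXX
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# The pattern has "moved" one cell down and to the right, but nothing
# persisted; each generation was created afresh by the rules.
assert g == {(x + 1, y + 1) for (x, y) in glider}
```

Note that the glider's apparent motion lives entirely in the rules: no cell of the grid ever changes its position, which is exactly the point made in the text.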
Fredkin's model of physical reality, like the Game of Life and my alternative picture of light propagation, sees all change, even the simple movement of a particle through space, as the result of local communication between adjacent points. In his model, the apparent movement of a persisting particle (or even, presumably, of some arrangement as complicated as a human being or a solar system) through space is really just the propagation of a computational stability, a glider, consecutively inhabiting points or cells in space. Physical reality is just the information pattern of a glider rather than any absolutely tangible stuff of matter, and the only thing with an immutable existence is the playing field for the rules of propagation: the space of cells and the underlying laws of recombination, available uniformly to each of the cells, which together take the place of the computer program and the computer itself.
David Bohm, on a completely different basis than Fredkin, also models the unchanging phenomena of physical reality as a kind of mirage. He sees a persisting object as a standing wave or stability in the "holomovement" (his term for the deepest reality), so that, once again, objects are really just chimeras which appear to recreate themselves at each moment. Here is an extraordinary, if somewhat disturbing and unsatisfying, cause for persistence: we are more information than substance. We are caused to go on by a kind of algorithm that recreates us moment by moment. The object's very existence is recreated in each generation, is held together by the rules, and at the same time consists, one might reasonably say, in information alone. There is no stuff there. You could even say that there is no persistence at all, but only gliding. What was in the last generation is no more, except perhaps the background cells themselves and their program. This formulation, therefore, more or less bypasses the question we've been looking at: Is change or changelessness more natural?
To keep things relatively simple up to now I have acted as if the superposition of one side of the dichotomy and the other would circumscribe the descriptive possibilities, but now we can see that there can be way more than two sides to the issue. Just as there are two denials of the Parallel Postulate that lead to fruitful alternatives to Euclid, there is more than one denial of GLOI.
·Things spontaneously cease to exist unless they are continuously maintained by holding forces.
·At the deepest level, continuing is essentially random and probabilistic. Each kind of phenomenon has its characteristic half-life.
·Neither persistence nor change is more natural. The evolution of the world is the playing out of the struggle between these two equal and opposite forces.
·There are no things that continue. The phenomena of the world are remade with every tick of the cosmic clock.
·Everything is trying to happen at once, but most of it fails due to massive cancellation of influences.
All of these and the myriad others that I have yet to imagine must be given their weight in coming to the most complete picture of the issue (Don't forget about the Melonquescence!).
The 7 Ages of Man
Up to now, our discussion has generally turned toward physics in the search for caused stasis, but some of the most obvious justifications for such a notion come from the life sciences. There are some glaring failures of the standard scientific models to provide insight into seemingly basic questions of biology. Many writers have observed, for instance, that every kind of organized living system, from a person to a species to a civilization, seems to have a natural life span, running from birth through adolescence, maturity and senescence to death. GLOI implies that we should stay roughly as we are unless something makes us change, so we must ask why people, animals and cultures grow up, grow old and die in somewhat predictable ways. So far, science has not been able to do much with this question. Why is it, for example, that the number of heartbeats allotted to the typical life span is roughly the same across mammalian species, from a chipmunk to an elephant? The law of entropy is often invoked to explain why things decay, but it is hard to see how that applies to people and other far-from-equilibrium systems.
Most people may vaguely picture aging as a simple process of parts wearing out, as if we were made of gears and pistons. This metaphor doesn't hold up under scrutiny, since our parts are constantly being replaced. For some reason our cells slowly lose their ability to make copies of themselves, but it is not because of wear and tear. And remarkably, whatever the age of the parents, a newborn gets a clean slate as far as its cells' ability to make copies is concerned. There was some enthusiasm a few years back that biochemists had found the mechanism of this limitation on copies (research!). Is this built-in limit on copies a mere necessity of chemistry, or has it been selected for? One feasible GLOI theory of aging says that we die to make room for the young: in order to keep a species adapted to its changing environment there must be new, potentially mutated generations, and since the old generations compete with the new for natural resources, the old generation must get out of the way. If the level of selection is the gene, as many argue, rather than the individual, it may be possible to show that it is in our genes' best interest to terminate the lives of their carriers (us) after some period of time, particularly if we assume declining fertility throughout post-pubescent life.
There is another argument for the cause of aging which we might call the associationist approach. Somehow the genes that produce aging and death also produce other characteristics that are of very great selective advantage or necessity.
Like most arguments based on natural selection, these have a certain appeal, but until researchers find a universal aging gene in all higher life forms (one which, unlike all other genes, never mutates) we get no insight into the nature of aging, the way our cells progressively lose their ability to replicate themselves. Such arguments also fail to account for the strong analogy to non-genetic systems—species, societies, etc.—that also exhibit life cycles.
An anti-GLOI approach, on the other hand, is more consistent with the idea that aging is natural; it is stasis that must be caused to happen. If aging is a given, an inherent and inevitable consequence of existence, then our focus shifts to how we maintain our identities in the face of the natural tendency to dissipate. The exact course of aging may provide clues to the nature and limitations of the holding forces involved. Since holding together seems to be related to the idea of control, one guess would be that as we age there is somehow more of us over which to maintain control, as if having a past literally weighs us down, until ultimately the quality of control is reduced.
Ever since I learned as a teenager that one leading model of the expanding universe holds that the cosmos is the 3-dimensional surface of an expanding 4-dimensional sphere, I have amused myself by imagining that each of us is a 4-dimensional creature in our own personal space-time. This is closely related to the Hollow Earth image I brought up in the introduction. In this cosmological model, the past states of the universe lie in the interior of the 4D sphere; thus, in my extrapolation, our pasts are, in a sense, within us. To this way of thinking, the idea of a real past which is in touch with and affects the present is not so far-fetched. It could even be that the reason researchers have failed to find the seat of memory in the brain is that memory actually consists in real, if imperfect, connections to the past. The argument goes something like:
fact of aging <=> loss of control over copying procedure <=> more to control <=> reality of the past <=> new theory of memory
Pretty wild, huh?
It is important once again to keep in mind that I do not wish to replace a GLOI world with an anti-GLOI one. My position is that we will gain a more complete understanding of the world by superimposing switched theories; no one side has the market cornered on truth. We can simply add another formulation to the list of denials of GLOI:
·Continuing is a product of control and control is a function of history.
Neither do I want to give the impression that GLOI is our only assumption whose reversal will yield interesting theories. Here is another assumption:
Events in the past cause the events in the present.
To put it figuratively, the past pushes us into the future. The reversal of this one, that the future pulls us out of the past, is even harder to swallow than anti-GLOI, but there are plenty of circumstances for which it is the most obviously practical choice. In General Systems Theory, the concept called equifinality refers to the way certain systems, such as developing fetuses, seem to be attracted to predetermined end-states no matter what their specific histories. In theory, we get just about the same baby whether the mother preferred to eat pickles with her ice cream or not. As we have seen, attraction in general tends to erase histories and emphasize end-states. Since our physics is remarkably time-symmetric, its laws will allow any of our traditional causal agents to be transformed into one operating out of the future, but there are clearly hurdles. This approach will probably be poor at describing entropy—the reason that a smashed plate will never unsmash—but it could be much more effective than the conventional view in providing simple explanations for physical healing and other phenomena of nature.
________________________________________________
Here are a buncha ideas that may eventually work themselves into this chapter
life consists in balance of "change"-"same" identity and influence. What happens when you superimpose GLOI and anti-GLOI perspectives.
Selfhood is a singularity (like the intensity of gravity at the center of an object). The infinite spring of stasis force is this singularity. It would be good to look at the equation F = Gm1m2/r^2 to define the singularity.
Forces from the inside looking out. Forces that emerge from identity, systemness, suchness. Forces are not things but explanatory principles. Hard not to reify them.
Consciousness is being from the inside looking out. Energy is not real in this model; its opposite is. Does conservation of mass and energy translate into a conservation principle for neg-energy? Is mass neg-energy? Persistence of position, inertia. Einstein's general theory (a thought experiment showed that an elevator being accelerated through space is indistinguishable from one resting in a gravitational field) showed that gravity (standard force) and inertia (neg-force) are two aspects of a single process.
When one coerces, or applies force to, the self rather than others, we call it discipline or, more neutrally, control. When we think of the forces of not happening, perhaps the image of control will be instructive.
Is there mathematics which shows a neutron to be a least-energy state ((of what?)) ((what is so fundamental about least energy? Is there an explanation for the principle of least action? I know from QED that the part of the principle of least action that says a beam of light will follow a shortest path has a quantum explanation -- the equations miraculously give shortest paths as most probable)), mathematics that thinks of neutrons or stable atoms or molecules as having an inside? Is feedback, attraction involved? On a larger scale, what are the causes of the stability of a solid or a crystal or a glass?
Supposing an electron is a stability, it must be a stability of some process, and thus the underlying process must, from the point of view of an anti-GLOI theory, be more fundamental than the particle.
The aether (or at least the idea of space as a plenum) is not dead. Lorentz conceived of fields as "configurations" of the aether.
Self mass, or electromagnetic mass -- the clamping of space on an object. As a charged particle moves, some of its kinetic energy must transfer from the motion of the particle to the surrounding electric field. Well, isn't that what inertia is, in some yet-to-be-determined sense?
Massless particles are traveling at the speed of light. As they gain relativistic mass the tendency toward infinity comes into balance
Need to work on the action-of-the-self-on-the-self angle: self mass, self action, renormalization.
Summary: If there is a cause for stasis, it is constantly acting; it involves a reworking of the idea of doing and therefore of energy. It seems to place space, what we normally see as a kind of nothingness, in a position of increased importance vis-a-vis objects. It may be the act of a thing upon itself.
The standard version becomes less effective and the anti-GLOI version more effective as we look at softer and softer sciences -- psychology and anthropology.
{Superconductivity: cold frees movement; neg-energy performs an action of a kind.}
naturalness of stasis built into null hypothesis of statistical tests
More about maintenance as perpetual "work," and how you cannot isolate a single event. If I resist the temptation to eat excessively on 100 consecutive days and then give in to temptation on the 101st, I gain weight anyway. You can't store maintenance. Many other phenomena are like this, though something in me suggests a one-time character to them. One can create or destroy in a moment, but keeping is forever.
Earlier we got a look at this antiflow when we saw how light sources could be conceived of as darkness attractors. By assuming the alternative stance that spontaneous change is natural while stasis must be caused, and by exchanging the relative priority of matter and space, we saw how emission was like attraction. Just as the emission of light can be thought of as an attraction of a sort,
Old Book: Chapter 4
Much Ado About Nothing
In order to justify my idea of assumption switching, I've tried to make the case that all explanations involve a foreground and a background; every assertion needs a context. That is, there is no logical argument without a set of postulates in the Euclidean sense. In contrast, the world itself is presumably indifferent to foregrounds and backgrounds; the world simply is. Maps have an orientation; the territory doesn't. This fact is the heart of the MT problem. The choice of what constitutes the background as opposed to the foreground ultimately determines the limitations of the explanation so constructed. Here I want to take the idea of the background as far as it can go to see if that sheds light on the issues. It's a fun ride for me.
Well, what is the ultimate background when one is trying to explain everything? What is left when everything is stripped away? Nothing! Nothingness is the ultimate background, the final context. Boy howdy, however, nothingness is hard to talk about and is rife with paradoxes and natural puns.
In fact, I assert that the concept of nothingness can't quite make sense. In the history of philosophy there is a very deep question (it can actually be interpreted at many levels of depth): "Why is there something rather than nothing?" My reply to the question: "What the hell is this nothing to which you refer?" What is utter absence? How does one go about stripping away all the somethings until there is nothing left to strip away? We feel that there has to be a nothing that's the opposite of something, but to me it's utterly unimaginable.
We can imagine various quasi-nothings, substitutes-for-nothing rather than the (un)real (un)thing itself, and each of them turns out to be a kind of thing, a something. All our apparent examples of nothingness, such as a vacuum or empty space, are just canceled-out somethings.
Consider silence in a radio broadcast. It isn't the total lack of a signal; it's a signal which designates an emptiness. The true lack of a signal manifests as random static. The chaos of static is overridden in order to communicate a rest. Thus, apparent emptiness can convey order. I think this example may be onto something. Maybe the best opposite of somethingness is random chaos rather than nothingness. My favorite contradiction of "things only change when caused to do so" is "everything is trying to happen at once but fails to because of cancellation." Static is the sound of everything trying to happen at once.
The silent spaces in a radio broadcast have meaning and provide information as much as the sound before and after. They aren't really in the background but have equal ontological status with the sound parts. A written message is in the something of black ink and the nothing of white paper, in the letters and the spaces between them.
Profoundly blind people don't live in darkness, don't see blackness. They live without sight, black or otherwise. Our minds automatically substitute trumped-up relative nothings to stand for the deepest nothings. We picture the Void as empty space rather than as a non-thing without the attributes of space or of anything else.
Many philosophies from various forms of mysticism to existentialism have held nothingness in a central place. We will see what a rich and elusive idea this nothingness is.
Zero, in the meantime, is our quantitative correlative of nothingness. Since math epitomizes certainty and precision, we'd expect something as simple as zero to be an unambiguous and definite object, but... zero has issues. Let's start there.
He doesn't have any!
Math Problem: A man has two apples. He then eats one of them and gives the other to a friend. Now how many apples does the man have?
Solution #1: He has zero apples.
Solution #2: He doesn't have any!
Is there a difference between these two solutions? They certainly feel different. One sounds like math and the other like just plain common sense. Solutions like #1 have always bugged me. Our schooling has helped make zero seem a real and natural thing, but something deep inside me says that it is not.
People have been counting since long before the dawn of recorded history, and so the counting numbers 1, 2, 3 and so on are very old. The name given to the counting numbers by mathematicians, namely the natural numbers, indicates their special status and presumed priority. "God made the natural numbers; all else is the work of man." Zero, on the other hand, is a relatively recent invention. There is no acknowledgement of it at all in Greek mathematics. Wow! When Arab mathematicians and accountants first started using zero in the middle ages — literally to keep the books balanced — the symbol 0 had no meaning as a quantity. It was just used to distinguish, for instance, 308 from 38. It was, as we called it in elementary school, a place holder, meant to indicate that we should skip the tens place — three hundreds, no tens and eight ones.
The Babylonians, who invented place value numeration, where the value of a digit depends on its position, used a space to serve the same purpose as the Arab 0. The lack of an actual symbol suggests it may never have occurred to them that they could treat nothing the way they treated something. A lack must have seemed to them, as it seems to me, to be in a different category than a number. Spaces ultimately lost out to zeros not for abstract mathematical reasons but because spaces led to ambiguities when they were placed at the end of the number (380) or when there was more than one space (3008). Thus zero arose as a lexicographic convenience, a mark rather than a quantity or even a concept.
In the 17th century, zero acquired a second equally important role. René Descartes may have been the first to explicitly use the idea that locations on a line could be labeled with numbers. This simple labeling idea led directly to the invention of the most important bit of technology of the last thousand years, analytic geometry, which exploits the insight that algebra and geometry are equivalent. Spatial relationships can be represented as equations and vice versa. You may never have thought about that when you learned about graphing in middle school. This realization paved the way for Newton and the rest of quantitative science. Geometry-algebra equivalence may be the first triumph of Descartes' mechanistic thought, adding confidence to the notion that the MT relationship is simple and one-to-one: Space and spatial imagery, the seeming antithesis of linear verbal expression, had turned out to be expressible in "words." Perhaps, therefore, everything is expressible in words.
Since lines go on indefinitely in both directions, locations to the left as well as the right of 1 need labels. Zero and the negative numbers had been around long enough in other contexts that they were the obvious choices to be used here.
Zero therefore has important and clear meanings as a place holder and as a location on a number line, but can it be interpreted as a quantity? Can one have zero apples? The difficulty boils down to zero's strange status as a sign which stands for absence when all other numerals stand for presence. Zero makes "that which is not a thing" a thing. The nothingness that can be named is not the true nothingness.
The strategy mathematicians have used to avoid this paradoxical situation about naming nothingness is to say that zero is not really a quantification of nothing at all but rather refers to a simple object called the empty (or null) set. That is, numbers are ultimately only tokens which refer to abstract sets (or collections), and zero is the token for the special set which has no members or elements. The bag is empty, but at least there is still a bag. Zero is not an absolute nothing. It does have appended to it the units of that set being examined — apples, for example. But what can we possibly mean by a set with no elements when "set" seems to be defined as something that has elements, as nothing but its elements? The bag was never really there; it was a contrivance.
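The thing-like character of the empty bag is easy to exhibit on a computer. Here is a small Python aside (Python is my anachronism here, not the book's; the nested-bag construction at the end is the standard von Neumann construction from set theory):

```python
# The "bag with nothing in it" is itself a perfectly good object.
empty = set()

print(len(empty))        # the bag holds zero elements
print(empty == set())    # any two empty bags are interchangeable
print(empty is None)     # but the bag is not "nothing" -- it exists

# Von Neumann's construction builds the counting numbers out of
# nothing but nested empty bags:
zero = frozenset()                 # 0 = {}
one = frozenset({zero})            # 1 = {0}
two = frozenset({zero, one})       # 2 = {0, 1}
print(len(zero), len(one), len(two))
```

The empty set, in other words, is every bit as much a something as the sets it helps build, which is exactly the complaint being lodged against it above.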
When conferring a value of zero on some measurement we are committing what Whitehead called the fallacy of misplaced concreteness. We are reifying a mental construct rather than a "thing" in any reasonable sense of the word. When we have no apples, the quality of "appleness" is missing and there is nothing to measure. One cannot sensibly say "I counted the apples and there were zero of them." We can get as far as preparing to count, but we never quite commit the act of counting. Zero, interpreted as "nothing with units", has a quality of readiness or potential without content, as if something were about to appear.
I've written a lot about the unbridgeable separation (and the extraordinary connection) between words and what they signify. It may seem a small thing to adopt an MT model that ignores the distinction between a name and the supposed thing it names, but there seems to be a far greater gap between nothingness and a name for nothingness. Names are labels for things and not for non-things. The substitution of "zero things" for "not anything" is a misleading reduction (or inflation) of the situation as much as the substitution of "seeing blackness" for "not seeing anything" misrepresents the experience of blind people.
Since nothingness is not a thing, if we are to reason with it or around it, some sort of "approximate" substitutes must be invented. Nothingness has no numeric properties (or any properties at all), so we have conjured up zero as a numeric substitute for nothingness, just as we created the image of total darkness to stand in place of the nothingness of sightlessness.
Zero is the exception to many mathematical rules. We all know, for example, that you can't divide by zero. That is, there is no answer to the questions "How many slices will a pie yield if each slice is of zero size?" or alternatively "How big will each portion be if we divide a pie into zero portions?" From simple arithmetic through calculus, rule after rule makes an exception for zero.
From the middle of the 19th century to the present, one of the strongest trends in mathematics has involved stretching the idea of a number system to its most abstract forms. In doing so, mathematicians have looked for the most general properties of number systems and of the operations, such as addition and multiplication, that we use to combine the elements (numbers) of the system. One such property is closure. A system is said to be closed under an operation if performing the operation on two elements of the system always yields another element of the system. The counting numbers, for example, are closed under addition since the sum of any two counting numbers is also a counting number. On the other hand, that system is not closed under subtraction because differences like 3 minus 5 do not yield counting numbers. If we expand our system to include zero and the negative whole numbers, then the system is closed under subtraction.
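For the computationally minded, closure can be probed by searching for counterexamples. This little Python sketch is mine, not standard terminology — and it can only examine a finite sample of an infinite system, so "no counterexample found" is suggestive rather than a proof:

```python
# Search a finite sample for counterexamples to closure.
def closed_under(op, elements, member):
    """Return a counterexample pair (a, b) for which op(a, b) leaves
    the system, or None if the sample turned up no counterexample."""
    for a in elements:
        for b in elements:
            if not member(op(a, b)):
                return (a, b)
    return None

counting = range(1, 11)                          # a sample of 1, 2, 3, ...
is_counting = lambda x: isinstance(x, int) and x >= 1
is_integer = lambda x: isinstance(x, int)

# Addition produces no counterexample in the sample...
print(closed_under(lambda a, b: a + b, counting, is_counting))   # None
# ...but subtraction does (1 - 1 = 0 is not a counting number).
print(closed_under(lambda a, b: a - b, counting, is_counting))
# Enlarging the system to all integers restores closure for subtraction.
print(closed_under(lambda a, b: a - b, counting, is_integer))    # None
```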
By further expanding the system to include rationals, irrationals and complex numbers, we can arrive at systems that are closed under multiplication, exponentiation and other more exotic operations. Because of the exception we must make for zero, however, no simple system which includes zero will be closed under division. That is, the result of dividing a number by zero is never in the system. In order to create a system closed under division, we have to tack on a specially contrived extra infinity element. Why infinity? Well, look at what happens to 1/X as we substitute smaller and smaller values for X:
1/1 = 1
1/.1 = 10
1/.01 = 100
1/.001 = 1000
The closer the divisor gets to 0, the larger the quotient gets. It seems almost reasonable then to say that when the divisor gets to 0, the quotient will have gotten infinitely large. Thus we can think of zero and infinity as reciprocals. This formulation does lead to a little problem, however. If 1/0 = ∞ and 3/0 = ∞, then ∞ × 0 = 1 and ∞ × 0 = 3. Also, if we approach zero from the negative side, the quotient heads toward −∞.
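As it happens, the "specially contrived extra infinity element" has been quietly built into computer arithmetic: the IEEE floating-point standard ships with just such an object. A quick Python sketch (my illustration, not part of the argument above):

```python
import math

# Recreating the table: 1/X grows without bound as X shrinks.
for x in [1.0, 0.1, 0.01, 0.001]:
    print(f"1/{x} = {1 / x}")

# Floating point tacks an infinity element onto the number system.
# It absorbs ordinary arithmetic, and its reciprocal collapses to zero.
print(1 / math.inf)               # 0.0
print(math.inf + 1 == math.inf)   # True: infinity swallows finite sums
# The two-sided approach problem survives, though: the standard needs
# a separate negative infinity for divisors approaching 0 from below.
print(math.inf == -math.inf)      # False
```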
Zero as a Measurement
So far we have looked at the problems associated with considering zero as a counting number. Similar difficulties arise when we consider it as a real measurement. Suppose, for example, that we look at the controversial proposition that there exist massless particles. Assigning a value of zero to the mass of a particle means that the concept of mass does not apply to the particle. It will neither exert nor be subject to gravitational effects, it won't transfer momentum in a collision, and so on. In other words, to say that a particle has zero mass is a little like saying that the spirit of democracy takes up zero milliliters; volume has nothing to do with the conceptual world, and mass has nothing to do with a particle that doesn't have any.
Similarly, when we say that the electric potential in a portion of space is zero, what that really means is that the portion of space has no electrical character. You can't take a measurement and get zero for an answer; you can only fail to get a measurement. Perhaps a counterexample springs to mind — a temperature of 0° Celsius — but 0° is not a measure of quantity at all; rather, it uses zero in its number line sense, as a marking on an arbitrary scale. It is only by convention that we label the freezing point of water as 0°. A measurement of 0 kelvin might count as a sort of quantity (related to the amount of heat in a sample), except that absolute zero has never been achieved and probably never can be. What seems like a hard stop at the end of the temperature scale turns out to be fuzzy and open-ended, like the set of real numbers between 0 and 1 that does not include the endpoints. It is instructive to imagine that such an open-ended set is really like an infinitely deep well. No matter how close to the end you think you've come, there's always plenty of room left to travel downward.
Physicists managed to produce a mathematical theory that pushes our knowledge of the state of the universe back to a small fraction of a second after the supposed Big Bang, but there are theoretical and logical obstacles to reaching all the way back precisely to Zero Hour. One current response to the powerfully anti-intuitive notion of a hard beginning to time is the idea that, like the temperature scale, time fuzzes out toward the presumed endpoint. The theory of Hawking and Hartle says that, working backwards toward a tiny fraction of a second after the imaginary Big Bang, time becomes indistinguishable from space in just such a way that the hard endpoint, the beginning of time, smears out completely. There may be no hard zeroes at the ends of our "absolute" scales.
Zero is also used to describe the size of a spatial point or a point in time, and this usage too is fraught with difficulties. First, there are no events of zero duration, there is no camera capable of capturing a single instant, and there is nothing that exists in space but takes up none of it.
A line segment of length one inch ostensibly consists of nothing but points. If each of these points has zero width, how can a collection of them, no matter how numerous, fill up that angry inch?
This is not to say that most uses of zero as an accounting convention or as a measure are not clear and unambiguous. A batting average of zero means no hits in some number of at bats. Zero apples means no apples. I'm not trying to say that everything we know about zero is wrong. It is my intention only to peel back your sense that zero is a simple and easy concept. It is sophisticated relative to one, two, and three. It is especially sophisticated in that it is a map without a corresponding territory, even in principle.
Even mathematics, this bastion of Platonic perfection in which science has laid its faith as a kind of ultimate reality, is itself no more than a map of something which ultimately is not perfectly mappable. Math can't get very far without a zero concept, but we can't act as if nothingness is something without losing certainty and definiteness. Also, if nothingness is considered an ultimate and absolute background (one that cannot ever be a foreground), a mere stage on which all phenomena act out their parts, then you automatically lose half the story. My modified nothingness substitute makes no pretense of being absolute nothingness, and that may allow for a Foreground-Background Switch.
Ghosts of departed quantities
The ancient Greeks had no zero concept. Greek mathematicians, for the most part, reasoned geometrically and geometry yields no perfect counterpart to zero. In Euclid's geometry, the closest thing to a zero is the point. Points are purposely left undefined in the modern treatment of geometry, but they are to be conceived of as locations without size or dimension and as the constituent parts of lines, planes and space. We might be tempted to say that points have zero length (as well as zero area and volume), and, in fact, that is what many of us have been taught in school, but we are about to look at a paradox that arises from such an equivalence that will shed light on the difficulties we have in treating zero as a quantity.
If points have no width (or widths of zero units), then no matter how many of them we lay side by side, even an uncountable infinity, we will never get a line segment of finite length. And yet we know that lines are somehow "made of" points. On the other hand, if a point has a width greater than zero, then there is more than one location represented, and we therefore have more than a single point. To put the paradox plainly, points seem to fill space but to take up none.
The closer we look at this situation, the more confusing it becomes. Zero width implies we can't make lines, while nonzero width implies the nonuniqueness of points. It is interesting to note that one of the famous paradoxes of the Greek philosopher Zeno described this problem with points hundreds of years before Euclid wrote and compiled his Elements. Euclid simply side-stepped the issue. I read somewhere that awareness of such paradoxes steered Greek mathematics away from arithmetic and number and toward geometry.
Before we go on to think about solutions for the point-width problem, I'd like to bring up an additional absurdity built into the concept of a point. To ask what space would be like without the attribute of extension is like asking what the world would be like without the property of time. What would happen if there were no time? Nothing, I presume. Clearly, time and the world are inseparable. They arise mutually. Likewise, a bit of space without extension is difficult to imagine. We tend to think it is okay to separate out qualities which arise mutually, and, as I've said, creating these artificial foreground-background orientations that impose linear order on a nonlinear world is what intellectualization is all about, but in the long run such separations are bound to lead to difficulties. This absurdity is for me very much like that implicit in the idea of zero. What would number be like if we removed the attribute of quantity? Whatever it is, it wouldn't be much like a number.
A Small Suggestion
Mathematicians have devised many ways to patch up the problem about the widths of points. The one I will present is far from the received version, but it has a certain intuitive appeal. It's my alternative quantitative version of zero. We postulate the existence of a new kind of number, called an infinitesimal or an indivisible, which bridges the gap between the hard stop of zero and tiny finite numbers (like .0000001). We can think of infinitesimals (rather than zero) as reciprocals of infinity. These infinitesimals are not all identical in size and thus we avoid the problem with zero as a reciprocal of infinity that was mentioned earlier. I mentioned a moment ago pie slices of size zero. Imagine instead that each slice has an infinitesimal size. In that case we would get an infinite number of slices.
We will say that points have infinitesimal width. Any sum of a finite number of these infinitesimals will still be infinitesimal, but an (appropriately large) infinite collection will break through into finitehood. That is, we would get a finite sum such as 3.
It may help you understand the way infinitesimals sum to finites to look at the analogous "level-breaking" we see with finite numbers. A sum of finitely many finites will always give a finite result, but an appropriate infinite collection of them produces an infinite sum. That is, finites too can break through to the "next" level. Thus, infinitesimals are to finites as finites are to infinities. There is a continuum of levels
. . . 2nd-order infinitesimals, infinitesimals, finites, infinities, . . .
extending in both directions.
We will see shortly what these numbers have to do with our zero problem, but notice now the similarity between the zero/not zero ambiguity of infinitesimals and the quantity/no quantity ambiguity of zero.
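We can't, of course, actually compute with infinitely many infinitesimal terms, but a finite shadow of the level-breaking idea can be checked exactly. In this Python sketch (mine, using exact rational arithmetic so no rounding muddies the point), a finite quantity is cut into ever-thinner slivers that always reassemble into the original whole:

```python
from fractions import Fraction

# Cut the finite quantity 3 into n equal slivers. Each sliver dwindles
# toward nothing as n grows, yet the n slivers always reassemble into
# exactly 3 -- exact rational arithmetic, no floating-point fudging.
for n in [10, 1000, 10**6]:
    sliver = Fraction(3, n)
    total = n * sliver
    print(n, float(sliver), total)
```

The slivers head toward zero while their sum sits fixed at 3; the infinitesimal picture simply imagines this process carried past every finite n.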
The history of the infinitesimals and the debate over their existence is fascinating and sordid, having stirred up more passion and acrimony than you would expect. One vestige of the debate is the often lampooned scholastic question about the number of angels that can dance on the head of a pin. I think the question is not whether 10 or 100 fit but whether a finite or infinite number fit. That is, are there infinitesimal embodiments? Assuming that angels can manifest themselves only in physically possible ways, can there possibly be infinitely many in a finite space?
These tiny numbers have been reinvented several times by scientists and mathematicians in order to solve problems but have been consistently discredited by critics who see them as abhorrent, strange and unnecessary. There is no question that infinitesimals lead to correct answers when used properly, but mathematicians have often gone to great lengths to avoid them. Their strangeness and consequent controversial character account for some of that aversion, but there is also the fact that reasoning with them can easily get you into a muddle of inconsistencies.
It is well known that the logic and arithmetic of infinity are very different from their ordinary counterparts. For example, even though the set of all natural numbers {1, 2, 3, ...} contains the set of even numbers {2, 4, 6, ...}, the two sets have the same size. That is, they can be put into one-to-one correspondence, each number in the first set matching up with its double in the second set. So we should expect that the logic of infinitesimals will also require special attention.
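The match-up is easy to exhibit; here is a small Python illustration (the helper name is my own) that pairs each natural with its double. No finite prefix of the pairing ever shows a natural left without an even partner, or vice versa:

```python
# Pair each natural number with its double: the one-to-one
# correspondence between the naturals and the even numbers.
def pairing(n):
    return [(k, 2 * k) for k in range(1, n + 1)]

for k, double in pairing(5):
    print(k, "<->", double)

# Every natural in the sample has exactly one even partner.
print(all(d == 2 * k for k, d in pairing(1000)))   # True
```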
As an example of reasoning by the free use of infinitesimals, I will give a demonstration of the area formula for a circle which is attributed to Nicholas of Cusa, a 15th-century scholar. Divide a circle of radius R into infinitely many pie slices. It is impossible to picture one such slice precisely, but we can represent it as an infinitely thin triangle.
Since a circle is like an infinite-sided polygon (!), we can imagine that the arc of each of the infinitely many slices is actually a straight line segment. Thus the slices are triangles with an infinitesimal base (whose length I will designate with the symbol ·). Since the triangle is infinitely skinny, its altitude is the same as its side, which is R. Thus the area of one slice is ½R· (or half the height times the base). Now let's add up all the slices of the pie to get the total area
A = ½R· + ½R· + ½R· + ½R· + ½R· + ...
Factoring out the ½R from each term, we get
A = ½R (· + · + · + · + ...).
The sum of the infinitesimals in the parentheses is the sum of the arcs which make up the circle and thus equals the circumference of the circle. By the definition of π, the circumference is 2πR, so
A = ½R(2πR) = ½ × 2 × π × R × R = πR². QED
You can probably imagine the sort of reaction this sloppy looking reasoning would provoke from rigor-minded mathematicians. Clearly there are many possible objections. What does it mean to divide a pie into an infinite number of slices? Are we sure we can consider these things as triangles rather than sectors? How can the altitude of the isosceles triangle be equal to the side? Can you factor from infinitely many terms? On the other hand, it can't be a coincidence that we got the right answer. Something essentially right must be going on here.
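One way to see that something essentially right is going on is to redo the argument with n honest slices and watch the total close in on πR². In this Python check (my own, not Cusa's), each slice is treated as a genuine isosceles triangle with two sides R and apex angle 2π/n, so its exact area is ½R²sin(2π/n):

```python
import math

# Cusa's argument with n finite slices: each slice of a circle of
# radius R is an isosceles triangle with two sides R and apex angle
# 2*pi/n, hence area (1/2) * R * R * sin(2*pi/n).
def sliced_area(n, R=1.0):
    return n * 0.5 * R * R * math.sin(2 * math.pi / n)

for n in [6, 60, 600, 6000]:
    print(n, sliced_area(n))

print(math.pi)   # the target: pi * R^2 with R = 1
```

As n grows the triangle total converges on πR², which is the finite echo of the infinitesimal demonstration.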
Infinitesimals seem to have first been used in Greece but much of that work is lost. There is some evidence that Democritus used infinitesimals to derive several volume formulas. None of his written works have survived, but references to his books in the writings of others make it appear more than likely that infinitesimals are prototypes for his better known invention, atoms, which he believed are the constituents of all things. His concept was not much like our current model where atoms are strictly finite.
Infinitesimals have tremendous practical significance. They were the central concept in the development of calculus, though they are rarely mentioned in today's calculus courses. Calculus "reduces" curves to many straight line segments of infinitesimal length (differential calculus) and reduces areas and volumes to infinitely many infinitesimally thick slices (integral calculus). We saw both of these aspects in the above demonstration. Calculus and its offshoot, differential equations, are the most widely used bits of mathematics in the sciences, so it is a little scandalous that it rests on such dubious entities.
The debate over whether infinitesimals really exist as mathematical objects parallels the equivalent and better known quarrels about the existence of actual, as opposed to potential, infinities. All mathematicians agree that, for example, one can talk about continuing the sum 1/2 + 1/4 + 1/8 + ... indefinitely, but many balk at the idea that we can speak of an actual completed sum (which equals 1) that includes all of the infinitely many terms. We are asserting that such actual sums exist when we say that .999999999999... equals 1, since .999... is just a shorthand for the completed infinite sum 9/10 + 9/100 + 9/1000 + .... An infinitesimal thinker, by the way, would say that .999... and 1 differ by an infinitesimal.
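The partial sums behind .999... can be computed exactly. A small Python sketch (mine), using exact rational arithmetic so the shortfall from 1 is visible precisely rather than lost in rounding:

```python
from fractions import Fraction

# Exact partial sums of 9/10 + 9/100 + 9/1000 + ...
total = Fraction(0)
for k in range(1, 8):
    total += Fraction(9, 10**k)
    print(k, total, float(total))

# Each partial sum falls short of 1 by exactly 1/10**k -- the finite
# echo of the "infinitesimal difference" between .999... and 1.
print(1 - total)   # 1/10000000
```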
Belief in infinitesimals has always been associated with mystical thought and thus has been consistently disparaged by the scientific orthodoxy. Some scholars believe that Zeno's paradoxes were formulated partly to prove that infinitesimals could not exist.
In an extraordinary treatise called "On the Method", written by Archimedes, the greatest of all Greek mathematicians, and rediscovered early in the 20th century, we see how he used infinitesimals to arrive at some of his most famous discoveries. He used them to find the volume of a sphere, for example. Archimedes, however, was an early leading proponent of rigor in mathematical proof, and he therefore carefully expunged any trace of these suspect quantities from his completed demonstrations, generally replacing their use with his famous and aptly named method of exhaustion, a far less intuitive approach.
Newton too was suspicious of the validity of infinitesimals, despite their prominent place in his discoveries. His worries about the controversy their use would cause may have persuaded him to delay for many years publishing the reasoning behind the ideas of the Principia. In the Principia itself all demonstrations have been made rigorous in the fashion of Archimedes. When Newton's calculus of "fluents" and "fluxions" was finally published (because Leibniz was about to get all the credit), his fears of controversy were realized. In an infamous critique of the new science, Bishop Berkeley assailed infinitesimals as the "ghosts of departed quantities" and claimed that belief in their existence required as much faith as the most dubious point of theology.
Leibniz, who independently developed many of Newton's mathematical results, had no such qualms about infinitesimals. He took up mathematics as an adult and in just a few short years was producing results of the greatest importance. Perhaps his rapid arrival on the scene or his background in philosophical thinking allowed him to ignore the traditions and taboos of mathematics. He was clearly captivated by the mysterious "zero/not zero" quality of infinitesimals which, I believe, inspired some of his deepest philosophical thought. His Monadology, which owes much to Democritean atomism, conceives of nature as an abstract and infinite collection of point-like consciousnesses in mystical communication and interaction. These atoms formed wholes or beings in the same ineffable fashion that points form lines. We "consist in" these monads, but we are not "made of" them, just as lines consist in points but cannot be made from them. One of the models of reality I will ultimately propose as a complement to our usual view shares a great deal with this vision.
For the most part, the conservative elements in mathematics seem to have prevailed in the case of infinitesimals. In the 19th century, embarrassed by the dirty little secret that they thought infinitesimals to be, several mathematicians put calculus on a firm foundation by replacing infinitesimals with the perfectly rigorous and elegant but unintuitive idea of limits. That is, they replaced actual infinitesimals with variables that take on values which approach, but never reach, zero, just as the sequence 1/2, 1/4, 1/8, ... approaches but never reaches zero. Many claimed, perhaps rightly, that limits were what Newton had in mind all along.
Infinitesimals can't be counted out yet, however. In the 1960s, the mathematician Abraham Robinson showed that infinitesimals could be made rigorous. Using wonderfully inventive structures derived from modern logic, he showed that limits and infinitesimals are equivalent formulations of one phenomenon. Unlike in the past, mathematicians nowadays rarely even address the question of actual existence for the tools of mathematics. They feel free to bestow mathematical existence on objects so long as they pass the test of consistency, that is, so long as they lead to no contradictions. Robinson's concept of infinitesimals leads to no contradictions, and, because fresh approaches can create fresh insights even in fully mined veins, infinitesimals have increasingly become an active area of mathematical research.
Sweet nothings
It may not be clear what is meant by the question of the existence of infinitesimals, but we can well ask whether infinitesimals might ultimately be found to have a more conventional kind of reality. Like so many other ideas that started out as mathematical abstractions, could infinitesimals have a place in the physical world? As far as I know, no theoretical physicist has seriously considered the idea of actual infinitesimals, but here are a few vague possibilities.
First of all, infinitesimal physical quantities could achieve finite embodiment through a kind of summing or amplification provided by a feedback process, not unlike the one shown in the last chapter to illustrate the idea of mathematical attraction. The difference is that this amplification would involve infinitely many iterations or terms, with each step being performed in infinitesimal time. One can imagine a quantum-like spontaneous event produced from the amplification of an infinitesimal "seed" without ever violating the law of conservation of energy, much as the "zero energy" information in a book or periodical can be amplified in the brain to bring about huge changes in the world. A relative nothing produces or causes a something. A quantum itself (as represented by Planck's constant) may be the finite "arrival point" of such an amplification. Remember in our demonstration of the area formula for a circle that the sum · + · + · + · + ... arrived at 2πR without ever passing through 1 or .1 or .01. Here's a little BASIC computer program, for those of a mathematical turn of mind, that outlines one possible path of amplification.
FOR N = 1000 TO 100000 STEP 1000
  REM start each run with nothing in the accumulator
  LET X = 0
  FOR I = 1 TO N
    REM damp the running total and inject the tiny increment 1/N
    LET X = 1/N + (N-1)/N * X
  NEXT I
  PRINT X
NEXT N
Each pass through the inner loop adds the vanishing increment 1/N while shrinking the running total by the factor (N-1)/N. After N passes the total works out to 1 - (1 - 1/N)^N, and as N grows the printed values home in on the finite number 1 - 1/e, about 0.632. A definite finite quantity is assembled out of contributions that individually dwindle toward nothing.
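For readers who would rather run the loop today than dig out a BASIC interpreter, here is a straight Python translation (the function name is mine):

```python
import math

# Translation of the BASIC loop: start from nothing, and on each of
# N passes inject the tiny increment 1/N while damping the running
# total by (N-1)/N. Algebraically the result is 1 - (1 - 1/N)**N,
# which closes in on the finite value 1 - 1/e as N grows.
def amplify(N):
    x = 0.0
    for _ in range(N):
        x = 1 / N + (N - 1) / N * x
    return x

for N in [1000, 10000, 100000]:
    print(N, amplify(N))

print(1 - math.exp(-1))   # the limiting value, roughly 0.632
```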
We are told that virtual particles possess only a quasi-reality. They bubble in and out of existence in the quantum froth of space, like the "Cloud of Not-Doing" from the beginning of chapter 2, apparently at the exact moments they are needed to complete interactions. To me they have the feel of infinitesimals, simultaneously existing and not existing, both zeros and not zeros. They once again show the lack of a hard stop in the physical world. Stuff percolates out of the relative nothingness of space, so there must be a soft edge to nothingness, one that I'm saying can be crudely pictured in terms of infinitesimals. The quantities that are now associated with virtual particles are not infinitesimal, but I believe an infinitesimal substratum which is again amplified by feedback may give rise to these strange objects. It is significant that infinitesimals, like virtual particles, can be had for free.
As of today (1995), astrophysicists feel that they have only accounted for 10% of the universe's apparent mass. The search for the "missing mass" is a major area of study. Infinitely many particles of infinitesimal mass could make up some of the other 90%. The assumption that there are finitely many particles in the universe is deep-seated but not motivated by any important a priori considerations except the limits on our ability to imagine the infinite. Perhaps it would be useful to lay aside our assumptions about the finite nature of reality in general. Infinities crop up as singularities in fields and black holes. Why not find infinitesimals out there as well?
All areas of science have their "dirty little secrets" like infinitesimals in math. Most theories contain large gaps that tend to be glossed over. Evolutionary biology has its missing links, and physics has the so-called renormalization process, in which practitioners remove pesky infinities, arising from the considerations of singularities, from the equations of quantum mechanics and miraculously arrive at correct answers. The alternative mathematical techniques offered by an infinitesimal approach may provide a key to legitimizing the renormalization process.
Many physicists have argued that time and space are not continuous but quantized like energy. Infinitesimals may offer a kind of reconciliation between the two hitherto irreconcilable domains of the continuous and the discrete, between smoothness and graininess. They can be a substrate for either realm. Continuity can be seen as infinitesimal change in infinitesimal time, and, as I said a while back, quanta can be arrival points of feedback with infinitely many recursions or iterations in finite time.
Infinitesimals have been trotted out occasionally, although not as often as Heisenberg's Uncertainty Principle and Gödel's Incompleteness Theorem, by those of a mystical frame of mind to serve as loopholes through which consciousness can slip into our presumably computer-like brains. God has long been associated with the infinite, and there is a certain appeal in the idea that the reciprocal of infinity is a bit of the Godhead enfolded in us, the macrocosm in the microcosm. We could imagine, for example, that the representations of the world in our mind's eye have an infinitesimal nature. Memories could be modeled as actual infinitesimal residues of experience rather than symbolic representations, which would provide essentially limitless storage in a finite space. This model could explain the consistent failure of scientific models of memory. Rather than seeing the brain as a computation and storage device, this point of view sees it as a modulator/demodulator between finite and infinitesimal realms, an amplifier of some sort. On the face of it, the hardware of the brain seems as suited to being a radio or modem as it is to being a computer.
It is of course more than a little troubling to scientists to posit the existence of things which are in principle undetectable, but, on the other hand, lack of detection has not hurt the popularity of gravitons or quarks or superstrings.
Infinitesimals for Zero
How are infinitesimals related to the Zero Paradox where a nothing is treated as a something? I want to suggest that infinitesimals can act as surrogates for zero in cases of counting and, especially, of measurement. These relative nothings, which can sum to something, clearly avoid the paradox. They are somewhere between absolute (and thus nonsensical) nothingness and something. To begin with, I will try to show that some problems presented by assigning an absolute "nothing" value to zero can be ameliorated by inserting an infinitesimal value in its place. Rather than being an intermediary value between the quantity zero and finite numbers, it replaces the quantity zero altogether. This approach is just as problematic as the usual zero approach but, in my opinion, no more so. It too provides the insights that come with Assumption Switching.
With such a replacement the division rule, as we have seen, would now have no exceptions. A number divided by an infinitesimal is an infinity and a number divided by an infinity is an infinitesimal.
The most obvious and compelling argument for the substitution of infinitesimals for zeros comes from probability theory (and/or measure theory). A simple definition of the probability of an event is the number of equally likely ways that the event can happen divided by the total number of possibilities:

P(event) = (number of ways the event can happen) / (total number of equally likely possibilities)
Thus, for example, the probability of randomly selecting an ace from a standard deck would be 1/13 since there are four aces to be had from among the 52 cards.
We might suppose from this intuitive definition that a probability of zero could only be associated with utterly impossible events, but this is not the case. Suppose we drop a coin on the floor, pick it up and then drop it again. What is the probability that it lands in exactly the same place? Technical definitions involving measure theory yield an answer of zero, which is highly counterintuitive because we sense that it could happen. In fact, since every possible position yields a probability of zero, we have the peculiar situation we had with the point-width paradox. How can even an uncountable sum of zeros be more than zero? Looking to our simple definition, however, we get one favorable outcome out of infinitely many possible outcomes, or 1/∞, for the probability of repeating the position of the coin. As we have seen, this ratio can be interpreted as an infinitesimal. Thus, despite it being infinitely improbable for the identical position to recur, our intuitive sense that such a thing is possible is satisfied by an infinitesimal rather than a zero value. It would seem more appropriate to reserve a probability of zero for the truly impossible.
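The favorable-over-total definition, and the way a per-position probability shrinks toward a one-over-infinity value as the measurement grid gets finer, can be sketched numerically. This is a toy illustration only; the grid sizes are arbitrary choices of mine, not anything from the text:

```python
from fractions import Fraction

# Probability as favorable outcomes over total outcomes, per the simple
# definition above: four aces among 52 cards.
p_ace = Fraction(4, 52)
print(p_ace)  # 1/13

# A discretized version of the coin-on-the-floor argument: carve the
# floor into n equally likely cells. Each exact position then has
# probability 1/n, which shrinks without ever reaching zero as the
# grid is refined.
for n in (10, 10_000, 10_000_000):
    print(Fraction(1, n))
```

Every value in the sequence is positive, yet the limit the sequence points at is the disputed territory: zero in the standard account, an infinitesimal in the account proposed here.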
0 vs. 1
Binary arithmetic, which uses only the digits 0 and 1 (rather than the ten digits of the decimal system) to represent any quantity at all, is given a great deal of attention these days. This is mainly due to its ubiquitous use in computer science, a use based on the strong analogy between Boolean logic and the "ons" and "offs" of simple electrical circuits or logic gates. But another aspect of the interest in binaries comes from the simplicity and elegant "ultimateness" of the binary system. It takes the idea of a number system to its logical extreme. What could be more stripped down than the somethingness vs. nothingness of 1 vs. 0?
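That stripped-down quality is easy to see in the standard base-conversion algorithm, which needs nothing but repeated division by two. A minimal sketch (the function name is my own):

```python
def to_binary(n):
    """Represent a non-negative integer using only the digits 0 and 1,
    by repeated division by 2 (standard base conversion)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # the remainder is the next binary digit
        n //= 2
    return "".join(reversed(digits))

print(to_binary(13))   # "1101", i.e. 8 + 4 + 1
print(int("1101", 2))  # 13, the round trip back to decimal
```

Two symbols suffice to express every counting number, which is the "ultimateness" referred to above.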
Leibnitz was among the first mathematicians to take an interest in the binary system. He saw something very special and meaningful in the binaries. Laplace, the 18th century French mathematician, comments:
Leibnitz saw in his binary arithmetic the image of Creation... He imagined that Unity represented God, and Zero the void; that the Supreme Being drew all beings from the void, just as unity and zero express all numbers in his system of numeration...I mention this merely to show how the prejudices of childhood may cloud the vision even of the greatest men!
Rudy Rucker, in his book Mind Tools, points out that it is probably no coincidence that the symbols 0 and 1 have evolved to look a little like an egg and a sperm. The zero egg, far from being a nothing, is dormant but symbolically full of potential. Once it is "fertilized" by the finite, it produces all of the multiplicity of forms. This image of zero as the cosmic egg goes better with soft infinitesimals than a hard zero. [Zero as absolute nothingness is sexist!]
This "relative nothingness", unencumbered as it is by positive or negative qualities, is free to become anything. Like space, it can be seen as a reined in 'everything' (the reciprocal of infinity?) from which 'something' sometimes leaks out. As children we were very aware of the potential for stuff to arise from nothingness. Four year olds, for example, appreciate that the darkness is not just the lack of light, a mere nothing. My son used to worry that the darkness outside his window would come into his room and get him. I remember from my own childhood how monsters were born of the shadows on the wall. Likewise the nothingness of sleep gives rise to the astounding multiplicity of dreams. Nature abhors a vacuum in the sense that as one aspect of the world is suppressed others are amplified. Of absolute nothingness we cannot speak, but it may be that this relative sort of nothing which we experience in the world is better represented by infinitesimals than by zero. Infinitesimals conjure that sense of potential; they give that feeling of readiness for becoming. Remember that any sum of zeros is still zero, but infinitesimals can become finite, can become manifest.
It must seem bizarre to be told, in effect, that there is no such thing as zero. I do not expect to convince you that that is true. Operating from the assumption that many truths can peacefully coexist through Assumption Switching, I have been trying to develop, as I did in the previous chapter, a complementary description that will encourage new insights. The standard zero concept fits better with the Fully Automatic Model of reality which sees space as an absolutely dead, passive non-factor. Infinitesimals seem more appropriate to the complementary holistic approach. Zero goes with clear edges and hard stops. Infinitesimals work better with fuzzy edges and soft landings.
Perfect Cancellation
When we are in the mode of considering the world holistically as a product of the interaction of processes, zeros occur only as various influences cancel each other out perfectly. Two waves will interfere with each other and when they are perfectly out of phase they cancel each other's effects.
Noise-cancelling headphones exploit this fact to protect the hearing of people who must work in extremely loud areas like certain factories or airport runways. The phones detect the incoming sound wave and produce a counter-wave to cancel it in the region around the opening of the ear (and less well elsewhere). A canceled wave is a strange thing, a relative zero. A certain logic would indicate that there is no longer any wave. The vibration has been damped out. On the contrary, outside the area of cancellation the wave "reconstitutes." We have all seen this in the expanding rings produced by raindrops on the surface of a pond. There are places where part of a circular wave seems to disappear only to reappear a moment later. The wave did not cease to exist but was only masked as it continued to propagate. A canceled wave once again has this property of unexpressed potential, like an infinitesimal.
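The imperfection of this kind of cancellation can be sketched with two sampled sine waves: a tone plus an inverted copy that arrives slightly late. The five-sample lag is an assumption of mine, a stand-in for whatever processing latency a real device has:

```python
import math

rate = 44100   # samples per second
freq = 100.0   # a 100 Hz tone
delay = 5      # the anti-noise lags by 5 samples (~0.11 ms), an assumed latency

# One second of "noise" and its inverted, delayed counter-wave.
noise = [math.sin(2 * math.pi * freq * n / rate) for n in range(rate)]
anti = [0.0] * delay + [-s for s in noise[:-delay]]

# What remains after the two are superimposed.
residual = [a + b for a, b in zip(noise, anti)]

def peak(samples):
    return max(abs(s) for s in samples)

print(peak(noise))     # ~1.0
print(peak(residual))  # far smaller, but not zero: cancellation is imperfect
```

The residual never vanishes entirely; the canceled wave leaves a trace, which is the point of the analogy.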
An image of cancellation that has been very meaningful to me in this regard comes from elementary physics and calculus. I will develop this image in some detail because I'll be referring to it later and because I find it truly fascinating. Suppose all of the mass of a "planet" is miraculously concentrated in an infinitesimally thin layer on the surface of the sphere. That is to say, the planet is hollow. Newton's gravitation laws say that we can calculate the total gravitational strength of this planet by summing the tiny force vectors of the infinitely many infinitesimal "patches" of which it is made, point by point if you will. As a result of this summing, we can show that an object outside the planet will be attracted to the planet exactly as if it were solid and, equivalently, as if the mass were all concentrated at the central point of the sphere. People on the surface of the planet would not sense any gravitational difference between this and a solid planet of the same mass. By symmetry we are forced to conclude also that an object placed in the exact center of the hollow planet will not be moved by its gravitational force. All of the force vectors acting on it from the many patches cancel each other out.
Unexpectedly, however, the object would experience the same weightlessness at any point in the interior of the planet, rather than be accelerated toward the nearest point on the surface as intuition suggests. By a strange mathematical quirk, the sum of force vectors at any point in the interior cancel perfectly. This is Newton's Shell Theorem. The patches pulling an interior object toward the near point do so strongly but are fewer in number than the ones pulling the other way, trading off just so that the forces balance. The forces are not gone, mind you, but no acceleration is experienced. In an imaginary solar system, the interior object would move under the influence of all massive bodies in the system except the body it is within. Until it struck the interior surface, the object would move as if the planet were not there. Here again is a strange kind of nothing, full of potential but not realized unless the object is outside of this special region or until, say, someone digs a hole in the planet and thus throws the forces out of balance. This image has a lot in common with our anti-GLOI theory of darkness vacuums from the last chapter.
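The cancellation can be checked by doing numerically exactly what the text describes: summing the force vectors of many small patches of the shell. This is a rough sketch, with G, the shell's mass, and its radius all set to 1 for convenience:

```python
import math

def shell_force(point, n_theta=200, n_phi=400, G=1.0, M=1.0, R=1.0):
    """Net gravitational force at `point` from a uniform thin spherical
    shell, approximated by summing force vectors of small surface
    patches (midpoint rule over a theta-phi grid)."""
    fx = fy = fz = 0.0
    dtheta = math.pi / n_theta
    dphi = 2 * math.pi / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta
        # patch mass: surface density times patch area, which works out
        # to M * sin(theta) * dtheta * dphi / (4 * pi)
        dm = M * math.sin(theta) * dtheta * dphi / (4 * math.pi)
        for j in range(n_phi):
            phi = (j + 0.5) * dphi
            # patch position on the shell
            x = R * math.sin(theta) * math.cos(phi)
            y = R * math.sin(theta) * math.sin(phi)
            z = R * math.cos(theta)
            dx, dy, dz = x - point[0], y - point[1], z - point[2]
            r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
            fx += G * dm * dx / r3
            fy += G * dm * dy / r3
            fz += G * dm * dz / r3
    return (fx, fy, fz)

def magnitude(f):
    return math.sqrt(f[0] ** 2 + f[1] ** 2 + f[2] ** 2)

inside = shell_force((0.5, 0.0, 0.0))   # an off-center interior point
outside = shell_force((2.0, 0.0, 0.0))  # an exterior point at r = 2

print(magnitude(inside))   # ~0: the patches cancel everywhere inside
print(magnitude(outside))  # ~0.25 = GM/r^2, as if all mass sat at the center
```

The interior sum is not exactly zero here, but that is an artifact of the finite grid, not of the physics; the exact integral cancels perfectly, which is what makes the second-order error discussed below interesting.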
A similar thing happens with a light source when we are considering light as a classical wave phenomenon (i.e. without quantum effects). As a flash of light, for example from a camera's flashbulb, bursts out in all directions, what happens is mathematically equivalent to the following description. Each of the infinitely many individual points or patches on the crest of the spherical wavefront acts as if it is a mini-flashbulb, sending out light of diminished intensity in all directions. The light heading back toward the source into the interior of the sphere tends to perfectly cancel with light coming from other points along the surface. Thus we can say that, in this model, the apparent outward movement of light from the source is, in a way, an illusion resulting from this cancellation. Without this cancellation, light from a single source would appear to come from many directions. (diagram)
If we look carefully at the standard mathematical computation that proves these odd gravitational and electromagnetic facts (which could probably never be verified experimentally), there is actually a second-order infinitesimal error in the approximation of the area of each patch. This error can be safely ignored in the standard approach to calculus, but not when we grant real existence to infinitesimals. It implies that there might be an infinitesimal error in the final sum. I suppose, then, that the object could suffer an infinitesimal acceleration (whatever that means). In any event, infinitesimals once again capture the status of this strange nothing better than the standard zero does. My working assumption is that there is no such thing as perfect cancellation. All events leave at least an infinitesimal trace. "Ghosts of departed quantities," indeed.
The phenomenon of perfect cancellation may seem like something rather exotic and rare, but, on the contrary, it is very commonplace in quantum computations and elsewhere. One fascinating example, due to Feynman, has to do with quantum waves that, again from the point of view of their mathematical description, seem to spontaneously travel backward in time and affect events in the past. The cool thing is that these backward moving waves stimulate wavelike reactions in these past objects that move forward in time and perfectly cancel the backward moving waves, so that there is no net effect of the present on the past (or of the future on the present). If there is some kind of infinitesimal residue from these events as my approach suggests, it opens up the intriguing possibility of sending messages to the past, an idea exploited in Gregory Benford's brilliant science fiction novel Timescape, in which scientists desperately attempt to save the world from an ecological catastrophe by sending a warning to scientists in the past.
In the next chapter I will eventually use the image of canceled waves to produce an unusual model of the world.
Old Book: Chapter 5
Ex-planation
"She is moving to describe the world. / She has messages for everyone." (Talking Heads)
"Keep the juices flowing by jangling around gently as you move." (Satchel Paige)
It is time for a quick rehash. Contemplating the special relationship between maps and territories has led me to abandon the idea of a simple correspondence and adopt a modified form of what might be termed explanatory relativism. I maintain my faith in the existence of an outside world, but I have lost faith, at an intellectual level, that the outside world looks anything like it does in my head. The relationships that my mind interprets as space, distance, and separation, for example, may be something completely other, perhaps nothing but information with no material substrate. The MT relationship is completely mysterious. Something is going on there, but nobody seems to be able to say what.
In the first few chapters I've tried to indicate that our theories about how things work are always at least implicitly axiomatic, but that the world our theories are trying to epitomize is not based on assumptions or on mathematics. It simply is. To think otherwise is to confuse the map with the territory, to settle for a simple and unrealistic MT perspective. Thus while assumptions are necessary for arguments, it is a mistake to take any of them very seriously. Various reversals of assumptions, such as from the naturalness of stasis to the naturalness of change, can lead to valid insights about how the world fits together. My own most basic assumption, paradoxically, is that we can assume nothing. Everything is worthy of explanation (Shall I try to reverse that one?). If we want to explain everything then we are going to have to do without fixed assumptions, at least in the long run. Both and neither will be the rule.
This point of view leads in many directions, but the next path we must take goes into the nature and purposes of explanation itself. In offering my explanation for the nature of explanation, I will be treading on the shifting sands that always result from the application of an idea to itself, but we should be getting used to the subtlety by now.
What is an explanation that it may meaningfully connect with the real world? Is explanation an action or a computation (physical or symbolic)? Is it fundamentally functional and instrumental or truth-seeking? Do explanations create knowledge or summarize it? In short, what are explanations and why do we create them? The answers to these questions depend to a large extent on your MT orientation. A One-Way view might see explanations as the verbal or symbolic counterparts of physical causes. Thus far I have used many capitalized catch phrases — Both-and-Neither, Assumption Switching, Explanatory Relativism, Many Truths — to convey my various attitudes toward the MT question. I am seeking a view of explanation which can occupy a middle ground. It must have the potential to be intimately tied to underlying reality like naïve one-to-one MT orientations so that explanations can really reflect what is happening.
The most pervasive sense we have as we explain is that we are trying to understand and communicate why something happens and/or what is really going on. We are trying to get below the surface level. We seek explanations in order to know the truth (or at least an approximation thereof). The understanding that we get offers varying degrees of utility, from satisfying our curiosity to helping us build a television. Explaining is not an activity reserved for scientists, philosophers and intellectuals. All humans are explaining machines. We seem to do it spontaneously, without even trying. I would like to suggest that there is a degree of explaining, taken in a sufficiently broad sense, in virtually our every thought. It seems that we are forever either engaged in the process of explaining or applying an explanation, metaphor or model arrived at earlier, often to explain what is going on in our minds or in the world. Our every action implies a stance, an attitude based on some assumption of what is happening to us, and each such assumption constitutes an explanation or at least predisposes us to certain explanations. Even in dreaming, we are trying to make sense of, account for, and explain the feelings we are experiencing, from confusion and fear to frustration and lust.
I'm going to concentrate for now on how we explain things to ourselves, ignoring those aspects tied up with communication per se.
*We explain why the car is making a funny noise so that we can find and repair the problem or give reassurance that the old heap isn't about to die.
*We explain why we lost the baseball game so that we can absolve ourselves of blame, figure out what we need to practice, square our expectations with the results or work through disappointment.
*We explain to ourselves why a friend is angry with us so we can smooth over the difficulties, make ourselves better people or convince ourselves that it was really all their fault.
*We explain why there is injustice in our society so we can bring about change, alert or convince others, give our anger expression or justify our preferred, self-serving position.
*We explain how the universe came to be so that we can define ourselves in a greater context, give meaning to our lives or contemplate the mystery of existence.
This is a wide range of motivations. Can we put them under a single umbrella? You can see that in writing down these motivations I am not making much of a distinction between intellectual theories and emotional rationalizations. I am convinced there is a strong association between, for instance, the structures of theories about how objects fall to Earth and about my defenses against hurtful words. They both offer satisfaction of a kind. The following pseudo-etymology of the word explain gives the essence of my approach. "To explain" means literally "to smooth out." What, then, are we trying to smooth out as we explain things to ourselves and to each other? Ruffled feathers? Bumps? One reasonable possibility is that smoothing out refers to metaphorically clearing a path or building a road to ease travel through a bumpy world. As such our explanations are designed to get us somewhere but not necessarily to help us see the bumpy world in all its complexity. Explaining means simplifying.
In my version of things, smoothing out refers to smoothing out disturbances to the stability of the self. Picture a self as a balloon or a bubble. Its internal pressure and the nature of its skin produce its shape, which we will equate with its state, until an outside influence disturbs that condition—a finger pokes it, maybe. When the finger withdraws, the balloon pushes back to smooth out the disturbance—imperfectly and excessively, at first, and then better and better as the vibration is quickly damped out.
I see the process of explanation, in its most general sense, as analogous to the snapping-back response of the balloon. An explanation's role is to smooth out a disturbance to the self. Explaining, therefore, is the psychological analog of physical healing. Right away we can see a relationship of this idea of explanation to some of the other ideas of the book. Ex-plaining, taken in a sufficiently broad sense, is involved in the process of dynamically holding things as they are. This image says that explanation is tied up with stability. It is the cause of stasis from the inside looking out.
It is probably troubling for most readers to see the mental (explaining) and the physical (stability) mixed in such a direct and general way. When a plucked string or a balloon damps out disturbances there is no mental aspect, and likewise, when we feel offended it's not because some neuron is out of kilter. However, when mental processes are assumed to come from brain structures and activity, it is hard to see how the physical and the mental hook up. Nowadays it is often assumed the gap can be filled by computation. That is, it is assumed that brains are to minds as computers are to computer programs in operation. Our feeling of offense takes place at a symbol-processing level, like the positions of electrons in a computer's microchips, the level of pure information rather than physical or material imbalance. I find the computation metaphor to be far from compelling. My own view is that brains are somewhat more like radios than like computers. If there is some doubt that neurons do give rise to mind, then there is plenty of reason to explore alternative formulations like this Smoothing Out Metaphor. We will leave aside for now, however, the degree of physical reality to impute to this smoothing out process and will treat it strictly metaphorically.
A case can be made that, for each item in the list of motivations given a few paragraphs back, the explanation is subsumed under the desire to smooth out. The fact of social injustice, for example, impinges on our picture of how the world should be: it disturbs our world-view, wounds us, throws us out of balance, bends us out of shape. We can choose to rid ourselves of the disturbance by incorporating injustice into our picture of reality (thus changing our balloon) or finding a scapegoat that lets us blow off steam or finding a cause that will ultimately give us a pathway to change the situation (voting for a particular candidate, for example, or boycotting a product). In any case, we can say that the resulting explanation of the injustice has the effect of reducing the disturbance like the pressure of the balloon reduces its disequilibrium. There may be differing degrees of instrumentality and efficiency in these many explanations but they all make things smoother.
Besides the word explain, the words describe and express also have pseudo-etymologies of my own design that fit nicely with the balloon metaphor. To de-scribe is to remove what has been written (upon the self or upon the senses), and to ex-press has the sense of ridding what has been pressed upon the self. We conceive of all three of these actions as doings, but the prefixes mark them as undoings, responses to disturbances, ways of canceling out a problem. Whether or not my humble attempts are true etymologies, it is my intention to look at explaining, describing and expressing as a kind of complex of associated processes which encompass a large part of mental activity and which have an essentially homeostatic role, at least in the simplest version of the model. Later this reactive scheme for explanation will be supplemented with a proactive one through an Assumption Switch, and the Beacon metaphor will take its place beside the Bubble -- and The Bubble & Beacon meta-metaphor will predominate. (Ultimately, this pair can be replaced with a quartet, octet, etc., through both-and-neither thinking. See the spheroid model.)
One thing that the Smoothing Out Metaphor for explanation has going for it is its simplicity. What it lacks in completeness it makes up for in comprehensibility. Isn't it generally true that we explain what throws us out of equilibrium? Isn't there an extent to which any human thought or answer to a question is a response to a disturbance (either externally or internally generated) and thus an explanation in the smoothing out sense?
How does this idea differ from the idea of explaining as laying out the truth? To what extent does this perspective do damage to our normal attitude that our explanations are attempts to understand the world as it really is? The two versions feel very different, and there is no doubt they are not entirely compatible, but the relationship is closer than one might expect. The metaphor says nothing about right answers; it speaks only of the effects explanations have on the explainer. Yet to the extent that explanations smooth out better the more precisely they match (or, rather, invert) the original incoming influence or deformation, good explanations still, by definition, represent the world well. What works best will necessarily correspond in some sense to the underlying reality. But the smoothing-out version clearly leaves more room for the validity of multiple explanations, each of which may undo influence in its own way. We end up with a certain relativity to our explanations that does not fit our usual One-Way View of things. That is, it wouldn't be surprising if a rich person's explanation of unequal wealth in society works as well for her as a poor person's explanation works for him.
There are many ways that I can imagine to undo a disturbance. Picture the disturbed self as a dented fender that needs to be fixed. The ideal and never-realized solution would be to somehow "reverse the direction of the film." Molecule by molecule, instant by instant, we could micro-engineer the deformation back to its original shape along the identical but reversed path followed during the accident. In that case, we would have exactly the original fender without so much as any metal fatigue—no side effects at all. This is the perfect ex-planation in that we have smoothed out the disturbance in such a way as to precisely mirror (and invert) what really happened. If such a perfect reversal were accomplished, we would have reason to say the ex-planation made a perfect fit with the underlying reality. Unfortunately, such a reversal of the arrow of time is a practical impossibility. What is more likely to actually happen, however, is that we will either buy a new fender or hammer out the dent. In either case our process of smoothing will not mirror the path of the original event/influence, but neither will it be independent of the event. A good hammering out job will have to follow the path of the accident to some extent in order to do the job efficiently with minimal metal fatigue. The ex-planation or de-scription will contain some aspects of the underlying reality without epitomizing it or reversing it perfectly.
As with our hammering-out job on the fender, the balloon's response to the poking finger is imperfect from the point of view of undoing what has been done. Its reaction reflects the energy the finger put into it and rebounds beyond its original shape. The deformation now spreads across the entire balloon and sets up a vibration which continues until friction damps it out of the system. This is far from the ideal ex-planation if the ideal involves no side-effects.
In the stasis chapter, we talked about how odd it is that pervasive characteristics of reality like friction and damping were seen as ancillary epiphenomena from the point of view of classical dynamics. Every set of assumptions will produce descriptions that give the status of side-effects to certain phenomena. Complementary explanations produced by Assumption Switching, on the other hand, will tend to put those phenomena at center stage. We have just seen how the side-effects of the explanation process can be interpreted as having a very important role in creating the whole world.
It is probably true that a perfect reversal of what I have been calling the ideal explanation is contrary to the laws of physics. As far as the laws of physics are concerned, time is reversible at the level of particles but this is not the case in the friction-filled macroscopic world. It is theoretically possible to "unbreak" the shards of what was a drinking glass; it could even happen all by itself, but the odds against that are astronomical. In the case of the balloon, if we tried to reverse time and replaced the poking finger with an equal and opposite withdrawing finger, it would be difficult to get back the heat lost to friction, etc. Friction (qua entropy) gives a direction to the arrow of time. As the finger lost contact with the balloon, the surface of the balloon would not suddenly go smooth but would go into vibration as before.
In the last chapter I described an earphone system designed to cancel sounds. These devices, like balloons, offer a good image of my version of ex-planation and demonstrate an important side effect. As they "experience" changes in their environment, they respond by trying to undo those changes. One distinction is that homeostatic systems like people respond automatically or at the level of being rather than at the level of programs. They are analog objects rather than digital devices. Like the balloon example though, the earphones fail to achieve the ideal situation of perfect cancellation. They can only do the job locally and with a slight time lag (unless they exploit the speed of an electrical signal to outrun the sound). Outside the local region, the phones are producing a new sound which is spreading out into space.
From the point of view of the smoothing out process, these sounds are the side effects of explanations. All ex-planations must have such side-effects. We might call it Explanatory Spillage. Such spillage is necessitated by the energy of the disturbances lost to friction and by the approximate nature of the explanations. The One-Way Model which says there are monolithic true explanations for phenomena implies a kind of perfect matching that leaves no side-effects or inaccuracies. I hope it is clear by now that I reject the idea of such monolithic true explanations.
Consider for a moment what would happen if we were to put several of these noise reduction systems together, each separated by a small distance. When we introduce a small sound in the midst of these devices, each system will produce waves that will tend to cancel this sound in the local area of the devices but will cancel imperfectly outside that region. The total sound energy is increased, but the audible noise in the vicinity of the devices might be small. If the region we are considering is large enough, because all of the systems are responding at slightly different times, there may be places where the waves actually augment each other rather than canceling out. The secondary sounds produced by each device will in turn generate responses or de-scriptions in the other devices. We eventually get a cascade of side-effects producing diverse conversations among the machines. The sound energy may fall into an orbit or pattern or may damp down to nothing, explode in a loud screech of feedback or become chaotic.
If each machine is working well, all of this buzzing will have little influence on the tiny islands of silence near the devices amidst the cacophony. Now, if we take this setup and replace "sound" with "disturbance" and "silence" with "changelessness", we get an idea of the implications of the Smoothing Out Metaphor for explanation. This is meant as the beginning of an abstract description of how the world can come into existence and sustain itself. We have a World of Describers sustaining their own individual identities by de-scribing or imperfectly reflecting the influence of all others, this influence being itself derived from their imperfect descriptions of disturbances.
When, inevitably, the ex-planations fail to achieve perfect cancellation, we get a slow evolution of the identity of each describer under the influence of the de-scriptions of the others. In the beginning is the word, and there is nothing but chatter thereafter! Repeatedly I have asked why language might fit reality. One answer is that reality is language-like; the World is the net effect of myriad describers de-scribing. This again looks a lot like the monadology of Leibniz with its infinitesimal monads doing nothing but reflecting the images of the other monads. Reflection is perhaps the ultimate de-scription. The sensible world is a host of beings describing each other and in so doing maintaining themselves.
It was an image very much like this World of Describers Model that got me started, during my senior year in college, along the strange path that has led to my current beliefs. As I recall, I was doodling on the back of a math problem-set, drawing concentric ring wave patterns. I pictured myself as the central wave source of one such pattern. What am I sending out? I thought. I am exemplifying myself for all the world. I am sending out the message "me, me, me. BE LIKE ME." I jotted down the caption "Being Waves." I was amused by the jargony sound of it. This was around the time that EST was very popular. A roommate and I were thinking about how to achieve easy riches, and we had come up with the idea of creating The Brain Passage Clearing Institute, for which we invented several catchy terms. Being Waves fit in nicely with this feeling of charlatanism and at the same time seemed to correspond in some rough way with something true about the world. Almost immediately I realized that the spillage from this message would create a kind of identity inertia, like the resistance or self-mass an electron experiences as it moves through its own field. The monad best able to receive the BE LIKE ME message was the sender. Thus Being Waves produced stasis; they explained why things stayed the same. Before that time it had never occurred to me that being required a cause. Being Waves gave us anti-change muscles. I continued to toy with the notion for some time. Slowly I have reformed the initial idea in accord with things I've learned. In the next chapter I will try to show how BE LIKE ME and Smoothing Out are related, how beacons are also bubbles.
These radical images can hardly be taken seriously as yet, but I hope that further switches and complementary descriptions will make that task easier. We have to imagine, for example, that a molecule of water is a describer, as are you, your pet cat and a cell from the lining of your stomach. Furthermore, the model asks us to take seriously the notion that consciousness of a sort is a simple phenomenon (assuming that explaining requires consciousness), and that the material universe is purely a construction of these "minds" based on the exchange of mere information with the rest of the world. I use the word influence as opposed to information. Influence is the vector form of scalar information -- information in a direction or on a mission. I'm thinking metaphorically about how physical force is in a sense the vector form of scalar energy.
Sledgehammer explanations
The Smoothing Out Metaphor is so general it would seem it could be stretched (like a balloon?) to fit almost any circumstances. This flexibility has good and bad aspects. Clearly there is an appeal in explaining a lot with a little. Simplicity is a fundamental aspect of my intuitive notion of a good explanation. The downside from a scientific perspective is that it lacks the quality of falsifiability. What value does a theory offer in understanding the world and discriminating between true and false statements if it can account for absolutely anything, even those things that do not happen? We'd like it to account for facts but exclude impossibilities. Something must count as evidence against an ideal explanation. But the Smoothing Out Metaphor can be manipulated to get around any counter-example with sufficiently convoluted arguments.
However, several of the most powerful images of science and everyday thought are similarly unfalsifiable. It has been said, for example, that the old summation of Darwinian selection, "survival of the fittest," is a mere tautology rather than an explanation at all. We can always take "fittest" to be synonymous with "most apt to survive." Thus, the most apt to survive will survive, on average. If one were to say that the proliferation of near-sightedness in the population showed that in some cases the less fit survive, an evolutionist could say that near-sightedness must be associated with some other characteristic which confers fitness (perhaps people with glasses are more sexually attractive). There are several examples of prevalent behaviors and features of the world which seem contrary to what you'd expect from a world ruled by Darwinian selection. Current theory holds, for example, that altruistic behavior and the gamut of emotions we deem virtuous have arisen in humanity and various other species through the essentially selfish motivations of natural selection. We do someone a good turn only because such behavior was likely to elicit reciprocation that added to the perpetuation of our genes in the ancestral habitat. In fact, the elegant and convincing theory goes on, it is often a better strategy to be thoroughly duplicitous, feigning a favor for someone but then stabbing them in the back; thus we receive a double benefit. In one fell swoop we reduce many human characteristics to a single cause which sounds like it would imply the opposite characteristic.
Another example that, on the face of it, might seem to deal a death blow to natural selection is the pervasiveness of self-destructive, risk-taking behavior and also of homosexuality. The desire to jump out of airplanes or a propensity toward addiction to alcohol ought to decrease the likelihood of that characteristic remaining in the gene pool, but that doesn't and shouldn't prevent an evolutionist from trying to accommodate these facts within Darwinism.... In other words, any possible counter-examples merely serve to change the concept of fitness rather than disprove the concept of natural selection. Still, few would abandon the insights offered by the pithy phrase just because of this peculiar drawback.
I call such elastic and general notions as Darwinian selection or the Smoothing Out Metaphor, ones that can't really be disproved, sledgehammer explanations. They ex-plain broadly, in the manner that a sledgehammer knocks out dents. The sledgehammer will work on any dent, but not in the subtlest or most sophisticated way. This name reflects my belief in the intrinsic trade-off between generality and completeness. If an explanation packs a wallop, it may leave the finer detail work a mess of smaller dents and fatigued material. We would prefer an explanation that undoes disturbances in a way that most directly reverses the arrow of time and minimizes the "side effects" of the undoing treatment. Sledgehammers leave major side effects, and that is the main problem with them from the point of view of the Smoothing Out Metaphor for explanation. The greater the side effects, the worse the match between the explanation and that being explained. Perfect matches are impossible, just as the reversed film scenario is physically impossible without added energy, but presumably it is the goal of a "good" explanation to match as closely as possible. Again, we will see that side effects of undoing are a key component of the phenomenal world.
Other examples of prevalent sledgehammer explanations are:
1. The workings of the physical universe are mechanical.
2. The brain is a computer.
3. Sex drives are the key to understanding human psychology.
4. All human actions, even the seemingly altruistic, are motivated by self-interest and the pursuit of rewards.
5. The cream will rise to the top.
Each of these has tremendous power and can hardly be argued against, but each is also a gross reduction of a world of infinite variety and apparent subtlety.
As far as the Smoothing Out Metaphor is concerned, the part that flexes to fit the occasion, as fitness did in the case of Darwinian evolution, is disturbance. Any action can be considered to be a response to a disturbance. All explanations are certainly reactions in the sense that we would never use them if there were no phenomena impinging on our consciousnesses. We explain nothing if we experience nothing. The Smoothing Out Metaphor says that explanations are the equal and opposite reactions to actions taken against the state of the perceiver.
In defining sledgehammer explanations in terms of the Smoothing Out Metaphor and then citing the Smoothing Out Metaphor as an example of a sledgehammer explanation, we have once again encountered the problem of self-reference. Again we have to be aware of the paradoxical situation that comes up when we apply a concept to itself. In this case particularly, we have an especially troubling sort of self-reference, one that declares its own inadequacy. The scenario goes something like this:
a) The Smoothing Out Metaphor implies that irrefutable sledgehammer explanations mess up the details and have extreme side-effects.
b) The Smoothing Out Metaphor is a sledgehammer explanation for the nature of explanation.
c) Thus the Smoothing Out Metaphor messes up the details.
d) Therefore we cannot trust it.
We end up with a version of the Epimenides' Paradox: The Cretan says, "All Cretans are liars."
Can we believe a theory that points out its own weakness? This is only a problem if we accept the exalted status of true explanations implicit in a One-Way View of truth. If there are no monolithic truths but only relative ones, such a self-limiting quality may actually be a sign of consistency. Shouldn't we seek theories which immunize themselves against the destructive effects of the self-serving quality of explanation? It may seem that we should prefer a self-exemplifying theory of explanation. If, for example, I have a theory which says that all explanations are rationalizations of power relations, then belief in my theory ought, I suppose, to rationalize power relations or else there must be some limitation on the applicability of my theory.
But suppose, for example, we had a choice between these two competing theories:
1. All phenomena have reductionistic explanations.
2. There is no way to express absolute truths in words.
The latter may seem to be an argument against itself since its truth would contradict its content, while the former, in a sense, seems to argue for itself since it is a reductionistic explanation. I, for one, however, tend to find more (relative) truth in the latter. If we start from an assumption that our ability to explain is limited by the profoundly complex nature of the MT relationship, then we may want explanations which point out the problem by criticizing themselves. This Inverted Self-Reference Problem, as I call it, is very important, I think, and will be discussed (without resolution) in another chapter. In any event, the Both-and-Neither Model and Assumption Switching are creations designed to deal with the issue so that we don't have to choose between the two generalities offered above.
Self Preservation
It is often powerfully brought to our attention that people's explanations for the states of their lives, their positions in the world, and their responsibility for those states and positions are very often self-serving in the extreme (present company excepted, of course). Rich people, for example, are apt to believe in the necessity of unequal wealth, which serves to motivate the entrepreneurship that fuels prosperity for all. They are also more likely than poor people to believe that we live in a meritocracy where the cream will naturally rise to the top. The poor, on the contrary, are far more likely to portray themselves as victims of greed, bigotry and a rigid socio-economic structure. We've all had coworkers and friends who seem sincerely to believe that they are responsible for everything that goes well at their workplace and bear no responsibility for any of the screw-ups. Even this book presents evidence that I seem determined to rationalize my tendency toward lethargy and inaction/stasis. I would like to believe that "just being" is a legitimate kind of "doing." I would rather think of myself as a fantastically creative person whose creativity is severely blocked than think of myself as intrinsically uncreative. All of this leads naturally to a feeling that stasis is caused and that all that is required is for me to remove impediments to my natural creative tendency. I want to believe my anti-jump muscles are flexed rather than admit I can't jump.
The Smoothing Out Metaphor clearly places this self-preserving/self-validating role at center stage since it says that explanations undo what has been done to us. It would be easy therefore to equate this homeostatic role for explanation with the above stated Hobbesian, Darwinian, Marxian views which imply that all human actions, even ostensibly noble ones, are performed in the pursuit of a form of self-interest. A connection between Smoothing Out and these theories clearly exists, but we will see there are many distinctions as well. Darwinism, for one, would have this self-serving, self-preserving behavior arising from the blind process of natural selection, whereas the Smoothing Out Metaphor pushes back the source to the nature of consciousness and being itself. We will also see that the Both-and-Neither scenario and Assumption Switching will broaden this explanation concept to go beyond these more mechanistic theories of self-interest to a scheme which pictures selfishness and selflessness as related and complementary.
There is an intriguing fit between the Smoothing Out Metaphor of explanation-description-expression and some aspects of many psychotherapies. The idea of a talking cure -- where one alleviates emotional problems and re-integrates oneself essentially by talking about the problems with an analyst and placing them in a theoretical scheme -- sounds just like Smoothing Out. We undo traumatic disturbances by reliving and explaining them. To some extent the remedy consists merely in identifying the cause and allowing that knowledge to dissipate the problem. In this context, to explain is to exorcise the stresses which we cannot accommodate (make a part of the steady-state of the bubble), things we can't square with our self-image or our understanding of the world.
We can see the unconscious as a storehouse for disturbances which have not been smoothed out. To repress is to cover up but fail to smooth out, and the more dented we become, the more fragmented our behavior and the more confused our thought. Good explanations, ones that smooth out well, are seen as a necessity here.
The Smoothing Out Metaphor seems to imply a very non-standard view of perception. "The brain is a computer" is the metaphor that holds sway in current cognitive research. Thus the standard model sees perceptions as the processed end-products of inputs. Photons enter through the eye, stimulate sensors which send signals as if over wires to the CPU which processes the data and turns them into, for example, the Mona Lisa or whatever image is being observed. The missing link in this explanation is the moment where data becomes meaningful and coherent.
In my version of these events, where admittedly the role of neurons is obscure, we see the mind or self as a coherence or a whole to begin with. It is a homeostatic system like that which maintains body temperature. The mind possesses a steady-state which gets disturbed by influences that it actively and automatically seeks to damp out. Perception presumably takes place with the ex-planation, the smoothing out. Perhaps perception is the ex-planation.
The Smoothing Out and computer metaphors of perception are partially compatible. The fundamental difference comes in their respective treatments of wholeness. In one, wholeness is barely relevant and is manufactured by the brain; in the other, wholeness is inherent and essential. We might say that the Smoothing Out Metaphor treats consciousness as fundamental insofar as explaining implies consciousness, whereas the standard approach treats it as a product of computation (if it acknowledges consciousness at all).
Artificial intelligence researchers and science fiction writers have frequently observed that, in their model of thought, consciousness is completely unnecessary. We can theoretically reproduce human thought and behavior without it. If consciousness exists, and all but a lunatic fringe agree it does, it is an add-on, an epiphenomenon that probably serves no purpose.
Old Book: Chapter 6
It all begins with an idea.
Eye Rays
When I took my first physics course as a junior in high school, we studied the basics of optics. As a learning tool we used light ray diagrams, in which an eyeball in profile always appeared to represent the observer.
(diagram)
We were supposed to draw lines with arrows to show the path of photons as they, for example, reflected off a mirror and into the eye. At first, much to my teacher's amusement, some of my classmates and I insisted on drawing these rays pointing in the wrong direction, originating in the eye and going to the object rather than the other way around.
(diagram)
I later learned that my classmates and I were making the most natural of assumptions. Euclid himself created this postulate for his Optics: "The rays which come from our eyes travel in straight lines." Euclid, my classmates and I all seemed to feel that vision is an active thing, more like reaching out to objects than passively receiving them. Our eyes aren't just passive portals but send out "feelers" into the surroundings and bring back images in the manner that our hands bring food to our mouths. This naive version of vision occurs along a two-way channel—out and then in. The creators of Superman must have shared with us this misconception of vision because they endowed the Man of Steel with the rather peculiar capacity of X-ray Vision. The mere perception of light in the X-ray part of the electromagnetic spectrum is not enough. It also requires that he send out penetrating X-rays that somehow get reflected back. Thus X-ray Vision requires that same two-way channel. His Heat Vision gives an even more clear-cut example of rays coming out of the eyes.
I have read that a 19th century Native American tribe referred to cameras as "spirit catchers" and disdained being photographed. They reasoned, I presume, that cameras must have the power to reach out and grab some bit of the person's essence in order to produce a replica of that person. The subject of the photograph would be changed by the experience of being photographed. Likewise, the Eye-rays Model of vision seems to imply the creation of replicas in our minds using some of the real "stuff" of the observed thing that the rays have brought back.
You may have guessed that I don't think this is entirely a mistake.
Years ago a well-known parapsychological experiment was performed that claimed to show that subjects experienced a galvanic skin response when they were observed by others, even when the subjects were unaware they were being watched (Elemental Mind, p. 240: Braud, Shafer, and Andrews; more recently, Sheldrake has curated many more such experiments). A perceived thing "feels" and responds to being perceived as it is being "grabbed."
Receptivity and taking can be seen in this light as endpoints of a continuum of processes that can't really be distinguished from the outside. From the outside all we see is the movement. Passivity and action are orientations of language that are not reflected in the underlying reality. (Yin and yang are about descriptions not about that being described.)
It is easy and dangerous to go too far when applying this form of complementarity to human affairs. The concepts of co-dependence, passive-aggressive behavior and the "victim personality" recognize the validity of the active-passive switch in the descriptive realm. For example, it is possible to imagine that our victimhood is a kind of action for which we must share in the responsibility, if only slightly.
All events are interactions or exchanges. To influence is to be influenced. For every description of an event there is an equivalent and opposite complementary description. Another way to say this is that every event has both an active and a passive aspect.
Physicists tell us that all changes in physical systems happen as a result of the four known forces of nature—gravity, electro-magnetism, and the strong and weak nuclear forces. Each operates on the subatomic level, we are told, as the exchange of particles—gravitons, photons, gluons, and the W and Z bosons. We saw that our traditional idea of causation boils down to a coercive kind of force. At the level of these subatomic exchanges, force itself starts to look like a two-way street—more like communication and cooperation than like coercion. Action is always interaction.
How does a bolt of lightning know where to go, that it ought to hit the lightning rod rather than the roof? The explanation for the path a bolt will follow involves the principle of least action, an optimization law of great generality in physics. If, for example, at each moment during the flight of a projectile we calculate the difference between its kinetic and potential energies, the laws of mechanics guarantee that the sum of all those differences for the path followed will be smaller than that for any other path with the same endpoints. This is a remarkable fact that has given many scientists, notably Richard Feynman, the peculiar feeling that projectiles must be "trying" to travel in the most economical fashion. In the case of lightning at least, it is tempting to say that this feeling represents something real. There is a kind of communication involved. Many people have experienced a tingling or felt their hair stand on end just before a lightning strike. The lightning seems to be feeling out a path for itself before it actually cuts loose.
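The least-action claim about projectiles can be checked directly with a short computation. This is my own sketch with made-up numbers: we discretize a two-second flight under gravity, sum the difference of kinetic and potential energy along the true parabolic path, and compare that sum with one for a perturbed path sharing the same endpoints.

```python
import math

# My own illustration (arbitrary numbers): the true parabolic path between
# two fixed endpoints yields a smaller summed (kinetic - potential) energy
# than a nearby path with the same endpoints.

g, m = 9.8, 1.0          # gravity, mass
T, N = 2.0, 1000         # flight time, number of time steps
dt = T / N
ts = [i * dt for i in range(N + 1)]

def action(path):
    """Discrete action: sum over steps of (KE - PE) * dt."""
    S = 0.0
    for i in range(N):
        v = (path[i + 1] - path[i]) / dt     # velocity on this step
        y = (path[i + 1] + path[i]) / 2      # midpoint height
        S += (0.5 * m * v * v - m * g * y) * dt
    return S

# True path from y=0 back to y=0 in time T: y = (gT/2) t - g t^2 / 2
true_path = [0.5 * g * T * t - 0.5 * g * t * t for t in ts]

# Perturbed path: same endpoints, plus a bump vanishing at t=0 and t=T
bumped = [y + 0.5 * math.sin(math.pi * t / T) for y, t in zip(true_path, ts)]

print(action(true_path), action(bumped))
```

Any smooth bump that vanishes at the endpoints raises the action here, since the potential is linear in height and the extra kinetic term is always positive; the true path really is the cheapest.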
I have long been a recreational volleyball player. One of the many great pleasures of that sport is successfully digging an opponent's spike. When I have felt particularly in the flow, I have frequently experienced a feeling of certitude that I knew exactly where the spike was going to go. I could feel my way to the spot. There are certainly plenty of subtle physical signals a spiker gives to indicate where she or he will hit the ball, and there is also plenty of room here for delusion, but since I feel no compulsion to seek the simplest explanation for things that happen, I am happy to contemplate the possibility that I have tapped into that two-way channel of causation. This two-way channel seems capable of embodying both Coercive and Passive Causation, both past-pushing and future-pulling causation.
An unpopular but not disproved interpretation of quantum theory also involves the idea of particles feeling their way around. In the 1920s, Louis de Broglie, the discoverer of the wave nature of the electron, proposed the existence of what were called pilot waves.... Much later David Bohm, who along with Einstein did not believe in quantum indeterminacy and the essentially probabilistic nature of quantum reality, revived this idea of pilot waves under the new name of the quantum potential....
The word "perceive" itself literally means "to thoroughly take" suggesting again that what we feel we are doing when we sense the world is going and getting. The Eye-rays Model, right or wrong, suggests a complementary description that views what we normally think of as passive in an active light, that switches our assumption about the passivity of perception with an assumption that we take part in the process and change what we see. This switch makes eye-rays an attractive image for me to explore. We have looked briefly already at exchanging holism for partism, flux for permanence, anti-GLOI for GLOI, coldness for heat and the pull of the future for the push of the past. We have said that both sides of these dichotomies must in their turn be seen as an assumed premise to gain the fullest descriptive power but that neither side nor any combination captures the essence of the territory. Now, we will look at the implications of active perception.
Of course, there are actions involved in our usual idea of seeing or perceiving. Cognitive theorists tell us, for example, that the brain filters what we see, even constructs it to a large extent by filling in expected but missing data. We divide perception into two parts, the first of which, sensing, is perfectly passive, while the other part, processing, is thoroughly active. Our minds can seek perceptions as well. We can concentrate our attention, turn our heads toward a sound, ask someone a question or go skydiving (presumably seeking the "rush" of perceptions), but these are not the kinds of reaching out I'm talking about. Eye-rays are active in a different and more intrinsic sense.
Dozens of pop science books by reputable, even eminent scientists lend a degree of support to the notion of active perception. Certain paradoxes in quantum mechanics have prompted these writers to speculate that the act of observing somehow helps to determine the outcome of quantum events by "collapsing the wave function." Schrödinger's probability wave equation, far from being a statistical approximation of reality as it was once thought to be, seems to represent the actual probabilistic nature of quantum reality. Reality has a wave nature but the waves are waves of probability. The equation seems to imply that Schrödinger's celebrated cat is both alive and dead until a measurement is taken, until an observer intervenes and the exchange takes place.
The idea that consciousness "interferes" with reality was first put forth by the brilliant Hungarian-American mathematician John Von Neumann in his seminal book on the mathematical foundation of quantum theory. "Quote"
Most scientists would not choose a position like observer participation if there were any reasonable alternatives. That's because observer effects tend to contradict or at least complicate an assumption upon which the whole validity of experimental science rests—the assumption that there exists an independent world out there that can be measured. Scientists would like to believe that there is a definite cut off between the "in here" and the "out there". Where is science to find impartial and objective reality if we can't help but meddle with everything we observe, if we spill over into it? There would be nothing that would count as evidence for conventional physical laws if the scientists themselves participated in manufacturing the evidence.
Consciousness itself has always been looked upon with great skepticism by the scientific establishment because it seems to explain away too many difficulties in one fell swoop. The presence of consciousness is also notoriously hard to test for or even define. We know we have it, but we can't say what it is. When its existence is acknowledged at all in hard science circles, consciousness is usually assumed to be an essentially materialistic process which somehow consists in simpler materialistic interactions, an epiphenomenon to more fundamental activity. But if consciousness plays a part in creating the world in the definite form that we know, then it has to be marked as intrinsic and fundamental.
Could it be true that consciousnesses do somehow come out into the world and grab a hold of things as the Eye-Rays and collapsing wave images suggest? If this were the case, then perhaps the particular way that the metaphorical reaching hand takes hold of the object would affect what was brought back—the dead cat or the live one. A particular consciousness would have a characteristic effect. An alien perceiver with a different "grip" of consciousness may bring back a different result. In the parlance of physics they may take a different measurement.
I have already mentioned the idea of reflection as an example of de-scription. I want to show in what sense that is the case. When we throw a ball toward a wall at an oblique angle, assuming there is no spin on the ball, it will bounce off at an angle equal to the angle of incidence. This action is summarized as a rule that every beginning physics student learns, "The angle of incidence equals the angle of reflection." Light seems to bounce off of mirrors following the same rule, which, in this case can be mathematically justified by the optimization rule, the Principle of Least Time. That is, of all paths from A to B that bounce off a mirror somewhere, the one which will get from A to B fastest will be the one that follows the rule incidence = reflection.
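A quick numerical check of this claim (my own sketch; the positions of A and B are made up): scanning candidate bounce points along the mirror for the shortest total path, the winner is exactly the point where the angle of incidence equals the angle of reflection.

```python
import math

# My own sketch (positions of A and B are hypothetical): find the bounce
# point on a mirror along the x-axis that minimizes the travel time
# (equivalently, the path length) from A down to the mirror and up to B.

A = (0.0, 3.0)   # source
B = (10.0, 2.0)  # destination

def path_length(x):
    """Length of the path A -> (x, 0) -> B."""
    return math.hypot(x - A[0], A[1]) + math.hypot(B[0] - x, B[1])

# Scan candidate bounce points on [0, 10] at 0.001 resolution
best_x = min((i / 1000 for i in range(10001)), key=path_length)

# Angles measured from the mirror's normal at the bounce point
incidence = math.atan2(best_x - A[0], A[1])
reflection = math.atan2(B[0] - best_x, B[1])
print(best_x, math.degrees(incidence), math.degrees(reflection))
```

For these numbers the minimum lands at x = 6, where both angles come out near 63.4 degrees; moving the bounce point either way lengthens the path and unbalances the angles.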
It seems natural to assume that photons bounce like balls, but that simple image is contradicted by the current understanding afforded by quantum mechanics. Richard Feynman's wonderfully lucid lectures from his book QED: the Strange Theory of Light and Matter describe how we can assign a probability to the possibility that any given photon may follow fantastically more circuitous paths than those implied by simple "angle of incidence = angle of reflection." For every point on the mirror, for example, there is a certain vector whose length indicates the probability that the photon will really strike there. But after summing these vectors for the various paths, the ones for most locations will cancel with others, and we get a very high probability that the photon will indeed travel at or very near this simple trajectory.
The reality of these alternative paths is revealed by an experiment. If we carefully cover up bits of the mirror at regular intervals, we can make it so that the probabilities for certain paths will not be canceled out. The path of a particular photon then becomes rather random, and a beam consisting of scads of photons will actually reflect off several different spots on the mirror rather than the one simple spot. Amazing. I said this perfect cancellation stuff was a tricky business. Causing gets mixed with holding back. Anti-bounce muscles are called upon. Covering up a part of the mirror remote from the predicted region of reflection causes changes in the status of that region as favored.
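Feynman's arrow-summing picture, and the covered-mirror experiment, can be imitated in a few lines. This is my own toy construction with arbitrary dimensions: each mirror point contributes a unit phasor whose phase is proportional to the total path length through it. Summed over the whole mirror, contributions far from the specular point cancel and the central region dominates; masking the strips whose phasors point "the wrong way" stops that cancellation, and the summed amplitude grows much larger.

```python
import cmath
import math

# Toy Feynman-style phasor sum (my own construction, arbitrary units):
# each mirror point contributes a unit phasor with phase proportional
# to the path length A -> (x, 0) -> B.

A, B = (0.0, 1.0), (2.0, 1.0)  # source and detector above a mirror on the x-axis
k = 200.0                       # wavenumber, arbitrary units

def phase(x):
    """Phase ~ k * (path length A -> (x, 0) -> B)."""
    return k * (math.hypot(x - A[0], A[1]) + math.hypot(B[0] - x, B[1]))

xs = [i / 1000 for i in range(2001)]  # mirror points in [0, 2]

# Summing phasors over the whole mirror: off-specular parts mostly cancel.
full = abs(sum(cmath.exp(1j * phase(x)) for x in xs))

# "Covering up bits of the mirror at regular intervals": keep only the
# points whose phasors have a positive real part, so cancellation fails.
grating = abs(sum(cmath.exp(1j * phase(x)) for x in xs if math.cos(phase(x)) > 0))

print(full, grating)
```

With the opposing strips blanked out, the surviving arrows all lean the same way and add up, just as in the real covered-mirror (diffraction grating) experiment.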
What's more, these photons don't really bounce at all. They get absorbed by the mirror. This absorption then raises the energy level of the atoms in the mirror which thus emit photons of their own as the unstable states collapse to their lower energy levels. These photons don't come out in the predicted direction particularly but in exactly such a way that these probabilities of unexpected paths get canceled out. Simple reflection turns out to be an immensely complex process, a grab and throw rather than a bounce.
This grab and throw looks a lot like ex-planation (Bubble and Beacon models) and suggests again that an analog of our consciousness acts even on the atomic level...
Putting some form of consciousness in at the bottom of our explanations rather than adding it at the top does, as mentioned before, explain a lot of otherwise hard problems, and we must be careful how we tread here, but, ultimately, the difficulty of those problems suggests that something more is needed. There is no reason to exclude consciousness as a factor. I think, further, that the Smoothing Out Metaphor and the active/passive switch it involves make the assumption of consciousness seem rather plausible. That assumption itself can be framed as a switch from the dominant scientific view: matter is prior to consciousness vs. consciousness is prior to matter. Each of these is no more than a starting place, not something that could be proved or disproved in some neutral system. We have seen that logic goes with creating such artificial hierarchies. The meta-system in which our logic is harder to apply says that consciousness and matter arise mutually.
The Eye-Rays metaphor has profound consequences for any model of perception which takes it into account. This metaphor will have important implications for revising the Smoothing Out Metaphor. The standard model of perception comes out of an image of the mind or brain as a computer or information processor. This model sees perception as inputs of inert data, as, for example, photons reflected off an object's surface or the chemical activation of sensors in the nose, not as stuff of the perceived object itself. What we eventually become conscious of is far removed from the thing itself. The reflected photons strike the retina, which translates the phenomena into electrical neuronal signals, which are then processed and finally consciously experienced. Most scientists would say that objects do not have odors, only our minds do. Chemicals emanating from the object produce reactions in us which our minds somehow experience as a smell. Thus the perception itself consists of our stuff rather than the stuff of the observed thing.
__________________________________
All the news that ain't fit to print
Cameras, of course, aren't conscious perceivers. They certainly don't reach out. And yet they succeed perfectly well in creating images at least something like the images our eyes provide for us.
Consideration of the grabbing metaphor of perception raises the question "If all perceptions involve exchanges, then might not all physical exchanges involve perception or consciousness?" Perception may not begin with sense organs and brains but may be in the very nature of material being. The sense organs and brains and memory may have evolved as enhancements to this intrinsic property. In putting forward the smoothing out model of explanation, I have already suggested that mind is a holistic affair rather than a mechanistic information processor. Minds have as much in common with an inflated ball as with a Macintosh.
{Pantheism}
According to the smoothing out theory of explanation from the last chapter, we have been put slightly out of kilter by the incoming stuff and will have to cancel it out or accommodate the change in order to re-achieve homeostatic balance.
This preliminary version assumes the passivity of perception. Thus explanations always occurred as reactions. We will see that active perception along a two-way channel will alter and broaden the notion of explanation. {+ pages}
Up to now I have used smoothing out and undoing interchangeably, but that has only been in the service of simplicity. Rather than undoing or rejecting influence to maintain one's current state, it is equally feasible to incorporate or accommodate that influence to create a new but equally "smoothed out" steady state.
The question is raised: "How do mere words undo actual influences?" This undoing stuff sounds more like shamanism than science. Reminds me of "In the beginning there was the word." That ultimate mystery of the relationship between the grammar of language and the grammar of existence continues to rear its ugly head.
"Smoothing out" doesn't always mean "undoing"
(We then use explanations in a variety of ways. Explanations are undoings but become models for doing, to impress our influence upon the world. We use explanations to communicate! This is a big aspect I have ignored. Communication is proactive rather than reactive. I need to get out the complementarity of rejecting influence and influencing early on. The continuum of undoing and accommodating. I'm annoyed that it's not coming across easily.)
Functions and their inverses.
There is the sense that our most active state is opening ourselves up to the flow. The ancient world of spirits and gods that animated us has been replaced with the world of outward-directed selves. Western action vs. Eastern movement with the Tao. We are in a position to make a new synthesis.
Tendency-intentionality. Influence-information (data with an intent-tendency vector). Here's a clue: negenergy = concentration, distilling. We recognize passive causation but only at the psychological rather than the ontological level. To give is to receive. "Indian giver" expresses the insight that gifts always have strings attached.
determination of radio station
(suggestion of perpendicularity: influence-effect complementarity)
[I would suggest that the mind image created in this process is the de-scription. The image is the "inverse" of the influence. There is an extent to which the object is sending out stuff and we are sucking in stuff as we send out our own. This is the essence of my give and take, foreground-background, two-way thang.] (What does this mean about sense organs? Am I setting up a distinction between some kind of direct viewing and ordinary sensation?)
Clue: negenergy is related to consciousness, self-hood, concentration.
Pilot waves, advanced waves, two-way nature of interaction (see advanced waves in Genius). I'm now seeing "pre-events" bouncing back and forth as preliminary to the manifest event. This might also be the chapter to raise the idea of analog computation, how we know how hard to throw a basketball to reach the rim. If it were essentially computational rather than "real," what would the program look like? Intuitively, it seems unlikely that the brain calculates a signal strength and sends it to the biceps. Two models -- computation vs. "living it" -- both seem preposterous. Need a compromise like Bergson's "images."
principle of least action--quotes from Genius-- "how does the ball know what path will minimize... " leads into attraction and the future pulling the past. Reminds me of Bergson's image of the elan vital pushing (nondeterministically) like a jet rather than pulling events like a rope toward a definite end (point omega)
I had a dream-- the moment of decision, intentionality particle and advanced waves.
Emphasize that, at least at the start, this is a toy model. not intended to fit specific or actual cases but to seek maximal generality.
One can think of this by analogy to gravitational systems, which can only be understood in terms of both gravity and inertia. Einstein's famous thought experiment showed that a person in a room would experience the same physical effects whether the room was resting comfortably on the Earth or being accelerated through space at 32 feet per second per second. Thus there is a kind of equivalence between inertia and gravity. (more)
The passive side of this pair, inertia, can be associated with persistence (of motion), while gravity is associated with change (or acceleration). In the system of chapter hhh, inertia is the action of the gravitational field of the object on itself. Can we always associate passivity with action upon the self? (this is the subject of another chapter that I haven't a clue how to write)
More MT Noodling
The most general overarching theme of my philosophical speculations is:
THE MAP IS NOT THE TERRITORY, EXCEPT WHEN IT IS.
The hedging here is absolutely fundamental. That is, I'm not just being wishy-washy. The relationship between what is and what we can say about what is seems like it should be straightforward, but it isn't. It's recursive, mysterious, invertedly self-referential (like x=-1/x) and, I think, unresolvable...except when it is.
In theory, when we apply logic to our most basic Western assumptions about the nature of reality (which includes the legitimacy of logic itself), we must conclude that the territories and their corresponding maps ought to live in very separate realms. On the other hand, in everyday practice, we act as if there is no distinction between our explanations, on the one hand, and truths about reality on the other. That's quite a gap, and, I propose, neither point of view -- independence nor correspondence -- is remotely the case.
Presumably, we are animals who evolved words as tokens for objects in order to communicate and to reason. One tongue click means look here, a rolling r sound expresses horniness, a double grunt means hunger, etc. And grammars evolved to string these tokens together to create and/or express something we might reasonably consider a thought. How does such a thing lead to the Ten Commandments or a love sonnet or Newton's law of gravitation, let alone the certainty of a mathematical theorem? The mind boggles at all the leaps of complexity and deep ontological connection required to get from such a start to where language and culture and Knowledge are now. From the inside, it's very hard to tell; we are truly beguiled by the apparent validity of the things we say. Either our explanations are right or some other better explanations are. But, in fact, there's no a priori reason to think the world is susceptible to such linguistic or theoretical reduction. What would it mean if it were? Nouns, verbs, adjectives, and grammar have no correlates in the world an sich, unless God made it so. Theories simply can't be true in the way we'd like to think they are. Still, they work so darn well... They satisfy us. Maybe our modern conception of, say, weather is no more satisfying to us than our earlier ideas that the gods were angry or sad or happy, but that seemingly doesn't detract from our current level of satisfaction.
"It's more than a half mile walk from the door of the train station to the door of the Surinamese restaurant" no problem
"The apple fell because of gravity" big problem
The map-territory distinction isn't as significant for factual descriptions (like walking distances) as it is for explanations (like scientific theories). Things don't happen because of their explanations. Apples don't fall because of the law of gravitation or even because of gravity itself. Gravity is what we call phenomena like the falling of apples. And the law of gravity is a way of making sense of that appellation(!).
We might say that the apple falls to Earth because of gravity, but that's a confusion of the map for the territory. Gravity isn't the reason we don't float off into space; it's a way for us to distinguish between the counterfactual and the factual.
We might feel that the perfect simulation of reality which we might someday be able to construct from computers programmed with the laws of nature is as real as nature itself, but no simulation can create actual gravity (or any other physical phenomena). We wouldn't injure ourselves by falling in the simulation.
I'm stuck in the same place I always am. In trying to explain what's wrong with the idea of explanation, I'm caught in a self-referential maelstrom that careens between tautology and paradox. I can wave and point, but the fish can't explain what water is (until perhaps it's flailing and dying on the dock). We who are chained to the cave watching the shadows cast on the wall can't tell they are just shadows until we are somehow liberated from the cave (flailing and dying).
In the last few years, I've started to feel that a dim sort of light can be shed on the situation by contrasting statistical knowledge to that attained or understood through explanation and narrative. My thoughts on Fourier's Theorem and Ptolemaic thinking (elsewhere) highlight the arbitrariness of our explanations, but the possible insight we might get from that is contingent on a sort of legitimacy of the equivalence I posit between our descriptions-explanations-expressions and literal de-scribing, ex-plaining, and ex-pressing -- the idea that thinking, from a particular perspective, is merely about undoing the effect of the world on us. Cogitation is a homeostatic process.
Except when it isn't... There are at least two somewhat related proposals that I've talked about before that could mitigate the strangeness of the otherwise unreasonable connection between maps and territories or between words and truth. One is Biblical, I'm afraid: In the beginning was the Word. If words are prior to the world, if God saying "Let there be light" kickstarted everything, then of course there's a divine connection between maps and territories. This lovely resolution of my problem isn't quite enough, however, to make me believe in God. That would raise a thousand more unanswerable questions! Still it also suggests a neat way that maps and territories might have co-evolved -- one that I am more apt to accept. The key concept is a sort of monadology. What if the fundamental aspects of reality actually were, in a way, particles and fields? Not the dead particles and fields of physics, but beings or consciousness "bubbles" and their interactions. Yes, we're headed back into the Bubble & Beacon realm! If there really is nothing but a plenum (love that word!) of conscious selves, including subselves and superselves, and the chatter they produce with their streams of consciousness -- descriptions, explanations, narratives, and proclamations -- then it would be no surprise that the world would slowly come to resemble that chatter and those explanations. Influence is sort of the point of the chatter, and influence implies directed change and directed stasis. One missing ingredient in the above recipe is grammar. If descriptions are fundamental so must grammar be fundamental. That gives a whole new spin to Chomsky's Universal Grammar. And lately I've tried referring to this universal grammar as LOGIC=LOGOS. Logos is translated as Word in the biblical phrase cited above, but following my own take on the ancient Greek philosophy in which Logos is first used, Logos meant something more like the inherent intelligence of reality, a tendency toward order perhaps. 
If it's reasonable to say that consciousness might be fundamental, is it much of a leap to say that some kind of ordering or intelligence might also be? Can there be consciousness without intelligence in the broadest sense? What is this broad sense of intelligence to which I refer? Not entirely sure, but I'm calling whatever it is LOGIC=LOGOS.
If you perform my favorite assumption switch from "Nothing changes without force" to "All possibilities have a tendency to manifest themselves at once unless prevented from doing so," then intelligence such as our own doesn't have to be the product of computation in brains but blossoms forth when conditions, including especially brain conditions, are right. That's one way that LOGIC=LOGOS. Another is simply that logic, which appears to be an aspect of maps, seems to always apply in the territory itself. That is, logic is an aspect of everything reasonable. Duh!
I might be accused of a sort of mysticism (especially in this LOGIC=LOGOS thing), and that's a reasonable assessment. But there's little in my thinking that suggests the existence of occult and unfathomable powers. The word Mysterianism has been used to describe the idea that our minds are incapable of making sense of the nature of consciousness in particular. I would extend the list of things that our minds are incapable of making sense of to just about everything. I'm a mysterian about almost everything, except when I'm not.
Small Plates
This is a collection of short things that contain a morsel or two:
_________________________________
Epistemological Roundup
I want to try to lay out my epistemological stance as clearly as I can, mostly so that I understand it better myself. First of all, the whole matter is weird -- "queerer than you can imagine." And it isn't going to get less weird. The weirdness derives partly but not entirely from the following built-in contradiction: I want to be able to say that no description or explanation, neither in words nor mathematical terms, can possibly make a perfect or indubitable match with the world, but, of course, this very statement purports to be an explanation or verbal utterance of exactly the disallowed kind! Never ever generalize! The existence of problems of self-reference points to the boundaries of logical and explanatory applicability.
How can the world follow laws? What could that "following" possibly entail? On the other hand, how can we avoid expressing our objections to that possibility as anything but a law? I have to perpetually remind myself: The world simply is -- paying no mind to our attempts to characterize it.
The very possibility of human knowledge appears to me on its face to be a highly dubious proposition, and yet we have mountains and mountains of stuff that seems to be knowledge, that seems to fill the bill quite well. I hope to develop here what I mean by the preceding sentence without resorting to hand-waving declarations, but at the moment it's hard to put my deep epistemological skepticism across in any enlightening way. Factual knowledge vs. explanatory knowledge. I accept that which falls completely in the former camp and reject stuff clearly in the latter camp. What about everything in between?
Science and measurement and repeatable predictability may be the key to spanning the chasm between a priori unknowability and apparent knowledge. Restricting our utterances to summaries of statistical and probabilistic regularities might also allow humans to bridge the divide. What passes for human knowledge seems to me to be essentially narrative in nature, while the territory isn't narrative at all. Statistical regularity seems to match the world much better than umpteen paragraphs of explanation. But, even after all the mitigating apologia, I'm still going to say that final interpretations and extensions of those measurements are necessarily incomplete and one-sided. That is, it's ultimately only metaphorical or subjective rather than unambiguous and objective to say anything like "Matter consists of 11-dimensional super strings" or "All spider behavior is hardwired in its tiny brain" or "Democracy is the best form of government" or "I believe in epistemological egalitarianism."
Ideas about this stuff have been rumbling around in my head for about 50 years, and my almost visceral desire is to achieve some sort of final disposition on the subject. Per Kurt Vonnegut, just as "the bird got to land," "(hu)man got to tell himself (s)he understand." So, in my old age, I seem to be finally settling on a kind of resolution of the issue, which, mundanely, focuses on a metaphorical extension of Fourier's Theorem. My "answer" is a bit like "Mentation is effective because just about anything would be." "All knowledge is Ptolemaic, and that's okay by an extension of Fourier's Theorem."
Unless you believe that "in the beginning was the Word," you probably believe that the human mind evolved from creature minds incapable of knowledge or even insight or opinion as we understand those words -- certainly if you carry the familial line back far enough (to, say, spiders). Language, for example, only arrived on the scene rather late. We probably also believe that knowledge, insight, and opinion are expressible through linguistic constructions (and not expressible in any other way). But human language is such a particular and seemingly arbitrary contraption. Had language evolved in sea creatures rather than animals living on the savannas, might not the whole structure of the particulars have been very different? Does the elusive multidimensional world where parts are ultimately inseparable and the events simultaneous somehow really reduce to nouns and verbs, and can a truth really be expressed in the one-dimensional, syntactical and sequential stringing together of these phonemes? That would be odd indeed. The world itself would have to be language-like in some very deep way. You'd probably need to have a sort of God forcing that to be true: "In the beginning was the Word," pretty near. The map and the territory would have to share deep roots. BTW, I believe they do share deep roots, in a sense I discuss in other places, but still I have to aver that the map is not the territory. The implications of this assertion are huge! And totally unacceptable! The hugeness derives from the fact that everything we think we know would suddenly be grossly limited and diminished. The unacceptability is that we could no longer tell ourselves we understand. And we need to, right Kurt? "We now know the general structure of what a correct theory of human consciousness would look like." Good gawd!
______________________________________________________________________________________________
Double Dual
Talking on the phone: We speak, creating vibrations of varying pressure in the air that spread out spherically. The vibrating air strikes a diaphragm in the mouthpiece's microphone and makes it vibrate. A magnet attached to the center of the diaphragm is thus set in motion, producing an electromagnetic wave. This sets up a varying current in an attached wire, which by various means (analog land line, amplification, digital cell transmission, etc.) travels to another phone. (This is the dual of the speech.) The varying current in the second phone creates a magnetic field that moves a diaphragm in the second phone. The diaphragm moves the air that creates a sound in the listener's ear. (This is the double dual.) The key here is this extraordinary relationship between electricity and magnetism (and between sound and vibration). The mic/speaker makes a signal and then another speaker/mic makes a sound. One disturbance creates the other (or is encoded in it) which recreates the other ad infinitum.
sound wave -> jiggled magnet/diaphragm -> e-m wave -> jiggled magnet -> sound wave. Sounds are detected and produced by the same apparatus -- a diaphragm (or magnet on a membrane). This diaphragm setup is a self-inverting function which is rather remarkable and metaphorically significant to me.
Now, let's push the endpoints of the communication process further back at both ends -- into the heads of people. One has an idea to express or communicate. That idea (or chunk of meaning) presumably must be translated into language and spoken (on one end) and heard and finally translated back into meaning (on the other end). The brain device, if you will, that turns meaning into language doesn't seem necessarily to be identical to the one that reverses the process (as in the diaphragm setup), but it certainly seems to be closely tied. From a mechanical point of view, how could it be? They are doing different things. Stroke victims can seemingly possess one but not the other. On the other hand, one without the other is useless; the two abilities must have co-developed. This translation (encode and decode) system may be metaphorically linked to the electromagnetic, self-inverting phone apparatus.
Meaning in head 1 -> Chomskian encoding apparatus ->linguistic expression -> emitted signal (spoken words) -> received signal in head 2 -> decoding apparatus -> meaning in head 2.
The metaphor suggests that the encoding and decoding apparatuses ARE the same thing and that encoding and decoding are a self-inverting function. Apply it once and meaning becomes words, but apply it to words and the meaning comes out again -- the dual and the double dual.
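The self-inverting idea can be made concrete with a toy model. The XOR "encoder" here is purely illustrative, an assumption of the sketch, not a claim about any actual brain or phone machinery; it just exhibits a function that is its own inverse:

```python
def toggle_code(message, key=21):
    """A toy involution: XOR each character code with a fixed key.
    Applying it once turns 'meaning' into signal; applying the very
    same function again turns the signal back into the meaning."""
    return ''.join(chr(ord(c) ^ key) for c in message)

signal = toggle_code("meaning")   # the dual
restored = toggle_code(signal)    # the double dual
assert restored == "meaning"
```

One apparatus does both jobs, just as the diaphragm both detects and produces sound.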
Not really sure what my point is, but I'm pretty sure I have one. My World of Describers model of experience has a similar self-inversion. The deformation of the bubble (experience) is counteracted by de-scription that undoes the deformation and produces a seeming return to the steady state, perhaps accompanied by consciousness. Description is to consciousness as electrical wave is to magnetic wave as translating from meaning to signal is to translating from signal to meaning. This doesn't quite make sense, but the right metaphor is in there somewhere.
Meaning certainly goes with consciousness, and signal seems to go with de-scription/ex-planation.
Things act on themselves in a categorically different way than they act on everything else.
Recursiveness is so important and so resists narrative description/understanding
_______________________________________________________________________________________
Bottom Lines
Look, philosophical knowledge is an oxymoron, okay? I guess in one very real way, the world simply is: "It" is oblivious to our crazy attempts to epitomize it or even just make a truly general statement. In our attempts to make maps, we find that things are way too self-referential, recursively generated, arbitrarily arranged, and "queerer than we can imagine" to come to any final conclusions. Linear subject-object reductions can't contain it all. Still, we carry on and say what we can in the face of these obstacles. We employ epicycles on our epicycles and sometimes achieve impressive results (for a bunch of apes).
At the bottom of my humble philosophical project -- to find things we can reasonably do -- there's insight cultivation and assumption-switching. The assumption-switching part acts like lateral thinking to help us unblock the anti-jump muscles to generate those insights. We exploit any cracks we find in the cosmic egg, any metaphorical resemblances we find in the world, then let go and move gently down the stream -- rather than grasping the metaphor ever more firmly and bludgeoning the world with that sledgehammer until the world surrenders. The best we can hope for, I posit, is the loose and freewheeling superimposition of these mini-revelations, hoping that some may be less mini than the others.
Before moving any further, I ought to give examples of assumption switches:
____________________
•Things stay the same unless something causes them to change. vs. Things change unless they are prevented from doing so. Or: everything is trying to happen at once but mostly failing.
•Events in the present are determined by events in the past. vs. The state of a system is attracted toward certain pre-existent preferred future states (states of least energy, etc.)
•Things set processes in motion vs. Things are nothing but stable processes.
•The deepest background of physical space is a passive void in which things and processes play out their interactions. vs. Space is a plenum which is inseparable from the phenomena of the world.
•Behind everything is nothingness. When we remove the things of the world, nothingness remains. vs. There is no completely consistent way to represent absence. Nothingness is necessarily part of maps but not part of the territory. In other words, there is no such thing as nothingness. Duh.
•Every event A is caused by some set of events A'. vs. Coincident events, A and A', arise mutually.
•Perception is the passive reception of a separate outside world. vs. Conscious perceptions are exchanges.
•Consciousness is an epiphenomenon of materialistic processes. vs. Matter is all information and influence, and thus consciousness of a type is a fundamental aspect of all events and is in on the ground floor of existence.
•To understand a process or an object look to its components. vs. To understand a process look to the wholes of which it is part. That is, look at its place in its context.
•Forms perpetuate themselves through competition. vs. Forms perpetuate themselves by becoming indispensable parts of eco-systems.
•All but the most fundamental things are made of more primitive components. vs. Things participate in their own development.
•If a statement is true, its opposite must be false. vs. Since the world is too rich to be subsumed by any single formal theory, the fullest description of the world involves complementary theories with contradictory assumptions (including the superposition of the two theories generated from these contradictory assumptions). If this statement constitutes a formal theory, then it can be applied to itself repeatedly, implying a very messy world indeed.
___________________
Ultimately, nothing should be immune to an assumption switch, but in expressing my humanness I'm sure I've exempted many things. One that sticks out at the moment is the finality of my attachment to "The Map is not the Territory" or "The world simply is." Of course one can and must look at alternatives:
•The map is loosely tied to the territory (via Ptolemaic theoretical constructs)
•The map is tightly tied to the territory (via ever-true, evergreen theoretical constructs)
•The map is the territory (via mutual creation by God a la Let there be light)
•The world exists independently of the maps we have made to describe it. vs. The world is part of the process of description and thus consists in maps of a sort.
If one of the most fundamental activities of existence is description/explanation, as I claim, and the residues and echoes and imprints and fossilized remains of those living maps are floating around everywhere among the monads, then they might themselves constitute a big part of the dark matter that's out there. Mapmaking becomes territory-making. Does this mean that the world can be molded to conform to our beliefs by making conducive maps? Well, sure. Definitely in the lowercase sense, but it surely doesn't mean that we can make of the world what we will. The world is our oyster but not our indentured servant.
What do we want from our minds? Maybe insights (A) plus the tools to exploit those insights (B).
A. Find or cook up correspondences, connections, metaphorical relationships. "This situation is kind of like that situation. In the old situation my first thought was to try this procedure. Is there an identical or cognate procedure to carry out here?"
B. Exploit the insights to get a sense of confidence and familiarity and predictability in some context. Contexts like
i. interpersonal relationships.
ii. engineering problems
iii. public policy
iv. cooking
v. growing crops/hunting prey
Another bottom line for me, it seems, that provides a background for meaning skepticism, is the fact of evolution. Earlier I referred to us as apes, and I find it impossible to question the idea that humans and thus human minds and human knowledge come from relatively humble origins and developed through a haphazard, historical process. The idea of knowing anything really is totally an act of hubris unless you equally ascribe that knowing to fish, insects, bacteria. By "knowing," I mean having final and correct explanations for things -- things like the significance of clouds or how to manage a baseball team or that democracy is the fairest and best form of governance.
________________________________________________________________________________________
The Multiplication Rule of Argumentation
In my view, the assumptions that form the basis of all logical arguments ought not to be fixed. No set of assumptions is correct or even ideal; each might accompany a sort of insight into a situation which has a certain degree of legitimacy or validity -- never 100% (percentages don't really apply). In this view, by the way, logic itself is thought to be perfectly legitimate -- except under certain situations of self-reference -- although that fact is a profound MT mystery that I won't try to address here (see LOGIC=LOGOS). Perfectly legitimate means that the output of a logical operation is just as valid as the input -- logic preserves validity. Except in the way I'm about to outline.
To refresh your memory:
If 1/2 the cards in a deck are red, and 3/13 of the cards are picture cards, then the probability that a randomly drawn card is a red AND a picture card is 1/2 x 3/13 = 3/26. In general, if events A and B happen independently of each other then:
P(A and B)= P(A) x P(B)
Now, suppose you have a very good set of premises where each such premise is, say, 98% legitimate. I don't think it's actually possible to correctly assign a number like that, but I will do so for the sake of illustration. In any linear logical argument, one must, between the steps of pure logic, repeatedly invoke one of the premises or some lemma or corollary already arrived at. My informal claim is that each such invocation multiplies the ultimate validity or certainty of the argument by .98 or less. After n such steps, the validity of the nth statement is .98 to the nth power. If n is even just 20, validity is down to a mere .667. After 50 appeals to premises (not a particularly long argument), the validity would be .364. I hope you see the parallel to the AND rule above. Iteration of imperfect assumptions amplifies those imperfections according to a simple and standard law of probability. Premise A (.98) along with lemma B (.92) imply theorem C with validity .98 x .92 = .9016. That is, the probability that both A and B are true (or apply perfectly, in this case) is .98 x .92 = .9016.
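The decay is easy to watch numerically. The .98 and .92 figures are, as said, purely illustrative stand-ins for validities that can't really be assigned numbers:

```python
def argument_validity(premise_validities):
    """Multiply together the validity of every premise invocation,
    on the model of P(A and B) = P(A) * P(B) for independent events."""
    v = 1.0
    for p in premise_validities:
        v *= p
    return v

print(round(argument_validity([0.98] * 20), 3))   # 0.668
print(round(argument_validity([0.98] * 50), 3))   # 0.364
print(round(argument_validity([0.98, 0.92]), 4))  # 0.9016
```

The same one-line loop covers both cases in the text: many appeals to one premise, or one appeal each to a premise and a lemma.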
Before I try to invent a simple example (ugh!), let me clarify one thing. Because logical operations are perfectly valid (LOGOS=LOGIC), the 50th step cleaves to the premises perfectly so that contradictions will not arise unless they are inherent in the premises. However, the imperfection of the premises means the 50th step cleaves to the real with only .364 validity. Again, the assignment of numbers here is arbitrary and used only for illustrative purposes. Don't try this at home.
It may be that the axioms of Euclidean geometry are far better than .98 valid, so that the theorems preserve great validity with respect to the "true" geometry of nature, but repeated appeals to premises in political or economic or moral or scientific arguments will slowly exaggerate or amplify the errors of the premises. The errors to which I refer may owe to incompleteness, excessive simplicity, one-sidedness, the illusion of objectivity, biases of perception, etc., or just the mismatch between maps and territories. Perhaps "error" is therefore an odd way to say it. What I mean is "uninsightfulness." Ordinarily, the existence of error implies the existence of correct solutions, and that's not the case here: all premises have limited insight into reality since reality is ultimately not a theoretical construction. It isn't a construction at all; it just is. That is, the map is not the territory.
This multiplication rule is my simple way to justify my dissatisfaction with elaborate philosophical (or other) constructions and arguments. The best stuff is always near the beginning, near the expression of insights with a few suggestive implications. Strike quickly and move on, because there will be diminishing returns and accelerating error. Take what you can from these insights but don't expect too much. Don't expect a finished theory to preserve much validity of the original insight. (I'm thinking for some reason of my struggles to follow Bergson's and other philosophers' elaborate threads here. Too many therefores!)
I might say that the brain is like a computer. Yes, intriguing. Both brains and computers are sometimes used in similar ways to sort through information. Both minds and electronic circuitry are good at logical operations. In fact, I can't think of anything else that is good at logical operations. We've got an insightful premise here. I can extend the premise in either direction: I can propose a fact about brains and see how it applies to computers, or I can look at how a computer works and suggest that a brain works similarly. But the premise isn't perfect.
The brain is also unlike any computer you've ever seen. It is wetware; it has no CPU and no memory chips; it is hard to program; it makes lots of errors in simple programmable computations; it rarely crashes or reboots; it requires rest, gets tired, and gets distracted; it has moods and feelings that directly affect its outputs; and it feels as if it is conscious and unique rather than dead as a doornail.
The brain is also like other machines. A radio is full of amplifiers and tuning capacitors and can tune into different stations. A telephone switching system relays information in organized ways. A factory turns raw materials into finished products. It's also like a spindle in that it spins disparate fibers into a single thread of argument or stream of consciousness. Gosh, where did I see the mind-as-spindle metaphor all these many years ago (Jeremy Bernstein?)? If thread-making were the latest high-tech development rather than digital computation, it might seem to be the last word in mind metaphors.
Building on this idea that no set of premises is perfect, I have suggested that by explicitly reversing the assumptions in some orderly way we can get a fuller picture of reality. And by carrying this reversal process to its natural ends we may be able to arrive at a set of maps that circumscribe the territory (if not epitomize it precisely), and that's the best we can hope for. See my both-and-neither diagram.
It is often the case that A and ~A both hold but under different conditions and perspectives. It is human nature to sometimes take one of these two and mark it as natural or default and in no need of explanation while marking the other as the object of explanation. A brief example: What is the nature of motion? Without any motors, the planets keep going and going. On the other hand, here on Earth motions seem to want to come to an end. Balls roll to a stop, projectiles return to earth, living things die. That is, both persistent motion and inexorable stopping happen. The former under rarefied conditions, and the latter in day to day life. Aristotle took stopping or motionlessness as fundamental and held that motion requires impetus. Remove the constant application of impetus and motion ends. Going is artificial and temporary, and standing still is natural and eternal. On the face of it, this seems to be a reasonable premise. We give the tin can an impetus by kicking it. It moves, but unless we keep kicking it down the road, it quickly stops. You can build up a theory based on this first step. The burden of proving that theory correct amounts to accounting for the behavior of the heavenly sphere and for the difference between the tin can sliding a long time on ice and grinding to a halt on a dirt road. Aristotle developed such a theory and it stood for 2000 years. Then Newton came along.
One of his most astounding intuitive leaps was to challenge this common-sense idea of motion. His first law: an object in motion will remain in uniform motion unless acted on by a force. Once things are going, they'll keep going unless something stops them. This is the opposite of Aristotle's assumption. Newton has to account for the ball rolling to a stop. (And he did so by saying that friction applies a force.)
Anyway, as I said, when two different conditions hold (constant motion and stopping), it's usual that one be assumed to be the default state and the other the subject of explanation. From the longer view, both and neither are the default. Newton's version supplanted Aristotle's because it gave a more comprehensible and mathematizable account. It is seen as a correction to the former erroneous view. According to my stated thesis, that might not be the case; a superimposition of Newton's view and some (possibly improved) version of Aristotle's view could give a fuller (read better) account of reality. I haven't given any reason yet why the Aristotelian account adds anything to Newton's, but I'm committed to the idea that it can.
Back to the multiplication rule.
My use of "uninsightfulness" above reminds me of the word "unsatisfactoriness," which is nowadays often given as the best translation of the Buddhist term dukkha (rather than its older translation as "suffering" or "desire"), and I think the connection is a good one. From Wikipedia:
While the term dukkha has often been derived from the prefix du ("bad" or "difficult") and the root kha, "empty", "hole", a badly fitting axle-hole of a cart or chariot giving "a very bumpy ride", it may actually be derived from duḥ-stha, a "dis-/ bad- + stand-", that is, "standing badly, unsteady", "unstable".
The bumpy road imagery is similar to my de-scribe/ex-plain/ex-press construction. An uninsightful explanation, like an off-center axle hole, doesn't give a smooth ride. We may need to add some epicycles to it to smooth things out -- or better centering to begin with. The Buddha would say that no amount of axial craftsmanship will ever eliminate all the bumps, so abandon the wheel (of reincarnation?), sit in one place (under a bodhi tree, maybe), transcend the road itself, and fly. So dukkha is the condition of constantly striving to smooth out all the disturbances to one's bubble -- and meditation might be a technique for ending that striving while at the same time tuning out the disturbances, which might allow one to see without ex-planation.
__________________________________________________________________________________________
An Antijump Physics Fantasy
Space itself, far from being a void or vacuum, emits fantastic amounts of em-like force almost all of which is vector-wise canceled out by the similar emanations of neighboring points so that almost none of this spontaneous fecundity or plenum is realized (becomes part of the explicate order), although there might be extraordinary imbalances on a very small scale. This is Bohm's quantum potential. Imagine a sea of little vectors pointing randomly in all directions, summing to zero almost everywhere. Remember that, as in the hollow earth analogy, cancellation is not annihilation.
In this fantasy, photons are released not by stimulation or force but by developing asymmetries -- through alignment of vectors. In this case, imagine the little arrows becoming aligned like iron filings around a magnet. That is, in my fantasy, an apparent light source, say an electric bulb, can alternatively be described as a magnet-like attractor that aligns the pre-existing but chaotic vectors, creating an uncanceled path of vectors pointing in one direction -- which has the same reality as what we think of as an expanding spherical threshold of light.
The outward flow of x is synonymous with the inward flow of anti-x -- where anti-x has to be carefully considered/defined. The simplest image to illustrate x flow and anti-x flow is a vacuum cleaner nozzle placed on a dirty floor. The inward movement of air and dust into the nozzle is simultaneous and coextensive with the outward movement of the frontier of dust-free floor.
x is air/dust inward
anti-x is cleanness outward
Likewise
x is photons outward
anti-x is "magnetic" vector alignment inward
The difference between the implicate and explicate orders is a matter of cancellation. Cancelled things exist but are unexpressed or inexplicit. Cancelled = enfolded.
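The cancellation imagery can be made concrete with a toy simulation -- this is only a sketch of the picture painted above, not Bohm's actual formalism. Many randomly oriented unit vectors sum to nearly zero (the resultant grows only like the square root of N, so the per-vector average shrinks toward zero), while the same vectors, once aligned, add fully:

```python
import math
import random

random.seed(0)

N = 100_000

# A sea of randomly oriented unit vectors: near-total cancellation.
angles = [random.uniform(0, 2 * math.pi) for _ in range(N)]
sx = sum(math.cos(a) for a in angles)
sy = sum(math.sin(a) for a in angles)
random_mag = math.hypot(sx, sy)   # resultant grows only like sqrt(N)

# The same N unit vectors "magnetically" aligned in one direction:
aligned_mag = float(N)            # full, uncancelled expression

print(random_mag / N)    # per-vector average: close to 0
print(aligned_mag / N)   # per-vector average: exactly 1.0
```

The point of the contrast: cancellation is not annihilation -- the random sea contains exactly as many vectors as the aligned path; only the bookkeeping of directions differs.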
__________________________________________________________________________________________
Connection and Separation
It seems to me there are 1000 different characterizations that the world appears to be 100% about. One of the 1000 -- near the top of my list -- is the dichotomy of connection and separation. We can never achieve total connectedness (i.e. union) with others, nor total separateness (i.e. sovereignty/isolation) from them, or even from ourselves. That ultimate nonbinary quality implies a constant movement toward one and then the other: from love, empathy, and generosity to willfulness, selfishness, and cruelty, from yang to yin, etc. Connection can give pleasure, but the simultaneous impossibility of achieving final union brings pain. Separation breeds healthy detachment, but the simultaneous impossibility of achieving final insularity or selfhood leads to feelings of loneliness and emptiness. Our self-other relationships are always in need of rebalancing, and that can be hard given the current circumstances, preexisting relationships, or the scars left by painful outcomes. Balance can be lasting and adaptable, or fleeting. This all fits nicely with my image of nested selves and fluctuating identifications.
Watching the lovely film "Past Lives" has brought this all up for me.
As in all circumstances, humans develop narratives that make the difficulties of the ongoing struggle between connection and separation more bearable and palatable. The simple word bittersweet and its evocations are themselves strangely comforting for me in this regard -- it can draw out literal nostalgia. One narrative used in the film involves inyeon, an ostensibly Buddhist concept of fate and intertwining strands of connection and separation experienced through our countless past lives. It provides the comforting thought that there is nothing to be done about the melancholic and bittersweet feelings of life. It was ever thus and will always be. Maybe in the next go-round things will resolve more satisfyingly. "See you then."
So, we have a choice between connection with pain, on the one hand, and sovereignty with emptiness, on the other. Every romantic comedy urges us to find the former pair superior. The pleasure of connection might beat the pleasure of sovereignty for most folks, but does pain beat emptiness? Ask the Buddha. Balancing and accepting all of the above is the royal road, I suppose.
____________________________________________________________________________________
Self-Interest Corrupts
This is intended to bring to mind the phrase "power corrupts." I've posited this latter phrase as one almost everyone can agree on: The liberal, for example, wants to limit the power of corporations or people in corporations for fear that their monopolistic status will change their behavior from public service to public exploitation, and the conservative wants to limit the power of governments for the same reason. Anyway, power itself, without an underlying self-interest (defined broadly enough), isn't inherently corrupting. Power in a vacuum is just potential. [If I have the power to compel random people to say the word "banana" through some sort of mind trick, that won't corrupt me unless I find a way to derive benefit from exercising that power. Then again, maybe I would inherently derive pleasure from controlling the behavior of others.] It's only when unequal power meets self-interest that corrupt actions are propagated. Is self-interest then itself, without power, corrupting? I'd say yes, if we can extend our idea of corruption back to include corrupt thoughts rather than merely corrupt actions. That is, I'm really just saying something everyone knows: our self-interest corrupts our explanations for things, makes us more apt to believe in ideas that benefit us. And even if we don't quite believe them, we're more willing to lie about it.
We could say that the corrupting influence of self-interest makes the existence of unequal power dangerous.
Follow the money -> follow the self-interest