Wednesday, December 23, 2009

5.3 Is Nature Uniform and Predictable?

The previous chapter, How can we have confidence in our inferences, investigated Inference and the Problem of Induction. Predictability and continuity from past to future, from the known to the unknown, from the small to the large, and from the near to the distant underlie our ability to make meaningful statements about the world. Out of necessity we assume that our knowledge about that which we can access tells us something useful about that which we cannot access due to distance, time, speed, size, or practicality.

But can we really make these assumptions? The principles of uniformity, homogeneity, and isotropy (for which we have no deductive proof) are foundational principles underlying our ability to make warranted and defensible universal statements about nature. Without them, we can only talk about what we directly experience, and must leave as utterly unknown and unknowable that which we have not yet experienced.

As we have seen, there appears to be no deductive proof of uniformity, or of the inferential process which requires it, and it goes without saying that you can't use induction to prove itself. But for all the reasons presented so far, the existence of a uniform and predictable universe is very likely to be the case - so likely that any other possibility is vanishingly small. Whether we choose to defend this assertion with foundational axioms, coherent and mutually supportive lines of evidence, acceptance of an infinite series of increasingly more subtle explanations, relaxation of the requirement for a firm deductive proof, probabilistic methods (such as Bayes' Theorem), reliance on the "Criteria of Adequacy", or inference to the best explanation, rejecting the basic principle of uniformity and the inductive method which assumes it requires a far greater effort than accepting it. This indicates that we should probably cultivate a tolerance for uncertainty (since we seem to be stuck with it), along with an understanding that absolute certainty about phenomena in the world is probably not possible.

Uniformitarianism is the principle and belief that the natural processes operating in the past are the same as those that can be observed operating in the present, and by extension, the same as those operating throughout the universe. This principle postulates that the laws of nature that apply on Earth function the same way throughout the universe. Its methodological significance can be summarized in the statement: "The present is the key to the past." This concept was introduced into modern thinking by James Hutton and Charles Lyell, and, centuries before, by Avicenna and others. Although Hutton, Lyell, and Avicenna restricted their argument for uniformitarianism to geology, it quickly found application throughout all of natural philosophy. James Hutton wrote,

“If the stone, for example, which fell today, were to rise again tomorrow, there would be an end of natural philosophy, our principles would fail, and we would no longer investigate the rules of nature from our observations.”

This means that for us to be able to draw conclusions about the past, we must assume the invariance of the natural laws we see in operation in the present. Mere position in space or time cannot by itself be relevant to whether some phenomenon occurs or not.

We start from the premise that the universe is homogeneous and isotropic, meaning that it is the same everywhere and has roughly the same distribution of matter in every direction. At all times and in all places, the laws of the universe behave exactly the same. Every observation ever made supports this, and none refutes it. The light we see in our homes is the same as the light we see from distant stars (as in Newton's “the light of our culinary fire and of the sun” from his Rules of Reasoning). We have measured light generated billions of light years away, and it is the same as the light generated by our refrigerator bulb. Countless observations support the uniformity of the laws of nature across the universe. We see stars and galaxies just like our own as far as the universe stretches. We have recently discovered extra-solar planets with atmospheres circling distant suns not dissimilar from our own. We see light, gravity, physics, and chemistry behave just as they do on Earth no matter where (or when) we look. I say “when” because much of what we see happening in distant space happened billions of years ago. Despite centuries of looking out into space since Galileo first viewed the moons of Jupiter, there is no evidence to support an argument against natural uniformity. Instead, there is overwhelming evidence in its favor.

The standard caveats regarding physics at the boundaries of our experience (at the subatomic level and at the galactic level) apply. The laws of nature at the human scale do differ in kind from those we have discovered at these two extremes. But that is not an indictment of uniformity; it is simply a widening of our understanding at those two scales. It is true that the behavior of quarks and leptons, and of dark matter and black holes, differs from what we experience in our daily lives. But we have strong reason to believe that these behaviors hold at those levels no matter where (or when) in the universe we look.

Uniformitarianism was such a successful paradigm that, not surprisingly, it was eventually overplayed. It received such wide acceptance as a result of Hutton's influence that legitimate catastrophic theories such as volcanic eruptions, climatic changes, asteroid impacts causing mass extinctions, and plate tectonics were rejected as being contrary to its principles. It got in the way of the acceptance of quantum physics, and erected barriers to the possibility of undiscovered dimensions and the possible infinity of time and space. These misapplications of the principle of Uniformity help us realize that it is a guideline, not a universal law. Uniformity was not “discovered” as the speed of light or the mass of a star can be discovered. It is a generally good assumption that allows us to make inferences about the parts of the universe we don't have immediate access to. But it is not always proper to employ it, and it cannot be dogmatically and mindlessly applied.

Isotropy and homogeneity are concepts often paired with Uniformitarianism. They are the two legs of the Cosmological Principle, which says that no matter where we look in the universe, we will see the same types and distributions of objects. Bound up in this principle is the idea that the shape, substance, and consistency of the universe in our local area is roughly the same as elsewhere. There is nothing privileged about our frame of reference, as Galileo and later Einstein showed. There is nothing unique about “here” vs. “there”, no matter how distant. This is a truth that man has come more and more to realize. In pre-history, each tribe probably considered itself at the center of the universe (as they knew it). Among Earth's major historic cultures existed the symbol of the Axis Mundi, or “axis of the world”, which expressed their view that they inhabited a unique place at the hub of the universe. The Copernican revolution and the expanding exploration of the Earth's surface literally widened Man's horizons, showing both that Europe was not at the center of civilization, and that our planet was not at the center of anything, but one of several planets (and a minor one, at that) in the solar system. Our universe grew even larger when Galileo's telescope showed there to be countless thousands of other stars like our own in our island universe, the Milky Way. In the 1700s Herschel added shape and texture to this fact by constructing the first accurate model of our galaxy. The next step came with Hubble's proof that Andromeda was not just another nebula, but a sister galaxy to our own. In the following years, many other galaxies were discovered. It is currently estimated that there are about as many galaxies in the visible universe as there are stars within our own galaxy.
Further, there may be much more to the universe than the mere 13.7 billion light-years' worth of stars that we are able to see – the universe may be expanding faster than its light can reach us.

As far as we can tell, the universe is both homogeneous (it has similar structure everywhere) and isotropic (it has a similar appearance in all directions). On the small scale, we don't have homogeneity – the universe is full of “clumps”. Earth differs from Mars, our sun differs from other stars, our galaxy differs from the surrounding Magellanic Clouds and the other galaxies in our local cluster. Our local cluster differs from other galactic clusters. However, on a large enough scale, even larger than this, you do get homogeneity. An analogy would be a sponge cake filled with raisins. If you stick a pin into the cake, you may pull out nothing, or you might get a raisin. On the scale of a pinhead, the cake is not homogeneous. But on the scale of a slice of cake, you always get roughly homogeneous slices with about the same amount of cake and raisins in each slice. On the scale of a billion light years, we observe homogeneous structure in the universe.
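
The raisin-cake analogy can be made concrete with a small simulation. What follows is purely an illustrative sketch (the number of "raisins" and the window sizes are arbitrary choices of mine, not from the text): scatter points uniformly at random, then compare the relative scatter in counts between tiny and large sampling windows.

```python
import random

# Simulate the raisin-cake analogy: "raisins" scattered uniformly along a
# 1-unit "cake".  At pinhead scale the counts fluctuate wildly; at slice
# scale every sample looks about the same -- clumpy locally, homogeneous
# at large scale.
random.seed(42)
raisins = [random.random() for _ in range(20_000)]

def relative_spread(window: float, trials: int = 100) -> float:
    """Standard deviation of raisin counts in random windows, divided by the mean."""
    counts = []
    for _ in range(trials):
        start = random.random() * (1 - window)
        counts.append(sum(start <= r < start + window for r in raisins))
    mean = sum(counts) / trials
    variance = sum((c - mean) ** 2 for c in counts) / trials
    return variance ** 0.5 / mean

small = relative_spread(0.001)  # "pinhead"-sized sample: counts vary a lot
large = relative_spread(0.05)   # "slice"-sized sample: counts nearly equal
print(f"relative spread at small scale: {small:.2f}, at large scale: {large:.2f}")
```

The larger the window, the smaller the relative fluctuation, which is the sense in which the cake (and, by analogy, the universe) is homogeneous only above a certain scale.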

But for the purposes of understanding the universe we live in, there is no need to go that far to look for similar structure. The objects and phenomena that scientists study in their laboratories have the same structure, consistency, and substance as similar objects and phenomena we know exist outside the lab. There is no need to go to the ends of the universe to be able to make that assertion.

In the chapter Isaac Newton's Rules Of Reasoning, we saw that two of Newton's four "Rules of Reasoning in Natural Philosophy" deal explicitly with the Principle of Uniformity.
  1. To the same natural effects we must, as far as possible, assign the same causes. As to respiration in a man, and in a beast; the descent of stones in Europe and in America; the light of our culinary fire and of the sun; the reflection of light in the earth, and in the planets.
  2. Qualities of bodies are to be esteemed the universal qualities of all bodies whatsoever.
So, along with Hume, Lyell, Hutton, and countless others, Newton advances this principle. Although impossible to prove, it is the only explanation that would allow our experiences in the universe to make sense. If every seemingly similar phenomenon in every region of the world were the result of completely different causes, our science and our common sense would be completely useless. The fact that both do work is evidence enough that, as Newton says, "to the same natural effects we must, as far as possible, assign the same causes".

Sunday, November 22, 2009

5.2.8 Denying the consequent

There is a valid form of logical reasoning called “denying the consequent” (aka “modus tollens”) which can be used to show that induction is a valid and fully warranted methodology that we may rely upon. The form of the argument is:

If P, then Q.
Q is false.
Therefore P is false.

Following this logical form, the use of inference from real world experience is justified by the following sequence:
  • If (P) induction from sense experience to make inferences about the world is invalid and unjustifiable, then (Q) science (which relies on inference) has no hope of working.

  • However, (Q is false) science does work! There are countless examples of the progress it has introduced, the discoveries it has made, and the new technologies it has spawned. There are no counterexamples to its success.

  • Therefore, (P is false) our inferences from the real world ARE justified and valid.
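
The validity of this argument form can also be checked mechanically. Here is a minimal, purely illustrative Python sketch (not part of the original argument) that brute-forces the truth table for modus tollens, confirming there is no assignment of truth values that makes both premises true and the conclusion false:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: 'if P then Q' is false only when P is true and Q is false."""
    return (not p) or q

# An argument form is valid when every row of the truth table that makes
# all premises true also makes the conclusion true.
valid = all(
    (not p)                           # conclusion: "P is false"
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and (not q)      # premises: "if P then Q" and "Q is false"
)
print("modus tollens is valid:", valid)  # -> modus tollens is valid: True
```

Only one row of the table (P false, Q false) satisfies both premises, and in that row the conclusion holds, which is exactly what makes the form valid.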
Obviously, if there were significant evidence in support of the claim that drawing conclusions through the scientific approach is invalid, then opponents would have a case. But such evidence is entirely absent, and there is overwhelming counter-evidence. Nor is there any competing theory as to why science tends to produce correct, useful, consistent, predictive, and informative results. Barring the existence of a competing explanation that accounts for its success (trickery by Satan to test our faith is one such untestable explanation, as is Solipsism), it’s plain, obvious common sense to accept as fact that inference from the real world is valid. It would require agonizing logical contortions to explain away the falseness of statement “Q” above (i.e., “science has no hope of working”) using some other argument. The rule of Parsimony indicates that the obvious explanation, above, is the correct one.

We should not become overexcited by the fact that an established formal argument supports the use of induction. Language can be slippery, and we have seen earlier in this document a case where the cousin of modus tollens, modus ponens, was used to prove both that reality IS an illusion and later that reality IS NOT an illusion. So be careful with these simple techniques; they can be misused.

Saturday, November 21, 2009

5.2.7 Foundationalism and Coherentism

After a slight digression to study Postmodernism (a late addition to chapter 5.1, which covered Faces of Idealism), we come back to the topic of section 5.2 - How Can We Have Confidence in Our Inferences.

Scientists attempt to justify their scientific assertions and interpretations by reference to other specific scientific statements and theories, which are usually more basic or fundamental statements. We have seen how this can lead to an infinite regress of assumptions and justifications, each of which must be proved. Hume, among others, has written about this problem. To avoid it, the concept of Foundationalism was introduced, initially by Descartes, built up by Hume, and given modern form by Newton, Russell, and others. This concept says that basic, self-evident, foundational beliefs exist and that these require no proof. These are said to be "properly basic". They then serve as the basis for derived, non-properly-basic beliefs.

Foundationalism may seem ultimately futile, because it says that at some point you can’t have any more proof, so you just have to accept some beliefs as self-evident, or foundational. But if the alternatives are infinite regress or circular reasoning, some consider it the "least bad" route. In our normal lives we each intuitively accept some things as properly basic (such as the existence of the past, of other minds, of the external universe, and of our own selves). No matter how much your system can explain, there will be something underlying it that is unexplainable. This is true in geometry, calculus, and physics as much as in religion and mythology – that is the nature of explanation in all its contexts. This is called the Regress problem, and some people are uncomfortable with it. In the search for certainty, having to resign after several deep iterations is unsatisfying. But the alternative (an infinity of ever more refined explanations) presents at least as many problems. Philosopher Paul Thagard, no fan of Foundationalism or of the need for absolute certainty, helps us put Foundationalism in perspective, writing that "the foundational search for certainty is pointless" and that "what matters is the growth of knowledge, not its foundations." Thagard recommends Coherentism (discussed below) as a more satisfying alternative.

Foundationalists respond to the regress problem by claiming that these most basic beliefs do not themselves require justification by other beliefs. Such is the case with Russell's Five Postulates, Newton's Rules of Reasoning in Natural Philosophy, and Aristotle's Laws of Thought (all described elsewhere in this document). Sometimes these “foundational” beliefs are characterized as beliefs of whose truth one is directly aware, or as beliefs that are self-justifying, or as beliefs that are infallible. According to one particularly permissive form of foundationalism, a belief may count as foundational, in the sense that it may be presumed true until defeating evidence appears, as long as the belief seems to its believer to be true (a "properly basic" belief). Others have argued that a belief is justified if it is based on perception or on certain a priori considerations. In any case, it can appear to detractors as philosophical hand-waving.

In physics and the philosophy of science, "Structural Realism" has been gaining support in recent years. This theory states that underlying the most basic of objects - subatomic particles - there exists nothing but mathematical properties and structural relationships between properties that are themselves incapable of further reduction. They are the primitives from which the universe is constructed; they would represent the foundation (see Max Tegmark's Our Mathematical Universe and James Ladyman's Understanding Philosophy of Science). In this view, our best theories in physics do not describe the actual nature of things, but the structure of reality. This allows retention of our structural understanding even as our theories change and even replace each other (for example, as Special Relativity replaces Galilean/Newtonian physics, the thermodynamic/kinetic theory of heat replaces the caloric theory, or Quantum Mechanics replaces Classical Mechanics). A crude statement of Epistemic Structural Realism is the claim that all we "know" of reality is the structure of the relations between things and not the things themselves; a corresponding crude statement of Ontic Structural Realism is the claim that there are no ‘things’ at all (at least at the lowest levels) and that structure, and the relationships between structures, is all there is. This is the "state of the art" in physics as far as Foundationalism goes.

Coherentism is a competing solution to the "infinite regress" problem of induction - and is also a way to avoid Foundationalism. This model of knowledge asserts that scientific statements can be said to be valid if they fit cleanly into an existing, coherent system of other known facts or beliefs. In other words, if they form part of a coherent whole (such as the existing body of science), they can be said to be correct. In this view, there is no requirement that scientific statements always be supported by more fundamental statements; instead, they can be said to be provisionally “true” if they successfully serve their role in a network of mutually supporting scientific disciplines. Similarly, the fundamental statements that support more complex concepts in several disciplines are buttressed by their repeated successful application. For example, it is not possible to “prove” Newton's Law of Gravity:

Every particle of matter in the universe attracts every other particle with a force that is directly proportional to the product of the masses of the particles and inversely proportional to the square of the distance between them.

But it plays such a consistent and predictable role in so many situations that it is considered as true as any scientific principle can be (leaving relativity and quantum gravity aside...). Supporters of this way of looking at scientific statements include Willard Quine and E. O. Wilson, who popularized another word for this concept: consilience. However, it was not Wilson who came up with the concept. William Whewell coined the term in 1840 when he said, "The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs." Stated differently, Consilience is an assertion of the truth of the Theory in which it occurs. However, when a new observation conflicts with the existing body of knowledge, either the observation can be said to be incorrect, or the body of knowledge (e.g., existing theories) needs to be modified. This is exactly what happened with Newton's theory in the face of Einstein's discoveries.
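
As an illustration of that repeated successful application, the same inverse-square formula predicts both the weight of an apple and the pull on the Moon. This is only a sketch: the constants are standard textbook values, and the function and variable names are my own, not from the text.

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Newton's law: F = G * m1 * m2 / r^2, with masses in kg and r in meters."""
    return G * m1 * m2 / r**2

M_EARTH = 5.972e24  # kg
R_EARTH = 6.371e6   # m, mean radius
M_MOON  = 7.342e22  # kg
D_MOON  = 3.844e8   # m, mean Earth-Moon distance

# One formula, two wildly different scales:
apple = gravitational_force(M_EARTH, 0.1, R_EARTH)  # a 100 g apple at the surface
moon  = gravitational_force(M_EARTH, M_MOON, D_MOON)
print(f"force on apple: {apple:.2f} N")  # roughly 1 N
print(f"force on Moon:  {moon:.2e} N")
```

That a single two-line function spans apples and moons is a small picture of the consistency that lets coherentists treat the law as provisionally true.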

Innumerable scientific observations from many disciplines support each other and provide confirmation for each other in very convincing ways. For example, Eddington's observations of light bending during a 1919 solar eclipse are considered the first solid evidence in support of Einstein's theory of General Relativity. This support didn't come from physics, per se, but from astronomy. Other astronomical phenomena (such as the gravitational redshift of light) have provided equally compelling support.

Genetic research and discoveries in paleontology, molecular biology, and anatomy support and explain a mechanism for Darwin's theory of Evolution and are coherent with it. Plate tectonics explains how mountain ranges formed, which is coherent with much earlier discoveries of marine fossils atop the peaks of our tallest mountain ranges and fossil similarities on the east coast of South America and the west coast of Africa. Other coherent discoveries in geophysics, involving magnetic field orientations in rocks on formerly adjacent plates, have added additional support.

The chief criticism of foundationalism is that it can lead to the arbitrary or unjustified acceptance of certain basic beliefs. If we can all use personal preference to arrive at our unproven axioms, then strange and divergent belief systems can, will, and do emerge. The criticism of coherentism is that it is basically circular: A explains B, B explains C, and C explains A. A strong objection to coherentism is that it would be possible to have two separate sets of data, each internally consistent, but which conflict with each other. For example, Young Earth Creationists and Flat Earthers have gone to great extremes to create very detailed networks of facts and evidence to support their claims, networks which they believe are internally consistent, but which disagree with the coherent set of scientific data. There is nothing within the definition of coherence that makes it impossible for two entirely different sets of beliefs to each be internally coherent while conflicting with each other.

The only other alternative that is generally suggested is to accept the infinite regress and move on. These three choices (foundationalism, coherentism, and infinite regress) bear a close resemblance to the three legs of Münchhausen's trilemma (so named because Baron Münchhausen supposedly pulled himself out of a swamp by his own hair). Simply put, the trilemma factors all possible proofs for a theory into three categories:
  • The circular argument, in which theory and proof support each other (coherentism)
  • The regressive argument, in which each proof requires a further proof (infinite regress)
  • The axiomatic argument, which rests on accepted precepts (foundationalism)

Monday, November 2, 2009

5.1.1.9.3 Postmodernism and science

The Postmodernism argument runs that economic and technological conditions of our age have given rise to a decentralized, media-dominated society in which ideas are “simulacra” and only inter-referential representations, mere copies or echoes of each other, with no real original, stable, or objective source for communication and meaning. Globalization, brought on by innovations in communication, manufacturing, and transportation, is often cited as one force which has driven the decentralized modern life, creating a culturally pluralistic and interconnected global society lacking any single dominant center of political power, communication, or intellectual production. Scientific publications, whose conclusions change from year to year, demonstrate (in their opinion) that there is no solid basis to scientific investigation - that scientists publish to boost their individual reputations and to secure grant money, not to advance a solid body of knowledge.

Given the antipathy of postmodernism to reason, logic, and science, what is the basis of its attack? Fundamentally, it consists of an attempt to reduce science to yet another belief system supported by cultural norms and biases – no better or worse than any other, and on a par with religion, political dogma, or historical tradition. The Postmodernists, radical skeptics of all knowledge, claim that science is a mythic narrative - one among many others. It is just one other way of looking at the world, subject to its own faith claims, with its own priesthood, just like a religion.

Susan Haack, Professor of Philosophy at the University of Miami, admits that it is true that science has some figures who are regarded with deference, and that it has its share of jargon which is practically impenetrable to the lay person. But it is not just one of many equally legitimate ways of figuring things out. Every day all of us engage in various kinds of empirical inquiry. You might try a variety of routes to get to work, finding some that get you there faster than others. What is it that makes one route superior? Fewer stop signs, faster speed limits, less traffic? The inquiry of science is continuous with this sort of ordinary, everyday inquiry. As Thomas Huxley wrote, science is more careful, more detailed, and more scrupulous. But even though its language is difficult to master, it is not impenetrable at its core. It is an extension of how we all, every day, get through the world; it is continuous with an activity with which each of us is familiar: ordinary, everyday empirical investigation. The practice of science is not different in kind from the normal empirical activities we exercise during our everyday interactions with the world, as we test the environment around us.

However, it should be emphasized that scientific knowledge does not always resemble common sense knowledge. In many ways, the presently accepted scientific theories of the world are very unlike our common sense beliefs about the world; indeed, they frequently defy common sense. The argument that the methods of science are like the methods of everyday life is not a claim about the body of currently accepted scientific theories, but rather a claim about how scientific inquiries proceed. What distinguishes the sciences in this area is that they have developed an enormous array of techniques and tools for conducting their inquiries that make them much more powerful - mathematics, methodology, computers, statistics, measuring tools, instruments of observation that extend our unaided senses, and a centuries-old, expanding, self-correcting body of knowledge. Science has organizational methodologies that allow enormous amounts of information to be evaluated, cataloged, understood, and related to other knowledge. However, given all these advantages over common sense, we should keep in mind Einstein's remark that "science is a refinement of everyday thinking".

The postmodernist argument also misses the important point that science is an open system of inquiry that is subject by its very methods to outside falsification – even from external reality itself. Faith and dogma, on the other hand, are closed belief systems reliant upon authority or revelation. Stanley Fish, a Postmodernist apologist, argues,
“But what about reasons? Isn’t that what separates scientific faith from religious faith; one is supported by reasons, the other is irrational and supported by nothing but superstition? Not really.”
He asks this rhetorically, because the article in which this quote appears attempts to show that the reasons for trusting rationality and evidence are as arbitrary as the reasons for trusting any other non-rational explanatory system.

This perfectly expresses the core misunderstanding of Postmodernist criticism of science. Although there is plenty of “reason” to have confidence in the explanatory structure which is science, it is not this set of reasons which separates science from faith, but rather, it is methodology. Fish paints a picture of science as a game of inventing reasons to explain specific beliefs (the context of discovery) in something of a post hoc manner. Rather, science is much more about testing those reasons against reality and previously elaborated theories (the later justification which follows discovery). This second, often overlooked, aspect of science is utterly lacking from faith-based belief systems. The glamour is in the discovery, but the bulk of the work is in the painstaking justification, cross-checks, tests for consistency, and confirmation exercises that follow.

Postmodernists also counter scientific falsifiability by attempting to argue that science picks and chooses convenient sources for what it would consider adequate falsification, just as any other belief system would. For example, they might ask,
“Is there something that would falsify a religious faith in the same way that some physical discoveries would falsify a scientist’s belief in natural selection? As it is usually posed, the question imagines disconfirming evidence coming from outside the faith, be it science or religion. But a system of assumptions and protocols (and that is what a faith is) will recognize only evidence internal to its basic presuppositions. Asking that religious faith consider itself falsified by empirical evidence is as foolish as asking that natural selection tremble before the assertion of deity and design. Falsification, if it occurs, always occurs from the inside.”
This is consistent with Thomas Kuhn, who wrote that paradigms can only be judged from within the paradigm itself, not falsified from the outside. And when one paradigm shifts to another it happens for quirky and subjective (i.e. cultural) reasons. Kuhn and Fish miss the whole “later justification” thing that is central to scientific methodology. They miss that science itself is not a set of beliefs but a set of methods. Yes, culture plays its role as it does in every human endeavor. But it is not the driving force, and (in a free inquiry) science does not reach its conclusions to achieve social goals.

Two common Postmodernist critiques of science run like this: “Because of the subjectivity of the human object, anthropology, psychology, and other human studies, cannot be science; and in any event the subjectivity of the human subject precludes the possibility of science discovering objective truth of any sort. Second, since objectivity is an illusion, science subverts oppressed groups, females, ethnics, third-world peoples" (Spiro 1996). These objections are self-contradictory. They purport to make some sort of truthful statement about the world, yet any argument based on the assertion that “everything is subjective” runs into immediate problems. The idea is nonsensical. Anti-postmodernist Thomas Nagel has written, "for it would itself have to be either subjective or objective. But it can't be objective, since in that case it would be false if true. And it can't be subjective, because then it would not rule out any objective claim, including the claim that it is objectively false."

The eminent analytic philosopher Willard V. O. Quine maintains that scientific reality is indeed a somewhat arbitrary social construct. He says:
Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer . . . For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits.

Sokal and Bricmont, in their book Intellectual Impostures (published in the US as Fashionable Nonsense), highlight the rising tide of cognitive relativism, the belief that there are no objective truths but only local beliefs whose truth value is relative to the social group or individual holding the belief. They draw attention to abuses of concepts from mathematics and physics, such as:
  • Using scientific or pseudo-scientific terminology without bothering much about what these words mean.

  • Importing concepts from the natural sciences into the humanities without the slightest justification, and without providing any rationale for their use.

  • Displaying superficial erudition by shamelessly throwing around technical terms where they are irrelevant, presumably to impress and intimidate the non-specialist reader.

  • Manipulating words and phrases that are, in fact, meaningless; projecting self-assurance on topics far beyond the competence of the author; and exploiting the prestige of science to give discourses a veneer of rigor.

Relying as it does on deconstruction, postmodern analysis is built on questioning the assumptions underlying any text, “deconstructing” its meaning. The problem is that postmodernist critiques are routinely couched in some of the densest, most impenetrable verbiage in existence. These arguments often claim that rationality, logic, and empiricism are nothing more than a hegemony of the dominant power structure imposed upon the very definition of “data” or “reality,” the implication being that it is the “dead white males” whose hegemony is being served. Ironically, it is Postmodernism itself which commits this intellectual crime in the most flagrant manner. As Steven Novella wrote,
Philosophers of science have largely moved beyond the postmodernist view; they now understand that this view was extreme and not an accurate description. In fact, the specific criticism is that this view confused the context of discovery, which is chaotic and culturally dependent, with the context of later justification. Regardless of how new ideas are generated in science, they are eventually subjected to systematic and rigorous observation, experimentation, and critical review. It is later justification that gives science its progressive nature.

Sunday, November 1, 2009

5.1.1.9.2 What's Wrong With Postmodernism?

The several examples of beneficial Postmodernism in art, architecture, literature, science, and anthropology given above show it to be a mind-opening, cobweb-clearing, refreshing way to look at old issues from new and creative directions. What’s wrong with that? What’s wrong with being creative, free from dogmatism, and open-minded, with seeing things from new perspectives and thinking outside the box?

No doubt during the mid-20th century there was an excess of conformity and conventionalism. Of course the historic “certainty” of Western Civilization in its religions, history, political systems, art, and culture was highly chauvinistic and ill-informed. That supposed superiority was concluded in ignorance of the rich and varied alternatives from other parts of the world and other times in history. But the West had no monopoly on cultural bigotry; the same sort of provincial thinking was common to most isolated cultures, which described most of the world until just the last few decades (as we enter the Information Age). It continues today: stories of racism, bigotry, violence, and condescension abound in countries with highly homogeneous populations in eastern Asia, the Middle East, northern Europe, and Africa. And America is not cured of this problem, either.

Clearly, taking off the blinders to the cultural, historical, artistic, and philosophical riches available from other cultures and civilizations is a good thing. But Postmodernism has taken this good thing too far, and it has already begun to consume itself. What started as a movement to discover new and personal meaning, to broaden horizons, expand thinking, and break stifling limitations on creativity became a license to reject all established meaning, value, and significance: a total repudiation of, and revolution against, the history of acquired knowledge, a destruction of the old, sweeping it away to introduce the new. In tune with the disposable society of the late 20th century, which values "newness" as implicitly good and what has gone before as passé, Postmodernism struck a chord with the rebellious sentiments of the 1960s and '70s. The movement probably peaked around that time, with the social revolutions in France (1968) and the counter-culture movement in America (the 1960s and early 1970s). But its heyday is over. As we roll into the 21st century, its shortcomings have overwhelmed it, and it has retreated to a fringe position, though it remains popular with various Liberation Movements.

Criticisms of postmodernism include assertions that it is meaningless, purposely obscure, and unintelligible. It immunizes itself from criticism through various techniques, such as redefining terms when convenient, or rejecting outside analysis as invalid, prima facie. Noam Chomsky wrote that because it adds nothing to analytical or empirical knowledge, it is without value. He asks why postmodernist intellectuals do not respond like people in other fields when asked,
...what are the principles of their theories, on what evidence are they based, what do they explain that wasn't already obvious, etc?...If [these requests] can't be met, then I'd suggest recourse to Hume's advice in similar circumstances: 'to the flames'.
Christian philosopher William Lane Craig said
The idea that we live in a postmodern culture is a myth. In fact, a postmodern culture is an impossibility; it would be utterly unlivable. People are not relativistic when it comes to matters of science, engineering, and technology; rather, they are relativistic and pluralistic in matters of religion and ethics. But, of course, that's not postmodernism; that's modernism!
Formal, academic critiques of postmodernism can also be found in works such as Fashionable Nonsense by Alan Sokal and Jean Bricmont.

The Postmodernist approach has helped writers, artists, and scientists break through barriers erected by tradition and established “wisdom” to achieve radical and innovative breakthroughs. However, there is another saying of uncertain origin: “keep an open mind, but not so open that your brain falls out”. It succinctly encapsulates the pitfall into which Postmodernism and Deconstruction have fallen. It is one thing to sweep out old, musty truisms, stodgy conventions, and outdated theories; it is quite another to discard all accumulated wisdom and knowledge to make room for some brave new world. This philosophical/literary system lacks a clear central hierarchy or organizing principle, while embodying extreme complexity and celebrating internal contradiction, ambiguity, and diversity for diversity's sake. To those first discovering it, it may seem either intoxicatingly liberating or, at the other extreme, a parody or satire of itself: sheer intellectual fraud.

Dr. Allan Bloom, the author of The Closing of the American Mind, was a conservative Humanities scholar at the University of Chicago. During the 1960s and 1970s he became increasingly distressed over the direction in which the academic Left was taking Humanities studies at American universities. He believed that the Civil Rights movement, the Anti-War movement, the Women’s movement, and the Third-World movement, all of which helped foster the concept of multiculturalism, were leading to a paralyzing cultural relativism. The worst thing a professor could do in academia during this era was to actually come to a conclusion about anything. Instead, what was required was to retain a perpetually open mind, so open, in fact, that it becomes closed (thus the title of his book). By this paradox Bloom meant that by remaining open to everything, by refusing to apply reason and logic to judge and discriminate among choices (whether Shakespeare was a greater writer than Agatha Christie, say, or whether Shakespeare could be judged superior to Australian Aborigine storytellers), you become mired in indecision. This indecision rests on the belief that we have no right to judge one over the other, to use discrimination (a word much maligned recently) to evaluate relative quality. When we do that, we deny the critical power of human reason. If the University System has any one purpose, it is to expose young minds to ideas, and then show them how to use their wits, their intellects, available information, evidence, and human reason to come to informed decisions. He worried that this was no longer happening at universities because of the emerging view that all ideas were of equal value.
Although Bloom was alarmed only at the erosion of the integrity of the Humanities, we see a similar relativistic attack on the sciences by Creationists and New Age proponents who claim that their brands of alternative science should have an equal footing with traditional science.

To them, western science is just one among many equally valid "narratives", not to be privileged in its competition with native traditions. They maintain that science and reason, as means to discovering the universe, are an arbitrary and uncompelling approach to understanding, no better or worse than any other epistemological preference.

Although considered a fairly recent philosophical movement, Postmodernism really began with Kant's assertion that we cannot know things in themselves and that objects of knowledge must conform to our faculties (i.e., categories) of mental representation. Postmodernism takes this further by claiming that all we know, and all we can know, is filtered by our political and socio-economic preconceptions. Unlike other branches of philosophy, whose proponents assiduously, but often vainly, strive to describe and illuminate, Postmodernism keeps its opponents off balance and disoriented by refusing to submit to definition - in fact rejecting the constraints of definition altogether. It protects itself from criticism by being especially slippery. Just as Solipsism erects logical barriers around itself that effectively stifle criticism while doing nothing to demonstrate its validity, Postmodernism creates similar obstacles by preemptively disarming all external attacks first by refusing to submit to characterization or description, and then by holding to the position that growth and change can only occur from the inside, not from the outside. In their view, critiques originating from outside its domain have no legitimacy. This strategy poisons the well against useful critique and blocks any possible rebuttals.

In their view, religion can criticize religion, science science, history history, and literature literature. No change comes from outside. Each person and each discipline must look into itself for meaning, to discover its problems, and to determine its own future. By this standard, no disparagement originating from outside the structure of Postmodernism need be taken seriously. This argument may sound vaguely familiar: it is the partial basis for the cliché that, in American society, “dead white males” have no authority to advise minorities, women, or anyone else on their social agendas, philosophies, or world views. More than any other branch of philosophy or culture examined in this paper, Postmodernism has successfully framed itself in such vague and indescribable terms that any attempt at definition is rendered nearly impossible. There is practically no assertion one could make that could not be disputed endlessly by well-versed Postmodern apologists. They typically condemn classification and description of their discipline as excessively confining and stereotyping, an attempt to usurp power by imposing definitions from outside rather than allowing it to develop as it chooses and to define itself.

Although the movement’s current incarnation traces back only as far as the late 1960s with the publication of several books by Jacques Derrida in 1967, Michel Foucault in the 1970s, and several other writers of that time, it received a tremendous political and spiritual boost from the Afro-Asian Conference in April 1955 in Bandung, Indonesia. Hosted by President Sukarno, it launched the modern “Third World” movement, heralding the end of the dominance of the West, with its “rapacious capitalism” and overbearing colonial hegemony.

These non-aligned former colony countries did have a legitimate complaint against the West. It was the first time in history that they had been able to come together to present a unified front to the former colonial powers. They may have gone overboard and made their point too strongly, but they had been victimized for centuries by the powers of Europe and America (England, Spain, Portugal, Holland, the United States, Germany, and others). This forum allowed them to express their independence from the Great Powers, to reject the Cold War dichotomy that (mostly) the US and the USSR were pushing on them, and to begin to exert global influence for the first time.

Even given that the participants had good reason to protest their historic treatment, this conference trumpeted the beginning of their new world order: a pacific, non-aligned, supposedly virtuous utopia, free from the colonial past and from white, Western dominance. These ex-colonial states were deemed inherently “righteous” by virtue of their history of victimization, and this shared experience united the new non-aligned nations under the flag of oppression. As Modern Times author Paul Johnson satirically wrote, “a gathering of such states would be a senate of wisdom”. But its first order of business was to engineer an escape from the political, historical, artistic, and intellectual shadow of western culture. It would accomplish that in two strokes: by demolishing the icons of that culture while simultaneously promoting its own. The traditions and culture of the West were roundly condemned, not for any lack of merit, but merely for being associated with a repugnant, oppressive past. This would be neither the first nor the last time a group adopted a belief system whose primary attraction was its tremendous potential to materially benefit its adherents. The movement ushered in an era of unprecedented influence and prestige for the previously dispossessed nations. Perhaps the Postmodernist accusation that “all institutions, creations, artwork and moral values are expressions of a primal will to power; the enforcement of one person’s ideology on another” is more a projection of its own value system than a fair analysis of the values and institutions of those it condemned.

But, aside from being intellectually dishonest and devious, are there any structural or logical problems with this philosophy? The writer Pauline Rosenau identified seven contradictions in Postmodernism:


  1. Its anti-theoretical position is, itself, essentially a theoretical stand.

  2. While Postmodernism stresses the irrational, instruments of reason are freely employed to advance its perspective.

  3. The Postmodern prescription to focus on the marginal is itself an evaluative emphasis of precisely the sort that it otherwise attacks.

  4. Postmodernism stresses intertextuality but often treats texts in isolation when convenient.

  5. By adamantly rejecting modern criteria for assessing theory, Postmodernists cannot argue that there are no valid criteria for judgment.

  6. Postmodernism criticizes the inconsistency of modernism, but refuses to be held to norms of consistency itself.

  7. Postmodernists contradict themselves by relinquishing truth claims in their own writings.


In short, if they held themselves and their theories up to the same analysis that they direct outwards, their theoretical framework would be in tatters.

There are countless examples of twisted logic and crazed rationales in modern society, all with a similar underlying essence of unreality. On one level they seem not to break the rules of logic; on another, their conclusions are seemingly insane. In George Orwell’s 1984 we have a compelling description of how the so-called Ministry of Truth used “Newspeak” to brainwash the people of Oceania. The party slogans were: "War is peace; Freedom is slavery; Ignorance is strength". Through crafty manipulation of language, throwing out conventional definitions and re-framing reality in line with the party view, outright lies became self-evident truths. This lexical legerdemain turns words on their heads, robs text of meaning, equates sense and nonsense, and undermines logic and reason themselves, recasting them as simply alternative narratives that we repeat to ourselves.

Postmodernists behave like the stereotypical unscrupulous lawyer trying to win a case: truth and justice aren’t the point; using any rhetorical tool or trick that works is the point. Sometimes contradictory lines of argument work; sometimes the audience’s desire to belong to the in-group can be played upon; sometimes an air of absolute authority camouflages a weak case; sometimes condescension works. The approach relies on rhetoric rather than substance.

We can see this occur in current events: religious cults or other minority groups which have long been victimized by bigotry or racism grab the opportunity, when the tables turn, to become bigots and racists themselves, providing a rationale supported by their doctrine. A typical postmodernist justification explains all this away: it is impossible for a minority to be bigoted or racist, since these traits are expressions of power, and minorities have no power (Education & Racism, National Education Association, 1973). This is modern Newspeak, postmodernism par excellence. The very argument is frequently used in modern culture, for example by Troy Davis in his Whyaminotsurprised blog: “In other words, the very social construction of "race" itself was the act of White oppressors for the purpose of exploiting and dominating people of color...consequently, I (and I am not alone here) don't believe that it's possible for a person of color to be a racist.”

So, reason takes a backseat to the intent of the message. Foucault shared these sentiments, claiming “reason is the ultimate language of madness,” suggesting that nothing should constrain our beliefs and political preferences, not even logic or evidence. Frank Lentricchia, another left-wing theorist, said the postmodern movement “seeks not to find the foundation and conditions of truth, but to exercise power for the purpose of social change.” And Stanley Fish has argued that theorizing and deconstruction “relieves me of the obligation to be right … and demands only that I be interesting.” There is a pattern here. The common goal is to simultaneously remove the supports of conventional wisdom by redefining truth and falsehood, right and wrong, reality and illusion, while also promoting themselves as the fresh arbiters of a new form of insight and authority. It is a bald power-play to enfranchise the previously powerless, to dethrone reason and replace it with a social subjectivism that, they believe, has suffered much reduced prestige at the hands of science, reason, and technology.

Mainstream Philosophy has largely abandoned Postmodernism. It was chic for a few decades, but it is now considered a failure, one of many half-thought-out and inadequate attempts at new philosophical schools (the same is true of Ayn Rand's Objectivism). Postmodernism is not taken seriously by other philosophers, but it is still practiced in niches where out-of-power groups, and the academics who support them, continue to try to wrest power from the dominant group. It is a thin philosophical veneer overlaying what would otherwise be a naked power play. For an example of how it is currently being expressed, see this Harvard Law Record article describing how "Critical Race Theory (CRT)" is justified by an appeal to "White Privilege" and "Institutional Racism". According to CRT, the current establishment can do nothing to defend itself because it is inherently biased and evil; it should simply dissolve itself and give power to the non-white races. It uses a Postmodern defense of this agenda.


Philosopher Daniel Dennett declared, "Postmodernism, the school of 'thought' that proclaimed 'There are no truths, only interpretations' has largely played itself out in absurdity, but it has left behind a generation of academics in the humanities disabled by their distrust of the very idea of truth and their disrespect for evidence, settling for 'conversations' in which nobody is wrong and nothing can be confirmed, only asserted with whatever style you can muster."

Tuesday, October 27, 2009

5.1.1.9.1 Deconstructionism

The last element in the bullet list above, deconstruction (or deconstructionism), began as a literary-analysis practice popularized by Jacques Derrida in the 1960s and '70s. It is based on the idea that meaning is always uncertain and non-objective, and that it is not the task of the literary critic to illuminate meaning in a given text; each individual can determine what something means for himself. In other words, meaning and value are subjective and relative. It exemplifies the cliché “beauty is in the eye of the beholder”. This is tame enough, and even justified and empowering, when not taken to extremes. But with Derrida, all external meaning becomes irrelevant, if not altogether non-existent. Derrida began with the established concepts of “the signified” and “the signifier”: an idea (the signified) is represented by a sign or word (the signifier), and the signifier can never be the same as the signified. Derrida extended this, introducing an infinite series of signifiers referring to other signifiers, none ever settling on a firm "signified" entity. Because deconstructionism questions order and certainty in language, and in what language attempts to represent, its opponents view it as an intellectually obscure, purely negative cultural and philosophical critique: it tears down, but builds nothing to replace what it destroys. Initially considered elitist, nihilistic, and subversive to humanistic ideals, deconstructionism has been much debated in academic circles. It has since gained more widespread acceptance, although it remains, to an extent, a radical and controversial way of analyzing texts. Its critics deride it as intentionally abstruse and recondite, full of obfuscation, and riddled with pretentious blather.

In one famous incident (the "Sokal affair"), a physicist intentionally submitted a confusing and garbled scientific article to a postmodernist periodical and had it published. The author, Alan Sokal, revealed that his fake article was "a pastiche of left-wing cant, fawning references, grandiose quotations, and outright nonsense", which was "structured around the silliest quotations [he] could find about mathematics and physics" made by postmodernist academics. The event demonstrated that practically any kind of politically or socially “correct” balderdash is acceptable to the Postmodern elite, independent of its intellectual quality.

Many literary critics detest the practice of deconstruction, believing that deconstructing a text robs it of meaning and ultimately destroys the value of anything it touches. To those who defend its use, the answer to this criticism might be: “How does one define value? What is meaning?” The rebuttal, that is, challenges the very question itself, attempting to show that the question is meaningless. Instead of answering the criticism, the criticism itself is declared flawed. But the critics persist. They convincingly claim that deconstructionism is nihilistic: that authors like Derrida attempt to undermine the ethical and intellectual norms vital to the classical conceptions of knowledge and wisdom. They accuse Derrida and his kind of denying the possibility of actual knowledge and meaning, creating a blend of extreme skepticism and solipsism which these critics believe harmful. Under Derrida, Postmodernism took a decidedly destructive and non-productive turn.

A major criticism leveled at deconstructionism is that its proponents seldom subject their own work to the same treatment; why not deconstruct deconstructionism itself, for instance? Heaven forbid! There are also obvious limits to which texts can be deconstructed: although some think the method can apply to anything, it is hard to see how it can address mathematical or (some) scientific papers without the domain knowledge that most deconstructionists lack, or without first tackling the philosophical problems associated with those fields.

Sunday, October 25, 2009

5.1.1.9 Postmodernism and Deconstructionism

Here is a new section that I really should have added back there just after the chapter on "Omphalos". It is a school of philosophy that is so difficult to talk about intelligently and coherently that I didn't really know how to approach it. The very difficulty I have in describing this phenomenon is part of what gives it power. This will be a multi-part entry. Sorry that it's out of order. But here goes:


What is Postmodernism?
Creative individuals from many disciplines popularized this political/philosophical/literary/artistic movement in post-WWII Europe and America. Although influenced by Nietzsche’s and Heidegger’s arguments against pure objectivity, it became a significant movement only after the war.

This movement gave us Dada art, surrealism, and abstract expressionism. Philip Glass’s minimalist compositions in music, Frank Gehry’s and Rem Koolhaas’s architectural responses to “glass box” skyscrapers, and the literary creations of Vonnegut and Burroughs all contain strong postmodernist elements. Postmodernist historian Howard Zinn helped us see our mistakes in Vietnam. The movement revels in iconoclastic attacks on blandness, the status quo, conventional wisdom, ultimate “Truth”, and the accepted norm. It strongly encouraged “thinking outside the box” and finding new and personal meaning instead of accepting established explanations. Postmodernist thinking has helped overturn established common knowledge in anthropology and archeology, where prejudice, dogma, and tradition have historically taken tenacious hold and been difficult to dislodge. As convincingly argued in the book 1491, by Charles Mann, the evolving new consensus on the arrival and culture of indigenous people in the Americas has depended heavily on revolt against established norms and accepted doctrine. James Burke's The Day the Universe Changed ends with a postmodernist chapter celebrating the uncertainty of science and knowledge. Those of us who grew up in the last third of the 20th century may recall our English teachers urging us to discover anew the meaning in the classics, rather than regurgitate the CliffsNotes. Rebellion and non-conformity drove and energized the movement, frequently resulting in stunningly beautiful and important works and creative new insights, and introducing many new concepts and phrases into our modern lexicon (“paradigm shift”, “authenticity”, “deconstruction”, “multicultural”, “post-colonial”, “cultural relativism”, "speak truth to power", etc.).

So what is it, anyway? Charles Upton in The System of Antichrist wrote, “Postmodernism is the name for the general quality of our time. It holds that all worldviews are constructed by historical processes, by culture and religion, so postmodernism sees those worldviews ('tall stories') as a function of power rather than truth.”

In this context postmodernism is the notion that all ideas and beliefs can best be understood as subjective human storytelling: narratives dominated by culture and bias with no special relationship to the truth. Philosophers of science have already rooted out the flaws in such reasoning (in philosophical parlance, postmodernism confuses the exciting context of “discovery” with the later, labor-intensive context of “justification”). When applied to science it negates the importance of methodology and reduces all scientific research to a cultural narrative.

Without delving into how it differentiates itself from “Modernism”, and after attempting to reconcile many divergent and conflicting descriptions, it is possible to trace the outline of its tenets, though it is fundamental to Postmodernism’s nature to resist and reject all such attempts:
  • There is no objective truth or reality.

  • Reality is constructed by our minds and mental representations (as Kant would have it). It is only as we choose to configure it. The only reality is chaotic potential.

  • All comprehensible worldviews are oppressive, and as such should be deconstructed (i.e., overturned).

  • “Truth” is plural and ultimately subjective. Meaning, truth and morality do not exist objectively; rather they are constructed by the society in which we live.

  • All institutions, creations, artwork and moral values are expressions of a primal will to power; the enforcement of one person’s or group's ideology on another.

  • Reason is thrown out and therefore there is really no basis for debate. Fulfillment comes from submerging one’s self in the larger group and developing a radical openness to existence by refusing to impose order on life.

  • Revolutionary Critique of the Existing Order – The old ‘modern’ society from the enlightenment period with its rationalism and unitary view of truth needs to be replaced with a ‘new world order.’

  • “Deconstruction”, an analytical method used extensively in postmodernism, is the progressive pulverization of reality with the goal of pursuing the meaning of a text or assertion so as to undo the oppositions on which it is founded, and to show that that same foundation is fatally unstable and impossible.

I group this movement with the other instances of idealism because many postmodernists dispute the prevailing conceptions of reality. Jean Baudrillard, one of its founders, maintained that there is no such thing as reality: everything we consider real is only a simulation, or “simulacrum”. This statement, like many others they make, reflects an internal logical inconsistency. To make the assertion at all, Baudrillard would need access to the “true” reality so as to compare it with the “simulation”. But if he can indeed do that, then his initial statement is false. And besides, even if there were such a thing as “true” vs. “simulated” reality, who is Jean Baudrillard to tell us which is which? He would have to possess a perceptual ability that enabled him to see through the simulacra and simulations to the underlying reality, compare them with it, discern differences and distinctions, and thus have empirical grounds for his pronouncements. And if he can do this, why can't we all?

Tuesday, May 5, 2009

5.2.6 Wesley Salmon and the Problem of Induction

Wesley Salmon was a 20th century philosopher who took up the study of how causality functions and why the transfer of information from one spatio-temporal location to another serves as its fundamental mechanism. He continued the investigation into questions of inference, induction, and the scientific process.

According to Salmon, we all believe we have knowledge of facts that extend beyond what we directly perceive. Our view of events is severely limited by both space and time, yet based on our limited experiences we presume to predict future events. Take this hypothetical situation: suppose you have drawn a number of balls from an urn and found them all to be black. You might infer, therefore, that all the balls in the urn are black. This is an "ampliative" inference: the conclusion asserts something that is not contained in the premises; nothing in the past record of drawing only black balls implies anything about the color of balls you will draw in the future. Some black balls have been drawn from the urn; can one conclude that the balls will continue to be black as we keep drawing? No, not at all, because in this sort of “non-demonstrative” logic, it is perfectly possible for the conclusion (all future balls will be black) to be false even if the premises are true (all the balls drawn so far were black). Although this example is contrived, it is exactly the kind of judgment we make about the prospects for a sunrise each morning.
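Salmon's urn can also be given a quantitative gloss. The short sketch below is illustrative and not part of Salmon's discussion: it applies Laplace's classical "rule of succession", which says that after observing k black balls in n draws, under a uniform prior over the urn's unknown composition, the probability that the next ball is black is (k+1)/(n+2). Confidence grows with each uniformly black draw but never reaches certainty, which is the ampliative gap expressed as a formula:

```python
from fractions import Fraction

def prob_next_black(n_black: int, n_draws: int) -> Fraction:
    """Laplace's rule of succession: probability that the next ball
    drawn is black, after seeing n_black black balls in n_draws draws,
    assuming a uniform prior over the urn's proportion of black balls."""
    return Fraction(n_black + 1, n_draws + 2)

print(prob_next_black(10, 10))      # 11/12 after ten uniformly black draws
print(prob_next_black(1000, 1000))  # 1001/1002, still short of certainty
```

However many draws come up black, the probability stays strictly below 1; the conclusion "all future balls will be black" is never deductively secured, exactly as the "non-demonstrative" label suggests.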

A related form of this scenario is called the “Black Swan” problem – it may be that all swans you have ever seen before are white, but you would not be on firm logical ground to conclude that all swans in the world are white. Even a single black swan (which was finally seen by Europeans when they first visited Australia) would invalidate your premature conclusion.

Hume and many others have noted that such inferences would be valid if we could have recourse to some principle of the uniformity of nature. If we could prove that the course of nature is uniform, that the future will be like the past, then we would be justified in generalizing from past cases to future cases - from the observed to the unobserved. We have found by experience that nature has exhibited a high degree of uniformity and regularity so far, and we infer inductively that this will continue. Even though we all hold many beliefs about unobserved events, and place great confidence in some of them, they are without a solid rational justification. By habit, we assume an intrinsic uniformity in the processes of nature. Such a belief seems easy to rationalize: these inferential methods of both common sense and science have proved themselves by their results. No other method can claim a comparable record of successful accomplishment and, probably most importantly, there is no compelling evidence or reason to believe differently. Consider the amazing technological and scientific discoveries that have been made using these techniques. However, it is easily shown that this is a circular argument, and once again we arrive at the necessity of making an assumption - the world is regular and, within limits, it conforms to predictable patterns. Other theories may explain our experiences just as well - the theory that the sun has risen every day so far but will stop doing so tomorrow fits the existing data just as well as the theory that the sun will always rise. But is it just as believable? No, it is not. It is clearly an ad hoc explanation designed merely to fit the data, and it offers no explanation or predictions of any use. Given the two options - uniformity, versus uniformity up to now followed by chaos tomorrow - the former is far more deserving of belief.

The universality, uniformity, and predictability of nature are concepts that make their appearance in all debates involving inference and induction. It seems that for inference from past experience to be a valid way of reasoning about future events, or about current but unseen events, we must believe that the future will be like the past. This implies a uniformity that spans time and space. But is this a good assumption? If we allow ourselves to be trapped in the circular explanation, it is not good enough.

But why should we limit ourselves in this way? Isn't predicting the sunrise substantially different from predicting whether the next ball will be black? We know almost nothing about the unseen balls in the urn, but we know an enormous amount about how the sun and earth are related. Thanks to Newton, we understand orbital mechanics. We can trace the history and describe the structure of the solar system, and the laws of gravity and angular momentum are well known. We can even apply our knowledge of the Milankovitch cycles, such as the 41,000-year cycle of Earth's axial tilt, and bring in General Relativity to describe the subtle changes in the precession of Mercury's perihelion. All these facts and theories support the modest contention that tomorrow the sun will rise. Combining the findings from the sciences and many other disciplines to form a network of supporting evidence is the essence of "Coherentism", a hopeful alternative to the tautology we are trapped in when we use induction to prove induction. The next section describes this concept.

Thursday, April 23, 2009

5.2.5 Russell’s Postulates for Non-Demonstrable Inference

Among the many ideas Bertrand Russell explored was the problem of showing how we can use non-demonstrable (or non-deductive) inference to draw legitimate conclusions about the world. Take two examples:
If at one moment you see your cat asleep by the fire and later you see it in a doorway, you are confident that it passed through the intermediate positions from the fire to the doorway, although you didn’t see it do so. Because you were not a witness of the movement, no form of deductive logic can prove that it is the same cat – it could be a completely identical duplicate. Common sense tells us that this is highly unlikely.

Or suppose you are walking along and you notice a shadow following you. You jump and it jumps. You stop and it stops. A reasonable inference is that it is your shadow, but it could equally be a dark spot on the ground with an independent existence that is following you around.
Surprisingly, we must make some assumptions in order to trust our inference that the cat walked from the fire to the doorway and that the dark spot is, in fact, only our shadow. The inferences we use in our daily lives and in science are of this sort. But what are the principles underlying this activity? What must the world be like for these non-deductive inferences to be warranted? What grounds do we have for believing that what must be so is indeed the case? What extra-logical principles must hold if we are not mistaken in cases like these? Regarding the cat, there must be some principle of endurance or constancy of objects that we assume without any more basic supporting evidence. In the case of the shadow, there must be some concept of causality (our body causes the shadow) in nature on which we can depend.

To provide support for making these types of common sense, but non-deductive inferences from “hard data” (external facts) to “soft data” (derived or inferred interpretations of the facts), Russell provided five postulates. In fact, it was this set of postulates that motivated me to put together this entire blog – the rest grew up around it. They support conclusions we make about our experiences that, although highly likely to be true, cannot be absolutely proven by the use of these or any other postulates. They are self-evident assumptions unaccompanied by proof, which he considered necessary to justify the kind of non-demonstrative inferences about which none of us typically feel any doubt.

Why is something like this needed? Why can’t science prove all its assertions using more basic facts, laws, and theories? If every justifiable belief could be justified only by reference to some more basic belief, there would have to be an infinite chain of such justifications. Because such a chain of proofs cannot reasonably go on forever, the only way to stop it (Russell argued) is to define a set of beliefs that are not proven by any references to more fundamental assumptions. Such are these postulates. They exist a priori and are non-demonstrable, though extremely reasonable. They are foundational postulates and serve as the bedrock upon which all other demonstrable inferences are based.

There is no weakness in resting on postulates. Every branch of mathematics has its first principles that shape the proofs and theorems that arise from them (e.g., in plane geometry: “through any two points, there is exactly one line”). It only makes sense that behind every “proof” is either another set of proofs, or some unproven and unprovable first principles (just pick up any calculus book). It can’t go back to infinity, nor can it loop back on itself or else it becomes tautologous. Postulates do not imply weakness – in fact, they are required. They are not “faith”, but are the required building blocks from which any system of empirical knowledge is constructed.

Although Russell asserted that these postulates were required to keep science from being mere “moonshine”, his chief support for them was that they were biologically advantageous - that is, they conveyed survival benefits. He didn’t have great confidence in these precise postulates, but came up with them out of a sense of necessity. Without them, the inductive principle cannot be logically justified:
quasi-permanence - There is a certain kind of persistence in the world, for generally things do not change discontinuously (a kitten becomes a cat, but is still the same entity). Given an event, "A", it is likely that in a neighboring time, and at a neighboring place, there is an event very similar to "A" (pertains to continuity of time, space, and events).

separable causal lines - There is often long term persistence in things and processes. From one or two members of a series of events, we can infer something about the other members of the series. This postulate covers our experience of physical motion. It replaces the concept of a thing changing its position by that of a related series of contiguous events. This principle enables us, from partial knowledge, to make a probable inference. The most obvious examples are such things as sound waves and light waves. It is owing to the permanence of such waves that hearing and sight can give us information about occurrences. It is only on the basis of the idea of causal lines that we can infer distant events from near events.

spatio-temporal continuity - Denies action at a distance. When there is a causal connection between two events that are not contiguous, there must be intermediate links in the causal chain such that each is contiguous to the next, or (alternatively) such that there is a continuous process connecting them.

structural postulate - Allows us to infer from structurally similar complex events ranged about a center to an event of similar structure linked by causal lines to each event. That is, if you see several similar events arranged about a center, there is likely something in the center that has causal lines connected to those distributed events.

analogy - Allows us to infer the existence of a causal effect when it is unobservable (where there is smoke, there is fire). If there is reason to believe from previous evidence that A causes B, then when you see A but no B, you can assume that there is a B somewhere hidden. Or if you see B but no A, there is probably an A somewhere hidden.
Paraphrasing Russell, “The inductive principle is incapable of being proved by an appeal to experience. Experience might confirm the inductive principle as regards the cases that have been already examined; but as regards unexamined cases, it is the inductive principle alone that can justify any inference from what has been examined to what has not been examined. All arguments which, on the basis of experience, argue as to the future or the unexperienced parts of the past or present, assume the inductive principle; hence we can never use experience to prove the inductive principle without begging the question. Thus we must either accept the inductive principle on the ground of its intrinsic evidence, or forgo all justification of our expectations about the future. If the principle is unsound, we have no reason to expect the sun to rise tomorrow, to expect bread to be more nourishing than a stone, or to expect that if we throw ourselves off the roof we shall fall. All our conduct is based upon associations which have worked in the past, and which we therefore regard as likely to work in the future; and this likelihood is dependent for its validity upon the inductive principle."

"The general principles of science, such as the belief in the reign of law, and the belief that every event must have a cause, are as completely dependent upon the inductive principle as are the beliefs of daily life. All such general principles are believed because mankind have found innumerable instances of their truth and no instances of their falsehood. But this affords no evidence for their truth in the future, unless the inductive principle is assumed."

Wittgenstein didn’t see it this way. In the Tractatus he argued that one state of affairs (one arrangement of the elements of reality) cannot be inferred from another. There is no logical “law” of cause and effect; no “causal nexus” exists in nature. Cause and effect may be useful, but they are not provable or even necessary. Similarly, induction (accepting the simplest law that can be reconciled with our experiences) has no logical justification, only a psychological one. We would go quite insane without laws of nature. The facts that constitute the world are utterly disconnected; there is no internal, necessary, organic bond between them. He essentially rejected the postulates that Russell proposed. So, as clearly and carefully as Russell laid out his postulates, there is no unanimity as to their soundness.

David Hume had a similar take on this as Wittgenstein (predating him, of course). Hume believed that all we have is a stream of unconnected impressions from which we form ideas. A psychologically based sense of the "constancy of perception", and of the coherence between these unconnected individual perceptions, is the cornerstone of the common sense belief in the existence of an external world and in the continuity of objects over time (that Russell's cat that was in one room is the same cat that crossed over into the other room). There is no way to deduce this - we only infer it, and we believe it to be the case as a matter of habit and convenience. In each case of this kind, we move from constancy and coherence to its being the same object represented. And if it is the same object, it must have existed unperceived through the interval when we were not watching it, and so it must be something external to the mind.

In closing, here is a little Russell joke -
Q: Why did the kitten cross the road?
A: One must assume the postulate of quasi-permanence to infer that it is the same kitten before and after crossing.

I guess you had to be there. He tells it funnier...

Tuesday, April 21, 2009

5.2.4 The problem of induction

How shall we deal with the intractable problem of induction? As a means of predicting future events, it can’t be proved either deductively or inductively, which essentially exhausts the opportunities for logical justification. Just because something has always occurred a certain way, you can't deduce that it will continue to do so (unless you take as a premise that the future will resemble the past, which assumes exactly what you are trying to show). And you can't use induction to show that future events will resemble events observed so far, because that too is what you are trying to prove: just because induction has worked in the past doesn't mean that it will continue to work. Hume’s solution was to say that we should not refrain from making inductive inferences, but simply realize that we are not being governed by reason when we do so. As Bertrand Russell pointed out, the chicken believes it will be fed every morning - until the day the farmer comes with an axe instead of grain. It is habit, experience, and custom that allow us to rely on inference. We continue to use it because it is the most reliable technique available, despite having no ready proof.

Pragmatic approach
Pragmatists agree with Hume that there is no epistemic justification for induction. Instead, they present a practical explanation of why one is justified in using this method of inference. For one thing, it works better than any alternative, which is a primary selling point for most pragmatic positions. Induction will work if anything will work! Even though the future cannot be known, we can’t avoid having expectations about it. We would be wise to choose a method that would lead to success. One simple way of conceiving of the problem is in a truth table: The world either is uniform or it isn’t. And we can choose to use induction to predict future events or choose not to. Expanding on this exercise, we have six possibilities:
  1. Nature really is uniform and regular:
    1. induction would be a very reliable method for predicting future events.
    2. using some method other than inductive reasoning would be ineffective.

  2. or, nature is “somewhat” uniform, and frequently (but not always) evinces a pattern or connection between past and future:
    1. induction is of some help, and works as a tool as often as nature chooses to be regular.
    2. Non-inductive inference is as reliable as a wild guess.

  3. or, nature really is not uniform at all, and there is no significant pattern or connection between past and future:
    1. induction is of no help at all.
    2. Non-inductive inference is also of no help at all.
Thus, the non-inductive method is useless no matter whether nature is always uniform, somewhat uniform, or chaotic. As for induction, it will certainly be helpful at least in the case when nature is uniform or mostly uniform. Thus, it is rational for us to prefer this method of inference since it is the only one that has any chance at all of being correct.
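
The dominance reasoning above can be sketched as a small decision matrix. The numeric payoffs here are my own toy encoding (1.0 = reliable, 0.5 = works when nature cooperates, 0.0 = useless), not anything from the pragmatists themselves:

```python
# A toy decision matrix for the pragmatist's argument.  Rows are the
# possible states of nature; columns are our two strategies.
payoff = {
    ("uniform",  "induction"): 1.0,
    ("uniform",  "guessing"):  0.0,
    ("somewhat", "induction"): 0.5,
    ("somewhat", "guessing"):  0.0,
    ("chaotic",  "induction"): 0.0,
    ("chaotic",  "guessing"):  0.0,
}

# Induction is never worse than guessing, and strictly better whenever
# nature is at least somewhat uniform: it weakly dominates.
induction_dominates = all(
    payoff[(state, "induction")] >= payoff[(state, "guessing")]
    for state in ("uniform", "somewhat", "chaotic")
)
```

Whatever exact numbers one chooses, the shape of the argument survives: induction is the only strategy with any upside, so choosing it is rational even without a proof of uniformity.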

Avoiding Induction
Karl Popper attempted to resolve the question of induction and inference in science by abandoning the troublesome problem altogether. He had no issue with our inability to conclusively “prove” the validity of the inductive method. If scientific hypotheses (or even less formal everyday hypotheses) are stated in ways that allow them to be falsified, then we can use deductive techniques rather than induction to test them. This technique employs the logical form called “modus tollens”, which was discussed in the section, Moore’s Proof of an External Reality. Knowledge is gradually advanced as tests are made and failures are accounted for. A typical deductive formulation of this argument would be along these lines:
  • If (some hypothesis is true), then (we will observe some effect)
  • We do not (observe some effect)
  • Therefore (some hypothesis) is false
Applied to the sunrise, we would say, “If sunrises follow nighttime, the sun will rise tomorrow morning. We do not see the sun rise in the morning. Therefore our theory about when the sun rises has been disproved”. As long as we continue to see sunrises after night has passed, we should not reject the theory. When it is tested many times in many conditions (for example, watching the sunrise from many spots on Earth) and is never disproved, our confidence in the theory increases (but never reaches 100%). We tentatively and provisionally accept it, barring evidence to the contrary. This is the essence of “falsifiability”. Science, then, can be thought of as a collection of hypotheses that have not been disproved (yet), though none has been conclusively proved, either.

Popper’s seemingly simple argument has not gone unchallenged, though. Among the professional philosophers of science, his view has never been taken as a serious alternative to the consensus theory of probabilistic induction (which takes into account the relevance and weight of evidence, Bayesian probability, and other mathematical representations originated by Carnap and others).

The primary element to consider in Popper’s view is that he believed that focusing on induction, or characterizing our generalizations about the future as exercises in induction was fundamentally mistaken. With finesse worthy of Wittgenstein, he “vanishes” the problem instead of solving it. Statements about how the future will unfold, according to Popper, do not actually employ induction, but instead rely on a technique that only superficially resembles it – the use of tentative hypotheses about future outcomes of our everyday experiments. We conduct an experiment of this sort every morning we look out the window expecting to see the sunrise. When we see it appear over the horizon, we can’t conclusively state that our theory about sunrises is true, but we can say that it has (once again) passed a well-constructed, though informal, test – that our theory can be retained as tentatively valid, useful, and worthy of further testing. We have strong confidence in it because of the countless confirmations of its predictions, and because it is never disproved. It has high verisimilitude, meaning, it correlates strongly with reality.

Like all scientific theories, it cannot ever be completely verified, but it can be quickly falsified. This may appear to share the structure of induction, but it stops short of the end result of induction in that we don’t use the results of this process to construct a general rule from individual outcomes. Instead, we simply can say that, once again, the theory has been corroborated - that it is a very useful and productive theory. We may also construct other theories that have similar logical structure to it regarding moon rises, the rising of Venus, etc. As they are verified, they each help to validate each other and boost our overall confidence in the set of interrelated theories. Popper referred to this as the “Method of conjectures and refutations”. If the sun were to stop rising and not resume its daily circuit across the sky, we would eventually abandon our theory of sunrises as having been falsified. In his view, we can’t accept any theories about the world as absolutely true, regardless of the amount of confirmation they have accumulated. But, those that are consistently corroborated can be made use of because of their eminent practicality and utility, keeping in mind that those same theories may need to be discarded if “eliminative evidence” accrues against them. Although this may seem like a facile manipulation of emphasis, it isn’t – this is exactly how scientists treat all scientific theories, no matter how well established.

Inference to the best explanation
Just as Popper dealt with the problem of induction by simply dismissing it, others circumvent its difficulties in other creative ways. One such approach involves “inference to the best explanation”, which was first introduced in the chapter on Modern Philosophy of Science. In this view, when one considers all the possibilities for how the future could unfold given past and present events, the conclusion that the future will resemble the past requires the fewest assumptions, inventions, and stretches; employing Ockham’s Razor, we select the most likely explanation from among the several possible candidates. For example, if one sees a wet sidewalk it is reasonable to assume that it either rained or that the sprinklers were turned on, and less likely that a wave of water suddenly soaked it. Given experience and our knowledge of cause and effect, we can confidently make inferences as to the past causes of current events and the current causes of future events. Drawing other conclusions would require greater leaps of improbability and would stretch credulity. Those who hold this position assert that it is eminently rational to assume that there is order and structure to the universe and that laws of causality actually work, and that it would be highly impractical and wildly irrational to assume otherwise. Given the available choices, reliance on induction is the only one that makes any sense.
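
The wet-sidewalk reasoning can be caricatured in code. The hypotheses and their "assumption counts" below are entirely invented, just to show the shape of the inference:

```python
# Candidate explanations for the wet sidewalk, each tagged with a
# rough count of the extra assumptions it demands (invented numbers).
hypotheses = {
    "it rained overnight":    1,
    "the sprinklers ran":     2,
    "a rogue wave soaked it": 7,
}

# Ockham's Razor as a caricature: prefer the explanation that
# multiplies assumptions the least.
best_explanation = min(hypotheses, key=hypotheses.get)
```

The real work, of course, is in assigning those counts - which is where experience and background knowledge of cause and effect come in.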

In fact, inference to the best explanation is used in everyday life far more frequently than either deduction or standard induction. It is how we draw conclusions from partial information. It is how we evaluate social interactions, judge intents of others, and understand potentially ambiguous statements. It is the primary tool of medicine, science, and all other forms of research and discovery. Deduction, although important, is principally a tool used in mathematics, logic, and philosophy.

In making these types of inference, we infer from the fact that a certain hypothesis would explain the evidence, to the truth of that hypothesis. In general, there will be several hypotheses that could potentially explain the evidence, so we systematically consider each one and reject all the least likely ones, leaving one remaining. In this manner, we are able to infer from the premise that a given hypothesis would provide a "better" explanation for the evidence than would any other hypothesis, to the conclusion that the given hypothesis is true. Accepting one of the less probable ones would simply be perverse.

Relax the burden of deductive proof
Another technique for dismissing the inductive problem is to concede that the strict rules required for deductive logic can’t be applied to induction, which is a fundamentally different and looser form of reasoning. The truth-preserving nature of deductive reasoning doesn’t carry over when it is used to justify a reasoning process whose conclusions are, by definition, not certain. The conclusions of inductive arguments exceed the content of their premises: individual cases used to construct a general rule necessarily go beyond themselves. With deductive arguments, by contrast, the premises contain everything necessary to arrive at a definitive conclusion; the conclusion is inescapable. According to this argument, it is simply inappropriate to impose the tough standards of deduction on the fuzzier process of inductive logic.

As we have seen, inability to disprove a proposition does not render it true. For example:
  • Although the Omphalos and Solipsistic positions are immune from disproof, all reasonable people agree they are beneath consideration.
  • Russell's celestial teapot and the Flying Spaghetti Monster (blessed be his name) cannot be defeated through argument. But, all satire aside, they are not really out there.
  • The many varieties of supernatural mythology all posit beings, histories, or forces that are beyond the means of science to disprove. This doesn't make them real.
  • An astronomical number of other incredible claims share the logical structure of the above examples and are likewise immune to disproof. They are not, therefore, all true.
Likewise, no one can disprove the claim "reliance on induction is unwarranted", but that does not automatically render the claim true. Our inability to disprove that induction is groundless does not make induction groundless. In fact, induction is "probably true".

So, as with all assertions about the future (which are what both scientific theories and the inductive process concern themselves with), they are beyond positive proof, though their conclusions (when supported by much confirming evidence) are well worth relying on. Interesting.

Probabilistic approach
For all of human history, and as far back in time as we can collect evidence, the laws of cause and effect and the uniformity of nature have held, unchanged. It is completely true that we can't use that past run to make conclusive statements about the future continuation of this consistent track record. However, it would be a gigantic leap of faith to assume that all of this will suddenly change as soon as I finish typing this sentence... See, nothing changed! If these laws were going to change at some time, and that time has not arrived in the last several billion years, there is not a shred of evidence indicating that it will arrive in the next few seconds, years, or centuries. From a purely probabilistic framework, the odds that everything will be turned topsy-turvy exactly now are vanishingly slim when measured against all of the opportunities for change that came and went in the past. For this reason it is rational to assume the present trend is likely to continue, and highly irrational to assume it will not. For all practical purposes, for all of us, for the rest of our lives and the rest of humanity's existence, something that has never once occurred and shows no sign of occurring now is not likely to suddenly happen. Although we can't prove that the continuity of past/present/future will persist, a betting man could reliably count on it.
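
One crude way to put numbers on this argument (my own back-of-the-envelope figures, not a rigorous calculation): if the laws of nature were going to fail at one unknown moment, and every second of the past 4.5 billion years were an equally likely candidate, then the chance that the failure lands in the very next second is about one in 10^17:

```python
# Back-of-the-envelope: probability that a one-time breakdown of
# natural law, distributed uniformly over all past seconds, lands
# in the very next second.
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.16e7 seconds
past_seconds = 4.5e9 * SECONDS_PER_YEAR      # ~1.4e17 candidate moments
p_change_next_second = 1 / past_seconds      # roughly 7e-18
```

The uniform prior over moments is doing all the work here, and is itself an assumption; but it illustrates why betting against uniformity right now looks so unattractive.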

Even Deduction Cannot Be Proved
The problem with induction is that it cannot be proved deductively, and using induction to prove it would be circular. However, the same type of attack can be made on deduction, even though no one seriously contemplates abandoning that reasoning technique. Lewis Carroll, the author of Alice in Wonderland, showed in his dialogue “What the Tortoise Said to Achilles” that reaching conclusions by deduction can only be justified by an appeal to deductive inference, yet that doesn't dissuade us from believing that deduction is a valid methodology. We still consider it a rational approach to problem solving, so why would we hold induction to a higher standard? If you were to try to convince a person of
  • if p then q
  • p
  • therefore q
and they rejected it, how would you respond? They might agree to "if p then q" and also agree to "p", but still not believe "q", because they don't accept the rules of deductive logic. The only response is to tell them that they are not being logical - that you can deductively show them their error, that they are not following the rules of deduction! But that is the issue: they don't accept the rules of deduction, and the only response you can give them is that they really ought to. So perhaps induction can be allowed an inductive justification after all, since even deduction can only be given a circular (in other words, deductive) justification.
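
There is an irony worth making explicit: the only way to show such a person that modus ponens is valid is itself a logical demonstration. A brute-force truth-table check (a sketch, using Python booleans to stand in for a formal proof) does exactly that:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "p -> q" is false only when p is true and q is false.
    return (not p) or q

# Check every valuation of p and q: wherever both premises
# (p -> q) and p hold, the conclusion q holds as well.
modus_ponens_valid = all(
    q
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p
)
```

Of course, trusting the table already presupposes the logic it checks - which is precisely Carroll's regress.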

People still debate the many ways of viewing the inductive process and its legitimacy. There are complex mathematical and probabilistic arguments too intricate to explain here. However, it is fairly clear that there is no clear-cut and unambiguously convincing logical argument in its favor. We are left with one of several responses: to agree with Hume that there is no legitimate rule of inference such as induction, and to rely instead on habit and experience. Or we can agree with Popper that we are mistaken in calling what we do "induction". Or with those who argue that a full proof of the validity of induction is not needed - that a less rigorous justification suffices for a process that is itself less than purely rigorous. There is also the argument that we don't require a non-deductive proof of deduction: we treat the rules of logic and deduction as fundamental and intrinsic to our concept of rationality, with nothing more fundamental available to demonstrate that deduction is justified. The same might be said of induction - it is just fundamental to what it means to be "rational". It remains an interesting, and still unsolved, problem.

See chapter 2 of James Ladyman's Understanding Philosophy of Science, called "The problem of induction and other problems with inductivism", here for several more responses to the Problem of Induction. I just wish I had read it before writing this chapter, because he does a much better job than I do!

Saturday, April 18, 2009

5.2.3 David Hume and Induction

The study of how induction and inference are involved in the acquisition of knowledge was a cornerstone of Hume's epistemological research. He believed that reliance on induction is fundamental to making determinations about things when they go “beyond the present testimony of the senses, and the records of our memory”.

We all act as if we believe the world behaves in a consistent and regular manner; that past patterns of behavior will persist into the future, and into the unobserved present. This persistence of regularities is sometimes called the Principle of the Uniformity of Nature, which is discussed later in this document.

Hume wrote that we could not conclusively prove the principle of uniformity in nature, because justification comes in only two varieties, and both are inadequate. These two types of reasoning are commonly called deductive (or a priori) and inferential/inductive (or a posteriori). The uniformity principle cannot be deduced, because past regularity in nature is no guarantee of future regularity, no matter how probable we may think it is. There are no general principles inherent in past events that compel belief in the orderly progression of future events. It is conceivable that nature might stop being regular at any time, as it has in rare instances in the past (consider the occasional, uncommon meteor strike, supernova, or earthquake). Nor can we logically maintain that nature will continue to be uniform because it always has been up to now, because this way of reasoning uses induction to prove that induction is valid. This is circular reasoning (discussed in the Infinite Regress Problem section earlier in this document). Thus no form of logical justification will rationally warrant our inductive inferences. Yet we still believe in them.

Hume’s solution to this problem was to say that natural instinct, rather than reason, explains our ability to make inductive inferences. It is our natural instinct that allows us to connect this intuitive series of propositions together:
  • In our past, the sun has risen every day
  • Based on what we know about how sunrises work, there is no evidence to suggest that this will not continue to be the case
  • Therefore the sun will rise tomorrow

Our expectations about such things depend on the relation of cause and effect. It is our common sense about this relation that tells us that depending on tomorrow's sunrise is a reasonable expectation. However, if all matters of fact are based on similar types of causal relations, and if all of these causal relationships depend upon induction, then we must somehow demonstrate that induction is valid.

Hume observes that induction assumes a valid connection between a proposition like "today the sunrise followed a long period of darkness" and the proposition "tonight's darkness will be followed by a similar sunrise." We connect these two propositions not by reason, but by induction.

Probably the first modern philosopher to exhaustively study the problem of induction, Hume was followed by many who tried to address the problems he posed, yet those problems continue to puzzle us. He argued that it is just as possible to conceive of a proposition contrary to the sun rising:
"That the sun will not rise tomorrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise. We should in vain, therefore, attempt to demonstrate its falsehood. Were it demonstratively false, it would imply contradiction, and could be distinctly conceived by the mind."

In other words, just as we can’t prove the sun will rise, we can’t prove it won’t rise. We can conceive of either course, and neither would contradict any firmly accepted premises. As Hume said elsewhere, existence (or non-existence) cannot be proved through a priori reasoning unless one or the other would cause a contradiction. Of course, everything we know about how orbiting objects work tells us that it would be quite an unlikely feat to stop this well understood phenomenon from happening. But how do we really know that these same physical laws will persist, that the future will resemble the past? This reasoning has to be either a priori or a posteriori. Hume contended that it couldn’t be a priori (deductive). If we were to see the sun rising for the first time we would never discover from that event alone what produced it, just as a child can't use reason to stop himself from touching a flame for the first time. Only after having experienced the pain does the child learn the relation. Knowledge of such causal relations must come only through experience of the relations between objects – therefore our reasoning must be a posteriori (inferential) and require the collection of evidence, not the exercise of pure logic.

Our expectation of future matters of fact lies in the relation of cause and effect, say both Hume and common sense. "By means of that relation alone, we can go beyond the evidence of our memory and senses". The only way we could obtain knowledge of causality would be to infer it from our past observations of regularities. Our prediction of future events based on past observations is not a rational activity, but just a matter of habit and an intuitive sense of probability – the odds of the sun not rising are infinitesimal. When we project findings about these relations into the future, we must use an intermediate premise, the uniformity of nature, which is risky, because it can change at any time and be proven false. The chicken thinks that the human will always bring it grain until the day he comes with a hatchet. According to Hume:
"It is impossible, therefore, that any arguments from experience can prove this resemblance of the past to the future; since all these arguments are founded on the supposition of that resemblance. All inferences from experience, therefore, are effects of custom, not of reasoning."
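As an aside, the "intuitive sense of probability" that makes the odds of the sun not rising feel infinitesimal was later given a formal gloss by Laplace in his rule of succession, which was famously applied to this very sunrise example (it is a Bayesian-flavored response to Hume, not anything Hume himself endorsed). After n uninterrupted successes, it estimates the probability of one more success as (n + 1)/(n + 2). The sketch below is purely illustrative, and the 10,000-day observation count is an arbitrary assumption:

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: having observed `successes`
    out of `trials` independent trials, estimate the probability
    that the next trial also succeeds as (successes + 1) / (trials + 2)."""
    return Fraction(successes + 1, trials + 2)

# Hypothetical: the sun has risen on every one of the last 10,000 recorded days.
p_rise = rule_of_succession(10_000, 10_000)
print(p_rise)             # 10001/10002
print(float(1 - p_rise))  # chance of no sunrise tomorrow: roughly 0.0001
```

Note that even here Hume's point stands: the formula only quantifies our expectation on the assumption that the future resembles the past; it does not justify that assumption.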