Tuesday, March 24, 2009

5.1.2.9 Modern Philosophy of Science

Finally we come to the 20th and 21st centuries, where we find no single Modern Philosophy of Science. Neither today nor in the past has there ever been a unanimous philosophical consensus. Adding to the diversity of outlook, scientific specialization and the rapid pace of discovery in the last century have driven each of the sciences to catalog its own set of non-scientific issues and individual philosophies with which to grapple. We live in a results-oriented age, and pragmatism plays a large part in any modern scientific endeavor. The concern is not so much “what meaning does science have?” as “what conditions and ways of thinking and acting make it possible to do science well, to do it fast, to make rapid innovations and breakthrough discoveries?” So, the bottom line is that performance matters, and philosophy takes a back seat. Delivering the goods comes first; introspection comes second.

However, 20th and 21st century science produced many new and crucial non-scientific questions. There are so many that they displace the comparatively uninteresting metaphysical questions related to the nature of existence (unless existence itself is fundamental to the research area, as in Cosmology or Particle Physics). Any time for armchair philosophizing is given to considering immensely important issues like the morality of research in nuclear energy, the ethics of genetic modification and stem cell use, the impact of science and technology on the environment, limited natural resources, unequal distribution of technology, the whole gamut of green issues, the spread of disease in 3rd world countries, famine, birth control, the nature of consciousness and “the mind”, free will vs. determinism, what differentiates life from non-life, and countless other concerns. Some of these issues could be considered philosophical, but for the affected people they can mean life and death.

Of course, we can’t forget the never-ending ideological battles between the metaphysical naturalists and those promoting mystical or religion-based “origin” explanations. These range from debates over evolution to cosmology, teleology, the “fine-tuned universe”, and consciousness. Little time is spent speculating on non-controversies such as the possibility that the reality we see is not actually there, or is there in some other form.

If one term could encapsulate the current scientific view, it would be “Scientific Realism” – the dominant theme in 21st century science. This doctrine descends, with modification, from the Logical Positivist movement initiated by Wittgenstein, Carnap, and others earlier in the 20th century. Beginning in the 1930's, the Logical Positivists set Philosophy of Science apart as a true sub-field within general Philosophy, and their agenda dominated the discipline for decades. Scientific Realism takes the real world as an adequate working hypothesis – that what we see is what we get. Scientific Realists believe that when we perceive the world, we perceive what is actually there. Additionally, they hold that the objects of science that cannot be directly observed (atoms, black holes, gravity, electricity, subatomic particles, magnetism, genes, etc.) have real existence just like that of objects that can be seen and directly experienced. Even though the objects of the micro-world are invisible to human senses, they are predicted by theory, detectable by our instruments, and transformable into data that can be observed. They act consistently with what one would expect from actually existent objects.

The hypothesis that the unobservable objects of science are actually there – rather than that reality merely “acts as if” they are there when they really aren't – is the best explanation for our experience in the world. The fact that scientific explanations have worked so well for so long, and that they can be utilized in technology and engineering so successfully, is a powerful and convincing argument for realism. It would be a huge and improbable coincidence if non-real unobservable entities were able to generate the measurements we take of them, and then permit us to use them to build new and surprising technological devices, allowing us to discover newer, even more bizarre and different unobservable entities. If they actually were not present, it would require an intricate set of miracles for this to occur. Further, objects that were previously unobservable (DNA, molecules, atoms) now can be observed (using instruments based on our knowledge of how other unobservable objects, like X-rays, work). They have gone from unobservable to observable. This transition doesn't make a non-real object suddenly become real – they were real all the time. One way of thinking about it is that humans have built artificial sense organs that can see into the distant past, the future, the very fast, the hidden, the invisible, the very slow, the very far, the very small, and the very large.

Realists assert that no other way of considering reality offers as reliable a path to achieving the goals of facilitating the discovery process, publication, and theoretical progress. Realists maintain that a very good reason for subscribing to their view is that it has an unsurpassed record of success and achievement, and no record of being wrong. That is, no experiment has demonstrated that the external world does not exist (which would be an Idealist result). The theories produced by this worldview and practice both explain the existing state of affairs and predict future outcomes with unequaled power. The cumulative set of theories and facts from all the sciences demonstrates extremely high coherence and mutual support that could only be explained by their being correct. Scientific Realism has a remarkable track record that attests to the extremely high probability that it is the right way of viewing the world. It is widely held that the most powerful argument in favor of Realism is the “no-miracles argument”, according to which the success of science would be miraculous if scientific theories were not at least approximately true descriptions of the world.

So far, I have been contrasting Realism with Idealism. However, an issue that is much more topical within current Philosophy of Science is the debate between Realism and Anti-Realism. Realism is the belief that behind our theories and models there is actually some sort of substance that is being modeled. It reflects a belief in the actual existence of both the observable and unobservable aspects of the world described by the sciences. Anti-realism refers to claims about the non-reality of “unobservable” or abstract entities such as sub-atomic particles, electrons, genes, or other objects too small to measure directly or detect with human senses. For example, a realist would assert that there really is an entity called the “electron” that exists independently of the several attributes of it that we are able to measure. The anti-realist either doesn't care to speculate about it, or chooses not to make assertions about that which cannot be directly experienced, or flatly denies the existence of anything beyond the charge, spin, mass, and other properties of electrons that we are collecting measurements on. “Anti-Realist” is not the same as “Idealist”, because while the Idealist might think the entire external world is illusory, the anti-realist just has issues with what constitutes the fundamental content of that external world.

Realists advocate the idea that the primary reason for accepting the objective existence of the table and the atoms of which it is made is that it is the only stance which fully explains the table's persistence and its similarity to all observers at any time, in any place. Everyone who experiences the phenomenon which we call “table” has essentially the same experience. By extension, the same arguments that are made for the existence of a table apply to the existence of the small unobservable entities of which it is made – the atoms and their constituents. Conversely, the arguments that anti-realists make for withholding belief in the unobservable atoms in the table should carry over to the table itself. The belief in everyday objects, and in the subordinate objects of which they are composed, allows us to explain many observable phenomena that would otherwise be inexplicable. Why should such explanation be ruled out for the contents of the unobservable world?

The Ptolemaic theory of the solar system is an anti-realist model. It is a good theory in that it describes the current movement of celestial bodies and can predict their movement in the future. We know now that it doesn't model the actual structure of the solar system – it was just a very effective calculating tool that astronomers used to help them locate the objects of our solar system. Many, if not most, Ptolemaic cosmologists didn't believe that the solar system actually looked like their model, and it wasn't an important factor for them anyway (although it conveniently fit with the prevalent worldview of the time, which was that the Earth was the center of the universe). The model, with its many cycles and epicycles, was just too quirky to be believed as a true representation.

The same could be said of the early models of the atom. Dalton had a very simple indivisible atomic model – the atom was just a single small object with no internal structure. This was the first physical model for the phenomenon we call the “atom”, and was really the first advance in the more than 2000 years since Democritus theorized its existence. The Thomson “plum pudding” model did a better job of predicting outcomes of experiments in the early 1900's, after the existence of the electron had been confirmed. But it was not a true representation of any sort of actual reality. The Rutherford nuclear or “planetary” model was a big improvement because it placed the electrons in orbit around a small central nucleus, and Bohr's refinement grouped them into “shells”, each capable of containing a certain fixed number of electrons. But it still was not a description of the underlying fabric of reality. Chadwick, Heisenberg, and Bohr further improved it with the proton/neutron model of the nucleus, and so on. Our current model involves a quantum mechanical “cloud” of probabilities representing possible positions of the electrons.

But is there really a set of objects down at that level which the model is correctly describing? If this question had been asked of any of the prior theories, the correct answer would have been “no” – the underlying reality (if it even existed) would not have looked anything like the model. It is probably not the case that our current probabilistic models will go unmodified into the future. The likelihood that our current descriptions of atoms (and things smaller than the atom) really are painting a true and accurate picture of a deeper internal structure is not very high. And the question “is there even anything down there?” has not really been answered, and it may not be capable of having an answer. Certainly some type of “atom-like” phenomenon is occurring in the location where our models say the atom is. But the “thing” which the models attempt to describe is probably not quite what those models bring to mind. Our models can only go so far as to describe the structure of the relevant relationships at that level, while the reality to which they point keeps shifting. This is the essence of “Structural Realism”, an important concept in modern philosophy of science.

One prominent anti-realist position is instrumentalism. Instrumentalism is the pragmatic view that a scientific theory is a useful tool for understanding the world. Instrumentalists evaluate theories or concepts by how effectively they explain and predict phenomena, as opposed to how accurately they describe objective reality. By this standard, it has been suggested, the Ptolemaic and Copernican models of the solar system were equally good (up to a point). In my opinion, Instrumentalists have voluntarily donned blinders so they can focus on their work, rather than adopting a complete philosophical framework for reality. Non-realism takes a purely agnostic view towards the existence of unobservable entities: unobservable entity X serves simply as an instrument to aid in the success of theory Y, and we need not determine the existence or non-existence of X. Some scientific anti-realists go further, however, and deny that unobservables exist.

Individual theories may be disproved, but the overall body of science is fundamentally “right”. Its theories are able to explain what we currently see, to anticipate events that will occur in the future, and to predict discoveries about what occurred in the past (as in geology, astronomy, and paleontology). Its epistemological basis is nature itself, rather than mythology, tradition, or revelation. The increase in knowledge that results from its application passes through the rigorous filter of the scientific method. It is coherent, consistent, and reliable, and it makes continual progress and theoretical refinement. Further, there is no compelling reason to disbelieve it, and no competing acceptable explanation has been proposed. This doesn’t constitute irrefutable proof; instead it relies on “inference to the best explanation”, meaning that among the available explanations Realism is by far the strongest.

As has been mentioned, the modern view also incorporates Scientific or Methodological Naturalism as its core epistemology – there is simply no other way of gaining knowledge about the world that can compete. However, these terms themselves can generate debate even among those who practice them. This debate, though, may be more semantic than substantive.

The first view is that Naturalism is a necessary component of science itself, that Science can only investigate the Natural and can say nothing about the Supernatural. Only by linking empiricism to a naturalistic research framework can we gain knowledge about nature. Because of this, the Supernatural is inaccessible to science. It may or may not exist, but it is outside the purview of science.

Alternatively, some believe that science, as it is practiced, does not need to make a distinction between the Natural and the Supernatural. In this view, insisting on Philosophical Naturalism is putting the cart before the horse. Science doesn’t limit itself to studying natural phenomena – it studies what it can study, and by definition that domain of phenomena and entities is called the “natural world”. The scientific process doesn’t assume in advance anything about what is or is not “natural”, but only what it can successfully address. For example, in centuries past insanity, plagues, comets, eclipses, and other phenomena were considered to be of supernatural origin (e.g., messages from God, or possession by demons). Science demystified them and they were transported from the supernatural realm to the natural realm. They were not automatically off-limits to science just because they were thought to be of supernatural origin.

If we apply this reasoning to the verification of entities, processes, or phenomena, those that we can detect, measure, and describe with science end up being what we call “nature”. Science precedes Naturalism, rendering the Natural/Supernatural distinction moot. So, it would be incorrect to say that science can only deal with natural phenomena. More correct is that what we call “natural” is simply the collection of everything that science studies. Thus, some phenomena that were previously considered supernatural could be effectively moved to the natural realm if they could be investigated by science. As described at http://www.naturalism.org/science.htm:


“Science needn’t define itself as the search for “natural” or material causes for phenomena. In actual empirical fact, in building explanations and theories, science proceeds quite nicely without any reference to the natural/supernatural distinction. Science is defined not by an antecedent commitment to naturalism (whether methodological or ontological), but by criteria of explanatory adequacy which underpin a roughly defined, revisable, but extremely powerful method for generating reliable knowledge.”

The term “explanatory adequacy”, as used here, involves the standard set of criteria that surrounds the scientific method and scientific proof:
  • There should be good evidence for the phenomenon.
  • The phenomenon being studied should have some minimal level of “prior probability” (i.e., be considered plausible even before the collection of new confirming evidence).
  • A hypothesis for the phenomenon must be proposed that is testable, and it must be capable of falsification.
  • If a theory, it should have descriptive, explanatory, and predictive power.
  • The theory should either propose or lead towards a mechanism for the effect being studied.
  • The explanation should be consistent with previous knowledge, or if not, provide a convincing explanation why it is not.
In the generally accepted 21st century model for science, there are several factors that must exist for one to confidently assert the establishment of a new scientific phenomenon. This might also be framed, as Steven Novella put it, as a standard for having confidence in drawing a scientific conclusion from evidence:
  • The investigation must have been conducted using good methodology, where any artifacts are weeded out, confounding factors are eliminated, and extraneous variables are controlled for.
  • The results must be statistically significant.
  • The results must be capable of replication at other independent labs and by other researchers.
  • The size of the effect must be well above the “noise” level.
Further, we need to see all of these factors occur at the same time. It is not enough to see just one or two; we need all of them together. Replication with poor methodology proves nothing, and neither do studies with good methodologies but small effect sizes, or studies with strong statistical results but murky methodologies. The sketch below makes the last two criteria concrete.
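To see why significance and effect size answer different questions, here is a minimal sketch – my own illustration, not part of Novella's standard – of how both are typically computed. The data are hypothetical, generated on the spot; only the numpy and scipy libraries are assumed.

    import numpy as np
    from scipy import stats

    # Hypothetical measurements: a control group, and a "treated" group in
    # which the claimed phenomenon is supposedly present.
    rng = np.random.default_rng(42)
    control = rng.normal(loc=100.0, scale=15.0, size=50)
    treated = rng.normal(loc=110.0, scale=15.0, size=50)

    # Criterion: the results must be statistically significant.
    t_stat, p_value = stats.ttest_ind(treated, control)

    # Criterion: the effect size must be well above the "noise" level.
    # Cohen's d expresses the mean difference in units of pooled standard
    # deviation, i.e., relative to the spread of the measurements themselves.
    pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
    cohens_d = (treated.mean() - control.mean()) / pooled_sd

    print(f"p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")

The two numbers can disagree: a tiny p-value can accompany a negligible effect if the sample is huge, and a large effect can fail to reach significance in a small or noisy study. That is exactly why the criteria above must all hold at once.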

Methodological naturalism is sometimes (incorrectly) used synonymously with philosophical naturalism. In fact, they are quite different. As previously described, methodological naturalism is an epistemology and is the heart of the protocols used in scientific research – it is a tool, a methodology, for discovering new knowledge. It is the idea that all scientific phenomena are to be explained and tested by reference to natural causes and events, and that magic and supernatural causes cannot be offered for the phenomena. Philosophical naturalism, by contrast, describes a metaphysical point of view. Methodological naturalism is agnostic about the ultimate metaphysical claims concerning the universe that are intrinsic to philosophical naturalism.

Modern science does not reject supernatural causality or explanations out of hand. It does not assume philosophical naturalism, which excludes all supernatural explanations in advance, and it does not restrict its inquiry only to naturalistic explanations for phenomena. Methodological naturalism is not a choice or preference, as philosophical naturalism is; it is a necessity for the practice of science. We do not limit the types of answers that we are willing to consider to those that conform to an a priori naturalistic paradigm. We only limit the questions that science asks to those that can be addressed by the scientific method. If the question is posed in such a way that it cannot be falsified, then it simply can't be addressed by science – it is not a scientific question. If a testable hypothesis involving supernatural agents can be constructed, then science can address that hypothesis. This has already been attempted in tests to see whether prayer will cause amputated legs to re-grow, in double-blinded studies of whether appeals to God on behalf of sick persons will hasten their recoveries, in studies of homeopathic medicine, and in ESP experiments. None of the results of these types of investigations were statistically compelling.

Karl Popper led the movement to embrace falsifiability, just mentioned, rather than verifiability, which was a fundamental tenet of the Logical Positivists. Popper was one of the most influential philosophers of science of the last century, and falsifiability certainly ranks as one of the most important elements in the modern conduct of science. Its dominance over verifiability results from the fact that no number of positive experimental outcomes can ever absolutely confirm a scientific theory, but a single counter-example is decisive: it shows that the theory being tested is false, or at least incomplete. Instead of saddling scientists with the impossible task of providing absolute proof, Popper considered a theory to be tentatively “true” if ample opportunity and means had been provided to disprove it, but no one was able to do so. Falsifiability became Popper’s criterion of “demarcation” between what is and is not genuinely scientific: a theory could be considered scientific only if it were falsifiable. This emphasis differed from that of the Logical Positivists, who focused instead on verifiability – testing the truth of statements by showing that they could be verified.
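The logical asymmetry here can be illustrated with a small sketch of my own (an analogy, not anything Popper wrote): testing the conjecture “every odd integer greater than 1 is prime” by searching for a counter-example. Each passing case proves nothing, but the first failing case refutes the conjecture outright.

    def is_prime(n: int) -> bool:
        """Trial-division primality test."""
        if n < 2:
            return False
        return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

    # Conjecture: every odd integer greater than 1 is prime.
    for n in range(3, 100, 2):
        if is_prime(n):
            print(f"{n} is consistent with the conjecture - but verifies nothing.")
        else:
            print(f"{n} falsifies the conjecture. One counter-example is decisive.")
            break

Three confirmations (3, 5, 7) print before 9 refutes the conjecture; no run of confirmations, however long, could have established it.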

Popper demonstrated his position with the example of the rising sun. Although there is no way to prove that the sun will rise every morning, we can hypothesize that it will do so. If it failed to rise on even a single morning, the theory would be disproved. Barring that, it is considered to be provisionally true. The longer a theory retains this provisional status, the more attempts are made to test it, and the more times those tests fail to disprove it, the greater its claim to truth. The “sun-will-rise” theory has been well tested many billions of times, and we have no reason to anticipate that circumstances will arise that will cause it to stop happening. So we have a very good reason to believe that this theory “probably” represents reality. This argument has some weaknesses (primarily that it is not deductively ironclad, just as no inductive judgment can be). But because the theory has never failed, no stronger proof suggests itself, it is pragmatically useful, and it is statistically unlikely to be disproved, it is a very good operating theory.
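As an aside (mine, not Popper's), the classic way to put a number on this kind of inductive confidence is Laplace's “rule of succession”: after n successes and no failures, the estimated probability of success on the next trial is (n + 1) / (n + 2). A tiny sketch, using a stand-in figure for the number of recorded sunrises:

    def rule_of_succession(successes: int, failures: int = 0) -> float:
        """Laplace's estimate of the probability of success on the next trial."""
        return (successes + 1) / (successes + failures + 2)

    print(rule_of_succession(1_000_000))  # ~0.999999, never exactly 1

The estimate approaches but never reaches 1, which matches the provisional status Popper describes: no finite run of sunrises makes the theory certain.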

In the years since Popper first introduced the idea of falsification, it has been criticized, altered, and enhanced. See the entry in this blog for "Falsifiability vs Verifiability" to learn more about the limits of "naive falsification" and ways around those limits. Also see an amusing YouTube video for a description of the Duhem-Quine Thesis, which addresses other modern concerns with hypothesis testing. One of my blog entries, "Duhem-Quine", briefly describes it, and another, "Criteria of Adequacy", describes several techniques that can be used to determine which of several competing hypotheses that appear equally believable is more likely to be correct.
