Varieties of Ockham's Razor, or, Using Ockham's Razor to Shave Itself
Sun, May 25, 2008 - 12:18 PM
This paper will explore a variety of formulations of a well-known, and often misunderstood, principle that has arisen within the philosophy of science, called Ockham's Razor. So named because of its most famous proponent, William of Ockham, the most popular version of the "razor" suggests that, all things being equal, the simplest explanation is most likely to be true. My goal is to explore some of the range of understandings of this oft-misrepresented principle. In the course of this exploration it will be necessary, before responding to a (necessarily non-exhaustive) variety of formulations of this principle of parsimony, to convey some notion of the impetus for developing such a principle in the first place. Having provided the catalyzing agent for the development of Ockham's Razor (the underdetermination of theory choice), I will begin the exploration with what I take to be a not uncommon account (within the philosophy of science) of Ockham's Razor as a principle that delivers truth. I will follow this with what I take to be a slightly more plausible account of Ockham's Razor as truth-finding efficiency, and will conclude with what I see as the strongest account of the 'razor' as a principle of anti-superfluity, as opposed to a strict principle of anti-quantity.
It seems to me that any attempt to explain the subtleties of Ockham's Razor must begin with a discussion of the philosophical problem that it is meant to address. Very simply put, the purpose of Ockham's Razor is to serve as a method for deciding between contending scientific theories. Optimally, the 'razor' will cut away the bad theory (or theories) and leave the best theory remaining, which the scientist is then supposed to be (relatively) confident is the most accurate choice. To put this into philosophical terms, we would say that Ockham's Razor is a heuristic device for dealing with situations of underdetermination in theory choice (i.e., a tool for choosing between competing theories). The admonishment not to multiply entities beyond necessity (in cases of theory choice) is intended to limit the components of a theory to only those needed to explain the situation (Ockham's Razor will slice off the excess baggage). For instance (to oversimplify things at the start), imagine that you are sitting inside a room with the door slightly ajar. The door suddenly slams shut. You open the door and there are two people in the other room. You ask them about the door having closed, and one answers that it was just the wind, while the other gives an elaborate, Rube-Goldberg-like explanation of invisible mechanical contraptions that had been set up and were required to shut the door. You notice a strong wind coming through the room. Both answers supply possible theories as to why the door shut, but one seems to have extra, unnecessary, and unverifiable components. We might say that, while the second explanation does explain the door closing, it lacks explanatory force, and I assume that in most cases you would tend to trust that it was the wind that closed the door. This is a vastly oversimplified example of a theory choice as it might be made with reference to Ockham's Razor as a guiding principle. However, over the course of this article we will see that this seemingly 'simple' principle can be understood in quite a variety of ways.
Reginald O. Kapp, in an article called "Ockam's Razor and the Unification of Physical Science", makes what I see as the strongest case for what I would call the "simplistic simpleton" version of Ockham's Razor. This understanding of the 'razor' assumes that the simplest (and truest) explanation is always the one that literally contains the fewest entities (as long as there is no observation which directly falsifies it). He supports this view with what I see as a fairly accurate assessment of the history of what we might call "scientific progress". To justify this view Kapp develops a very helpful notion, which he calls the "Cosmic Statute Book", which is supposed to contain all of the 'statutes' or 'laws' by which the entirety of the universe is governed. The interesting thing that Kapp points out with regard to this Cosmic Statute Book is that the number of items we assume are in it has been dwindling as science has 'advanced' through the years. What this means (and I don't see this as too controversial) is that what we used to think were separate laws and separate domains of science and of explanation are coming to a convergence (or, as Kapp would say, the physical sciences are becoming more and more unified). In other words, domains of science that we previously considered separate or unique have more and more been shown to have common underlying explanations. Perhaps the most straightforward and suggestive example (the number of examples that could be given here is vast) is the continued unification of fundamental forces, and probably the most well-known such unification concerns the previously separate explanations of electricity and magnetism. Once considered two different realms of science, explained by two different theories, they are now known to be explained by a single theory: electromagnetism. At the time Kapp wrote the article (1958) this was the extent to which that particular example could be drawn. However, as time has passed and physics has progressed, this unification process has continued. Physicists now recognize four fundamental forces of nature (gravity, electromagnetism, the strong nuclear force, and the weak nuclear force). A Nobel Prize has since been awarded for the unification of electromagnetism with the weak nuclear force (the electroweak theory), and proposed Grand Unified Theories (G.U.T.s) aim to go further and unify the electroweak force with the strong nuclear force. If such a unification succeeds, three of the four known forces of nature will fall under a single framework, leaving only gravity unaccounted for (the goal of those searching for a T.O.E., or 'Theory of Everything', is to develop such a 'final' unification). Based on this account, it seems that Kapp is indeed correct in his assertion that the progress of physical science does appear to be leading to an Ockham-like reduction of the Cosmic Statute Book to as few entities as are necessary to explain all of the known phenomena of the universe. On this account, the only items remaining in a once hefty tome are a couple of force laws and a few constants of nature. What seems to be occurring here is that "[b]y the process of unification the whole of physical science is gradually being fashioned into one complete and consistent structure of thought in which the various parts bear a logical relation to each other."
Out of this largely historical account, Kapp wants to draw a general moral about the validity of Ockham's Razor as a truth-finding agent. For Kapp, the 'razor' is not merely a tool, but a necessary path that the scientist must follow, for "when more than one explanation of an observation is available one must provisionally choose the one that involves the least number of assumptions." Kapp states this as the standard view of Ockham's Razor, where the term 'provisionally' is supposed to indicate the common view that theory choices made via Ockham's Razor are only tentative, held until something better comes along as a replacement. However, Kapp wants to make an even stronger claim than this: "I wish to raise it [Ockham's Razor] from a mere rule of procedure to one of the great universal principles to which the whole of the physical world conforms." He proceeds to elaborate this principle as the Principle of Minimum Assumption, which is formulated as follows: "In physics the minimum assumption always constitutes the true generalization."
I must admit that I have very little sympathy for Kapp's principle, and I suspect that the majority of scientists nowadays have little sympathy for it either (though I might be surprised). But perhaps it would be best to explain a bit about what Kapp means when he uses the term 'minimum assumption'. A minimum assumption, according to Kapp, is one that is non-specific. For instance, Newton's laws of mechanics are much less specific (and thus more general) than laws which would solely describe the behaviors of planets or pendulums or vacuums, etc., because they apply to all of those more specific instances. In this sense, it would require more assumptions to consider each law as separate than it would to consider a unified law that accounted for them all. By a move that is entirely unclear to me, Kapp seems simply to assert that (what I took to be) the quite useful concept of the 'Cosmic Statute Book' will eventually be found to contain NO entries (that's right, ZERO). I can understand what seems to be the goal of many physicists, to unify existing theories, but I cannot imagine unifying them to naught. Wouldn't there have to be at least one entry in the 'Cosmic Statute Book', or are we merely dealing with pure chaos? Well, just to make Kapp's position quite clear, he says, in response to criticisms of his position, that he is indeed putting forth the view that in the universe, and in physics, given the principles of logic, "anything goes."
Taking a slightly less dogmatic position than Kapp on the truth-finding capabilities of Ockham's Razor, Kevin T. Kelly develops an account of Ockham's Razor as truth-finding efficiency, rather than truth-finding necessity. Kelly's account requires a fair amount of mathematical analysis, but is uniquely persuasive because of this. Kelly, unlike Kapp, seems to take seriously the "question [of] how a fixed bias toward simplicity helps science find the true theory." Where Kapp seems to take for granted that the simplest theory will always be the right one, Kelly is sympathetic to the idea that the universe may not have a natural bias toward always producing the greatest simplicity. Rather than the "simplistic simpleton" conception that Ockham's Razor always points to the simplest and correct theory, Kelly supports the view that Ockham's Razor is the most efficient strategy for approaching the truth. And while I will not, in the end, find his approach satisfactory, it is difficult to deny its overall logic.
What Kelly proposes as truth-finding efficiency is the method of theory selection that minimizes the number of retractions, or reversals of opinion, on the way to the correct theory. While there is no hope, in the context of this paper, of getting across the relevant mathematics required for this view, I think that the basic idea is still understandable. The example that is most consistent throughout Kelly's approach is that of a marble dispenser. The situation is such that the marble dispenser will produce an unknown (but finite) number of marbles and will do so at unknown times. Hence, we must develop a theory to predict the remaining number of marbles that will be dispensed, with an underabundance of evidence on which to base such a decision (and hence an underdetermination of theory choice). The solution that Kelly presents is to follow Ockham's Razor by always choosing the answer which states that no new marbles will be dispensed. This solution, he posits, will remain accurate until such time as it is proven false by the dispensing of another marble, at which point a new theory choice will have to be made. Kelly's assertion is that if we always continue to take the view that no new marbles will appear (which he seems to be claiming is the simplest answer) we will face the fewest retractions of our theory, and hence will always move closer and closer to the truth. "Simplicity doesn't indicate truth", Kelly points out, "but it minimizes reversals along the way. That's enough to explain the unique connection between simplicity and truth, but it doesn't promise more than can be delivered [as in Kapp], namely, a philosophical warranty against more twists and bumps in the future." So, by this account, we are not to accept Ockham's Razor on the basis of a simple-minded acceptance that it necessarily leads to truth, but rather on the mathematical basis that it is the strategy of theory selection that is most efficient (in terms of minimizing reversals) on the road toward truth. To put it another way, Kelly's method attempts to restrain scientists, to as great a degree as possible, from changing their minds 'unnecessarily' (necessity being defined in terms of the number of possible retractions or reversals of opinion in theory choice). Kelly does a great job of justifying his approach to Ockham's Razor by showing that it does indeed produce the fewest reversals. What he fails to justify is that minimizing reversals should be considered the best or preferred criterion of theory selection.
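To make the retraction-counting idea concrete, here is a minimal sketch in Python. It is my own illustration rather than Kelly's formalism: it compares the Ockham strategy, which always conjectures that no further marbles are coming, against a hypothetical rival (a "hedger" with a five-step waiting rule that I have invented purely for contrast) that keeps predicting one more marble until the dispenser has been quiet for a while.

```python
def count_retractions(strategy, emission_times, horizon):
    """Count how often `strategy` changes its announced guess for the total
    number of marbles over `horizon` time steps. A marble appears at each
    step listed in `emission_times`."""
    history = []            # True at steps where a marble appeared
    retractions = 0
    previous_guess = None
    for t in range(horizon):
        history.append(t in emission_times)
        guess = strategy(history)
        if previous_guess is not None and guess != previous_guess:
            retractions += 1
        previous_guess = guess
    return retractions

def ockham(history):
    # Ockham strategy: conjecture that the marbles seen so far are all there will be.
    return sum(history)

def hedger(history):
    # Hypothetical rival: keep conjecturing "one more is coming" until the
    # dispenser has been quiet for five steps, then fall back to the Ockham answer.
    seen = sum(history)
    quiet = 0
    for observed in reversed(history):
        if observed:
            break
        quiet += 1
    return seen if quiet >= 5 else seen + 1

# Three marbles appear at steps 2, 5, and 9; nothing afterwards.
emissions = {2, 5, 9}
print(count_retractions(ockham, emissions, horizon=20))   # 3 retractions
print(count_retractions(hedger, emissions, horizon=20))   # 4 retractions
```

Both strategies eventually converge on the true answer of three marbles, but the rival pays for its extra conjecture with an extra retraction. This is the sense in which, on Kelly's account, the Ockham strategy is the most efficient road to the truth, not a guarantee of reaching it without further surprises.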
The final account of Ockham's Razor that I would like to address in this paper attempts to move beyond the 'simple' notion of quantity that is so prevalent in almost all discussions of the 'razor'. I have always found the focus on quantity in this discussion incredibly unappealing. I have never understood why the number of entities in an explanation should give any indication of the relative truth of a theory. It seems to me that the universe is just as capable of complexity as it is of simplicity (and even if simplicity is the norm, there seems to be no reason why it is always the necessary outcome). Rare, unlikely, and even complex events occur all of the time. This is why I found it so attractive to encounter someone attempting to deal with Ockham's Razor in a way that addresses what I see as a fundamental error in the use of 'simplicity' as a criterion for theory selection.
E.C. Barnes makes what I see as a critical distinction between formulations of Ockham's Razor. The common view, Barnes says, is what he calls the 'anti-quantity principle' (AQP). This view is exemplified by the formulations I have discussed so far (among others), and its goal is merely to minimize the number of entities in a given theory. The alternative view that Barnes proposes is what he calls the 'anti-superfluity principle' (ASP), which is focused not on the number of entities in a theory, but rather on the role those entities play in it. Rather than selecting the theory with the fewest entities, the scientist who follows the ASP would prefer the theory with no superfluous entities, where superfluous entities are those with no explanatory value in relation to the question at hand. It is important to note, as Barnes does, that the AQP entails the ASP, but the ASP does not entail the AQP. What this means is that a principle of 'anti-quantity' implicitly contains an admonishment to remove superfluous components, whereas a strict anti-superfluity principle does not imply that the theory with the fewest components is best. Barnes' article, then, is an attempt to show that we can cleave the AQP away altogether. Hence the second title of my paper, 'Using Ockham's Razor to Shave Itself'. What I intend to imply by this is that if the standard assumption is that Ockham's Razor refers to the AQP (which entails the ASP), and if we can successfully show that Ockham's Razor only requires the ASP, then we will have, in a sense, successfully applied Ockham's Razor to itself, removing the superfluous assumption that the quantity of theoretical components is significant to the choice between theories in science.
To reemphasize his position, Barnes posits that:
"the function of a razor is to detach useless addenda (e.g. whiskers) leaving essential structure otherwise intact. Likewise, the old injunction that one should ‘not multiply entities beyond necessity’ can be understood as articulating the ASP rather than the AQP…One is free, as far as the ASP is concerned, to postulate as many entities as one likes in the explaining of phenomena, so long as one makes sure that they are all needed."
An example from my own academic career that applies here comes from quantum physics and concerns an observed phenomenon known as neutron beta decay. This takes place when a free neutron spontaneously decays into three other, less massive particles (a proton, an electron, and an antineutrino). The simplified model of this would look something like: n → p + e⁻ + ν̄. When one observes such particle processes there are certain conservation laws that one looks for to see if they are 'satisfied' in the process (conservation of energy, charge, spin, etc.). These are all satisfied adequately in the case of neutron beta decay (as one would expect). However, the question of how to model the process was quite different from the (somewhat) simpler issue of observing it. Presumably using Ockham's Razor as a guide, Enrico Fermi posited the simplest explanation: the neutron simply splits into the three 'daughter' particles. There didn't appear to be anything particularly wrong with Fermi's explanation. It did explain neutron beta decay (understanding, of course, that my treatment of his explanation is vastly oversimplified). However, despite its simplicity, Fermi's model of beta decay was wrong, and the reason it was wrong is that it didn't posit enough entities. Other theories of beta decay included an entity 'mediating' the decay (literally called an intermediate vector boson). This 'W particle' was posited as one of the mediating particles of the weak force (which I referred to earlier in the paper as one of the four fundamental 'forces of nature'). Observations eventually confirmed that the W particle (and the weak force) actually existed and indeed acted in the theorized role as a mediator of particle decays. In my exploration of this topic I communicated with literally dozens of physicists, and while there was not 100% agreement on all aspects of beta decay, there was universal agreement that Fermi had not posited enough entities to correctly answer the question of theory choice in beta decay.
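To make the contrast between the two models concrete, they can be written schematically (a rough sketch at the level of whole particles, suppressing the quark-level detail a full treatment would include):

    Fermi's contact model:  n → p + e⁻ + ν̄          (the neutron decays directly into the three daughters)
    W-mediated model:       n → p + W⁻, then W⁻ → e⁻ + ν̄   (the decay proceeds through an intermediate vector boson)

The second picture contains one more entity than the first, and that extra entity is exactly what later observation vindicated.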
I find the example of neutron beta decay to concisely represent the lack of necessity for an anti-quantity interpretation of Ockham's Razor. The inclusion of W particles in a theory of beta decay posited more entities than the competing theory, but those entities were not superfluous. The W particle could, theoretically at the time, be tested for and was thus falsifiable, and, within the framework of the larger theory under which it was developed (the weak nuclear force), it was a necessary component. I must admit that along the way I, myself, tended to have the 'simplistic' urge to say that it would just be a lot simpler and easier if Fermi had been right (which shows that the anti-quantity principle is, on some level, intuitive, which perhaps explains its popularity as the common view of Ockham's Razor). We tend to want to simplify things as much as possible, and I wonder if this isn't merely a psychological or aesthetic preference. We tend to want to make the universe appear far less messy than it really is, and perhaps the AQP is merely a further manifestation of that trend. However, my contention (in agreement with Barnes), as exemplified in the above history of beta decay, is that Ockham's Razor can use itself to shear off the 'useless addenda' of the AQP, leaving behind only the necessary, fleshy substance of the ASP, which bars only superfluous entities (the true enemies of scientific theory) and is willing to accept complicated and simple theories alike, as long as all the parts are necessary to tell the story.
Barnes, E.C. "Ockham's Razor and the Anti-Superfluity Principle." Erkenntnis, Vol. 53 (2000), pp. 353-374.
Kapp, Reginald O. "Ockam's Razor and the Unification of Physical Science." The British Journal for the Philosophy of Science, Vol. 8, No. 32 (Feb 1958), pp. 265-280.
---. "Reply Note by G. Schlesinger." The British Journal for the Philosophy of Science, Vol. 11, No. 44 (Feb 1961), pp. 329-331.
Kelly, Kevin T. "Justification as Truth-Finding Efficiency: How Ockham's Razor Works." Minds and Machines, Vol. 14 (2004), pp. 485-505.