The Creation Wiki is made available by the NW Creation Network

Specified information criterion produces no false positives (Talk.Origins)

From CreationWiki, the encyclopedia of creation science
Response Article
This article (Specified information criterion produces no false positives (Talk.Origins)) is a response to a rebuttal of a creationist claim published by Talk.Origins Archive under the title Index to Creationist Claims.

Claim CI111.1:

Specified complexity is a reliable criterion for detecting design. The complexity-specification criterion successfully avoids false positives -- in other words, whenever it attributes design, it does so correctly.

Source: Dembski, William A., 2002. No Free Lunch, Lanham, MD: Rowman & Littlefield, pp. 24-25.

CreationWiki response: (Talk.Origins quotes in blue)

The problem here is that T.O. conflates several distinct cases and treats them as one. If we specify which cases we are talking about, we can address the objection more effectively. For example, there are many instances where Dembski himself says it is inappropriate to apply the design filter; complaining that a tool doesn't work where the tool's designer said it won't work is a little foolish. In any case, here are two distinct ways that applying the explanatory filter can go wrong:

  1. We know that we don't know enough for a probabilistic model.
  2. We think we know enough for a probabilistic model, but we don't.

The first means that we cannot apply the explanatory filter at all; you cannot claim to be using the explanatory filter while violating its rules. The second is simply a general problem with empirical inquiry: there is not a single empirical test that does not suffer from it. To fault ID on that count is ludicrous, as we would have to throw out all scientific work if we accepted only conclusions where we knew with 100% certainty that we had the whole story.
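The distinction between these two failure modes can be made concrete with a toy sketch of the filter's decision procedure. This is illustrative only: the function name, inputs, and structure are assumptions made for this example and are not Dembski's formalism (in particular, the specification test is collapsed into a comment for brevity). The probability bound is Dembski's published universal probability bound of 10^-150.

```python
# Toy sketch of the "explanatory filter" decision procedure described above.
# Names and structure are illustrative assumptions, not Dembski's formalism.

UNIVERSAL_PROBABILITY_BOUND = 1e-150  # Dembski's universal probability bound

def explanatory_filter(event_probability, explained_by_law, model_is_known):
    """Return a tentative causal category for an event, or refuse to run.

    event_probability: probability of the event under the chance hypotheses
    explained_by_law:  True if a law-like regularity accounts for the event
    model_is_known:    True if we have an adequate probabilistic model
    """
    # Failure mode 1 above: we KNOW we lack a probabilistic model, so the
    # filter simply does not apply and cannot produce any verdict.
    if not model_is_known:
        raise ValueError("no probabilistic model: the filter does not apply")

    if explained_by_law:
        return "regularity"   # necessity / natural law
    if event_probability > UNIVERSAL_PROBABILITY_BOUND:
        return "chance"       # not improbable enough to rule out chance
    return "design"           # small probability (plus, in the full
                              # formalism, an independent specification)

# Failure mode 2 above (we think the model is adequate but it is not)
# cannot be caught inside the filter at all: the inputs themselves are
# wrong, which is the ordinary fallibility of any empirical method.
```

The point of the sketch is that the first failure mode is caught before the filter runs, while the second is invisible to it by construction, as it is a defect in the inputs, not in the procedure.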

Complexity-specification allows false positives because it does not consider the combination of regularity and chance acting together

This is simply false. The probabilistic model is based on the laws governing the situation.

and it does not consider unknown causes

This reminds me of the Dilbert cartoon where the boss wants Dilbert to factor in both known and unknown delays for the project in his time estimates.

Specific examples of false positives are irreducibly complex structures for which plausible evolutionary origins have been found.

What counts as plausible? Just because you have a thought experiment that "works"? How exactly would a thought experiment ever be shown to be implausible? A better standard would be evidence that such an event could actually occur; even if we cannot simulate it directly, perhaps at least some sort of computational model. But no, instead we just get thought experiments.

Another false positive is canals on Mars. Percival Lowell saw that many Martian canals meet at each of several points. The odds of this happening by chance, he calculated, are less than 1 in 1.6 × 10^260, proving that Mars must be inhabited (Lowell, 1907). We now know that the canals were optical illusions caused by the human mind connecting indistinct features.

So, in 1907, someone who knew nothing about the design inference calculated, not from physical features but from illusory ones, on a planet for which we had no probabilistic model, that there must be design. Impressive.

Dembski himself admitted the possibility of error in the same book in which he claimed reliability: "Now it can happen that we may not know enough to determine all the relevant chance hypotheses. Alternatively, we might think we know the relevant chance hypotheses, but later discover that we missed a crucial one. In the one case a design inference could not even get going; in the other, it would be mistaken. But these are the risks of empirical inquiry, which of its nature is fallible. Worse by far is to impose as an a priori requirement that all gaps in our knowledge must ultimately be filled by non-intelligent causes." (Dembski 2002, 123)

This goes back to our list of problematic ID scenarios above.

What Dembski fails to appreciate is that his complexity-specification criterion imposes an a priori requirement that all gaps must be filled by supernatural causes.

This is the most legitimate complaint, but it is still incorrect on two counts: (a) what Dembski infers is not supernatural causes but intelligent causes. Supernatural causes would likely be intelligent, but there are numerous intelligent causes that are not supernatural; if you type a letter to a friend, that letter has an intelligent cause, but unless an angel asked you to write it, not a supernatural one. (b) The criterion does not fill all gaps with intelligent causes, only the ones that exhibit a high degree of specified complexity.

I can see that a physically anomalous event might set off the explanatory filter at first, when in fact it simply means we need to investigate the phenomenon further. However, the explanatory filter is not applied to events that are immediately anomalous; it is generally applied only to events that are well understood but still exhibit a high degree of specified complexity. The worry that the filter could be used to claim that every new phenomenon has an intelligent cause is understandable but unfounded, because (a) all it would take to nullify such an inference is to show that the phenomenon occurs with law-like regularity, and (b) this is not the type of event for which the explanatory filter was designed in the first place.

What the explanatory filter does say is that when a well-known phenomenon exhibits specified complexity, it is a valid inference that an intelligent cause lies behind that phenomenon. This does not mean the hypothesis can never be invalidated, only that the inference is valid. The explanatory filter is simply a method for determining whether an intelligent cause is a valid inference from the facts; it is a tool like any other tool. If being 100% accurate in all cases and never having an inference refuted were the standard for scientific inquiry, science would never have gotten off the ground in any field.