Detuning Dembski's Design (1 of 2): detachment
Sunday, April 6, 2008

Duration: 07:18 minutes | Upload Time: 2008-02-04 18:18:13
1st part of 2 where I examine William Dembski's book "The Design Inference" as a tool for proving the fine-tuning of the universe. In this video I discuss the design theorists' failure to meet Dembski's criterion of 'detachment' for Specified Complexity. CORRECTIONS: I made two mistakes. 1st: The section where Dembski talks about Detachment is chapter 5 section 3, NOT section 7. 2nd: I claimed that saying that I would deal a Royal Flush and then actually being dealt one would be sufficient to meet Dembski's criterion for design. Actually, the probability of being dealt a Royal Flush with 5 random cards from a shuffled deck is 4*5!/(52!/47!) = 1.54*10^-6, which is greater than his Universal Probability Bound of 10^-150. Using Dembski's system it would be reasonable to conclude that the hand was dealt by chance. So... where there is strong evidence for design, Dembski's method produces a clear false negative. The second part is here: http://youtube.com/watch?v=x9BcN0nyU9k
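A quick numerical check of the corrected figure (a minimal Python sketch, not part of the original description; the 10^-150 value is Dembski's published Universal Probability Bound):

```python
from math import factorial, perm

# Probability of being dealt a royal flush as 5 ordered cards:
# 4 royal flushes, each in any of 5! orders, out of 52!/47! ordered deals.
p_royal = 4 * factorial(5) / perm(52, 5)   # perm(52, 5) = 52!/47!

UPB = 1e-150                               # Dembski's Universal Probability Bound

print(f"P = {p_royal:.3e}")                # ~1.539e-06
print(p_royal > UPB)                       # True: well above the bound, so the
                                           # filter attributes the hand to chance
```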
Comments
urbanelf 2008-02-07 17:35:22
Damn. I messed up again... That should be 4 * 5!/(52!/47!) > UPB. Still, that's a false negative...
trondreitan 2008-02-06 19:11:50
It would be pretty strong evidence that the assertion you made prior to dealing the cards was correct, though. Seems like Dembski's filter is making false negatives as well.
urbanelf 2008-02-06 18:56:44
I see. I look forward to your video. It looks like I made a mistake in my video. I claimed that under Dembski's reasoning you could determine that a poker hand is designed if I told you ahead of time that I was going to deal you a Royal Flush and then you actually were dealt one. Then you could conclude that the deck was stacked. Actually, 4/(52!/47!) = 1.28*10^-8 < UPB, so for Dembski, it would be reasonable to conclude that I didn't stack the deck. It was just random.
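Both figures in this exchange check out numerically, and as the 2008-02-07 correction above notes, each is far greater than the UPB (a small Python sketch, not part of the original thread):

```python
from math import factorial, perm

UPB = 1e-150

# A royal flush dealt in one particular order vs. in any order:
p_fixed_order = 4 / perm(52, 5)                  # ~1.28e-08
p_any_order = 4 * factorial(5) / perm(52, 5)     # ~1.54e-06

# Both probabilities are far above the Universal Probability Bound,
# so Dembski's filter attributes the deal to chance either way.
print(p_fixed_order > UPB, p_any_order > UPB)    # True True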
trondreitan 2008-02-06 18:37:48
Perhaps the worst example would be if I took my 200 dice throws and made a model from them saying that this particular sequence must happen, or happens with great probability under my model. It's a special kind of sequence that needs special consideration, I could claim. If I then said that I now have strong evidence for that model, from the same data, I would be using the data twice.
trondreitan 2008-02-06 18:29:03
Well, it's the probabilistic version of a circular argument. It was brought up in RUU10, but I'm thinking of doing a vid of its own about it. In Bayesian probability, the most common mistake is to use the data, or something from the data, to form the prior. In frequentist statistics, the most common mistake is to evaluate a model with the same data you used for fitting the model.
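To illustrate the frequentist version of that mistake, here is a small sketch (my own example in Python with NumPy, not trondreitan's): a flexible model fitted to pure noise looks good when scored on the same data, and is exposed by fresh data from the same process.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure-noise "data": there is no real signal for any model to find.
x = np.linspace(0, 1, 20)
y_fit = rng.normal(size=20)       # data used to fit the model
y_new = rng.normal(size=20)       # fresh draws from the same process

# A flexible model (degree-9 polynomial) fitted to the noise.
coeffs = np.polyfit(x, y_fit, deg=9)
y_hat = np.polyval(coeffs, x)

# Scoring on the fitting data reuses it and flatters the model;
# held-out data gives an honest assessment.
print("MSE on fitting data:", np.mean((y_fit - y_hat) ** 2))  # small
print("MSE on fresh data:  ", np.mean((y_new - y_hat) ** 2))  # much larger
```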