Other great examples (mostly submitted by students):
This week we looked at an example of a psychic's 'cold reading' done on live TV. We read James Randi's analysis of this example, which demonstrated a number of things:
1. Psychics use generality to great effect: they offer a huge range of choices in a seemingly specific way ("think of people close to you both living and deceased, 'cause I don't know why I'm getting this..."; water/electricity events).
2. Psychics know how to play the odds, and hence 'see' the most likely options ("...but I was picking up an 'a' or an 'm'...").
3. Psychics aren't always as 'cold' as they seem (here, CM knew that DA had recently lost someone).
4. Psychics don't actually provide information; they extract it from their 'victim' (CM *says* to answer only 'yes' or 'no', but doesn't later abide by that rule).
5. Psychics make a big deal out of small successes (CM gets only one thing, the name, partially right, but repeats it as much as possible; she pretends to know nothing about the victim although she does).
6. Psychics exploit basic emotional needs (CM tells DA 'she wants you to know, that she knows, that you love her').
7. Psychics read body language (CM pauses to see what his reaction is to her guess, and makes other guesses if he doesn't react).
8. Victims often try to help the psychic (DA says "maybe that was coming from Bridget" when CM is completely wrong and guessing wildly).
9. Victims edit the exchanges after the fact to remember the few seeming successes.
10. Psychics aren't successful: CM said "I don't know" 13 times in under 3 minutes, asked him a question every 9 seconds, and got only 2 guesses partially right. That's not a message full of meaning from the afterlife.
The BIOFlex shoes ad. Here we looked at a case of false scientific/medical claims and noted the standard problems with folk-science explanations, anecdotal evidence, and untrustworthy experts. (Here's another analysis of these claims from QuackWatch.)
We looked at examples from past portfolios.
We read a description of an amazing coincidence (Laura Buxton finding a balloon sent off by a different Laura Buxton: http://www.randi.org/jr/07-20-01.html and http://www.randi.org/jr/08-03-01.html), and noted that looking for correlations in data after the fact is bound to turn up such things. The great analogy was the 'incredible coincidence game': shooting an arrow and then drawing the bull's eye around the spot the arrow hits. Here's part of the original story.
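To see why after-the-fact searches are bound to find 'amazing' matches, here's a quick simulation (my own illustrative sketch, not part of the original story) of the classic birthday problem: a match that sounds wildly unlikely for any particular pair of people becomes near-certain once you check every pair in a modest group.

```python
import random

def shared_birthday_rate(group_size=30, trials=10_000, seed=1):
    """Estimate the chance that at least two people in a group
    share a birthday (ignoring leap years)."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        birthdays = [random.randrange(365) for _ in range(group_size)]
        if len(set(birthdays)) < group_size:  # any collision = a 'coincidence'
            hits += 1
    return hits / trials

# For any one specific pair the odds are 1 in 365, yet across a group
# of 30 the chance of *some* match is roughly 70%.
print(shared_birthday_rate())
```

The Laura Buxton story works the same way: one particular balloon reaching one particular girl is astonishing, but with millions of balloons, letters, and chance encounters every year, *some* such match is almost guaranteed.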
Here is a nice article on the methods and challenges of science.
We read out 'Codified Claptrap' by Michael Shermer from Scientific American. He describes problems with biblical numerology and points to some interesting websites that get 'messages' out of non-biblical sources (e.g. Moby Dick, and, my favorite, Bible Code II). Again, very few correct predictions (most of which are postdictions) from huge data sets manipulated until something is found shouldn't be taken as evidence. For good measure, here's an example right from the Bible Code II book. And, for better measure, here's a parody out of War and Peace (look about half way down).
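The basic trick behind such 'codes' is the equidistant letter sequence (ELS): strip a text to its letters, then try every starting position and skip distance until your target word appears. A minimal sketch (the function name and toy text are mine, for illustration) shows how mechanical the search is:

```python
def find_els(text, word, max_skip=50):
    """Return (start, skip) pairs where `word` appears as an
    equidistant letter sequence in `text` (letters only, lowercased)."""
    letters = [c for c in text.lower() if c.isalpha()]
    hits = []
    for skip in range(1, max_skip + 1):
        for start in range(len(letters)):
            if start + skip * (len(word) - 1) >= len(letters):
                break  # word would run past the end of the text
            if all(letters[start + i * skip] == word[i]
                   for i in range(len(word))):
                hits.append((start, skip))
    return hits

# 'doom' lurks in this innocent phrase at every 3rd letter: d..o..o..m
print(find_els("sad moon loom man", "doom"))  # → [(2, 3)]
```

Run against a text the size of Moby Dick with thousands of candidate words and skips, a search like this is statistically guaranteed to 'find' something, which is exactly Shermer's point.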
We looked at a few examples of Urban Legends, taken from here and here. I also read out part of an article from CAUT, which can be found in its entirety here.
I read out the results of a recent poll conducted by Reader's Digest that suggests a majority of Canadians hold unjustified beliefs regarding paranormal phenomena. Also, I mentioned a court case.