Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?
Yada yada, people think they’re at chance when really they should switch, because $\frac23 > \frac13$, yet they fail to carry out this simple inequality. The trick, really, is that in $\frac23$ of the situations the host has no choice and must open the only possible door; in those two thirds of the situations, you should switch.
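The $\frac23$ vs $\frac13$ claim is easy to check empirically. A minimal simulation sketch (the function name and parameters are mine, purely illustrative):

```python
import random

def play(switch: bool, n_doors: int = 3) -> bool:
    """Play one Monty Hall round; return True if the contestant wins the car."""
    doors = list(range(n_doors))
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host, who knows where the car is, opens a goat door that is
    # neither the contestant's pick nor the car. In 2/3 of rounds
    # (pick != car) there is only one such door: the host has no choice.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
trials = 100_000
stay = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay ~ {stay:.3f}, switch ~ {swap:.3f}")  # roughly 0.333 vs 0.667
```

Staying only wins when the initial pick was right ($\frac13$); switching wins in the remaining $\frac23$, exactly the rounds where the host was forced.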
From (Krauss and Wang 2003) it seems that there are several ways to help people solve this task, including insisting on an exhaustive mental representation of all cases (a link with the Inq instruction of ETR).
Actually this was mentioned previously in (Johnson-Laird et al. 1999), where they argued that errors come from a lack of exhaustivity in the subjects’ representation, as they do not create as many models as there are options. There may be a breach here: ETR argues for two options, this account for three.
Another one for future read: (Tubau and Alonso 2003), maybe (Jiang and Qinglin 2006) (couldn’t find the document).
So… This document is doomed, then?
Pretty much. Or maybe not? I wanted to bridge the gap between this and the IIFD:
$(A \wedge B) \vee C$
There is such a thing as considering two worlds: the one where you’re “right” and the one where you’re not. Suppose you chose door $C$: either the car is behind $C$ or it is not, thus two worlds (not equiprobable, but you can’t choose more than one door), or mental models. For simplicity, let’s assume your worlds state where the prize is *not*: $A$ denotes “not behind $A$”, and so on.
Really, then, all you know is that you chose a door, and either the car is not behind it or it is not behind the others. And you provided your own answer to the question “Where is it not?”
The update says it’s not behind $A$, “confirming” (in the ETR sense) the $A \wedge B$ side. If you are a good ETR reasoner, you then say “oh well, it must be the left side that is true, hence the gift must be here”.
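This two-world update can be sketched mechanically. A hypothetical toy (the representation is mine, not from the ETR literature): hold the disjunction as two models of “where the car is not”, then keep whichever model the host’s evidence confirms.

```python
# You picked door C. The disjunction (A ∧ B) ∨ C is held as two mental
# models, where the label "A" reads "the car is not behind door A".
models = [
    {"A", "B"},  # not behind A and not behind B: the car is behind your door C
    {"C"},       # not behind C: the car is behind A or B
]

evidence = "A"  # the host opens door A: the car is not behind A

# The update "confirms" (in the loose sense used above) any model that
# already contains the new piece of information.
confirmed = [m for m in models if evidence in m]
print(confirmed)  # only the A ∧ B model survives: conclude the car is behind C
```

On this sketch the good ETR reasoner endorses the left disjunct and sticks with their door, which is exactly the sticking prediction discussed next.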
This story, however, predicts people sticking to their initial choice, whereas introspectively people say they’re at chance (citation needed). The current account still has something to say about this: you had two models in mind; they may not have been equiprobable, but since you restricted one and you’re an ETR reasoner, not a statistician, you put them both on a balance and call them equal.
This is a bit shaky. Crucially, there is something to say about this “two worlds” account that should lead to predictions of chance after the update, along the lines of: “you provided your own answer, but actually you don’t know, and when the two worlds are updated you re-insert your answer, disconnected from the previous context, and now you’re at chance”. This is a stretch, but I can see it work. The counter-argument being, of course: “you create three worlds, not two, and then the update closes one, and now you actually are at chance”. That’s a good move from JL.
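JL’s move is mechanical enough to sketch too. A hypothetical toy of the three-model account, assuming one model per possible car location and an update that simply eliminates a model:

```python
# One mental model per door the car could be behind.
models = ["car behind A", "car behind B", "car behind C"]

# The host opens door A: that model is closed outright.
after_update = [m for m in models if m != "car behind A"]

# Two surviving models with no probabilities attached: a reasoner who
# weighs models by count rather than by probability says "fifty-fifty".
print(after_update)  # ['car behind B', 'car behind C']
```

The contrast with the two-world sketch is that here the update removes a model instead of confirming one, so “at chance” falls out directly from counting the survivors.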
The end of storytelling?
Let’s make this a theory maybe?