# Screening off


## Revision as of 16:02, 4 June 2010

If A is a hypothesis and B and C are two pieces of evidence relating to A, then B is said to **screen off** C from A if P(A|B&C) = P(A|B). That is, if knowing C provides no additional information about A once B is known.

The conditional probability P(A|B) may be viewed as a measure of the degree to which B is dependent on A in one's model of the world; the higher this number, the more strongly the truth of B implies the truth of A. (See [An Intuitive Explanation of Bayesian Reasoning](http://yudkowsky.net/rational/bayes).) Screening off can occur when B depends on A, and C depends on A by way of depending on B, as in the following diagram:

A -> B -> C
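The chain above can be checked numerically. The following is a minimal sketch with made-up probabilities (the specific numbers are illustrative, not from the article): it enumerates the joint distribution of the chain and confirms that P(A|B&C) = P(A|B).

```python
from itertools import product

# Toy model of the chain A -> B -> C, with made-up probabilities.
# C depends on A only by way of B, so B should screen off C from A.
P_A = 0.3                              # P(A)
P_B = {True: 0.8, False: 0.2}          # P(B | A)
P_C = {True: 0.9, False: 0.1}          # P(C | B) -- no direct dependence on A

def joint(a, b, c):
    """P(A=a, B=b, C=c) under the chain factorization P(A) P(B|A) P(C|B)."""
    pa = P_A if a else 1 - P_A
    pb = P_B[a] if b else 1 - P_B[a]
    pc = P_C[b] if c else 1 - P_C[b]
    return pa * pb * pc

def prob(event):
    """Sum the joint probability over all worlds where `event` holds."""
    return sum(joint(a, b, c)
               for a, b, c in product([True, False], repeat=3)
               if event(a, b, c))

p_A_given_B  = prob(lambda a, b, c: a and b) / prob(lambda a, b, c: b)
p_A_given_BC = prob(lambda a, b, c: a and b and c) / prob(lambda a, b, c: b and c)

print(p_A_given_B, p_A_given_BC)  # both ~0.6316: C adds nothing once B is known
```

Any choice of conditional probabilities in the chain factorization gives the same equality; that is exactly what the diagram encodes.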

For example, suppose A is "Proposition X is true", B is "the arguments for X say Y", and C is "experts believe X". Presumably, experts believe X *because* of what the arguments say; thus, while expert belief in X is evidence for X, it is not *additional* evidence for X *over and above* the arguments for X, once one has already ascertained what the latter are. We say that the authority of the experts is *screened off* by the arguments for X.

(Of course, this may not apply if there are other paths from A to C that do not pass through B; if for instance there is good reason to suspect expert belief is correlated with the truth independently of the arguments put forth, then learning the content of those arguments does not screen off the belief as evidence.)
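The caveat can also be verified numerically. In this sketch (again with made-up probabilities), C depends on A directly as well as through B, giving a second path from A to C, and the equality P(A|B&C) = P(A|B) no longer holds:

```python
from itertools import product

# Toy model where C depends on A directly as well as through B,
# i.e. there is a second path from A to C. Probabilities are made up.
P_A = 0.3                              # P(A)
P_B = {True: 0.8, False: 0.2}          # P(B | A)
P_C = {(True, True): 0.95, (True, False): 0.5,
       (False, True): 0.7, (False, False): 0.05}   # P(C | A, B)

def joint(a, b, c):
    """P(A=a, B=b, C=c) under the factorization P(A) P(B|A) P(C|A,B)."""
    pa = P_A if a else 1 - P_A
    pb = P_B[a] if b else 1 - P_B[a]
    pc = P_C[(a, b)] if c else 1 - P_C[(a, b)]
    return pa * pb * pc

def prob(event):
    """Sum the joint probability over all worlds where `event` holds."""
    return sum(joint(a, b, c)
               for a, b, c in product([True, False], repeat=3)
               if event(a, b, c))

p_A_given_B  = prob(lambda a, b, c: a and b) / prob(lambda a, b, c: b)
p_A_given_BC = prob(lambda a, b, c: a and b and c) / prob(lambda a, b, c: b and c)

print(p_A_given_B, p_A_given_BC)  # ~0.632 vs ~0.699: C is no longer screened off
```

Here learning C still shifts the probability of A even after B is known, because the direct A-to-C dependence carries information that does not flow through B.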

Failure to take into account the dependence relationships among various pieces of information can lead to serious errors, as P(A|B) may be much lower than P(A|C). An example of this may be seen in the Meredith Kercher murder case, where A is "Amanda Knox killed Meredith Kercher", B is "Rudy Guede killed Meredith Kercher", and C is "Meredith Kercher was killed". Here, P(A|C) is arguably substantial, since Knox was Kercher's roommate. However, B, which is known to be true (to a high level of certainty), implies C, so that P(A|B&C) = P(A|B); the evidence against Guede thus (approximately) screens off Kercher's death as evidence against Knox. And, in fact, P(A|B) is close to the prior probability P(A) (since there is little connection between Knox and Guede); hence the rest of the evidence against Knox has far less Bayesian significance than the investigators and jurors in the case intuitively assigned to it.

## Main post

## Other posts

- The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom, especially comments here and here