conditional probability
In probability theory, conditional probability is a measure of the probability of an event occurring given that another event has (by assumption, presumption, assertion or evidence) occurred. (Gut, Allan (2013). Probability: A Graduate Course. 2nd ed. New York, NY: Springer. ISBN 978-1-4614-4707-8.) If the event of interest is A and the event B is known or assumed to have occurred, "the conditional probability of A given B", or "the probability of A under the condition B", is usually written as P(A | B), or sometimes P{{sub|B}}(A) or P(A / B). For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person has a cold, then they are much more likely to be coughing. The conditional probability of coughing by the unwell might then be 75%: P(Cough) = 5%; P(Cough | Sick) = 75%.

The concept of conditional probability is one of the most fundamental and one of the most important in probability theory. (Ross, Sheldon (2010). A First Course in Probability. 8th ed. Pearson Prentice Hall. ISBN 978-0-13-603313-4.) But conditional probabilities can be quite slippery and require careful interpretation. (Casella, George; Berger, Roger L. (2002). Statistical Inference. Duxbury Press. ISBN 0-534-24312-6.) For example, there need not be a causal relationship between A and B, and they don't have to occur simultaneously.

P(A | B) may or may not be equal to P(A) (the unconditional probability of A). If P(A | B) = P(A), then events A and B are said to be independent: in such a case, knowledge about either event does not give information on the other. P(A | B) (the conditional probability of A given B) typically differs from P(B | A). For example, if a person has dengue, they might have a 90% chance of testing positive for dengue. In this case, what is being measured is that if event B (having dengue) has occurred, the probability of A (testing positive) given that B occurred is 90%: that is, P(A | B) = 90%. Alternatively, if a person tests positive for dengue, they may have only a 15% chance of actually having this rare disease, because the false positive rate for the test may be high. In this case, what is being measured is the probability of the event B (having dengue) given that the event A (testing positive) has occurred: P(B | A) = 15%. Falsely equating the two probabilities causes various errors of reasoning such as the base rate fallacy. Conditional probabilities can be reversed using Bayes' theorem.

Conditional probabilities can be displayed in a conditional probability table.
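To make the gap between P(A | B) and P(B | A) concrete, here is a minimal Python sketch. Only the 90% sensitivity and the roughly 15% posterior come from the text above; the prevalence and false-positive rate are assumed illustrative values chosen to be consistent with them.

```python
# A minimal sketch (not from the article): reproduce the dengue numbers above
# from assumed inputs. Prevalence and false-positive rate are illustrative
# assumptions; only the 90% sensitivity is taken from the text.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.05)
print(f"P(positive | dengue) = 0.90, but P(dengue | positive) = {p:.2f}")
# -> P(dengue | positive) = 0.15: equating the two is the base rate fallacy
```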

Definition

(File:Conditional probability.svg|thumb|Illustration of conditional probabilities with an Euler diagram. The unconditional probability P(A) differs from the conditional probabilities: here P(A|B{{sub|1}}) = 1, P(A|B{{sub|2}}) = 0.12 ÷ (0.12 + 0.04) = 0.75, and P(A|B{{sub|3}}) = 0.)
(File:Probability tree diagram.svg|thumb|On a tree diagram, branch probabilities are conditional on the event associated with the parent node. (Here the overbars indicate that the event does not occur.))
(File:Venn Pie Chart describing Bayes' law.png|thumb|Venn Pie Chart describing conditional probabilities)

Conditioning on an event

Kolmogorov definition

Given two events A and B from the sigma-field of a probability space, with the unconditional probability of B (that is, of the event B occurring) being greater than zero, P(B) > 0, the conditional probability of A given B is defined as the quotient of the probability of the joint of events A and B, and the probability of B: (Kolmogorov, Andrey (1956). Foundations of the Theory of Probability. Chelsea.)

P(A \mid B) = \frac{P(A \cap B)}{P(B)},

where P(A \cap B) is the probability that both events A and B occur. This may be visualized as restricting the sample space to situations in which B occurs. The logic behind this equation is that if the possible outcomes for A and B are restricted to those in which B occurs, this set serves as the new sample space.

Note that this is a definition, not a theoretical result. We simply denote the quantity \frac{P(A \cap B)}{P(B)} as P(A \mid B) and call it the conditional probability of A given B.
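As a small illustration of the definition, the following sketch computes a conditional probability for two fair dice by treating B as the new sample space; the particular events A and B are assumptions chosen for the example.

```python
# Sketch: P(A | B) for two fair dice, computed two equivalent ways.
# Events (A: total is 8; B: first die shows 3) are illustrative assumptions.
from itertools import product

omega = list(product(range(1, 7), repeat=2))   # 36 equally likely outcomes
A = {(d1, d2) for d1, d2 in omega if d1 + d2 == 8}
B = {(d1, d2) for d1, d2 in omega if d1 == 3}

p_b = len(B) / len(omega)
p_a_and_b = len(A & B) / len(omega)
print(p_a_and_b / p_b)        # P(A ∩ B) / P(B) = (1/36) / (6/36)
print(len(A & B) / len(B))    # same number: B serves as the new sample space
# both print 0.1666... = 1/6
```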

As an axiom of probability

Some authors, such as de Finetti, prefer to introduce conditional probability as an axiom of probability:

P(A \cap B) = P(A \mid B)\, P(B)

Although mathematically equivalent, this may be preferred philosophically; under major probability interpretations, such as the subjective theory, conditional probability is considered a primitive entity. Further, this "multiplication axiom" introduces a symmetry with the summation axiom for mutually exclusive events: (Gillies, Donald (2000). Philosophical Theories of Probability. Routledge. Chapter 4, "The subjective theory".)

P(A \cup B) = P(A) + P(B) - \cancelto{0}{P(A \cap B)}
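Read as a rule rather than a definition, the multiplication axiom builds joint probabilities out of conditional ones. A minimal sketch with a standard card-drawing example (the example is not from the article):

```python
# Sketch: the multiplication axiom P(A ∩ B) = P(A | B) P(B) used to build a
# joint probability from a conditional one. Illustrative example: drawing
# two aces from a 52-card deck without replacement.
p_first_ace = 4 / 52                 # P(B): first card is an ace
p_second_given_first = 3 / 51        # P(A | B): second is an ace, given B
p_both_aces = p_second_given_first * p_first_ace
print(p_both_aces)                   # 0.004524... = 1/221
```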

As the probability of a conditional event

Conditional probability can be defined as the probability of a conditional event A_B. (Draheim, Dirk (2017). An Operational Semantics of Conditional Probabilities that Fully Adheres to Kolmogorov's Explication of Probability Theory. doi:10.13140/RG.2.2.10050.48323/3.) Assuming that the experiment underlying the events A and B is repeated, the Goodman–Nguyen–van Fraassen conditional event can be defined as

A_B = \bigcup_{i \ge 1} \left( \bigcap_{j < i} \overline{B}_j, B_i A_i \right)

It can be shown that

P(A_B) = \frac{P(A \cap B)}{P(B)}

whenever P(B) > 0, and 0 otherwise. This approach results in a probability measure that is consistent with the original probability measure and satisfies all the Kolmogorov axioms. This conditional probability measure also could have resulted by assuming that the relative magnitude of the probability of A with respect to X will be preserved with respect to B (cf. the Formal Derivation below).

The wording "evidence" or "information" is generally used in the Bayesian interpretation of probability. The conditioning event is interpreted as evidence for the conditioned event. That is, P(A) is the probability of A before accounting for evidence E, and P(A|E) is the probability of A after having accounted for evidence E or after having updated P(A). This is consistent with the frequentist interpretation, which is the first definition given above.
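Under this repeated-experiment reading, A_B occurs exactly when A holds on the first repetition at which B holds. A simulation sketch, with an assumed dice experiment, shows the long-run frequency matching P(A ∩ B)/P(B):

```python
# Sketch: estimate P(A_B) by repeating an experiment until B first occurs and
# recording whether A occurred on that repetition. The experiment (two dice,
# A: total is 8, B: first die shows 3) is an illustrative assumption; the
# long-run frequency should approach P(A ∩ B)/P(B) = 1/6.
import random

def conditional_event_trial():
    while True:                       # repeat the experiment ...
        d1, d2 = random.randint(1, 6), random.randint(1, 6)
        if d1 == 3:                   # ... until B first occurs
            return d1 + d2 == 8       # did A occur on that repetition?

n = 100_000
print(sum(conditional_event_trial() for _ in range(n)) / n)  # ≈ 0.1667
```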

Statistical independence

Events A and B are defined to be statistically independent if

P(A \cap B) = P(A)\, P(B).

If P(B) is not zero, then this is equivalent to the statement that

P(A \mid B) = P(A).

Similarly, if P(A) is not zero, then

P(B \mid A) = P(B)

is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined; moreover, the preferred definition is symmetrical in A and B.

Independent events vs. mutually exclusive events

The concepts of mutually independent events and mutually exclusive events are separate and distinct. The following table contrasts results for the two cases (provided the probability of the conditioning event is not zero).

{| class="wikitable"
|-
!
! If statistically independent
! If mutually exclusive
|-
| P(A \mid B) =
| P(A)
| 0
|-
| P(B \mid A) =
| P(B)
| 0
|-
| P(A \cap B) =
| P(A)\, P(B)
| 0
|}
In fact, mutually exclusive events cannot be statistically independent (unless they both are impossible), since knowing that one occurs gives information about the other (specifically, that it certainly does not occur).
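The contrast in the table can be checked directly. In the sketch below (events chosen as illustrative assumptions), one pair of dice events is independent while another pair is mutually exclusive:

```python
# Sketch contrasting the two notions with two fair dice:
# A: first die is even; B: total is 7  -> independent: P(A ∩ B) = P(A) P(B)
# B: total is 7;        C: total is 8  -> mutually exclusive: P(B ∩ C) = 0
from itertools import product

omega = list(product(range(1, 7), repeat=2))
prob = lambda event: len(event) / len(omega)

A = {o for o in omega if o[0] % 2 == 0}
B = {o for o in omega if sum(o) == 7}
C = {o for o in omega if sum(o) == 8}

print(prob(A & B), prob(A) * prob(B))  # 0.0833... twice: independent
print(prob(B & C))                     # 0.0: exclusive, hence P(B | C) = 0
```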

Common fallacies

These fallacies should not be confused with Robert K. Shope's 1978 "conditional fallacy", which deals with counterfactual examples that beg the question.

Assuming conditional probability is of similar size to its inverse

(File:Bayes_theorem_visualisation.svg|thumb|300px|A geometric visualisation of Bayes' theorem. In the table, the values 2, 3, 6 and 9 give the relative weights of each corresponding condition and case. The figures denote the cells of the table involved in each metric, the probability being the fraction of each figure that is shaded. This shows that P(A|B) P(B) = P(B|A) P(A), i.e. P(A|B) = {{sfrac|P(B|A) P(A)|P(B)}}. Similar reasoning can be used to show that P(Ā|B) = {{sfrac|P(B|Ā) P(Ā)|P(B)}}, etc.)

In general, it cannot be assumed that P(A|B) ≈ P(B|A). This can be an insidious error, even for those who are highly conversant with statistics. (Paulos, J. A. (1988). Innumeracy: Mathematical Illiteracy and its Consequences. Hill and Wang. ISBN 0-8090-7447-8. p. 63 et seq.) The relationship between P(A|B) and P(B|A) is given by Bayes' theorem:
\begin{align}
P(B \mid A) &= \frac{P(A \mid B)\, P(B)}{P(A)} \\
\Leftrightarrow \frac{P(B \mid A)}{P(A \mid B)} &= \frac{P(B)}{P(A)}
\end{align}

That is, P(A|B) ≈ P(B|A) only if P(B)/P(A) ≈ 1, or equivalently, P(A) ≈ P(B).
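A quick numeric check of this identity, reusing the assumed dengue figures from the lead:

```python
# Sketch: P(B | A) / P(A | B) equals P(B) / P(A). Values reuse the assumed
# dengue example: B = has dengue (P(B) = 0.01), A = tests positive.
p_b = 0.01                         # assumed prevalence
p_a = 0.90 * 0.01 + 0.05 * 0.99    # P(A) by total probability (assumed rates)
p_a_given_b = 0.90
p_b_given_a = p_a_given_b * p_b / p_a   # Bayes' theorem

print(p_b_given_a / p_a_given_b)   # 0.1709...
print(p_b / p_a)                   # same ratio, as the identity requires
```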

Assuming marginal and conditional probabilities are of similar size

In general, it cannot be assumed that P(A) ≈ P(A|B). These probabilities are linked through the law of total probability:

P(A) = \sum_n P(A \cap B_n) = \sum_n P(A \mid B_n)\, P(B_n),

where the events (B_n) form a countable partition of \Omega.

This fallacy may arise through selection bias. (Bruss, F. Thomas (2007). "Der Wyatt-Earp-Effekt". Spektrum der Wissenschaft, March 2007.) For example, in the context of a medical claim, let S{{sub|C}} be the event that a sequela (chronic disease) S occurs as a consequence of circumstance (acute condition) C. Let H be the event that an individual seeks medical help. Suppose that in most cases, C does not cause S, so P(S{{sub|C}}) is low. Suppose also that medical attention is sought only if S has occurred due to C. From experience of patients, a doctor may therefore erroneously conclude that P(S{{sub|C}}) is high. The actual probability observed by the doctor is P(S{{sub|C}}|H).
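A minimal sketch of the law of total probability itself, with an assumed two-event partition (the numbers echo the dengue illustration):

```python
# Sketch of the law of total probability with an assumed two-part partition:
# B1 = has the condition, B2 = does not. All numbers are illustrative.
p_partition = {"B1": 0.01, "B2": 0.99}   # P(B_n); must sum to 1
p_a_given = {"B1": 0.90, "B2": 0.05}     # P(A | B_n)

p_a = sum(p_a_given[b] * p_b for b, p_b in p_partition.items())
print(p_a)   # 0.0585: the marginal P(A), far from P(A | B1) = 0.90
```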

Over- or under-weighting priors

Not taking prior probability into account, partially or completely, is called base rate neglect. The reverse, insufficient adjustment from the prior probability, is called conservatism.

Formal derivation

Formally, P(A | B) is defined as the probability of A according to a new probability function on the sample space, such that outcomes not in B have probability 0 and that it is consistent with all original probability measures. (Casella, George; Berger, Roger L. (1990). Statistical Inference. Duxbury Press. ISBN 0-534-11958-1. p. 18 et seq.; Grinstead and Snell's Introduction to Probability, p. 134.)

Let Ω be a sample space with elementary events {ω}. Suppose we are told that the event B ⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. For events in B, it is reasonable to assume that the relative magnitudes of the probabilities will be preserved. For some constant scale factor α, the new distribution will therefore satisfy:
\begin{align}
&\text{1. } \omega \in B : P(\omega \mid B) = \alpha P(\omega) \\
&\text{2. } \omega \notin B : P(\omega \mid B) = 0 \\
&\text{3. } \sum_{\omega \in \Omega} P(\omega \mid B) = 1.
\end{align}

Substituting 1 and 2 into 3 to select α:

\begin{align}
1 &= \sum_{\omega \in \Omega} P(\omega \mid B) \\
&= \sum_{\omega \in B} P(\omega \mid B) + \cancelto{0}{\sum_{\omega \notin B} P(\omega \mid B)} \\
&= \alpha \sum_{\omega \in B} P(\omega) \\[5pt]
&= \alpha \cdot P(B) \\[5pt]
\Rightarrow \alpha &= \frac{1}{P(B)}
\end{align}

So the new probability distribution is

\begin{align}
&\text{1. } \omega \in B : P(\omega \mid B) = \frac{P(\omega)}{P(B)} \\
&\text{2. } \omega \notin B : P(\omega \mid B) = 0
\end{align}

Now, for a general event A,

\begin{align}
P(A \mid B) &= \sum_{\omega \in A \cap B} P(\omega \mid B) + \cancelto{0}{\sum_{\omega \in A \cap B^c} P(\omega \mid B)} \\
&= \sum_{\omega \in A \cap B} \frac{P(\omega)}{P(B)} \\[5pt]
&= \frac{P(A \cap B)}{P(B)}
\end{align}
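The derivation can be checked mechanically: scale the probabilities of outcomes in B by α = 1/P(B), zero out the rest, and sum over A. A minimal sketch (the sample space and events are illustrative assumptions):

```python
# Sketch of the formal derivation: build the conditioned distribution
# P(ω | B) = P(ω)/P(B) for ω in B, 0 otherwise, then sum it over A.
# Sample space and events (two dice again) are illustrative assumptions.
from itertools import product

omega = list(product(range(1, 7), repeat=2))
p = {o: 1 / 36 for o in omega}                 # original distribution P(ω)
B = {o for o in omega if o[0] == 3}
A = {o for o in omega if sum(o) == 8}

p_B = sum(p[o] for o in B)                     # P(B), so α = 1/P(B)
p_cond = {o: (p[o] / p_B if o in B else 0.0) for o in omega}

print(sum(p_cond.values()))                    # 1.0: a valid distribution
print(sum(p_cond[o] for o in A))               # 0.1666... = P(A ∩ B)/P(B)
```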


References

{{Reflist}}

External links

  • {{MathWorld | urlname=ConditionalProbability | title=Conditional Probability}}
  • F. Thomas Bruss: Der Wyatt-Earp-Effekt oder die betörende Macht kleiner Wahrscheinlichkeiten (in German), Spektrum der Wissenschaft (German edition of Scientific American), Vol. 2, 110–113 (2007).
  • Visual explanation of conditional probability

