Error in book
Opened this issue · 5 comments
On page 4 of the book "An Introduction to Bayesian Data Analysis for Cognitive Science", I find the statement "The probabilities of all possible events in the entire sample space must sum up to 1."
I do not like this statement very much, because it would not even be true if "events" were replaced by "elementary events" (since, for a continuous distribution, the sum of the probabilities of all elementary events is 0).
Yes, you are absolutely right. I have fixed this in the next revision of the book. Do you agree with this revision? The @'s refer to bibliography references; you won't see them resolved in this source text:
Both the frequency-based and the uncertain-belief perspective have their place in statistical inference, and depending on the situation, we are going to rely on both ways of thinking. Regardless of these differences in perspective, the probability of an outcome happening is defined to be constrained in the following way. For now, we consider discrete outcomes, such as obtaining a 6 when tossing a six-sided die. The statements below are not formal statements of the axioms of probability theory; for more details (and more precise formulations), see @RossProb or @kolmogorov2018foundations. Another formal presentation is in @resnick2019probability.
- The probability of an outcome must lie between 0 and 1, where 0 means that the outcome is impossible and cannot happen, and 1 means that the outcome is certain to happen.
- For any two mutually exclusive outcomes, the probability that one or the other occurs is the sum of their individual probabilities.
- Two outcomes are independent if and only if the probability of both outcomes happening is equal to the product of the probabilities of each outcome happening.
- The probabilities of all possible outcomes in the entire sample space must sum up to 1.
The above definitions are based on the axiomatic definition of probability by @kolmogorov2018foundations.
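To make these statements concrete (just for this discussion, using the six-sided die already mentioned above): for a fair die,

$$
P(1) = \cdots = P(6) = \tfrac{1}{6}, \qquad
P(1 \text{ or } 2) = \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{3}, \qquad
\sum_{i=1}^{6} P(i) = 1,
$$

and for two independent tosses, $P(6 \text{ on the first toss and } 6 \text{ on the second}) = \tfrac{1}{6} \times \tfrac{1}{6} = \tfrac{1}{36}$.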
I would keep the term "event" in the first three statements, especially in the 3rd one, because outcomes cannot be independent.
Hence, both terms, "outcome" and "event", should be defined. Events are used later in Section 1.2, so readers should know that they are subsets of the sample space (while outcomes are elements of the sample space). In strict terminology, outcomes would not have a probability; only the corresponding elementary events would.
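As a concrete illustration of the distinction (my own example, not from the book): for a fair die, $\Omega = \{1, 2, 3, 4, 5, 6\}$; the outcome $6$ is an element of $\Omega$, the elementary event $\{6\}$ is the corresponding singleton subset with $P(\{6\}) = \tfrac{1}{6}$, and "the roll is even" is the non-elementary event $\{2, 4, 6\}$ with $P(\{2, 4, 6\}) = \tfrac{1}{2}$.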
I think that then I have to introduce the idea of a power set. Let me think about this again.
@JaakobKind I rewrote section 1.1. I think this version is accurate. Let me know if there is a mistake:
http://bruno.nicenboim.me/bayescogsci/ch-intro.html#introprob
Thanks.
I rewrote it again following further comments from another mathematician, so the version linked above is now outdated. Here is the revised text.
## Probability {#introprob}
Informally, we all understand what the term \index{Probability} probability means. We routinely talk about things like the probability of it raining today. However, there are two distinct ways to think about probability. One can think of the probability of something happening with reference to the \index{Frequency} frequency with which it might occur in repeated observations. Such a conception of probability is easy to imagine in cases where something can, at least in principle, occur repeatedly.
An example would be obtaining a 6 when tossing a die again and again. However, this frequentist view of probability is difficult to justify when talking about one-of-a-kind things, such as earthquakes; here, probability expresses our uncertainty about the earthquake happening.
Both the frequency-based and the uncertain-belief perspective have their place in statistical inference, and depending on the situation, we are going to rely on both ways of thinking.
The probability of something happening is defined to be constrained in the way described below. A concrete example of "something happening" is obtaining nine correct answers when we ask a subject a set of questions in an experiment.
Keep in mind that different textbooks have slightly different ways of presenting the underlying structure of what constitutes a \index{Probability space} probability space (defined below).^[Thanks go to Philip Loewen for his detailed suggestions on improving the presentation without descending into formalism. Any errors that remain are of course due to the authors.]
In the presentation that follows, we assume basic familiarity with set theory.
The formal axioms of probability that will be presented below involve three ingredients: a set $\Omega$ of possible outcomes, called the sample space; a collection $F$ of subsets of $\Omega$, called the event space; and a function $P$ that assigns a probability to each event in $F$.
When the sample space $\Omega$ is finite, the event space $F$ can be taken to be the collection of all subsets of $\Omega$ (its power set). In general, the event space $F$ must satisfy the following conditions:
- Both the empty set and the universal set ($\Omega$) belong to the event space $F$.
- If $E$ is an event, then so is the complement of $E$.
- For any list of events $A_1, A_2, \ldots$ (finite or infinite), the phrase "$A_1$ or $A_2$ or $\ldots$" describes another event.
For our purposes here, it suffices to rely on the intuition gained by considering the case where the sample space $\Omega$ is finite and the event space $F$ contains every subset of $\Omega$.
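As an aside for this thread (not proposed book text): a minimal Python sketch, assuming a made-up three-element sample space, showing that taking $F$ to be the power set of $\Omega$ satisfies the closure conditions listed above.

```python
from itertools import chain, combinations

# Hypothetical three-outcome sample space, purely for illustration.
omega = frozenset({1, 2, 3})

def power_set(s):
    """Return all subsets of s: the largest possible event space for a finite Omega."""
    items = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(items, r)
                                         for r in range(len(items) + 1))}

F = power_set(omega)

# Both the empty set and the universal set Omega belong to F.
assert frozenset() in F and omega in F

# If E is an event, then so is its complement Omega \ E.
assert all((omega - E) in F for E in F)

# For any events A1, A2, the event "A1 or A2" (their union) is also in F.
assert all((A | B) in F for A in F for B in F)

print(f"{len(F)} events for {len(omega)} outcomes")  # 2**3 = 8
```

For a finite sample space with $n$ outcomes, the power set contains $2^n$ events, which is why the finite case gives all the intuition needed here.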
To make the above abstract concepts more concrete, consider again the situation where we conduct an experiment in which we ask subjects to respond to a set of questions. The sample space $\Omega$ contains all the possible results of the experiment, and every subset of $\Omega$ is an event.
When we conduct an experiment, if we get a particular outcome like nine correct answers, then every event (every subset of $\Omega$) containing that outcome is said to have occurred.
The probability axioms refer to the sample space $\Omega$, the event space $F$, and a function $P$ that assigns a real number to every event in $F$:
- For every event $E$ in the event space $F$, the probability $P(E)$ is a real number between $0$ and $1$.
- The event $E = \Omega$ belongs to $F$, and $P(\Omega) = 1$.
- If the events $A_1, A_2, A_3, \ldots$ are mutually exclusive (in other words, if no two of these subsets of $\Omega$ overlap), then the probability of the event "one of $A_1$ or $A_2$ or $A_3$ or $\ldots$" is given by the sum of the probability of $A_1$ occurring, of $A_2$ occurring, of $A_3$ occurring, $\ldots$ (this sum could be finite or infinite).
Together, the triplet $(\Omega, F, P)$ is called a probability space.
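A worked instance of such a triplet (again only for this discussion), for a fair six-sided die:

$$
\Omega = \{1, 2, 3, 4, 5, 6\}, \qquad F = \text{all subsets of } \Omega, \qquad P(E) = \frac{|E|}{6}.
$$

Every $P(E)$ lies between $0$ and $1$, $P(\Omega) = \tfrac{6}{6} = 1$, and for mutually exclusive events such as $A_1 = \{1, 2\}$ and $A_2 = \{6\}$ we get $P(A_1 \text{ or } A_2) = \tfrac{3}{6} = \tfrac{2}{6} + \tfrac{1}{6} = P(A_1) + P(A_2)$.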