1 Probability: A Brief Review
MATH/STAT 355 builds directly on topics covered in MATH/STAT 354: Probability. You’re not expected to perfectly remember everything from Probability, but you will need sufficient facility with the topics covered in this review chapter in order to grasp the majority of concepts covered in MATH/STAT 355.
1.1 Learning Objectives
By the end of this chapter, you should be able to…
Distinguish between important probability models (e.g., Normal, Binomial)
Derive the expectation and variance of a single random variable or a sum of random variables
Define the moment generating function and use it to find moments or identify pdfs
Derive pdfs of transformations of continuous random variables
1.2 Concept Questions
Which probability distributions are appropriate for quantitative (continuous) random variables?
Which probability distributions are appropriate for categorical random variables?
Independent and identically distributed (iid) random variables are an incredibly important assumption involved in many statistical methods. Why do you think it might be important/useful for random variables to have this property?
Why might we want to be able to derive distribution functions for transformations of random variables? In what scenarios can you imagine this being useful?
1.3 Definitions
You are expected to know the following definitions:
Random Variable
A random variable is a function that takes inputs from a sample space of all possible outcomes, and outputs real values. As an example, consider a coin flip. The sample space of all possible outcomes consists of “heads” and “tails”, and each outcome is associated with a probability (50% each, for a fair coin). For our purposes, you should know that random variables have probability density (or mass) functions, and are either discrete or continuous based on the set of possible values a random variable may take (countable for discrete, uncountable for continuous). Random variables are often denoted with capital Roman letters, like \(X\), \(Y\), \(Z\), etc.
Probability density function (discrete, continuous)
- Note: I don’t care if you call a pmf a pdf… I will probably do this continuously throughout the semester. We don’t need to be picky about this in MATH/STAT 355.
There are many different accepted ways to write the notation for a pdf of a random variable. Any of the following are perfectly appropriate for this class: \(f(x)\), \(\pi(x)\), \(p(x)\), \(f_X(x)\). I typically use either \(\pi\) or \(p\), but might mix it up occasionally.
Key things I want you to know about probability density functions:
\(\pi(x) \geq 0\), everywhere. This should make sense (hopefully) because probabilities cannot be negative!
\(\int_{-\infty}^\infty \pi(x) dx = 1\). This should also (hopefully) make sense. Probabilities can’t be greater than one, and the probability that the random variable \(X\) takes some value at all should be equal to one.
Cumulative distribution function (discrete, continuous)
We’ll typically write cumulative distribution functions as \(F_X(x)\), or \(F(x)\) for short. It is important to know that
\[ F_X(x) = \Pr(X \leq x), \]
or in words, “the cumulative distribution function is the probability that a random variable lies at or below \(x\).” If you write \(\Pr(X < x)\) instead of \(\leq\), you’re fine. The probability that a random variable is exactly one number (for an RV with a continuous pdf) is zero anyway, so these are the same thing. Key things I want you to know about cumulative distribution functions:
\(F(x)\) is non-decreasing. This is in part where the “cumulative” piece comes into play. Recall that probabilities are basically integrals or sums. If we’re integrating over something non-negative, and the upper bound of our integral increases, the area under the curve (cumulative probability) cannot decrease.
\(0 \leq F(x) \leq 1\) (since probabilities have to be between zero and one!)
\(\Pr(a < X \leq b) = F(b) - F(a)\) (because algebra)
Joint probability density function
A joint probability density function is a probability distribution defined for more than one random variable at a time. For two random variables, \(X\) and \(Z\), we could write their joint density function as \(f_{X,Z}(x, z)\) , or \(f(x,z)\) for short. The joint density function encodes all sorts of fun information, including marginal distributions for \(X\) and \(Z\), and conditional distributions (see next bold definition). We can think of the joint pdf as listing all possible pairs of outputs from the density function \(f(x,z)\), for varying values of \(x\) and \(z\). Key things I want you to know about joint pdfs:
How to get a marginal pdf from a joint pdf:
Suppose I want to know \(f_X(x)\), and I know \(f_{X,Z}(x,z)\). Then I can integrate or “average over” \(Z\) to get
\[ f_X(x) = \int f_{X,Z}(x,z)dz \]
The relationship between conditional pdfs, marginal pdfs, joint pdfs, and Bayes’ theorem/rule
How to obtain a joint pdf for independent random variables: just multiply their marginal pdfs together! This is how we will (typically) think about likelihoods!
How to obtain a marginal pdf from a joint pdf when random variables are independent without integrating (think, “separability”)
Conditional probability density function
A conditional pdf denotes the probability distribution for a (set of) random variable(s), given that the value for another (set of) random variable(s) is known. For two random variables, \(X\) and \(Z\), we could write the conditional distribution of \(X\) “given” \(Z\) as \(f_{X \mid Z}(x \mid z)\) , where the “conditioning” is denoted by a vertical bar (in LaTeX, this is typeset using “\mid”). Key things I want you to know about conditional pdfs:
The relationship between conditional pdfs, marginal pdfs, joint pdfs, and Bayes’ theorem/rule
How to obtain a conditional pdf from a joint pdf (again, think Bayes’ rule)
Relationship between conditional pdfs and independence (see next bold definition)
Independence
Two random variables \(X\) and \(Z\) are independent if and only if:
\(f_{X,Z}(x,z) = f_X(x) f_Z(z)\) (their joint pdf is “separable”)
\(f_{X\mid Z}(x\mid z) = f_X(x)\) (the pdf for \(X\) does not depend on \(Z\) in any way)
Note that the “opposite” is also true: \(f_{Z\mid X}(z\mid x) = f_Z(z)\)
In notation, we denote that two variables are independent as \(X \perp\!\!\!\perp Z\), or \(X \perp Z\). In LaTeX, the latter is typeset as “\perp”, and the former is typeset as “\perp\!\!\!\perp”. As a matter of personal preference, I (Taylor) prefer \(\perp\!\!\!\perp\), but I don’t like typing it out every time. Consider using the “\newcommand” functionality in LaTeX to create a shorthand for this for your documents!
Jacobian Matrix
Let \(f\) be a 1-1 and onto function, where \(f(x_i) = y_i\) for \(i = 1, \dots, n\). Then the Jacobian matrix of \(f\) is the matrix of partial derivatives,
\[ J_f(x) = \begin{pmatrix} \frac{\partial y_1}{\partial x_1} & \dots & \frac{\partial y_n}{\partial x_1} \\ \vdots & & \vdots \\ \frac{\partial y_1}{\partial x_n} & \dots & \frac{\partial y_n}{\partial x_n} \end{pmatrix} \]
The Jacobian matrix is sometimes simply referred to as the “Jacobian”, but be careful when simply calling it the Jacobian, since this can sometimes refer to the determinant of the Jacobian matrix as well. For this course, we’ll always refer to the Jacobian matrix as a matrix, and the “Jacobian” as its determinant.
Jacobian
The Jacobian is the determinant of the Jacobian matrix, denoted by \(| J_f(x) |\). Recall from linear algebra that \(Det(A) = Det(A^\top)\). This is convenient, because it means we won’t have to worry too much about remembering which order our partial derivatives go in our matrix, for 2x2 matrices (which is all we’ll be working with for this course).
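If you ever want to double-check a Jacobian computation, symbolic math software will happily do it for you. Below is a minimal sketch in Python using sympy (the use of sympy, and the example transformation \(f(x, y) = (x - y, x + y)\), are my own illustration here; this transformation reappears in Worked Example 7):

```python
# A small symbolic check of a Jacobian matrix and its determinant (sympy).
import sympy as sp

x, y = sp.symbols("x y")

# Transformation f(x, y) = (x - y, x + y), written as a column vector.
f = sp.Matrix([x - y, x + y])

# Jacobian matrix of partial derivatives.
J = f.jacobian([x, y])
print(J)          # Matrix([[1, -1], [1, 1]])

# The "Jacobian" (determinant of the Jacobian matrix).
print(J.det())    # 2

# Det(A) = Det(A^T), so the transpose convention gives the same determinant.
print(J.T.det())  # 2
```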
Expected Value / Expectation
The expectation (or expected value) of a random variable is defined as:
\[ E[X] = \int_{-\infty}^\infty x f(x) dx \]
Expected value is a weighted average, where the average is over all possible values a random variable can take, weighted by the probability that those values occur. Key things I want you to know about expectation:
The relationship between expectation, variance, and moments (specifically, that \(E[X]\) is the 1st moment!)
The “law of the unconscious statistician” (see the Theorems section of this chapter)
Expectation of linear transformations of random variables (see Theorems section of this chapter)
Variance
The variance of a random variable is defined as:
\[ Var[X] = E[(X - E[X])^2] = E[X^2] - E[X]^2 \]
In words, we can read this as “the expected value of the squared deviation from the mean” of a random variable \(X\). Key things I want you to know about variance:
The relationship between expectation, variance, and moments (hopefully clear, given the formula for variance)
The relationship between variance and standard deviation: \(Var(X) = sd(X)^2\)
The relationship between variance and covariance: \(Var(X) = Cov(X, X)\)
\(Var(X) \geq 0\). This should make sense, given that we’re taking the expectation of something “squared” in order to calculate it!
\(Var(c) = 0\) for any constant, \(c\).
Variance of linear transformations of random variables (see Theorems section of this chapter)
\(r^{th}\) moment
The \(r^{th}\) moment of a probability distribution is given by \(E[X^r]\). For example, when \(r = 1\), the \(r^{th}\) moment is just the expectation of the random variable \(X\). Key things I want you to know about moments:
The relationship between moments, expectation, and variance
- For example, if you know the first and second moments of a distribution, you should be able to calculate the variance of a random variable with that distribution!
The relationship between moments and moment generating functions (see Theorems section of this chapter)
Covariance
The covariance of two random variables is a measure of their joint variability. We denote the covariance of two random variables \(X\) and \(Z\) as \(Cov(X,Z)\), and
\[ Cov(X, Z) = E[(X - E[X])(Z - E[Z])] = E[XZ] - E[X]E[Z] \]
Some things I want you to know about covariance:
\(Cov(X, X) = Var(X)\)
\(Cov(X, Z) = Cov(Z, X)\) (order doesn’t matter)
Moment Generating Function (MGF)
The moment generating function of a random variable \(X\) is defined as
\[ M_X(t) = E[e^{tX}] \]
A few things to note:
\(M_X(0) = 1\), always.
If two random variables have the same MGF (assuming the MGF exists in an open interval around \(t = 0\)), they have the same probability distribution!
MGFs are sometimes useful for showing how different random variables are related to each other
1.3.1 Distributions Table
You are also expected to know the probability distributions contained in Table 1, below. Note that you do not need to memorize the pdfs for these distributions, but you should be familiar with what types of random variables (continuous/quantitative, categorical, integer-valued, etc.) may take on different distributions. The more familiar you are with the forms of the pdfs, the easier/faster it will be to work through problem sets and quizzes.
Distribution | PDF/PMF | Parameters | Support |
---|---|---|---|
Uniform | \(\pi(x) = \frac{1}{\beta - \alpha}\) | \(\alpha \in \mathbb{R}\), \(\beta\in \mathbb{R}\) | \(x \in [\alpha, \beta]\) |
Normal | \(\pi(x) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp(-\frac{1}{2\sigma^2} (x - \mu)^2)\) | \(\mu \in \mathbb{R}\), \(\sigma > 0\) | \(x \in \mathbb{R}\) |
Multivariate Normal | \(\pi(\textbf{x}) = (2\pi)^{-k/2} |\Sigma|^{-1/2} \exp(-\frac{1}{2}(\textbf{x} - \mu)^\top \Sigma^{-1}(\textbf{x} - \mu))\) | \(\mu \in \mathbb{R}^k\), \(\Sigma \in \mathbb{R}^{k\times k}\) , positive semi-definite (in practice, almost always positive definite) | \(\textbf{x} \in \mathbb{R}^{k}\) |
Gamma | \(\pi(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}x^{\alpha - 1} e^{-\beta x}\) | \(\alpha \text{ (shape)}, \beta \text{ (rate)} > 0\) | \(x \in (0,\infty)\) |
Chi-squared | \(\pi(x) = \frac{2^{-\nu/2}}{\Gamma(\nu/2)} x^{\nu/2 - 1}e^{-x/2}\) | \(\nu > 0\) | \(x \in [0, \infty)\) |
\(F\) | \(\pi(x) = \frac{\Gamma(\frac{\nu_1 + \nu_2}{2})}{\Gamma(\frac{\nu_1}{2}) \Gamma(\frac{\nu_2}{2})} \left( \frac{\nu_1}{\nu_2}\right)^{\nu_1/2} \left( \frac{x^{(\nu_1 - 2)/2}}{\left( 1 + \left( \frac{\nu_1}{\nu_2}\right)x\right)^{(\nu_1 + \nu_2)/2}}\right)\) | \(\nu_1, \nu_2 > 0\) | \(x \in [0, \infty)\) |
Exponential | \(\pi(x) = \beta e^{-\beta x}\) | \(\beta > 0\) | \(x \in [0, \infty)\) |
Laplace (Double Exponential) | \(\pi(x) = \frac{1}{2b} \exp( - \frac{|x - \mu|}{b})\) | \(\mu \in \mathbb{R}\), \(b > 0\) | \(x \in \mathbb{R}\) |
Student-\(t\) | \(\pi(x) = \frac{\Gamma((\nu + 1)/2)}{\Gamma(\nu/2) \sqrt{\nu \pi}} (1 + \frac{x^2}{\nu})^{-(\nu + 1)/2}\) | \(\nu > 0\) | \(x \in \mathbb{R}\) |
Beta | \(\pi(x) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} x^{\alpha - 1}(1 - x)^{\beta - 1}\) | \(\alpha, \beta > 0\) | \(x \in [0,1]\) |
Poisson | \(\pi(x) = \frac{\lambda^x e^{-\lambda}}{x!}\) | \(\lambda > 0\) | \(x \in \{0, 1, 2, \dots\}\) |
Binomial | \(\pi(x) = \binom{n}{x} p^{x} (1 - p)^{n - x}\) | \(p \in [0,1]\), \(n \in \{0, 1, 2, \dots\}\) | \(x \in \{0, 1, \dots, n\}\) |
Multinomial | \(\pi(\textbf{x}) = \frac{n!}{x_1! \dots x_k!} p_1^{x_1} \dots p_k^{x_k}\) | \(p_i > 0\), \(p_1 + \dots + p_k = 1\), \(n \in \{0, 1, 2, \dots \}\) | \(\{ x_1, \dots, x_k \mid \sum_{i = 1}^k x_i = n, x_i \geq 0 \ (i = 1, \dots, k)\}\) |
Negative Binomial | \(\pi(x) = \binom{x + r - 1}{x} (1-p)^x p^r\) | \(r > 0\), \(p \in [0,1]\) | \(x \in \{0, 1, \dots\}\) |
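If you’d like to poke at these distributions computationally, scipy’s stats module implements all of the distributions in Table 1 (under its own parameterizations, which don’t always match the table). A minimal sketch, with arbitrary parameter values:

```python
# Evaluating a few pdfs/pmfs from Table 1 using scipy.stats.
from scipy import stats

# Normal(mu = 1, sigma = 2): density at x = 0.
print(stats.norm.pdf(0, loc=1, scale=2))

# Gamma(shape alpha = 2, rate beta = 3): scipy uses a *scale* parameter,
# so scale = 1/beta under the rate parameterization in Table 1.
print(stats.gamma.pdf(0.5, a=2, scale=1/3))

# Binomial(n = 10, p = 0.3): probability of exactly 4 successes.
print(stats.binom.pmf(4, n=10, p=0.3))

# Poisson(lambda = 2.5): probability of observing 0 events.
print(stats.poisson.pmf(0, mu=2.5))
```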
1.4 Theorems
Law of Total Probability
\[ P(A) = \sum_n P(A \cap B_n), \]
or
\[ P(A) = \sum_n P(A \mid B_n) P(B_n), \]
where the events \(B_n\) form a partition of the sample space.
Bayes’ Theorem
\[ \pi(A \mid B) = \frac{\pi(B \mid A) \pi(A)}{\pi(B)} \]
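As a concrete numeric sketch of both of the theorems above, consider a two-event partition (the numbers below are made up purely for illustration, in the spirit of a disease-testing example):

```python
# Law of total probability and Bayes' theorem with a two-event partition.
p_B = [0.01, 0.99]          # P(B1), P(B2): a partition of the sample space
p_A_given_B = [0.95, 0.10]  # P(A | B1), P(A | B2)

# Law of total probability: P(A) = sum_n P(A | B_n) P(B_n)
p_A = sum(pa * pb for pa, pb in zip(p_A_given_B, p_B))

# Bayes' theorem: P(B1 | A) = P(A | B1) P(B1) / P(A)
p_B1_given_A = p_A_given_B[0] * p_B[0] / p_A

print(p_A, p_B1_given_A)  # roughly 0.1085 and 0.0876
```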
Relationship between pdf and cdf
\[ F_Y(y) = \int_{-\infty}^y f_Y(t)dt \]
\[ \frac{\partial}{\partial y}F_Y(y) = f_Y(y) \]
Expectation of random variables
\[ E[X] = \int_{-\infty}^\infty x f(x) dx \]
\[ E[X^2] = \int_{-\infty}^\infty x^2 f(x) dx \]
“Law of the Unconscious Statistician”
\[ E[g(X)] = \int_{-\infty}^\infty g(x)f(x)dx \]
Expectation and variance of linear transformations of random variables
\[ E[cX + b] = c E[X] + b \]
\[ Var[cX + b] = c^2 Var[X] \]
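These identities are easy to sanity-check by simulation. A minimal sketch (the parameter values and seed are arbitrary):

```python
# Monte Carlo check of E[cX + b] = cE[X] + b and Var[cX + b] = c^2 Var[X].
import numpy as np

rng = np.random.default_rng(355)
x = rng.normal(loc=2.0, scale=3.0, size=1_000_000)  # E[X] = 2, Var[X] = 9
c, b = 4.0, -1.0

print(np.mean(c * x + b), c * np.mean(x) + b)  # both near 7
print(np.var(c * x + b), c**2 * np.var(x))     # both near 144
```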
Relationship between mean and variance
\[ Var[X] = E[(X - E[X])^2] = E[X^2] - E[X]^2 \]
Also, recall that \(Cov[X, X] = Var[X]\).
Iterated Things
\[ E[X] = E[E[X \mid Y]] \] and \[ Var(X) = E[Var(X \mid Y)] + Var(E[X \mid Y]) \]
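One way to convince yourself of these formulas is simulation. The sketch below uses the hierarchy \(Y \sim Exponential(1)\), \(X \mid Y \sim Poisson(Y)\) (chosen purely for illustration), where \(E[X] = E[E[X \mid Y]] = E[Y] = 1\) and \(Var(X) = E[Var(X \mid Y)] + Var(E[X \mid Y]) = E[Y] + Var(Y) = 2\):

```python
# Simulation check of iterated expectation and iterated variance.
import numpy as np

rng = np.random.default_rng(355)
y = rng.exponential(scale=1.0, size=1_000_000)  # E[Y] = Var(Y) = 1
x = rng.poisson(lam=y)                          # X | Y ~ Poisson(Y)

print(np.mean(x))  # near E[E[X | Y]] = E[Y] = 1
print(np.var(x))   # near E[Var(X | Y)] + Var(E[X | Y]) = 1 + 1 = 2
```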
Finding a marginal pdf from a joint pdf
\[ f_X(x) = \int_{-\infty}^\infty f_{X,Y}(x, y) dy \]
Independence of random variables and joint pdfs
If two random variables are independent, their joint pdf will be separable. For example, if \(X\) and \(Y\) are independent, we could write
\[ f_{X,Y}(x, y) = f_{X}(x)f_Y(y) \]
Expected value of a product of independent random variables
Suppose random variables \(X_1, \dots, X_n\) are independent. Then we can write,
\[ E\left[\prod_{i = 1}^n X_i\right] = \prod_{i = 1}^n E[X_i] \]
Covariance of independent random variables
If \(X\) and \(Y\) are independent, then \(Cov(X, Y) = 0\). We can show this by noting that
\[\begin{align} Cov(X, Y) & = E[(X - E[X])(Y - E[Y])] \\ & = E[XY - XE[Y] - YE[X] + E[X]E[Y]] \\ & = E[XY] - E[X]E[Y] - E[Y]E[X] + E[X]E[Y] \\ & = E[XY] - E[X]E[Y] \\ & = E[X]E[Y] - E[X]E[Y] \hspace{1cm} (\text{since } E[XY] = E[X]E[Y] \text{ when } X \perp Y) \\ & = 0 \end{align}\]
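A quick empirical sanity check (the draws and values below are illustrative only): independent samples should have sample covariance near zero, while dependent ones should not.

```python
# Sample covariance: independent draws vs. a dependent pair.
import numpy as np

rng = np.random.default_rng(355)
x = rng.normal(size=1_000_000)
y = rng.normal(size=1_000_000)  # generated independently of x

print(np.cov(x, y)[0, 1])      # near 0, since X and Y are independent
print(np.cov(x, x + y)[0, 1])  # near Var(X) = 1, since Cov(X, X + Y) = Var(X)
```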
Using MGFs to find moments
Recall that the moment generating function of a random variable \(X\), denoted by \(M_X(t)\) is
\[ M_X(t) = E[e^{tX}] \]
Then the \(n\)th moment of the probability distribution for \(X\) , \(E[X^n]\), is given by
\[ \frac{\partial^n M_X}{\partial t^n} \Bigg|_{t = 0} \]
where the above reads as “the \(n\)th derivative of the moment generating function, evaluated at \(t = 0\).”
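If you want to check a derivative-of-the-MGF computation, symbolic differentiation works nicely. A sketch using sympy (an assumption of this example) for the Exponential MGF \(M_X(t) = \beta/(\beta - t)\):

```python
# Pulling moments out of an MGF by symbolic differentiation (sympy).
import sympy as sp

t, beta = sp.symbols("t beta", positive=True)
M = beta / (beta - t)  # MGF of Exponential(rate beta), valid for t < beta

# nth moment = nth derivative of the MGF, evaluated at t = 0.
first = sp.diff(M, t).subs(t, 0)      # E[X]   = 1/beta
second = sp.diff(M, t, 2).subs(t, 0)  # E[X^2] = 2/beta^2

print(sp.simplify(first), sp.simplify(second))
print(sp.simplify(second - first**2))  # Var[X] = 1/beta^2
```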
Using MGFs to identify pdfs
MGFs (when they exist) uniquely identify probability distributions. If \(X\) and \(Y\) are two random variables where for all values of \(t\), \(M_X(t) = M_Y(t)\), then \(F_X(u) = F_Y(u)\) for all \(u\).
Central Limit Theorem
The classical CLT states that for independent and identically distributed (iid) random variables \(X_1, \dots, X_n\), with expected value \(E[X_i] = \mu\) and \(Var[X_i] = \sigma^2 < \infty\), the centered sample average, scaled by \(\sqrt{n}\), converges in distribution to a normal distribution with mean \(0\) and variance \(\sigma^2\). Notationally, this is written as
\[ \sqrt{n} (\bar{X} - \mu) \overset{d}{\to} N(0, \sigma^2) \]
A fun aside: this is only one CLT, often referred to as the Lévy CLT. There are other CLTs, such as the Lyapunov CLT and Lindeberg-Feller CLT!
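A simulation sketch of the (Lévy) CLT in action; the sample size, number of replications, and the Exponential(1) choice (\(\mu = \sigma = 1\)) are all arbitrary:

```python
# Standardized sample means of iid Exponential(1) draws look N(0, 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(355)
n, reps = 500, 100_000
xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (xbar - 1.0) / 1.0  # sqrt(n) (xbar - mu) / sigma

# Compare a few empirical quantiles to standard normal quantiles.
for q in (0.025, 0.5, 0.975):
    print(np.quantile(z, q), stats.norm.ppf(q))
```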
1.4.1 Transforming Continuous Random Variables
We will often take at face value previously proven relationships between random variables. What I mean by this, as an example, is that it is a nice (convenient) fact that a sum of two independent normal random variables is still normally distributed, with a nice form for the mean and variance. In particular, if \(X \sim N(\mu, \sigma^2)\) and \(Y \sim N(\theta, \nu^2)\), then \(X + Y \sim N(\mu + \theta, \sigma^2 + \nu^2)\). Most frequently used examples of these sorts of relationships can be found in the “Related Distributions” section of the Wikipedia page for a given probability distribution. Unless I explicitly ask you to derive/show how certain variables are related to each other, you can just state the known relationship, use it, and move on!
If I do ask you to derive/show these things, there a few different ways we can go about this. For this course, I expect you to know the “CDF method” for one function of one random variable, and the “Jacobian method” for a function of more than one random variable.
1.4.1.1 CDF Method: One random variable
Theorem. Let \(X\) be a continuous random variable with pdf \(f_X(x)\). Define a new random variable \(Y = g(X)\), for nice* functions \(g\). Then \(f_Y(y) = f_X(g^{-1}(y)) \times \frac{1}{g'(g^{-1}(y))}\).
*By nice functions we mean functions that are strictly increasing and smooth on the required range. As an example, \(\exp(x)\) is a smooth, strictly increasing function; \(|x|\) is not strictly increasing on the whole real line, but it is on \((0, \infty)\) (where a lot of useful pdfs are defined). For the purposes of this class, every function that you will need to do this for will be “nice.” Note that there are also considerations that need to be taken regarding the range of continuous random variables when considering transforming them. We will mostly ignore these considerations in this class, but a technically complete derivation (or proof) must consider them.
Proof.
We can write
\[\begin{align*} f_Y(y) & = \frac{\partial}{\partial y} F_Y(y) \\ & = \frac{\partial}{\partial y} \Pr(Y \leq y) \\ & = \frac{\partial}{\partial y} \Pr(g(X) \leq y) \\ & = \frac{\partial}{\partial y} \Pr(X \leq g^{-1}(y)) \\ & = \frac{\partial}{\partial y} F_X(g^{-1}(y)) \\ & = f_X(g^{-1}(y)) \times \frac{\partial}{\partial y} g^{-1}(y) \end{align*}\]
where to obtain the last equality we use the chain rule! Now we require some statistical trickery to continue… (note that this method is called the “CDF method” because we go through the CDF to derive the distribution for \(Y\))
You will especially see this in the Bayes chapter of our course notes, but it is often true that our lives are made easier as statisticians if we multiply things by one, or add zero. What exactly do I mean? Rearranging gross looking formulas into things we are familiar with (like pdfs, for example) often makes our lives easier and allows us to avoid dealing with such grossness. Here, the grossness is less obvious, but nonetheless relevant. Note that we can write
\[\begin{align*} y & = y \\ y & = g(g^{-1}(y)) \\ \frac{\partial}{\partial y} y & = \frac{\partial}{\partial y} g(g^{-1}(y)) \\ 1 & = g'(g^{-1}(y)) \frac{\partial}{\partial y} g^{-1}(y) \hspace{1cm} \text{(chain rule again!)} \\ \frac{1}{g'(g^{-1}(y))} & = \frac{\partial}{\partial y} g^{-1}(y) \end{align*}\]
The right-hand side should look familiar: it is exactly what we needed to “deal with” in our proof! Returning to that proof, we have
\[\begin{align*} f_Y(y) & = f_X(g^{-1}(y)) \times \frac{\partial}{\partial y} g^{-1}(y) \\ & = f_X(g^{-1}(y)) \times \frac{1}{g'(g^{-1}(y))} \end{align*}\]
as desired.
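A numeric check of the final formula, for the concrete (illustrative) choice \(g(x) = e^x\) with \(X \sim N(0, 1)\), where \(g^{-1}(y) = \log(y)\) and \(g'(g^{-1}(y)) = e^{\log(y)} = y\):

```python
# CDF-method formula vs. scipy's lognormal density.
import numpy as np
from scipy import stats

y = np.linspace(0.5, 3.0, 6)
f_Y = stats.norm.pdf(np.log(y)) * (1.0 / y)  # f_X(g^{-1}(y)) / g'(g^{-1}(y))

# scipy's lognorm with s = 1 is exactly the distribution of exp(N(0, 1)).
print(np.allclose(f_Y, stats.lognorm.pdf(y, s=1)))  # True
```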
1.4.1.2 Jacobian Method: More than one random variable
Transformations of single random variables are great, but we’ll need to work with transformations of more than one random variable if we want to be able to manipulate joint pdfs. Suppose, for example, we have \(X \sim Gamma(\alpha, \lambda)\) and \(Y \sim Gamma(\beta, \lambda)\), where \(X \perp\!\!\!\perp Y\). Let \(U = X + Y\) and \(W = \frac{X}{X + Y}\). How do we show that \(U\) and \(W\) are independent? Through finding the joint pdf! Which means we need a method for transforming more than one, continuous random variable. Enter the Jacobian Method.
Theorem. Let \(X = (X_1, \dots, X_n)\) be a vector of random variables (a random vector) with pdf \(\pi_{X}(x)\). Suppose that \(f(x) = y\) is a 1-1 and onto function, and that \(|J_f(x)| > 0\) almost everywhere. Then \(\pi_Y(y)\) is given by
\[ \pi_Y(y) = \pi_{X}(f^{-1}(y)) \times \bigg| \frac{\partial x}{\partial y} \bigg| \times I_Y\{y\} \] where \(I_Y \{y \}\) denotes an indicator that \(y\) lies in the support of \(Y\).
NOTE: \(| J_{f^{-1}}(y) | = | \frac{\partial x}{\partial y} |\), above, where \(f(x) = y\). This means that the numerators and denominators in the Jacobian Matrix in the definition in the Course Notes are flipped here. See Worked Example 7.
Proof.
Note that if \(f\) is both 1-1 and onto, then \(f\) is either monotone increasing or monotone decreasing. We’ll prove the case where \(f\) is increasing, and we’ll note (but not show) how the decreasing case follows directly.
Let \(f\) be an increasing, continuous function. For some subset \(C \subseteq \mathbb{R}^n\),
\[\begin{align*} \int_C \pi_Y(y) dy & = \Pr(Y \in C) \\ & = \Pr(f(X) \in C) \\ & = \Pr(X \in f^{-1}(C)) \\ & = \int_{f^{-1}(C)} \pi_X(x) dx \end{align*}\]
Now we’ll use a (convoluted) \(u\)-substitution to make this look like what we want it to look like. Let \(u = f^{-1}(y)\). Note that this also means \(u = x\). Then \(du = (f^{-1}(y))' dy = dx\). Proceeding with \(u\)-substitution, we have
\[\begin{align*} \int_C \pi_Y(y) dy & = \int_{f^{-1}(C)} \pi_X(x) dx \\ & = \int_C \pi_X(u) du \\ & = \int_C \pi_X(f^{-1}(y)) (f^{-1}(y))' dy \\ & = \int_C \pi_X(f^{-1}(y)) \bigg| \frac{dx}{dy} \bigg| dy \end{align*}\] which implies \(\pi_Y(y) = \pi_X(f^{-1}(y)) \bigg| \frac{dx}{dy} \bigg|\), as desired. The absolute value signs (the Jacobian piece) come into play to help us deal with the decreasing case.
1.5 Worked Examples
Problem 1: Suppose \(X \sim Exponential(\lambda)\). Calculate \(E[X]\) and \(Var[X]\).
Solution:
We know that \(f(x) = \lambda e^{-\lambda x}\). If we can calculate \(E[X]\) and \(E[X^2]\), then we’re basically done! We can write
\[\begin{align*} E[X] & = \int_0^\infty x \lambda e^{-\lambda x} dx \\ & = \lambda \int_0^\infty x e^{-\lambda x} dx \end{align*}\]
And now we need integration by parts! Set \(u = x\), \(dv = e^{-\lambda x} dx\). Then \(du = 1dx\) and \(v = \frac{-1}{\lambda} e^{-\lambda x}\). Since \(\int u dv = uv - \int vdu\), we can continue
\[\begin{align*} E[X] & = \lambda \int_0^\infty x e^{-\lambda x} dx \\ & = \lambda \left( -\frac{x}{\lambda} e^{-\lambda x} \bigg|_0^\infty - \int_0^\infty \frac{-1}{\lambda} e^{-\lambda x} dx \right) \\ & = \lambda \left( - \int_0^\infty \frac{-1}{\lambda} e^{-\lambda x} dx \right) \\ & = \lambda \left( \frac{-1}{\lambda^2} e^{-\lambda x} \bigg|_0^\infty \right) \\ & = \frac{-1}{\lambda} e^{-\lambda x} \bigg|_0^\infty \\ & = \frac{1}{\lambda} e^{-0} \\ & = \frac{1}{\lambda} \end{align*}\]
We can follow a similar process to get \(E[X^2]\) (using the law of the unconscious statistician!). We can write
\[\begin{align*} E[X^2] & = \int_0^\infty x^2 \lambda e^{-\lambda x} dx \\ & = \lambda \int_0^\infty x^2 e^{-\lambda x} dx \end{align*}\]
And now we need integration by parts again! Set \(u = x^2\), \(dv = e^{-\lambda x} dx\). Then \(du = 2xdx\) and \(v = \frac{-1}{\lambda} e^{-\lambda x}\). Since \(\int u dv = uv - \int vdu\), we can continue
\[\begin{align*} E[X^2] & = \lambda \int_0^\infty x^2 e^{-\lambda x} dx \\ & = \lambda \left( -\frac{x^2}{\lambda} e^{-\lambda x} \bigg|_0^\infty - \int_0^\infty \frac{-2}{\lambda} xe^{-\lambda x} dx \right) \\ & = \lambda \left( -\frac{x^2}{\lambda} e^{-\lambda x} \bigg|_0^\infty + \frac{2}{\lambda} \int_0^\infty xe^{-\lambda x} dx \right) \\ & = \lambda \left( -\frac{x^2}{\lambda} e^{-\lambda x} \bigg|_0^\infty + \frac{2}{\lambda^3} \right)\\ & = \lambda \left( 0 + \frac{2}{\lambda^3} \right) \\ & = \frac{2}{\lambda^2} \end{align*}\]
Now we can calculate \(Var[X] = E[X^2] - E[X]^2\) as \[ Var[X] = E[X^2] - E[X]^2 = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2} \] And so we have \(E[X] = \frac{1}{\lambda}\) and \(Var[X] = \frac{1}{\lambda^2}\).
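A quick Monte Carlo check of these answers, with \(\lambda = 2\) as an arbitrary choice (note that numpy parameterizes the exponential by its scale, \(1/\lambda\)):

```python
# Simulation check of E[X] = 1/lambda and Var[X] = 1/lambda^2.
import numpy as np

rng = np.random.default_rng(355)
lam = 2.0
x = rng.exponential(scale=1/lam, size=1_000_000)

print(np.mean(x), 1/lam)    # both near 0.5
print(np.var(x), 1/lam**2)  # both near 0.25
```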
Problem 2: Show that an exponentially distributed random variable is “memoryless”, i.e. show that \(\Pr(X > s + x \mid X > s) = \Pr(X > x)\), \(\forall s\).
Solution:
Recall that the CDF of an exponential distribution is given by \(F(x) = 1-e^{-\lambda x}\). Thanks to the definition of conditional probability, we can write
\[\begin{align*} \Pr(X > s + x \mid X > s) & = \frac{\Pr(X > s + x , X > s)}{\Pr(X > s)} \\ & = \frac{\Pr(X > s + x)}{\Pr(X > s)} \\ & = \frac{1 - \Pr(X < s + x)}{1 - \Pr(X < s)} \\ & = \frac{1 - F(s + x)}{1 - F(s)} \end{align*}\]
where the second equality is true because, for \(x > 0\), the event \(\{X > s + x\}\) is a subset of the event \(\{X > s\}\). Then we can write
\[\begin{align*} \Pr(X > s + x \mid X > s) & = \frac{1 - F(s + x)}{1 - F(s)} \\ & = \frac{1 - \left(1 - e^{-\lambda(s + x)}\right)}{1 - \left(1 - e^{-\lambda s}\right)} \\ & = \frac{e^{-\lambda(s + x)}}{e^{-\lambda s}} \\ & = \frac{e^{-\lambda s - \lambda x}}{e^{-\lambda s}} \\ & = e^{-\lambda x} \\ & = 1 - F(x) \\ & = \Pr(X > x) \end{align*}\]
and we’re done!
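Memorylessness is also easy to see by simulation. A sketch, with arbitrary illustrative values \(\lambda = 2\), \(s = 1\), and \(x = 0.5\):

```python
# Pr(X > s + x | X > s) vs. Pr(X > x) for exponential draws.
import numpy as np

rng = np.random.default_rng(355)
lam, s, x = 2.0, 1.0, 0.5
draws = rng.exponential(scale=1/lam, size=2_000_000)

cond = np.mean(draws[draws > s] > s + x)  # Pr(X > s + x | X > s)
marg = np.mean(draws > x)                 # Pr(X > x)
print(cond, marg)  # both near exp(-lam * x), about 0.368
```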
Problem 3: Suppose \(X \sim Exponential(1/\lambda)\), and \(Y \mid X \sim Poisson(X)\). Show that \(Y \sim Geometric(1/(1 + \lambda))\).
Solution:
Note that we can write \(f(x, y) = f(y \mid x) f(x)\), and \(f(y) = \int f(x, y) dx\). Then
\[ f(x, y) = \left( \frac{1}{\lambda} e^{-x/\lambda} \right) \left( \frac{x^y e^{-x}}{y!} \right) \] And so,
\[\begin{align*} f(y) & = \int f(x, y) dx \\ & = \int \left( \frac{1}{\lambda} e^{-x/\lambda} \right) \left( \frac{x^y e^{-x}}{y!} \right) dx \\ & = \frac{1}{\lambda y!} \int x^y e^{-x(1 + \lambda)/\lambda} dx \end{align*}\]
And we can again use integration by parts! Let \(u = x^y\) and \(dv = e^{-x(1 + \lambda)/\lambda} dx\). Then we have \(du = yx^{y-1} dx\) and \(v = -\frac{\lambda}{1 + \lambda}e^{-x(1 + \lambda)/\lambda}\), and we can write
\[\begin{align*} f(y) & = \frac{1}{\lambda y!} \int x^y e^{-x(1 + \lambda)/\lambda} dx \\ & = \frac{1}{\lambda y!} \left( -x^y \frac{\lambda}{1 + \lambda}e^{-x(1 + \lambda)/\lambda} \bigg|_{x = 0}^{x = \infty} + \int \frac{\lambda}{1 + \lambda}e^{-x(1 + \lambda)/\lambda} yx^{y-1} dx\right) \\ & = \frac{1}{\lambda y!} \left( \int \frac{\lambda}{1 + \lambda}e^{-x(1 + \lambda)/\lambda} yx^{y-1} dx \right) \\ & = \frac{1}{\lambda y!} \left( \frac{\lambda }{1 + \lambda} \right) y \left( \int e^{-x(1 + \lambda)/\lambda} x^{y-1} dx \right) \end{align*}\]
This looks gross, but it’s actually not so bad. Note that, since \(Y \mid X\) is Poisson, \(y\) can only take non-negative integer values! Then we can repeat the process of integration by parts \(y\) times in order to get rid of the \(x^{y\dots}\) term on the inside of the integral. Specifically, each time we do this process we will pull out a \(\left( \frac{\lambda }{1 + \lambda} \right)\), and a factor of \(y - i + 1\) for the \(i\)th integration by parts step (try this one or two steps for yourself to see how it will simplify if you find this unintuitive!). After the \(y\)th step, the remaining integral \(\int_0^\infty e^{-x(1 + \lambda)/\lambda} dx = \frac{\lambda}{1 + \lambda}\) contributes one final factor. We end up with,
\[\begin{align*} f(y) & = \frac{1}{\lambda y!} \left( \frac{\lambda }{1 + \lambda} \right)^{y + 1} y! \\ & = \frac{1}{\lambda} \left(\frac{\lambda}{1 + \lambda}\right)^{y + 1} \\ & = \frac{1}{1 + \lambda} \left(\frac{\lambda}{1 + \lambda}\right)^{y} \end{align*}\]
Now let \(p = \frac{1}{1 + \lambda}\). If we can show that \(f(y)\) is the pmf of a \(Geometric(p)\) random variable, then we’re done. Note that \(1 - p = \lambda/(1 + \lambda)\). We have
\[\begin{align*} f(y) & = \frac{1}{1 + \lambda} \left( \frac{\lambda}{1 + \lambda} \right)^y \\ & = p(1 - p)^{y} \end{align*}\]
which is exactly the pmf of a geometric random variable with parameter \(p\), where the support (the number of failures before the first success) begins at 0.
An alternative solution (which perhaps embodies the phrase “work smarter, not harder”) actually doesn’t involve integration by parts at all! As statisticians, we typically like to avoid actually integrating anything whenever possible, and this is often achieved by manipulating algebra enough to essentially “create” a pdf out of what we see (since pdfs integrate to \(1\)!). Massive props to a student for solving this problem in a much “easier” way, answer below:
\[\begin{align*} f(y) &= \int_{0}^{\infty} f(y \mid x) f(x) dx \\ &= \int_{0}^{\infty} (\frac{1}{\lambda}e^{-\frac{x}{\lambda}}) (\frac{x^y}{y!} e^{-x}) dx \\ &= \frac{1}{\lambda y!} \int_{0}^{\infty} x^y e^{-\frac{x}{\lambda}(1 + \lambda)} dx \\ &= \frac{1}{\lambda y!} \int_{0}^{\infty} \frac{(\frac{1+\lambda}{\lambda})^{y+1}}{(\frac{1+\lambda}{\lambda})^{y+1}} \frac{\Gamma(y+1)}{\Gamma(y+1)} x^{(y+1)-1} e^{-\frac{x}{\lambda}(1 + \lambda)} dx\\ &= \frac{\Gamma(y+1)}{\lambda y! (\frac{1+\lambda}{\lambda})^{y+1}} \int_{0}^{\infty} \frac{(\frac{1+\lambda}{\lambda})^{y+1}}{\Gamma(y+1)} x^{(y+1)-1} e^{-\frac{x}{\lambda}(1 + \lambda)} dx\\ &= \frac{\Gamma(y+1)}{\lambda y! (\frac{1+\lambda}{\lambda})^{y+1}} (1)\\ &= \frac{y!}{\lambda y! (\frac{1+\lambda}{\lambda})^{y+1}} \\ &= \frac{\lambda^{-1}}{(\frac{1+\lambda}{\lambda})^{y+1}}\\ &= \frac{\lambda^y}{(1+\lambda)^{y+1}}\\ &= \frac{1}{(1+\lambda)} \frac{\lambda^y}{(1+\lambda)^y}\\ &= \frac{1}{(1+\lambda)} (1 - \frac{1}{(1+\lambda)})^y \\ &=p(1-p)^y\qquad (\text{where }p=\frac{1}{1+\lambda}) \end{align*}\]
Note that we arrive at the same answer with this approach: the pmf of a geometric random variable with parameter \(p\) and support beginning at 0.
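We can also check this mixture result by simulation (a sketch; \(\lambda = 2\) is arbitrary, and note that scipy's geometric distribution starts at 1, so we shift it to start at 0 with loc=-1):

```python
# X ~ Exponential(mean lambda), Y | X ~ Poisson(X), vs. Geometric(p) with p = 1/(1 + lambda).
import numpy as np
from scipy import stats

rng = np.random.default_rng(355)
lam = 2.0
x = rng.exponential(scale=lam, size=1_000_000)  # pdf (1/lambda) e^{-x/lambda}
y = rng.poisson(lam=x)

p = 1 / (1 + lam)
for k in range(4):
    print(np.mean(y == k), stats.geom.pmf(k, p, loc=-1))  # empirical vs p(1-p)^k
```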
Problem 4: Suppose that \(X \sim N(\mu, \sigma^2)\), and let \(Y = \frac{X - \mu}{\sigma}\). Find the distribution of \(Y\) (simplifying all of your math will be useful for this problem).
Solution:
To solve this problem, we can use the theorem on transforming continuous random variables. We must first define our function \(g\) that relates \(X\) and \(Y\). In this case, we have \(g(a) = \frac{a - \mu}{\sigma}\). Now all we need to do is collect the mathematical “pieces” we need to use the theorem: \(g^{-1}(a)\), \(g'(a)\), and finally, the pdf of a normal random variable. We have
\[\begin{align*} f_X(x) & = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp(-\frac{1}{2\sigma^2} (x - \mu)^2) \\ g^{-1}(a) & = \sigma a + \mu \\ g'(a) & = \frac{\partial}{\partial a} \left(\frac{a - \mu}{\sigma}\right) = \frac{1}{\sigma} \end{align*}\]
Putting it all together, we have
\[\begin{align*} f_Y(y) & = f_X(g^{-1}(y)) \times \frac{1}{g'(g^{-1}(y))} \\ & = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp(-\frac{1}{2\sigma^2} (\sigma y + \mu - \mu)^2) \times \sigma \\ & = \frac{1}{\sqrt{2 \pi} \sigma} \exp(-\frac{1}{2\sigma^2} (\sigma y)^2) \times \sigma \\ & = \frac{1}{\sqrt{2 \pi}} \exp(-\frac{1}{2\sigma^2} \sigma^2 y^2) \\ & = \frac{1}{\sqrt{2 \pi}} \exp(-\frac{1}{2} y^2) \end{align*}\]
and note that this is the pdf of a normally distributed random variable with mean \(0\) and variance \(1\)! Thus, we have shown that \(\frac{X - \mu}{\sigma} \sim N(0,1)\). Fun Fact: If this random variable reminds you of a Z-score, it should!
Problem 5: Suppose the joint pdf of two random variables \(X\) and \(Y\) is given by \(f_{X,Y}(x,y) = \lambda \beta e^{-x\lambda - y\beta}\). Determine if \(X\) and \(Y\) are independent, showing why or why not.
Solution:
To determine whether \(X\) and \(Y\) are independent (or not), we need to determine if their joint pdf is “separable.” Doing some algebra, we can see that
\[\begin{align*} f_{X,Y}(x,y) & = \lambda \beta e^{-x \lambda - y\beta} \\ & = \lambda \beta e^{-x \lambda} e^{-y \beta} \\ & = \left( \lambda e^{-x \lambda} \right) \left( \beta e^{-y \beta} \right) \end{align*}\]
and so since we can write the joint distribution as a function of \(X\) multiplied by a function of \(Y\), \(X\) and \(Y\) are independent (and in this case, both have exponential distributions).
Problem 6: Suppose the joint pdf of two random variables \(X\) and \(Y\) is given by \(f_{X,Y}(x,y) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \Gamma(\beta)} \binom{n}{y} x^{y + \alpha - 1} (1-x)^{n-y + \beta - 1}\). Determine if \(X\) and \(Y\) are independent, showing why or why not.
Solution:
To determine whether \(X\) and \(Y\) are independent (or not), we need to determine if their joint pdf is “separable.” Right away, we should note that a piece of the pdf contains \(x^y\), and therefore we are never going to be able to fully separate out this joint pdf into a function of \(x\) times a function of \(y\). Therefore, \(X\) and \(Y\) are not independent. In this case, we actually have \(X \sim Beta(\alpha, \beta)\), and \(Y \mid X \sim Binomial(n, x)\) (we’ll return to this example in the Bayes chapter!).
Problem 7: Let \(X\) and \(Y\) be independent random variables with \(X \sim Exponential(1)\) and \(Y \sim Exponential(1)\). Find the joint distribution of \(Z = X - Y\) and \(W = X + Y\), and use this joint distribution to show that \(Z \sim Laplace(0, 1)\).
Solution:
We’ll need a couple things before we can directly apply the Jacobian method:
The joint distribution, \(\pi_{X,Y}(x,y)\)
Our function \(f(x,y)\) and its inverse
Our Jacobian matrix, given by \(J_{f}(z,w) = \begin{pmatrix} \frac{\partial x}{\partial z} & \frac{\partial x}{\partial w} \\\frac{\partial y}{\partial z} & \frac{\partial y}{\partial w} \end{pmatrix}\)
Our Jacobian, given by \(\bigg| J_{f}(z,w) \bigg|\)
The support of the joint distribution \(\pi_{Z,W}(z,w)\) (we’ll do this step at the end).
Since \(X\) and \(Y\) are independent, we have \[ \pi_{X,Y}(x,y) = e^{-x}e^{-y} = e^{-x - y} \] Now we must determine what our function \(f\) is, and its inverse. From the problem set-up, we have \(f: (x, y) \longmapsto (x - y, x + y)\). To find the inverse function, we can rearrange these outputs to define \(x\) and \(y\) solely in terms of \(z\) and \(w\). Some algebra included below: \[\begin{align*} x & = z + y \\ y & = w - x \\ x & = z + w - x \\ 2x & = z + w \\ x & = \frac{z + w}{2} \quad \text{(Our first equation!)} \\ y & = w - \frac{z + w}{2} \\ y & = \frac{2w - z - w}{2} \\ y & = \frac{w - z}{2} \quad \text{(Our second equation!)} \end{align*}\] Which gives us \(f^{-1}: (z,w) \longmapsto (\frac{z + w}{2}, \frac{w - z}{2})\). The Jacobian matrix is then given by \[ J_f(z,w) = \begin{pmatrix} \frac{\partial (\frac{z + w}{2})}{\partial z} & \frac{\partial (\frac{z + w}{2})}{\partial w} \\ \frac{\partial (\frac{w - z}{2})}{\partial z} & \frac{\partial (\frac{w - z}{2})}{\partial w} \end{pmatrix} = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{-1}{2} & \frac{1}{2} \end{pmatrix} \] and the Jacobian is given by \(|J_f(z,w)| = (1/2)(1/2) - (1/2)(-1/2) = 1/2\).
Now we determine the support of the distribution \(\pi_{Z,W} (z,w)\). Since \(X\) and \(Y\) are exponential, we know that \[\begin{align*}
& 0 \leq x < \infty \\
& 0 \leq y < \infty
\end{align*}\] Plugging in some things and rearranging, we get \[\begin{align*}
& 0 \leq \frac{z + w}{2} < \infty \\
& 0 \leq z + w < \infty \\
& -z \leq w < \infty \quad \text{(equivalently, } -w \leq z < \infty \text{)}
\end{align*}\] and
\[\begin{align*}
& 0 \leq \frac{w - z}{2} < \infty \\
& 0 \leq w - z < \infty \\
& z \leq w < \infty
\end{align*}\]
Putting these together, we have \(-w \leq z \leq w < \infty\), so the support for \(z\), marginally, is given by \([-w, w]\). The support for \(w\), marginally, is given by \([0, \infty)\), since it is a sum of two random variables that each have that same support. Note that this therefore means \(z\) can range over \((-\infty, \infty)\), depending on the value of \(w\).
We can now finally apply the Jacobian method to obtain \(\pi_{Z,W}(z,w)\) using the separate pieces we have calculated, obtaining: \[\begin{align*} \pi_{Z,W}(z,w) & = \pi_{X,Y}(f^{-1}(z,w)) \times | J_f(z,w) | \times I \{ -w \leq z \leq w, 0 \leq w < \infty \}\\ & = \exp(-\frac{z + w}{2} - \frac{w-z}{2}) \times \frac{1}{2} \times I \{ -w \leq z \leq w, 0 \leq w < \infty \}\\ & = \frac{1}{2} \exp(\frac{-z - w - w + z}{2}) \times I \{ -w \leq z \leq w, 0 \leq w < \infty \}\\ & = \frac{1}{2} e^{-w} \times I \{ -w \leq z \leq w, 0 \leq w < \infty \} \end{align*}\] Now that we have the joint distribution \(\pi_{Z,W}(z,w)\), we must integrate with respect to \(W\) to get the marginal distribution of \(Z\). Recall that we have both \(-z \leq w < \infty\) and \(z \leq w < \infty\), so we consider these two cases separately. We have \[ \pi_Z(z) = \begin{cases} \int_z^\infty \frac{1}{2} e^{-w} dw = - \frac{1}{2} e^{-w} \big|_z^\infty = \frac{1}{2}e^{-z} & 0 \leq z < \infty\\ \int_{-z}^\infty \frac{1}{2} e^{-w} dw = - \frac{1}{2} e^{-w} \big|_{-z}^\infty = \frac{1}{2}e^{z} & -\infty < z \leq 0 \end{cases} \] which can equivalently be written as \[ \pi_Z(z) = \frac{1}{2} e^{-|z|} \quad -\infty < z < \infty \] which implies \(Z \sim Laplace(0,1)\).
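Finally, a Monte Carlo sanity check of this worked example (illustrative only): differences of independent \(Exponential(1)\) draws should match the \(Laplace(0, 1)\) distribution.

```python
# Z = X - Y for independent Exponential(1) draws vs. the Laplace(0, 1) cdf.
import numpy as np
from scipy import stats

rng = np.random.default_rng(355)
x = rng.exponential(size=1_000_000)
y = rng.exponential(size=1_000_000)
z = x - y

# Compare empirical probabilities Pr(Z <= c) to the Laplace(0, 1) cdf.
for c in (-1.0, 0.0, 1.0):
    print(np.mean(z <= c), stats.laplace.cdf(c))
```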