Input Analysis
Introduction
The goal of input analysis is to ensure that the random variables we use in our simulations adequately approximate the real-world phenomena we are modeling. We might use random variables to model interarrival times, service times, breakdown times, or maintenance times, among other things. These variables don't come out of thin air; we have to specify them appropriately.
Why Worry? GIGO!
If we specify our random variables improperly, we can end up with a GIGO simulation: garbage-in-garbage-out. This situation can ruin the entire model and invalidate any results we might obtain.
GIGO Example
Let's consider a single-server queueing system with constant service times of ten minutes. Let's incorrectly assume that we have constant interarrival times of twelve minutes. We should expect to never have a line under this assumption because the service times are always shorter than the interarrival times.
However, suppose that, in reality, we have exponentially distributed interarrival times with a mean of twelve minutes. In this case, the simulation never sees the line that actually occurs. In fact, the line might get quite long, and the simulation has no way of surfacing that fact.
So What To Do?
Here's the high-level game plan for performing input analysis. First, we'll collect some data for analysis for any random variable of interest. Next, we'll determine, or at least estimate, the underlying distribution along with associated parameters: for example, Nor(30,8). We will then conduct a formal statistical test to see if the distribution we chose is "approximately" correct. If we fail the test, then our guess is wrong, and we have to go back to the drawing board.
Identifying Distributions
In this lesson, we will look at some high-level methods for examining data and guessing the underlying distribution.
Three Little Bears
We can always present data in the form of a histogram. Suppose we collect one hundred observations, and we plot the following histogram. We can't really determine the underlying distribution because we don't have enough cells.
Suppose we plot the following histogram. The resolution on this plot is too granular. We can potentially see the underlying distribution, but we risk missing the forest for the trees.
If we take the following histogram, we get a much clearer picture of the underlying distribution.
It turns out that, if we take enough observations, the histogram will eventually converge to the true pdf/pmf of the random variable we are trying to model, according to the Glivenko-Cantelli theorem.
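As a quick sanity check on the "three bears" idea, here is a minimal Python sketch (assuming numpy and matplotlib are installed, and using a made-up exponential sample) that plots the same data with too few cells, too many cells, and a more reasonable number of cells:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.exponential(scale=10, size=100)   # 100 hypothetical observations

# Same data, three bin counts: too coarse, too fine, and "just right".
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, bins in zip(axes, [3, 60, 12]):
    ax.hist(data, bins=bins, density=True, edgecolor="black")
    ax.set_title(f"{bins} cells")
plt.tight_layout()
plt.show()
```

With only 100 observations, even the middle plot is noisy; as the sample size grows, the histogram settles down toward the true pdf.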
Stem-and-Leaf
If we turn the histogram on its side and add some numerical information, we get a stem-and-leaf diagram, where each stem represents the common root shared among a collection of observations, and each leaf represents the observation itself.
Which Distribution?
When looking at empirical data, what questions might we ask to arrive at the underlying distribution? For example, can we at least tell if the observations are discrete or continuous?
We might want to ask whether the distribution is univariate or multivariate. We might be interested in someone's weight, but perhaps we need to generate height and weight observations simultaneously.
Additionally, we might need to check how much data we have available. Certain distributions lend themselves more easily to smaller samples of data.
Furthermore, we might need to communicate with experts regarding the nature of the data. For example, we might want to know if the arrival rate changes at our facility as the day progresses. While we might observe the rate directly, we might want to ask the floor supervisor what to expect beforehand.
Finally, what happens when we don't have much or any data? What if the system we want to model doesn't exist yet? How might we guess a good distribution?
Which Distribution, II?
Let's suppose we know that we have a discrete distribution. For example, we might realize that we only see a finite number of observations during our data collection process. How do we determine which discrete distribution to use?
If we want to model successes and failures, we might use a Bernoulli random variable and estimate p. If we want to look at the number of successes in n trials, we need to consider using a binomial random variable.
Perhaps we want to understand how many trials we need until we get our first success. In that case, we need to look at a geometric random variable. Alternatively, if we want to know how many trials we need until the nth success, we need a negative binomial random variable.
We can use the Poisson($\lambda$) distribution to count the number of arrivals over time, assuming that the arrival process satisfies certain elementary assumptions.
If we honestly don't have a good model for the discrete data, perhaps we can use an empirical or sample distribution.
Which Distribution, III?
What if the distribution is continuous?
We might consider the uniform distribution if all we know about the data is the minimum and maximum possible values. If we know the most likely value as well, we might use the triangular distribution.
If we are looking at interarrival times from a Poisson process, then we know we should be looking at the Exp($\lambda$) distribution. If the process is nonhomogeneous, we might have to do more work, but the exponential distribution is a good starting point.
We might consider the normal distribution if we are looking at heights, weights, or IQs. Furthermore, if we are looking at sample means or sums, the normal distribution is a good choice because of the central limit theorem.
We can use the Beta distribution, which generalizes the uniform distribution, to specify bounded data. We might use the gamma, Weibull, Gumbel, or lognormal distribution if we are dealing with reliability data.
When in doubt, we can use the empirical distribution, which is based solely on the sample.
Game Plan
As we said, we will choose a "reasonable" distribution, and then we'll perform a hypothesis test to make sure that our choice is not too ridiculous.
For example, suppose we hypothesize that some data is normal. This data should fall approximately on a straight line when we graph it on a normal probability plot, and it should look normal when we graph it on a standard plot. At the very least, it should also pass a goodness-of-fit test for normality, of which there are several.
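Here is one way this might look in practice: a minimal Python sketch (assuming numpy, scipy, and matplotlib are available, with made-up data) that draws a normal probability plot and runs one of the several available goodness-of-fit tests, the Shapiro-Wilk test:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=30, scale=8, size=200)   # hypothetical sample we suspect is normal

# Normal probability plot: the points should fall roughly on a straight line.
stats.probplot(data, dist="norm", plot=plt)
plt.show()

# Shapiro-Wilk goodness-of-fit test for normality.
w_stat, p_value = stats.shapiro(data)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p-value = {p_value:.3f}")
```

A large p-value means we fail to reject normality; a tiny p-value sends us back to the drawing board.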
Unbiased Point Estimation
It's not enough to decide that some sequence of observations is normal; we still have to estimate $\mu$ and $\sigma^2$. In the next few lessons, we will look at point estimation, which lets us understand how to estimate these unknown parameters. We'll cover the concept of unbiased estimation first.
Statistic Definition
A statistic is a function of the observations $X_1, \ldots, X_n$ that is not explicitly dependent on any unknown parameters. For example, the sample mean, $\bar{X}$, and the sample variance, $S^2$, are two statistics:

$$\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i, \qquad S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2$$
Statistics are random variables. In other words, if we take two different samples, we should expect to see two different values for a given statistic.
We usually use statistics to estimate some unknown parameter from the underlying probability distribution of the $X_i$'s. For instance, we use the sample mean, $\bar{X}$, to estimate the true mean, $\mu$, of the underlying distribution, which we won't normally know. If $\mu$ is the true mean, then we can take a bunch of samples and use $\bar{X}$ to estimate $\mu$. We know, via the law of large numbers, that $\bar{X} \to \mu$ as $n \to \infty$.
Point Estimator
Let's suppose that we have a collection of iid random variables, $X_1, \ldots, X_n$. Let $T(X) \equiv T(X_1, \ldots, X_n)$ be a function that we can compute based only on the observations. Therefore, T(X) is a statistic. If we use T(X) to estimate some unknown parameter $\theta$, then T(X) is known as a point estimator for $\theta$.

For example, $\bar{X}$ is usually a point estimator for the true mean, $\mu = E[X_i]$, and $S^2$ is often a point estimator for the true variance, $\sigma^2 = \text{Var}(X_i)$.
T(X) should have specific properties:
Its expected value should equal the parameter it's trying to estimate. This property is known as unbiasedness.
It should have a low variance. It doesn't do us any good if T(X) is bouncing around depending on the sample we take.
Unbiasedness
We say that T(X) is unbiased for $\theta$ if $E[T(X)] = \theta$. For example, suppose that the random variables $X_1, \ldots, X_n$ are iid anything with mean $\mu$. Then:

$$E[\bar{X}] = E\left[\frac{1}{n}\sum_{i=1}^n X_i\right] = \frac{1}{n}\sum_{i=1}^n E[X_i] = \mu$$

Since $E[\bar{X}] = \mu$, $\bar{X}$ is always unbiased for $\mu$. That's why we call it the sample mean.

Similarly, suppose we have random variables $X_1, \ldots, X_n$ which are iid Exp($\lambda$). Then $\bar{X}$ is unbiased for $\mu = E[X_i] = 1/\lambda$. Even though $\lambda$ is unknown, we know that $\bar{X}$ is a good estimator for $1/\lambda$.

Be careful, though. Just because $\bar{X}$ is unbiased for $1/\lambda$ does not mean that $1/\bar{X}$ is unbiased for $\lambda$: in general, $E[1/\bar{X}] \neq 1/E[\bar{X}] = \lambda$. In fact, $1/\bar{X}$ is biased for $\lambda$ in this exponential case.
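A quick Monte Carlo sketch in Python (with made-up values $\lambda = 2$ and $n = 10$) illustrates both points: the average of $\bar{X}$ lands near $1/\lambda$, while the average of $1/\bar{X}$ overshoots $\lambda$:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 2.0, 10, 100_000            # assumed rate, sample size, replications

# Each row is one sample of size n; take the sample mean of each row.
xbars = rng.exponential(scale=1 / lam, size=(reps, n)).mean(axis=1)

print("average of X-bar:  ", xbars.mean())        # close to 1/lambda = 0.5 (unbiased)
print("average of 1/X-bar:", (1 / xbars).mean())  # noticeably above lambda = 2 (biased)
```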
Here's another example. Suppose that the random variables $X_1, \ldots, X_n$ are iid anything with mean $\mu$ and variance $\sigma^2$. Then it can be shown that the sample variance is unbiased for $\sigma^2$:

$$E[S^2] = E\left[\frac{\sum_{i=1}^n (X_i - \bar{X})^2}{n-1}\right] = \sigma^2$$

Remember that $\bar{X}$ represents the average of all the $X_i$'s: $\sum_i X_i / n$. Thus, if we just sum the $X_i$'s and don't divide by $n$, we have a quantity equal to $n\bar{X}$:

$$\sum_{i=1}^n X_i = n\bar{X}$$

Unfortunately, while $S^2$ is unbiased for the variance $\sigma^2$, $S$ is biased for the standard deviation $\sigma$.
Mean Squared Error
In this lesson, we'll look at mean squared error, a performance measure that evaluates an estimator by combining its bias and variance.
Bias and Variance
We want to choose an estimator with the following properties:
Low bias (defined as the difference between the estimator's expected value and the true parameter value)
Low variance
Furthermore, we want the estimator to have both of these properties simultaneously. If the estimator has low bias but high variance, then its estimates are meaninglessly noisy. Its average estimate is correct, but any individual estimate may be way off the mark. On the other hand, an estimator with low variance but high bias is very confident about the wrong answer.
Example
Suppose that we have $n$ random variables, $X_1, \ldots, X_n \overset{\text{iid}}{\sim} \text{Unif}(0, \theta)$. We know that our observations have a lower bound of 0, but we don't know the value of the upper bound, $\theta$. As is often the case, we sample many observations from the distribution and use that sample to estimate the unknown parameter. Consider two estimators for $\theta$: $Y_1 \equiv 2\bar{X}$ and $Y_2 \equiv \frac{n+1}{n}\max_{1 \leq i \leq n} X_i$.
Let's look at the first estimator. We know that $E[Y_1] = 2E[\bar{X}]$, by definition. Similarly, we know that $2E[\bar{X}] = 2E[X_i]$, since $\bar{X}$ is always unbiased for the mean. Recall how we compute the expected value for a uniform random variable:

$$E[A] = \frac{a+b}{2}, \quad A \sim \text{Unif}(a,b)$$
Therefore:
$$2E[X_i] = 2\left(\frac{0 + \theta}{2}\right) = \theta = E[Y_1]$$

As we can see, $Y_1$ is unbiased for $\theta$.

It's also the case that $Y_2$ is unbiased, but it takes more work to demonstrate. As a first step, take the cdf of the maximum of the $X_i$'s, $M \equiv \max_i X_i$. Here's what $P(M \leq y)$ looks like:

If $M \leq y$, and $M$ is the maximum, then $P(M \leq y)$ is the probability that all the $X_i$'s are less than $y$. Since the $X_i$'s are independent, we can take the product of the individual probabilities:

$$P(M \leq y) = \prod_{i=1}^n P(X_i \leq y) = [P(X_1 \leq y)]^n$$
Now, we know, by definition, that the cdf is the integral of the pdf. Remember that the pdf for a uniform distribution, Unif($a, b$), is:

$$f(x) = \frac{1}{b-a}, \quad a < x < b$$

For Unif($0, \theta$), $P(X_1 \leq y) = y/\theta$, so $P(M \leq y) = (y/\theta)^n$ for $0 \leq y \leq \theta$. Differentiating the cdf gives the pdf of $M$, $f_M(y) = ny^{n-1}/\theta^n$, and integrating $y f_M(y)$ over $[0, \theta]$ gives $E[M] = \frac{n\theta}{n+1}$.

Note that $E[M] \neq \theta$, so $M$ is not an unbiased estimator for $\theta$. However, remember how we defined $Y_2$:

$$Y_2 \equiv \frac{n+1}{n} \max_{1 \leq i \leq n} X_i$$
Thus:
$$E[Y_2] = \frac{n+1}{n}E[M] = \frac{n+1}{n}\left(\frac{n\theta}{n+1}\right) = \theta$$

Therefore, $Y_2$ is unbiased for $\theta$.
Both estimators are unbiased, so which is better? Let's compare variances now. After similar algebra, we see:

$$\text{Var}(Y_1) = \frac{\theta^2}{3n}, \quad \text{Var}(Y_2) = \frac{\theta^2}{n(n+2)}$$

Since the variance of $Y_2$ involves dividing by $n(n+2)$, while the variance of $Y_1$ only divides by $3n$, $Y_2$ has a much lower variance than $Y_1$ and is, therefore, the better estimator.
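The comparison is easy to verify numerically. Here is a small Python sketch (with assumed values $\theta = 10$ and $n = 20$) that simulates both estimators and compares their sample means and variances to the formulas above:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 10.0, 20, 100_000

samples = rng.uniform(0, theta, size=(reps, n))
y1 = 2 * samples.mean(axis=1)               # Y1 = 2 * X-bar
y2 = (n + 1) / n * samples.max(axis=1)      # Y2 = (n+1)/n * max X_i

print("Y1: mean", y1.mean(), " variance", y1.var())   # approx theta and theta^2/(3n)
print("Y2: mean", y2.mean(), " variance", y2.var())   # approx theta and theta^2/(n(n+2))
```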
Bias and Mean Squared Error
The bias of an estimator, T(X), is the difference between the estimator's expected value and the value of the parameter it's trying to estimate: $\text{Bias}(T) \equiv E[T] - \theta$. When $E[T] = \theta$, the bias is 0 and the estimator is unbiased.

The mean squared error of an estimator, T(X), is the expected value of the squared deviation of the estimator from the parameter: $\text{MSE}(T) \equiv E[(T - \theta)^2]$. A little algebra shows that the MSE combines the bias and the variance: $\text{MSE}(T) = \text{Var}(T) + (\text{Bias}(T))^2$.

Usually, we use mean squared error to evaluate estimators. As a result, when selecting between multiple estimators, we might choose a slightly biased estimator over an unbiased one, so long as the biased estimator's MSE is the lowest among the options.
Relative Efficiency
The relative efficiency of one estimator, $T_1$, to another, $T_2$, is the ratio of their mean squared errors: $\text{MSE}(T_1)/\text{MSE}(T_2)$. If the relative efficiency is less than one, we want $T_1$; otherwise, we want $T_2$.
Let's compute the relative efficiency of the two estimators we used in the previous example:
Remember that both estimators are unbiased, so the bias is zero by definition. As a result, the mean squared error of each estimator is determined solely by its variance:

$$\frac{\text{MSE}(Y_1)}{\text{MSE}(Y_2)} = \frac{\text{Var}(Y_1)}{\text{Var}(Y_2)} = \frac{\theta^2/(3n)}{\theta^2/(n(n+2))} = \frac{n+2}{3}$$

The relative efficiency is greater than one for all $n > 1$, so $Y_2$ is the better estimator just about all the time.
Maximum Likelihood Estimation
In this lesson, we are going to talk about maximum likelihood estimation, which is perhaps the most important point estimation method. It's a very flexible technique that many software packages use to help estimate parameters from various distributions.
Likelihood Function and Maximum Likelihood Estimator
Consider an iid random sample, $X_1, \ldots, X_n$, where each $X_i$ has pdf/pmf $f(x)$. Additionally, suppose that $\theta$ is some unknown parameter from $X_i$ that we would like to estimate. We can define the likelihood function, $L(\theta)$, as:

$$L(\theta) \equiv \prod_{i=1}^n f(x_i)$$

The maximum likelihood estimator (MLE) of $\theta$ is the value of $\theta$ that maximizes $L(\theta)$. The MLE is a function of the $X_i$'s and is itself a random variable.
Exponential Example
Consider a random sample, $X_1, \ldots, X_n \overset{\text{iid}}{\sim} \text{Exp}(\lambda)$. Find the MLE for $\lambda$. Note that, in this case, $\lambda$ is taking the place of the abstract parameter, $\theta$. Now:

$$L(\lambda) \equiv \prod_{i=1}^n f(x_i)$$

We know that exponential random variables have the following pdf:

$$f(x) = \lambda e^{-\lambda x}, \quad x \geq 0$$

Therefore:

$$L(\lambda) = \prod_{i=1}^n \lambda e^{-\lambda x_i}$$
Remember what happens to exponents when we multiply bases:
$$a^x \cdot a^y = a^{x+y}$$
Let's apply this to our product (and we can swap in exp notation to make things easier to read):
$$L(\lambda) = \lambda^n \exp\left[-\lambda \sum_{i=1}^n x_i\right]$$
Now, we need to maximize $L(\lambda)$ with respect to $\lambda$. We could take the derivative of $L(\lambda)$ directly, but we can use a trick! Since the natural log function is one-to-one, the $\lambda$ that maximizes $L(\lambda)$ also maximizes $\ln(L(\lambda))$. Let's take the natural log of $L(\lambda)$:

$$\ln L(\lambda) = n\ln\lambda - \lambda\sum_{i=1}^n x_i$$

Taking the derivative with respect to $\lambda$ and setting it equal to zero gives $n/\lambda - \sum_{i=1}^n x_i = 0$, so $\lambda = n / \sum_{i=1}^n x_i = 1/\bar{x}$.
Thus, the maximum likelihood estimator for $\lambda$ is $\hat{\lambda} = 1/\bar{X}$, which makes a lot of sense. The mean of the exponential distribution is $1/\lambda$, and we usually estimate that mean by $\bar{X}$. Since $\bar{X}$ is a good estimator for $1/\lambda$, it stands to reason that a good estimator for $\lambda$ is $1/\bar{X}$.
Conventionally, we put a "hat" over the $\lambda$ that maximizes the likelihood function to indicate that it is the MLE. Such notation looks like this: $\hat{\lambda}$.

Note that we went from "little x's", $x_i$, to "big x", $\bar{X}$, in the equation. We do this to indicate that $\hat{\lambda}$ is a random variable.

Just to be careful, we probably should have performed a second-derivative test on the function, $\ln(L(\lambda))$, to ensure that we found a maximum likelihood estimator and not a minimum likelihood estimator.
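For what it's worth, we can also check the closed-form answer numerically. The sketch below (Python, with made-up Exp($\lambda = 3$) data) minimizes the negative log-likelihood with scipy and compares the result to $1/\bar{x}$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
x = rng.exponential(scale=1 / 3.0, size=500)    # hypothetical Exp(lambda = 3) sample

# Negative log-likelihood: -(n*ln(lambda) - lambda*sum(x_i))
def neg_log_lik(lam):
    return -(len(x) * np.log(lam) - lam * x.sum())

result = minimize_scalar(neg_log_lik, bounds=(1e-6, 100), method="bounded")
print("numerical MLE:      ", result.x)
print("closed form 1/x-bar:", 1 / x.mean())
```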
Bernoulli Example
Let's look at a discrete example. Suppose we have $X_1, \ldots, X_n \overset{\text{iid}}{\sim} \text{Bern}(p)$. Let's find the MLE for $p$. We might remember that the expected value of a Bern($p$) random variable is $p$, so we shouldn't be surprised if $\bar{X}$ is our MLE.
Let's remember the values that $X_i$ can take:

$$X_i = \begin{cases} 1 & \text{w.p. } p \\ 0 & \text{w.p. } 1-p \end{cases}$$
Therefore, we can write the pmf for a Bern($p$) random variable as follows:

$$f(x) = p^x(1-p)^{1-x}, \quad x = 0, 1$$

The likelihood is then $L(p) = \prod_{i=1}^n f(x_i) = p^{\sum_i x_i}(1-p)^{n - \sum_i x_i}$, and maximizing $\ln L(p)$ with respect to $p$ gives the MLE $\hat{p} = \bar{X}$, just as we anticipated.
Normal Example
Now suppose we have $X_1, \ldots, X_n \overset{\text{iid}}{\sim} \text{Nor}(\mu, \sigma^2)$, and we want the MLEs for $\mu$ and $\sigma^2$. Let's take the first derivative of $\ln L(\mu, \sigma^2)$ with respect to $\mu$ to find the MLE, $\hat{\mu}$, for $\mu$. Remember that the derivative of terms that don't contain $\mu$ is zero. Setting that derivative equal to zero gives $\hat{\mu} = \bar{X}$, and carrying out the same procedure with respect to $\sigma^2$ gives:

$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2$$
Notice how close $\hat{\sigma}^2$ is to the unbiased sample variance:

$$S^2 = \frac{\sum_{i=1}^n (X_i - \bar{X})^2}{n-1} = \frac{n\hat{\sigma}^2}{n-1}$$
Because $S^2$ is unbiased, we have to expect that $\hat{\sigma}^2$ is slightly biased. However, $\hat{\sigma}^2$ has slightly less variance than $S^2$, and it is the MLE. Regardless, the two quantities converge as $n$ grows.
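If we want to see the $n/(n-1)$ relationship directly, a short Python check (with an arbitrary normal sample) does the trick, since numpy's ddof argument controls whether we divide by $n$ or $n - 1$:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 25
x = rng.normal(loc=0, scale=2, size=n)

sigma2_mle = x.var(ddof=0)   # MLE: divide by n
s2 = x.var(ddof=1)           # unbiased sample variance: divide by n - 1

print("sigma^2-hat (MLE):", sigma2_mle)
print("S^2 (unbiased):   ", s2)
print("ratio:", sigma2_mle / s2, " (n-1)/n =", (n - 1) / n)
```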
Gamma Example
Let's look at the Gamma distribution, parameterized by $r$ and $\lambda$. The pdf for this distribution is shown below. Recall that $\Gamma(r)$ is the gamma function.
$$f(x) = \frac{\lambda^r x^{r-1} e^{-\lambda x}}{\Gamma(r)}, \quad x > 0$$
Suppose we have $X_1, \ldots, X_n \overset{\text{iid}}{\sim} \text{Gam}(r, \lambda)$. Let's find the MLEs for $r$ and $\lambda$. The log-likelihood is:

$$\ln L(r, \lambda) = nr\ln\lambda - n\ln\Gamma(r) + (r-1)\sum_i \ln x_i - \lambda\sum_i x_i$$

Setting the partial derivative with respect to $\lambda$ equal to zero gives $\lambda = r/\bar{X}$, while the partial derivative with respect to $r$ involves the derivative of the gamma function.
We can define the digamma function, $\Psi(r)$, to help us with the term involving the gamma function and its derivative:

$$\Psi(r) \equiv \Gamma'(r)/\Gamma(r)$$
At this point, we can substitute in $\lambda = r/\bar{X}$, and then use a computer to solve the following equation for $r$, either by bisection, Newton's method, or some other method:

$$n\ln(r/\bar{X}) - n\Psi(r) + \ln\left(\prod_i x_i\right) = 0$$
The challenging part of evaluating the digamma function is computing the derivative of the gamma function. We can use the definition of the derivative here to help us, choosing our favorite small h and then evaluating:
$$\Gamma'(r) \approx \frac{\Gamma(r+h) - \Gamma(r)}{h}$$
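In practice we don't even need the finite-difference trick, since scipy exposes the digamma function directly. Here is a sketch (with made-up Gam($r = 2.5$, $\lambda = 1.5$) data) that solves the equation above for $r$ with a bracketing root finder and then backs out $\lambda$:

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

rng = np.random.default_rng(5)
x = rng.gamma(shape=2.5, scale=1 / 1.5, size=1000)   # hypothetical Gam(2.5, 1.5) sample

n, xbar, sum_log_x = len(x), x.mean(), np.log(x).sum()

# g(r) = n*ln(r/x-bar) - n*Psi(r) + sum(ln x_i); the MLE r-hat solves g(r) = 0.
def g(r):
    return n * np.log(r / xbar) - n * digamma(r) + sum_log_x

r_hat = brentq(g, 1e-6, 100)    # root finding on a bracketing interval
lam_hat = r_hat / xbar          # then lambda-hat = r-hat / x-bar
print("r-hat:", r_hat, " lambda-hat:", lam_hat)
```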
Uniform Example
Suppose we have $X_1, \ldots, X_n \overset{\text{iid}}{\sim} \text{Unif}(0, \theta)$. Let's find the MLE for $\theta$.
Remember that the pdf is $f(x) = 1/\theta, \ 0 < x < \theta$. We can take the likelihood function as the product of the $f(x_i)$'s:

$$L(\theta) = \prod_{i=1}^n f(x_i) = \begin{cases} 1/\theta^n & \text{if } 0 \leq x_i \leq \theta, \ \forall i \\ 0 & \text{otherwise} \end{cases}$$
In order to have $L(\theta) > 0$, we must have $0 \leq x_i \leq \theta, \ \forall i$. In other words, $\theta$ must be at least as large as the largest observation we've seen yet: $\theta \geq \max_i x_i$.
Subject to this constraint, $L(\theta) = 1/\theta^n$ is decreasing in $\theta$, so we can't simply take $\theta$ arbitrarily close to zero; instead, $L(\theta)$ is maximized at the smallest $\theta$ value the constraint allows, namely $\hat{\theta} = \max_i X_i$.
This result makes sense in light of the similar (unbiased) estimator $Y_2 = \frac{n+1}{n}\max_i X_i$ that we saw previously.
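As a tiny numerical illustration (made-up $\theta = 10$, $n = 50$):

```python
import numpy as np

rng = np.random.default_rng(6)
theta, n = 10.0, 50
x = rng.uniform(0, theta, size=n)

theta_mle = x.max()               # MLE: the largest observation
y2 = (n + 1) / n * x.max()        # the unbiased estimator Y2 from earlier
print("MLE:", theta_mle, " unbiased Y2:", y2)
```

The MLE always sits just below $\theta$ (it can never exceed it), which is exactly the bias that the $(n+1)/n$ correction in $Y_2$ removes.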
Invariance Properties of MLEs
In this lesson, we will expand the vocabulary of maximum likelihood estimators by looking at the invariance property of MLEs. In a nutshell, if we have the MLE for some parameter, then we can use the invariance property to determine the MLE for any reasonable function of that parameter.
Invariance Property of MLEs
If $\hat{\theta}$ is the MLE of some parameter $\theta$, and $h(\cdot)$ is a 1:1 function, then $h(\hat{\theta})$ is the MLE of $h(\theta)$.

Remember that this invariance property does not hold for unbiasedness. For instance, we said previously that the sample variance is an unbiased estimator for the true variance because $E[S^2] = \sigma^2$. However, $E[\sqrt{S^2}] \neq \sigma$, so we cannot use the sample standard deviation as an unbiased estimator for the true standard deviation.
Examples
Suppose we have a random sample, $X_1, \ldots, X_n \overset{\text{iid}}{\sim} \text{Bern}(p)$. We might remember that the MLE of $p$ is $\hat{p} = \bar{X}$. If we consider the 1:1 function $h(\theta) = \theta^2, \ \theta > 0$, then the invariance property says that the MLE of $p^2$ is $\bar{X}^2$.
Suppose we have a random sample, $X_1, \ldots, X_n \overset{\text{iid}}{\sim} \text{Nor}(\mu, \sigma^2)$. We saw previously that the MLE for $\sigma^2$ is:

$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2$$
We just said that we couldn't take the square root of $S^2$ to estimate $\sigma$ in an unbiased way. However, we can use the square root of $\hat{\sigma}^2$ to get the MLE for $\sigma$.

If we consider the 1:1 function $h(\theta) = +\sqrt{\theta}$, then the invariance property says that the MLE of $\sigma$ is:

$$\hat{\sigma} = \sqrt{\hat{\sigma}^2} = \sqrt{\frac{\sum_{i=1}^n (X_i - \bar{X})^2}{n}}$$
Suppose we have a random sample, $X_1, \ldots, X_n \overset{\text{iid}}{\sim} \text{Exp}(\lambda)$. The survival function, $\bar{F}(x)$, is:

$$\bar{F}(x) = P(X > x) = 1 - F(x) = 1 - (1 - e^{-\lambda x}) = e^{-\lambda x}$$
We saw previously that the MLE for $\lambda$ is $\hat{\lambda} = 1/\bar{X}$. Therefore, using the invariance property, we can see that the MLE for $\bar{F}(x)$ is obtained by plugging $\hat{\lambda}$ into the survival function:

$$\widehat{\bar{F}(x)} = e^{-\hat{\lambda}x} = e^{-x/\bar{X}}$$
The MLE for the survival function is used all the time in actuarial sciences to determine - somewhat gruesomely, perhaps - the probability that people will live past a certain age.
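As a small illustration of the invariance property in code (with a made-up Exp($\lambda = 0.1$) sample standing in for lifetimes):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(scale=1 / 0.1, size=200)   # hypothetical lifetimes, true lambda = 0.1

lam_hat = 1 / x.mean()                         # MLE of lambda

def surv_hat(t):
    """MLE of the survival function P(X > t), by invariance."""
    return np.exp(-lam_hat * t)

print("estimated P(X > 20):", surv_hat(20.0))
print("true      P(X > 20):", np.exp(-0.1 * 20.0))
```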
The Method of Moments (Optional)
In this lesson, we'll finish off our discussion on estimators by talking about the Method of Moments.
Suppose we have a sequence of random variables, $X_1, \ldots, X_n$, which are iid from pmf/pdf $f(x)$. The method of moments (MOM) estimator for $E[X^k]$, denoted $m_k$, is:
$$m_k = \frac{1}{n}\sum_{i=1}^n X_i^k$$
Note that $m_k$ is equal to the sample average of the $X_i^k$'s. Indeed, the MOM estimator for $\mu = E[X_i]$ is the sample mean, $\bar{X}$:

$$m_1 = \frac{1}{n}\sum_{i=1}^n X_i = \bar{X} \approx E[X_i]$$
Similarly, we can find the MOM estimator for k=2:
$$m_2 = \frac{1}{n}\sum_{i=1}^n X_i^2 \approx E[X_i^2]$$
We can combine the MOM estimators for $k = 1, 2$ to produce an expression for the variance of $X_i$:

$$\widehat{\text{Var}}(X_i) = m_2 - m_1^2 = \frac{1}{n}\sum_{i=1}^n X_i^2 - \bar{X}^2 = \frac{n-1}{n}S^2$$
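A minimal Python sketch (with a made-up normal sample whose true variance is 64) shows the MOM variance estimator in action and compares it to the usual $S^2$:

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.normal(loc=30, scale=8, size=1000)   # hypothetical data, true variance 64

m1 = x.mean()            # MOM estimator of E[X]
m2 = (x ** 2).mean()     # MOM estimator of E[X^2]

var_mom = m2 - m1 ** 2   # MOM estimator of Var(X) = E[X^2] - (E[X])^2
print("MOM variance estimate:        ", var_mom)
print("unbiased sample variance S^2: ", x.var(ddof=1))
```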