
Title:
Confidence Intervals - Intro to Inferential Statistics

Description:

Although using the standard error of the estimate can help us assess the

accuracy of our predictions, we can make even more accurate predictions by

computing confidence intervals for our predicted values, our y-hats. In other

words, when we get our regression line, we have our expected value which is

this y-hat-naught, for a specified value x-naught. However, the actual value

might be anything from up here, to down here, or it might be exactly equal to

our predicted value. Therefore, we might want a confidence interval around our

expected value for where the actual value might be. Similarly, we might want a

confidence interval for the true slope. We'll have a certain regression line

and we'll have calculated the slope for that regression line based on our

sample data. And this is just an example, the slope could be this, or flatter

down to here. A confidence interval for the slope can tell us the range in

which the true population slope might lie. You won't actually calculate the

confidence interval in this lesson, because we're going to assume that you'll

use a computer to get it. The important thing is that you know what it means.
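As a sketch of what that computer would be doing, here is how both intervals could be computed by hand in Python. The data, the value x-naught, and the critical value t-star (looked up in a t-table for n - 2 = 3 degrees of freedom at 95% confidence) are all made up for illustration; they are not from the lesson's figures.

```python
import math

# Hypothetical sample data -- made up for illustration
x = [1, 2, 3, 4, 5]
y = [2.1, 2.9, 3.2, 4.8, 5.1]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# Least-squares slope b and intercept a of the sample regression line
sxx = sum((xi - x_bar) ** 2 for xi in x)
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
b = sxy / sxx
a = y_bar - b * x_bar

# Standard error of the estimate: s = sqrt(SSE / (n - 2))
sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
s = math.sqrt(sse / (n - 2))

# 95% CI for the slope: b +/- t* * s / sqrt(Sxx)
# t* is the critical t value for n - 2 = 3 degrees of freedom (from a t-table)
t_star = 3.182
se_b = s / math.sqrt(sxx)
slope_ci = (b - t_star * se_b, b + t_star * se_b)

# 95% CI for the expected value y-hat-naught at a specified x-naught:
# y_hat0 +/- t* * s * sqrt(1/n + (x0 - x_bar)^2 / Sxx)
x0 = 4
y_hat0 = a + b * x0
margin = t_star * s * math.sqrt(1 / n + (x0 - x_bar) ** 2 / sxx)
mean_ci = (y_hat0 - margin, y_hat0 + margin)

print("slope b =", round(b, 3), "95% CI:", slope_ci)
print("y-hat at x0 =", round(y_hat0, 3), "95% CI:", mean_ci)
```

In practice you would let software such as scipy or statsmodels report these intervals directly, which is exactly the assumption this lesson makes.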

For example, let's say we have some sample data that looks like this. We

calculate that the regression line is y equals bx plus a. Some slope and some

intercept. But let's say then that we're able to look at all the population

data. And it looks like this. Now it looks to be slightly downward sloping.

Since we're assuming this is the true regression line, we'll use the common

notation for the regression coefficients for the population. Oh, and these

should have hats since they're the predicted values. If this were the case,

where the sample regression line is positively sloping but the true regression

line for the population is negatively sloping, that would mean that the

confidence interval for b has a negative lower bound and a positive upper

bound. It includes zero within this range. Therefore, if we run a two-tailed

hypothesis test for whether or not the slope is equal to zero, we would fail to

reject the null, meaning there's no evidence that there's a linear relationship

between x and y based on that sample. In fact, let's get more into hypothesis

testing for slope.
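The equivalence just described can be sketched in a few lines: a two-tailed test of whether the slope is zero, at the matching significance level, fails to reject exactly when zero lies inside the confidence interval. The interval endpoints below are made-up numbers for illustration.

```python
# Hypothetical 95% confidence interval for the slope b -- numbers made up
lower, upper = -0.35, 0.62

# A two-tailed test of H0: slope = 0 at alpha = 0.05 fails to reject
# exactly when zero falls inside the 95% confidence interval for the slope.
if lower < 0 < upper:
    conclusion = "fail to reject the null: no evidence of a linear relationship"
else:
    conclusion = "reject the null: evidence of a nonzero slope"

print(conclusion)
```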