Comparing Outcomes to Known Values

In this example I am using data from an evaluation of my research classes, where I gave a pre-test and a post-test. Now I want to compare a current class to that earlier standard. I did not happen to give a pre-test this year, so I can only use the post-test outcomes. The current dataset is called current. I am testing it against the earlier data, which had a mean of 76.36. I can get the current class’s mean by running mean(current). What I don’t know is whether the current scores come from the same distribution as the earlier ones.
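
For reference, that mean check is a single line of code. This is a minimal sketch that assumes current is already loaded as a numeric vector of this year’s post-test scores.

mean(current)   # the current class's post-test mean, about 69.76, versus the earlier 76.36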

The t-test in R is pretty simple to run, either through R Commander or via scripting.

t.test(current, mu=76.36, alternative='two.sided')
## 
##  One Sample t-test
## 
## data:  current
## t = -2.5826, df = 41, p-value = 0.01347
## alternative hypothesis: true mean is not equal to 76.36
## 95 percent confidence interval:
##  64.60235 74.92146
## sample estimates:
## mean of x 
##   69.7619

Analysis of the Code

One runs a one-sample t-test in R with the t.test() command. The arguments current, mu=76.36 tell R to take the current variable and compare its mean to the population mean of 76.36.

Since we have no reason to predict the direction of any difference between the current class and the previous one, we use the alternative='two.sided' argument.
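
If we did have a directional prediction, say that the current class would score lower, a one-sided alternative could be used instead. This variant is only an illustration, not part of the original analysis.

t.test(current, mu=76.36, alternative='less')   # hypothetical one-sided version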

Analysis of the Output

R tells us that it is running a One Sample t-test and that our data is current. Boring!

To the point, it gives us a t score of -2.5826 at 41 degrees of freedom and a p-value of 0.01347.

If the p-value exceeds .05, we fail to reject the null hypothesis; if it is less, we reject it. Here p = 0.01347, which is less than .05, so we reject the null.
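
That comparison can also be done in code by saving the test object and pulling out its p-value. The name result below is just an illustrative choice.

result <- t.test(current, mu=76.36, alternative='two.sided')
result$p.value          # 0.01347
result$p.value < 0.05   # TRUE, so we reject the null hypothesis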

In this case the known mean of 76.36 minus the current mean of 69.76 equals about 6.6 points. The current class scored roughly 6.6 points lower on the post-test than the earlier class, and the test tells us this difference is unlikely to be due to chance.
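
The arithmetic behind that comparison is just the gap between the benchmark mean and the current sample mean, as a quick sketch:

76.36 - mean(current)   # about 6.6 points below the earlier benchmark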