Expectancy Study

In this example I am using data from a study by Adler [1] on the effect of rater expectations on ratings. You can find it in the ‘car’ package in R, under Adler. There are three variables in the dataset: expectation, rating, and instruction. Briefly, study participants were randomly assigned to either a HIGH or LOW expectancy group by manipulating the expectations of their unsuspecting raters.

I won’t be using the instruction variable here.

The t-test in R is pretty simple to run, either using R Commander or via scripting.

t.test(rating~expectation, alternative='greater', conf.level=.95, var.equal=TRUE, data=Adler)
## 
##  Two Sample t-test
## 
## data:  rating by expectation
## t = 0.89865, df = 95, p-value = 0.1856
## alternative hypothesis: true difference in means is greater than 0
## 95 percent confidence interval:
##  -2.219448       Inf
## sample estimates:
## mean in group HIGH  mean in group LOW 
##          -4.571429          -7.187500
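As a sanity check on how the pieces of this output fit together: the lower bound of the one-sided 95 percent confidence interval is the difference in means minus qt(.95, df) times the standard error, so the standard error (which R does not print here) can be backed out from the displayed numbers. This is just arithmetic on the printed values, not a rerun of the test:

```r
# Back out the standard error from the printed output, then
# re-derive the t statistic: t = (difference in means) / SE.
diff_means <- -4.571429 - (-7.187500)   # mean(HIGH) - mean(LOW)
ci_lower   <- -2.219448                 # printed CI lower bound
se         <- (diff_means - ci_lower) / qt(0.95, df = 95)
t_stat     <- diff_means / se           # close to the printed 0.89865
```

Any small discrepancy from the printed t comes from rounding in the displayed values.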

Analysis of the code

One ‘calls’ a t-test in R with the t.test() command. The formula rating~expectation tells R to take the rating variable and compare it across the two categories of expectation.

Because we suspect that raters given HIGH expectations by the researcher manipulating the study will produce higher ratings, we use the alternative='greater' argument: I believe there will be a difference between the groups AND that the difference will be in the direction of HIGH. Since HIGH comes first in the alphabet, R treats it as the first group. If the groups had been named HIGH and CONTROL, we would have said 'less' in this case; the other option is 'two.sided', for those times when we don’t have a theory about direction.

The conf.level argument can be omitted, since the default is .95, but you can change it to something else if you want. Where we see var.equal=TRUE we could have said FALSE, which makes the test assume unequal variances (the Welch test). Finally, the data=Adler argument tells R to look in the Adler dataset. One could just as easily have written t.test(Adler$rating ~ Adler$expectation, ….) and omitted the final argument.
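To see which group R treats as “first,” you can inspect the factor levels directly. Here is a minimal sketch with a tiny invented data frame standing in for Adler (the numbers are made up purely for illustration):

```r
# Tiny invented data frame standing in for Adler, to show how the
# formula interface orders the groups. Factor levels sort
# alphabetically by default, so HIGH comes before LOW.
d <- data.frame(
  expectation = factor(c("HIGH", "HIGH", "HIGH", "LOW", "LOW", "LOW")),
  rating      = c(-3, -5, -4, -8, -7, -6)
)
levels(d$expectation)   # "HIGH" "LOW": HIGH is the first group

res <- t.test(rating ~ expectation, alternative = "greater",
              var.equal = TRUE, data = d)
res$estimate            # mean in group HIGH, mean in group LOW
```

The test statistic is built from mean(first level) minus mean(second level), which is why the ordering of the levels decides whether 'greater' or 'less' matches your hypothesis.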

Rounding off the output that we see above, we get an obtained t of 0.899 at 95 degrees of freedom, with a p-value of 0.19. We would be inclined to fail to reject the null hypothesis.

### A two-sided test….

t.test(rating~expectation, alternative='two.sided', conf.level=.95, var.equal=FALSE, data=Adler)
## 
##  Welch Two Sample t-test
## 
## data:  rating by expectation
## t = 0.89922, df = 94.85, p-value = 0.3708
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -3.159703  8.391846
## sample estimates:
## mean in group HIGH  mean in group LOW 
##          -4.571429          -7.187500

Rounding off the output from our two-sided test, we get an obtained t of 0.899 at 94.85 df, and p = 0.37. You may notice that the test statistics are almost equal between the two runs, but the df shifted a bit (the Welch correction adjusts the degrees of freedom when the variances are not assumed equal), and the p-value is roughly twice as large in the two.sided test, since the probability is now split between both tails.
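The doubling is not a coincidence: for the symmetric t distribution, the two-sided p-value is twice the one-tailed area beyond the statistic. A quick check using the printed t values and degrees of freedom (the tiny mismatch between the pooled and Welch runs comes from the df adjustment, not the tail arithmetic):

```r
# One-sided p: upper-tail area beyond the pooled-test t statistic.
p_one <- pt(0.89865, df = 95, lower.tail = FALSE)
# Two-sided p: both tails beyond the Welch-test t statistic.
p_two <- 2 * pt(0.89922, df = 94.85, lower.tail = FALSE)
round(p_one, 4)   # matches the 0.1856 printed above
round(p_two, 4)   # matches the 0.3708 printed above
```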

Citations

[1] Adler, N. E. (1973). Impact of prior sets given experimenters and subjects on the experimenter expectancy effect. Sociometry, 36, 113–126.