This chapter will help you to:

  1. State the goal of the independent samples t-test
  2. State the null hypothesis for the independent samples t-test
  3. Relate the independent samples t-test to between-subjects designs
  4. State the assumptions of the independent samples t-test
  5. Relate the independent samples t-test to the general linear model
  6. Use SPSS to test the assumptions of the independent samples t-test
  7. Use SPSS to conduct an independent samples t-test with the General Linear Model procedure
  8. Interpret the output of the General Linear Model Procedure
  9. Write up the results of the independent samples t-test in APA style

The Need for the T-Tests

The family of t-tests grew out of a need to improve upon and generalize the one-sample z-test. The z-test allowed us to determine if a sample was likely (or not) to be derived from a hypothesized population. Unfortunately, the z-test assumes large samples (e.g., \(n = 1,000\)) and that one knows the population variance ( \(\sigma^2\) ). These assumptions are not always met. Should we conduct a z-test without meeting these assumptions, we may increase the probability of making a Type I error. Furthermore, the goal of most researchers differs slightly from determining the relatedness of just one sample to a population.

Relevant Research Questions

One common question for researchers simplifies to “are the groups different?” A variant of this is “does my variable have an effect?” In both cases, the goal is to determine if multiple samples come from the same or different populations.

The t-tests allow us to detect reliable differences between samples by determining how probable it is that the two samples came from the same population. Furthermore, t-tests do not require large samples or knowledge of the population variance.

From Z-Test to T-Test

If we do not know the population variance ( \(\sigma^2\) ), we must estimate it from the data we have (e.g., the sample variance, \(s^2\)). This estimation introduces error because we are basing our estimate on just one of many possible samples from the population. The t-test accounts for this sampling error by changing the sampling distribution based on the sample size. Figure 5.1 shows how the sampling distribution for t varies by degrees of freedom (i.e., sample size minus the number of groups).

Figure 5.1

T-Distributions for Different Degrees of Freedom

Three Types of T-Tests

Along with a better way to account for the error associated with smaller samples, t-tests allow us to generalize from testing one sample’s representativeness of a hypothesized population to testing two samples. When we are testing two samples, we need to be aware of how the samples were generated. The three types of t-tests are reviewed below.

One-Sample T-Test

The one-sample t-test is equivalent to the one-sample z-test in that we would use this test to determine if our sample comes from some hypothesized population. However, the t-test adjusts for sample size by changing the shape of the t-distribution.
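
For reference, the one-sample t statistic simply swaps the population standard deviation in the z formula for the sample estimate \(s\) (a standard formula, shown here only to make the adjustment concrete):

\[ t = \frac{M - \mu}{s / \sqrt{n}}, \qquad df = n - 1 \]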

Independent Samples T-Test

The independent samples t-test is used when we want to determine if two samples are derived from the same population. You are likely more familiar with the research-question version, which is “are the two groups different?” Importantly, this form of the t-test requires that the data were collected in a between-subjects design or that measurements were made for separate groups.

Between-subjects design is a design in which each participant receives only one level of an independent variable.

This is where the name for this t-test comes from; we are testing for differences between independent (i.e., different, unrelated) samples. This is the focus of this chapter.

Paired Samples T-Test

The paired samples t-test is used for the opposite case, in which the samples are not independent, different, or unrelated. This is most commonly because the two samples come from the same group of participants, as in within-subjects designs.

Within-subjects design is a design in which each participant receives all levels of an independent variable.

The paired samples t-test can also be used if you matched groups. That is, if the characteristics of the first group of individuals are matched to those of another group (e.g., gender, IQ, height, personality). We’ll focus on this test in the next chapter.

The Independent Samples T-Test

The goal of the independent samples t-test is to determine if the sample data from two unrelated groups are from different populations. This is an important test for the simple experiment with a between-subjects design because it allows us to determine if the independent variable has an effect on the dependent variable scores across the control and experimental groups. For example, you may employ an independent samples t-test to determine if there is a difference in memory retention during studying when students either study in silence (control group) or listen to music (experimental group).

Null Hypothesis

Remember that, although we may anticipate that there is an effect of ambient sounds on retention, we need to start with the assumption that there is no effect and look for evidence to the contrary. This is the basis of null hypothesis significance testing (NHST). For the independent samples t-test, the null hypothesis is that the two groups are not different. More specifically, we are assuming that the two samples come from the same population. Of course, we cannot directly test this. Instead, we assume that if the samples came from the same population, they ought to have the same mean as the population. We state the null hypothesis as \(H_0: M_1=M_2=\mu_0\). This can be rearranged to relate more directly to what we’re testing: \(H_0: M_1 - M_2 = 0\). Our null hypothesis is that there is no difference between our sample means.

The Formula

I know this class does not require you to calculate anything by hand, but it may be worth reviewing the formula to conceptualize what we’re doing.

\[ t = \frac{M_1 - M_2}{S_{M_1-M_2}} \] In English, we are subtracting one sample mean from the other before dividing by the standard error of the difference. The details are a little more involved, but we are essentially calculating the standard ratio of significance testing:

\[ t = \frac{\textrm{Effect}}{\textrm{Error}} \]

The effect here is the difference in the sample means. If we are testing the effect of ambient sound on retention during studying, we are going to look at how much better one group did compared to the other. As such, the effect is quantified by subtracting one group’s mean retention score from the other’s. Why divide by “error”? What is “error” anyway?

Let’s build out our example a little more by assuming that individuals can score between 0 and 10 on the memory test. What if the control group had a mean retention score of 8.8 and the experimental group had a mean retention score of 8.3? The difference is 0.5. That is not 0. Should we reject the null hypothesis?

We need to consider the possibility that the samples came from the same population but that, because of random differences in those samples, the means are not equal. This is where the “error” comes in. We really want to know if the difference between our samples is meaningful, given the possible sampling error. For the independent samples t-test, the error is the variability of the mean differences among all the possible pairs of samples.
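
For reference, the standard way to estimate that error (under the equal-variance assumption discussed later in this chapter) is to pool the two sample variances:

\[ S_{M_1-M_2} = \sqrt{s_p^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}, \qquad s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2} \]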

The t-value we calculate is thus a standardized value. We are transforming the difference in sample means into a standardized score (a t-score) by dividing by the standard error. We can then check the probability of getting our t-value within the distribution of possible t-values. Figure 5.2 demonstrates this.

Figure 5.2

Comparing T-Values


The shading in this figure indicates the probability of getting a t-value of 0.5 or more extreme, which seems rather large. In fact, if we assume we had 30 participants per group, we would expect a .309 chance of getting a t-value of 0.5 or greater when we assume that the samples come from the same population. That is too big a chance to ignore. If we were to reject \(H_0\), we would have a .309 chance of making a Type I error (i.e., a false positive). Canonically, we want no more than a 1 in 20 (or 5%) chance of making a Type I error. That is, we want the probability of getting our t-value or one more extreme to be less than .05. This is why we set the alpha level of our two-tailed tests to .05.
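
If you would like to verify that .309 value yourself, SPSS’s CDF.T function can do the arithmetic. Below is a minimal sketch in SPSS syntax, assuming any open data set with at least one case (the variable name p_right is my own):

    * Probability of t >= 0.5 when H0 is true, with df = 30 + 30 - 2 = 58.
    COMPUTE p_right = 1 - CDF.T(0.5, 58).
    EXECUTE.
    * Every case of p_right should be approximately .309.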

Assumptions

For the conclusions of our t-test to be valid, there are certain assumptions about the data that need to be met.

  1. Normally distributed dependent variable for each group. This must be assumed because we are calculating means. Remember that the mean misrepresents the center of non-normal (e.g., skewed) distributions. Since we are calculating means for each sample, we need to verify this assumption for each sample.

  2. Equal variance of dependent variable for each group. This assumption is also known as homogeneity of variance. When variances are unequal, our estimate of sampling error requires adjustment.

We will discuss how to check these assumptions using SPSS in the sections that follow. We will be using the General Linear Model procedure in SPSS because it will work for all of the tests of significance we’ll use this semester. We’ll briefly discuss the general linear model before implementing it in SPSS.

The General Linear Model for the Independent Samples T-Test

Recall that the independent samples t-test judges how unlikely a particular difference between sample means is given the assumption that they should be equal. We will reject the null hypothesis if the difference between means relative to sampling error is large enough to have less than a 5% chance of occurring. Figure 5.3 is a visualization of this approach. In the figure, the control group has a mean of 0. Accounting for sampling error, we would only want to reject the null hypothesis (that the population mean, and thus the mean of the experimental group, is also 0) if the experimental group had a mean more extreme than ±1.96 standard errors. In this case, the experimental group was well outside of that cut-off with a mean of 6.

Figure 5.3

Comparing Two Samples


In the general linear model approach, we are again looking for a significant difference. However, to do so, we will calculate a slope based on the change in the average outcome variable value across the groups. Figure 5.4 shows a scatter plot version of Figure 5.3. In this approach, we want to check if the slope between the two means (the black dashed line) is reliably different from 0 (i.e., no change in the DV across groups, represented by the red dashed line).

Figure 5.4

Scatter Plot with Slopes

Note. Dashed black line indicates observed slope. Dashed red line indicates a slope of zero.

In a similar manner to the t-test, we would check our distribution of possible slopes for our sample sizes to determine the probability of getting our slope or more extreme. In fact, when determining the statistical significance of the slope, we actually convert it to a t-score! We’ll find more connections among tests of significance within the general linear model in future chapters.
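
As a sketch of that equivalence (standard regression results, not anything specific to SPSS): if we code the control group as \(X = 0\) and the experimental group as \(X = 1\), the fitted line is

\[ \hat{Y} = b_0 + b_1 X, \qquad b_0 = M_{\textrm{control}}, \qquad b_1 = M_{\textrm{experimental}} - M_{\textrm{control}}, \qquad t = \frac{b_1}{SE_{b_1}} \]

so testing the slope against zero is the same as testing the difference between the means against zero.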

Using SPSS: GLM for the Independent Samples T-Test

The Data Set

For this example, please download the ChickFlick.sav file from Canvas. Figure 5.5 is a screen shot of the variable view.

Figure 5.5

The Variable View for the ChickFlick.sav Data Set



The Research Question

For this section, our research question will be “do males and females experience different levels of psychological arousal while watching the films?” Is the independent samples t-test an appropriate test for this research question? Yes, it seems appropriate because we want to compare the samples of arousal scores across the two unrelated groups. However, we must also check the assumptions of normality and homogeneity of variance before we can be sure that the independent samples t-test is the best test of significance.

Checking Assumptions

Normality of DV for Groups

The assumption of normality needs to hold because we will be calculating means when determining if there is a reliable difference between the groups.

Step 1 Split File

To check normality for each group, we need to tell SPSS to calculate descriptive statistics separately for each group via the “Split File” command.

Click on “Data” in the menu bar, then click on “Split File”

Figure 5.6

Split File in Menu Bar



We’ll want SPSS to keep all of our output for the two groups together, so select “Compare groups” before dragging “Gender” to the “Groups Based on” box. Once this window is set, click “Paste” to generate the syntax (see Figure 5.7).

Figure 5.7

Split File Window

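The pasted syntax should look something like the sketch below; “Compare groups” corresponds to the LAYERED keyword, and SPSS sorts the cases before splitting.

    SORT CASES BY gender.
    SPLIT FILE LAYERED BY gender.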


Step 2 Descriptive Statistics

We have several tools to check for normality including producing histograms and Q-Q plots. For this example, we’ll be getting a set of descriptive statistics.

Recall that in a perfectly normal distribution, the mean, median, and mode are equal. Furthermore, the skewness and kurtosis are zero. Let’s get these statistics for our samples.

Click on “Analyze,” then “Descriptive Statistics,” and choose “Frequencies.”

Figure 5.8

Frequencies in Menu Bar



In the Frequencies window (see Figure 5.9), drag our outcome variable (Psychological Arousal) to the “Variables” box. Click on the “Statistics” button to view the available descriptives to calculate.

Figure 5.9

Frequencies Window



In the “Frequencies: Statistics” window, select “Mean,” “Median,” and “Mode” from the “Central Tendency” options and select “Skewness” and “Kurtosis” from the “Distribution” options (see Figure 5.10).

Figure 5.10

Statistics Available in Frequencies



Click “Continue” to return to the main “Frequencies” window. Click “Paste” to generate the syntax.
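
The pasted syntax should resemble the sketch below (when skewness and kurtosis are requested, SPSS also includes their standard errors, SESKEW and SEKURT):

    FREQUENCIES VARIABLES=arousal
      /STATISTICS=MEAN MEDIAN MODE SKEWNESS SESKEW KURTOSIS SEKURT
      /ORDER=ANALYSIS.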

Step 3 Checking for Outliers

If we have violations of normality, they may be due to unbalanced, extreme values, or outliers. SPSS allows us to easily check for these outliers by transforming the outcome variable values into z-scores. This is helpful because we judge a value as an outlier if it is so extreme that it is unlikely to belong with the rest of the sample. If you recall the empirical rule, we expect 99.7% of all observations in a normal distribution to fall within 3 standard deviations of the mean. As such, any standardized value less than -3 or greater than +3 is expected only 0.3% of the time and should be considered an outlier.

Click on “Analyze” then “Descriptive Statistics” and choose “Descriptives” from the menu (see Figure 5.11).

Figure 5.11

Descriptives in Menu Bar



In the “Descriptives” window (see Figure 5.12), drag our outcome variable (“Psychological Arousal”) to the “Variable(s)” box. Then select the option at the bottom of the window to “Save standardized values as variables.” Click the “Paste” button to generate the syntax.

Figure 5.12

Descriptives Window

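The pasted syntax should look roughly like the sketch below; the /SAVE subcommand is what creates the standardized variable, which SPSS names by prefixing a Z (hence the “Zarousal” column we will inspect shortly).

    DESCRIPTIVES VARIABLES=arousal
      /SAVE
      /STATISTICS=MEAN STDDEV MIN MAX.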


Step 4 Run the Syntax

We are almost ready to check the assumption of normality. The last thing to do is to turn off our split file to ensure that we can correctly run our GLM in the next segment.

Go to your “Syntax Editor” by clicking “Window” in the menu bar and selecting the syntax window (see Figure 5.13).

Figure 5.13

Select Syntax Window from Menu



At the end of your syntax, type “SPLIT FILE OFF.” on its own line (as seen in Figure 5.14).

Figure 5.14

Turning Off Split File

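Assembled, the syntax from these steps should run in roughly this order (an abbreviated sketch; your pasted versions will include the full subcommands):

    SORT CASES BY gender.
    SPLIT FILE LAYERED BY gender.
    FREQUENCIES VARIABLES=arousal
      /STATISTICS=MEAN MEDIAN MODE SKEWNESS SESKEW KURTOSIS SEKURT.
    DESCRIPTIVES VARIABLES=arousal
      /SAVE.
    SPLIT FILE OFF.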


Now you can select all of the syntax (by clicking and dragging from the top through the bottom line or pressing “CTRL + A” on the keyboard) then press the green “Play” button in the toolbar (see Figure 5.15).

Figure 5.15

Select and Run Syntax



Step 5 Interpret the Output

We’ll look at our “Frequencies” output first. Remember that the measures of central tendency for normally distributed data should be roughly equal and that the values for skewness and kurtosis should be close to zero. Of course, we are unlikely to find perfectly normally distributed data, so we ought to look for acceptable ranges. The output from the Frequencies procedure is found in Figure 5.16.

Figure 5.16

Descriptive Statistics for Psychological Arousal by Gender



Males reported a mean arousal of 21.50, a median arousal of 20.50, and had a modal response of 13. However, note that little superscript \(^a\). Following it to the note below the table informs us that this is just the smallest of multiple modes. As such, there may be another mode that is closer to our mean and median. Given the proximity of the mean to the median, I don’t suspect any violations of normality.

The values of skewness (.602) and kurtosis (-.229) reinforce this. Although they are not zero, they are well within the acceptable ranges. If the value of skewness is between -3 and +3, and if the value of kurtosis is between -8 and +8, you can proceed with assuming normality.

Acceptable Range for Skewness is -3 to +3

Acceptable Range for Kurtosis is -8 to +8

Let’s check the values for females. Mean arousal is 18.55 and median arousal is 17.50, with the smallest of multiple modes being 14. Again, we appear to have a normal distribution, and a quick check of skewness (.191) and kurtosis (-.535) corroborates our assumption.

We’ll also need to check our standardized scores for values outside of the acceptable range of -3 to +3. To check these scores, we need to return to the Data Editor and look at the “Zarousal” column. Figure 5.17 shows a selection of these standardized scores.

Figure 5.17

Selection of Standardized Arousal Scores



As indicated by our descriptive statistics, there are no outliers (i.e., values beyond 3 standard deviations of the mean).

Equal Variance Across Groups

The other assumption we need to verify is that we have equal variance across groups. Although we may be tempted to check the “Variance” box in the Frequencies window for a quick comparison, we would not be able to determine if the values were close enough to be okay. Instead, we’ll ask SPSS to perform Levene’s Test for Equality of Variances when we run the General Linear Model procedure.

So we’ll stick a pin in this for now while we set up the GLM. Don’t worry, we will check for equality of variances before we interpret the results of the GLM.

Setting Up the General Linear Model

Step 1 Select the Univariate GLM

There are several general linear models that SPSS can perform. You can see them by clicking on “Analyze” then going to “General Linear Model” in the menu bar (see Figure 5.18).

Figure 5.18

Available General Linear Models



The “Univariate” GLM is used for models that only have one outcome or dependent variable. “Multivariate” refers to models that contain multiple outcome variables. Finally, “Repeated Measures” are for models that use a within-subjects design. That is, you would have multiple observations for each participant for the outcome variable.

Our data best matches the “Univariate” GLM, so click that.

Step 2 Assign Variables

The “Univariate” window (see Figure 5.19) has several boxes into which you may place variables. Let’s understand each of these before deciding how to assign our variables.

Figure 5.19

Univariate Window for GLM



Dependent Variable. The first box at the top is labeled “Dependent Variable” and this is exactly what it sounds like.

Fixed Factor(s). The next box is where we’ll place our “Fixed Factor(s).” These are predictor variables, typically categorical, whose values are fully represented in the study. That is, we know all possible values that could be in the sample. Any independent variable will go here.

Random Factor(s). The “Random Factor(s)” box will house any categorical variables whose values are only randomly represented in our sample. That is, although we may know that people have different favorite films, we would never be able to fully account for all of those films in our sample. Instead, we would only have a sampling from the population represented in our sample.

Covariate(s). The “Covariate(s)” box is for the continuous variables in the model. These may be included to control for their influence or to serve as predictors of the outcome.

WLS Weight. The “WLS Weight” box is where you would put a variable to allow some observations to be more influential than others.

As you can see in Figure 5.19 above, “Arousal” will be placed in the “Dependent Variable” box and “Gender” will go into the “Fixed Factor(s)” box.

With our variables set, we’ll ask SPSS to produce some plots.

Step 3 Create Bar Chart

Click on the “Plots” button to open the “Univariate: Profile Plots” window (see Figure 5.20).

Figure 5.20

Univariate GLM Profile Plots Window



This dialog allows us to create figures that relate our independent variables to the dependent variable as line charts or bar charts. When you only have one independent variable, as in our current example, you’ll drag that variable to the “Horizontal Axis” box. If you have more than one variable, you have a few options. To create an interaction plot, you would drag one variable to the “Horizontal Axis” box and another to the “Separate Lines” box. If you want to include three variables, you can place the third variable in the “Separate Plots” box. Three is the maximum number of independent variables you can include in this procedure.

Go ahead and drag our IV (“gender”) to the “Horizontal Axis” box. After setting the variables for each plot, you must click the “Add” button before choosing the chart type.

After clicking “Add,” set the chart type to “Bar Chart” and click to “Include Error Bars” as shown in Figure 5.21. It should be a default option, but make sure the error bars are set to “Confidence Interval (95.0%).”

Figure 5.21

Setting Chart Type and Error Bars



When your chart options are set, click the “Continue” button at the bottom of the window to return to the “Univariate” window for the GLM procedure.

Step 4 Getting Means and Confidence Intervals

The plot we’ll create will allow for a quick, visual determination of reliable group differences but it may be hard to discern the values represented by the various bars. I suggest asking SPSS for the values directly. To get the group means and associated confidence intervals, click on the “EM Means” button on the right of the “Univariate” window. The “Univariate: Estimated Marginal Means” window will open (see Figure 5.22).

Figure 5.22

Estimated Marginal Means Window



This is a simple window. You just need to drag “gender” to the “Display Means for:” box on the right side. Click “Continue” to return to the main “Univariate” window.

Step 5 Extra Options: Homogeneity Tests

The last step before running our models is to ask SPSS for Levene’s Test for the Equality of Variances. We’ll find that by clicking on the “Options” button in the main “Univariate” window. The “Univariate: Options” window has many options but for this lesson, we’ll just click on the “Homogeneity tests” as depicted in Figure 5.23.

Figure 5.23

Requesting Homogeneity Tests



After turning on the “Homogeneity tests” option, click “Continue” to return to the main “Univariate” window.

Step 6 Run the Model

With all of our variables, plots, and options set, we can now generate the syntax for our model by clicking the “Paste” button at the bottom of the main “Univariate” window.

Navigating to the syntax editor should reveal the complete syntax as found in Figure 5.24. Highlight the syntax starting at “UNIANOVA” through “/DESIGN=gender.” To run the model, click the green “play” button at the top of the syntax editor.

Figure 5.24

Selecting the GLM Syntax

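In case the screenshot is hard to read, the pasted GLM syntax should look roughly like the sketch below (exact subcommands can vary a little across SPSS versions):

    UNIANOVA arousal BY gender
      /METHOD=SSTYPE(3)
      /INTERCEPT=INCLUDE
      /PLOT=PROFILE(gender) TYPE=BAR ERRORBAR=CI MEANREFERENCE=NO
      /EMMEANS=TABLES(gender)
      /PRINT=HOMOGENEITY
      /CRITERIA=ALPHA(.05)
      /DESIGN=gender.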


Interpreting the Output

Navigate to your Output Viewer (you can use the “Window” menu in the menu bar). We’ll finish checking our assumptions before interpreting the model.

Levene’s Test for Equality of Variances

The first table in the output for the “Univariate Analysis of Variance” gives some basic information about the factors entered into the model including the labels used and the count for each group. We need to check the second table.

The “Levene’s Test for Equality of Error Variances” table is presented in Figure 5.25.

Figure 5.25

Levene’s Test for Equality of Error Variances Table



I’ve highlighted the top row of this table because we are working with means, so the test based on the mean is the one we are concerned with. I’ve also highlighted the last column header (“Sig.”) because that is the value we’ll need to make our judgment regarding the assumption of homogeneity of variance.

A few important things to remember:

  1. The null hypothesis for this test is that “the error variance of the dependent variable is equal across groups.” That means that we will only have a problem with the assumption of equal variance if we reject that null hypothesis.

  2. We reject a null hypothesis when the p-value (in SPSS, this is labeled as “Sig.”) is less than alpha. We typically set alpha to 0.05.

Now that we have that straight, the Sig. value of .357 is easier to interpret. Given that the Sig. value (i.e., p-value) is greater than alpha (.05), we will retain the null hypothesis that our groups have equal variance.

If the Sig. value for Levene’s Test is > .05, the assumption of equal variance is OK.

With all of our assumptions met, we can now interpret the model.

Tests of Between-Subjects Effects

The next table in our output is the “Tests of Between-Subjects Effects” (see Figure 5.26). We can tell this is an Analysis of Variance (ANOVA) table by its columns (Sum of Squares, df, Mean Square, F, and Sig.). We’ll see this table for the rest of the course, but we won’t explore the variance-partitioning approach for a few lessons.

Figure 5.26

Tests of Between-Subjects Effects Table



I’ve highlighted the relevant row in the table for the independent samples t-test. We wanted to know if males and females reported different levels of psychological arousal while watching films. The highlighted line presents the test of the effect of “gender” on the dependent variable of “arousal.” Although we will use more information from this table in the write-up, we will first decide if the effect is reliable by checking the Sig. value.

Remember that the null hypothesis for the independent samples t-test is that the two samples come from the same population of scores. To reject that null hypothesis, we need a Sig. value (i.e., p-value) less than alpha (typically set at .05). For this data set, the Sig. value (.266) is greater than alpha, so we will not reject the null hypothesis. As such, we would conclude that males and females do not differ significantly in their reported levels of arousal.

Bar Chart

Let’s check the bar chart we created to see if it matches our interpretation of the ANOVA table. Figure 5.27 shows the bar chart before it has been styled per APA guidelines.

Figure 5.27

Bar Chart Relating Gender to Psychological Arousal



To judge whether group differences in a bar chart are statistically significant, you’ll need to consult the 95% confidence interval error bars. If the error bars do not overlap, then you can claim statistical significance. If the error bars overlap, you cannot claim statistical significance.

For a bar chart, group means are statistically significantly different when the error bars do not overlap.

Just as we concluded from the ANOVA table, our bar chart indicates that males and females are not different in their reported psychological arousal.

We can find the table equivalent of this bar chart in the “Estimated Marginal Means” table (see Figure 5.28). The table contains the mean for each gender, which is the bar height, and the lower and upper bounds of the 95% confidence intervals for each gender. These values (in the red box) are used for the lower and upper horizontal bars for each error bar.

Figure 5.28

Estimated Marginal Means for Arousal by Gender



Presenting the Results in APA Format

We’ve checked our assumptions, constructed our model, and interpreted the results. Now it is time to share the results.

Styling the Bar Chart

As we’ve covered APA-styled figures previously, I’ll just present a finalized version of our bar chart for your reference in Figure 5.29.

Figure 5.29

APA Styled Bar Chart



Remember the importance of being clear and concise in our figures. That includes the note below the figure that specifies what the error bars represent and re-labeling the y-axis accordingly.

Writing Up the Statistical Results

Writing the results of a statistical analysis in APA format can seem daunting but there is an easy format to follow. You’ll need to 1) state the test used, 2) answer the research question, and 3) provide a statistical summary of the results.

Here is a partial example write-up for our current results.

“An independent samples t-test was performed using the general linear model. The results indicated that males and females did not differ in their reported arousal levels.”

We still need that statistical summary that supports this conclusion. We will need to include information about the test that was performed (an F-test in this case) and information about the groups we are comparing (e.g., means and confidence intervals).

Writing Up an F-Test

The results of an F-test follow this format:

\[ F[df_{\textrm{effect}},df_{\textrm{error}}] = F-\textrm{value},\ p = p-\textrm{value}\]

We can find all of this information in the ANOVA table. Figure 5.30 highlights the needed degrees of freedom.

Figure 5.30

Degrees of Freedom in ANOVA Table



Let’s update our write-up:

\[ F[1,38] = F-\textrm{value},\ p = p-\textrm{value}\]

Our last two needed values (F and p) are highlighted in Figure 5.31.

Figure 5.31

F- and p-values in ANOVA Table



The complete F-test write-up is as follows:

\[ F[1,38] = 1.275,\ p = .266\]
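
Incidentally, because the effect has a single degree of freedom, this F is just the square of the equivalent t-value, so we can recover the t-statistic the chapter is named for:

\[ t(38) = \sqrt{F} = \sqrt{1.275} \approx 1.13 \]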

Writing Up Means and Confidence Intervals

Although the F-test write-up can express information about the F-distribution (via degrees of freedom), the obtained F-value, and the probability of that value under the null hypothesis, it does not provide much information about our actual data. As such, we’ll want to include the means and confidence intervals in our statistical summary. They usually follow a pattern like this:

\[ \textrm{M}_{\textrm{Group}} = \textrm{mean},\ 95\%\ \textrm{CI [Lower Limit, Upper Limit]} \]

We can find the necessary information in the Estimated Marginal Means table (Figure 5.28 is reproduced below for convenience).

Figure 5.28

Estimated Marginal Means for Arousal by Gender



When we are presenting multiple groups, we’ll separate their information with a semicolon (;).

\[ \textrm{M}_{\textrm{Males}} = 21.50,\ 95\%\ \textrm{CI}\ [17.76,25.24];\ \textrm{M}_{\textrm{Females}} = 18.55,\ 95\%\ \textrm{CI}\ [14.81,22.29] \]

Our full statistical summary looks like this: \[ (F[1,38] = 1.275,\ p = .266;\ \textrm{M}_{\textrm{Males}} = 21.50,\ 95\%\ \textrm{CI}\ [17.76,25.24];\ \textrm{M}_{\textrm{Females}} = 18.55,\ 95\%\ \textrm{CI}\ [14.81,22.29]) \]

When we put it all together, our complete APA-styled write-up is as follows:

“An independent samples t-test was performed using the general linear model. The results indicated that males and females did not differ in their reported arousal levels (F[1,38] = 1.275, p = .266; \(\textrm{M}_{\textrm{Males}}\) = 21.50, 95% CI [17.76,25.24]; \(\textrm{M}_{\textrm{Females}}\) = 18.55, 95% CI [14.81,22.29]).”

Summary

In this lesson, we’ve:

  1. Established the need for the t-test,
  2. Related these tests to the general linear model,
  3. Defined the circumstances in which the independent samples t-test is appropriate,
  4. Explored the assumptions of the independent samples t-test,
  5. Set up and run a general linear model in SPSS,
  6. Interpreted the results of the GLM, and
  7. Shared the results in APA format.

In the next lesson, we’ll examine the other popular t-test, the paired samples t-test.