
## Independent-Samples T-Test

The independent-samples t-test is used to determine if a difference exists between the means of two independent groups on a continuous dependent variable. More specifically, it will let you determine whether the difference between these two groups is statistically significant. This test is also known by a number of different names, including the independent t-test, independent-measures t-test, between-subjects t-test, unpaired t-test, and Student’s t-test.

For example, you could use the independent-samples t-test to determine whether (mean) salaries, measured in US dollars, differed between males and females (i.e., your dependent variable would be “salary” and your independent variable would be “gender”, which has two groups: “males” and “females”). You could also use an independent-samples t-test to determine whether (mean) reaction time, measured in milliseconds, differed in under 21-year-olds versus those 21 years old and over (i.e., your dependent variable would be “reaction time” and your independent variable would be “age group”, split into two groups: “under 21-year-olds” and “21 years old and over”).
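Although these guides carry out the analysis in SPSS Statistics, the same test can be sketched in Python. The snippet below runs SciPy's `ttest_ind` on invented salary data for the two groups (all numbers are hypothetical):

```python
from scipy import stats

# Hypothetical salaries (in thousands of US dollars) for two independent groups
male_salaries = [52, 61, 58, 49, 55, 63, 57, 60]
female_salaries = [48, 54, 51, 46, 57, 50, 53, 49]

# Student's independent-samples t-test (assumes equal variances);
# pass equal_var=False for Welch's t-test if the variances differ
t_stat, p_value = stats.ttest_ind(male_salaries, female_salaries)

print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("The difference in mean salary is statistically significant.")
```

Note that `equal_var=False` switches to Welch's t-test, which is often recommended when the homogeneity-of-variances assumption is in doubt.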

### Assumptions

In order to run an independent-samples t-test, there are six assumptions that need to be considered. The first three assumptions relate to your choice of study design and the measurements you […]

## Binomial Logistic Regression

A binomial logistic regression attempts to predict the probability that an observation falls into one of two categories of a dichotomous dependent variable based on one or more independent variables that can be either continuous or categorical.

In many ways, binomial logistic regression is similar to linear regression, with the exception of the measurement type of the dependent variable (i.e., linear regression uses a continuous dependent variable rather than a dichotomous one). However, unlike linear regression, you are not attempting to determine the predicted value of the dependent variable, but the probability of being in a particular category of the dependent variable given the independent variables. An observation is assigned to whichever category is predicted as most likely. As with other types of regression, binomial logistic regression can also use interactions between independent variables to predict the dependent variable.

Note: Binomial logistic regression is often referred to as just logistic regression.

For example, you could use binomial logistic regression to predict whether students will pass or fail an exam based on the amount of time they spend revising, whether English is their first language, and their pre-exam stress levels. Here, your dichotomous dependent variable would be […]
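To make the idea concrete, here is a minimal from-scratch sketch of a binomial logistic regression fitted by gradient descent on the log-loss, using invented data (hours of revision and first-language status predicting pass/fail). In practice you would let SPSS Statistics or a statistics library do the fitting; this only illustrates the mechanics:

```python
import numpy as np

# Hypothetical data: hours of revision (continuous) and whether English is
# the student's first language (0/1), predicting pass (1) vs. fail (0)
X = np.array([[2, 0], [4, 1], [6, 0], [8, 1], [10, 1],
              [1, 0], [3, 1], [9, 0], [7, 1], [5, 0]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 0, 0, 1, 1, 0], dtype=float)

# Add an intercept column and fit by gradient descent on the mean log-loss
Xb = np.column_stack([np.ones(len(X)), X])
beta = np.zeros(Xb.shape[1])
for _ in range(5000):
    p = 1 / (1 + np.exp(-Xb @ beta))        # predicted probabilities
    beta -= 0.05 * Xb.T @ (p - y) / len(y)  # gradient step

# Predicted probability of passing for a new student:
# 7 hours of revision, English is their first language
p_new = 1 / (1 + np.exp(-(np.array([1, 7, 1]) @ beta)))
print(f"P(pass) = {p_new:.2f}")
```

The model outputs a probability; the observation is then assigned to whichever category that probability favours (here, "pass" if it is at least 0.5).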

## Paired-Samples T-Test

The paired-samples t-test is used to determine whether the mean difference between paired observations is statistically significantly different from zero. The participants are either the same individuals tested at two time points or under two different conditions on the same dependent variable. Alternatively, you could have two groups of participants that have been matched (paired) on one or more characteristics (e.g., IQ, age, gender, etc.) and tested on one dependent variable. The paired-samples t-test is also referred to as the dependent t-test, repeated measures t-test, or simply abbreviated to the paired t-test.

For example, you could use a paired-samples t-test to understand whether there was a mean difference in dieters’ daily calorie consumption before and after a six week hypnotherapy programme (i.e., your dependent variable would be “daily calorie consumption”, and your two related groups would be calorie consumption values “before” and “after” the hypnotherapy programme). You could also use a paired-samples t-test to determine whether there was a mean difference in reaction times under two different lighting conditions (i.e., your dependent variable would be “reaction time”, measured in milliseconds, and your two related groups would be reaction times in a room […]
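The calorie-consumption example above can be sketched in Python with SciPy's `ttest_rel`, which tests the within-subject differences against zero (all numbers below are invented):

```python
from scipy import stats

# Hypothetical daily calorie consumption for the same dieters before and
# after a six-week hypnotherapy programme (paired observations)
before = [2250, 2410, 2180, 2520, 2330, 2290, 2460, 2380]
after  = [2100, 2320, 2090, 2380, 2270, 2150, 2340, 2290]

# Paired-samples t-test: is the mean before-after difference zero?
t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

Because the test operates on the paired differences, the two lists must be the same length and in the same participant order.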

## Two-Way ANCOVA

The two-way ANCOVA is used to determine whether there is an interaction effect between two independent variables on a continuous dependent variable (i.e., if a two-way interaction effect exists), after adjusting/controlling for one or more continuous covariates. In many ways, the two-way ANCOVA can be considered an extension of the one-way ANCOVA to incorporate a second independent variable or an extension of the two-way ANOVA to incorporate one or more continuous covariates.

Note: It is quite common for the independent variables to be called “factors” or “between-subjects factors”, but we will continue to refer to them as independent variables in this guide. Furthermore, it is worth noting that the two-way ANCOVA is also referred to as a “factorial ANCOVA”.

Important: If you have two or more continuous covariates, there are some additional considerations when carrying out and interpreting the two-way ANCOVA. This guide is designed to help with a single continuous covariate only. Therefore, we will be adding a separate guide for a two-way ANCOVA with multiple continuous covariates. If this is of interest, please contact us and we will let you know when the guide becomes available.

A two-way ANCOVA can be used in a number of […]
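The interaction test at the heart of the two-way ANCOVA can be illustrated as a comparison between two nested regression models, one with and one without the A x B interaction term, both including the covariate. The sketch below does this with plain NumPy least squares on simulated data (all variable names and effect sizes are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 80

# Hypothetical 2x2 design: two binary independent variables plus one
# continuous covariate, with a built-in interaction effect
a = rng.integers(0, 2, n)           # factor A
b = rng.integers(0, 2, n)           # factor B
cov = rng.normal(50, 10, n)         # covariate (e.g., a baseline score)
y = 5 + 2*a + 3*b + 3*a*b + 0.4*cov + rng.normal(0, 1, n)

def rss(design, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return resid @ resid

ones = np.ones(n)
reduced = np.column_stack([ones, a, b, cov])     # main effects + covariate
full = np.column_stack([ones, a, b, a*b, cov])   # adds the A x B interaction

# F-test for the interaction effect, adjusting for the covariate
df_num, df_den = 1, n - full.shape[1]
F = ((rss(reduced, y) - rss(full, y)) / df_num) / (rss(full, y) / df_den)
p_value = stats.f.sf(F, df_num, df_den)
print(f"F(1, {df_den}) = {F:.2f}, p = {p_value:.4f}")
```

A statistically significant F here means the effect of one independent variable on the dependent variable depends on the level of the other, after controlling for the covariate.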

## Regression Analysis

A simple linear regression analysis assesses the linear relationship between two continuous variables to predict the value of a dependent variable based on the value of an independent variable. More specifically, it will let you: (a) determine whether the linear relationship between these two variables is statistically significant; (b) determine how much of the variation in the dependent variable is explained by the independent variable; (c) understand the direction and magnitude of any relationship; and (d) predict values of the dependent variable based on different values of the independent variable.

Note: This test is also known by a number of different names, including a bivariate linear regression, but it is often referred to simply as a ‘linear regression’. Furthermore, the dependent variable is also referred to as the outcome, target or criterion variable, and the independent variable as the predictor, explanatory or regressor variable.

For example, you can use simple linear regression to predict lawyers’ salaries based on the number of years they have practiced law (i.e., your dependent variable would be “salary” and your independent variable would be “years practicing law”). You could also determine how much of the variation in lawyers’ […]
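The lawyers' salary example can be sketched with SciPy's `linregress`, which returns the slope, intercept, correlation, and p-value in one call (the data below are invented):

```python
from scipy import stats

# Hypothetical data: years practising law vs. salary (thousands of US dollars)
years  = [1, 3, 5, 8, 10, 12, 15, 20]
salary = [55, 68, 74, 90, 102, 110, 125, 160]

result = stats.linregress(years, salary)
print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}")
print(f"R^2 = {result.rvalue**2:.3f}, p = {result.pvalue:.4f}")

# Predict the salary of a lawyer with 7 years of practice
predicted = result.intercept + result.slope * 7
print(f"Predicted salary at 7 years: {predicted:.1f}")
```

Here `result.rvalue**2` is the coefficient of determination, i.e., the proportion of variation in salary explained by years of practice.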

## One-Way MANCOVA

The one-way multivariate analysis of covariance (one-way MANCOVA) can be thought of as an extension of the one-way MANOVA to incorporate a continuous covariate or an extension of the one-way ANCOVA to incorporate multiple dependent variables. This covariate is linearly related to the dependent variables and its inclusion into the analysis can increase the ability to detect differences between groups of a categorical independent variable. A one-way MANCOVA is used to determine whether there are any statistically significant differences between the adjusted means of three or more independent (unrelated) groups, having controlled for a continuous covariate.

Note 1: Whilst a one-way MANCOVA can be used with a nominal or ordinal independent variable, it treats the independent variable as nominal (i.e., it will not take into account the ordered nature of an ordinal variable). Furthermore, whilst the covariate does not have to be measured on a continuous scale, if your covariate is ordinal or nominal, please contact us because you will need to use a different statistical test.

Note 2: If you have two or more continuous covariates, there are some additional considerations when carrying out and interpreting the one-way MANCOVA. Therefore, we will be adding a separate guide for a […]
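For intuition, the one-way MANCOVA's group test can be sketched as a multivariate model comparison: Wilks' lambda compares the residual scatter of the dependent variables with and without the group effect, after the covariate is included in both models. The NumPy sketch below uses Bartlett's chi-square approximation to the Wilks' lambda test on simulated data (three groups, two dependent variables, one covariate; all numbers are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, k = 30, 3                       # three independent groups
group = np.repeat(np.arange(k), n_per_group)
n = k * n_per_group
cov = rng.normal(100, 15, n)                 # covariate (e.g., a baseline measure)

# Two hypothetical dependent variables, each related to group and covariate
dv1 = 10 + 2*group + 0.3*cov + rng.normal(0, 3, n)
dv2 = 20 + 3*group + 0.2*cov + rng.normal(0, 3, n)
Y = np.column_stack([dv1, dv2])

def resid_sscp(design, Y):
    """Residual sums-of-squares-and-cross-products matrix from OLS."""
    beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
    R = Y - design @ beta
    return R.T @ R

ones = np.ones(n)
dummies = np.column_stack([(group == g).astype(float) for g in range(1, k)])
full = np.column_stack([ones, dummies, cov])
reduced = np.column_stack([ones, cov])       # drops the group effect

E = resid_sscp(full, Y)                      # error SSCP
H = resid_sscp(reduced, Y) - E               # hypothesis SSCP for 'group'
wilks = np.linalg.det(E) / np.linalg.det(E + H)

# Bartlett's chi-square approximation to the Wilks' lambda test
p_dv, df_h = Y.shape[1], k - 1
chi2 = -(n - 1 - (p_dv + df_h + 1) / 2) * np.log(wilks)
p_value = stats.chi2.sf(chi2, p_dv * df_h)
print(f"Wilks' lambda = {wilks:.3f}, p = {p_value:.4f}")
```

Smaller values of Wilks' lambda indicate a stronger group effect on the set of dependent variables, having controlled for the covariate.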

## Hierarchical Multiple Regression

Like standard multiple regression, hierarchical multiple regression (also known as sequential multiple regression) allows you to predict a dependent variable based on multiple independent variables. However, the procedure that it uses to do this in SPSS Statistics, and the goals of hierarchical multiple regression, are different from standard multiple regression. In standard multiple regression, all the independent variables are entered into the regression equation at the same time. By contrast, hierarchical multiple regression enables you to enter the independent variables into the regression equation in an order of your choosing. This has a number of advantages, such as allowing you to: (a) control for the effects of covariates on your results; and (b) take into account the possible causal effects of independent variables when predicting a dependent variable. Nonetheless, all hierarchical multiple regressions answer the same statistical question: How much extra variation in the dependent variable can be explained by the addition of one or more independent variables?

For example, you could use hierarchical multiple regression to understand whether exam performance can be predicted based on revision time, lecture attendance and prior academic achievement. Here, your continuous dependent variable would be “exam performance”, whilst […]
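The "how much extra variation" question is answered by an F-test on the change in R² between blocks. The sketch below enters prior achievement in block 1 and the remaining predictors in block 2, using plain NumPy least squares on simulated data (all variable names and effect sizes are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 60

# Hypothetical predictors of exam performance
prior = rng.normal(60, 10, n)       # prior academic achievement (block 1)
revision = rng.normal(20, 5, n)     # revision time in hours (block 2)
attendance = rng.normal(75, 15, n)  # lecture attendance in percent (block 2)
exam = 10 + 0.5*prior + 1.2*revision + 0.2*attendance + rng.normal(0, 5, n)

def r_squared(design, y):
    """R-squared from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return 1 - (resid @ resid) / np.sum((y - y.mean())**2)

ones = np.ones(n)
block1 = np.column_stack([ones, prior])
block2 = np.column_stack([ones, prior, revision, attendance])

r2_1, r2_2 = r_squared(block1, exam), r_squared(block2, exam)

# F-test for the change in R^2 when block 2 is added
df_num, df_den = 2, n - block2.shape[1]
F = ((r2_2 - r2_1) / df_num) / ((1 - r2_2) / df_den)
p_value = stats.f.sf(F, df_num, df_den)
print(f"R^2 change = {r2_2 - r2_1:.3f}, "
      f"F({df_num}, {df_den}) = {F:.2f}, p = {p_value:.4f}")
```

A significant F for the R² change indicates that revision time and lecture attendance together explain additional variation in exam performance beyond prior achievement alone.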

## Principal Components Analysis

Principal components analysis (i.e., PCA) is a variable-reduction technique that shares many similarities with exploratory factor analysis. Its aim is to reduce a larger set of variables into a smaller set of ‘artificial’ variables (called principal components) that account for most of the variance in the original variables. Although principal components analysis is conceptually different from factor analysis, it is often used interchangeably with factor analysis in practice and is included within the Factor procedure in SPSS Statistics.
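The mechanics of the reduction can be sketched with plain NumPy: standardise the variables, decompose them, and see how much variance each component accounts for. The data below are simulated, with four items driven by one common underlying variable, so the first component should dominate:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

# Hypothetical correlated variables (e.g., scores on related questionnaire items)
latent = rng.normal(0, 1, n)
items = np.column_stack([latent + rng.normal(0, 0.5, n) for _ in range(4)])

# Standardise, then extract principal components via the SVD
Z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
_, s, Vt = np.linalg.svd(Z, full_matrices=False)

eigenvalues = s**2 / (n - 1)           # variance captured by each component
explained = eigenvalues / eigenvalues.sum()
scores = Z @ Vt.T                      # component scores for each observation
print("Proportion of variance explained:", np.round(explained, 3))
```

Because the items share one underlying source of variance, the first component captures most of it, which is exactly the situation in which retaining a smaller set of components is justified.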

### Assumptions

In order to run a principal components analysis, the following four assumptions must be met. The first assumption relates to your choice of study design, whilst the remaining three assumptions reflect the nature of your data:

• Assumption #1: You have multiple variables that are measured at the continuous level (although ordinal data is very frequently used). Examples of continuous variables include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth. Examples of ordinal variables include Likert items (e.g., a 7-point scale from “strongly agree” through to “strongly disagree”), amongst other ways of ranking categories (e.g., a 5-point scale explaining how much a customer liked a product, ranging from “Not very much” […]

## One-Way ANOVA

If you want to determine whether there are any statistically significant differences between the means of two or more independent groups, you can use a one-way analysis of variance (ANOVA). For example, you could use a one-way ANOVA to determine whether exam performance differed based on test anxiety levels amongst students (i.e., your dependent variable would be “exam performance”, measured from 0-100, and your independent variable would be “test anxiety level”, which has three groups: “low-stressed students”, “moderately-stressed students” and “highly-stressed students”). As another example, a one-way ANOVA could be used to understand whether there is a difference in salary based on degree subject (i.e., your dependent variable would be “salary” and your independent variable would be “degree subject”, which has five groups: “business studies”, “psychology”, “biological sciences”, “engineering” and “law”).

Note: The one-way ANOVA is also referred to as a between-subjects ANOVA or one-factor ANOVA. Although it can be used with an independent variable with only two groups, the independent-samples t-test is typically used in this situation instead. For this reason, you will come across the one-way ANOVA being described as a test to use when you have three or more groups (rather than […]
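The exam-performance example with three anxiety groups can be sketched with SciPy's `f_oneway` (the scores below are invented):

```python
from scipy import stats

# Hypothetical exam scores (0-100) for three test-anxiety groups
low      = [78, 85, 82, 90, 76, 88, 84, 80]
moderate = [70, 75, 72, 68, 77, 74, 71, 73]
high     = [60, 65, 58, 62, 66, 55, 63, 61]

# One-way ANOVA: do the three group means differ?
f_stat, p_value = stats.f_oneway(low, moderate, high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant result tells you that at least one group mean differs, but not which ones; post hoc comparisons are typically used to locate the differences.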