The two-way multivariate analysis of variance (two-way MANOVA) is often considered an extension of the __two-way ANOVA__ for situations where there are two or more dependent variables. The primary purpose of the two-way MANOVA is to understand whether there is an interaction between the two independent variables on the two or more combined dependent variables.

For example, you could use a two-way MANOVA to understand whether there were differences in students’ short-term and long-term recall of facts based on lecture duration and fact type (i.e., the two dependent variables are “short-term memory recall” and “long-term memory recall”, whilst the two independent variables are “lecture duration”, which has four groups – “30 minutes”, “60 minutes”, “90 minutes” and “120 minutes” – and “fact type”, which has two groups: “quantitative (numerical) facts” and “qualitative (textual/contextual) facts”). Alternatively, you could use a two-way MANOVA to understand whether there were differences in the effectiveness of male and female police officers in dealing with violent crimes and crimes of a sexual nature taking into account a citizen’s gender (i.e., the two dependent variables are “perceived effectiveness in dealing with violent crimes” and “perceived effectiveness in dealing with sexual crimes”, whilst the two independent variables are “police officer gender”, which has two categories – “male police officers” and “female police officers” – and “citizen gender”, which also has two categories: “male citizens” and “female citizens”).

## Assumptions

In order to run a two-way MANOVA, there are 10 assumptions that need to be considered. The first three assumptions relate to your choice of study design and the measurements you chose to make, whilst the remaining seven assumptions relate to how your data fits the two-way MANOVA model. These assumptions are:

- Assumption #1: You have **two or more dependent variables** that are measured at the **continuous** level. Examples of **continuous variables** include height (measured in centimetres), temperature (measured in °C), salary (measured in US dollars), revision time (measured in hours), intelligence (measured using IQ score), firm size (measured in terms of the number of employees), age (measured in years), reaction time (measured in milliseconds), grip strength (measured in kg), power output (measured in watts), test performance (measured from 0 to 100), sales (measured in number of transactions per month), academic achievement (measured in terms of GMAT score), and so forth.

**Note:** You should note that SPSS Statistics refers to continuous variables as **Scale** variables.

- Assumption #2: You have **two independent variables** where **each independent variable** consists of **two or more categorical**, **independent groups**. An independent variable with only **two groups** is known as a **dichotomous variable**, whereas an independent variable with **three or more groups** is referred to as a **polytomous** variable. Example independent variables that meet this criterion include gender (e.g., two groups: male and female), ethnicity (e.g., three groups: Caucasian, African American, and Hispanic), physical activity level (e.g., four groups: sedentary, low, moderate and high), profession (e.g., five groups: surgeon, doctor, nurse, dentist, therapist), and so forth. If you need more information about variables and their different types of measurement, please contact us.

**Explanation 1:** The “groups” of the independent variable are also referred to as “categories” or “levels”, but the term “levels” is usually reserved for groups that have an order (e.g., fitness level, with three levels: “low”, “moderate” and “high”). However, these three terms – “groups”, “categories” and “levels” – can be used interchangeably. We will mostly refer to them as groups, but in some cases we will refer to them as levels. The only reason we do this is for clarity (i.e., it sometimes sounds more appropriate in a sentence to use levels instead of groups, and vice versa).

**Explanation 2:** The independent variable(s) in any type of MANOVA is also commonly referred to as a **factor**. For example, a two-way MANOVA is a MANOVA analysis involving two factors (i.e., two independent variables). Furthermore, when an independent variable/factor has independent groups (i.e., unrelated groups), it is further classified as a **between-subjects factor** because you are concerned with the differences in the dependent variables between different subjects. However, for clarity we will simply refer to them as independent variables in this guide.

**Note:** For the two-way MANOVA demonstrated in this guide, the independent variables are referred to as **fixed factors** or **fixed effects**. This means that the groups of each independent variable represent all the categories of the independent variable you are interested in. For example, you might be interested in exam performance differences between schools. If you investigated three different schools and it was only these three schools that you were interested in, the independent variable is a **fixed factor**. However, if you picked the three schools at random and they were meant to represent all schools, the independent variable is a **random factor**. This requires a different statistical test because the two-way MANOVA is the incorrect statistical test in these circumstances. If you have a random factor in your study design, please __contact us__.

- Assumption #3: You should have **independence of observations**, which means that there is no relationship between the observations in each group of the independent variable or between the groups themselves. Indeed, an important distinction is made in statistics when comparing values from either different individuals or from the same individuals. Independent groups (in a two-way MANOVA) are groups where there is no relationship between the participants in any of the groups. Most often, this occurs simply by having different participants in each group. This is generally considered the most important assumption (Hair et al., 2014). Violation of this assumption is very serious (Stevens, 2009; Pituch & Stevens, 2016).

**Note:** When we talk about the **observations being independent**, this means that the observations (e.g., participants) are **not related**. More specifically, it is the **errors** that are assumed to be independent. In statistics, errors that are not independent are often referred to as **correlated errors**. This can lead to some confusion because of the similarity of the name to that of tests of correlation (e.g., Pearson’s correlation), but correlated errors simply means that the errors are not independent. The errors are at high risk of not being independent if the observations are not independent.

For example, if you split a group of individuals into four groups based on their physical activity level (e.g., a “sedentary” group, “low” group, “moderate” group and “high” group), no one in the sedentary group can also be in the high group, no one in the moderate group can also be in the high group, and so forth. As another example, you might randomly assign participants to either a control trial or one of two interventions. Again, no participant can be in more than one group (e.g., a participant in the control group cannot be in either of the intervention groups). This will be true of any independent groups you form (i.e., a participant cannot be a member of more than one group). In actual fact, the ‘no relationship’ part extends a little further and requires that participants in different groups are considered unrelated, not just different people. Furthermore, participants in one group cannot influence any of the participants in any other group.

Independence of observations is largely a study design issue rather than something you can test for using SPSS Statistics, but it is an important assumption of the two-way MANOVA. If your study fails this assumption, you will need to use another statistical test instead of the two-way MANOVA.

- Assumption #4: There should be a **linear relationship** between each pair of dependent variables for each group combination of the independent variables. In this example, there is only one pair of dependent variables because there are only two dependent variables, humanities_score and science_score. If the variables are not linearly related, the power of the test is reduced (i.e., it can lead to a loss of power to detect differences). You can test whether a linear relationship exists by plotting and visually inspecting a scatterplot matrix for each group combination of the independent variables, gender and intervention. If the relationship approximately follows a straight line, you have a linear relationship. However, if you have something other than a straight line, for example, a curved line, you do not have a linear relationship.
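Although this guide carries out the analysis in SPSS Statistics, the scatterplot-matrix inspection described above can be sketched in Python as an illustration. The data below are simulated and the variable names (gender, intervention, humanities_score, science_score) simply mirror the example in this guide:

```python
# Visual linearity screen per cell of the design (hypothetical data).
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gender": rng.choice(["male", "female"], 40),
    "intervention": rng.choice(["control", "A", "B"], 40),
})
base = rng.normal(60, 10, 40)  # shared component makes the pair linearly related
df["humanities_score"] = base + rng.normal(0, 3, 40)
df["science_score"] = base + rng.normal(0, 3, 40)

# One scatterplot matrix per group combination of the independent variables
for (g, i), cell in df.groupby(["gender", "intervention"]):
    axes = scatter_matrix(cell[["humanities_score", "science_score"]])
    # with an interactive backend you would now inspect each off-diagonal
    # panel for a roughly straight-line pattern
```

Each off-diagonal panel plots one dependent variable against the other within a single cell of the design, which is exactly the per-cell inspection the assumption calls for.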

- Assumption #5: There should be **no multicollinearity**. Ideally, you want your dependent variables to be moderately correlated with each other. If the correlations are low, you might be better off running separate two-way ANOVAs – one for each dependent variable – rather than a two-way MANOVA. Alternatively, if the correlation(s) are too high (greater than 0.9), you could have multicollinearity. This is problematic for MANOVA and needs to be screened out. Whilst there are more sophisticated methods of detecting multicollinearity, we show you the relatively simple method of inspecting Pearson correlation coefficients between the dependent variables to determine whether any relationships are too strongly correlated.
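The same Pearson-correlation screen can be sketched outside SPSS Statistics. The following Python snippet uses simulated scores and the |r| > 0.9 rule of thumb mentioned above; the threshold for “low” correlation is an illustrative assumption:

```python
# Multicollinearity screen via the Pearson correlation between the two
# dependent variables (hypothetical data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
humanities_score = rng.normal(60, 10, 50)
# construct a moderately correlated second dependent variable
science_score = 0.5 * humanities_score + rng.normal(30, 8, 50)

r, p = pearsonr(humanities_score, science_score)
if abs(r) > 0.9:
    print(f"r = {r:.2f}: possible multicollinearity - screen out")
elif abs(r) < 0.2:  # illustrative cut-off for "low"
    print(f"r = {r:.2f}: weak correlation - separate two-way ANOVAs may be preferable")
else:
    print(f"r = {r:.2f}: moderate correlation - suitable for MANOVA")
```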

- Assumption #6: There should be no univariate or multivariate outliers. There should be no **univariate outliers** in each group combination of the independent variables (i.e., for each cell of the design) for any of the dependent variables. Univariate outliers are often just called “outliers” and are the same type of outliers you will have come across if you have conducted t-tests or ANOVAs. In fact, this is a similar assumption to the two-way ANOVA, but applied to each dependent variable in your MANOVA analysis. We refer to them as **univariate** in this guide to distinguish them from **multivariate outliers**, which you also have to test for. Univariate outliers are scores that are unusual in any cell of the design in that their value is extremely small or large compared to the other scores (e.g., 8 participants in a group scored between 60-75 out of 100 in a difficult maths test, but one participant scored 98 out of 100). Outliers can have a large negative effect on your results because they can exert a large influence (i.e., change) on the mean and standard deviation for that group, which can affect the statistical test results. Outliers are more important to consider when you have smaller sample sizes, as the effect of the outlier will be greater. Therefore, in this example, you need to investigate whether the dependent variables, humanities_score and science_score, have any univariate outliers for each group combination of gender and intervention (i.e., you are testing whether humanities score and science score are outlier free for each cell of the design). In addition to univariate outliers, you also have to test for **multivariate outliers** in a two-way MANOVA analysis. Multivariate outliers are cases (e.g., pupils in our example) that have an unusual combination of scores on the dependent variables. You can test for multivariate outliers in SPSS Statistics by calculating a measure called **Mahalanobis distance**, which can be used to determine whether a particular case might be a multivariate outlier.
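To make the Mahalanobis-distance idea concrete, here is a Python sketch on simulated data. The decision rule shown (flag cases whose squared distance exceeds the chi-square critical value at alpha = .001, with degrees of freedom equal to the number of dependent variables) is a common rule of thumb, stated here as an assumption rather than as SPSS’s output:

```python
# Mahalanobis distances for a hypothetical sample of 30 cases on two
# dependent variables, with a chi-square cut-off for flagging outliers.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
# 30 cases x 2 dependent variables (e.g., humanities and science scores)
scores = rng.multivariate_normal([60, 65], [[100, 40], [40, 80]], size=30)

centered = scores - scores.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(scores, rowvar=False))
# squared Mahalanobis distance of each case from the centroid
d_squared = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)

# rule of thumb: flag cases beyond the chi-square critical value at
# alpha = .001 with df = number of dependent variables
cutoff = chi2.ppf(0.999, df=scores.shape[1])
outliers = np.where(d_squared > cutoff)[0]
print(f"critical value = {cutoff:.2f}; flagged cases: {outliers}")
```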

- Assumption #7: There needs to be **multivariate normality**. Unfortunately, multivariate normality is a particularly tricky assumption to test for and cannot be directly tested in SPSS Statistics. Instead, normality of each of the dependent variables for each group combination of the independent variables is often used in its place as a best ‘guess’ as to whether there is multivariate normality.

**Explanation:** If there is multivariate normality, there will be normally distributed data (residuals) for each of the group combinations of the independent variables for all the dependent variables. However, the opposite is not true; normally distributed group residuals do not guarantee multivariate normality.

Therefore, in this example, you need to investigate whether humanities_score and science_score are normally distributed for each cell of the design.

**Note:** Whilst it is most common to run only one type of normality test for a given analysis and to rely solely on that result, as you become more familiar with statistics you might start to evaluate normality based on the result of more than one method. If you have another method you would like to use or are curious about other methods (e.g., skewness and kurtosis values, or histograms), please contact us.
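As an illustration of this per-cell stand-in check, the Python sketch below runs a Shapiro-Wilk test on each dependent variable within each cell of the design. The data and the alpha = .05 flagging level are assumptions for the example:

```python
# Shapiro-Wilk normality check per dependent variable per cell of the design
# (hypothetical, balanced data: 2 genders x 3 interventions x 8 cases).
import numpy as np
import pandas as pd
from scipy.stats import shapiro

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "gender": np.repeat(["male", "female"], 24),
    "intervention": np.tile(np.repeat(["control", "A", "B"], 8), 2),
    "humanities_score": rng.normal(60, 10, 48),
    "science_score": rng.normal(65, 9, 48),
})

results = {}
for (g, i), cell in df.groupby(["gender", "intervention"]):
    for dv in ["humanities_score", "science_score"]:
        stat, p = shapiro(cell[dv])
        results[(g, i, dv)] = p
        if p < 0.05:
            print(f"{dv} in cell ({g}, {i}) may deviate from normality (p = {p:.3f})")
```

Remember the caveat above: per-cell normality does not guarantee multivariate normality; this is only the usual best ‘guess’.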

- Assumption #8: You should have an **adequate sample size**. Although a larger sample size is better, at a bare minimum there need to be at least as many cases (e.g., pupils) in each cell of the design as there are dependent variables. In this example, this means that there need to be two or more cases per cell of the design.
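This minimum-cases rule is easy to check mechanically. The Python sketch below counts cases per cell for a deliberately unbalanced, hypothetical design and flags the cells that fall below the minimum:

```python
# Cell-count check: every cell of the design needs at least as many cases as
# there are dependent variables (here, 2). Data layout is hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender": ["male", "male", "male", "female", "female", "female"],
    "intervention": ["control", "control", "A", "control", "A", "A"],
})
n_dvs = 2  # humanities_score and science_score

counts = df.groupby(["gender", "intervention"]).size()
too_small = counts[counts < n_dvs]
if not too_small.empty:
    print("Cells below the minimum:", dict(too_small))
```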

- Assumption #9: There should be **homogeneity of variance-covariance matrices**. A further assumption of the two-way MANOVA is that the variances and covariances of the dependent variables are similar across the cells of the design. This assumption can be tested using **Box’s M test of equality of covariance matrices**.
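For readers curious what Box’s M computes, here is a from-scratch Python sketch of its chi-square approximation. This is an illustrative implementation on simulated data, not a replacement for the table SPSS Statistics produces:

```python
# Box's M test of equality of covariance matrices (chi-square approximation),
# implemented directly from the textbook formulas; data are hypothetical.
import numpy as np
from scipy.stats import chi2

def box_m(groups):
    """groups: list of (n_i x p) arrays, one per cell of the design."""
    k = len(groups)
    p = groups[0].shape[1]
    ns = np.array([g.shape[0] for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - k)
    # M = (N - k) ln|S_pooled| - sum (n_i - 1) ln|S_i|
    m = (ns.sum() - k) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))
    # Box's correction factor for the chi-square approximation
    c = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (k - 1))) * (
        np.sum(1 / (ns - 1)) - 1 / (ns.sum() - k))
    stat = m * (1 - c)
    df = p * (p + 1) * (k - 1) / 2
    return stat, chi2.sf(stat, df)

rng = np.random.default_rng(4)
# four cells of the design sharing a common covariance matrix
cells = [rng.multivariate_normal([60, 65], [[100, 40], [40, 80]], size=20)
         for _ in range(4)]
stat, p_value = box_m(cells)
print(f"Box's M chi-square approx. = {stat:.2f}, p = {p_value:.3f}")
```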

- Assumption #10: There should be **homogeneity of variances**. The two-way MANOVA assumes that there are equal variances in each cell of the design for each dependent variable. This can be tested using **Levene’s test of equality of variances**.

**Note:** If you have violated the assumption of homogeneity of variance-covariance matrices (see **Assumption #9** above), the results from Levene’s test of equality of variances can inform you which dependent variable might be causing the problem (i.e., the dependent variable(s) that have unequal variances).
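Levene’s test can also be sketched in Python for one dependent variable at a time. The data below are simulated; `center="mean"` is used here on the assumption that it matches SPSS’s mean-based version of the test (SciPy’s default is the median-based Brown-Forsythe variant):

```python
# Levene's test of equality of variances for one dependent variable across
# four hypothetical cells of the design.
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(5)
# humanities_score in four cells, all drawn with the same spread
cells = [rng.normal(60, 10, 15) for _ in range(4)]

stat, p = levene(*cells, center="mean")
if p < 0.05:
    print(f"Variances appear to differ across cells (p = {p:.3f})")
else:
    print(f"No evidence of unequal variances (p = {p:.3f})")
```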

## Interpreting Results

After running the two-way MANOVA procedure and testing that your data meet the assumptions of a two-way MANOVA, SPSS Statistics will have generated a number of tables that contain all the information you need to report the results of your two-way MANOVA.

The two-way MANOVA has two main objectives: (a) to determine whether there is a statistically significant interaction effect between the two independent variables on the combined dependent variables; and (b) if so, to run follow-up tests to determine where the differences lie. Both of these objectives will be answered in the following sections:

- Determining whether an interaction effect exists: In evaluating the main two-way MANOVA results, you can start by determining whether there is a statistically significant interaction effect between the two independent variables on the combined dependent variables. There are four different multivariate statistics that can be used to test statistical significance when using SPSS Statistics (i.e., **Pillai’s Trace**, **Wilks’ Lambda**, **Hotelling’s Trace** and **Roy’s Largest Root**). We will explain which to choose and how to interpret these statistics.
- Univariate interaction effects and simple main effects: If the interaction is statistically significant, one typical approach is to determine whether there are any statistically significant **univariate interaction effects** for each dependent variable separately (Pituch & Stevens, 2016). SPSS Statistics will have run these statistics for you (i.e., a two-way ANOVA for each dependent variable). If you have any statistically significant interaction effects, you can follow these up with **simple main effects**. We will explain how to interpret these follow-up tests.
- Main effects and univariate main effects: If your interaction effect **is not** statistically significant, you would follow up the **main effects** instead. If you have statistically significant main effects, you can follow these up with **univariate main effects**. We will explain how to interpret these follow-up tests.
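To demystify the four multivariate statistics, the Python sketch below computes all four from the eigenvalues of the hypothesis-versus-error SSCP matrices. For simplicity it uses a one-way layout on simulated data; in the two-way case SPSS partitions the hypothesis matrix by effect (each main effect and the interaction), but each statistic is formed from its eigenvalues in the same way:

```python
# Pillai's Trace, Wilks' Lambda, Hotelling's Trace and Roy's Largest Root,
# computed from scratch for a one-way layout (hypothetical data).
import numpy as np

rng = np.random.default_rng(6)
# three groups of 20 cases on two dependent variables
groups = [rng.multivariate_normal(mu, np.eye(2) * 9, size=20)
          for mu in ([60, 65], [63, 66], [66, 70])]

grand_mean = np.vstack(groups).mean(axis=0)
# error (within-groups) and hypothesis (between-groups) SSCP matrices
W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
B = sum(len(g) * np.outer(g.mean(axis=0) - grand_mean,
                          g.mean(axis=0) - grand_mean) for g in groups)

# all four statistics are functions of the eigenvalues of W^-1 B
eigvals = np.linalg.eigvals(np.linalg.inv(W) @ B).real
wilks = np.prod(1 / (1 + eigvals))        # Wilks' Lambda = det(W)/det(W+B)
pillai = np.sum(eigvals / (1 + eigvals))  # Pillai's Trace
hotelling = np.sum(eigvals)               # Hotelling's Trace
roy = eigvals.max()                       # Roy's Largest Root
print(f"Wilks = {wilks:.3f}, Pillai = {pillai:.3f}, "
      f"Hotelling = {hotelling:.3f}, Roy = {roy:.3f}")
```

Smaller Wilks’ Lambda (and larger values of the other three) indicate a stronger multivariate effect; SPSS converts each to an approximate F statistic for the significance test.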