Regression Analysis

A simple linear regression analysis is a statistical method that helps to predict the value of a dependent variable based on the value of an independent variable. It assesses the linear relationship between two continuous variables and provides insights into the relationship’s direction, magnitude, and statistical significance.

For instance, you can use simple linear regression to predict the sales of a product based on the advertising spend (i.e., your dependent variable would be “sales” and your independent variable would be “advertising spend”). You could also determine how much of the variation in sales can be explained by advertising spend. Similarly, you could use linear regression to predict the weight of a person based on their height (i.e., your dependent variable would be “weight” and your independent variable would be “height”). You could also determine how much of the variation in weight can be attributed to the person’s height.
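
The guide itself walks through this analysis in SPSS Statistics, but the calculation can be sketched in a few lines of Python. The snippet below is a minimal, hypothetical illustration of the advertising example using SciPy's `linregress`; the spend and sales figures are invented purely for demonstration.

```python
# Minimal simple linear regression sketch (hypothetical data).
import numpy as np
from scipy import stats

advertising_spend = np.array([10, 15, 20, 25, 30, 35, 40])   # e.g., $000s
sales = np.array([120, 150, 165, 200, 210, 240, 260])        # e.g., units sold

result = stats.linregress(advertising_spend, sales)

print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}")
print(f"R-squared = {result.rvalue ** 2:.3f}")  # variation in sales explained by spend
print(f"p-value for the slope = {result.pvalue:.4f}")
```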

Note that simple linear regression is also known as bivariate linear regression, and the dependent variable is also referred to as the outcome, target, or criterion variable. Likewise, the independent variable is also called the predictor, explanatory, or regressor […]


One-Way ANOVA

If you aim to investigate whether there are any statistically significant distinctions in the means of two or more distinct groups, you can employ a one-way analysis of variance (ANOVA). For instance, consider a situation where you wish to determine if there are variations in the performance of athletes in a track event based on their preferred running surface (i.e., your dependent variable would be “race performance,” measured in seconds, and your independent variable would be “running surface,” comprising three groups: “grass track,” “cinder track,” and “synthetic track”). Alternatively, a one-way ANOVA could be used to explore whether there are differences in customer satisfaction scores across different service channels (e.g., in-person, phone, online), where your dependent variable would be “satisfaction score,” and your independent variable would be “service channel,” encompassing multiple groups.
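
Although the guide describes running this test in SPSS Statistics, the omnibus F-test can be sketched quickly in Python. The running-surface times below are hypothetical and serve only to illustrate a call to SciPy's `f_oneway`.

```python
# Minimal one-way ANOVA sketch (hypothetical race times, in seconds).
from scipy import stats

grass_track = [52.1, 53.4, 51.8, 54.0, 52.7]
cinder_track = [50.9, 51.5, 52.0, 50.2, 51.1]
synthetic_track = [49.8, 50.1, 49.5, 50.6, 49.9]

f_statistic, p_value = stats.f_oneway(grass_track, cinder_track, synthetic_track)
print(f"F = {f_statistic:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one surface differs in mean race time;
# post hoc tests are needed to identify which groups differ.
```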

It’s important to note that the one-way ANOVA is also known as a between-subjects ANOVA or one-factor ANOVA. While it can technically be applied to an independent variable with only two groups, the independent-samples t-test is preferred in such cases. Hence, the one-way ANOVA is commonly described as a test used when you have three or […]


Independent-Samples T-Test

The independent-samples t-test is used to determine if a difference exists between the means of two independent groups on a continuous dependent variable. More specifically, it will let you determine whether the difference between these two groups is statistically significant. This test is also known by a number of different names, including the independent t-test, independent-measures t-test, between-subjects t-test, unpaired t-test, and Student’s t-test.

For example, you could use the independent-samples t-test to determine whether (mean) salaries, measured in US dollars, differed between males and females (i.e., your dependent variable would be “salary” and your independent variable would be “gender”, which has two groups: “males” and “females”). You could also use an independent-samples t-test to determine whether (mean) reaction time, measured in milliseconds, differed in under 21-year-olds versus those 21 years old and over (i.e., your dependent variable would be “reaction time” and your independent variable would be “age group”, split into two groups: “under 21-year-olds” and “21 years old and over”).
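
As a rough illustration of the reaction-time example outside SPSS Statistics, the hypothetical Python sketch below runs the test with SciPy; the values are invented.

```python
# Minimal independent-samples t-test sketch (hypothetical reaction times, in ms).
from scipy import stats

under_21 = [245, 230, 260, 255, 240, 238]
over_21 = [265, 270, 258, 280, 275, 262]

# equal_var=True gives the classic Student's t-test; equal_var=False gives
# Welch's t-test when the equal-variances assumption is doubtful.
t_statistic, p_value = stats.ttest_ind(under_21, over_21, equal_var=True)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
```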

Assumptions

In order to run an independent-samples t-test, there are six assumptions that need to be considered. The first three assumptions relate to your choice of study design and the measurements you […]


Binomial Logistic Regression

Binomial logistic regression is a statistical test for predicting the likelihood of an observation belonging to one of two possible categories of a binary dependent variable. This prediction is based on one or more independent variables, which can be continuous or categorical.

This form of regression shares similarities with linear regression, except for the nature of the dependent variable; linear regression deals with a continuous dependent variable, while binomial logistic regression works with a binary (dichotomous) one. Instead of predicting a specific value for the dependent variable, binomial logistic regression aims to predict the probability of an observation falling into a particular category based on the independent variables. The observation is then classified into the category that is deemed most probable. Binomial logistic regression can incorporate interactions between independent variables to enhance prediction accuracy.

For example, binomial logistic regression could be used to determine whether employees are likely to stay with or leave a company. The dependent variable here is “employment status,” with the two categories being “stay” or “leave.” The independent variables might include:

  • “Number of years with the company” (a continuous variable).
  • “Satisfaction with management” (a […]
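
As a hypothetical sketch of the employee-attrition example above (the guide itself covers SPSS Statistics), the Python snippet below fits a binomial logistic regression with statsmodels; the column names and data are invented for illustration.

```python
# Hypothetical binomial logistic regression sketch for the attrition example.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "left_company": [0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0],        # 1 = leave, 0 = stay
    "years_with_company": [8, 6, 2, 4, 10, 3, 7, 2, 9, 3, 6, 1],
    "satisfaction": [4, 5, 2, 3, 5, 2, 4, 1, 5, 2, 3, 4],         # e.g., 1-5 rating
})

# Predict the probability of leaving from tenure and satisfaction with management.
model = smf.logit("left_company ~ years_with_company + satisfaction", data=data).fit()
print(model.summary())
```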

Paired-Samples T-Test

The paired-samples t-test is used to assess whether the mean difference between related observations is statistically significant. These observations may come from the same individuals measured at two distinct time points, or under two conditions, on the same dependent variable. Alternatively, you might have two sets of participants matched on one or more attributes and then measured on a single dependent variable. The paired-samples t-test is also known as the dependent t-test, repeated-measures t-test, or paired t-test.

For instance, consider a scenario where you want to investigate whether there is a significant mean difference in the daily step count of individuals before and after a four-week fitness training program. In this case, your dependent variable would be “daily step count,” and you would have two linked groups, one representing step counts “before” the program and the other “after” the program. Another example is evaluating whether there is a mean difference in the test scores of students who received two different teaching methods (traditional versus online) for the same course. Here, your dependent variable would be “test scores,” and the related groups would […]
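
To make the step-count example concrete (the guide itself uses SPSS Statistics), the hypothetical Python sketch below runs the test with SciPy's `ttest_rel`; the figures are invented.

```python
# Minimal paired-samples t-test sketch (hypothetical daily step counts).
from scipy import stats

steps_before = [6200, 7100, 5800, 6900, 7400, 6600, 7000, 6300]
steps_after = [7800, 8200, 6900, 8100, 8800, 7500, 8300, 7200]

t_statistic, p_value = stats.ttest_rel(steps_before, steps_after)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
# The test asks whether the mean of the paired differences (after - before)
# is significantly different from zero.
```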


Two-Way ANCOVA

The two-way ANCOVA is a statistical test used to assess whether there is an interaction effect between two distinct independent variables on a continuous dependent variable. In simpler terms, it helps us understand whether these two variables have a combined influence on the outcome. The analysis also adjusts for one or more continuous covariates, additional variables that might affect the dependent variable.

To illustrate this concept further, imagine a study where researchers wanted to evaluate the impact of two teaching methods (one the traditional approach and the other a new experimental method) on student test scores. However, they also wanted to consider the effect of students’ prior knowledge levels, which varied across the participants. In this scenario, the two independent variables are the teaching method (with two groups: “Traditional” and “Experimental”) and prior knowledge (with two levels: “Low” and “High”). The dependent variable is the test score, and the continuous covariate is the students’ age. The researchers aim to determine if the experimental teaching method performs differently compared to the traditional method, taking into account students’ prior knowledge levels. Additionally, they want to explore whether the impact of teaching methods […]
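
The teaching-method example can also be sketched outside SPSS Statistics. In the hypothetical Python snippet below, the two factors and their interaction are modelled alongside the age covariate using statsmodels; all data and column names are invented.

```python
# Hypothetical two-way ANCOVA sketch: two factors, their interaction, and a covariate.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "score": [62, 70, 58, 75, 80, 66, 72, 85, 60, 78, 68, 82],
    "method": ["Traditional", "Experimental"] * 6,
    "knowledge": ["Low", "Low", "High", "High"] * 3,
    "age": [18, 19, 20, 18, 21, 19, 22, 20, 18, 19, 21, 20],
})

# The covariate (age) is entered alongside the two factors and their interaction.
model = smf.ols("score ~ age + C(method) * C(knowledge)", data=data).fit()
print(anova_lm(model, typ=2))  # Type II sums of squares for each term
```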


One-Way MANCOVA

The one-way multivariate analysis of covariance (one-way MANCOVA) extends the one-way MANOVA by incorporating a continuous covariate, and extends the one-way ANCOVA by incorporating multiple dependent variables. This addition enhances the sensitivity of the analysis to detect differences among the groups of a categorical independent variable. The one-way MANCOVA is employed to determine whether there are any statistically significant variations in the adjusted means among three or more unrelated groups, all while controlling for a continuous covariate.

Note 1: While the one-way MANCOVA can accommodate a nominal or ordinal independent variable, it treats this variable as nominal, meaning it does not consider an ordinal variable’s ordered nature. Additionally, if your covariate is ordinal or nominal, it is advisable to employ a different statistical test.

Note 2: Handling two or more continuous covariates introduces additional complexities in conducting and interpreting the one-way MANCOVA. As a result, a separate guide for the one-way MANCOVA with multiple continuous covariates will be made available for those interested.

It is crucial to understand that the one-way MANCOVA is an omnibus test. It provides information about whether the groups of the independent variable significantly differ when considering the […]
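
As a rough illustration of this omnibus test outside the SPSS Statistics procedure, the hypothetical Python sketch below fits a one-way MANCOVA with statsmodels; the group labels, dependent variables, and covariate values are all invented.

```python
# Hypothetical one-way MANCOVA sketch: two dependent variables, one grouping
# factor, and a continuous covariate.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

data = pd.DataFrame({
    "group": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "dv1": [12, 14, 11, 13, 15, 18, 17, 19, 16, 20, 22, 21, 23, 24, 20],
    "dv2": [30, 32, 29, 31, 33, 36, 35, 38, 34, 37, 41, 40, 42, 44, 39],
    "covariate": [5, 6, 5, 7, 6, 6, 7, 8, 6, 7, 8, 7, 9, 8, 7],
})

# The C(group) row of the output gives the multivariate tests (Pillai's trace,
# Wilks' lambda, etc.) for the groups after adjusting for the covariate.
manova = MANOVA.from_formula("dv1 + dv2 ~ covariate + C(group)", data=data)
print(manova.mv_test())
```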


Hierarchical Multiple Regression (HMR)

Like standard multiple regression, hierarchical multiple regression (also known as sequential multiple regression) allows you to predict a dependent variable based on multiple independent variables. However, the procedure that it uses to do this in SPSS Statistics, and the goals of hierarchical multiple regression, are different from standard multiple regression. In standard multiple regression, all the independent variables are entered into the regression equation at the same time. By contrast, hierarchical multiple regression enables you to enter the independent variables into the regression equation in an order of your choosing. This has a number of advantages, such as allowing you to: (a) control for the effects of covariates on your results; and (b) take into account the possible causal effects of independent variables when predicting a dependent variable. Nonetheless, all hierarchical multiple regressions answer the same statistical question: How much extra variation in the dependent variable can be explained by the addition of one or more independent variables?

For example, you could use hierarchical multiple regression to understand whether exam performance can be predicted based on revision time, lecture attendance and prior academic achievement. Here, your continuous dependent variable would be “exam performance”, whilst […]
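
To illustrate the “extra variation explained” question with the exam-performance example (the guide itself describes the SPSS Statistics procedure), the hypothetical Python sketch below fits the two blocks as nested models in statsmodels and compares them; the data are invented.

```python
# Hypothetical hierarchical multiple regression sketch: compare nested models.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "exam_performance": [55, 62, 70, 48, 75, 66, 80, 58, 72, 68, 77, 60],
    "prior_achievement": [50, 58, 65, 45, 70, 60, 78, 52, 68, 62, 74, 55],
    "revision_time": [8, 10, 12, 6, 15, 11, 18, 7, 14, 12, 16, 9],
    "lecture_attendance": [70, 75, 85, 60, 90, 80, 95, 65, 88, 82, 92, 72],
})

# Block 1: the control variable only.
model1 = smf.ols("exam_performance ~ prior_achievement", data=data).fit()
# Block 2: add the predictors of interest.
model2 = smf.ols(
    "exam_performance ~ prior_achievement + revision_time + lecture_attendance",
    data=data,
).fit()

print(f"R-squared change = {model2.rsquared - model1.rsquared:.3f}")
# An F-test on the nested models shows whether the added predictors explain a
# statistically significant amount of extra variance.
f_stat, p_value, df_diff = model2.compare_f_test(model1)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, df difference = {df_diff}")
```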


PCA

Principal components analysis (i.e., PCA) is a variable-reduction technique that shares many similarities with exploratory factor analysis. Its aim is to reduce a larger set of variables into a smaller set of ‘artificial’ variables (called principal components) that account for most of the variance in the original variables. Although principal components analysis is conceptually different from factor analysis, it is often used interchangeably with factor analysis in practice and is included within the Factor procedure in SPSS Statistics.
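
As a minimal illustration outside the SPSS Statistics Factor procedure, the hypothetical Python sketch below runs a principal components analysis with scikit-learn on simulated data.

```python
# Minimal PCA sketch on simulated data: 100 observations on 6 correlated variables.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))                       # two underlying dimensions
loadings = rng.normal(size=(2, 6))
X = latent @ loadings + rng.normal(scale=0.5, size=(100, 6))

# Variables are often standardized first so that each contributes equally.
X = (X - X.mean(axis=0)) / X.std(axis=0)

pca = PCA()
scores = pca.fit_transform(X)                # component scores for each observation
print(pca.explained_variance_ratio_)         # proportion of variance per component
# Components before the 'elbow' of a scree plot, or with eigenvalues greater
# than 1, are commonly retained.
```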

Assumptions

In order to run a principal components analysis, the following four assumptions must be met. The first assumption relates to your choice of study design, whilst the remaining three assumptions reflect the nature of your data:

  • Assumption #1: You have multiple variables that are measured at the continuous level (although ordinal data is very frequently used). Examples of continuous variables include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth. Examples of ordinal variables include Likert items (e.g., a 7-point scale from “strongly agree” through to “strongly disagree”), amongst other ways of ranking categories (e.g., a 5-point scale explaining how much a customer liked a product, ranging from “Not very much” […]

Two-Way MANOVA

The two-way multivariate analysis of variance (MANOVA) is an analytical technique that extends the principles of the two-way ANOVA to scenarios with multiple dependent variables. It is particularly useful in determining how two independent variables interact in their combined influence on several dependent variables.

For example, consider a study to evaluate the impact of diet type (e.g., vegetarian, keto, Mediterranean) and exercise regimen (e.g., cardio, strength training, mixed) on various health outcomes. The dependent variables, in this case, might include blood pressure, cholesterol level, and body mass index (BMI). The two-way MANOVA would enable researchers to assess how the combination of diet and exercise regimen influences these health outcomes collectively rather than just looking at each health outcome separately.
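
As a rough sketch of the diet-and-exercise example (the guide itself uses SPSS Statistics), the hypothetical Python snippet below fits a two-way MANOVA with statsmodels; for brevity only two exercise groups are included, and all values are invented.

```python
# Hypothetical two-way MANOVA sketch: two factors, three dependent variables.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

data = pd.DataFrame({
    "diet": ["vegetarian", "keto", "mediterranean"] * 6,
    "exercise": ["cardio"] * 9 + ["strength"] * 9,
    "bp": [118, 125, 120, 122, 130, 119, 121, 128, 117,
           115, 122, 116, 118, 126, 114, 117, 124, 113],
    "cholesterol": [180, 210, 190, 185, 215, 188, 182, 212, 186,
                    175, 205, 183, 178, 208, 180, 176, 206, 179],
    "bmi": [24.1, 27.3, 25.0, 24.8, 28.0, 24.5, 24.3, 27.8, 24.0,
            23.5, 26.9, 24.2, 23.8, 27.1, 23.9, 23.6, 27.0, 23.7],
})

# The C(diet):C(exercise) row of the output is the interaction effect on the
# three health outcomes considered jointly.
manova = MANOVA.from_formula("bp + cholesterol + bmi ~ C(diet) * C(exercise)", data=data)
print(manova.mv_test())
```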

Another scenario where a two-way MANOVA could be applied is in a marketing research study analyzing the impact of advertising medium (e.g., television, online, print) and product type (e.g., consumer electronics, clothing, food items) on customer responses. The dependent variables could be customer recall, attitude toward the advertisement, and intention to purchase. This analysis would help marketers understand how the effectiveness of different advertising mediums varies with product type across several […]
