Here we will briefly discuss some of the important statistical measures.
Of course, we do not pretend to give a complete picture; our aim is to introduce some popular and rather simple applications of statistical criteria
to testing the "basic" hypotheses in social science quantitative research.
Where possible, links to further developments and more advanced methods are provided.

Some important parametric and non-parametric statistics:

Parametric tests and measures

Linking 2 variables: Pearson's product-moment correlation coefficient

Used to measure the degree of linear relationship between two (usually) interval-scale variables. In accordance with the usual convention, when calculated for an entire population the Pearson product-moment correlation is typically designated by the analogous Greek letter, which in this case is rho (ρ). Hence its designation by the Latin letter r implies that it has been computed for a sample (to provide an estimate of the underlying population value) - see image.

Find out more about Pearson's product-moment correlation by clicking the image.
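To make the computation concrete, here is a minimal pure-Python sketch of the sample coefficient r; the function name and toy data are illustrative, not from the source:

```python
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson product-moment correlation coefficient r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # covariance term (numerator) and the two standard-deviation terms
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear positive relationship gives r = 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```

In practice one would of course use a statistics library rather than this sketch; the point is that r is just standardized covariance.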

Comparing 2 samples/means: Student's t

A measure for testing hypotheses of equality between two means (either a theoretical vs. an empirical mean, or two group means). As a parametric criterion, Student's t requires larger samples and a normal distribution of the dependent interval-scale variable. It is computed differently for equal and unequal sample sizes and variances (see image).

Find out more about Student's t by clicking the image.
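As an illustration of the equal-variance case, here is a minimal sketch of the two-sample t statistic with a pooled variance estimate (names and data are hypothetical, not from the source):

```python
from math import sqrt

def t_equal_var(a, b):
    """Two-sample Student's t assuming equal variances (pooled estimate)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # pooled variance: weighted average of the two sample variances
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

print(t_equal_var([5, 6, 7, 8], [1, 2, 3, 4]))  # ≈ 4.38
```

The unequal-variance case (Welch's t) replaces the pooled variance with separate per-group terms and adjusts the degrees of freedom.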

Comparing >2 samples/means: ANOVA (Fisher's F)

A measure for testing hypotheses about similarities/differences between more than two groups. Fisher's F (the statistical basis of parametric ANOVA) requires larger samples, normality of the dependent variable's distribution, and homoscedasticity (meaning that the variance of the data should be the same in all groups). Some authors claim that the F-test is unreliable if there are deviations from normality, while others claim that the F-test is robust. The Kruskal-Wallis test is a non-parametric alternative which does not rely on an assumption of normality (see below).

Click the image for more on Analysis of Variance.
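The F statistic compares variation between group means to variation within groups; a minimal sketch of the one-way case (illustrative code, not from the source):

```python
def f_oneway(*groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares: spread of group means around the grand mean
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares: spread of values around their own group mean
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Well-separated groups give a large F; identical group means give F = 0
print(f_oneway([1, 2], [10, 11]))  # 162.0
```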

Non-parametric tests and measures

Linking 2 variables: Spearman's correlation coefficient for ordinal variables

Spearman's rank correlation coefficient, or Spearman's rho, is a non-parametric measure of correlation - the degree of (monotonic) relationship between two ORDINAL-measured variables. In principle, rho is simply a special case of Pearson's correlation coefficient (see above), in which the two sets of data X_{i} and Y_{i} are converted to rankings x_{i} and y_{i} before calculating the coefficient. However, there is a difference related to the presence (or absence) of tied ranks - see image.

Find out more on Spearman's correlation and other ordinal measures of linear association by clicking the image.
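The rank-then-correlate idea can be sketched as follows; tied values receive the average of their positions, a common convention (the helper names are illustrative, not from the source):

```python
from math import sqrt

def _avg_ranks(v):
    """1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    ranks = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1                      # extend over the run of tied values
        avg = (i + j) / 2 + 1           # average 1-based position of the run
        for pos in order[i:j + 1]:
            ranks[pos] = avg
        i = j + 1
    return ranks

def _pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman_rho(x, y):
    """Spearman's rho: Pearson's r computed on the rank-transformed data."""
    return _pearson(_avg_ranks(x), _avg_ranks(y))

# Any strictly increasing relationship, however non-linear, gives rho = 1.0
print(spearman_rho([10, 20, 30, 40], [1, 5, 9, 100]))  # 1.0
```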

Linking 2 nominal variables and comparing proportions: Pearson's Chi-square

A key measure for dealing with nominal variables - in principle, the logic of Chi-square (comparing the expected ("independence"/theoretically known) proportions with the observed (empirical) ones) is essentially the only one that allows measuring association between two nominal variables. Chi-square itself may sometimes be confusing (because it depends on the degrees of freedom), but it serves as the basis for other statistics (Cramer's V, Pearson's Phi, etc.), as well as for non-parametric testing (see below).

Find out more about Pearson's Chi-square by clicking the image.
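The observed-vs-expected logic can be sketched for a contingency table given as a list of rows (illustrative code, not from the source):

```python
def chi_square(observed):
    """Pearson's chi-square for an r x c contingency table (list of rows)."""
    row_tot = [sum(r) for r in observed]
    col_tot = [sum(c) for c in zip(*observed)]
    total = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            # expected count under the independence hypothesis
            exp = row_tot[i] * col_tot[j] / total
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# A perfectly "independent" table gives chi-square = 0
print(chi_square([[10, 10], [10, 10]]))  # 0.0
```

Derived measures such as Phi and Cramer's V rescale this value by the sample size (and table dimensions) to remove the dependence on degrees of freedom.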

Comparing 2 samples: Non-parametric "alternatives" to Student's t

There are two common tests of significance for the two-sample case when the dependent variable is measured at the ordinal level (or when the other assumptions of parametric methods are not met): the Median test and Mann-Whitney's U (see image). The Median test is the simpler and more straightforward of the two: in principle, it tests the equality of groups that are assumed (by H_{0}) to have equal medians. Mann-Whitney's U is a bit more complicated, but is likewise used for comparing statistics of two groups.

Click the image for more information on non-parametric two-sample tests.
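One common formulation of the U statistic counts, over all cross-group pairs, how often a value from the first sample exceeds one from the second (ties count as half); a minimal sketch, with illustrative names and data:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U for sample a vs. sample b (pair-counting form)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1        # a-value beats the b-value
            elif x == y:
                u += 0.5      # tie counts as half a "win"
    return u

# If every a-value is below every b-value, U = 0;
# if every a-value is above, U = len(a) * len(b).
print(mann_whitney_u([1, 2], [3, 4]))  # 0.0
```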

Comparing >2 samples: Non-parametric "alternative" to parametric ANOVA (Fisher's F)

The test named after William Kruskal and W. Allen Wallis is a non-parametric method for testing the equality of population medians among more than two groups. The "idea" of the K-W test is essentially the same as that of the parametric ANOVA discussed above. As a non-parametric method, the Kruskal-Wallis test does not assume a normal population, unlike the analogous one-way analysis of variance. However, the test does assume an identically shaped and scaled distribution for each group, except for any difference in medians.
Click the image to find out more on K-W analysis of variance.
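The H statistic ranks all values together and then compares mean ranks across groups; a minimal sketch without the tie correction, assuming no duplicate values (illustrative code, not from the source):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction; assumes distinct values)."""
    pooled = sorted(x for g in groups for x in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # 1-based pooled ranks
    n = len(pooled)
    h = 0.0
    for g in groups:
        r = sum(rank[v] for v in g)   # rank sum for this group
        h += r * r / len(g)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)

# Groups with identical rank sums give H = 0
print(kruskal_wallis_h([1, 6], [2, 5], [3, 4]))  # 0.0
```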