# A series of basic statistics by Tom Lang

## 5. Tests and Measures of Association

### Introduction

### Tests of Association

For example, suppose we want to know whether serum calcium concentrations (low vs. normal or high) are associated with osteoporosis (present or absent). This question is illustrated by the four cells in the data field of Table 2. If calcium concentrations and osteoporosis were perfectly associated, all 100 women would be represented in either the upper-left cell or the lower-right cell. On the other hand, if calcium concentrations and osteoporosis were perfectly independent, we would expect to see about 25 of the 100 women represented in each of the four cells. That is, if the association were no better than chance, the combinations of our two variables would be more or less evenly distributed over the four cells.
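The logic of the χ2 test of association can be sketched in a few lines of code. The counts below are hypothetical, not the actual data of Table 2; the function simply compares each observed count to the count we would expect under independence.

```python
def chi_square_independence(table):
    """Chi-square statistic for a table of counts: the sum over all cells
    of (observed - expected)^2 / expected, where the expected count is
    row total * column total / grand total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Counts spread evenly over the four cells (perfect independence):
print(chi_square_independence([[25, 25], [25, 25]]))  # 0.0
# Counts concentrated on the diagonal (strong association):
print(chi_square_independence([[45, 5], [5, 45]]))    # 64.0
```

A large statistic (relative to the χ2 distribution with the appropriate degrees of freedom) yields a small P value and the conclusion that the two variables are associated.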

Another example of the same principle is shown in Table 3. Here, the table has six cells. Again, if the mix of proportions in our data were the result of chance, we would expect each cell to represent about one-sixth (roughly 17%) of our sample. If the mix in our data were about equally distributed across all the cells, the P value would be large, and the variables would be called independent.

The χ2 test of goodness-of-fit is the same as the χ2 test of association or independence, with one difference. Instead of comparing the mix of proportions we got from our data to chance, it compares it to a known mix of proportions.

For example, suppose we are testing the hypothesis that handedness (whether someone is right- or left-handed) is related to some measure of skill, like throwing a ball. Before we test that hypothesis, however, we want to see if the handedness in our sample is representative of the general population. We know that about 80% of people are right-handed and about 20% are left-handed. If we assume that the proportion of handedness is equal between men and women, the mix of proportions would be like that in Table 4. (The sums of percentages at the end of the rows and at the bottom of the columns are called "marginals" because they appear in the right and bottom "margins," or edges, of the table.)
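A minimal sketch of the goodness-of-fit version, using the 80%/20% handedness split as the known mix of proportions (the sample counts below are hypothetical):

```python
def chi_square_goodness_of_fit(observed, expected_props):
    """Chi-square statistic comparing observed counts to a known
    mix of proportions."""
    n = sum(observed)
    chi2 = 0.0
    for obs, prop in zip(observed, expected_props):
        expected = n * prop  # count expected under the known proportions
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical sample: 75 right-handed and 25 left-handed people,
# compared to the known 80%/20% population split.
print(chi_square_goodness_of_fit([75, 25], [0.80, 0.20]))  # ≈ 1.56
```

A small statistic (large P value) here would suggest that the handedness in our sample is consistent with the population mix.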

### Measures of Association

Association is usually reported as present or absent, solely on the basis of the P value. However, there are measures of association that indicate the strength of the association. For example, the phi (φ, pronounced "fee") coefficient is a measure of association that ranges from -1 to +1, where +1 is a perfect (strong) association, zero is no association, and -1 is a perfect inverse association.

Ratios are also measures of association that are used to report risk. The most common in medicine are probably the odds, risk, and hazard ratios. In these three ratios, a value of 1 means that the risk in one group is the same as the risk in the other group. A number larger than 1 indicates that the group in the numerator is at greater risk, and a number smaller than 1 indicates that the group in the denominator is at greater risk.
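A minimal sketch of the phi coefficient mentioned above, computed from a 2×2 table of counts [[a, b], [c, d]] with the standard formula (the counts below are hypothetical):

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for the 2x2 table [[a, b], [c, d]]:
    (ad - bc) divided by the square root of the product of the
    four marginal totals."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

print(phi_coefficient(45, 5, 5, 45))   # strong positive association (0.8)
print(phi_coefficient(25, 25, 25, 25)) # no association (0.0)
```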

Risk is simply the frequency with which something occurs. If 3 people of every hundred in a town fall off a bicycle each year, the risk of having a bicycle accident is 3%. If 2 of 100 people trip and fall while walking, the risk of falling is 2%. The risk ratio is just the ratio of the two risks: in this case, the risk of falling off a bicycle divided by the risk of falling while walking is 3/2, or 1.5, meaning that the risk of falling off a bicycle is 1.5 times as great as the risk of falling while walking.
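The bicycle-versus-walking example works out as follows, as a direct translation of the arithmetic above:

```python
def risk(events, total):
    """Risk: the frequency with which something occurs."""
    return events / total

def risk_ratio(events_a, total_a, events_b, total_b):
    """Ratio of the risk in group A to the risk in group B."""
    return risk(events_a, total_a) / risk(events_b, total_b)

# 3 of 100 people fall off a bicycle; 2 of 100 fall while walking.
print(risk_ratio(3, 100, 2, 100))  # ≈ 1.5
```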

A hazard ratio is interpreted in the same way as a risk ratio. The difference is that a hazard is a measure of risk over time. More precisely, it is the probability that, if an event has not occurred in one period, it will occur in the next.
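That definition can be sketched for discrete follow-up periods; the patient counts below are hypothetical:

```python
def hazard(events, at_risk):
    """Discrete hazard for one period: the probability that the event
    occurs in this period among those in whom it has not yet occurred."""
    return events / at_risk

# Hypothetical second year of follow-up: 10 of the 80 still-event-free
# patients in group A have the event, vs. 5 of the 80 in group B.
hazard_ratio = hazard(10, 80) / hazard(5, 80)
print(hazard_ratio)  # 2.0
```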

Hazard ratios are found in time-to-event studies with binary outcomes (lived or died; cured or not) and are the output of Cox proportional hazards regression, which is used in "time-to-event" or "time-to-failure" analysis. Importantly, the time to the event from a given starting point is the outcome, not the event itself. For example, the time between hospitalization and death is what we are interested in, not the death itself. However, Cox regression is also commonly used to identify factors associated with death.

Odds ratios are interpreted the same way risk ratios are, but the two are different. The risk (probability) of drawing a heart from a deck of cards is 13/52 = 1/4 = 25%. The odds, however, are the probability of drawing a heart divided by the probability of not drawing a heart: 13/39 = 1/3, or about 0.33.
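The card example, as code: probability counts successes over all outcomes, whereas odds count successes over failures.

```python
def probability(favorable, total):
    """Probability: favorable outcomes / all outcomes."""
    return favorable / total

def odds(favorable, total):
    """Odds: favorable outcomes / unfavorable outcomes."""
    return favorable / (total - favorable)

# Drawing a heart from a 52-card deck (13 hearts):
print(probability(13, 52))  # 0.25
print(odds(13, 52))         # ≈ 0.333
```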

The odds ratio is the odds for one group divided by the odds for another. Table 5 shows the calculations for the odds of having a heart attack among smokers and among nonsmokers, as well as the odds ratio that combines both odds into a single number. The risk of smokers having a heart attack is calculated as the number of smokers having heart attacks divided by the total number of smokers: 14/36 = 0.39. The odds of smokers having heart attacks are the number of smokers with heart attacks divided by the number of smokers who did not have heart attacks: 14/22 = 0.636. The odds of nonsmokers having heart attacks are 5/33, or 0.152. The odds ratio is 0.636/0.152 = 4.2, which means that the odds of smokers having a heart attack are 4.2 times as high as the odds for nonsmokers.
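The Table 5 calculation can be reproduced directly from the four counts (14 smokers with heart attacks and 22 without; 5 nonsmokers with heart attacks and 33 without):

```python
def odds_ratio(events_a, non_events_a, events_b, non_events_b):
    """Odds ratio: the odds in group A divided by the odds in group B."""
    return (events_a / non_events_a) / (events_b / non_events_b)

# Smokers: 14 heart attacks, 22 none. Nonsmokers: 5 heart attacks, 33 none.
print(odds_ratio(14, 22, 5, 33))  # ≈ 4.2
```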

Odds ratios are hard to understand, but they are the output of logistic regression analysis, which is a particularly useful statistical method. Another common measure of association is the kappa (κ) statistic, which assesses "agreement" among raters for multiple observations of the same subjects. The kappa statistic is often used in evaluating diagnostic tests. It ranges from -1 to +1, where +1 indicates complete agreement and -1 indicates complete disagreement.
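A minimal sketch of Cohen's kappa for two raters, one common form of the kappa statistic: it is the observed agreement corrected for the agreement expected by chance. The agreement table below is hypothetical.

```python
def cohens_kappa(table):
    """Cohen's kappa for a square table of counts, where rows are
    rater 1's classifications and columns are rater 2's."""
    n = sum(sum(row) for row in table)
    # Observed agreement: the proportion of subjects on the diagonal.
    observed = sum(table[i][i] for i in range(len(table))) / n
    # Chance agreement: expected agreement based on the marginal totals.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    chance = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (observed - chance) / (1 - chance)

# Hypothetical: two raters classify 100 scans as positive or negative.
print(cohens_kappa([[40, 10], [5, 45]]))  # ≈ 0.7
```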