Wednesday, February 10, 2010

How to Determine Sample Size for Clinical Research



1.Why is sample size important?

  • validity – too small a sample will not yield valid results.
  • accuracy – only an appropriate sample size will produce accurate results.
  • finance – a large sample costs more. It is a waste of money to test more subjects or animals than required.
  • resources – not just money: other resources such as time and manpower are wasted by an inappropriately small or large sample.
  • ethics – it is unethical to choose too small or too large a sample. The validity of results obtained from a small sample is questionable, so the entire study is wasted; larger samples waste resources and manpower, and the patients/animals are subjected to unnecessary hardship. Hence an inappropriate sample size is definitely unethical.

2.What factors will affect the size of the sample?

  • degree of difference – the minimum important difference between groups that we want to detect, i.e. the difference between two means or proportions (the control value and the expected test value).
  • type I error – the alpha error: “the chance of detecting a difference which does not exist” (false positive). It is usually set at 0.05 or 5% and is also called the “level of significance”.
  • type II error – the beta error: “the chance of NOT detecting a difference which exists” (false negative). It is usually fixed at 0.2 or 20%. 1 − beta is called power (1 − 0.2 = 0.8).
  • variation of results – the standard deviations of the control and test groups.
  • drop-out – this rate has to be estimated and an appropriate allowance given.
  • non-compliance – this increases variation, so the sample size has to be adjusted to the degree of non-compliance expected.

3.What methods can be used to determine the sample size?

  • arbitrary numbers – not recommended.
  • from previous studies – may not be correct.
  • nomograms & tables – can be used; may not be flexible and accurate.
  • formulas – recommended; they vary with study design and analysis.
  • computer programs – based on formulas; easy to use.

4.How to calculate the sample size?

1. Assess the difference expected

2. Find out the SD of the variables

3. Set the level of significance (alpha)

4. Set the beta level

5. Select the appropriate formula

6. Calculate the sample size

7. Give allowances for drop-outs and non-compliance
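As an illustration, the sequence above can be sketched in code for the common case of comparing two means. This uses the standard normal-approximation formula n = 2(z<sub>alpha/2</sub> + z<sub>beta</sub>)² × SD² / difference² per group; the z values are hard-coded for the usual alpha and power choices, and the function name and drop-out handling are our own illustrative choices:

```python
import math

def sample_size_two_means(difference, sd, alpha=0.05, power=0.80,
                          dropout_rate=0.10):
    """Approximate per-group n for comparing two means, using the
    normal-approximation formula n = 2*(z_a + z_b)^2 * sd^2 / difference^2."""
    # z values for the common choices of alpha (two-sided) and power
    z_alpha = {0.05: 1.960, 0.01: 2.576}[alpha]
    z_beta = {0.80: 0.842, 0.90: 1.282}[power]
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / difference ** 2
    n = math.ceil(n)
    # Inflate the result to allow for the expected drop-outs
    n_adjusted = math.ceil(n / (1 - dropout_rate))
    return n, n_adjusted

# e.g. to detect a difference of 10 units when the SD is 20:
# sample_size_two_means(10, 20) -> (63, 70)
```

For example, to detect a difference of 10 units with an SD of 20 at alpha = 0.05 and 80% power, roughly 63 subjects per group are needed before the drop-out allowance.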





Power calculation

Power is the probability that a study will reveal a difference between groups if the difference exists. The higher the chance of picking up an existing difference, the more powerful the study. Power is calculated by subtracting the beta error from 1: Power = 1 − beta. Power is one of the elements required for calculating the sample size. When a study is over, the power can be calculated (posteriori power calculation) to find out whether the study had enough power to pick up the difference if it existed. The minimum power a study should have is 0.8 (80%).



Posteriori power calculation can be carried out using computer programs. If a study reveals that the power is less than 80%, the conclusion should be stated as “the difference between groups could not be detected” rather than “there is no difference between groups”, because the power of the study is too low to pick up the difference if it exists, i.e. the probability of missing the difference is high and the study could have missed it.
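A rough posteriori power calculation for a two-group comparison of means can be sketched as follows. This is a normal-approximation sketch (z = 1.96 for a two-sided alpha of 0.05); the function names are illustrative, and real analyses should use validated statistical software:

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def post_hoc_power(difference, sd, n_per_group, z_alpha=1.960):
    """Approximate power of a completed two-group comparison of means
    (normal approximation; z_alpha = 1.96 corresponds to alpha = 0.05)."""
    se = sd * math.sqrt(2 / n_per_group)   # standard error of the difference
    return normal_cdf(difference / se - z_alpha)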



Random Allocation

To allocate subjects or animals to different groups, randomization is followed so that each eligible individual in the population has the same chance of being allocated to any group. This eliminates selection bias. Haphazard allocation cannot be called random. Random number tables or computer programs can be used for random allocation. Simple randomization using computer-generated random numbers is commonly used. Sometimes simple randomization may pose problems. For example, two randomly allocated groups may have different male:female ratios. If it is important to have equal numbers of males and females in all groups, then one of the slightly more complex randomization procedures such as block, stratified or cluster randomization should be used.
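The two allocation schemes mentioned above can be sketched in a few lines. These are illustrative functions only (a real trial would use dedicated randomization software with concealment of the allocation sequence); note how block randomization keeps the group sizes balanced while simple randomization need not:

```python
import random

def simple_randomization(subject_ids, groups=("A", "B"), seed=None):
    """Simple randomization: each subject independently gets a random group,
    so group sizes can end up unequal."""
    rng = random.Random(seed)
    return {sid: rng.choice(groups) for sid in subject_ids}

def block_randomization(subject_ids, block_size=4, groups=("A", "B"), seed=None):
    """Block randomization: within each block every group appears equally
    often, keeping group sizes balanced throughout recruitment.
    (Assumes the number of subjects is a multiple of block_size.)"""
    rng = random.Random(seed)
    allocation = {}
    per_block = block_size // len(groups)
    for start in range(0, len(subject_ids), block_size):
        block = list(groups) * per_block   # e.g. ["A", "B", "A", "B"]
        rng.shuffle(block)
        for sid, grp in zip(subject_ids[start:start + block_size], block):
            allocation[sid] = grp
    return allocation
```

After block randomization of 8 subjects with a block size of 4, exactly 4 subjects land in each group, whatever the random seed.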



Significance Testing

1. State the null hypothesis

2. Set alpha (level of significance)

3. Identify the variables to be analysed

4. Identify the groups to be compared

5. Choose a test

6. Calculate the test statistic

7. Find out the P value

8. Interpret the P value

9. Calculate the CI of the difference

10. Calculate power if required
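The middle of this flow (choosing a test, calculating the statistic, finding P) can be illustrated with an unpaired t test computed by hand. This is a teaching sketch: the P value here uses a normal approximation to the t distribution, so it is only rough for small samples, and real analyses should use statistical software:

```python
import math
from statistics import mean, variance

def normal_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def unpaired_t_test(group1, group2):
    """Unpaired (Student's) t test with a pooled SD. The two-sided P value
    is a normal approximation, so it is rough for small samples."""
    n1, n2 = len(group1), len(group2)
    # Pooled variance across the two groups
    pooled = ((n1 - 1) * variance(group1) +
              (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    se = math.sqrt(pooled * (1 / n1 + 1 / n2))
    t = (mean(group1) - mean(group2)) / se
    p = 2 * (1 - normal_cdf(abs(t)))   # two-sided, approximate
    return t, p
```

Identical groups give t = 0 and P near 1 (no evidence of a difference); clearly separated groups give a large |t| and a tiny P.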



Null hypothesis (statistical hypothesis) states that there is no difference between groups compared. Alternative hypothesis or research hypothesis states that there is a difference between groups.



e.g. New drug ‘X’ is an analgesic – (research hypothesis)

New drug ‘X’ is no better than a placebo (no difference between the drug and placebo) – (null hypothesis)



Alpha is the type I error, and its acceptable limit has to be set. It is generally set at 0.05 and not above. If the P value is less than this limit, the null hypothesis is rejected, i.e. the difference between groups is not due to chance.



Think of a statistical test as a judge. The accused (= drug under investigation) is not guilty (= null hypothesis) until the charge is proved. The judge’s decision depends on the evidence (= data). He calculates the chance (P) of the evidence presented by the inspector (= researcher) being false (= the probability that the difference observed between groups is spurious). If this is less than 0.05, he convicts the accused (= the difference between groups is significant). If P > 0.05, he gives the benefit of the doubt to the accused and releases him (= the difference is not significant and the drug is thrown out).



Choosing an appropriate statistical test



There are umpteen statistical tests. Why can’t we have just one that calculates the P value? The answer lies in the characteristics of the data and the type of analysis aimed at. There are different types of data (continuous, discrete, rank, score, binomial), and the aim of analysing the data can also differ, i.e. finding the significance of a difference between means or medians, or quantifying the association between variables. The number of groups in a study may vary, and so does the study design (paired or unpaired). These variations make it difficult to have a single common test. The significance test has to be chosen carefully: a test that is not appropriate for a given situation will lead to invalid conclusions. For example, you cannot use a thermometer to measure the weight of a patient. Similarly, a metre scale can measure the height of a patient but not the temperature. One can measure grain using a ‘litre can’ as well as a ‘balance’ – but which one do you think will be more precise?



Before choosing a statistical test, one has to work out the following details. This will help you choose a test from the table given:

Determine :

Aim of the study –

Parameter to be analysed -

Data type - [Continuous, Discrete, Rank, Score, Binomial]

Analysis type - [Comparison of means, Quantify association, Regression analysis]

No. of groups to be analysed -

No. of data sets to be analysed -

Distribution of data - [normal or non-normal]

Design - [paired or unpaired]



With the above information, one can decide the suitable test using the table given.

Significance tests can be divided into parametric and non-parametric tests. The former include the t test, ANOVA, linear regression and the Pearson correlation coefficient, whereas the latter include the Wilcoxon, Mann-Whitney U, Kruskal-Wallis ANOVA, Friedman ANOVA and Spearman rank correlation tests. Variables which follow a normal distribution can be subjected to parametric tests; those which do not are suited to non-parametric tests. If the aim is to find the association between variables, correlation or regression tests should be chosen; the difference between means or medians can be found using the other tests. If more than two groups/means are compared, ANOVA should be used. These models/tests also have variants. For example, there are many ANOVA models (one-way, two-way, repeated measures, factorial etc.) and one has to choose the appropriate one (remember the analogy of the ‘litre can’ and ‘balance’ for measuring grain?).

  • The significance test to be used must be decided at the beginning of the study.
  • A study may need more than one test, depending on the number and characteristics of the parameters studied.
  • Always spend enough time and thought choosing the right test.
  • An inappropriate test will lead to invalid conclusions.
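The decision logic behind such a table can be caricatured as a small look-up function. This is illustrative only: it covers just a few of the combinations discussed above, and the parameter names are our own:

```python
def choose_test(analysis, n_groups, distribution, design):
    """Toy look-up reflecting the selection logic discussed above: map
    study characteristics to a candidate significance test (not exhaustive)."""
    if analysis == "association":
        return ("Pearson correlation" if distribution == "normal"
                else "Spearman rank correlation")
    if n_groups > 2:
        return "ANOVA" if distribution == "normal" else "Kruskal-Wallis ANOVA"
    if distribution == "normal":
        return "paired t test" if design == "paired" else "unpaired t test"
    return "Wilcoxon" if design == "paired" else "Mann-Whitney U"
```

For example, comparing the means of two unpaired, normally distributed groups points to the unpaired t test, while three non-normal groups point to the Kruskal-Wallis ANOVA.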



Calculating and Interpreting P

When the data are subjected to significance testing, the resulting value is called the test statistic. This can be t (t test), chi-square (chi-square test), F (ANOVA) etc., depending on the test used. The statistic is used to find the P value from tables (statistics software calculates the P value automatically). If the P value is less than the cut-off value (the level of significance, i.e. the alpha error), the difference between the groups is considered statistically significant. When P is less than 0.05, the probability of obtaining the observed difference between groups purely by chance (i.e. when there is no real difference) is less than 5%.

If P > 0.05, the difference is considered statistically non-significant, and it is concluded that there is no difference between the groups or that the difference was not detected.

A non-significant result can be due to two reasons:

  1. There is really no difference between the groups.
  2. The study is not powerful enough to detect the difference.

Hence, one should calculate the power to decide whether there is truly no difference or the power is inadequate. If the power is inadequate (less than 80%), the conclusion should be stated as “the study did not detect the difference” rather than “there is no difference between groups”.



CI of the difference



The confidence interval of the difference (between groups) can be calculated. The CI indicates the variation of the data: a wide interval implies larger variation. If the CI includes 0 (e.g. −2.3 to 5.8), the difference is unlikely to be statistically significant (P > 0.05). Many journals now insist on reporting CIs for important results.
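An approximate CI for the difference between two means can be sketched as follows (normal approximation with z = 1.96 for a 95% interval; for small samples a t critical value should replace z):

```python
import math
from statistics import mean, stdev

def ci_of_difference(group1, group2, z=1.960):
    """Approximate 95% CI for the difference between two group means
    (normal approximation; z = 1.96)."""
    n1, n2 = len(group1), len(group2)
    diff = mean(group1) - mean(group2)
    # Standard error of the difference between the two means
    se = math.sqrt(stdev(group1) ** 2 / n1 + stdev(group2) ** 2 / n2)
    return diff - z * se, diff + z * se
```

If the returned interval straddles 0, the difference is unlikely to be significant; an interval entirely below or above 0 corresponds to a significant difference.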



Degrees of Freedom (df)



It denotes the number of values that a researcher has the freedom to choose. It is based on the idea that one cannot freely choose all the values: once the others are chosen, the last one is fixed.

The concept can be explained by an analogy :

X + Y = 10 ……….(1)

In the above equation you have freedom to choose a value for X or Y but not both because when you choose one, the other is fixed. If you choose 8 for X, then Y has to be 2. So the degree of freedom here is 1.

X + Y + Z = 15 ……..(2)

In equation (2), one can choose values for two variables but not all three. You have the freedom to choose 8 for X and 2 for Y; if so, Z is fixed (at 5). So the df is 2.

df is generally calculated by subtracting 1 from the size of each group. For example, the df for Student’s t test is n − 1 (paired design) or N1 + N2 − 2 (unpaired design) [N1 – size of group 1, N2 – size of group 2]. The method of calculating df varies with the test used.
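The two t test cases above amount to a one-line calculation each (illustrative function; other tests compute df differently):

```python
def degrees_of_freedom(n1, n2=None, design="unpaired"):
    """df for Student's t test: n - 1 for a paired design,
    N1 + N2 - 2 for an unpaired design."""
    if design == "paired":
        return n1 - 1
    return n1 + n2 - 2

# 10 pairs -> df = 9; two unpaired groups of 10 and 12 -> df = 20
```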



