
Ahmed Hassouna, MD. Professor of cardiovascular surgery, Ain-Shams University, EGYPT. Diploma of medical statistics and clinical trials, Paris 6 University, Paris.

1A- Choose the best answer
The duration of CCU stay after acute MI: 48 ± 12 hours.
A) What is the expected probability for a patient to stay for less than 24 hours?
1) about 2.5% 2) about 5% 3) about 95%
B) What is the expected probability for a patient to stay for more than 72 hours?
1) the same as the probability to stay for less than 24 hours. 2) triple the probability to stay for less than 24 hours. 3) We cannot tell.
C) What is the probability for a patient to stay for either less than 24 hours or more than 72 hours?
1) about 2.5% 2) about 5% 3) about 95%

2A- Choose the WRONG answer
A randomized controlled unilateral study was conducted to compare the analgesic effect of drug (X) to placebo. The analgesic gave significantly longer pain relief (12 ± 2 hours) than placebo (2 ± 1 hours); P = 0.05 (Student's test, one-tail).
1) A unilateral study means that the researchers were only concerned with showing the superiority of the analgesic over placebo, not the reverse.
2) One-tail statistics imply that a smaller difference between the compared analgesic effects is needed to declare statistical significance, compared to a bilateral design.
3) The statistical significance of the difference achieved will not change if the design were bilateral.

3A- Choose the best answer
A) The primary risk of error:
1) It is the risk of concluding upon a difference in the study that does not exist in reality.
2) It is the risk of not concluding upon a difference in the study despite that difference existing in reality.
3) Both definitions are wrong.
B) The secondary risk of error:
1) It is the risk of concluding upon a difference in the study that does not really exist.
2) It is the risk of not concluding upon a difference in the study despite that difference existing in reality.
3) Both definitions are wrong.
C) The power of the study:
1) It is the ability of the study to accurately conclude upon a statistically significant difference.
2) It is the ability of the study not to miss a statistically significant difference.
3) Both definitions are wrong.

4A- Choose the best answer
A randomized controlled unilateral study was conducted to compare the analgesic effect of drug (X) to placebo. The analgesic gave significantly longer pain relief (12 ± 2 hours) than placebo (2 ± 1 hours); P = 0.05 (one-tail). This P value means that:
1) There is a 95% chance that this result is true.
2) There is a 5% chance that this result is false.
3) The probability that this result is due to chance is once every 20 times this study is repeated.
4) The probability that this longer duration of pain relief is not a true difference in favor of the analgesic, but rather a variation of that obtained with placebo, is once every 20 times this study is repeated.

5A- Choose the best answer
Although the previous study was an RCT, the researchers wanted to compare 40 pre-trial demographic variables among the study groups. How many of those pre-trial variables would you expect to be significantly different between patients receiving the analgesic and those receiving placebo?
a) None, as randomization ensures perfect initial comparability.
b) It can happen to have 1 significantly different variable by pure chance.
c) It would be quite expected to have 2 significantly different variables.
d) We cannot expect any given number.

6A- Choose the best answer
Another group of researchers repeated the same study and found a statistically more significant difference in favor of the analgesic: P value < 0.001. In view of the smaller P value, and provided that both studies were appropriately designed, conducted and analyzed, choose the BEST answer:
a) The results of the second study have to be given more consideration than the first, for being truer.
b) The results of the second study have to be given more consideration than the first, for being more accurate.
c) The results of the second study have to be given more consideration than the first, for being more credible.
d) Both studies deserve equal consideration, both being statistically significant.

The relative Z values (scores)

One of the empirically verified truths about life: the Normal distribution is a finding and not an invention. It is the name given to a characteristic distribution followed by the majority of biological variables, and not a quality imposed on such a distribution.

Birth weight classes (gm.)      Birth weight frequency           Total weight
Center    Range*                Absolute (number)  Relative (%)  (gm.)
2100      2000-2200             2                  2.1           4200
2300      2200-2400             4                  4.2           9200
2500      2400-2600             6                  6.3           15000
2700      2600-2800             4                  4.2           10800
2900      2800-3000             10                 10.5          29000
3100      3000-3200             18                 18.9          55800
3300      3200-3400             21                 22.1          69300
3500      3400-3600             17                 17.9          59500
3700      3600-3800             5                  5.3           18500
3900      3800-4000             4                  4.2           15600
4100      4000-4200             3                  3.2           12300
4300      4200-4400             0                  0             0
4500      4400-4600             1                  1.1           4500
Total                           95                 100           303700

[Figure: Normal curve with the mean (m) and distances of 1 SD, 2 SD and 3 SD marked on either side.]

The mean birth weight m = 3200 gm. and the SD = 450 gm. Let us check the Normality of the distribution: about 2/3 of birth weights are included in the interval m ± 1 SD = 2750-3650 gm. (66 births; 69.5%); about 95% of birth weights are included in the interval m ± 2 SD = 2300-4100 gm. (92 births; 96.8%); and nearly all birth weights are comprised within a distance of ± 3 SD from the mean.

A- Beginning with the observation
No 2 samples are alike. The more a sample increases in size (n), the more it resembles the population from which it was drawn, and the more the distribution of the sample itself acquires the characteristic bell shape of the Normal distribution. However, it is not only a question of size: other factors matter, such as the measurement units and scale. Hence, in order to compare Normal distributions, we need a reference that is no longer under the influence of both measurement units and scale.

B- Reaching a suggestion
Statisticians have suggested a Standard Normal distribution with a mean of 0 and a SD of 1, which means that the SD becomes the unit of measurement: moving 1 unit on this scale (from 0 to 1) also means moving 1 SD further away from the mean, and so on. These units needed a name and were called Z units (scores, values). Statisticians then calculated the probabilities for observations to lie at all possible Z units and put them in the Z table. The rough (size-, unit- and scale-dependent) estimation of probabilities, which differs from one Normal distribution to another, was thus replaced by exact (standard) figures. For example, 68.26%, 95% and 99% of observations lie WITHIN A DISTANCE of 1, 1.96 and 2.58 SD, respectively, on either side of the mean.

[Figure: Standard Normal curve with 47.5% of observations on each side between the mean and ±1.96 SD, and 2.5% in each tail beyond.]

The probability for a value to lie AT (OR FURTHER AWAY THAN) +1.96 SD is obtained by simple deduction: 100% - 95% = 5%; 2.5% on each side.
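These standard coverages can be checked numerically. A minimal sketch using Python's statistics.NormalDist from the standard library; the helper name within() is ours, not a library call:

```python
# Verify the Z-table coverages quoted above against the standard
# Normal distribution (mean 0, SD 1).
from statistics import NormalDist

z = NormalDist()  # standard Normal: mu=0, sigma=1

def within(k):
    """Probability of lying within k SD on either side of the mean."""
    return z.cdf(k) - z.cdf(-k)

print(f"within 1.00 SD: {within(1.00):.4f}")            # ~0.6827
print(f"within 1.96 SD: {within(1.96):.4f}")            # ~0.9500
print(f"within 2.58 SD: {within(2.58):.4f}")            # ~0.9901
print(f"at or beyond +1.96 SD: {1 - z.cdf(1.96):.4f}")  # ~0.0250
```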

C- Ending with the application: standardizing values (the wire technique)
Any OBSERVED Normal distribution is EXPECTED to follow the Standard Normal distribution, and the more it deviates from those expectations, the more it will be considered different. The question we are here to answer concerns the extent, and consequently the statistical significance, of such a difference or deviation. In fact, the unknown probabilities of our observed (x) values can now be calculated once the latter are transformed into standardized Z values, whose tabulated probabilities are already known:

Z = (x - m) / SD

Returning to our example: what is the expected probability of having a child whose birth weight is as large as 4100 gm. or more? We begin by standardizing the child's weight: Z = (4100 - 3200) / 450 = +2, practically at the +1.96 limit. We then check the table for the probability of a Z score of +1.96 (2.5%), which, by symmetry, is equal to the probability of having such a low-birth-weight child of 2300 gm. or less.

[Figure: Normal curve with 47.5% on each side between the mean and ±1.96 SD, 2.5% below 2300 gm. and 2.5% above 4100 gm.]

How to consult the Z table? The probability of having a child whose birth weight lies in the interval formed by the mean ± 1.96 SD, i.e. between 2300 and 4100 gm., is 95%.
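The standardization step above can be sketched in a few lines; this uses the sample figures from the text (m = 3200 gm., SD = 450 gm.), and the helper name z_score is ours:

```python
# Standardize observed birth weights with Z = (x - m) / SD, then look up
# the tail probability instead of consulting a printed Z table.
from statistics import NormalDist

m, sd = 3200, 450        # mean and SD of the observed birth weights
std = NormalDist()       # standard Normal for the tabulated probabilities

def z_score(x):
    return (x - m) / sd

for x in (2300, 3650, 4100):
    zx = z_score(x)
    # probability of a value at least this far from the mean, on that side
    p_beyond = 1 - std.cdf(abs(zx))
    print(f"x = {x} gm. -> Z = {zx:+.2f}, P(beyond) = {p_beyond:.4f}")
```

Note that Z = 2 gives P(beyond) ≈ 2.3%, which the lecture rounds to the 2.5% of the 1.96 limit.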

The (Z) table gives the probability for a value to lie in the interval between 0 (the mean) and Z.

Z    0.00   0.01   0.02   0.03   0.04   0.05   0.06   0.07   0.08   0.09
0.0  0.0000 0.0040 0.0080 0.0120 0.0160 0.0199 0.0239 0.0279 0.0319 0.0359
0.1  0.0398 0.0438 0.0478 0.0517 0.0557 0.0596 0.0636 0.0675 0.0714 0.0753
0.2  0.0793 0.0832 0.0871 0.0910 0.0948 0.0987 0.1026 0.1064 0.1103 0.1141
0.3  0.1179 0.1217 0.1255 0.1293 0.1331 0.1368 0.1406 0.1443 0.1480 0.1517
0.4  0.1554 0.1591 0.1628 0.1664 0.1700 0.1736 0.1772 0.1808 0.1844 0.1879
0.5  0.1915 0.1950 0.1985 0.2019 0.2054 0.2088 0.2123 0.2157 0.2190 0.2224
0.6  0.2257 0.2291 0.2324 0.2357 0.2389 0.2422 0.2454 0.2486 0.2517 0.2549
0.7  0.2580 0.2611 0.2642 0.2673 0.2704 0.2734 0.2764 0.2794 0.2823 0.2852
0.8  0.2881 0.2910 0.2939 0.2967 0.2995 0.3023 0.3051 0.3078 0.3106 0.3133
0.9  0.3159 0.3186 0.3212 0.3238 0.3264 0.3289 0.3315 0.3340 0.3365 0.3389
1.0  0.3413 0.3438 0.3461 0.3485 0.3508 0.3531 0.3554 0.3577 0.3599 0.3621
1.1  0.3643 0.3665 0.3686 0.3708 0.3729 0.3749 0.3770 0.3790 0.3810 0.3830
1.2  0.3849 0.3869 0.3888 0.3907 0.3925 0.3944 0.3962 0.3980 0.3997 0.4015
1.3  0.4032 0.4049 0.4066 0.4082 0.4099 0.4115 0.4131 0.4147 0.4162 0.4177
1.4  0.4192 0.4207 0.4222 0.4236 0.4251 0.4265 0.4279 0.4292 0.4306 0.4319
1.5  0.4332 0.4345 0.4357 0.4370 0.4382 0.4394 0.4406 0.4418 0.4429 0.4441
1.6  0.4452 0.4463 0.4474 0.4484 0.4495 0.4505 0.4515 0.4525 0.4535 0.4545
1.7  0.4554 0.4564 0.4573 0.4582 0.4591 0.4599 0.4608 0.4616 0.4625 0.4633
1.8  0.4641 0.4649 0.4656 0.4664 0.4671 0.4678 0.4686 0.4693 0.4699 0.4706
1.9  0.4713 0.4719 0.4726 0.4732 0.4738 0.4744 0.4750 0.4756 0.4761 0.4767
2.0  0.4772 0.4778 0.4783 0.4788 0.4793 0.4798 0.4803 0.4808 0.4812 0.4817
2.1  0.4821 0.4826 0.4830 0.4834 0.4838 0.4842 0.4846 0.4850 0.4854 0.4857
2.2  0.4861 0.4864 0.4868 0.4871 0.4875 0.4878 0.4881 0.4884 0.4887 0.4890
2.3  0.4893 0.4896 0.4898 0.4901 0.4904 0.4906 0.4909 0.4911 0.4913 0.4916
2.4  0.4918 0.4920 0.4922 0.4925 0.4927 0.4929 0.4931 0.4932 0.4934 0.4936
2.5  0.4938 0.4940 0.4941 0.4943 0.4945 0.4946 0.4948 0.4949 0.4951 0.4952
2.6  0.4953 0.4955 0.4956 0.4957 0.4959 0.4960 0.4961 0.4962 0.4963 0.4964
2.7  0.4965 0.4966 0.4967 0.4968 0.4969 0.4970 0.4971 0.4972 0.4973 0.4974
2.8  0.4974 0.4975 0.4976 0.4977 0.4977 0.4978 0.4979 0.4979 0.4980 0.4981
2.9  0.4981 0.4982 0.4982 0.4983 0.4984 0.4984 0.4985 0.4985 0.4986 0.4986
3.0  0.4987 0.4987 0.4987 0.4988 0.4988 0.4989 0.4989 0.4989 0.4990 0.4990

The Z scores are directly proportional to the observed deviation
The larger (or smaller) a value is compared to the mean, the more distinct its position on the standard scale, i.e. the larger the Z value:
Z = (x - m) / SD = (3650 - 3200) / 450 = 1; Z = (4100 - 3200) / 450 = 2 (practically the 1.96 limit).
Put another way, the larger the Z value (+/-), the smaller the chance that the value belongs to this particular distribution.
Q1: What is the probability of having a child who is as heavy as 5 kg? Z = (5000 - 3200) / 450 = 4.
Q2: If this probability is minimal (not even listed in the table), what can you suggest? Maybe this child does not belong to the same population from which we have drawn our sample; is his mother diabetic? In other words, we can now suggest a qualitative decision based on such an extreme deviation.
[Figure: Normal curve marking 3200 gm. (mean), 3650 gm., 4100 gm. and 5000 gm.]

The duration of CCU stay after acute MI: 48 ± 12 hours. What is the expected probability for a patient to stay for less than 24 hours, for more than 72 hours, or for either? Z = (24 - 48) / 12 = -2; Z = (72 - 48) / 12 = +2. Depending on the question posed:
A) The probability of having either a larger (+2) or a smaller (-2) Z value is calculated by adding 50% to the probability given by the table (47.5%) and subtracting the whole from 1: 1 - (47.5% + 50%) = 2.5%.
B) The probability of lying in either tail (i.e. staying for >72 hours or for <24 hours) is calculated by multiplying the probability given in the table by 2 and subtracting the whole from 1: 1 - (47.5% x 2) = 5%.
[Figure: Normal curve marking 24, 36, 48, 60 and 72 hours, with 47.5% between the mean and each ±2 SD limit and 2.5% in each tail.]
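The CCU worked example above can be checked numerically; a short sketch modelling the stay as Normal(48, 12), with variable names of our choosing:

```python
# Tail probabilities for the CCU stay example: stay ~ Normal(48, 12) hours.
from statistics import NormalDist

stay = NormalDist(mu=48, sigma=12)

p_under_24 = stay.cdf(24)          # lower tail, Z = -2
p_over_72 = 1 - stay.cdf(72)       # upper tail, Z = +2
p_either = p_under_24 + p_over_72  # both tails together

print(f"P(<24h)   = {p_under_24:.4f}")  # ~2.3%, rounded to 2.5% in the text
print(f"P(>72h)   = {p_over_72:.4f}")   # same by symmetry
print(f"P(either) = {p_either:.4f}")    # ~4.6%, rounded to 5%
```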

1B- Choose the best answer
The duration of CCU stay after acute MI: 48 ± 12 hours.
A) What is the expected probability for a patient to stay for less than 24 hours? 1) about 2.5% 2) about 5% 3) about 95%
Z = (x - m) / SD = (24 - 48) / 12 = -2; probability = 1 - (47.5% + 50%) = nearly 2.5%.
B) What is the expected probability for a patient to stay for more than 72 hours? 1) the same as the probability to stay for less than 24 hours 2) triple the probability to stay for less than 24 hours 3) We cannot tell
Z = (x - m) / SD = (72 - 48) / 12 = +2; probability = 1 - (47.5% + 50%) = nearly 2.5%.
C) What is the probability for a patient to stay for either less than 24 hours or more than 72 hours? 1) about 2.5% 2) about 5% 3) about 95%
Summing both previous probabilities: 1 - (47.5% x 2) = 5%.

The Normal law: conditions of application
The Normal law is followed by the majority of biological variables, and Normality can easily be checked by various methods, from simple graphs to special tests. As a general rule, a quantitative variable can be expected to follow the Normal law whenever the number of values per group is >30. For a binomial (p, q) qualitative variable with a total number of values N, Normality can be assumed whenever Np and Nq are both >5. The presence of Normality allows the application of many statistical tests for the analysis of data; these are called parametric tests because they require certain conditions, including Normality, to be fulfilled before being used. Non-parametric (distribution-free) tests are equally effective for data analysis and hence, one should not distort data to achieve Normality.

The null hypothesis

The statistical problem
A sample must be representative of the target population. One of the criticisms of RCTs is that they are too ordered to be a good reflection of disordered reality. Even if the requirements of representativeness are thought to be fulfilled by randomization, a question will always remain: how likely is it that our sample really represents the target population? For example, when a comparative study shows that treatment A is 80% effective compared to treatment B, which is only 50% effective, a legitimate question is whether the observed difference is really due to the treatment effect, and not because patients who received treatment A were, for example, less ill than those receiving treatment B. In other words, were both groups of patients comparable from the start, by being selected from the same population, or were they drawn from different populations with different degrees of illness?

Postulating the null hypothesis
In order to answer this question, statisticians postulate a theoretical hypothesis to start with: the null hypothesis. We start any study with the null hypothesis, postulating that there is no difference between the compared treatments. Then we conduct our study and analyze the results, which can either retain this hypothesis or disprove it by showing that the treatments are truly different. At that point, we can reject the null hypothesis and accept the alternative hypothesis, which states that there is a true difference between the treatments and has just been proved. The two hypotheses, the first suggested to begin with and the second that may be proved by the end of the study, are the two faces of one coin and hence cannot co-exist.

When to reject the null hypothesis?
Returning to our example of the 95 newborn babies: under the null hypothesis, all children have comparable weights, and the recorded differences are just variations of comparable weights belonging to the same population. Differences are expressed in Z scores, and the higher the Z score, the less probable it is that the weight can be considered just a variation of this particular distribution. The probability of having such an extreme variation as a 5 kg child (Z = 4) is minimal (<0.0001) and hence raises questions about the null hypothesis of membership in the same population. In general, if the observed difference is sufficiently large, and hence less probable to be considered part of the variation, we can consider rejecting the null hypothesis, accepting the alternative hypothesis, and concluding upon the existence of a true difference.
[Figure: Normal curve marking 3200 gm. (mean), 3650 gm. (~15% beyond), 4100 gm. (~2.5% beyond) and 5000 gm. (P < 0.0001).]

When to maintain the null hypothesis?
On the other hand, if the difference is small, we continue to maintain our theoretical null hypothesis. However, in such a case, we cannot conclude that the observed difference does not exist, because the null hypothesis itself is only a hypothetical suggestion. In fact, the aim of the study was to find sufficient evidence supporting the alternative hypothesis. In the absence of sufficient evidence, we maintain the theoretical null hypothesis, which has been neither rejected nor proved, but only retained pending further studies. The usual closing remark, and not a conclusion, is that we could not put into evidence the targeted difference, and further studies may be needed to re-evaluate the evidence supporting this difference (i.e. supporting the alternative hypothesis).
In summary, under the null hypothesis: a large difference leads us to reject the null hypothesis, accept the alternative hypothesis and conclude upon a difference; a small difference leads us to maintain the null hypothesis without concluding.

We have to define a critical limit for rejection
We can reject the null hypothesis when the analysis shows a sufficiently large difference that has a SMALL PROBABILITY of being just a variation within the same population. Consequently, it can be considered a true difference, coming from a different population. This literal description merits a numerical expression: most researchers have agreed that the null hypothesis can be rejected whenever the probability of the difference being a mere variation is as small as 5%. This probability is called the primary risk of error. It means that although we know there is a small 5% probability that this difference is just an extreme variation within the population, we nevertheless declare it as coming from a different population. In other words, our conclusion carries a small risk of being wrong, the difference still being a variation within the first population, even if an extreme one.

Primary risk of error (α)
The majority of birth weights (95%) are expected to lie between 2300 and 4100 gm., the range within which we maintain the null hypothesis; by deduction, only 5% of babies are expected to lie outside it. The probability of having a baby weighing >4100 gm. (or <2300 gm.) is as small as 5%, and hence this baby can be considered as born from another population, e.g. from a diabetic mother. This conclusion still carries the small 5% risk of being wrong, i.e. that the weight of this baby is just an extreme variation among babies of non-diabetic mothers. This small, but still present, risk of being wrong (the risk of rejecting the null hypothesis whereas the null hypothesis is true) is the primary risk of error.

Distribution of the primary risk of error: the unilateral versus the bilateral design
A) Whenever we are comparing a treatment to placebo, our only concern is to prove that the treatment is better than placebo, never the reverse.
Null hypothesis (H0): no difference, or placebo is better.
Alternative hypothesis (H1): the treatment is better than placebo.
The primary risk of error of the study (5%) is involved in a single conclusion: that the treatment is better than placebo, while this is untrue.
B) On the other hand, a bilateral design involves testing the superiority of either treatment, A or B.
H0: no difference between treatments A and B.
H1: involves 2 situations: 1) treatment A is better than treatment B; 2) treatment B is better than treatment A.
In order to keep a primary risk of error of 5% for the whole study, (α), the risk of concluding upon a difference that does not exist, is equally split between the 2 possibilities: treatment A is better while this is untrue (2.5%), and treatment B is better while this is untrue (2.5%).

An example (even if it is not the perfect one!)
The null hypothesis is rejected whenever the difference (d) is large enough that the probability of it being a normal variation is as small as 5%. Returning to the 95 newborn babies, suppose that we want to know whether a newly arriving baby belongs to a diabetic mother, and hence we are only interested in proving that he is significantly larger than the rest of the group. This is a unilateral design: H0 = no difference in weights, or the baby's weight is significantly smaller; H1 = the baby is significantly larger than the others, and the whole of α is dedicated to this single investigated possibility. On the other hand, if the design were bilateral, we would be interested in knowing whether the weight of the baby is significantly different (whether larger or smaller) from the others; this is the alternative hypothesis, and α is no longer dedicated to 1 possibility but is equally split (50:50) between the 2 possibilities, each being α/2. The null hypothesis is that the baby's weight is comparable to the rest of the group.

The null hypothesis will be rejected whenever the calculated Z score enters the critical area of our primary risk of error. In a unilateral study, we are only concerned with a difference in favor of 1 treatment, and hence the whole 5% of α lies on 1 side, or one tail, of the curve. In a bilateral design, the risk of error is equally split into 2 smaller risks of 2.5% each. In consequence, the limit of the larger (5%) critical area of rejection of the unilateral study is nearer to the mean than either of the 2 smaller (2.5%) areas of the bilateral design, and a smaller Z score (difference) is needed to enter the critical area, reject the null hypothesis and declare statistical significance in a unilateral study, compared to a bilateral design.

In a unilateral design: the null hypothesis will be rejected whenever the calculated Z score enters the critical area of α
In a unilateral study, we are only concerned with whether the child is significantly larger than the rest of the group, and hence the whole 5% of α lies on 1 side (one tail) of the curve. The child's weight is considered significantly larger if its corresponding Z score reaches the limit of α. Consulting the Z table, the Z value of the point α = 5% is 1.65, and by deduction (from Z = (x - m) / SD, x = Z x SD + m = 1.65 x 450 + 3200 ≈ 3950): a child weighing only 3950 gm. would be considered significantly larger than the rest of the population, with a primary risk of error of 5%.
In a unilateral design, the critical limit to reject the null hypothesis is Z > 1.65.
[Figure: Normal curve with the whole 5% rejection area in one tail, beyond 3950 gm. (Z = 1.65).]

In a bilateral design: the null hypothesis will be rejected whenever the calculated Z score enters the critical area of α/2
In a bilateral study, we are equally concerned with whether the child is significantly larger or smaller than the rest of the group, and hence the 5% of α is equally split between both tails of the curve (50:50). By comparison, a child's weight is considered significantly larger if its corresponding Z score reaches the limit of α/2, which by definition lies further away from the mean than the whole α of a unilateral design. In consequence, a larger Z (difference) is needed to reach this more distal critical limit: the Z table shows a larger Z (1.96) for the smaller α/2, of course. A child therefore has to be as large as 4100 gm. to be declared significantly different from the population, compared to only 3950 gm. if the design were unilateral.
In a bilateral design, the critical limit to reject the null hypothesis is Z > 1.96.
[Figure: Normal curve with 2.5% rejection areas in each tail, beyond 2300 and 4100 gm. (Z = ±1.96), with the unilateral 3950 gm. limit shown for comparison.]
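The two critical limits can be derived rather than read from the table. A minimal sketch using the inverse cumulative function of Python's statistics.NormalDist; the variable names are ours:

```python
# Critical Z limits and the corresponding birth-weight cut-offs for the
# unilateral (whole alpha in one tail) vs bilateral (alpha/2 per tail) designs.
from statistics import NormalDist

m, sd, alpha = 3200, 450, 0.05
std = NormalDist()

z_one_tail = std.inv_cdf(1 - alpha)      # ~1.645 (the text rounds to 1.65)
z_two_tail = std.inv_cdf(1 - alpha / 2)  # ~1.960

# Invert Z = (x - m) / SD to get the weight cut-off: x = Z x SD + m
cutoff_one = z_one_tail * sd + m   # ~3940 gm. (rounded to 3950 in the text)
cutoff_two = z_two_tail * sd + m   # ~4082 gm. (rounded to 4100 in the text)

print(f"unilateral: Z > {z_one_tail:.3f}, weight > {cutoff_one:.0f} gm.")
print(f"bilateral : Z > {z_two_tail:.3f}, weight > {cutoff_two:.0f} gm.")
```

Note how the bilateral cut-off is necessarily further from the mean than the unilateral one, exactly as argued above.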

2B- Choose the WRONG answer
A randomized controlled unilateral study was conducted to compare the analgesic effect of drug (X) to placebo. The analgesic gave significantly longer pain relief (12 ± 2 hours) than placebo (2 ± 1 hours); P = 0.05 (Student's test, one-tail).
1) A unilateral study means that the researchers were only concerned with showing the superiority of the analgesic over placebo, not the reverse.
2) One-tail statistics imply that a smaller difference between the compared analgesic effects is needed to declare statistical significance, compared to a bilateral design.
3) The statistical significance of the difference achieved will not change if the design were bilateral.

Testing hypotheses: the comparison of 2 means
A standard feeding additive (A) is known to increase the weight of low-birth-weight babies by a mean value of 170 g with a SD of 65 g. A new feeding additive (B) is given to a sample of 32 low-birth-weight babies, and the mean weight gain observed is 203 g with a SD of 67.4 g. The question now is whether additive (B) has provided significantly more weight gain to those babies, compared to the standard additive (A).
The null hypothesis H0: the mean weight gain obtained with the new additive (B) is just a normal variation of the weight gain obtained with additive (A).
The alternative hypothesis H1: the difference between the mean weight gain obtained with (A) and that obtained with (B) is sufficiently large to reject the null hypothesis, at the primary risk of error of 5%.

Testing hypothesis: the equation
z = (x̄ − μ) / (σ/√n) = (203 − 170) / (65/√32) ≈ 2.87
The sample mean (203 g) yields a z of 2.87, beyond the unilateral critical value of 1.645, so it falls outside the "maintain H0" region: the null hypothesis is rejected.
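A minimal sketch of this calculation, assuming the one-sample z-test with known σ used on the slide:

```python
from math import sqrt
from statistics import NormalDist

mu, sigma = 170, 65  # mean weight gain and SD with the standard additive A (g)
xbar, n = 203, 32    # observed mean weight gain and sample size with additive B

z = (xbar - mu) / (sigma / sqrt(n))  # test statistic, ~2.87
z_crit = NormalDist().inv_cdf(0.95)  # one-tailed critical value, ~1.645

print(round(z, 2), z > z_crit)  # 2.87 True -> reject H0
```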

The secondary risk of error (β)
Suppose we repeat the study and obtain the same weight gain difference, but with only 5 newborns. With such a small sample, we have to expect a larger SEM and hence a smaller z value: z = (203 − 170) / (65/√5) ≈ 1.14. Being below the critical value of even a unilateral design (1.645), this second researcher is obliged to retain the null hypothesis, despite the fact that a true difference was shown by the first researcher. This example demonstrates the secondary risk of error: the risk of not concluding upon a difference in the study even though such a difference exists (or can exist) in reality. The secondary risk of error (also called the risk of the second kind, β, or type II error) is usually behind the so-called negative trials. Most importantly, and unlike the first researcher, our second researcher cannot conclude, and his usual statement will be: "We could not put into evidence a significant difference between A and B; that is probably due to a lack of power."
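The effect of sample size on both z and power can be sketched as follows. The power figure is an assumption-laden illustration: it takes the true mean under H1 to be the observed 203 g and σ to be known (65 g), neither of which the slide states explicitly.

```python
from math import sqrt
from statistics import NormalDist

mu, sigma, xbar = 170, 65, 203       # standard additive mean/SD, observed mean (g)
z_crit = NormalDist().inv_cdf(0.95)  # one-tailed alpha = 5%, ~1.645

def z_and_power(n):
    # z for the observed difference at sample size n, and the power of the
    # study assuming the true mean under H1 really is 203 g
    z = (xbar - mu) / (sigma / sqrt(n))
    power = 1 - NormalDist().cdf(z_crit - z)  # P(reject H0 | H1 true)
    return round(z, 2), round(power, 2)

print(z_and_power(32))  # z ~2.87 (significant), power ~0.89
print(z_and_power(5))   # z ~1.14 (< 1.645, H0 retained), power only ~0.31
```

With n = 5 the study has roughly a 31% chance of detecting the difference, so a "negative" result is the most likely outcome even when the difference is real.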

3B- Choose the best answer
A) The primary risk of error:
1) It is the risk of concluding upon a difference in the study that does not exist in reality.
2) It is the risk of not concluding upon a difference in the study even though this difference does exist in reality.
3) Both definitions are wrong.
B) The secondary risk of error:
1) It is the risk of concluding upon a difference in the study that does not really exist.
2) It is the risk of not concluding upon a difference in the study even though this difference does exist in reality.
3) Both definitions are wrong.
C) The power of the study:
1) It is the ability of the study to accurately conclude upon a statistically significant difference.
2) It is the ability of the study not to miss a statistically significant difference.
3) Both definitions are wrong.

Statistical significance & degree of significance

P value
First, before conducting any research, we have to designate the acceptable limit of α, which is usually 5%. This is the limit that, if reached, allows us to consider that the tested treatment is not just a variation of the classic one but a truly superior treatment. Accordingly, in the example of the food additives, the new additive will be considered superior when the associated weight gain exceeds 193 g. Secondly, the researcher conducts his study and analyzes his results using the appropriate statistical test to calculate the probability that the new additive is just a variation of the classic additive; this calculated probability is the P value. If the P value is equal to or smaller than the designated α, we can reject the null hypothesis and accept the alternative hypothesis. On the other hand, if this calculated probability is larger than α, we maintain the null hypothesis and the test results are termed statistically insignificant.
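For the feeding-additive example, the one-tailed P value attached to z ≈ 2.87 can be sketched as follows (assuming, as before, a one-sample z-test with known σ):

```python
from math import sqrt
from statistics import NormalDist

mu, sigma, xbar, n = 170, 65, 203, 32  # additive A mean/SD; additive B mean, sample size

z = (xbar - mu) / (sigma / sqrt(n))  # ~2.87
p = 1 - NormalDist().cdf(z)          # one-tailed P value: P(Z >= z | H0 true)

print(round(p, 4))  # ~0.002, well below the designated alpha of 0.05
```

Since p ≈ 0.002 ≤ α = 0.05, the null hypothesis is rejected.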

Relation between α and P
In other words, we have two probabilities: one that we pre-designate before the experiment, and another that we calculate (using the appropriate statistical test) at the end of the experiment. The pre-designated probability indicates the limit for rejecting the null hypothesis, fixed before the experiment. The calculated probability indicates the position of our results in relation to this limit, after the experiment. The null hypothesis will only be rejected if the calculated probability is equal to or smaller than the pre-designated limit; otherwise, it will be maintained. The pre-designated probability is called the primary risk of error (α), and the calculated probability is the well-known P value.

What is the P value? Contrary to common belief, the P value is not the probability that the null hypothesis is untrue, because the P value is calculated on the assumption that the null hypothesis is true. It cannot, therefore, be a direct measure of the probability that the null hypothesis is false. A proper definition of P is: the probability of obtaining the observed or more extreme results under the null hypothesis (i.e., while the null hypothesis is still true). The value of P is an index of the reliability of our results: the smaller the P value, the higher the significance, i.e., the more we can believe that the observed relation between variables in the sample is a reliable indicator of the relation between the respective variables in the population.
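This definition can be made concrete with a small simulation: generate many studies in a world where H0 is true (additive B behaves exactly like A), and count how often a result "as or more extreme" than the observed 203 g appears. The sample-generation details (normal weight gains, n = 32) are illustrative assumptions consistent with the earlier example.

```python
import random
from math import sqrt
from statistics import mean

random.seed(0)
mu, sigma, n, observed = 170, 65, 32, 203  # H0 world: B is just A in disguise

# Simulate many studies under H0 and count "as or more extreme" sample means
trials = 100_000
hits = sum(
    mean(random.gauss(mu, sigma) for _ in range(n)) >= observed
    for _ in range(trials)
)
print(hits / trials)  # ~0.002: the empirical one-tailed P value
```

The simulated fraction converges on the same ~0.002 obtained analytically, illustrating that P is computed *while assuming H0 is true*.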