|
|
| 1 |
| 00:00:11,500 |
| Last time, we talked about chi-square tests. And |
|
|
| 2 |
| 00:00:17,380 |
| we mentioned that there are two objectives in this |
|
|
| 3 |
| 00:00:21,580 |
| chapter. The first one is when to use chi-square |
|
|
| 4 |
| 00:00:25,220 |
| tests for contingency tables. And the other |
|
|
| 5 |
| 00:00:28,630 |
| objective is how to use chi-square tests for |
|
|
| 6 |
| 00:00:31,070 |
| contingency tables. And we did one chi-square test |
|
|
| 7 |
| 00:00:35,410 |
| for the difference between two proportions. In the |
|
|
| 8 |
| 00:00:42,050 |
| null hypothesis, the two proportions are equal. I |
|
|
| 9 |
| 00:00:44,630 |
| mean, proportion for population 1 equals |
|
|
| 10 |
| 00:00:47,970 |
| population proportion 2 against the alternative |
|
|
| 11 |
| 00:00:52,970 |
| here it is a two-sided test: pi 1 does not equal pi 2.
|
|
| 12 |
| 00:00:59,310 |
| In this case, we can use either statistic. So
|
|
| 13
| 00:01:04,210
| you may use the
|
|
| 14
| 00:01:07,680
| Z statistic, which is p1 minus p2, minus pi1 minus
|
|
| 15
| 00:01:15,520
| pi2, divided by p
|
|
| 16
| 00:01:21,840
| -bar times 1 minus p-bar, multiplied by 1 over n1
|
|
| 17
| 00:01:27,500
| plus 1 over n2. This quantity is under the square
|
|
| 18
| 00:01:31,200
| root, where p-bar
|
|
| 19 |
| 00:01:42,180 |
| is the overall proportion, where p-bar equals X1 plus X2
|
|
| 20 |
| 00:01:48,580 |
| divided by N1 plus N2. Or, |
|
|
| 21 |
| 00:01:58,700 |
| in this chapter, we are going to use chi-square |
|
|
| 22 |
| 00:02:00,720 |
| statistic, which is given by this equation. Chi |
|
|
| 23 |
| 00:02:04,520 |
| -square statistic is just sum of observed |
|
|
| 24 |
| 00:02:09,620 |
| frequency, FO,
|
|
| 25 |
| 00:02:15,530 |
| minus expected frequency squared divided by |
|
|
| 26 |
| 00:02:20,070 |
| expected frequency for all cells. |
|
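The Z statistic quoted a moment ago can be sketched in Python; this is a minimal illustration with my own function and variable names, not code from the course:

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """Z statistic for H0: pi1 = pi2, using the pooled proportion p-bar."""
    p1, p2 = x1 / n1, x2 / n2
    p_bar = (x1 + x2) / (n1 + n2)            # pooled proportion
    se = sqrt(p_bar * (1 - p_bar) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se                    # pi1 - pi2 = 0 under H0

# Hand-preference numbers used later in the lecture:
# 12 of 120 females and 24 of 180 males are left-handed.
z = two_prop_z(12, 120, 24, 180)
print(round(z, 4))                           # -0.8704; note z**2 is about 0.7576
```

Squaring this Z value gives the chi-square value of the same test, which is the connection the lecture makes for 2 by 2 tables.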
|
| 27 |
| 00:02:25,210 |
| Chi squared, this statistic is given by this |
|
|
| 28 |
| 00:02:29,070 |
| equation. If there are two rows and two
|
|
| 29
| 00:02:34,190
| columns, I mean the table has two rows and two
|
|
| 30
| 00:02:36,290
| columns. So in this case, my table is two by two.
|
|
| 31 |
| 00:02:42,120 |
| In this case, you have only one degree of freedom. |
|
|
| 32 |
| 00:02:44,640 |
| Always degrees of freedom equals number of rows |
|
|
| 33 |
| 00:02:50,440 |
| minus one multiplied by number of columns minus |
|
|
| 34 |
| 00:03:00,320 |
| one. So for two by two tables, there are two rows |
|
|
| 35 |
| 00:03:06,140 |
| and two columns, so 2 minus 1 times 2 minus
|
|
| 36 |
| 00:03:11,560 |
| 1, so your degrees of freedom in this case is 1. |
|
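The chi-square statistic and the degrees-of-freedom rule just described can be written as a short sketch (helper names are my own):

```python
def chi_square_stat(observed, expected):
    """Sum over all cells of (fo - fe)^2 / fe."""
    return sum((fo - fe) ** 2 / fe
               for obs_row, exp_row in zip(observed, expected)
               for fo, fe in zip(obs_row, exp_row))

def degrees_of_freedom(table):
    """(number of rows - 1) * (number of columns - 1)."""
    return (len(table) - 1) * (len(table[0]) - 1)

# For a 2x2 table the rule gives (2 - 1) * (2 - 1) = 1.
print(degrees_of_freedom([[12, 108], [24, 156]]))   # 1
```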
|
| 37 |
| 00:03:16,440 |
| Here the assumption is we assume that the expected |
|
|
| 38 |
| 00:03:19,320 |
| frequency is at least 5, in order to use Chi |
|
|
| 39 |
| 00:03:22,940 |
| -square statistic. Chi-square is always positive, |
|
|
| 40 |
| 00:03:27,680 |
| I mean, Chi-square value is always greater than 0. |
|
|
| 41 |
| 00:03:34,040 |
| It's a one-tailed test, to the right. We reject H0 if
|
|
| 42 |
| 00:03:38,890 |
| your chi-square statistic falls in the rejection |
|
|
| 43 |
| 00:03:42,430 |
| region. That means we reject the null hypothesis |
|
|
| 44 |
| 00:03:45,850 |
| if chi-square statistic greater than chi-square |
|
|
| 45 |
| 00:03:49,470 |
| alpha, which can be determined by using the chi-square
|
|
| 46
| 00:03:53,130
| table. So in this case we reject H0; otherwise,
|
|
| 47
| 00:03:56,890
| sorry, we don't reject H0. So again, if the value
|
|
| 48 |
| 00:04:02,050 |
| of chi-square statistic falls in this rejection |
|
|
| 49 |
| 00:04:05,350 |
| region, the yellow one, then we reject. Otherwise, |
|
|
| 50 |
| 00:04:11,100 |
| if this value, I mean if the value of the |
|
|
| 51 |
| 00:04:13,900 |
| statistic falls in non-rejection region, we don't |
|
|
| 52 |
| 00:04:17,060 |
| reject the null hypothesis. So the same concept as |
|
|
| 53 |
| 00:04:21,680 |
| we did in the previous chapters. If we go back to |
|
|
| 54 |
| 00:04:27,680 |
| the previous example we had discussed before, when |
|
|
| 55 |
| 00:04:32,060 |
| we are testing about gender and left and right |
|
|
| 56 |
| 00:04:36,620 |
| handers. So hand preference is either left or right.
|
|
| 57 |
| 00:04:42,960 |
| And the question is test to see whether hand |
|
|
| 58 |
| 00:04:49,320 |
| preference and gender are related or not. In this |
|
|
| 59 |
| 00:04:53,100 |
| case, your null hypothesis could be written as |
|
|
| 60 |
| 00:04:56,960 |
| either H0:
|
|
| 61 |
| 00:05:04,220 |
| So the proportion of left-handers for female |
|
|
| 62 |
| 00:05:07,160 |
| equals the proportion of left-handed males. So pi
|
|
| 63
| 00:05:12,260
| 1 equals pi 2. Or H0, as we'll see later, states that
|
|
| 64 |
| 00:05:16,600 |
| the two variables of interest are independent. |
|
|
| 65 |
| 00:05:32,810 |
| Now, your p-bar is
|
|
| 66 |
| 00:05:37,830 |
| given by X1 plus X2 divided by N1 plus N2. X1 is |
|
|
| 67 |
| 00:05:42,250 |
| 12, this 12, plus 24 divided by 300. That will |
|
|
| 68 |
| 00:05:51,930 |
| give 12%. So let me just write this notation: p
|
|
| 69
| 00:05:57,310
| -bar
|
|
| 70 |
| 00:06:05,560 |
| equals 36 by 300, so that's 12%. So the expected |
|
|
| 71 |
| 00:06:13,740 |
| frequency in this case for female, 0.12 times 120, |
|
|
| 72 |
| 00:06:19,680 |
| because there are 120 females in the data you |
|
|
| 73 |
| 00:06:22,140 |
| have, so that will give 14.4. So the expected |
|
|
| 74 |
| 00:06:25,520 |
| frequency is 0.12 times 180, 120, I'm sorry, |
|
|
| 75 |
| 00:06:34,810 |
| That will give 14.4. Similarly, for male to be |
|
|
| 76 |
| 00:06:39,590 |
| left-handed is 0.12 times the number of males in the
|
|
| 77 |
| 00:06:43,390 |
| sample, which is 180, and that will give 21.6. |
|
|
| 78 |
| 00:06:48,670 |
| Now, since you compute the expected for the first |
|
|
| 79 |
| 00:06:53,190 |
| cell, the second one direct is just the complement |
|
|
| 80 |
| 00:06:57,590 |
| 120. 120 is the sample size for the row, I mean
|
|
| 81 |
| 00:07:03,020 |
| female total 120 minus 14.4 will give 105.6. Or 0 |
|
|
| 82 |
| 00:07:12,200 |
| .88 times 120 will give the same value. Here, the |
|
|
| 83 |
| 00:07:18,050 |
| expected is 21.6, so the complement is the, I'm
|
|
| 84
| 00:07:21,730
| sorry, the expected is just the complement, which
|
|
| 85
| 00:07:25,130
| is 180 minus 21.6, which gives 158.4. Or 0.88, the
|
|
| 86
| 00:07:32,010
| complement of that one, multiplied by 180 will give
|
|
| 87 |
| 00:07:35,090 |
| the same value. So that's the one we had discussed |
|
|
| 88 |
| 00:07:39,070 |
| before. |
|
|
| 89 |
| 00:07:42,410 |
| Based on this result, you can determine the value of chi
|
|
| 90 |
| 00:07:46,550 |
| -square statistic by using this equation. Sum of F |
|
|
| 91 |
| 00:07:50,750 |
| observed minus F expected squared divided by F |
|
|
| 92 |
| 00:07:53,810 |
| expected for each cell. You have to compute the |
|
|
| 93 |
| 00:07:57,450 |
| value of chi-square for each cell. In this case, |
|
|
| 94 |
| 00:08:01,070 |
| the simplest case is just 2 by 2 table. So 12 |
|
|
| 95 |
| 00:08:04,250 |
| minus 14.4 squared divided by 14.4. Plus the |
|
|
| 96 |
| 00:08:09,980 |
| second one, 108 minus 105.6 squared divided by 105.6, up
|
|
| 97 |
| 00:08:15,720 |
| to the last one, you will get this result. Now my |
|
|
| 98 |
| 00:08:19,780 |
| chi-square value is 0.7576. |
|
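The computation just described can be checked numerically; a small sketch using the observed and expected counts from this example:

```python
observed = [[12, 108], [24, 156]]       # female/male x left/right-handed
expected = [[14.4, 105.6], [21.6, 158.4]]

chi2 = sum((fo - fe) ** 2 / fe
           for obs_row, exp_row in zip(observed, expected)
           for fo, fe in zip(obs_row, exp_row))
print(round(chi2, 4))                   # 0.7576
```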
|
| 99 |
| 00:08:24,240 |
| And in this case, if chi-square value is very |
|
|
| 100 |
| 00:08:28,140 |
| small, I mean it's close to zero, then we don't |
|
|
| 101 |
| 00:08:31,180 |
| reject the null hypothesis. Because the smallest |
|
|
| 102 |
| 00:08:34,140 |
| value of chi-square is zero, and zero happens only |
|
|
| 103 |
| 00:08:37,500 |
| if f observed is close to f expected. So here if |
|
|
| 104 |
| 00:08:43,580 |
| you look carefully for the observed and expected |
|
|
| 105 |
| 00:08:46,920 |
| frequencies, you can tell if you can reject or |
|
|
| 106 |
| 00:08:50,520 |
| don't reject the null. Now the difference
|
|
| 107 |
| 00:08:53,700 |
| between these values looks small, so that leads
|
|
| 108
| 00:08:58,960
| to a small chi-square. So without doing the critical
|
|
| 109 |
| 00:09:05,110 |
| value, I mean computing the critical value, you can determine
|
|
| 110 |
| 00:09:08,210 |
| that we don't reject the null hypothesis. Because |
|
|
| 111 |
| 00:09:11,890 |
| your chi-square value is very small. So we don't |
|
|
| 112 |
| 00:09:16,070 |
| reject the null hypothesis. Or if you look |
|
|
| 113 |
| 00:09:18,670 |
| carefully at the table, for the table we have |
|
|
| 114 |
| 00:09:22,790 |
| here, for chi-square table. By the way, the |
|
|
| 115 |
| 00:09:26,410 |
| smallest value in the chi-square table is 1.32, under 1
|
|
| 116
| 00:09:31,480
| degree of freedom. So the smallest value is 1.32. So
|
|
| 117 |
| 00:09:36,180 |
| if your chi-square value is greater than 1, it |
|
|
| 118 |
| 00:09:39,360 |
| means maybe you reject or don't reject. It depends |
|
|
| 119 |
| 00:09:41,920 |
| on the alpha you have and the degrees of
|
|
| 120 |
| 00:09:45,920 |
| freedom. But in the worst scenario, if your chi |
|
|
| 121 |
| 00:09:50,280 |
| -square is smaller than this value, it means you |
|
|
| 122 |
| 00:09:53,780 |
| don't reject the null hypothesis. So generally |
|
|
| 123 |
| 00:09:57,600 |
| speaking, if the chi-square statistic is
|
|
| 124 |
| 00:10:02,120 |
| smaller than 1.32. 1.32 is a very small value. |
|
|
| 125 |
| 00:10:06,940 |
| Then we don't reject. Then we don't reject H0.
|
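This decision rule can be sketched with the few critical values quoted in the lecture hard-coded (a tiny lookup, not a full chi-square table; the names are my own):

```python
# Upper-tail chi-square critical values mentioned in the lecture,
# keyed as alpha -> {degrees of freedom: critical value}.
CHI2_CRITICAL = {
    0.25: {1: 1.323, 6: 7.841},
    0.05: {1: 3.841, 6: 12.592},
}

def reject_h0(stat, df, alpha=0.05):
    """Reject H0 when the statistic exceeds the critical value."""
    return stat > CHI2_CRITICAL[alpha][df]

print(reject_h0(0.7576, df=1))   # False: don't reject H0
```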
|
| 126 |
| 00:10:15,780 |
| That's always, always true. Regardless of degrees |
|
|
| 127 |
| 00:10:24,220 |
| of freedom and alpha. My chi-square is close to |
|
|
| 128 |
| 00:10:31,050 |
| zero, or smaller than 1.32, because the minimum |
|
|
| 129 |
| 00:10:35,710 |
| value of critical value is 1.32. Imagine that we |
|
|
| 130 |
| 00:10:40,990 |
| are talking about alpha is 5%. So alpha is 5, so |
|
|
| 131 |
| 00:10:46,050 |
| your critical value, the smallest one for 1 |
|
|
| 132 |
| 00:10:48,750 |
| degrees of freedom, is 3.84. So that's my |
|
|
| 133 |
| 00:10:53,850 |
| smallest critical value, if alpha is 5%.
|
|
| 134 |
| 00:11:03,740 |
| Last time we mentioned that this value is just 1 |
|
|
| 135 |
| 00:11:08,680 |
| .96 squared. And that's only true, only true for 2 |
|
|
| 136 |
| 00:11:17,760 |
| by 2 tables. That means Z squared is just chi
|
|
| 137
| 00:11:24,180
| -square with 1 degree of freedom. For this reason, we can test pi 1
|
|
| 138
| 00:11:29,470
| equals pi 2 by two methods, either the Z
|
|
| 139 |
| 00:11:33,330 |
| statistic or chi-square statistic. Both of them |
|
|
| 140 |
| 00:11:37,750 |
| will give the same result. So let's go back to the |
|
|
| 141 |
| 00:11:41,970 |
| question we have. My chi-square value is 0.7576.
|
|
| 142 |
| 00:11:52,160 |
| So that's your chi-square statistic. Again, |
|
|
| 143 |
| 00:11:57,500 |
| degrees of freedom 1 to chi-square, the critical |
|
|
| 144 |
| 00:12:00,240 |
| value is 3.841. So my decision is we don't reject |
|
|
| 145 |
| 00:12:08,500 |
| the null hypothesis. My conclusion is there is not |
|
|
| 146 |
| 00:12:11,780 |
| sufficient evidence that two proportions are |
|
|
| 147 |
| 00:12:14,380 |
| different. So you don't have sufficient evidence |
|
|
| 148 |
| 00:12:17,480 |
| in order to support that the two proportions are |
|
|
| 149 |
| 00:12:21,900 |
| different at 5% level of significance. We stopped |
|
|
| 150 |
| 00:12:27,720 |
| last time at this point. Now suppose we are |
|
|
| 151 |
| 00:12:32,700 |
| testing the difference among more than two
|
|
| 152 |
| 00:12:36,670 |
| proportions. The same steps, we have to extend in |
|
|
| 153 |
| 00:12:42,930 |
| this case chi-square. Your null hypothesis: pi 1
|
|
| 154
| 00:12:47,830
| equals pi 2, all the way up to pi C. So in this
|
|
| 155 |
| 00:12:50,990 |
| case, there are C columns. C columns and |
|
|
| 156 |
| 00:13:00,110 |
| two rows. So number of columns equals C, and there |
|
|
| 157 |
| 00:13:05,420 |
| are only two rows. So pi 1 equals pi 2, all the |
|
|
| 158 |
| 00:13:10,520 |
| way up to pi C. So null hypothesis for the columns |
|
|
| 159 |
| 00:13:13,840 |
| we have. There are C columns. Again, it's the |
|
|
| 160 |
| 00:13:17,040 |
| alternative, not all of the pi J are equal, and J |
|
|
| 161 |
| 00:13:19,840 |
| equals 1 up to C. Now, the only difference here, |
|
|
| 162 |
| 00:13:26,520 |
| the degrees of freedom. |
|
|
| 163 |
| 00:13:31,370 |
| For 2 by c table, |
|
|
| 164 |
| 00:13:35,710 |
| 2 by c, degrees of freedom equals number |
|
|
| 165 |
| 00:13:42,010 |
| of rows minus 1. There are two rows, so 2 minus 1 |
|
|
| 166 |
| 00:13:45,890 |
| times number of columns minus 1. 2 minus 1 is 1, c |
|
|
| 167 |
| 00:13:50,810 |
| minus 1, 1 times c minus 1, c minus 1. So your |
|
|
| 168 |
| 00:13:54,610 |
| degrees of freedom in this case is c minus 1. |
|
|
| 169 |
| 00:14:00,070 |
| So that's the only difference. For two by two |
|
|
| 170 |
| 00:14:03,190 |
| table, degrees of freedom is just one. If there |
|
|
| 171 |
| 00:14:07,130 |
| are C columns and we have the same number of rows, |
|
|
| 172 |
| 00:14:11,450 |
| degrees of freedom is C minus one. And we have the |
|
|
| 173 |
| 00:14:14,810 |
| same chi squared statistic, the same equation I |
|
|
| 174 |
| 00:14:19,190 |
| mean. And we have to extend also the overall |
|
|
| 175 |
| 00:14:23,890 |
| proportion instead of x1 plus x2 divided by n1 |
|
|
| 176 |
| 00:14:27,330 |
| plus n2. It becomes x1 plus x2 plus x3 all the way |
|
|
| 177 |
| 00:14:32,610 |
| up to xc because there are c columns divided by n1 |
|
|
| 178 |
| 00:14:38,330 |
| plus n2 all the way up to nc. So that's x over n. |
|
|
| 179 |
| 00:14:43,540 |
| So similarly we can reject the null hypothesis if |
|
|
| 180 |
| 00:14:48,400 |
| the value of chi-square statistic lies or falls in |
|
|
| 181 |
| 00:14:52,260 |
| the rejection region. |
|
|
| 182 |
| 00:14:58,120 |
| Other type of chi-square test is called chi-square |
|
|
| 183 |
| 00:15:01,980 |
| test of independence. Generally speaking, most of |
|
|
| 184 |
| 00:15:07,380 |
| the time there are more than two columns or more |
|
|
| 185 |
| 00:15:10,440 |
| than two rows. Now, suppose we have contingency |
|
|
| 186 |
| 00:15:16,490 |
| table that has R rows and C columns. And we are |
|
|
| 187 |
| 00:15:22,370 |
| interested to test to see whether the two |
|
|
| 188 |
| 00:15:26,990 |
| categorical variables are independent. That means |
|
|
| 189 |
| 00:15:31,390 |
| there is no relationship between them. Against the |
|
|
| 190 |
| 00:15:35,600 |
|
|
| 223 |
| 00:18:05,940 |
| tables with R rows and C columns. So we have the |
|
|
| 224 |
| 00:18:11,560 |
| case R by C. So that's in general, there are R |
|
|
| 225 |
| 00:18:15,660 |
| rows and C columns. And the question is this C, if |
|
|
| 226 |
| 00:18:23,060 |
| the two variables are independent or not. So in |
|
|
| 227 |
| 00:18:27,480 |
| this case, you cannot use this statistic. So one |
|
|
| 228 |
| 00:18:30,700 |
| more time, this statistic is valid only for two by |
|
|
| 229 |
| 00:18:34,320 |
| two tables. So that means we can use z or chi |
|
|
| 230 |
| 00:18:38,020 |
| -square to test if there is no difference between |
|
|
| 231 |
| 00:18:41,200 |
| two population proportions. But for more than |
|
|
| 232 |
| 00:18:43,960 |
| that, you have to use chi-square. |
|
|
| 233 |
| 00:18:49,950 |
| Now still we have the same equation, Chi-square |
|
|
| 234 |
| 00:18:53,310 |
| statistic is just sum F observed minus F expected |
|
|
| 235 |
| 00:18:57,870 |
| quantity squared divided by F expected. |
|
|
| 236 |
| 00:19:03,490 |
| In this case, Chi-square statistic for R by C case |
|
|
| 237 |
| 00:19:07,550 |
| has degrees of freedom R minus 1 multiplied by C |
|
|
| 238 |
| 00:19:15,430 |
| minus 1. In this case, each cell in the |
|
|
| 239 |
| 00:19:18,570 |
| contingency table has expected frequency at least |
|
|
| 240 |
| 00:19:21,230 |
| one instead of five. Now let's see how can we |
|
|
| 241 |
| 00:19:26,910 |
| compute the expected cell frequency for each cell. |
|
|
| 242 |
| 00:19:32,950 |
| The expected frequency is given by row total |
|
|
| 243 |
| 00:19:37,530 |
| multiplied by column total divided by n. So that's
|
|
| 244 |
| 00:19:42,950 |
| my new equation to determine the expected frequency. So
|
|
| 245 |
| 00:19:50,700 |
| the expected value for each cell is given by row
|
|
| 246
| 00:19:56,440
| total multiplied by column total, divided by N.
|
|
| 247 |
| 00:20:05,160 |
| Also, this equation is true for the previous |
|
|
| 248 |
| 00:20:09,540 |
| example. If you go back a little bit here, now the |
|
|
| 249 |
| 00:20:16,650 |
| expected for this cell was 14.4. Now let's see how
|
|
| 250 |
| 00:20:21,650 |
| can we compute the same value by using this |
|
|
| 251 |
| 00:20:25,470 |
| equation. So it's equal to row total 120 |
|
|
| 252 |
| 00:20:30,250 |
| multiplied by column total 36 divided by 300. |
|
|
| 253 |
| 00:20:43,580 |
| Now before, we computed this value by using p-bar
|
|
| 254
| 00:20:46,500
| first, 300 divided by, I'm sorry, 36 divided by
|
|
| 255
| 00:20:50,900
| 300. So that's your p-bar. Then we multiply this
|
|
| 256
| 00:20:58,520
| p-bar by N, and this is your N. So it's a similar
|
|
| 257 |
| 00:21:03,540 |
| equation. So either you use row total multiplied |
|
|
| 258 |
| 00:21:08,540 |
| by column total. then divide by overall sample |
|
|
| 259 |
| 00:21:14,060 |
| size you will get the same result by using the |
|
|
| 260 |
| 00:21:18,880 |
| overall proportion 12% times 120 so each one will |
|
|
| 261 |
| 00:21:25,520 |
| give the same answer so from now we are going to |
|
|
| 262 |
| 00:21:29,860 |
| use this equation in order to compute the expected |
|
|
| 263 |
| 00:21:33,900 |
| frequency for each cell so again expected |
|
|
| 264 |
| 00:21:37,960 |
| frequency is row total times column total divided
|
|
| 265 |
| 00:21:42,920 |
| by N, N is the sample size. So row total it means |
|
|
| 266 |
| 00:21:48,620 |
| sum of all frequencies in the row. Similarly |
|
|
| 267 |
| 00:21:52,220 |
| column total is the sum of all frequencies in the |
|
|
| 268 |
| 00:21:56,160 |
| column and N is over all sample size. |
|
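The rule just stated, expected frequency equals row total times column total over n, in a short sketch (the function name is my own):

```python
def expected_table(observed):
    """Expected frequency for each cell: row total * column total / n."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    return [[r * c / n for c in col_totals] for r in row_totals]

# Hand-preference table from the earlier example.
obs = [[12, 108], [24, 156]]
print(expected_table(obs))       # [[14.4, 105.6], [21.6, 158.4]]
```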
|
| 269 |
| 00:22:03,030 |
| Again, we reject the null hypothesis if your chi |
|
|
| 270 |
| 00:22:06,630 |
| -square statistic greater than chi-square alpha. |
|
|
| 271 |
| 00:22:10,590 |
| Otherwise, you don't reject it. And keep in mind, |
|
|
| 272 |
| 00:22:14,270 |
| chi-square statistic has degrees of freedom R |
|
|
| 273 |
| 00:22:18,390 |
| minus 1 times C minus 1. That's all for chi-square |
|
|
| 274 |
| 00:22:23,730 |
| as test of independence. Any question? |
|
|
| 275 |
| 00:22:31,220 |
| Here there is an example for applying chi-square |
|
|
| 276 |
| 00:22:36,300 |
| test of independence. Meal plan selected |
|
|
| 277 |
| 00:22:42,200 |
| by 200 students is shown in this table. So there |
|
|
| 278 |
| 00:22:46,700 |
| are two variables of interest. The first one is |
|
|
| 279 |
| 00:22:50,960 |
| number of meals per week. And there are three |
|
|
| 280 |
| 00:22:56,230 |
| types of number of meals, either 20 meals per |
|
|
| 281 |
| 00:23:00,550 |
| week, or 10 meals per week, or none. So that's, so |
|
|
| 282 |
| 00:23:07,870 |
| number of meals is classified into three groups. |
|
|
| 283 |
| 00:23:13,210 |
| So three columns, 20 per week, 10 per week, or |
|
|
| 284 |
| 00:23:17,650 |
| none. Class standing, students are classified into |
|
|
| 285 |
| 00:23:23,270 |
| four levels. A freshman, it means students like |
|
|
| 286 |
| 00:23:28,860 |
| you, first year. Sophomore, it means second year. |
|
|
| 287 |
| 00:23:34,440 |
| Junior, third level. Senior, fourth level. So that |
|
|
| 288 |
| 00:23:38,400 |
| means first, second, third, and fourth level. And |
|
|
| 289 |
| 00:23:42,100 |
| we have this number, these numbers for, I mean, |
|
|
| 290 |
| 00:23:47,040 |
| there are 24 freshmen who have 20 meals per
|
|
| 291 |
| 00:23:53,660 |
| week. So there are 24 freshmen have 20 meals per |
|
|
| 292 |
| 00:23:59,880 |
| week. 22 sophomores, the same, 10 for junior and |
|
|
| 293 |
| 00:24:04,160 |
| 14 for senior. And the question is just to see if |
|
|
| 294 |
| 00:24:10,220 |
| number of meals per week is independent of class |
|
|
| 295 |
| 00:24:13,740 |
| standing. to see if there is a relationship |
|
|
| 296 |
| 00:24:17,270 |
| between these two variables. In this case, there |
|
|
| 297 |
| 00:24:21,890 |
| are four rows because the class standing is |
|
|
| 298 |
| 00:24:26,850 |
| classified into four groups. So there are four |
|
|
| 299 |
| 00:24:29,190 |
| rows and three columns. So this table actually is |
|
|
| 300 |
| 00:24:34,230 |
| four by three. And there are twelve cells in this |
|
|
| 301 |
| 00:24:40,200 |
| case. Now it takes time to compute the expected |
|
|
| 302 |
| 00:24:46,660 |
| frequencies because in this case we have to |
|
|
| 303 |
| 00:24:49,760 |
| compute the expected frequency for each cell. And |
|
|
| 304 |
| 00:24:55,120 |
| we are going to use this formula for only six of |
|
|
| 305 |
| 00:25:01,320 |
| them. I mean, we can apply this formula for only |
|
|
| 306 |
| 00:25:06,260 |
| six of them. And the others can be computed by the |
|
|
| 307 |
| 00:25:09,880 |
| complement by using either column total or row |
|
|
| 308 |
| 00:25:14,300 |
| total. So because degrees of freedom is six, that |
|
|
| 309 |
| 00:25:19,940 |
| means you may use this rule six times only. The |
|
|
| 310 |
| 00:25:23,880 |
| others can be computed by using the complement. So |
|
|
| 311 |
| 00:25:28,420 |
| here again, the hypothesis to be tested is: meal
|
|
| 312
| 00:25:34,070
| plan and class standing are independent, that
|
|
| 313 |
| 00:25:36,550 |
| means there is no relationship between them. |
|
|
| 314 |
| 00:25:39,150 |
| Against the alternative hypothesis, meal plan and
|
|
| 315 |
| 00:25:41,650 |
| class standing are dependent, that means there |
|
|
| 316 |
| 00:25:44,630 |
| exists significant relationship between them. Now |
|
|
| 317 |
| 00:25:49,950 |
| let's see how can we compute the expected cell, |
|
|
| 318 |
| 00:25:55,990 |
| the expected frequency for each cell. For example, |
|
|
| 319 |
| 00:26:02,250 |
| The first observed frequency is 24. Now the |
|
|
| 320 |
| 00:26:07,790 |
| expected should be 70 times 70 divided by 200. So |
|
|
| 321 |
| 00:26:15,990 |
| for cell 11, the first cell. If expected, we can |
|
|
| 322 |
| 00:26:25,050 |
| use this notation, 11. Means first row. First |
|
|
| 323 |
| 00:26:32,450 |
| column. That should be 70. It is 70. Multiplied by |
|
|
| 324 |
| 00:26:40,110 |
| column total, again in this case 70, divided
|
|
| 325
| 00:26:43,990
| by 200. That will give 24.5.
|
|
| 326 |
| 00:26:50,150 |
| Similarly, for the second cell, for 32. |
|
|
| 327 |
| 00:26:56,350 |
| 70 times 88 divided by 200. |
|
|
| 328 |
| 00:27:02,820 |
| So for F12, again it's 70 times 88 divided by 200,
|
|
| 329 |
| 00:27:12,800 |
| that will get 30.8. So 70 times 88, that will give |
|
|
| 330 |
| 00:27:22,060 |
| 30.8. F13, row 1, the third
|
|
| 331
| 00:27:32,780
| column. Now either you can use the same
|
|
| 332 |
| 00:27:37,600 |
| equation which is 70 times 42 so you can use 70 |
|
|
| 333 |
| 00:27:44,320 |
| times 42 divided by 200 that will give 14.7 or |
|
|
| 334 |
| 00:27:54,360 |
| it's just the complement which is 70 minus |
|
|
| 335 |
| 00:28:03,390 |
| 24.5 plus 30.8. So either use 70 multiplied by 42
|
|
| 336 |
| 00:28:14,510 |
| divided by 200 or just the complement, 70 minus. |
|
|
| 337 |
| 00:28:20,800 |
| 24.5 plus 30.8 will give the same value. So I just |
|
|
| 338 |
| 00:28:28,400 |
| compute the expected cell for 1 and 2, and the |
|
|
| 339 |
| 00:28:32,740 |
| third one is just the complement. Similarly, for |
|
|
| 340 |
| 00:28:36,120 |
| the second row, I mean cell 21, then 22, and 23. |
|
|
| 341 |
| 00:28:43,680 |
| By using the same method, you will get these two
|
|
| 342 |
| 00:28:47,940 |
| values, and the other one is the complement, which |
|
|
| 343 |
| 00:28:51,880 |
| is 60 minus these, the sum of these two values, |
|
|
| 344 |
| 00:28:55,300 |
| will give 12. |
|
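The complement shortcut being used here can be sketched: fill the (R-1)*(C-1) cells with the formula, then recover the last column and last row from the margins. The function name and layout are my own; the margins are the meal-plan totals from this example.

```python
def expected_with_complements(row_totals, col_totals):
    """Use row*col/n for the first (R-1)*(C-1) cells; obtain the last
    column from row complements and the last row from column complements."""
    n = sum(row_totals)
    R, C = len(row_totals), len(col_totals)
    exp = [[0.0] * C for _ in range(R)]
    for i in range(R - 1):
        for j in range(C - 1):
            exp[i][j] = row_totals[i] * col_totals[j] / n    # the formula
        exp[i][C - 1] = row_totals[i] - sum(exp[i][:C - 1])  # row complement
    for j in range(C):                                       # last row
        exp[R - 1][j] = col_totals[j] - sum(exp[i][j] for i in range(R - 1))
    return exp

# Meal-plan margins: rows = class standing, columns = meals per week.
rows, cols = [70, 60, 30, 40], [70, 88, 42]
exp = expected_with_complements(rows, cols)
print([round(v, 1) for v in exp[3]])    # senior row: [14.0, 17.6, 8.4]
```

For this 4 by 3 table the formula is applied (4-1)*(3-1) = 6 times, matching the lecture's point that the degrees of freedom count how many cells are free.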
|
| 345 |
| 00:28:58,720 |
| Similarly, for the third cell, I'm sorry, the |
|
|
| 346 |
| 00:29:01,920 |
| third row. For this value, for 10, it's 30 times
|
|
| 347 |
| 00:29:07,460 |
| 70 divided by 200 will give this result. And the |
|
|
| 348 |
| 00:29:12,660 |
| other one is just 30 multiplied by 88 divided by |
|
|
| 349 |
| 00:29:16,060 |
| 200. The other one is just the complement, 30 |
|
|
| 350 |
| 00:29:20,200 |
| minus the sum of these. Now, for the last row,
|
|
| 351
| 00:29:26,660
| either 40 multiplied by 70 divided by 200, or
|
|
| 352
| 00:29:35,220
| the column total 70 minus the sum of these. This one equals
|
|
| 353
| 00:29:41,780
| 70 minus the sum of 24.5 plus 21 plus 10.5. That will
|
|
| 354 |
| 00:29:51,740 |
| give 14. Now for the other expected cell, 88. |
|
|
| 355 |
| 00:30:02,370 |
| minus the sum of these three expected frequencies. |
|
|
| 356 |
| 00:30:07,290 |
| Now for the last one, it is either 42
|
|
| 357 |
| 00:30:12,810 |
| minus the sum of these three, or 40 minus the sum |
|
|
| 358 |
| 00:30:17,770 |
| of 14 plus 17.6.
|
|
| 359
| 00:30:22,810
| Or 40 multiplied by 42 divided by 200. So let's
|
|
| 360 |
| 00:30:27,940 |
| say we use that formula six times. For this |
|
|
| 361 |
| 00:30:35,180 |
| reason, degrees of freedom is six. The other six |
|
|
| 362 |
| 00:30:39,100 |
| are computed by the complement as we mentioned. So |
|
|
| 363 |
| 00:30:46,480 |
| these are the expected frequencies. It takes time |
|
|
| 364 |
| 00:30:50,240 |
| to compute these. But if you have only two by two |
|
|
| 365 |
| 00:30:56,010 |
| table, it's easier. Now based on that, we can |
|
|
| 366 |
| 00:31:01,170 |
| compute chi-square statistic value by using this |
|
|
| 367 |
| 00:31:07,430 |
| equation for each cell. I mean, the first one, if |
|
|
| 368 |
| 00:31:12,390 |
| you go back a little bit to the previous table, |
|
|
| 369 |
| 00:31:15,150 |
| here, in order to compute chi-square, |
|
|
| 370 |
| 00:31:22,640 |
| value, we have to use this equation, chi-square,
|
|
| 371 |
| 00:31:28,860 |
| sum F observed minus F expected squared, divided |
|
|
| 372 |
| 00:31:36,080 |
| by F expected for all cells. So the first one is 24
|
|
| 373
| 00:31:41,980
| minus 24.5 squared divided by
|
|
| 374
| 00:31:46,560
| 24.5, plus. The second cell is 32 minus 30.8 squared divided by 30.8,
|
|
| 375 |
| 00:31:55,350 |
| plus |
|
|
| 376 |
| 00:31:58,990 |
| all the way up to the last cell, which is 10. |
|
|
| 377 |
| 00:32:11,090 |
| So it takes time. But again, for two by two, it's |
|
|
| 378 |
| 00:32:14,430 |
| straightforward. Anyway, now if you compare the |
|
|
| 379 |
| 00:32:18,890 |
| expected and observed cells, you can have an idea |
|
|
| 380 |
| 00:32:23,650 |
| either to reject or fail to reject without |
|
|
| 381 |
| 00:32:25,650 |
| computing the value itself. Now, 24, 24.5. The |
|
|
| 382 |
| 00:32:31,470 |
| difference is small. |
|
|
| 383 |
| 00:32:35,730 |
| for about 7 and so on. So the difference between |
|
|
| 384 |
| 00:32:39,070 |
| observed and expected looks small. In this case, |
|
|
| 385 |
| 00:32:44,590 |
| chi-square value is close to zero. So it's 0.709.
|
|
| 386 |
| 00:32:51,190 |
| Now, without looking at the table we have, we have |
|
|
| 387 |
| 00:32:55,370 |
| to not reject. So we don't reject, because as we
|
|
| 388 |
| 00:33:02,710 |
| mentioned, the minimum chi-square critical value is 1.32.
|
|
| 389 |
| 00:33:06,350 |
| That's for one degree of freedom and alpha is
|
|
| 390 |
| 00:33:09,670 |
| 25%. So |
|
|
| 391 |
| 00:33:14,390 |
| I expect my decision is don't reject the null |
|
|
| 392 |
| 00:33:19,250 |
| hypothesis. Now by looking at chi-square at 5% and
|
|
| 393
| 00:33:24,530
| degrees of freedom 6, by using the chi-square table.
|
|
| 394 |
| 00:33:30,200 |
| Now degrees of freedom 6. Now the minimum value of |
|
|
| 395 |
| 00:33:36,260 |
| Chi-square is 7.84. I mean critical value. But |
|
|
| 396 |
| 00:33:40,520 |
| under 5% is 12.59. So this value is 12.59. So |
|
|
| 397 |
| 00:33:48,290 |
| critical value is 12.59. So my rejection region is |
|
|
| 398 |
| 00:33:54,470 |
| above this value. Now, my chi-square value falls |
|
|
| 399 |
| 00:33:59,890 |
| in the non-rejection regions. It's very small |
|
|
| 400 |
| 00:34:06,250 |
| value. So chi-square statistic is 0.709. |
|
|
| 401 |
| 00:34:14,230 |
| It's much smaller. Not just smaller than chi-square alpha, it's
|
|
| 402 |
| 00:34:20,620 |
| much smaller than this value, so it means we don't |
|
|
| 403 |
| 00:34:23,580 |
| have sufficient evidence to support the |
|
|
| 404 |
| 00:34:26,440 |
| alternative hypothesis. So my decision is, don't |
|
|
| 405 |
| 00:34:32,010 |
| reject the null hypothesis. So conclusion, there |
|
|
| 406 |
| 00:34:36,350 |
|
|
| 445 |
| 00:38:19,540 |
| Z statistic or Chi-squared. So Z or Chi can be |
|
|
| 446 |
| 00:38:25,080 |
| used for testing difference between two population |
|
|
| 447 |
| 00:38:28,920 |
| proportions. And again, chi-square can be extended |
|
|
| 448 |
| 00:38:34,360 |
| to use for more than two. So in this case, the |
|
|
| 449 |
| 00:38:40,140 |
| correct answer is C, because we can use either Z |
|
|
| 450 |
| 00:38:43,220 |
| or chi-square test. Next, in testing hypothesis |
|
|
| 451 |
| 00:38:52,090 |
| using chi-square test. The theoretical frequencies |
|
|
| 452 |
| 00:38:58,350 |
| are based on the null hypothesis, alternative, normal |
|
|
| 453 |
| 00:39:03,190 |
| distribution, none of the above. Always when we |
|
|
| 454 |
| 00:39:06,490 |
| are using chi-square test, we assume the null is |
|
|
| 455 |
| 00:39:10,450 |
| true. So the theoretical frequencies are based on |
|
|
| 456 |
| 00:39:14,630 |
| the null hypothesis. So always any statistic can |
|
|
| 457 |
| 00:39:20,060 |
| be computed if we assume H0 is correct. So the
|
|
| 458 |
| 00:39:25,300 |
| correct answer is A. |
|
|
| 459 |
| 00:39:34,060 |
| Let's look at table 11-2. |
|
|
| 460 |
| 00:39:44,280 |
| Many companies use well-known celebrities as |
|
|
| 461 |
| 00:39:49,000 |
| spokespersons in their TV advertisements. A study |
|
|
| 462 |
| 00:39:54,420 |
| was conducted to determine whether brand awareness |
|
|
| 463 |
| 00:39:57,760 |
| of female TV viewers and the gender of the |
|
|
| 464 |
| 00:40:02,140 |
| spokesperson are independent. So there are two |
|
|
| 465 |
| 00:40:05,860 |
| variables, whether a brand awareness of female TV |
|
|
| 466 |
| 00:40:09,820 |
| and gender of the spokesperson are independent. |
|
|
| 467 |
| 00:40:14,820 |
| Each and a sample of 300 female TV viewers was |
|
|
| 468 |
| 00:40:19,540 |
| asked to identify a product advertised by a |
|
|
| 469 |
| 00:40:24,000 |
| celebrity spokesperson, the gender of the |
|
|
| 470 |
| 00:40:27,000 |
| spokesperson, and whether or not the viewer could |
|
|
| 471 |
| 00:40:32,280 |
| identify the product was recorded. The number in |
|
|
| 472 |
| 00:40:36,460 |
| each category are given below. Now, the questions |
|
|
| 473 |
| 00:40:40,080 |
| are, number one, he asked about the calculated |
|
|
| 474 |
| 00:40:45,520 |
| this statistic is. We have to find Chi-square |
|
|
| 475 |
| 00:40:49,120 |
| statistic. It's two by two tables, easy one. So, |
|
|
| 476 |
| 00:40:54,460 |
| for example, to find the F expected is, |
|
|
| 477 |
| 00:41:00,420 |
| row total is 102, and another one here. And
|
|
| 478 |
| 00:41:13,130 |
| this 150. |
|
|
| 479 |
| 00:41:16,430 |
| And also 150. So the expected frequency for the |
|
|
| 480 |
| 00:41:22,510 |
| first one is 102 times 150 divided by 300. |
|
|
| 481 |
| 00:41:35,680 |
| So the answer is 51. |
|
|
| 482 |
| 00:41:42,880 |
| So the first expected is 51. The other one is just |
|
|
| 483 |
| 00:41:51,560 |
| 102 minus 51 is also 51. |
|
|
| 484 |
| 00:41:57,320 |
| Now here is 99. |
|
|
| 485 |
| 00:42:09,080 |
| So the second |
|
|
| 486 |
| 00:42:15,180 |
| ones are the expected frequencies. So my chi-square
|
|
| 487 |
| 00:42:18,800 |
| statistic is |
|
|
| 488 |
| 00:42:22,400 |
| 41 minus 51 squared divided by 51, plus 61 minus 51
|
|
| 489
| 00:42:32,260
| squared divided by 51, plus 109 minus 99 squared divided by 99, plus 89
|
|
| 490
| 00:42:44,160
| minus 99 squared divided by 99.
|
|
| 491 |
| 00:42:49,080 |
| That will give 5 point. |
|
|
| 492 |
| 00:42:57,260 |
| So the answer is 5.9418. |
|
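The whole calculation for Table 11-2 can be reproduced in a few lines; a sketch, with the observed counts as read out above:

```python
def chi_square_stat(observed):
    """Chi-square statistic with fe = row total * column total / n."""
    row_t = [sum(r) for r in observed]
    col_t = [sum(c) for c in zip(*observed)]
    n = sum(row_t)
    return sum((observed[i][j] - row_t[i] * col_t[j] / n) ** 2
               / (row_t[i] * col_t[j] / n)
               for i in range(len(row_t))
               for j in range(len(col_t)))

# Identified the product (yes/no) by spokesperson gender, n = 300.
obs = [[41, 61], [109, 89]]
stat = chi_square_stat(obs)
print(round(stat, 4))            # 5.9418
print(stat > 3.8415)             # True: reject H0, the variables are related
```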
|
| 493 |
| 00:43:03,410 |
| So simple calculation will give this result. Now, |
|
|
| 494 |
| 00:43:06,450 |
| next one, referring to the same information we |
|
|
| 495 |
| 00:43:10,370 |
| have at 5% level of significance, the critical |
|
|
| 496 |
| 00:43:15,890 |
| value of that statistic. In this case, we are |
|
|
| 497 |
| 00:43:18,510 |
| talking about a 2 by 2 table, and alpha is 5%. So
|
|
| 498 |
| 00:43:22,690 |
| your critical value is 3.8415. So chi-square
|
|
| 499 |
| 00:43:28,130 |
| alpha, 5% and 1 degree of freedom.
|
|
| 500 |
| 00:43:35,000 |
| This is the smallest value when alpha is 5%, so 3 |
|
|
| 501 |
| 00:43:39,220 |
| .8415. |
|
|
| 502 |
| 00:43:46,160 |
| Again, degrees of freedom of this statistic are 1, |
|
|
| 503 |
| 00:43:52,500 |
| 2 by 2 is 1. |
|
|
| 504 |
| 00:43:56,380 |
| Now at 5% level of significance, the conclusion is |
|
|
| 505 |
| 00:44:01,620 |
| that |
|
|
| 506 |
| 00:44:06,840 |
| In this case, we reject H0. And H0 says the two |
|
|
| 507 |
| 00:44:16,380 |
| variables are independent. X and Y are |
|
|
| 508 |
| 00:44:20,800 |
| independent. We reject that they are independent. |
|
|
| 509 |
| 00:44:27,380 |
| That means they are dependent or related. So, A, |
|
|
| 510 |
| 00:44:33,520 |
| brand awareness of female TV viewers and the |
|
|
| 511 |
| 00:44:36,680 |
| gender of the spokesperson are independent. No, |
|
|
| 512 |
| 00:44:41,580 |
| because we reject the null hypothesis. B, brand |
|
|
| 513 |
| 00:44:45,200 |
| awareness of female TV viewers and the gender of |
|
|
| 514 |
| 00:44:48,340 |
| spokesperson are not independent. Since we reject, |
|
|
| 515 |
| 00:44:53,380 |
| then they are not. Because it's a complement. So, |
|
|
| 516 |
| 00:44:58,430 |
| B is the correct answer. Now, C. A brand awareness |
|
|
| 517 |
| 00:45:02,810 |
| of female TV viewers and the gender of the |
|
|
| 518 |
| 00:45:05,450 |
| spokesperson are related. The same meaning. They |
|
|
| 519 |
| 00:45:10,550 |
| are either, you say, not independent, related or |
|
|
| 520 |
| 00:45:15,470 |
| dependent. |
|
|
| 521 |
| 00:45:19,490 |
| Either is the same, so C is correct. D both B and |
|
|
| 522 |
| 00:45:24,930 |
| C, so D is the correct answer. So again, if we |
|
|
| 523 |
| 00:45:28,970 |
| reject the null hypothesis, it means the two |
|
|
| 524 |
| 00:45:31,650 |
| variables either not independent or related or |
|
|
| 525 |
| 00:45:36,990 |
| dependent. |
|
|
| 526 |
| 00:45:40,550 |
| Any question? I will stop at this point. Next |
|
|
| 527 |
| 00:45:46,630 |
| time, inshallah, we'll start. |
|
|
|
|