This topic has 11 replies, 7 voices, and was last updated 3 years, 10 months ago by Penpitcha Thawong.
2021-01-15 at 7:31 pm · #25314 · Wirichada Pan-ngum (Keymaster)
For our discussion, pick one point from this paper (please state the page number, the item number, or both) and describe the point in your own words. You may be right or you may be wrong, so let's discuss it in the group.
2021-01-21 at 11:50 pm · #25459 · tullaya.sita (Participant)
Item number 10, page 342: "If you reject the test hypothesis because P < 0.05, the chance you are in error is 5%." This sentence is wrong.
In my own words: P < 0.05 means that the probability of observing this result (or a more extreme one) if the null hypothesis is true is less than 0.05. This probability is low enough to reject the null hypothesis.
Then, if we reject the null hypothesis and accept the alternative hypothesis, the chance of being in error in that single decision is not 5%; it is either 0% or 100%, because we commit to only one hypothesis.
2021-01-23 at 3:45 pm · #25559 · Wirichada Pan-ngum (Keymaster)
Good point. In one single event, the decision can only be right (0% error) or wrong (100% error). Type I and Type II errors refer to how frequently we make the wrong decision over many repetitions of the test. A Type I error rate of 5% means that, over 100 tests of a true null hypothesis, you may wrongly reject about 5 times.
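To see this frequency interpretation in action, here is a minimal Python sketch (my own illustration, not from the paper): it tests a true null hypothesis many times at alpha = 0.05 and counts the wrong rejections.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
n_tests, n, alpha = 10_000, 30, 0.05
rejections = 0
for _ in range(n_tests):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)  # H0 (mean = 0) is true
    _, p = ttest_1samp(sample, popmean=0.0)
    rejections += p < alpha

print(rejections / n_tests)  # ~0.05: about 5 wrong rejections per 100 tests of a true H0
```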
2021-01-24 at 8:01 pm · #25566 · Ameen (Participant)
Topic #19 on page 343: "The specific 95% confidence interval presented by a study has a 95% chance of containing the true effect size."
What I understand about a confidence interval is that it tells how much of the observed data (as a percentage) lies within a specific range of the observed data from one group (either the effect or the control group) of a study, while the effect size is the size of the difference between the two groups. So the confidence interval has nothing to do with the chance of obtaining the true effect size, as it represents only one group from the study.
For example, if the mean of the observed data from one group in a study (either effect or control) is 80, with a minimum of 70 and a maximum of 90, the observed data between the minimum and maximum would account for 100% of all observed data. A 95% confidence interval would then be a range, for instance 75 to 85, within which 95% of the observed data from that group fall.
2021-01-26 at 12:17 am · #25586 · Wirichada Pan-ngum (Keymaster)
Similar to Tullaya's point: in a single event, it is either 0% or 100% that the interval contains the true value. The 95% refers to how often this happens across repeated sampling from many studies, i.e. about 95 out of 100 times.
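A minimal simulation (my own sketch, with made-up parameters) makes this coverage interpretation concrete: draw many samples from a population with a known mean, build a 95% confidence interval from each, and count how often the interval contains the true mean.

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(1)
true_mean, n, n_studies = 5.0, 30, 10_000
covered = 0
for _ in range(n_studies):
    sample = rng.normal(loc=true_mean, scale=2.0, size=n)
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)
    half_width = t.ppf(0.975, df=n - 1) * se  # 95% interval half-width
    covered += (m - half_width) <= true_mean <= (m + half_width)

print(covered / n_studies)  # ~0.95: about 95 out of 100 intervals contain the true mean
```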
2021-01-24 at 9:30 pm · #25568 · w.thanachol (Participant)
Number 16 on page 343: "When the same hypothesis is tested in two different populations and the resulting P values are on opposite sides of 0.05, the results are conflicting" is wrong.
We cannot just look at the P values and compare results from two different populations, because the two results might have the same effect size but different standard errors, which in turn produce different P values. To compare the results properly, an appropriate analysis should be used, such as a test of heterogeneity, interaction, or effect modification.
2021-01-26 at 12:26 am · #25587 · Wirichada Pan-ngum (Keymaster)
Good one, I probably got it wrong as well. Well summarized!
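A quick numeric sketch (hypothetical numbers of my own, not from the paper) shows how the same effect size with different standard errors can land on opposite sides of 0.05:

```python
from scipy.stats import norm

effect = 0.5  # both studies estimate the same effect size
for label, se in [("study A (small SE)", 0.20), ("study B (large SE)", 0.35)]:
    z = effect / se
    p = 2 * norm.sf(abs(z))  # two-sided P value
    print(f"{label}: z = {z:.2f}, P = {p:.3f}")
# study A: z = 2.50, P = 0.012 (significant)
# study B: z = 1.43, P = 0.153 (not significant), yet the effect size is identical
```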
2021-02-06 at 2:56 pm · #25934 · Pacharapol Withayasakpunt (Participant)
#21, page 344: "If two confidence intervals overlap, the difference between two estimates or studies is not significant."
It still requires a statistical test to compare the two estimates, and that test may still show a significant difference. I don't know if I understand correctly, but the converse cannot be concluded either.
It can, however, be noted that if the two 95% confidence intervals fail to overlap, then, using the same assumptions used to compute the intervals, we will find P < 0.05 for the difference; and if one of the 95% intervals contains the point estimate from the other group or study, we will find P > 0.05 for the difference.
2021-02-14 at 4:26 pm · #26111 · Wirichada Pan-ngum (Keymaster)
It is quite complicated, and you cannot just decide to accept or reject H0 based on the confidence intervals alone. The test statistic has to be calculated. This is all based on the normal distribution as well.
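A minimal sketch of that calculation (my own illustration, with made-up estimates and standard errors): here the two 95% intervals overlap, yet the proper z test for the difference is still significant.

```python
from math import sqrt
from scipy.stats import norm

est1, se1 = 0.0, 0.4  # hypothetical estimate and SE from study 1
est2, se2 = 1.3, 0.4  # hypothetical estimate and SE from study 2

z_crit = norm.ppf(0.975)  # ~1.96 for a 95% interval
ci1 = (est1 - z_crit * se1, est1 + z_crit * se1)  # (-0.78, 0.78)
ci2 = (est2 - z_crit * se2, est2 + z_crit * se2)  # ( 0.52, 2.08) -> the intervals overlap

# Proper test: z statistic for the difference between the two estimates
se_diff = sqrt(se1**2 + se2**2)
z = (est2 - est1) / se_diff
p = 2 * norm.sf(abs(z))
print(f"z = {z:.2f}, P = {p:.3f}")  # P ~ 0.022 < 0.05 despite the overlapping intervals
```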
2021-02-13 at 10:54 am · #26104 · imktd8 (Participant)
No. 3: "A significant test result (P ≤ 0.05) means that the test hypothesis is false or should be rejected."
If the P value is less than the chosen significance level, we reject the null hypothesis in favour of the alternative hypothesis. This does not imply a meaningful or important difference. A low P value tells us there is statistical evidence supporting the rejection of H0, but it does not mean that the alternative hypothesis is true. For example, if we set the significance level at 0.05, there is a 5% chance of rejecting the null hypothesis when it is actually true (a wrong decision).
2021-02-14 at 4:28 pm · #26112 · Wirichada Pan-ngum (Keymaster)
We would never say H0 is true or false. We say that, based on the study and the sampled data, there is or is not sufficient evidence against H0, and thus we reject or do not reject H0.
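On the related point above that significance does not imply importance, here is a minimal sketch (hypothetical numbers of my own): with a very large sample, a practically trivial difference can still give P well below 0.05.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
a = rng.normal(loc=100.00, scale=10.0, size=200_000)
b = rng.normal(loc=100.15, scale=10.0, size=200_000)  # tiny true difference
stat, p = ttest_ind(a, b)
print(f"P = {p:.2e}")  # far below 0.05, yet the difference is only ~0.15 units
```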
2021-02-21 at 2:34 pm · #26191 · Penpitcha Thawong (Participant)
No. 1, page 340: "The P value is the probability that the test hypothesis is true; for example, if a test of the null hypothesis gave P = 0.01, the null hypothesis has only a 1% chance of being true; if instead it gave P = 0.40, the null hypothesis has a 40% chance of being true."
→ The P value is the probability of obtaining our results if the null hypothesis is true. The null hypothesis is either true or false, and we cannot interpret the P value as the probability that the null hypothesis is true.
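A minimal simulation (my own sketch, not from the paper) illustrates why: when the null hypothesis is true, P values are uniformly distributed, so observing P = 0.01 or P = 0.40 says nothing about the probability that H0 itself is true.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(2)
pvals = []
for _ in range(10_000):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)  # H0 (mean = 0) is true
    pvals.append(ttest_1samp(sample, popmean=0.0).pvalue)

pvals = np.array(pvals)
# Under a true H0, P behaves like a uniform draw: roughly 40% of P values
# fall below 0.40 and roughly 1% fall below 0.01, even though H0 is true
# in every single test here.
print((pvals < 0.40).mean(), (pvals < 0.01).mean())
```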