- This topic has 12 replies, 12 voices, and was last updated 2 years, 8 months ago by Rawinan Soma.
2022-01-21 at 4:01 pm #34765Wirichada Pan-ngumKeymaster
For our discussion, pick one point from this paper (please give the page number, the item number, or both) and describe the point in your own words. You may be right or you may be wrong, so let's discuss it in the group.
2022-01-23 at 11:03 pm #34781Kridsada SirichaisitParticipant
No. 18 page 343
If one observes a small P value, there is a good chance that the next study will produce a P value at least as small for the same hypothesis.
This is wrong because a P value depends on the sample size and the assumptions of each study. Since the studies are independent, their P values are not related: a small P value in one study does not guarantee an equally small P value in a replication.
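A small simulation makes this point about No. 18 concrete (a minimal sketch, assuming a one-sample z-test with known variance; the sample size, effect size, and seed are arbitrary choices for illustration). Replicating the same study design many times shows how widely the p-value varies even when a real effect exists:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def one_study(n=50, effect=0.4):
    """Simulate one study: n draws from N(effect, 1), two-sided z-test of mean = 0."""
    x = rng.normal(effect, 1.0, n)
    z = x.mean() / (1.0 / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Replicate the identical study design many times and collect the p-values.
pvals = np.array([one_study() for _ in range(10_000)])
print(f"share of replications with p < 0.05: {(pvals < 0.05).mean():.2f}")
print(f"p-values range from {pvals.min():.1e} up to {pvals.max():.2f}")
```

Roughly 80% of replications come out significant in this setup, so about one in five exact replications of a truly effective intervention would still return p > 0.05: a small p-value today says little about the p-value of the next study.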
2022-01-30 at 11:49 pm #34842Saravalee SuphakarnParticipant
Item no. 3, page 341
“A significant test result (P <= 0.05) means that the test hypothesis is false or should be rejected”
This quote is wrong because a small p-value does not always mean the null hypothesis should be rejected; the result could, for example, reflect a large random error, and before concluding we should check that all assumptions of the test hold. A p-value less than or equal to 0.05 only tells us that the observations differ from what the hypothesis predicts.
2022-01-31 at 10:21 pm #34852Pongsakorn SadakornParticipant
No. 16 page 343
“When the same hypothesis is tested in two different populations and the resulting P values are on opposite sides of 0.05, the results are conflicting.”
This is wrong because there can be many differences in population characteristics and sample size between the two studies, so P values on opposite sides of 0.05 do not necessarily conflict.
2022-02-01 at 2:46 pm #34866Wirichada Pan-ngumKeymaster
Also, if there are points that you disagree with or don't quite understand, please point them out and share them here. We can discuss those.
2022-02-01 at 9:11 pm #34868Khaing Zin Zin HtweParticipant
No. 10, page 342
“If you reject the test hypothesis because P ≤ 0.05, the chance you are in error is 5%.”
P ≤ 0.05 does not mean that the chance you are in error in rejecting a test hypothesis that is actually true is 5%. It means that if the test hypothesis were true and you applied the test across 100 different studies, you would reject it in error about 5 times.
The same explanation can be used for No. 24, page 345.
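The frequentist reading of that 5% figure can be checked with a minimal simulation (a sketch under illustrative assumptions: one-sample z-test with known sigma = 1, n = 30 per study, arbitrary seed). When the null hypothesis really is true, about 5% of studies reject it; this is a rate over repeated studies, not the error chance of any single rejection:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n_studies, n = 100_000, 30

# 100,000 independent studies in which the null hypothesis (mean = 0) is exactly true.
samples = rng.normal(0.0, 1.0, (n_studies, n))
z = samples.mean(axis=1) * math.sqrt(n)   # z-statistic, sigma = 1 assumed known
rejected = np.abs(z) > 1.96               # two-sided test at the 0.05 level
print(f"rejection rate when the null is true: {rejected.mean():.3f}")
```

The rate lands very close to 0.05 by construction, but note the conditioning: all of these studies had a true null. In a real research field we never know which studies those are, which is why the 5% cannot be read as "this particular rejection has a 5% chance of being wrong."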
2022-02-01 at 10:53 pm #34870Kaung Khant TinParticipant
No. 7, page 341
“Statistical significance indicates a scientifically or substantively important relation has been detected.”
In my words, a p-value of less than 0.05 does not necessarily mean the results of the test are practically significant, because the p-value tends to depend on the sample size. Statistical significance only means that a difference was detected. For that difference to be practically significant, scientific judgement needs to be taken into account: this judgement decides whether the magnitude of the difference (the effect) is large enough to matter in the scientific world.
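The dependence on sample size can be shown with a tiny, deterministic calculation (a sketch assuming a one-sample z-test with known sigma; the 0.02-standard-deviation effect is a made-up, practically negligible effect):

```python
import math

def two_sided_p(effect, n, sigma=1.0):
    """Two-sided p-value for a one-sample z-test of mean = 0 (known sigma assumed)."""
    z = effect * math.sqrt(n) / sigma
    return math.erfc(abs(z) / math.sqrt(2))

# The same trivially small effect (0.02 standard deviations) at growing sample sizes:
for n in (100, 10_000, 2_000_000):
    print(f"n = {n:>9}: p = {two_sided_p(0.02, n):.4g}")
```

The identical negligible effect moves from clearly non-significant to overwhelmingly "significant" purely because n grows, which is exactly why statistical and practical significance must be judged separately.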
2022-02-20 at 12:42 pm #35164Navinee KruahongParticipant
This really happened in my department! We have a large mental health dataset, around 2 million responses. Someone took around 50k responses to analyze associations between variables. They found associations with small p-values, interpreted the results, and used them without validating against other information, not realizing that with such a large sample almost everything can be statistically significant.
2022-02-03 at 1:37 pm #34873Sittidech SurasriParticipant
No. 01, Page 340:
“The P value is the probability that the test hypothesis is true; for example, if a test of the null hypothesis gave P = 0.01, the null hypothesis has only a 1% chance of being true; if instead it gave P = 0.40, the null hypothesis has a 40% chance of being true.”
In my opinion, P-values don't tell us the probability that a hypothesis is true; a P-value is merely the outcome of a statistical test. The lower the p-value, the stronger the statistical evidence against the test hypothesis.
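One way to see that the p-value and the probability that H0 is true are different quantities is a screening-style simulation (all numbers here are illustrative assumptions: half of the tested hypotheses are true nulls, the real effects are 0.2 standard deviations, n = 50, one-sample z-test):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n_hyp, n, effect = 200_000, 50, 0.2

# Half the tested hypotheses are true nulls, half carry a modest real effect.
is_null = rng.random(n_hyp) < 0.5
true_mean = np.where(is_null, 0.0, effect)
z = rng.normal(true_mean * math.sqrt(n), 1.0)

# Two-sided test at the 0.01 level (|z| > 2.576 is equivalent to p <= 0.01).
rejected = np.abs(z) > 2.576
frac_null = is_null[rejected].mean()
print(f"share of true nulls among results with p <= 0.01: {frac_null:.3f}")
```

In this setup, roughly 7-8% of the results reaching p ≤ 0.01 are true nulls, not 1%. The actual share depends on the prior mix of hypotheses and on power, neither of which the p-value alone can tell us.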
2022-02-06 at 5:51 pm #34884Wachirawit SupasaParticipant
From page 342, point 13: “Statistical significance is a property of the phenomenon being studied, and thus statistical tests detect significance.”
This statement is wrong because statistical significance can be manipulated through the data collection and the methods of the study; it is a property of the test result, not of the phenomenon itself.
2022-02-20 at 11:17 am #35163Navinee KruahongParticipant
I really like the discussion of common misinterpretations of power on page 345. On point 24: “If you accept the null hypothesis because the null P value exceeds 0.05 and the power of your test is 90%, the chance you are in error (the chance that your finding is a false negative) is 10%.”
This is a very common misinterpretation; even though I have taken many statistics courses, I still slip sometimes. We really need to keep in mind that the 10% is conditional: if the alternative hypothesis is true, the chance of a false negative is 10%. And if the null hypothesis is actually false and we accept it, we are 100% in error in that instance.
Thank you for a nice wrap up paper!
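The conditional nature of that 10% can be checked with a short simulation (a sketch assuming a one-sample z-test; n and the effect size are made-up values chosen only so that power comes out near 90%):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n, effect = 100, 0.324   # effect chosen so that power is roughly 90% at alpha = 0.05

# Simulate 50,000 studies in which the alternative hypothesis is TRUE.
z = rng.normal(effect * math.sqrt(n), 1.0, 50_000)
accepted = np.abs(z) <= 1.96   # "accept" H0 because p > 0.05
print(f"P(accept H0 | H1 true), the false-negative rate: {accepted.mean():.3f}")
# Every one of those acceptances is an error, because H1 is true by construction:
# the 10% is a rate over repeated studies, not the error chance of one result.
```

So the 10% false-negative rate is conditional on the alternative being true, while conditional on having accepted H0 in this scenario the error rate is 100%, since every simulated study really had an effect.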
2022-02-28 at 10:32 pm #35172NaphatParticipant
No. 3, page 341
“A significant test result (P ≤ 0.05) means that the test hypothesis is false or should be rejected.”
A low P-value may provide statistical evidence supporting rejection of H0, but this does not mean that the alternative hypothesis is true. From the study and sample data, there either is or is not enough evidence against H0, so we reject or fail to reject H0.
2022-04-04 at 8:11 pm #35536Rawinan SomaParticipant
page 342
12. “P values are properly reported as inequalities (e.g., report ‘P < 0.02’ when P = 0.015 or report ‘P > 0.05’ when P = 0.06 or P = 0.70).”
This is a big mistake; it is clearer to report the p-value directly (e.g., P = 0.015). Another reason is that the value could differ if we used a different dataset. However, a very small p-value may still be reported in inequality form (e.g., P < 0.001).