Model goodness of fit
Closed this issue · 6 comments
miaomiao-alt commented
My understanding is the following: for this model, is the chi-square value the primary criterion for evaluation, with the other fit indices being auxiliary? That is, if the chi-square value is too large, then even if the other fit indices look good, the model cannot be said to fit well. Is that right?
cg09 commented
They are all standard statistical measures of agreement between a sample's empirical distribution and an assumption about the family of distributions from which the sample is drawn. No such measure carries any guarantee of anything for a finite sample. Their best use is comparative: not comparing these measures against one another, but taking any one measure and using it to compare alternative hypotheses. Take your choice.
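To make the comparative use concrete, here is a minimal Python sketch (not from this thread; numpy and scipy are assumed available, and the normal-vs-Laplace choice is purely illustrative) of scoring two alternative hypotheses for the same sample with a single measure, BIC, rather than weighing one measure against another:

```python
# Minimal sketch (not from this thread): use one measure -- here BIC --
# to compare two alternative hypotheses for the same sample, rather than
# comparing different fit measures against each other.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200)  # hypothetical sample

def bic(loglik, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L); lower is better."""
    return k * np.log(n) - 2.0 * loglik

n = len(x)

# Hypothesis 1: the sample is normal (2 free parameters, fit by MLE).
mu, sd = stats.norm.fit(x)
ll_norm = np.sum(stats.norm.logpdf(x, loc=mu, scale=sd))

# Hypothesis 2: the sample is Laplace (2 free parameters, fit by MLE).
loc, scale = stats.laplace.fit(x)
ll_lap = np.sum(stats.laplace.logpdf(x, loc=loc, scale=scale))

print("BIC, normal hypothesis :", bic(ll_norm, 2, n))
print("BIC, Laplace hypothesis:", bic(ll_lap, 2, n))
# The lower BIC is preferred *between these two hypotheses*; for a finite
# sample this says nothing definitive about either family being correct.
```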
miaomiao-alt commented
Could you recommend some relevant literature?
cg09 commented
The literature is vast and, on fundamentals, not very helpful. Google Scholar is your friend, as is Wikipedia; go to ChatGPT for a start.
The basic problem is that you are trying to guess an infinite-sample property from a finite sample, which is the basic problem in most statistical inference. However good the BIC score or p-value for a hypothesis test, there is no logical guarantee that the results will not be different, or even reversed in a model comparison, in a new, larger sample from the same population. The only logical guarantees a statistic provides are in the infinite limit... where you will never be.
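As an illustration only (again a hypothetical sketch, not anything from this thread; numpy and scipy assumed), one can simulate how often the winner of a BIC comparison at a small sample size changes once more data from the same population arrive:

```python
# Illustration only: the same BIC comparison run on a small sample and then
# on a larger sample from the same population need not give the same answer.
import numpy as np
from scipy import stats

def bic(loglik, k, n):
    return k * np.log(n) - 2.0 * loglik

def bic_winner(x):
    """Which of two hypothetical candidate families BIC prefers for sample x."""
    n = len(x)
    mu, sd = stats.norm.fit(x)
    loc, sc = stats.laplace.fit(x)
    b_norm = bic(np.sum(stats.norm.logpdf(x, loc=mu, scale=sd)), 2, n)
    b_lap = bic(np.sum(stats.laplace.logpdf(x, loc=loc, scale=sc)), 2, n)
    return "normal" if b_norm < b_lap else "laplace"

rng = np.random.default_rng(1)
# Population: Student-t with 5 df, so neither candidate family is exactly right.
draw = lambda size: rng.standard_t(df=5, size=size)

trials, flips = 200, 0
for _ in range(trials):
    small = draw(40)
    large = np.concatenate([small, draw(4000)])  # same population, more data
    if bic_winner(small) != bic_winner(large):
        flips += 1

print(f"BIC winner changed in {flips} of {trials} trials as the sample grew")
# Whatever count this prints, it is a finite-sample observation, not a
# logical guarantee -- which is the point of the comment above.
```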
miaomiao-alt commented
OK! Thank you so much!
jdramsey commented
Hey @miaomiao-alt ...is this issue done? Can I close it?