I think it would make sense to ask the user for only one predictions argument with at least two columns and to split that array internally. (Also in #221.)
In my tests, if you just divide the predictions in two, the estimates are virtually the same.
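For concreteness, here is a minimal sketch (not the actual test code) of what "dividing the predictions in two" could look like with the current two-argument interface. The object names `yrep`, `log_lik`, and `y` are illustrative placeholders, and which draws should inform the importance weights is exactly the open question discussed below.

```r
# Illustrative sketch only: split one set of predictive draws into two halves
# and feed them to the existing two-argument interface.
# Assumed inputs: yrep (S x N posterior predictive draws),
# log_lik (S x N pointwise log-likelihood), y (N observations).
library(loo)

S <- nrow(yrep)
odd  <- seq(1, S, by = 2)   # first half: odd-numbered draws
even <- seq(2, S, by = 2)   # second half: even-numbered draws

crps_split <- loo_crps(
  x       = yrep[odd, , drop = FALSE],
  x2      = yrep[even, , drop = FALSE],
  y       = y,
  # Only half of the draws inform the importance weights here; see the
  # concerns raised about this below.
  log_lik = log_lik[odd, , drop = FALSE]
)
crps_split$estimates
```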
@avehtari said:

I think it would be clearer to have this discussed in a separate issue (and tag @LeeviLindgren). I don't expect there is much difference in the easy case, but if the variability of the importance weights is large, then a) the importance weights are different for the two halves (while in the current implementation they are the same), and b) the number of draws for the importance weighting is halved, which increases the Monte Carlo error. It is possible that in practice a) and b) are not a problem, but this should be investigated more thoroughly.
Thanks for opening this. I agree that one argument would be simpler.
"In my tests"
Before we discuss any potential implementation issues, can you report the tests you have made? Please also report the results for loo_crps() and loo_scrps() with khat varying from less than 0.3 to larger than 0.7, and even better if the autocorrelation varies, too. Report the actual values from the two approaches ("virtually the same" is not well defined). I just want to be sure we're not losing too much accuracy, and we need to justify the splitting well. If you don't have time to do such tests, just comment here.
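A hedged sketch of such a comparison, in case it helps structure the report; `compare_split()` is a hypothetical helper and all object names are illustrative. The idea is simply to tabulate the actual estimates from both approaches, for loo_crps() and loo_scrps(), across fits whose Pareto khat ranges from below 0.3 to above 0.7.

```r
# Hypothetical helper: report the actual estimates from the current
# two-argument approach and from the split-one-matrix approach, for both
# scores. Assumed inputs: x, x2 two independent (S x N) sets of predictive
# draws (current interface); y the N observations; log_lik the (S x N)
# pointwise log-likelihood.
library(loo)

compare_split <- function(x, x2, y, log_lik) {
  S <- nrow(x)
  odd  <- seq(1, S, by = 2)
  even <- seq(2, S, by = 2)
  list(
    crps_current  = loo_crps(x, x2, y, log_lik)$estimates,
    crps_split    = loo_crps(x[odd, ], x[even, ], y, log_lik[odd, ])$estimates,
    scrps_current = loo_scrps(x, x2, y, log_lik)$estimates,
    scrps_split   = loo_scrps(x[odd, ], x[even, ], y, log_lik[odd, ])$estimates
  )
}

# Run on several fits spanning khat < 0.3 to khat > 0.7 (and, ideally,
# different autocorrelation) and report the resulting numbers.
```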