6 days ago
10 tests per model seems like way too few, and they should give confidence intervals. The 10/10 vs. 8/10 result is just as likely due to chance as to any real difference. But some people will definitely use this to justify model choice.


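As a sketch of why confidence intervals matter here (the Wilson score interval is one common choice for small samples; the 10/10 and 8/10 figures are from the benchmark under discussion):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - margin), min(1.0, center + margin))

# 10/10 gives roughly (0.72, 1.00); 8/10 gives roughly (0.49, 0.94).
print(wilson_ci(10, 10))
print(wilson_ci(8, 10))
```

The two intervals overlap heavily, so the data can't distinguish the models.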


I think >50% of my upvotes are from swiping right to go back but missing the screen border by a tiny bit, which triggers Jerboa's swipe-to-vote. So who knows, maybe they just accidentally hit the downvote arrow while scrolling?
I’m not talking about the quality of LLMs (they suck, in so many different ways…).
I’m criticizing the experiment setup; it is not statistically sound. Running only 10 tests on each of 52 different models is almost bound to have some model score 100% purely by chance (even if its true success rate is closer to 50%). Running 100 tests each might yield very different results, with none of the models answering correctly 100% of the time. Put another way, the p-values of the comparisons are high, not < 0.05, so the results don’t really support the conclusions drawn from them.
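The multiple-comparisons effect is easy to put numbers on. A back-of-the-envelope calculation, assuming each model answers questions independently with some true per-question success rate p (the p values below are hypothetical, not measured):

```python
# Probability that at least one of 52 independent models aces all 10
# questions by luck alone, for several assumed true success rates p.
N_MODELS, N_TESTS = 52, 10

for p in (0.5, 0.7, 0.8, 0.9):
    single = p ** N_TESTS                     # one model going 10/10
    any_model = 1 - (1 - single) ** N_MODELS  # at least one of the 52
    print(f"p={p}: one model {single:.4f}, any of {N_MODELS} {any_model:.3f}")
```

Even at p = 0.8 the chance that some model posts a perfect 10/10 is above 99%, and at p = 0.5 it is still around 5%, so a lone perfect score across 52 models says very little on its own.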