A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • Zos_Kia@jlai.lu · 2 days ago

    To clarify: model collapse is a hypothetical phenomenon that has only been observed in toy models under extreme circumstances. This is not related in any way to what is happening at OpenAI.

    OpenAI made a bunch of choices in their product design which basically boil down to “what if we used a cheaper, dumber model to reply to you once in a while”.

      • Zos_Kia@jlai.lu · 12 hours ago

        I’m sorry but no, models are definitely not collapsing. They still have a million issues and are subject to a variety of local optima, but they are not collapsing in any way. It is not known whether this can even happen in large models, and if it can, it would require months of active effort to generate the toxic data and fine-tune models on that data. Nobody is gonna spend that kind of money to shoot themselves in the foot.

    • XLE@piefed.social · 1 day ago

      The funny thing is, in order to get a query to the dumber model, they have to run it through a router model that selects the appropriate model first. This has resulted in new headaches for AI fans.
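      For what it’s worth, here is a rough sketch of that two-stage setup in Python. Everything in it (the model names, the difficulty heuristic, the call_model stub) is made up for illustration, not anything OpenAI has documented:

          # Hypothetical two-stage pipeline: a cheap "router" pass decides
          # which model actually answers. All names/heuristics are invented.

          def call_model(model: str, query: str) -> str:
              # Placeholder for a real inference call.
              return f"[{model}] response to: {query[:40]}"

          def route(query: str) -> str:
              # Stand-in for the router model: guess how hard the query is.
              hard_markers = ("prove", "derive", "debug", "step by step")
              if len(query) > 500 or any(m in query.lower() for m in hard_markers):
                  return "big-expensive-model"
              return "small-cheap-model"

          def answer(query: str) -> str:
              model = route(query)             # first pass: routing
              return call_model(model, query)  # second pass: the actual reply

          print(answer("What's the capital of France?"))
          print(answer("Derive the softmax cross-entropy gradient step by step."))

      Note that even the trivial query pays for the routing pass before it pays for the answer, which is exactly where the new headaches come from.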

      • Zos_Kia@jlai.lu · 1 day ago

        Yeah, that’s also something you have to train for. I’m not super aware of the technicals, but model routing is definitely important to the AI companies. I suspect that’s part of why they can pretend that “inference is profitable”: they are already trying to squeeze it down as much as possible.
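        A back-of-the-envelope version of that squeeze, with entirely invented prices, just to show the shape of the math:

            # Illustrative blended cost per query. All prices are invented.
            c_router, c_cheap, c_expensive = 0.0001, 0.001, 0.01  # $/query
            p_cheap = 0.7  # share of traffic the router sends to the cheap model

            blended = c_router + p_cheap * c_cheap + (1 - p_cheap) * c_expensive
            print(f"blended ${blended:.4f} vs always-expensive ${c_expensive:.4f}")
            # -> blended $0.0038 vs always-expensive $0.0100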

          • Zos_Kia@jlai.lu · 1 day ago

            Yeah, I remember that Ed article! I don’t think the technical aspects are relevant to the newer generation of models, but of course any attempt to compress inference costs can have side effects: either response quality will degrade from using dumber models, or you’ll have re-inference costs when the dumb model shits its pants. In fact, the re-inference can become super costly, as dumber models tend to get lost in reasoning loops more easily.
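            A sketch of that failure mode, assuming a crude (invented) loop detector and the same made-up prices as above. On a re-inference you pay for both models, and past a break-even failure rate the routing costs more than not routing at all:

                # Fallback sketch: retry on the big model when the cheap output
                # looks degenerate. The detector and names are illustrative only.

                def call_model(model: str, query: str) -> str:
                    # Placeholder for a real inference call.
                    return f"[{model}] response to: {query[:40]}"

                def looks_degenerate(text: str) -> bool:
                    # Crude loop detector: many repeated sentences.
                    parts = text.split(". ")
                    return len(parts) > 3 and len(set(parts)) < len(parts) // 2

                def answer_with_fallback(query: str) -> str:
                    draft = call_model("small-cheap-model", query)
                    if looks_degenerate(draft):
                        # Re-inference: this query now pays for BOTH models.
                        return call_model("big-expensive-model", query)
                    return draft

                # Expected cost with the invented prices ($0.001 cheap, $0.01
                # expensive): c_cheap + p_fail * c_expensive. At p_fail = 0.3
                # that's $0.0040; past p_fail = 0.9 you'd have been better off
                # just calling the expensive model directly.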