• theparadox@lemmy.world
    2 days ago

    The LLM craze is a natural maturation point of the AI field

    I don’t see why that is. Using ML to generate models that accurately perform specific tasks is orders of magnitude away from attempting to feed the entirety of human text into ML and expecting superhuman intelligence to emerge.

    now it’s expanded into foundational models (FM) which you would still probably just call LLMs because most people don’t know the differences.

    While ML and “AI” are not my field, I’m fairly certain that what I was attempting to describe in layman’s terms in my literal first sentence were these foundational models you are referring to.

    FMs are getting close to that point of a magical universal computer that you can tell it to do anything about anything and it just works.

    I have no direct experience outside of LLMs and I don’t really take issue with what I understand FMs to be, so long as they keep their scope narrow and focus on accurately completing specific tasks to assist humans. As soon as we hand off control and trust them blindly, without extensive trials ensuring their reliability and without failsafes in place to catch inaccuracies, I start raising concerns.

    My only experience is with LLMs - a few, minor attempts to “test the waters” of the major, publicly available models. I’ve been frustrated with my search results and glanced at the AI results. Work gave us Gemini licenses and I used it in similarly desperate situations for coding help and help with Google products, foolishly thinking that if any LLM designed to help with such tasks would be passably useful, it would be the LLM of the company that owns the products I seek help with. Unless something has changed drastically in the last month or so, every interaction has been a roll of the dice, to such an extent that my occasional “testing the waters” caused me to jump out and avoid it as much as possible. I simply can’t trust it not to hallucinate and gaslight me.

    What I see as the problem is moving way, way, way too quickly in trusting language models to do anything even remotely important. Human communication is extremely nuanced, complicated, fluid, and imperfect. Humans misunderstand each other during communication even when we have the context of in-person visual/audible cues and interpersonal history.

    • OpenStars@piefed.social
      16 hours ago

      What the pro-AI people always tend to argue back at comments like yours is that:

      1. you used the wrong AI - it should be <insert preferred model here> - probably Claude at this point in time, for programming? i.e. the implication being that you are some old man who yells at clouds and does not know what they themselves only learned <6 months ago, as if that knowledge entirely invalidates your own lived experience even from the last ~4 weeks.

      2. you used the wrong parameters / queries. When applied to the equivalent of Google searches, this seems a false claim to me: those used to be fairly brainless, whereas sometime soon Gemini is going to start charging $$$ in return for being able to find anything remotely helpful on the internet. For now, though, they would like it pretty please if you would help them train their model, before they turn around and sell it to you and others (isn’t it glorious how you are allowed to share in the work, without proportionate access to the reward at the end?).

      Tbf you probably did use the wrong queries for the programming questions. It seems to me to be like someone who actually lets a “self-driving car” drive by… itself? You are supposed to pay money for what is marketed one way, but the reality after purchase is quite different, and if you e.g. run over little children then it’s not the fault of those who sold you a “self-driving car”, but rather (legally speaking) you, who should not have allowed the car to drive by itself - how dare you not know better! (Despite being told to do precisely that, with a nod and a wink.)

      The AI hype is both real and false: LLMs are quite a capable tool if you ignore the hype and use them under much more constrained circumstances than the hype would lead us to believe (though the hype surrounding AI, rather than LLM technology itself, is the literal point of the OP).

      I stumbled upon this randomly and enjoyed the read: https://www.structural-integrity.eu/is-there-a-need-for-ai-after-capitalism/.