• Log in | Sign up@lemmy.world
    9 days ago

    Everyone else who has any sense: LLMs are shit and you shouldn’t trust them with executive power.

    You: just the cheap ones.

    Me: no, all of them. What kind of lunatic trusts control of anything important to a fundamentally stochastic process?

    • pixxelkick@lemmy.world
      8 days ago

      You: just the cheap ones

      I never said that. I just said that the cheap ones are especially shitty.

      People on this site really lack reading comprehension, it seems.

      • Log in | Sign up@lemmy.world
        1 day ago

        no its just the free models…

        You just have to be aware… when using a cheap model

        You: just the cheap ones

        I never said that.

        Ohhhhhhhhh, ok, yes, of course you never said or implied that. Not your repeated message at all. And yet you can’t keep away from addressing your criticism towards free or cheap LLMs! It’s like your subtext, or your underlying belief, is that if you just pay big tech enough money and they build a big enough set of server farms, it’ll be ok. No, it will not be ok, and the enshittification has begun from an already shitty base point.

        All LLMs are shit; the cheap and free ones are indeed just easier to spot as generating shit, if you ask them about things you know about. But you have to accept that they’re ALL shit, and STOP making get-out clauses for the expensive ones by firing your criticisms exclusively at the cheap or free ones.

        Giving ANY LLM executive power over your data is A BIG MISTAKE because you’re putting your data in the control of something which operates, at its heart, as a random number generator. They’re trained to sound right. People trust them because they sound right. This is a fundamental error.

        • pixxelkick@lemmy.world
          21 hours ago

          The only people who have these issues are people who are using the tools wrong, or poorly.

          Using these models in a modern tooling context is perfectly reasonable: go beyond mere guard rails and instead give them access only to explicitly approved operations in a proper sandbox.

          Unfortunately, that takes effort, know-how, skill, and an understanding of how these tools work.

          And unfortunately a lot of people are lazy and stupid, take the “easy” way out, and then (deservedly) get burned for it.

          But I would say, yes, there are safe ways to grant an LLM “access” to data in a way where it does not even have the ability to muck it up.

          My typical approach is keeping it sandboxed inside a Docker environment, where even if it goes off the rails and deletes something important, the worst it can do is crash its own Docker instance.
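          Roughly, that kind of locked-down container launch might be sketched like this (the image name, user ID, and paths are made up for illustration):

```python
import shlex
import shutil
import subprocess

def sandboxed_agent_cmd(workspace: str, image: str = "agent-sandbox:latest") -> list[str]:
    """Build a locked-down `docker run` invocation for an LLM agent.

    The image name and mount paths are hypothetical; the point is the
    combination of flags: the container is ephemeral (--rm), its root
    filesystem is immutable (--read-only), it runs as an unprivileged
    non-root user, and the only writable areas are a throwaway tmpfs
    and the bind-mounted workspace.
    """
    return [
        "docker", "run",
        "--rm",                 # ephemeral: nothing survives once it exits
        "--read-only",          # immutable root fs: deleting system dirs fails
        "--tmpfs", "/tmp",      # scratch space that vanishes with the container
        "--network", "none",    # no network unless you explicitly opt in
        "--cap-drop", "ALL",    # drop every Linux capability
        "--user", "1000:1000",  # unprivileged user, not root
        "--mount", f"type=bind,src={workspace},dst=/workspace",
        image,
    ]

cmd = sandboxed_agent_cmd("/home/me/agent-workspace")
print(shlex.join(cmd))

# Only actually launch it if docker happens to be installed:
if shutil.which("docker"):
    subprocess.run(cmd, check=False)
```

          Even if the agent trashes everything it can reach, restarting the container resets it to the image’s pristine state.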

          And then I set up, via MCP tooling, the commands and actions it can perform as an explicit opt-in whitelist. It can only run commands I give it access to.

          Example: I grant my LLMs access to git commit and status, but not rebase or checkout.

          Thus it can only commit stuff forward; it can’t change branches, rebase, or push.
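          As a minimal sketch, an opt-in wrapper of that kind could look like this (the allowlist contents are just an example):

```python
import subprocess

# Explicit opt-in allowlist: only these git subcommands are exposed as
# tools. rebase, checkout, push, etc. simply do not exist as far as the
# agent is concerned.
ALLOWED_GIT_SUBCOMMANDS = {"status", "commit", "diff", "log"}

def run_git(subcommand: str, *args: str) -> str:
    """Run a git subcommand only if it is on the explicit whitelist."""
    if subcommand not in ALLOWED_GIT_SUBCOMMANDS:
        # Anything off-list never reaches the shell at all.
        raise PermissionError(f"git {subcommand} is not an exposed tool")
    result = subprocess.run(
        ["git", subcommand, *args],
        capture_output=True, text=True, check=False,
    )
    return result.stdout
```

          The difference from a markdown “please don’t” instruction is that the forbidden commands fail before anything executes, no matter what the model outputs.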

          This isn’t hard, IMO, but too many people just YOLO it and raw-dog an LLM on their machine like a fucking idiot.

          These people are playing with fire, IMO.

          • Log in | Sign up@lemmy.world
            15 hours ago

            You’ll be the 4753rd guy with the “oops, my LLM trashed my setup and disobeyed my explicit rules for keeping it in check” story.

            You know, programmers who use LLMs believe they’re much more productive because they keep getting that dopamine hit, but when you actually measure it, they’re slower by about 20%.

            You’ve appointed yourself boss over a fast, plausible intern who pastes and edits a LOT of Stack Overflow code but never really understands it and is absolutely incapable of learning. You either spend almost all of your time in code review now for your stupid sycophantic LLM interns, who always tell you you’re right but never learn from you, or you’re checking in vast quantities of shit to your projects.

            You know really subtle, hard-to-find bugs on rare cases that pass your CI every single time? Or ones that no one in their right mind would have made, yet they compile and look right at first glance? They’re now your main type of bug. You are rotting your projects with your random number generator.

            And you think that all the money you’re paying for your blagging LLMs protects you from them fucking everything up for you. But it doesn’t. And you’ll also find that your contract with your LLM supplier expressly excludes them from any liability whatsoever arising from your use of it, instead pre-blaming you for trusting it.

            • pixxelkick@lemmy.world
              14 hours ago

              You’ll be the 4753rd guy with the oops my llm trashed my setup and disobeyed my explicit rules for keeping it in check

              Read what I wrote.

              It’s not a matter of “rules” it “obeys”.

              It’s a matter of it literally not even having access to do such things.

              This is what I’m talking about. People are complaining about issues that were solved a long time ago.

              People are running into issues that were solved long ago because they are too lazy to use the solutions to those issues.

              We now live in a world with plenty of PPE in construction, and people are out here raw-dogging tools without any modern protection and then making a shocked-Pikachu face when it fails.

              The approach of “I’m gonna tell the LLM not to do stuff in a markdown file” is tech from, like, 2 years ago.

              People still do that. Stupid people who deserve to have it blow up in their face.

              Use proper tools. Use MCP. Use a sandbox environment. Use whitelist opt in tooling.

              Agents shouldn’t even have the ability to do damaging actions in the first place.

              • Log in | Sign up@lemmy.world
                14 hours ago

                Ah yes, lovely MCP. Lovely Anthropic MCP. Make sure you give Anthropic lots of money and use their tools, and then you’ll be completely safe plugging the output of the LLM into the OS. Definitely fine, yes.

                I bet your contract with them says they’re not liable for shit their LLM does to your files, your environment, or your repositories, MCP or no MCP.

                Fool.

                • pixxelkick@lemmy.world
                  3 hours ago

                  Lovely anthropic mcp. Make sure you give anthropic lots of money and use their tools

                  It’s becoming clear you have no clue WTF you are talking about.

                  Model Context Protocol is a protocol, like HTTP, or a format like JSON.

                  It’s just an open-sourced format for data that anyone can use. Models are trained to be able to invoke MCP tools to perform actions, and anyone can make their own MCP tools; it’s incredibly simple and easy. I have a pretty powerful one I personally maintain myself.

                  Anthropic doesn’t make any money off me. In fact, I don’t use any of their shit, except maybe whatever licensing fees Microsoft pays them to use Claude Sonnet; Microsoft Copilot is my preferred service overall.

                  I bet you your contract with them says they’re not liable for shit their llm does to your files

                  Setting aside the fact that I don’t even use Anthropic’s tools, my Copilot LLMs don’t have access to my files either. Full stop.

                  The only context in which they do have access to files is inside the aforementioned Docker-based sandbox I run them in, which is an ephemeral, immutable system they can do whatever the fuck they want inside of, because even if they manage to delete /var/lib or whatever, I click one button to reboot and reset it back to a working state.

                  The working workspace directory they have access to has read-only git access, so they can pull and do work, but they literally don’t even have the ability to push. All they can do is pull in the stuff to work on and work on it.
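                  One way (among several) to get that pull-only behavior is to break the remote’s push URL; “DISABLED” here is just an invalid placeholder:

```python
import subprocess

def make_remote_pull_only(repo_dir: str, remote: str = "origin") -> None:
    """Point a remote's push URL at a bogus target so pushes always fail.

    `git remote set-url --push` overrides only the push URL, so fetch and
    pull keep working against the real URL while any `git push` dies
    immediately. "DISABLED" is just an invalid placeholder, not a real URL.
    """
    subprocess.run(
        ["git", "-C", repo_dir, "remote", "set-url", "--push", remote, "DISABLED"],
        check=True,
    )
```

                  (Server-side credentials with read-only scope are the more robust version of the same idea.)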

                  After they finish, I review what changes they made, and only I, the human, have the ability to accept or deny what they have done, and then actually push it myself.

                  This is all basic shit using tools that have existed for a long time, some of which are core principles of Linux and have existed for decades.

                  Doing this isn’t that hard; it’s just that a lot of people are:

                  1. Stupid
                  2. Lazy
                  3. Scared of Linux

                  The concept of “make a Docker image that runs an ‘agent’ user in a very low-privilege env with write access only to its home directory” isn’t even that hard.
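                  A bare-bones sketch of such an image (base image, username, and UID are arbitrary choices, not anyone’s actual setup):

```dockerfile
# Hypothetical minimal sandbox image for an LLM agent.
FROM debian:stable-slim

# Unprivileged user with no special groups; 1000 is an arbitrary UID.
RUN useradd --create-home --uid 1000 --shell /bin/bash agent

# Drop to the low-privilege user; from here on, only /home/agent is writable.
USER agent
WORKDIR /home/agent
```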

                  It took me all of 2 days to get it set up personally, from scratch.

                  But now my sandbox literally doesn’t even expose the ability to do damage to the LLM; it doesn’t even have access to those commands.

                  Let me make this abundantly clear if you can’t wrap your head around it:

                  The LLM agents that I run don’t even have commands exposed to them that can cause any damage; they literally don’t have the ability to do it, full stop.

                  And it wasn’t even that hard to do.