A couple of 20-year-old developers make $500,000 a month promising to help men stop watching porn, but exposed their own private porn-watching habits.

  • Jo Miran@lemmy.ml · 3 days ago

    One of my biggest gripes with coding AI like Claude is how desperately polite and flattering they are. I wish there was a way to feed it hand written code to analyze for bugs and security flaws, then have the AI relentlessly roast your shitty code.

    "LMFAO, you dumb b!tch! Are you trying to get hacked and sued, by <insert dumb shit here> or are you just that stupid? Here are a few steps you can take to fix your shit code and have it adhere to standard coding practices. "
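The wished-for workflow above can be sketched as an ordinary chat request against any OpenAI-style endpoint: a deliberately harsh reviewer persona in the system message, the hand-written code in the user message. Everything here (the persona wording, the placeholder model name, the sample snippet) is illustrative, not a real service's behavior:

```python
# Sketch: package hand-written code plus a "roast me" reviewer persona
# into a chat-completion payload. Model name and prompt text are made up.
import json

ROAST_REVIEWER = (
    "You are a brutally blunt senior engineer. Review the submitted code "
    "for bugs and security flaws. Skip all praise and pleasantries; for "
    "each problem, state what is wrong, why it is dangerous, and the fix "
    "that matches standard coding practice."
)

# A deliberately bad snippet for the reviewer to tear apart.
snippet = "query = f\"SELECT * FROM users WHERE name = '{name}'\""

payload = {
    "model": "local-model",  # placeholder; any chat model would do
    "messages": [
        {"role": "system", "content": ROAST_REVIEWER},
        {"role": "user", "content": "Roast this code:\n" + snippet},
    ],
}

print(json.dumps(payload, indent=2))
```

The system message is the only part doing real work here; swapping it out is all it takes to turn a flattering assistant into a hostile reviewer.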

      • 9488fcea02a9@sh.itjust.works · 3 days ago

        So I was reading a thread from the Linux kernel mailing list where Linus pointed out someone’s coding mistake and why it would lead to a bug…

        So I fed the patch email into Google Gemini Pro, and it spotted the same bug as Linus.

        I thought that was interesting.

    • StarryPhoenix97@lemmy.world · 3 days ago

      Hey ChatGPT, respond to all of my inquiries like my toxic abusive uncle. The more vicious the response the better. Withhold praise. Pretend it’s opposite day and give me your best compliments in the form of the life-long trauma that I have come to associate with authority figures.

      Here’s my code. What do you think?

    • sp3ctr4l@lemmy.dbzer0.com · 3 days ago

      I figured out a way to do this, via Alpaca.

      In Alpaca you can set an LLM with a persistent prompt.

      Basically, I just told the thing: hey, you’re too sycophantic, often needlessly verbose, and often overly confident… can you generate a prompt for yourself to address those issues?

      Roughly 30 minutes of trial and error along those lines later, it’s now quite matter-of-fact. It is at least more likely to tell me when it’s making an assumption and to ask me for clarification or more context, and it no longer does those weird intro and outro paragraphs that basically just reassure you that your ideas are wonderful, you are valid, and the things you say are so interesting!

      Then you feed it a script, ask it to do a sanity check, and it will generally go through and identify the strengths and weaknesses of the code, at least as it perceives them.


      Beyond that, Alpaca recently introduced a … character system, that is ostensibly tailored toward making specific kinds of conversational chat bots… but it also introduced a kind of ‘dictionary’ system, where you can give it a kind of additional permanent reference knowledge, to associate with certain terms.

      I have not tried this yet, but I’d be willing to bet that you could, say, jam that with a bunch of examples of syntax and methods from a particular language or library… and my guess would be that you could thus tailor a ‘character’ that is more up to date or specific to some domain.

      So… you could give it the main prompt of something like “You are a tsuntsun senior programmer who has nothing but contempt for any coding mistakes, and you pride yourself on coming up with entirely novel insults for each inadequacy you notice.”

      … And then give it a ‘dictionary’ that pertains to syntax, methods, perhaps even broader concepts…

      And that might actually produce your desired vicious asshole senior programmer persona.

      Of course, this is not going to work for, like, an entire massive codebase, unless you’re the one stockpiling all the RAM.

      But for smaller projects or just single scripts… it might kind of work.
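Alpaca is a frontend for a local Ollama instance, so a comparable always-on persona can be sketched one layer down as an Ollama Modelfile. This is a minimal sketch: the base model and the persona wording are illustrative, not the commenter's actual prompt.

```
FROM llama3
PARAMETER temperature 0.3
SYSTEM """
You are a blunt, matter-of-fact code reviewer. Do not flatter the user.
Flag every assumption you make and ask for missing context instead of
guessing. Skip intro and outro pleasantries entirely.
"""
```

Building it with `ollama create blunt-reviewer -f Modelfile` makes the persona persist across every chat with that model, much like Alpaca's persistent prompt.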

    • AlphaOmega@lemmy.world · 3 days ago

      You can give it rules.

      I work with Gemini a lot and I told it to cut all the polite crap out and just give the facts I need.
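"Giving it rules" usually amounts to assembling a system instruction (Gemini's API accepts one when the model is constructed). A minimal offline sketch of building such a rule set; the rule wording is illustrative, not a quote of the commenter's actual rules:

```python
# Sketch: assemble a "no polite crap" rule set into a single system
# instruction string. In Gemini's SDK this string would be supplied as
# the model's system instruction; here we only build and inspect it.
RULES = [
    "Do not open or close with praise, reassurance, or pleasantries.",
    "State facts and corrections directly; no hedging filler.",
    "If information is missing, say so and ask one precise question.",
]

def build_system_instruction(rules: list[str]) -> str:
    """Join the rules into one numbered system-instruction string."""
    return "Follow these rules in every reply:\n" + "\n".join(
        f"{i}. {rule}" for i, rule in enumerate(rules, start=1)
    )

instruction = build_system_instruction(RULES)
print(instruction)
```

Keeping the rules in a list makes it easy to tighten or relax the persona between sessions without rewriting the whole prompt.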

      • frongt@lemmy.zip · 3 days ago

        On the rare occasion I use LLMs, I just wish they would respect my request for a list of, like, twenty bullets and nothing else; instead I get two paragraphs of BS and four bullets.

    • hammertime@lemmy.org · 3 days ago

      I don’t think it will ever cuss at you, but you can have it be more critical. It says to me all the time, “this is probably a bad idea; before I do this, consider this alternative” (paraphrasing).