return2ozma@lemmy.world to Technology@lemmy.world · English · 22 hours ago

**Testing suggests Google's AI Overviews tell millions of lies per hour** (arstechnica.com)

Cross-posted to: [email protected], [email protected]
8oow3291d@feddit.dk · 9 hours ago

> LLMs don't have any intentions.

Eh. The output from LLMs is usually pretty goal-oriented, so it arguably has intentions. The LLM is not designed to deceive, though, so in that sense it's correct that these aren't lies.
supamanc@lemmy.world · 5 hours ago

An LLM is a statistical modeling tool. It doesn't have goals. It can't have intentions. It just outputs according to an algorithm.
deliriousdreams@fedia.io · 9 hours ago

The people who program, run, and maintain the LLM have intentions. The LLM is not a sapient or sentient entity.