Young people have grown increasingly skeptical of artificial intelligence, even those who use it daily, according to a new Gallup poll of more than 1,500 people aged 14 to 29.

AI use among Gen Zers has not declined, but neither has it increased since the same poll was conducted in 2025. The latest poll found that AI use is plateauing among young users, accompanied by rising concern about the technology’s consequences.

The findings are significant because Gen Z is “the generation most likely to enter or grow within the workforce over the next decade,” the report notes, meaning that their adoption could determine the trajectory of broader societal AI adoption. Gen Z has already overtaken Boomers in the workforce. Right now, the AI world is preparing for a massive jump in expected demand, and the top tech and financial companies are investing billions upon billions of dollars into building out the supply. Experts have warned that if demand does not pan out exactly as expected in the short term, then it could have disastrous consequences for the economy.

  • jj4211@lemmy.world · ↑3 · 12 hours ago

    I’m realizing it’s not GenAI itself that bothers me so much as the fact that it makes everyone who already annoyed me even worse.

    People who flood the internet with low-value clickbait? Now they can flood even more: more text, plus lots of video.

    People who see a popular content creator putting out good stuff and then try to do a knock-off? Now I might watch that knock-off for 15 seconds before I realize it’s trash.

    People who like to tell everyone else how to do jobs they themselves have no experience with? Well, here’s GenAI to help them claim they can do someone’s job better.

    Megalomaniac billionaires with messiah complexes? Well, GenAI makes them think they are gods. Elon’s Grok even just casually drops Elon praise into content for no reason.

    Executives that view themselves as “thought leaders” and are dismissive of their employees? Yeah, they are itching to lay off some people.

  • AHamSandwich@lemmy.world · ↑11 · 22 hours ago

    My gen Z kid was pretty excited because it seemed like AI had so much promise. Promise to do his homework for him. He has since learned that he spends more time coaxing the right answers out of it than he would just doing the work himself.

    • jj4211@lemmy.world · ↑2 · 13 hours ago

      My kid never really thought it was useful for fact-based material from the outset, and understood that the point of the work was to actually learn something; the output doesn’t matter and is utterly meaningless if AI generates it.

      I have used it to speed up double-checking their math homework. If it agrees, I assume the homework was done right. If it disagrees, I do the math manually myself; of the disagreements, about 70% turn out to be the AI getting something wrong.

      My kid did, for a time, use it for entertainment, but decided pretty quickly that the chatbots added nothing to the experience that my kid didn’t already bring, so that stopped as well.

      Very firmly anti-AI slop too.

      It caused some friction when my wife got into a GenAI story for a few days, but she got bored of the concept too.

  • scaredoftrumpwinning@lemmy.world · ↑12 · 1 day ago

    After my Google Maps was updated to use Gemini, things got so much worse. I was trying to dictate a message asking my daughter to send a message to the group chat. Every time I said “group chat” in my dictation, it would search for the group chat, never find it, and make me start over. I finally gave up and told her to call me.

    OTOH Alexa is better at farting so we have that.

  • UnderpantsWeevil@lemmy.world · ↑17 · 1 day ago

    Enshittification barely allowed these tools to function as advertised for more than a few years.

    They’re already getting siloed behind paywalls, tranched into “unusable trash at $TooMuch/mo” and “barely working as intended at $WayTooMuch/year” tiers, and porked up with useless ephemera. The cutting-edge stuff - Sora, for instance - gets trashed almost as soon as it is released. The OpenClaw shit just fucks your shit up if you’re not babysitting it constantly. Mythos isn’t even for public consumption. Grok is entirely for CSAM. A bunch of these models are just being turned over to the military, because Pentagon officials know to keep eating that bullshit and never complain.

    What’s the draw anymore? Come use this garbage application that is going to render your job obsolete largely by tanking the overall economy.

    • partofthevoice@lemmy.zip · ↑6 · 1 day ago

      That is why I make sure to download and back up interesting models from Hugging Face. I can even run some of them on my hardware.

      I fully expect that some innovation will one day make models much easier to run locally, including the huge ones - like quantization already does, but better. Around the same time, I also expect a fresh push from OpenAI and the like to make access to open models much more difficult. So I want a repository ready in preparation. Fuck them in advance.

  • MonkeMischief@lemmy.today · ↑17 · 1 day ago

    Gen Z is “the generation most likely to enter or grow within the workforce over the next decade,” the report notes

    …I like to think “most likely” implies there’s a small chance the entire generation will somehow figure it out and just collectively bypass “tHe WoRkFoRcE” altogether. Lol

  • UltraGiGaGigantic@lemmy.ml · ↑1 · 23 hours ago

    Let me know when I can get a femboi Mark Zuckerberg AI uploaded to my sexbot 4200. Till then, this tech is dead to me.

  • KelvarCherry [They/Them]@piefed.blahaj.zone · ↑39 · 2 days ago

    I am once again asking you to petition your local school boards to block generative AI usage by students and teachers alike. AI is being pushed on these kids at a young age, and I feel that, with the downward-trending attention spans of Gen Alpha and Gen Z, it will turn into a lifelong dependency. Plus, the loss of the licenses from the school system will be a significant dent in the AI metrics.

    Here are some demands: block ChatGPT/Gemini/Copilot/DeepSeek on school networks (like porn and gaming sites are blocked); prohibit use of AI-generated images and text on assignments and teaching materials; ensure no assignments will require or recommend the use of AI output at any point.

    These suggestions are based on reports I’ve heard from students. Please feel free to comment your own recommendations or information.

      • KelvarCherry [They/Them]@piefed.blahaj.zone · ↑2 · 5 hours ago

        I absolutely do not. I’m focused on one inroad to AI usage. My mind has been gravitating toward schools because they are run by local government boards that people can reasonably challenge and get a seat on, and whose representatives are much more accountable and much easier to persuade.

        It’s a lot more difficult to stop, say, a corporate middle manager from pushing AI on their employees. Though, to that point, employees can leave jobs, whereas students have much less agency over the school curriculum and can be coerced into AI dependency by school authority (which I have heard is happening). Child and young adult brains are also far more malleable, and I fear AI dependency would have a worse, perhaps irreversible, effect on them.

        Thank you for prompting me to clarify ^^

    • strifegroove@ani.social · ↑13 ↓1 · 2 days ago

      As a sysadmin: we already do this during exams, and these fucking kids still figure out how to bypass shit.

      We had one the other day use Google Chrome Remote Desktop (I had no idea that was a thing) to remote into their home PC, bypassing the exam network block.

      The only reason we caught them early, rather than during grading, was that the idiot uploaded their final answers from their home PC.

      • KelvarCherry [They/Them]@piefed.blahaj.zone · ↑8 · 1 day ago

        Perhaps they will. You will always have bad actors. My suggestions are really more focused on the messaging those kids get from schools.

        I have heard that many students have gotten assignments on which one of the requirements is to use Text/Image generation, or for which they are a suggested tool. Stopping students from being encouraged to use AI is far more important (and far easier) than stopping every kid from cheating on assignments with it.

        • strifegroove@ani.social · ↑1 · 1 day ago

          Yes I agree.

          However, sadly, I am not the one teaching. I just run the magic curtain of user management and internal servers.

        • strifegroove@ani.social · ↑2 ↓1 · 2 days ago

          Trust me, I am also confused by this, but it mostly boils down to “you can’t expect these kids not to be idiots who show up logged out of Office or whatever”.

    • TubularTittyFrog@lemmy.world · ↑12 · 1 day ago

      It will be another year or two.

      They are still actively investing in this. Once the money starts drying up, the bubble will pop.

      And all the tech companies will ask the government for a trillion dollars to cover their asses, and we will give it to them, because America.

      • hitmyspot@aussie.zone · ↑2 · 21 hours ago

        It doesn’t need to dry up. They could totally pop it by continuing to increase their costs exponentially while investment remains high. Or users could just get bored and move on.

  • o_oli@lemmy.world · ↑49 ↓2 · 2 days ago

    Once you use AI enough, you start to peer behind the curtain and see that it’s all just a magic trick, not actually magic like it seems at first. So yeah, I think it’s unsurprising that people would come to this conclusion.

    • jj4211@lemmy.world · ↑1 · 12 hours ago

      I’m surprised it takes so long, honestly. I keep seeing a progression of people who think they have uniquely figured out how to avoid the pitfalls of GenAI, then get hit with the same mistakes everyone gets hit with and make a shocked-Pikachu face when the LLM does something it “promised” not to do. They will not believe anyone telling them that an LLM generating the phrase “I commit to avoiding deleting any data” doesn’t mean it actually committed to anything. Even when that fails, they think the LLM saying “I have made a mistake, and I have learned from it and I won’t allow it to happen again” means something, and are shocked again when, surprise, that also doesn’t mean anything.

      Of course, just last week someone was asking me if I had tried some GenAI stuff, saying they had been thinking about trying it. Shockingly, some people have managed to avoid it, so I guess the companies have more folks to burn through…

    • partofthevoice@lemmy.zip · ↑8 · 24 hours ago

      Read up on information theory. These machines are glorified autocomplete engines, built by exploiting redundancy within language. You give one a billion sentences, and you ask it “what comes after ‘the dog’?” It says something random like “limousine.” You penalize it for the wrong answer, which means it updates its weights to point ever so slightly further away from such nonsense. Do this hundreds of millions of times, and suddenly the weights start to be pretty well tuned. Input “the dog” might output “sat” now. Good job.

      I mean… there’s definitely more fancy stuff going on. But this seems to be something fundamental about it all. Given as much, I can’t help but feel like… yeah… they do suck at what they’re most often used for, and it’s not surprising why.
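
The penalize-and-nudge loop that comment describes can be sketched as a toy bigram predictor. Everything here (the four-sentence corpus, the 0.1/0.05 update sizes) is invented for illustration, and real models use gradient descent on neural networks rather than per-context weight tables, but the core dynamic of repeated next-token prediction with small corrections is the same:

```python
# Toy sketch of "penalize the wrong answer, nudge the weights" training.
from collections import defaultdict

corpus = ["the dog sat", "the dog sat", "the dog ran", "the cat sat"]
vocab = sorted({w for line in corpus for w in line.split()})

# weights[context][token]: how strongly `token` is predicted after `context`.
weights = defaultdict(lambda: defaultdict(float))

def predict(context):
    scores = weights[context]
    return max(vocab, key=lambda tok: scores[tok])  # ties break alphabetically

# Many passes over the data: reward observed continuations, penalize wrong guesses.
for _ in range(50):
    for line in corpus:
        words = line.split()
        for ctx, nxt in zip(words, words[1:]):
            guess = predict(ctx)
            weights[ctx][nxt] += 0.1         # nudge toward the real next word
            if guess != nxt:
                weights[ctx][guess] -= 0.05  # nudge away from the wrong guess
```

After enough passes, `predict("dog")` returns `"sat"`, the most common continuation in the corpus; the early random-looking guesses get penalized away.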

        • Hakuso@scribe.disroot.org · ↑4 · 1 day ago

          An institution uses a local model to help organize massive amounts of data, not to crawl the whole web stealing everything in sight while being anthropomorphized by corporations trying to sell you a friend or a waifu…

          Which you won’t be able to run because RAM is $500 now.

          There are valid uses for LLMs, but I think anyone who calls it “AI” is definitely selling a scam.

            • leftzero@lemmy.dbzer0.com · ↑4 · 1 day ago

              I didn’t say OpenAI’s or Nvidia’s, nor their investors’ (though, to be fair, Nvidia will probably still end up profiting once the bubble pops, the bastards; after all, in a gold rush it’s the ones selling the mining equipment who end up making a profit).

              I specifically mentioned the scammers on top, who will grab the cash and run as soon as it starts popping.

              The economy will end up worse than in the 1929 crash, sure, but not for those bastards.

              So, yeah, it can, and it is, because it’s what the whole scam was designed for.

        • CanIFishHere@lemmy.ca · ↑4 ↓5 · 1 day ago

          I have a buddy who uses AI to read through contracts to identify high-risk commitments that might cost the company money. There are thousands more uses.

            • CanIFishHere@lemmy.ca · ↑2 · 1 day ago

              I thought it would be clear because it’s a contract, but we are talking about financial risks, not health risks. He is using a corporate-trained AI client, and when it finds an issue he (the human) still reviews it. According to my buddy, his productivity has improved by over 25%.

              • hark@lemmy.world · ↑2 · 13 hours ago

                If the AI has missed risks and he didn’t bother checking (since this is where the added productivity comes from) then the company gets to enjoy those risks.

          • quack@lemmy.zip · ↑10 ↓1 · 1 day ago

            That’s horrifying. I really hope he’s triple-checking everything.

            • CanIFishHere@lemmy.ca · ↑2 · edited · 10 hours ago

              He reviews anything the AI flags. As I already mentioned, the AI client is looking for financial risks, e.g. a contract committing the company to something it doesn’t have the capability of delivering. I used to do something very similar. One obvious example would be a customer asking for unlimited liability. The company can’t commit to that because it could be bankrupted.

      • o_oli@lemmy.world · ↑12 · 2 days ago

        For sure, it’s amazing for some things. But it also appears to do more than it really does until you become familiar with it. I think everyone new to using AI should quiz it on topics they are knowledgeable in, to realise how much shit it makes up.

        Also, yeah, I’m specifically talking about LLMs, because I think that’s 95%+ of AI usage right now by volume.

        • Grandwolf319@sh.itjust.works · ↑16 ↓1 · 2 days ago

          For sure, it’s amazing for some things.

          I’m still skeptical about this.

          Most of those things are usually due to the alternative being intentionally bad.

          Like Google getting worse, or bad company documentation, or corporate-speak emails that could just be straight to the point.

          • o_oli@lemmy.world · ↑1 · 23 hours ago

            Maybe? But to give an example of where I think it’s been pretty cool: summarising my Dungeons & Dragons session notes, being available to answer questions, and spinning up ideas on the fly. I can take horrible, inconsistent notes with holes in them, and an LLM straightens them all out into any format I need. If I need a small piece of world building and ran out of time, I can get it to spit a few ideas at me; often generic ideas and tropes are actually what I am after. If I forgot something that happened 6 months ago, I can just… ask it. It can pull up stuff I noted offhand and totally forgot about, no problem. For this sort of use, where it’s like an admin assistant and inaccuracy is totally unimportant, it’s a good tool.

            Maybe that’s a really niche example but it’s one of the few cases where I can see long term use with zero downsides.

            Ultimately it’s powerful at consolidating large volumes of information and allowing the user to probe at that information. As long as the use case can tolerate inaccuracies and hallucinations then it’s fine.

  • scytale@piefed.zip · ↑55 ↓3 · 2 days ago

    It doesn’t matter, because companies are mandating that their workforce use it whether they like it or not. For personal use, yes, interest may be waning, but wanting to use AI is not a factor when you are forced to use it at work.

    • plateee@piefed.social · ↑32 · 2 days ago

      Enterprise is really the only option companies like Anthropic and OpenAI have left. They’re drastically underpricing the service for what it costs to provide and users have shown that price hikes don’t fare well.

      But enterprises? Well you just made AI a core part of your software development workflow, what are you supposed to do, start manually reviewing bitbucket merge requests? Rewrite your Jenkins pipelines? No, when the price hikes come, businesses will pay, and then downsize to reduce that opex.

      • RagingRobot@lemmy.world · ↑14 · 2 days ago

        I’m thinking that eventually the corporations will all be AI all the way through and control even more than they do now.

        Small businesses and normal people will still do a lot of the work by hand because they will be priced out.

        But I also suspect there might be a good market for hand-crafted code, similar to other hand-crafted goods. Some people will prefer it.

    • Bananskal@nord.pub · ↑8 · 2 days ago

      If the entire software industry mandated AI, I would either create some kind of startup, or just reeducate myself into some other field, like landscaping or something. Because fuck AI.

      • Logi@lemmy.world · ↑4 · 24 hours ago

        I’ve been thinking similar thoughts. I don’t want my life to devolve into managing the fallout from the torrent of slop. Perhaps it’s not too late to switch to geophysics. Those guys get to do fieldwork.

        • Bananskal@nord.pub · ↑3 · 16 hours ago

          Might be good!

          I just didn’t become a software developer to generate output. I became a software developer to be a craftsman. Writing the code is the fun part. If that is taken away from me, I’m out.

          • Logi@lemmy.world · ↑1 · 7 hours ago

            Yeah. I’ve spent (shock, horror) 25+ years resisting being moved “up” to management. Maybe the jig is just up and I’ll be managing a horde of idiot savant bots.

            I guess I could deal with many of the characters on the screen being auto-generated, if I’m still directing the data structures and stuff, but…

            :old-man-yelling-at-world:

  • ImgurRefugee114@reddthat.com · ↑26 ↓2 · 2 days ago

    It’s quite useful and fun to play with. Certainly not reaching AGI from scale alone…

    I think we’re reaching the top of the S-curve with regards to LLMs specifically. They’re a neat gimmick and likely will have an important role to play, but I don’t think they’re going to meet the (completely artificial and grifty) hype Altman et al have been slinging.

    Which was pretty easily predicted, but no one knew for sure. Such financial-class leeches always feed on unfulfilled hopes, and sell dreams of the future.

    • Einskjaldi@lemmy.world · ↑8 · 2 days ago

      They’re good enough to act as natural-language translators, which is an absolute revolution for computers, so they’re useful for automating some tasks that are too fuzzy or vague for basic programs.

      • Lambda@lemmy.ca · ↑4 ↓2 · 2 days ago

        This is what I’ve found among a lot of professionals when asked about AI: every task it does could easily be done by anyone with enough domain knowledge and moderate scripting ability. It just cuts out the need to learn a CLI and a scripting language, in exchange for a lack of scalability and efficiency, plus it has more domain knowledge sets than any one person (though none too deeply).

        E.g., it is often used as a poor man’s awk or perl for analyzing emails. But for a lot of people, being able to scan 10,000 documents, find all references to a soft regex, and tabulate them is something they genuinely couldn’t do on their own before “AI”. Never mind that if you hand that problem to any sysadmin worth their salt, they probably already have an alias for it. Not surprising when you realize that the average person thinks Penelope Garcia is an accurate depiction of how such tasks are done and thinks such abilities are far beyond their own capabilities.
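
For concreteness, the scan-and-tabulate task that comment describes is a short script; the `.txt` file layout and the invoice-number pattern below are hypothetical placeholders, not anything from a real workflow:

```python
# Scan a directory tree of text files, find every match of a pattern,
# and tabulate the counts - the scripted alternative to asking an LLM.
import re
from collections import Counter
from pathlib import Path

def tabulate_matches(root, pattern):
    """Count every case-insensitive match of `pattern` in .txt files under `root`."""
    regex = re.compile(pattern, re.IGNORECASE)
    counts = Counter()
    for path in Path(root).rglob("*.txt"):
        # Ignore undecodable bytes rather than crash on odd encodings.
        text = path.read_text(errors="ignore")
        counts.update(match.group(0).lower() for match in regex.finditer(text))
    return counts
```

Pointed at a mail archive with a pattern like `r"INV-\d+"`, this tallies every invoice reference in one pass; the awk or perl equivalent is a one-liner.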

  • RagingRobot@lemmy.world · ↑7 ↓3 · 2 days ago

    It’s useful. A little too useful. Enough that it starts to break all of the systems we have set up for our society. Especially capitalism. Now we will either need to adapt those systems or shun the technology for people to be hopeful. Otherwise it puts the technology at odds with its users.