Young people have grown increasingly skeptical of artificial intelligence, even those who use it daily, according to a new Gallup poll of more than 1,500 people aged 14 to 29.
AI use among Gen Zers has not declined, but it also has not grown since the same poll was last conducted in 2025. The latest results show AI use plateauing among young users, accompanied by rising concern about the technology’s consequences.
The findings are significant because Gen Z is “the generation most likely to enter or grow within the workforce over the next decade,” the report notes, meaning that their uptake could determine the trajectory of broader societal AI adoption. Gen Z has already overtaken Boomers in the workforce. Right now, the AI industry is preparing for a massive jump in expected demand, and the top tech and financial companies are investing billions upon billions of dollars into building out the supply. Experts have warned that if demand does not pan out as expected in the short term, it could have disastrous consequences for the economy.
My Gen Z kid was pretty excited because it seemed like AI had so much promise. Promise to do his homework for him. He has since learned that he spends more time coaxing the right answers out of it than he would just doing the work himself.
After my Google Maps was updated to use Gemini, things got so much worse. I was trying to send a message to my daughter asking her to post something in the group chat. Every time I said “group chat” in my dictation, it would search for the group chat, never find it, and make me start over. I finally gave up and told her to call me.
OTOH Alexa is better at farting so we have that.
Let me know when I can get a femboi Mark Zuckerberg AI uploaded to my sexbot 4200. Till then, this tech is dead to me.
Enshittification barely allowed these tools to function as advertised for more than a few years.
They’re already getting siloed behind paywalls, tranched into “unusable trash $TooMuch/mo” and “barely working as intended $WayTooMuch/year”, and porked up with useless ephemera. The cutting-edge stuff - Sora, for instance - gets trashed almost as soon as it is released. The OpenClaw shit just fucks your shit up if you’re not babysitting it constantly. Mythos isn’t even for public consumption. Grok is entirely for CSAM. A bunch of these models are just being turned over to the military, because Pentagon officials know to keep eating that bullshit and never complain.
What’s the draw anymore? Come use this garbage application that is going to render your job obsolete largely by tanking the overall economy.
That is why I make sure to download and back up interesting models from Hugging Face. I can even run some of them on my own hardware.
I fully expect that some innovation will one day make models much easier to run locally, including the huge ones - like quantization already does, but better. Around the same time, I also expect a fresh push from OpenAI and the like to make access to open models much more difficult. So I want a repository ready in preparation. Fuck them in advance.
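For anyone wanting to do the same, here’s a rough sketch of what that looks like with the Hugging Face libraries (the repo ID is just a placeholder; swap in whatever you want to preserve):

```python
# Sketch: back up a model repo locally, then load a 4-bit quantized copy.
# Assumes huggingface_hub, transformers, and bitsandbytes are installed,
# and that you have a CUDA GPU for the quantized load.
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

REPO = "mistralai/Mistral-7B-v0.1"  # placeholder: any open-weights repo

# Grab the full repo (weights, tokenizer, config) into a folder you control.
local_dir = snapshot_download(repo_id=REPO, local_dir="./model-backup")

# Quantization shrinks the weights so they fit on modest hardware.
quant = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(
    local_dir, quantization_config=quant, device_map="auto"
)
```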
And don’t forget that those payment tiers are heavily subsidized.
Gen Z is “the generation most likely to enter or grow within the workforce over the next decade,” the report notes
…I like to think “most likely” implies there’s a small chance the entire generation will somehow figure it out and just collectively bypass “tHe WoRkFoRcE” altogether. Lol
I am once again asking you to petition your local school boards to block generative AI usage by students and teachers alike. AI is being pushed on these kids at a young age, and I worry that, with the downward-trending attention spans of Gen Alpha and Gen Z, it will harden into a lifelong dependency. Plus, losing school-system licenses would put a significant dent in the AI companies’ usage metrics.
Here are some demands: block ChatGPT/Gemini/Copilot/DeepSeek on school networks (like porn and gaming sites are blocked); prohibit use of AI-generated images and text on assignments and teaching materials; ensure no assignments will require or recommend the use of AI output at any point.
These suggestions are based on reports I’ve heard from students. Please feel free to comment your own recommendations or information.
Why do you feel this technology is safe for adults to use if you think it’s harmful to students?
As a sysadmin: we already do this during exams, and these fucking kids still figure out how to bypass shit.
We had one the other day use Google Chrome Remote Desktop (had no idea that was a thing) to remote into their home PC, bypassing the exam network block.
The only reason we caught them early, rather than during grading, was because the idiot uploaded their final answers from their home PC.
Why aren’t they doing the exams with a pen and paper?
Perhaps they will. You will always have bad actors. My suggestions are really more focused on the messaging those kids get from schools.
I have heard that many students have gotten assignments where using text/image generation is one of the requirements, or where it is a suggested tool. Stopping students from being encouraged to use AI is far more important (and far easier) than stopping every kid from cheating on assignments with it.
Yes I agree.
However, sadly, I am not the one teaching. I just run the magic curtain of user management and internal servers.
Well, at least they are creative
why do they need internet access during the exam?
Trust me, I am also confused by this, but it mostly boils down to “you can’t expect these kids to not be idiots who show up logged out of Office or whatever”.
logged out of Office
That sentence suggests so many layers of dysfunction.
Have you considered that they don’t need any of this technology to learn? That it might actually introduce confusion and distraction, which will reduce attention and focus?
Oral lectures are going to become the standard again within a few years’ time, I bet.
Keep dreaming
Just fucking pop this stupid bubble already. I want to be able to buy cheap RAM and hardware again.
it will be another year or two.
they are still actively investing in this. once the money starts drying up then the bubble will pop.
and all the tech companies will ask the government for a trillion dollars to cover their asses, and we will give it to them, because America.
It doesn’t need to dry up. They could totally pop it by continuing to increase their costs exponentially while investment remains high. Or users just get bored and move on.
Once you use AI enough, you start to peer behind the curtain and see how it’s all just a magic trick, not actually magic like it seems at first. So yeah, I think it’s unsurprising people would come to this conclusion.
Read up on Information Theory. These machines are glorified autocomplete engines, built by exploiting redundancy within language. Like… you give it a billion sentences, and you ask it “what comes after ‘the dog’?” It says something random like “limousine.” You penalize it for the wrong answer, which means it updates its weights to ever so slightly point further away from such nonsense. You then do this hundreds of millions of times, and suddenly the weights start to be pretty well tuned. Input “the dog” might output “sat” now. Good job.
I mean… there’s definitely more fancy stuff going on. But this seems to be something fundamental about it all. Given as much, I can’t help but feel like… yeah… they do suck at what they’re most often used for, and it’s not surprising why.
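If it helps to see it concretely, here’s a toy version of that penalize-and-nudge loop in PyTorch (four-word vocabulary, one training example; real models do this over billions of examples):

```python
# Toy sketch of next-word training: penalize wrong guesses, nudge the weights.
import torch
import torch.nn as nn

vocab = ["the", "dog", "sat", "limousine"]
idx = {w: i for i, w in enumerate(vocab)}

# A tiny model: word embedding -> scores over the vocabulary.
model = nn.Sequential(nn.Embedding(len(vocab), 8), nn.Linear(8, len(vocab)))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

context = torch.tensor([idx["dog"]])  # condition on the last word of "the dog"
target = torch.tensor([idx["sat"]])   # the answer we want

for _ in range(200):
    logits = model(context)         # current scores; start out random
    loss = loss_fn(logits, target)  # high when it prefers "limousine"
    opt.zero_grad()
    loss.backward()                 # point the weights away from nonsense
    opt.step()

print(vocab[model(context).argmax().item()])  # -> "sat" after enough nudges
```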
It’s a tool like any other. It has its use.
And that use is…?
An institution uses a local model to help organize massive amounts of data, not crawl the whole web stealing everything in sight while being anthropomorphized by corporations trying to sell you a friend or a waifu…
Which you won’t be able to run because RAM is $500 now.
There are valid uses for LLMs, but I think anyone who calls them “AI” is definitely running a scam.
Generating BS that upper-level management loves to skim through?
Filling up Jensen Huang’s pockets. Also Sam Altman and others, but mostly Huang.
It can’t do that.
I didn’t mean OpenAI’s or Nvidia’s pockets, nor their investors’ (though, to be fair, Nvidia will probably still end up profiting once the bubble pops, the bastards; after all, in a gold rush it’s the ones selling the mining equipment who end up making a profit).
I specifically mentioned the scammers on top, who will grab the cash and run as soon as it starts popping.
The economy will end up worse than in the 1929 crash, sure, but not for those bastards.
So, yeah, it can, and it is, because it’s what the whole scam was designed for.
The same as lorem ipsum: it’s great for filling up space with text.
Okay it can do that. That is valid.
I have a buddy who uses AI to read through contracts to identify high-risk commitments that might cost the company money. There are thousands more uses.
Holy shit, that’s terrifying; your buddy is criminally negligent. It can’t do that reliably. It doesn’t do “reliably”.
I thought it would be clear because it’s a contract, but we are talking about financial risks, not health risks. He is using a corporate trained AI client. When the AI client finds an issue he (the human) still reviews it. According to my buddy his productivity has improved by over 25%.
That’s horrifying. I really hope he’s triple-checking everything.
He reviews anything the AI flags. As I already mentioned, the AI client is looking for financial risks, e.g. a contract committing the company to something it doesn’t have the capability of delivering on. I used to do something very similar. One obvious example would be a customer asking for unlimited liability; the company can’t commit to that because it could be bankrupted.
A script of Ctrl+F searches would be just as useful and more reliable.
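For what it’s worth, the baseline being proposed here is roughly this (phrase list and folder are made up for illustration):

```python
# Sketch of the "script of Ctrl+F's" idea: flag contract files that mention
# known risk phrases and print some context for human review.
import re
from pathlib import Path

RISK_PATTERNS = [
    r"unlimited liab\w+",
    r"indemnif\w+",
    r"liquidated damages",
    r"auto[- ]?renew\w*",
]

def flag_risks(folder: str) -> None:
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(errors="ignore")
        for pat in RISK_PATTERNS:
            for m in re.finditer(pat, text, flags=re.IGNORECASE):
                start = max(m.start() - 40, 0)
                print(f"{path.name}: ...{text[start:m.end() + 40]}...")

flag_risks("./contracts")
```

The obvious limitation is that a keyword list only catches the phrasings you thought of in advance, which is the gap the LLM is being used to paper over.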
For sure, it’s amazing for some things. But it also appears more capable than it actually is, until you become familiar with it. I think everyone new to using AI should quiz it on topics they are knowledgeable in, to realise how much shit it makes up.
Also yeah I’m specifically talking about LLMs because I think that’s 95%+ of AI usage right now in volume.
For sure, it’s amazing for some things.
I’m still skeptical about this.
Most of those things are usually due to the alternative being intentionally bad.
Like Google becoming bad, or bad company documentation, or corporate-speak emails that could just be straight to the point.
Maybe? But to give an example of where I think it’s been pretty cool: summarising my Dungeons & Dragons session notes, and being available to answer questions or spin up ideas on the fly. I can take horrible and inconsistent notes with holes in them, but an LLM straightens them all out into any format I need. If I need a small piece of world building and ran out of time, I can get it to spit a few ideas at me; often generic ideas and tropes are exactly what I’m after. If I forgot something that happened 6 months ago, I can just… ask it. It can pull up stuff I noted offhand and totally forgot about, no problem. For this sort of use, where it’s like an admin assistant and inaccuracy is mostly unimportant, it’s a good tool.
Maybe that’s a really niche example but it’s one of the few cases where I can see long term use with zero downsides.
Ultimately it’s powerful at consolidating large volumes of information and allowing the user to probe at that information. As long as the use case can tolerate inaccuracies and hallucinations then it’s fine.
It doesn’t matter, because companies are mandating their workforce to use it whether you like it or not. For personal use, yes, interest may be waning, but wanting to use AI is not a factor when you are being forced to use it at work.
Enterprise is really the only option companies like Anthropic and OpenAI have left. They’re drastically underpricing the service for what it costs to provide and users have shown that price hikes don’t fare well.
But enterprises? Well, you just made AI a core part of your software development workflow; what are you supposed to do, start manually reviewing Bitbucket merge requests? Rewrite your Jenkins pipelines? No, when the price hikes come, businesses will pay, and then downsize to reduce that opex.
I’m thinking that eventually the corporations will all be AI all the way through and control even more than they do now.
Small business and normal people will still do a lot of the work by hand because they will be priced out.
But I also suspect there might be a good market for hand crafted code similar to other hand crafted goods. Some people will prefer it.
If the entire software industry mandated AI, I would either create some kind of startup, or just reeducate myself into some other field, like landscaping or something. Because fuck AI.
I’ve been thinking similar thoughts. I don’t want my life to devolve into managing the fallout from the torrent of slop. Perhaps it’s not too late to switch to geophysics. Those guys get to do fieldwork.
It’s quite useful and fun to play with. Certainly not reaching AGI from scale alone…
I think we’re reaching the top of the S-curve with regards to LLMs specifically. They’re a neat gimmick and likely will have an important role to play, but I don’t think they’re going to meet the (completely artificial and grifty) hype Altman et al have been slinging.
Which was pretty easily predicted, but no one knew for sure. Such financial-class leeches always feed on unfulfilled hopes, and sell dreams of the future.
They’re good enough to act as natural-language translators, which is an absolute revolution for computers, so they’re useful for automating some tasks that are too fuzzy or vague for basic programs.
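As a concrete illustration of that “fuzzy task” niche, here’s a minimal sketch using the OpenAI Python client (the model name and categories are placeholders):

```python
# Sketch: turn a fuzzy free-text request into a category a basic program
# can act on. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

ticket = "hey, my laptop won't connect to the projector in room 204 again"

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Classify the IT ticket as one of: network, hardware, "
                    "software, access. Reply with the single word only."},
        {"role": "user", "content": ticket},
    ],
)
print(resp.choices[0].message.content)  # e.g. "hardware"
```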
This is what I’ve found among a lot of professionals when asked about AI: every task it does could be easily done by anyone with enough domain knowledge and moderate scripting ability. It just cuts out the need to learn a CLI and a scripting language, in exchange for a loss of scalability and efficiency, plus it covers more domains than any one person could (though none too deeply).
E.g., it is often used as a poor man’s awk or perl for analyzing emails. But for a lot of people, being able to scan 10,000 documents, find all references to a soft regex, and tabulate them is something they genuinely couldn’t do on their own before “AI”. Never mind that if you hand that problem to any sysadmin worth their salt, they probably already have an alias for it. Not surprising when you realize that the average person thinks Penelope Garcia is an accurate depiction of how such tasks are done, and believes such abilities are far beyond their own capabilities.
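The sysadmin one-liner being alluded to is roughly this kind of thing (pattern and paths are illustrative):

```python
# Sketch: scan a pile of emails for a fuzzy pattern and tabulate the hits.
import re
from collections import Counter
from pathlib import Path

pattern = re.compile(r"invoice[#\s-]*(\d+)", re.IGNORECASE)  # a "soft regex"

counts = Counter()
for path in Path("./mail").rglob("*.eml"):
    for match in pattern.finditer(path.read_text(errors="ignore")):
        counts[match.group(1)] += 1

for invoice_id, n in counts.most_common(20):
    print(f"invoice {invoice_id}: {n} mentions")
```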
In my personal experience substitute teaching, plenty of them seem fine with it.
It’s useful. A little too useful. Enough that it starts to break all of the systems we have set up for our society, especially capitalism. Now we will either need to adapt those systems or shun the technology if people are to stay hopeful. Otherwise it puts the technology at odds with its users.
It’s useful for CERTAIN tasks. AI is significantly over-applied as a “miracle product”.