Young people have grown increasingly skeptical of artificial intelligence, even those who use it daily, according to a new Gallup poll of more than 1,500 people aged 14 to 29.
AI use among Gen Zers has not declined, but it has not increased either since the same poll was conducted in 2025. The latest poll found that AI use is plateauing among young users, accompanied by rising concern about the technology’s consequences.
The findings are significant because Gen Z is “the generation most likely to enter or grow within the workforce over the next decade,” the report notes, meaning that its uptake could determine the trajectory of broader societal AI adoption. Gen Z has already overtaken Boomers in the workforce. Right now, the AI world is preparing for a massive jump in expected demand, and the top tech and financial companies are investing billions upon billions of dollars into building out the supply. Experts have warned that if demand does not pan out exactly as expected in the short term, the consequences for the economy could be disastrous.



And that use is…?
An institution uses a local model to help organize massive amounts of data, not crawl the whole web stealing everything in sight while being anthropomorphized by corporations trying to sell you a friend or a waifu…
Which you won’t be able to run because RAM is $500 now.
There are valid uses for LLMs, but I think anyone selling it as “AI” is definitely running a scam.
Generating BS that upper-level management loves to skim through?
Filling up Jensen Huang’s pockets. Also Sam Altman and others, but mostly Huang.
It can’t do that.
I didn’t say OpenAI’s or Nvidia’s, nor their investors’ (though, to be fair, Nvidia will probably still end up profiting once the bubble pops, the bastards; after all, in a gold rush it’s the ones selling the mining equipment who end up making a profit).
I specifically mentioned the scammers on top, who will grab the cash and run as soon as it starts popping.
The economy will end up worse than in the 1929 crash, sure, but not for those bastards.
So, yeah, it can, and it is, because it’s what the whole scam was designed for.
the same as lorem ipsum. it’s great for filling up space with text.
Okay it can do that. That is valid.
I have a buddy who uses AI to read through contracts to identify high-risk commitments that might cost the company money. There are thousands more uses.
Holy shit that’s terrifying, your buddy is criminally negligent. It can’t do that reliably. It doesn’t do ‘reliably’.
I thought it would be clear because it’s a contract, but we are talking about financial risks, not health risks. He is using a corporate-trained AI client. When the AI client finds an issue, he (the human) still reviews it. According to my buddy, his productivity has improved by over 25%.
If the AI has missed risks and he didn’t bother checking (since this is where the added productivity comes from) then the company gets to enjoy those risks.
That’s horrifying. I really hope he’s triple-checking everything.
He reviews anything the AI flags. As I already mentioned, the AI client is looking for financial risks, e.g. a contract committing the company to something it doesn’t have the capability of delivering on. I used to do something very similar. One obvious example would be a customer asking for unlimited liability. The company can’t commit to that because it could bankrupt the company.
A script of Ctrl-F’s would be just as useful and more reliable.
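For what it’s worth, the “script of Ctrl-F’s” idea really is a few lines of code. The sketch below is purely illustrative: the phrase list and the sample contract text are made-up assumptions, not taken from any real review tool, and a real reviewer’s list would be far longer.

```python
import re

# Hypothetical red-flag phrases a contract reviewer might search for.
RISK_PHRASES = [
    "unlimited liability",
    "indemnify",
    "auto-renew",
    "liquidated damages",
]


def flag_risks(contract_text: str) -> list[tuple[str, int]]:
    """Return (phrase, line_number) pairs for each risky phrase found,
    matching case-insensitively, one pass per line."""
    hits = []
    for lineno, line in enumerate(contract_text.splitlines(), start=1):
        for phrase in RISK_PHRASES:
            if re.search(re.escape(phrase), line, flags=re.IGNORECASE):
                hits.append((phrase, lineno))
    return hits


# Illustrative sample contract text (invented for this example).
sample = (
    "1. Supplier shall deliver goods monthly.\n"
    "2. Customer accepts Unlimited Liability for data loss.\n"
    "3. This agreement shall auto-renew annually.\n"
)
print(flag_risks(sample))
# → [('unlimited liability', 2), ('auto-renew', 3)]
```

Of course, a fixed phrase list only catches wordings you thought of in advance, which is the trade-off against an LLM pass: the script never misses its keywords, but it also never generalizes beyond them.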