I’ve noticed an uptick in the number of pro-AI posts on this platform.
Various posts with titles similar to “When will people stop being afraid of AI” or “Can we please acknowledge AI was very needed for X”
Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.
AI (LLMs) is/are a fantastic tool.
But that’s what it is, a tool that can make some tasks easier.
It’s not world-changing like some tech bros and CEOs think it is because they don’t actually understand the technology.
It’s also not the apocalypse or The Matrix or Skynet coming to end civilization. It’s just a tool.
After the AI bubble bursts, AI will still be there, as a tool for humans to use.
I think it’s possible that some of the people you see on Lemmy may have started using AI a little more in their lives and see it for what it is.
You know what’s crazy is that everyone has begun rebranding things that existed before AI as AI.
The algorithm summary of a common question in Google results? Now it’s AI.
Trello’s automation tasks moving items marked as “Done” to archive? Now it’s AI✨
It’s idiotic lol
Marketing BS. The bad part is all the C-Suites falling for it.
To be fair, given the power consumption it requires, it definitely leans towards civilization ending.
We also have “the Internet” slurping up massive amounts of energy.
Current Global Electricity Breakdown:
- Total Data Center/Infrastructure Demand: Approximately 2.0% of global electricity.
- AI-Specific Share: Roughly 0.5% of global electricity.
- “Traditional” Internet/Cloud: Roughly 1.5% of global electricity.
The Internet is also a tool that humanity uses. Should we shut that down too? (I would argue yes considering how the “Information Superhighway” somehow made the average person dumber, but that’s a different discussion.)
Except the Internet is actually useful. AI has not shown that it deserves to use that insane amount of energy. It’s actually insane that you think AI isn’t an issue when it’s using 1/3rd as much energy as the ENTIRE INTERNET
Google at some point also was a great tool. Wikipedia also joins the rankings. LLM chatbots are great but certainly not the primary source of information.
What annoys me is that people began to use them to avoid doing simple things like writing their own posts about their own things. They began to generate content instead of making it. It is obvious that anything that takes time to produce will most certainly be automated once the tools are given. But this annoys the hell out of me.
Seeing posts, comments, content generated by LLM, I feel that I am being robbed of artistry, curiosity, interactions with real people. I can automate chats with my family, friends, colleagues, children. But that wont be me. That will be perfect grammar sentence generator, not me - real, tons of mistakes, typos, mostly renting about everything, passionate, bored, funny, witty, dull me.
It saddens me that LLMs are executing an (almost?) final blow to a society that is already sustaining terminal damage from social media.
They began to generate content instead of making it. […] [This] annoys the hell out of me.
Seeing posts, comments, content generated by LLM, I feel that I am being robbed of artistry, curiosity, interactions with real people.
That is probably the greatest irritation I have with my wife right now. I don’t wanna start fights over it, but I also don’t make a secret of my disdain that she uses LLMs for her work. I get it, she has to, because her business requires churning out a lot of text quickly to stay competitive and I want her to succeed, but I hate what the internet has come to and I hate that she participates in that race to the bottom.
typos, mostly renting about everything
That is either a wonderful coincidence or a clever joke, but I love it either way.
Unfortunately we will always have problems explaining to people how to use the right tool for the right job.
The old “if all you have is a hammer, everything looks like a nail” saying still applies.
Using LLMs to automate your social media is dumb as shit and I don’t understand why people are doing that. It is actively destroying social media. Which may be the natural end-state of a social media platform. Isn’t that why most of us are on Lemmy right now? Because of the state of Reddit and Xitter?
Also, generative AI making art and music and literature is dumb as shit too. Why would you make an AI that does the fun stuff that humans actually want to do? I can’t wait to have AI finish playing BioShock for me…
LLMs are neat, and useful for some things - but as with practically everything in modern society, capitalism is ruining it.
It’s also not the apocalypse […] It’s just a tool.
So, the problem with tools is that their existence still affects the systems they’re a part of.
For instance, war between the US and Russia is much more dangerous now (yes, it used to be dangerous before as well) because now we have nuclear bombs. We did a whole cold war thing about it. Nuclear bombs change the world even when they’re not being used.
Similarly, meth is just a tool. It is entirely possible to smoke meth, not become addicted, have a great time, vacuum your entire house I guess, come down, chill, and move on with the rest of your life. But, that’s not what we would say meth’s effect on society is, is it?
I am so happy that you are capable of using AI without becoming a psychopath. I am concerned about the psychopaths.
If people weren’t fucking stupid, these scams would eventually stop working.
What’s it been, 4 years since NFTs? And AI morons are already falling for this shit.
I lean anti-AI, but comparing generative AI to NFTs is very strange to me. Even if you didn’t intend to imply any similarity beyond both being scams, surely generative AI is at least a much more compelling scam.
LLMs can now understand, to some extent, almost any text humans can. They might not be able to reason about it well, but they can at least translate it, summarize it, etc. If you had asked me 10 years ago, I’d have told you there was a near-zero chance of that happening within our lifetimes. NFTs were just “if we put baseball cards on the blockchain, people might buy them because of that same quirk of psychology.”
Transformers are like blockchain: an interesting use of mathematical principles to solve certain problems in a novel way, where the hype around that core attracts charlatans and scammers and combinations of the two traits who claim that it will go on to solve totally different problems in such a way as to revolutionize the world we live in.
NFTs were the end of that line for blockchain where the machine started to eat itself. I can see a future, stable use of blockchain in some limited contexts, but cryptobros have always overstated the contexts in which that particular type of digital ledger can be more useful than other types of digital ledgers.
We’ll see where the end of the road is for transformers, and what’s left at the end. I believe that computer inference will always be useful in some contexts, and that the advances in huge models with absurdly large numbers of parameters have unlocked some previously impractical tasks, but I could also see that settling into a general background existence as just another technological tool for doing things in a world that still looks pretty similar to the world today.
“You don’t understand, she’s REAL! Especially if you use your left hand.”

You look lonely…
Same. I noticed that I finally got banned from a few random instances I’d never visited before under my moderation history, and they were all by the same guy who claimed I was an “anti-AI troll” lmao
The most hilarious part of this is I feel so dispassionate about the subject, I can seldom remember what it was I might have commented; it was probably something like “yeah this looks like slop” hahaha
If you ignore or are blissfully unaware of the negatives – and all the companies behind all the major product lines do their best to hide and minimize them – then it’s easy to find utility. Basically everyone I know IRL actively chooses to use AI for something. Both CRAP (Computer-Rendered Artificial Pictures) and code generation are very common.
When I point out the ethical issues, I am generally dismissed entirely (“they’ll fix that” or “my impact is small”) or countered with something about quality (“it works now” and “it’s getting better”), which I find beside the point.
code generation
You mean Slopware “Development”?
(I opted to keep the “Development”, putting it in quotes as a sarcastic nod to the fact it’s no longer actual development)
Sort of. A friend used it to generate some “tests” of questionable quality, a cousin is using it to help her learn and use a DSL (my term, not hers) for interactive tasks for her students, another friend was using it for source code generation, but I don’t recall the specific results.
I disagree that it is no longer development, I see LLMs as yet another tool for generating code, and we’ve had generated “source” code since before C was standardized. I think any code output by most LLMs is derivative of so many works under so many licenses that it is likely not possible to distribute it at all without violating some copyright, and it is certainly unacceptable for any Free Software project; I think this is ethically true even if courts find LLM outputs are not derivative works or not subject to copyright protection at all – at least as long as copyright protects Disney. But I know people that are working on a Free Software LLM, and “the Stack” provides enough information that you could provide all the necessary attributions for works derived from it.
While LLM hallucinations are a real concern, they can be less impactful when doing code generation because of all the automated static checks plus the culture of peer-review. But, I also tend to favor languages with static type systems.
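The point about static checks catching hallucinations can be illustrated with a toy sketch. This `undefined_names` helper is hypothetical and purely illustrative (real linters like pyflakes do far more), but it shows how an identifier an LLM invented gets flagged before the code ever runs:

```python
import ast
import builtins

def undefined_names(source: str) -> set[str]:
    """Toy static check: report names that are used but never
    defined, assigned, or imported in the given source."""
    tree = ast.parse(source)
    defined = set(dir(builtins))
    used = set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            defined.add(node.name)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            for alias in node.names:
                defined.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.arg):
            defined.add(node.arg)
        elif isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                defined.add(node.id)
            else:
                used.add(node.id)
    return used - defined

# A helper the LLM hallucinated never gets past the check:
print(undefined_names("total = frobnicate_sum([1, 2, 3])"))
# → {'frobnicate_sum'}
```

A compiler or type checker does the same job more thoroughly, which is why hallucinated APIs in generated code tend to surface quickly in statically checked languages.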
I disagree that it is no longer development, I see LLMs as yet another tool for generating code, and we’ve had generated “source” code since before C was standardized.
Fair. There is a difference between using LLMs to generate boilerplate code customised to your context or provide a starting point if you’re stuck on a given problem and struggle to find a different perspective for approaching it, and using it to get around having to do mental work.
My term is intended for the kind of vibe coding where there is little, if any, technical skill involved and people are just letting LLMs slop together code without meaningful code quality assurance. In those cases, I don’t think it warrants recognition as development. If it produces workable results, cool. Call it software generation.
Using it as a learning assistant would probably be the most justified use case, in my opinion. I have my reservations about whether it is suitable for that purpose, but I don’t know enough about the specific way it is applied to comment on that. If it produces training code that isn’t directly published, you dodge the legal iffiness, and if it helps build skills, that solves the “relying on AI makes you unlearn skills” issue.
Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.
They’re both “annoying teenage tech-bros who are detached from reality” and they are spreading propaganda they picked up elsewhere.
I think AI has positives to help people; that being said, I think it’s out of control currently. I hope the bubble bursts soon and we can actually get to a reasonable balance.
I hope the bubble bursts soon and we can actually get to a reasonable balance.
In fictional stories yes, in reality no. The only application that AI will find is to replace all employees, and people will be thrown out into the street.
NLP interfaces are nice. It’s also great at reformatting data.
Honestly, the problem when talking about “AI” is how many different things that can mean.
- General AI chats
- Coding agents
- Automated pentesting/vulnerability discovery
- Image/video/music generation
- Grammar checking
- Automated support agents (phone or chat)
- Autonomous weaponry
and so many more. Being pro-AI could mean you like one or two applications of AI but are against it in the others. I know very few people that like it for media generation. However, there have been a lot of long-standing vulnerabilities in very popular open source projects that were only just discovered. That seems like a pretty undeniable use case demonstrating its usefulness.
Then of course there’s governments that want to get their greedy blood thirsty hands on it to create autonomous weaponry. So now if you try to defend AI for a use case like defensively finding program vulnerabilities you somehow also have to defend AI weaponry?
For a generic AI model, it is very powerful and can either be used to grow yourself or abused so your brain doesn’t have to work at all. You can use AI to do the hard work for you, or use it as a personal tutor to guide you into what to learn. People will of course mention hallucinations as why it can’t be used to learn, but you don’t have to take AI at its word. If you were to ask it to create a lesson plan on what you should study for a subject, in what order, and what resources are available, you can do all of the actual learning using content the AI has no control over. So what you do with that is going to be up to the person, and opinions on it are going to vary wildly.
Some people argue any use case is not okay given the various concerns of energy and water usage, and where those models sourced their training data. Not to mention if you support AI you must be supporting the AI companies. I agree there are concerns about the environmental impact, and the training data discussion is a long one on its own. However, I do think you can support AI as a technology and not be okay with the way the technology is currently being done with regard to environmental impact. And given AI can be run on a local machine, I don’t think it has to be tied to big tech at all.
“AI” is such a wide and immense topic. And what we talk about with AI today will not be relevant come next year with how quickly it is developing. We shall see if some form of Moore’s law applies to the growth of AI as far as efficiency and quality go.
One of the first things I say when non tech people ask me about ““AI”” is :
“The term AI here is just marketing wank”
Zoomers and gen x that drank the kool aid. What’s worse is they are saying yes to high paying jobs to fuck us all in the ass.
As a member of GenX (1980)…
Yep, that sounds like my peers. Most of them believe the marketing or are at least convinced enough to indulge. The hold-outs are getting more infrequent.
I used to feed AI anything I wrote that I wanted to sound professional, to save me time and brain power. Not only do I have no need for that anymore, considering I’ve just accepted that my CS degree was truly a waste of my life, but now I realize I’d be encouraging the building of data centers, so now I’m fully radicalized to never use them.
Dude, your CS degree is not a waste. AI is just a tool. Anyone who thinks they can replace their staff with it is in for a rude awakening. I understand how much harder it is to get your foot in the door though. It’s not permanent though. I remember when “no code” was going to take the jobs. The job just changes a bit.
I’ve been around long enough to have experienced multiple technologies that were the “end” of programmers and yet they still exist.
As you pointed out, the job changes a bit, but we are still here. When I started, the job was a lot more about compilation. You had to remember exact syntaxes (spelling, letter cases, line continuation, etc.) and code optimization. You couldn’t just look up a function name or something like a win32 API by typing part of it into your code editor. You couldn’t even just go to Google and search because Google and the Internet didn’t exist. You had a literal shelf of books next to your desk that were heavily worn and you referenced constantly. Books got handed down from senior programmers to junior programmers. The senior got a new book that wasn’t held together by a rubber band and the junior got a stack of pages, often partially glued together by coffee stains, that contained invaluable notes in the margins.
Compilers used to be really dumb. Schooling, blogs, articles, etc., these days are all about “readable code”, but for a long time readability wasn’t even in the top 10 or 20 things that you thought about. Just getting the damn thing to compile was easily half of your job and time spent. Schooling and articles spent a massive amount of time discussing optimizations and memory usage. Things like “if else” vs “switch”, which one was actually better and how you could abuse both. Just in case you were wondering, “switch” was king and the “if else” lovers can go fuck themselves.
I have seen massive shifts in the industry, and companies will use any excuse to fire everyone useful and eviscerate themselves in the name of short term profits. People used to talk about IBM, HP, Sun, Dell, Compaq, etc., like they talk about Amazon and Facebook now. But those are just brands owned by some new titan that didn’t even exist that long ago.
CS will come back. It will be a little different, but new companies will rise from the carcasses of all those that tried to replace developers with AI.
Honestly, given what Facebook is these days, I am more surprised that they still have that many software developers to lay off than I am with the idea that they are laying off people due to AI.
Humans are social animals; in the United States especially, where people are severely separated, they’ll look for and find any kind of easy access to social interactions, including but not limited to chat bots. It’s a sad reality that they would dismiss the negative effects it has on our social brains, dismiss the environmental effects it has on our planet, dismiss the social warnings, because they’re too involved with LLM “AI”.
That’s right, it’s not even AI; it’s only large language models or some agentic systems. Way smaller ones existed in the past; think Dr. Sbaitso (1992) or A.L.I.C.E. (1995). It’s actually not hard to make a chat bot: just have it echo what the user says with some key phrases. That’s the whole existence of chat bots, and today’s current “AI” only has a LOT more variables that were generated off of huge randomly generated data sets (both off of free open sources and stolen data), and that’s what causes it to hallucinate: it’s the randomness that humans don’t have the ability to change or update, simply because it’s such a huge list of variables. It’s so massive people think it’s real intelligence! PEOPLE WERE FOOLED BY 1990’s CHAT BOTS TOO! 😭 😂
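That “echo what the user says with some key phrases” trick really is only a few lines of code. A sketch (the rules here are hypothetical, purely illustrative of the ELIZA-era pattern):

```python
# Minimal ELIZA-style bot: keyword rules plus echoing the user's
# own words back -- the trick behind chat bots from the 1960s on.
RULES = {
    "i feel": "Why do you feel {rest}?",
    "i am": "How long have you been {rest}?",
    "because": "Is that the real reason?",
}

def respond(message: str) -> str:
    lower = message.lower().rstrip(".!?")
    for key, template in RULES.items():
        if key in lower:
            # Echo back whatever followed the matched key phrase.
            rest = lower.split(key, 1)[1].strip()
            return template.format(rest=rest)
    return f"Tell me more about {lower}."  # plain echo fallback

print(respond("I feel lonely."))  # → Why do you feel lonely?
```

People in the 1960s attributed understanding to exactly this kind of pattern-matching; the variables just got vastly more numerous since then.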
Anywho we recommend the movies Desk Set, Space Odyssey, pi and even Alphaville. They’re related to the subject and they’re pretty good at pointing out the bruhs.
Sure, sure. So when LLMs find 0days that have been around for a decade, they’re just cold reading and stroking the sloperator’s ego. Got it.
If your point was to say “LLMs are good because it can hack into people’s PCs and make the world worse” I think you gotta start setting priorities towards finding some empathy.
Besides, it was not discovered by an LLM or AI. It was discovered by Taeyang Lee, a researcher at Theori, and then later refined into an exploit chain by the Xint Code Research Team, both of whom used “AI”-assisted analysis. So no, LLMs didn’t magically find a decade-old exploit; LLMs were simply used as a search function over their trained model of past coding assets and the logic bug in the Linux kernel.
So yeah it’s basically a glorified search function at that point and if you can find peace fucking a search bar- hey man that’s your thing 🤷🏻♀️
Our sources:
Holy shit, are you a professional strawman builder? Because you’re really good.
An LLM helped fix a bug. That’s all we need to know. It’s useful. Saying so has nothing to do with empathy, lack thereof, or robosexuality or whatever the shit kids are in to these days.
idk about being a straw-man, but regardless, the reply was addressing the misleading framing and the failure to give proper credit to the researchers, and further noting that LLMs were used for analysis, not for full-on finding the exploit; so no, LLMs aren’t good at finding exploits without clear search inquiries by humans.
As for the empathy and the robo-sexuality: it was the intentional point of the original comment that people find heavy social relation to LLMs or other objects that are able to communicate back to them. Even in our examples of the movies, they touch on romantic/sexual relations with robots, and a couple of others point to the empathy for them as well. PS: these are topics from the 1950’s, not “whatever the shit kids are in to these days.” Most people affected by this are older generations and young adults without a social safety net.
Turning it around and phrasing it as LLMs being useful for finding exploits makes it sound more like you’re wanting to use LLMs for using said exploits rather than for better use cases. Regardless, it’s still not possible, nor ever will be, because again, LLMs can only use predetermined variables based on their previous learning data set plus random variables (PS: those undesirable random variables are what is commonly called hallucination; it’s just unwanted variables in a huge spaghetti code). It’s even on the site you sourced:
“Was this AI-found? AI-assisted. The starting insight — that splice() hands page-cache pages into the crypto subsystem and that scatterlist page provenance might be an under-explored bug class — came from human research by Taeyang Lee.”
If we misread your interpretation then our mistake, however the phrasing felt more that you were praising AI for finding exploits and not for actual good use and it read out to us like an ethical issue.
If making this stance clear, that LLMs do more harm than good in the case of chat bots and in being used as full-on replacements for people, makes us a straw-man, then IG we’re a straw-man or whatever lol.
Though we can probably agree that machine learning can, should, and has been used since the 1950s as glorified search and calculation engines for complex equations and datasets. They can be put to really good use generating and categorizing random protein molecules, finding patterns in cancer research, and even filtering examples astronomers find in the night sky; however, it’s overall useless without a qualified and passionate researcher who knows their stuff and can double-check their ML sifters.
Sources for the saucy beans:
- 2016 BBC report on rise of Romantic Robotics: https://www.bbc.com/news/uk-england-berkshire-35354263 (wild fucking article, the movies give more light to the subject than the article, but we included it as it’s an interesting read)
- Your own source for ease of reference: https://copy.fail/#faq (second to last drop-down in FAQ, titled: Was this AI-found?)
- 2020 History of Artificial Intelligence in Medicine https://www.giejournal.org/article/S0016-5107(20)34466-7/fulltext (Really good read on the history of Machine Learning in medicine)
- 2025 AI advance helps astronomers spot cosmic events with just a handful of examples https://phys.org/news/2025-10-ai-advance-astronomers-cosmic-events.html (AI helped filter and search for patterns it was taught from prior examples, and its output was still looked through and refined by proper astronomers)
^edit, fixed a bit of formatting lol^
The strawman-building is that you’re extrapolating really, really far based on a tiny comment, and so you’re making wild assumptions that aren’t relevant to the conversation. The accusation that I’m hoping to be able to use LLMs to find bugs for nefarious reasons is far out. In fact, ironically, your text reads like something a badly (or maliciously) configured LLM would produce.
I never claimed that somehow, unprompted, an LLM went out and found a bug. But LLMs are increasingly used as important tools in finding all kinds of problems in code. Going forward, as we get better at how to use these models, more bugs will likely be found. And if we can train other ML models on other kinds of data but with similar size, I think we’d be right to expect a lot.
I have no doubt that misuse of LLMs and other machine learning models is widespread. The parapsychology aside, I’m worried about how it’s being used in war and targeting, which will only get worse.
However I think it’s a bit disingenuous to portray LLMs as glorified search engines or autocorrect. It’s not wrong, it’s technically correct, but the utility is way beyond find-and-replace. It’s a bit like calling humans glorified tapeworms. Doesn’t really make for an interesting discussion.
I also think you’re wrong in asserting that LLMs or other ML models can only be useful for researchers on the edge of their fields. I guess we’ll see.
I hardly ever see them. I love being able to just set my home feed to subscribed communities.
I suppose it’s due to many people not seeing things as black or white, but as a variety of grays.
How dare they!
Current AI is unsuitable, but automation of some kind (maybe not AI) will be necessary for a nearly workless future. Life is kind of dumb as is, it’s better if we spent time in the gym, or doing yoga, or learning something, instead of spending life in the pesticide factory, then dying after 3 years of retirement from a horrific disease.
We already had (pre-2020) all the automation we needed to work less than 20 hr/wk and produce all the necessary calories, fresh water, and housing for everyone. But instead we chose to turn a few people into decabillionaires and continue to bicker over the scraps like we weren’t in a post-scarcity society.
LLMs, transformers, convolution layers, characteristic tensors, etc. all have some legitimately novel uses, but all the big “AI” product lines are unethically developed, irresponsibly deployed, and dishonestly marketed.
If you want an ethical chatbot, I recommend https://en.wikipedia.org/wiki/Apertus_(LLM) .
I don’t know of an ethical model that’s good for images or code yet, but I know people are working on them. The IBM Granite models are getting close, but I don’t know if IBM will ever get the training data completely “clean” / open / free.
I’ve been told that StarCoder is an ethically-trained free software model, but some of my research ( https://mot.isitopen.ai/model/StarCoder ) contradicts that assertion, and I’ve not looked into it deep enough to resolve that conflict myself. (IMO, we don’t actually need automated code generation, we need to write less code in better languages with better tests and more reuse; but you may not agree.)
I’ll use AI to summarize a long document. That’s about it.
Why?
Not the person you asked, but I’d guess it’s multi-factorial. First, LLM-based summaries ARE generally higher quality than what the pre-LLM summary tools output. Second, LLMs are being given away free at point of prompt and are easily found; while summarizing tools have existed at least since 2000 (MS Word contained one), they were not easily found and usually involved purchasing some larger software collection, or an onerous install process. Third, everyone* hates** reading: if you’ve ever had user support as part of your job, you’ve probably had at least one user where the message they read to you off of their screen tells them exactly what to do, but they chose to call you before really reading the message.
Also, I’m not sure what “long” is. It can be really hard to keep enough attention on something through 100s of pages, especially when it’s not trying to be engaging and is rather dry.
To OP, I would say that you might want to rethink using an LLM summary for any decision process. The LLM architecture makes “hallucinations” inevitable so eventually you are going to read an LLM summary that says the document includes something that it does NOT.
Oh certainly. Basically, I use it in the same way I used the Schaum’s outlines in university. The summary provides a quick outline. To get an actual understanding, I go to the source myself.
I do like to read, but slogging through an entire 500-page manual when I only need to read like six paragraphs to get my job done is a bit much. And yes, I do know how to use indices, but stuff can be buried amidst so many cross references.
So there are a few groups. There is the ignorant group, which you are part of, evident by your terminology use. When ignorant people do things they tend to be wrong, whether that is trusting AI or not trusting AI.
OP is baffled by the pro-AI people.
I’m baffled by the anti-AI people.
Fundamentally it seems bizarre to judge the quality of, for example, an image or a piece of music, by the process that created it: the proof is in the pudding.
I’m amazed at what AI is generating…it seems kind of fake to pretend a beautiful image isn’t beautiful when you discover it’s made by AI.
The arguments against AI are annoyingly reductionist or biased: e.g. focusing on occasional “hallucinations” as if the majority of AI productions aren’t, basically, impressive (or, at least, what was asked for by the user).
It reads like a child who’s never had a human interaction in their life and was raised by Elon Musk Stans.
AI slop is void of any creativity or originality, and the infrastructure required to make it is killing the environment at an unprecedented rate while also poisoning drinking water and driving up costs everywhere.
But hey, at least your mom got to show you Fruit Love Island on your iPad, I guess.
What do you hope to achieve with the personal attacks? You’ll only make me dislike “your side” even more. It only reveals how unpersuasive your position is…if you resort to shaming and insult to bully people into your position.
You care so much about water waste and the environment…but do you eat meat?
If so…all of a sudden your “rational justifications for an ethical position you have taken without bias” cease to be coherent with your other lifestyle choices.
As for “AI Slop” [an obvious propaganda term, designed to be reductionist] and its lack of X, Y and Z: it’s literally drawing on an ocean of X,Y and Z in the first place - the sum total of all X, Y and Z driven human artistic and creative endeavour.
As with so many political discussions: I suspect this one is pointless. Two sides, both alien to the other. I’m as unlikely to bring you round as you are to bring me around.
It processes information to generate new (often very beautiful) works: just like human artists.
You’ll only make me dislike “your side” even more.
The fact that you know this, state this, and will do this is exactly why we’re doomed as a species.
If you know what your lizard brain is doing and that it’s activated, at least have the good sense to not still pretend it’s someone else’s fault.
No.
Being a persuasive communicator and recruiting people to one’s political agenda has never been a matter of pure logic and reason: going around insulting “the other side” will not work.
Not that anything would: I judge the value of X by X. X could have been made by a sandstorm: if it’s beautiful it’s beautiful.
A piece of music, for example, is either enjoyable or it isn’t. Admittedly AI music has a way to go yet - but it’s clearly already superior to a percentage of human made music.
A piece of music, for example, is either enjoyable or it isn’t.
Or it grows on you over time and expands your range as a listener.
But 🤷 you’re just looking for mediocre simulacrums of art anyway, so of course you’re into GenAI “art”.
Dude created a brand new account just for this post, because they knew AI is actually fucked up and very unpopular here.
Is someone paying you to peddle this bullshit propaganda being pushed by the AI billionaires or are you just this dumb and gullible?
Do billionaire boots taste good?
It’s my first 24 hours here. If I could get paid for saying what I believe I’d gladly take the money. But honestly, I’m just a Reddit refugee - and have no idea about the ideological bent of the users of this platform (though I’m quickly learning it’s as hysterical, fanatical and willing to use disingenuous argumentation and rhetoric as those on Reddit).
My real reason for joining: I’m addicted to having my faith in humanity destroyed by interacting with terrible people on the internet - but got permabanned from Reddit for speaking against Israel on r/Worldnews.
So thanks for delivering: 24 hours and I’m already being insulted and called a bot because I think AI is impressive and refuse to join the “Everything AI produces has 0% value” nonsense.
It’s literally got to the point where if I want an actual rational, balanced, non-hysterical discussion: I go to ChatGPT. If I want an emotionally unpleasant, annoyingly irrational, rhetorically disingenuous and frustrating argument that goes nowhere: I feed my social media addiction instead and talk with a human.
occasional “hallucinations”
Every single AI output is a hallucination
You might prefer the word “confabulation.”
The very concept of “hallucination” and the choice of that word in this context shows how retarded the entire debate has become.
A machine cannot hallucinate because it cannot have an experience.
The output is either pleasing or displeasing, an accurate and useful response to a request or not. To claim that all AI products are “ugly and useless” is a patently absurd position: were the same thing made by a human a decade ago it would have been deemed as “good, beautiful, useful, and valuable.”
This has been going on for decades. Machine learning was used to create new composition based on classical sheet music in the early 2000s. Concert-goers loved it until they found out it was generated.
The first sensible thing I’ve read here.
It’s got to the point where if I want a rational, well-informed, and balanced discussion about anything I’ll just chat with AI.
If I want an emotionally unpleasant, “us vs them” manipulative, frustratingly one-sided or limited interaction: I’ll go onto social media and find someone to trigger me.
Didn’t take long on this new platform.
Ironic, I suppose: these people hate AI so much, but everything they type (e.g. the manipulative nonsense arguments) illustrates their own inferiority to the AI systems they oppose.
For me it is, apparently, the unpleasantness of social media discussions that make them so compelling and addictive… otherwise I’d just discuss things with AI.
I’m amazed at what AI is generating…it seems kind of fake to pretend a beautiful image isn’t beautiful when you discover it’s made by AI.
Wow, the USA is such a beautiful country, but it feeds its beauty with someone else’s blood. But yes I agree with you the content is beautiful, no really beautiful. Only the price of this beauty is the future of all humanity (AI will kill us all)
Is that something you know or something you choose to believe?
It’s not that I know, it’s rather a natural outcome under capitalism, and I’m not the only one who thinks so.
Although it seems to me that this can be described as a pattern of the universe.