I am the c/fuck_ai person, but at this point I have made peace with the fact that we can’t avoid it. I still don’t want it doing artsy stuff (image gen, video gen) or being used blindly in critical stuff, because humans are the ones who should be doing that, or at least providing constant oversight. I think the team’s logic is correct here, because there is no way to know whether code came from an LLM or a human unless something there screams LLM or the contributor explicitly mentions it. Mandating the latter seems like a reasonable move for now.
I consider myself to be more pro-AI than not, but I’m certainly not a zealot, and I mostly agree with the take that it shouldn’t be used in artistic pursuits. However, I love using AI to help me create art. It can give great critiques, often good advice on how to improve, and is great for rapid experimentation and prototyping. I actually used it this weekend to see what a D&D mini might look like in different color schemes before painting it. I could have done the same with Gimp, but it would have taken much longer for worse results, and it was ultimately just a brainstorming session. How do you feel about my AI usage from your perspective? I suppose from an energy conservation perspective, all of it was bad, but I’m more interested in a less trivial take.
Yes, the energy consumption is bad. My main gripe about LLM-generated art is that it will not be original. It will use its training data from uncredited artworks to generate it. Art is usually made by humans to express or convey something in a creative way. LLMs fail at that. What LLMs can actually be helpful at is making learning art more accessible to everyone. Art schools and private art classes can be expensive. This lowers the barrier to entry.
As for you using generated art: it might be really beautiful, but it will be very difficult to maintain that style, and even more difficult to convince anyone that it is your style. The artist doesn’t get much recognition with LLM-generated art. Using it as a critic also seems misguided, because LLMs will always try to give an objective view rather than a subjective one. Your art won’t trigger an emotion in it; it might just say it is bad, or “do this to make it more understandable”, and that’s where you lose as an artist.
My mom likes to paint as a hobby. What she does is search for stuff on Pinterest (which is mostly LLM-generated). She uses it as inspiration, does it in her own style, and maybe gives it some spin. She keeps all of it for herself.
I’m a writer. I’ve gotten paid to write a few things here and there, but mostly there are just huge barriers for people without connections.
I plan on using AI to turn my writing into a visual animated format for people to consume. I don’t much care about the style of art; I just want my work to be seen. I can’t afford to pay artists. If I could, I would. But at least this would give me an opportunity to show my work without some execs saying no a hundred times.
When I look at the art for cartoons in the 70s/80s, there is so much crap animation with mistakes and duplications, you would think it’s “a.i. slop.” I understand that these were done overseas, pumped out quickly so quality control was overlooked for speed… but it wasn’t the animation I was interested in, it was the stories and characters.
I still think original artists will continue to exist. A.I. is just another tool. People will get bored of the same old stuff and want originality. I really hope it’ll make our lives better in the long run, but we’re just in the weird middle stage of A.I. crawling before running.
I can’t afford to pay for artists
You can afford LLMs right now because all of the LLM companies are losing money on them. If they decide they want to make a profit, they will raise their prices significantly, so you still end up in the same situation. You don’t have much control over what an LLM spits out, while with manual animation you have total control, or can at least sit with an actual animator to make it look how you envision it.
I plan on using AI to turn my writing into a visual animated format for people to consume.
What makes you think that people will respond the same way, and in the same numbers, to LLM-generated animation as they would if it were crafted by an artist? I reckon the response will be much lower. I see it on YouTube constantly. I watched a video about a topic, then got recommended something related from a different channel. Guess what? The script and the animation were so damn similar, and the shit they were spewing wasn’t even true in the end. Everything both channels made was slop. Sure, they spit out more content than conventional methods allow, got a few thousand views per video, and made decent money on it. But they aren’t going to last long if they care about audience retention.
Since then I have been more mindful about which videos I click on, going so far as to disable recommendations and watch history.
I have downloaded my own LLM that I can run on my own computer, so the only cost is electricity, since I upgraded my computer before prices went to shit. Newegg even gave me free RAM with the purchase of a motherboard, so I lucked out on that. Storage is not an issue either, since I got that back in 2024, knowing Trump would fuck everything up.
And no, people might not respond the same way to my work, but then again, I’m not taking work away from anyone else, because otherwise the work would not exist at all. If you want to fund me and an artist for our work, then okay. Show me the money.
One thing I’ve noticed is that I see many more people complaining about slop than slop itself. It’s so annoying at this point that it’s making me go in the opposite direction. Hey everyone, slop here… Microsoft slop here… Use Linux Linux Linux. Slop slop slop. Sloppy joes. It’s like candlestick makers complaining to Nikola Tesla.
Another great example of how AI is just wreaking havoc on people’s brains.
- Wants to show an enticing product to execs, but doesn’t want to invest in paying an artist
- Realizes they have to have connections, but doesn’t want to network
- Wants recognition for their hard work, but hasn’t sought out a community or collaboration, and states “show me the money”
AI will fix everything for me! Slop doesn’t exist! (Ignores the very article we’re in, every platform algorithm feed, the US president shitposting, and all the slop that gets posted here.) Go get ’em, Nik, don’t let the haters stop your brilliance.
A very extreme takeaway, but okay.
my own LLM that can be used on my own computer
May I ask how many billion parameters it has? Because the paradox here is:
- If it is weak, then you will be getting much, much worse results than even the big models the corpos run (we don’t even know how much worse, tbh), let alone the quality of an actual artist.
- If you have a respectably powerful model, then your PC probably cost thousands of dollars (even ignoring the price hikes), which undercuts the excuse that you can’t pay an actual artist.
Definitely not a big fan of it, but realistically speaking, it’s here to stay. It is wise for them to govern and regulate it rather than outright ban it. Especially with a project as big as this one, people will try. Saying that the responsibility falls on the human is definitely the right move.
any resulting bugs or security flaws firmly onto the shoulders of the human submitting it.
Watch Americans and their companies pull some mad gymnastics when apportioning blame for this.
Well yea, it’s the human submitting the code, and using a tool known to be imperfect
Your comment is pretty dumb
At this point it’s 23 to -5 with opinions on that dumb comment, sunshine.
Because obviously the majority is always right.
Linux kernel being written by Microsoft’s AI.
Microsoft needs to try to ruin Linux somehow; it can’t just hurt Windows 11 with AI slop code, it needs to expand its efforts to other systems.
which is trained on free and open source code
That will definitely not introduce some weird things when it starts feeding on itself.
The maintainers’ only responsibility is to ensure quality; they shouldn’t have to check for rogue AI submissions.
Tho I still miss consistent fucking weather so year of the netbsd?
Ensuring you don’t approve garbage, either human or AI generated, is part of quality
AI is here, another tool to use…the correct way. Very reasonable approach from Torvalds.
I don’t have a problem with LLMs as much as the way people use them. My boss has offloaded all of his thinking to LLMs to the point he can’t fix a sentence in a slide deck without using an LLM.
It’s the people that try to use LLMs for things outside their domain of expertise that really cause the problems.
This is a big point. People need to understand that LLMs are more like a fancy graphing calculator: they are very good and can handle many things, but it’s on you to understand why the calculation is meaningful. At a certain point no one wants to see your long division or factorials. We want the results, and for students and professionals to focus on the concept.
I get the metaphor but it’s not a great one for AI in mathematics especially. A statistical word generator is not going to perform reliable math and woe to anyone who acts otherwise.
I would call it an autistic sycophantic savant with brain damage. It’s able to perform apparent miraculous feats of memory and creativity but then be unable to tell reality from fiction, to tell if even the simplest response is valid, and likely will lie about it to make itself seem more competent to please you.
If you have a use for an assistant like that, then great. But a calculator - simple and cheap and reliable - it definitely is not.
It’s the people that try to use LLMs for things outside their domain of expertise that really cause the problems.
That seems too general. I’m a mobile developer, and sometimes I need a simple script outside my knowledge area. I needed to scrape a website recently, not for anything serious, but to save me time. Claude wrote it and it works. It’s probably trash code, but it works and it helped. But you wouldn’t want me using Claude to do important work outside my specific area of focus either, or I’m sure I’d cause problems.
I’m also a mobile app dev and at my workplace they’re having non-mobile devs submit code to my codebases totally vibed with no understanding behind it. It’s absolutely causing problems, especially for me, who is one of the only lines of defense keeping stuff even remotely maintainable.
So yes basically you’re right. If people only used it to learn and do initial code review passes and other reasonable things we’d be totally fine. But that’s unfortunately not the reality 🙈
It’s absolutely causing problems, especially for me, who is one of the only lines of defense keeping stuff even remotely maintainable.
The next step is: the CEO goes “look at how good these non-mobile devs are, they’re submitting 10x the commits to the mobile repo compared to boraginoru, our mobile dev! We should fire him and just let the backend devs keep vibe coding it!”
I’m talking about people who are accountants who now think they can create software. Or engineers who think they can now write legal briefs for court.
Very frustrating for sure. Like any tool, it’s up to humans to know when the tool is useful.
Partly a marketing issue.
Companies keep advertising their new AIs as destroyers of worlds, and as something too dangerous to even release.
As with anything else, the average user will have only the most surface-level understanding of the tool.
Clickbait got me. No mention of “Yes copilot” which I assumed was a joke anyway.
👆🏻true
Bad actors submitting garbage code aren’t going to read the documentation anyway, so the kernel should focus on holding human developers accountable rather than trying to police the software they run on their local machines.
“Guns don’t kill people. People kill people”
Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.
The author should elaborate on how exactly AI is like “a specific brand of keyboard”. Last I checked a keyboard only enters what I type, without hallucinating 50 extra pages. And if AI, a tool that generates content, is like “a specific brand of keyboard”, does that mean my brain is also a “specific brand of keyboard”?
I get their point. If you want to create good code by having AI create bad code and then spending twice the time to fix it, feel free to do that. But I’m in favor of a complete ban.
The (very obvious) point is that this cannot be enforced. So might as well deal with it upfront.
The keyboard thing is sort of a parable: it is as difficult to determine whether code was generated in part by AI as it is to determine what keyboard was used to write it.
AI is a useful tool for coding as long as it’s being used properly. The problem isn’t the tool, the problem is the companies who scraped the entire internet, trained LLM models, and then put them behind paywalls with no options to download the weights so that they could be self-hosted. Brazen, unaccountable profiteering off of the goodwill of many open source projects without giving anything back.
If LLMs were community-trained on available, open-source code with weights freely available for anyone to host there wouldn’t be nearly as much animosity against the tech itself. The enemy isn’t the tool, but the ones who built the tool at the expense of everyone and are hogging all the benefits.
Eh, trust me, anti AI people don’t think this much about it
Also, there are a lot of open weight models out there that are pretty good
There are hundreds of such LLMs with published training sets and weights available in places like HuggingFace. Lots of people run their own LLMs locally; it’s not hard if you have enough VRAM and a bit of patience to wait longer for each reply.
You’re the one comparing AI and guns/killing people, and then saying their metaphorical comparison isn’t accurate? Lol
Wooting and Razer had a macro function that allowed Counter-Strike players to set up a binding that always produced a counter-strafe. Valve decided that was a bridge too far and banned “hardware-level” exploits.
So, Valve once banned a keyboard.
Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.

The author should elaborate on how exactly AI is like “a specific brand of keyboard”. Last I checked a keyboard only enters what I type, without hallucinating 50 extra pages. And if AI, a tool that generates content, is like “a specific brand of keyboard”, does that mean my brain is also a “specific brand of keyboard”?
It’s about the heritage of code not being visible from the surface. I don’t know about your brain.
Last I checked a keyboard only enters what I type
I’ve had a (broken) keyboard “hallucinate” extra keystrokes before, because of stuck keys. Or ignore keypresses. But yeah, that means the keyboard is broken.
Last I checked a keyboard only enters what I type
I’m assuming the author is talking about mobile keyboards, which have autocomplete and autocorrect.
Out of curiosity how much code have you contributed to the Linux kernel?
Copilot? You mean the AI with terms of service that are in bold and explicit: “for entertainment purposes only”?
Which is why its in the title and not the article? EntertainBait?
I suppose GitHub Copilot is meant, which is a different thing.
Different how? Isn’t GitHub owned by Microsoft?
There are like 70 copilots
The hell. How can they expect people to understand? They plan to sell 100 things under the same name, and try to sell it as one big AI when it is hundreds of different, unrelated things?
They’ve never been good at naming things, but they now seem to be going out of their way to be the worst with the names of their software. For instance, they named the successor to the already generically named “remote desktop protocol” client “Windows App”.
This one is funny. Go google windows app commands. They just fucked sysadmins
Most of those are bundled; no one is buying Copilot for OneNote, they just get it when they get the rest of that suite.
Ok, so there are 70-81 copilots, and GitHub is one of them.
Why is GitHub Copilot a different thing in the context of the reply that was being responded to?
Copilot is the harness, Claude and GPT are the models
Copilot is by far the worst harness of all the major players
Yes, I get that. Copilot is like opencode or Cursor, though perhaps with less general access to models.
There was a reply
Copilot? You mean the AI with terms of service that are in bold and explicit: “for entertainment purposes only”?
followed by
I suppose GitHub Copilot is meant, which is a different thing.
I was asking why GitHub Copilot is different in that context.
Different in that it’s not an AI model, it’s just a tool you can use to run AI models like Claude.
see my reply here
Just legal stuff. Making a huge deal of it is dumb
I disagree.
Legal stuff would be “use at your own risk”, or “answers may not be correct”.
This is really strong language.
“yes to copilot no to AI slop” lol lmfao
I agree. If AI becomes outlawed, it will simply be used without other people knowing about it.
This approach, at least, means that people will label AI-generated code as such.
Maybe. There’s still strong disapproval around it. I can imagine many will still hide it.
There are so many reasons not to include any AI generated code.
I guess even smart people can make stupid decisions. Probably financially motivated decisions too.
It’s definitely financially motivated. Linus said himself that AI has been very lucrative for Linux as it has expanded investment from companies that normally wouldn’t give a fuck (he name dropped NVidia specifically) on that one LTT video.
Saying no to code just because it was AI-generated is like saying you can’t trust Excel to be your bookkeeper. It’s a tool, and the person using the tool being at fault is exactly what happened here.
Some good points, but poor comparison. Excel is deterministic, AI is not.
Yes, you can ALWAYS trust Excel after configuring it correctly ONCE. You can NEVER trust AI to produce the same output given the same inputs. Excel never hallucinates; AI hallucinates all the time.
You can actually set it up to give the same outputs given the same inputs (temperature = 0). The variability is on purpose
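For anyone curious what temperature = 0 actually means, here is a toy sketch of just the sampling step (not any real model’s code; the logits and function names are made up for illustration). Greedy decoding always takes the highest-scoring token, so the same input gives the same token every run, while sampling at temperature 1 varies with the random seed.

```python
import math
import random

def softmax(logits, temperature):
    # Scale logits by temperature before normalizing; lower temperature
    # sharpens the distribution toward the highest-scoring token.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def pick_token(logits, temperature, rng):
    # Temperature 0 is defined as greedy decoding: always take the argmax.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    probs = softmax(logits, temperature)
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.5]  # toy "next token" scores

# Greedy is reproducible: every seed picks token 0.
assert all(pick_token(logits, 0, random.Random(i)) == 0 for i in range(100))

# Sampling at temperature 1 picks different tokens across seeds.
samples = {pick_token(logits, 1.0, random.Random(i)) for i in range(100)}
print(sorted(samples))  # more than one distinct token appears
```

The variability really is a deliberate knob: the randomness lives entirely in that weighted draw, which the temperature setting can switch off.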
You can, and that will produce the same output for the same input as long as there is no variation in floating-point rounding. (That holds if exactly the same code runs every time, but optimized kernels can easily round up or down differently, and if two tokens’ probabilities are very close the output will diverge.)
The point that people (or LLMs arguing against LLMs) miss is that the world is not deterministic, and humans are not deterministic (at least in any practical way at the human scale). If a system is deterministic, you should indeed not use an LLM on it. An LLM’s power is in how it provides answers from messy data. If you need repeatability, write a script or code, etc.
(Note: I do think that if the output is for human use, it’s important that a human validate it is useful. LLMs can help brainstorm, and with some tests can manage a surprising amount of code, but if you don’t validate and test the code, it will be slop, and it may work for one test but not for a generic user.)
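The floating-point caveat is easy to demonstrate in plain Python: IEEE-754 addition is not associative, so regrouping or reordering the very same additions can change the result, which is exactly what happens when one build’s kernel reduces a sum in a different order than another’s.

```python
# Regrouping the same three additions flips the last bit of the result.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # False: 0.6000000000000001 vs 0.6

# Reordering can change a sum even more dramatically: the spacing
# between doubles near 2**53 is 2, so adding 1.0 to 2**53 rounds away.
# The order of the terms decides whether the 1.0 survives at all.
big = float(2 ** 53)
print(big + 1.0 - big)   # 0.0  (the 1.0 was absorbed by 2**53)
print(-big + 1.0 + big)  # 1.0  (2**53 - 1 is exactly representable)
```

If two tokens’ probabilities differ by less than this kind of rounding noise, a temperature-0 run on one kernel can pick a different token than the “same” run on another, and the outputs diverge from there.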
You can, and that will produce the same output for the same input as long as there is no variation in floating-point rounding. (That holds if exactly the same code runs every time, but optimized kernels can easily round up or down differently, and if two tokens’ probabilities are very close the output will diverge.)
There are more aspects to the randomness such as race conditions and intentionally nondeterministic tiebreaking when tokens have the same probability, apparently.
I actually think LLMs are ill-suited for the vast majority of things people are currently using them for, and there are obviously ethical problems with data centers bringing new fossil-fuel power sources online, but the technology is interesting in and of itself.
There are more aspects to the randomness such as race conditions and intentionally nondeterministic tiebreaking when tokens have the same probability, apparently.
Yeah, in addition to what the commenter above said about floating points and GPU calculations, LLMs are never fully deterministic.
So now you finally admit that LLMs are not truly deterministic and only near-deterministic.
I’ve told you that from the beginning, but you were too smug to first admit that major LLM provider systems are not deterministic, and then too smug to look up what near-deterministic systems are and do some research, so you kept barking up the wrong tree.
- Floating point math is deterministic.
- Systems don’t have to be programmed with race conditions. That is not a fundamental aspect of an LLM, but a design decision.
- Systems don’t have to be programmed to tie break with random methods. That is not a fundamental aspect of an LLM, but a design decision.
This is not hard stuff to understand, if you understand computing.
Not true. While setting temperature to zero will drastically reduce variation, it is still only a near-deterministic and not fully deterministic system.
You also have to run the model with the input to determine what the output will be, no way to determine it BEFORE running. With a deterministic system, if you know the code you can predict the output with 100% accuracy without ever running it.
You also have to run the model with the input to determine what the output will be, no way to determine it BEFORE running. With a deterministic system, if you know the code you can predict the output with 100% accuracy without ever running it.
This is not the definition of determinism. You are adding qualifications.
I did look it up and I see now there are other factors that aren’t under your control if you’re using a remote system, so I’ll amend my statement to say that you can have deterministic inference systems, but the big ones most people use cannot be configured to be by the user.
Deterministic systems are always predictable, even if you never ran the system. Can you determine the output of an LLM with zero temperature without ever having ran it?
And even disregarding the above, no, they are still NOT deterministic systems, and can still give different results, even if unlikely. The variation is NOT absolute zero when the temperature is set to zero.
Deterministic systems are always predictable, even if you never ran the system. Can you determine the output of an LLM with zero temperature without ever having ran it?
You don’t have to understand a deterministic system for it to be deterministic. You are making that up.
And even disregarding the above, no, they are still NOT deterministic systems
I conceded that setting temperature to 0 for an arbitrary system (including all the remote ones most people are using) does not mean it is deterministic after reading about other factors that influence inference in these systems. That does not mean there are not deterministic implementations of LLM inference, and repeating yourself with NO additional information and using CAPS does NOT make you more CORRECT lol.
Unlike brilliant people like you who have created nothing one millionth the importance of Linux
Was that necessary?
Yes. A dude who created one of the most useful projects in software history, in large part because of pragmatic decision-making, makes a pragmatic decision, and Joe Rando says “Must be in the pockets of big AI!” because he can’t grasp any singular aspect of a complex issue. He can’t even hold a tiny number of things in his head; he just vomits crap over the internet. That person needs to spend a lot more time reading and thinking, and less time typing.
You should try taking your own advice, kiddo.
Hello
REEEEEE!!! Kernel now AI SLOP like LUTRIS!!! 11
Removed by mod
Fuck the corporate ransacking, chatbot subscription hell hole, and general breaking of the internet done under the framing of “AI”.
Guess that doesn’t really roll off the tongue like “Fuck AI”, but sure. So yeah, let’s just move to a mountain instead of pushing for a better world.
Funny how nothing you wrote has anything to do with AI, but with capitalism. But yeah, sure, let’s blame AI instead of the USA, its government, and its oligarchs ruining the world for everybody.
oh that’s why this is the “it’s just a tool” gun debate
Wtf is this moronic take
Obviously capitalism makes pretty much everything worse, but let’s not pretend AI wouldn’t have issues without capitalism too.
Stop fucking talking about “AI”. It does not fucking exist.
You’re very angry for a person who literally used the term themselves a couple of comments ago. What term would you rather use then? It’s colloquial, everyone knows what I’m talking about. Are you the kind of person who gets angry when someone doesn’t call it “GNU/Linux” too?
You’re having a basic reading comprehension issue.
Damn man, go for a walk or something.
For the master’s tool shall never dismantle the master’s house
People confuse GPTs with AI, but your comment takes the wrong approach: it’s not that AI hate is undeserved, it’s that the hate should be directed at the chatbots and the associated bubble.
Yeah, but when an average person talks about AI, they just mean a chatbot or GenAI, right?
99.99% of the time.
“AI” is simply a field of study. There is no true bar for “AI” that GPTs fail, because there is no true bar for AI. A symbolic AI system is as much AI as the most advanced LLM or world model or whatever.
AI hate is not deserved. Hate the game not the player.
No, it’s definitely deserved, sparky. The game and the player are both horrible.
This is like your opinion, and I think it’s a dumb one.
You’ve shown from your comments here that thinking isn’t exactly your forte, spud.
Removed by mod
Big monks huhu
I didn’t think that was the point. “Fuck AI” is just a slogan representing people’s disdain for corporate types who think ChatGPT is literally the second coming of Jesus and is going to save us all. It’s people who take LLMs and pretend they can reason and think like humans. People who think they can sack all their staff and replace them with AI. It’s more complex than that. You know that, I am certain you do. AI can do some things very well, and on other things it absolutely falls flat on its face.
Unless I am misunderstanding, this was never about a blanket boycott of anything AI; it was more about not pretending it is more than it is, and not shoving it down the throats of nonconsenting consumers.
Then the issue is the fucking American oligarchs and their fucking piece of shit government, not “AI”.
How you got that from what i said will remain a mystery.
no need for ableist slurs
Removed by mod
Removed by mod
K
Fuck AI- anyway.
The whole AI hype is just making tech-giant whackjobs richer, as well as FUCKING us over in so many ways.
The world ain’t black and white; you cannot just hate “AI”, it’s just a general term. But fuck all those mofos tryna make more bucks off of this, as if they weren’t rich already.
I wonder why they just give away free “intelligence”, as in free AI chatbots that everyone can access, which is so obviously, extremely non-profitable. They keep yapping that they need to make “information” more accessible, and keep throwing money into a hole.
FUCKING make education more accessible :|
People I know mostly rely on texting their little ChatGPT on their phone to get through day-to-day tasks; algorithms choose what they watch, and now large language models decide what they do throughout their life. We are supposed to learn things ourselves; if we cognitively offload everything from our brains, we are just making ourselves more stupid.
TL;DR: that was just a useless and brainless rant on AI lol
People I know mostly rely on texting their little ChatGPT on their phone to get through day-to-day tasks; algorithms choose what they watch, and now large language models decide what they do throughout their life. We are supposed to learn things ourselves; if we cognitively offload everything from our brains, we are just making ourselves more stupid.
That’s what the oligarchs want. They want us ignorant so we will be good little wage slaves and consumers.
Then fight against what matters : the fucking oligarchs and their fucking piece of shit friends at the head of the USA.
The new gun debate
Did you ever see a gun diagnose a cancer?
As someone who works in healthcare, in IT, and who has been directly involved in the commissioning of an AI designed to spot skin cancers from pictures taken with special lenses attached to iPhones: no healthcare provider is using these tools in place of doctors. These AI models are incredibly accurate, but the human is still needed to spot false positives. They don’t leave diagnostic decisions up to AI. I can tell you that for a fact.
Same thing with everything related to every single algorithm implementation in every single sector.
This is the dumbest comment I’ve seen in a while
deleted by creator
You know what? Just fucking move to the top of a fucking mountain and into the wild yourself.
Workin’ on it.