I’m a software developer in Germany and work for a small company.
I’ve always liked the job, but recently I’m getting annoyed at the ideas of certain people…
My boss (who has some level of dev experience) uses “vibe coding” (as far as I know, this means less human review and letting an LLM produce huge code changes in a very short time) as a positive term, like “We could probably vibe-code this feature easily”.
Someone from management (also with some software development experience) runs internal workshops about how to use some self-built open-code thing with “memory” and advanced thinking strategies + planning + whatever, which is connected to many MCP servers and a vector DB, has “skills”, a higher token limit, etc. Surprisingly, the people attending the workshops (many of them developers, but not only) usually end up convinced by it, saying that it improved their efficiency a lot, that they will keep using it, and that it changed their perspective.
Our internal slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text on me and n other people. Now, n people need to extract the relevant information, so you are able to “save time” not writing the text yourself. Nice!!!
I see Microsoft announcing that 30% of their code is written by AI, which in my opinion is advertising and an attempt to pressure companies to subscribe to OpenAI. Now, my company seems to not even target that 30%, but target 100%???
To be clear: I see some potential for AI in software development. Auto-completions, locating a bug in a code base, writing prototypes, etc. “Copilot” is actually a good word, because it describes the person next to the pilot. I don’t think the technology is ready for what they are attempting (being the pilot). I saw the studies questioning how large the benefit of AI actually is.
For sure, one could say “You are just a developer fearing to lose their job / lose what they like to do” and maybe, that’s partially true… AI has brought a lot of change. But I also don’t want to deal with a code base that was mainly written by non-humans in case the non-humans fail to fix the problem…
My current strategy is “I use AI how and when ->I<- think that it’s useful”, but I’m not sure how much longer that will work…
Similar experiences here? What do you suggest? (And no, I’m currently not planning to leave. Not bad enough yet…).
I’m starting to form a conspiracy theory that the “let AI write the email” concept is, in itself, an ad for AI. Not for people writing them (they are easy to convince), but now the people reading them have a bunch of bullshit to deal with. The best tool is an LLM summary to undo the LLM bullshit. They get double the usage from people (well, if the manager gets many subordinates to do this, it’s well more than double), and nothing of value was added.
Joke’s on em, I don’t read work emails. Partially because I refuse to dedicate any time of the day to using Outlook, especially in a web browser, because the oh-so-wise IT department does not allow using a different client, and as I can’t use Outlook on Linux, fuck em.
And no, IMAP and POP3 are not available. Trying to log in via Thunderbird just triggers a message to contact the IT dept to allow me to use it. It’s Teams or nothing.
Honestly, this is what I would do in your situation:
- Update your resume, start responding to LinkedIn messages, and at least passively look around.
- Take those LLM workshops; there might be useful stuff to learn there. Auto-completion, code search, and examples of how to use certain features are very good uses of LLMs.
- Don’t be overly vocal about it, but point at issues when you see them. For example, with those large messages that you’re expected to read, point out how they’re way longer than they need to be and how using an LLM to summarize them got something wrong (even better if you have an actual example of this, e.g. by invoking a TLDR bot or something similar on those messages every time they come up).
- Look at code that was vibe-coded in areas you’re working on and start creating tickets for the issues you see. Unless they’re vetting everything the LLM produces (which would be slower than writing it yourself), there will be issues; start documenting them. The thing most managers and other “AI enthusiasts” don’t get is that LLMs are trained on Stack Overflow and thousands of random GitHub projects written by inexperienced devs, so they have thousands of bad or incomplete examples for every good one. This means they end up skipping things like verifying you’re logged in before using an API, sanitizing SQL queries, etc. When you ask how to do something on Stack Overflow, you get an answer that is not meant to be used literally: `query = f"SELECT * FROM {table_name}"` is an okay-ish example of how to build queries with validated data, but a TERRIBLE example to use with user-provided data. The LLM doesn’t know that; it just copy-pastes the code that gets things from a table wherever it needs it.
- Prepare yourself: using LLMs to write code has a short lifespan in most companies, but the damage takes twice as long to clean up. If you stay you will be seen as the naysayer and might even get fired for it, but eventually this will blow up so gigantically that they’ll start to regulate or even ban LLMs. And then there will be lots of garbage to clean up. In your shoes I might look elsewhere while it’s still possible, as I wouldn’t want to be associated with the company that had all of its data leaked or similar; if they’re using vibe code in prod, it’s a matter of when.
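The SQL point above is easy to demonstrate. Here’s a minimal sketch in Python using the stdlib `sqlite3` (the table, column, and values are made up for illustration), showing why the f-string pattern that works fine with validated data is dangerous with user-provided data, and what the parameterized alternative looks like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Unsafe: the Stack-Overflow-style f-string pattern. The quote in
# user_input breaks out of the string literal, and the OR clause
# makes the WHERE condition match every row.
unsafe = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats user_input as one opaque value,
# so the injection string simply matches no username.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('hunter2',)] — the secret leaks
print(safe)    # []
```

An LLM trained on thousands of quick-answer snippets has no way to know which of these two patterns the surrounding context calls for; a reviewer does.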
Step 1, update your resume.
Step 2, follow your boss’ instructions until it all breaks.
Step 2.5, document everything so they can’t blame you later.
Step 3, go have a beer; you don’t get paid enough to give a shit.
Plant the seed of using OpenClaw, but make sure you get no credit for it… Once it’s taken root, back up as much of everything as you can.
Wait for OpenClaw to inevitably self-implode. Panic ensues. Point out that this is why you didn’t trust AI, and become the hero by having the backups ready so everything isn’t destroyed.
People who don’t know better like AI until it vibes back and bites them.
This but without the backups. Then walk out of the building towards the camera when it all explodes behind you.
A popular option is to use vibe code to help run the place into the ground while looking for the next job.
Pretty soon the wave of vibe-shit code is going to be too much to clean up.
Interesting times ahead.
I think you’re gonna have to kill them
Two words: malicious compliance.
This.
I’ve already got my manager to tell me to not use AI on a task. I see this as an absolute win and I’m gunning for more.
He ALWAYS uses AI first when he needs to figure something out. ALWAYS tells us to use AI for the quick start. But when we do it, and it ends up wasting time, somehow it’s our fault, and we didn’t prompt it properly.
Also, am I mad, or does Cursor (specifically Sonnet) sometimes act dumb on purpose? Sometimes it codes a feature nearly entirely without many issues; other times it seems unable to comprehend that it’s using the wrong property in a class. I feel like it’s made to make us question each other’s ability to use AI tools and cause internal team unrest.
Spoken like an unemployed person…
Why would you sabotage or stagnate your career?
Principles or ideals prioritized above comfort and stability, fucked up you have to ask. Spoken like a hollow bootlicker
Sounds like I walked into reddit antiwork crowd! Always black and white with you lot… If you’re in industry and market that allows that? I’m happy for you.
You’re the one who crashed in with the judgment, name calling, and confrontational attitude. You couldn’t be more thoroughly shaped by your “industry and market” and I’m not sure if it’s more gross or sad. Corporations might be people now but they’re sure as shit not gonna cry at your funeral, get a life outside your job
It’s hard to be a contrarian in these kinds of positions (I’ve been there, and it didn’t end well), so I wouldn’t be too outspoken, but at the same time, try to innocently point out the issues with approaches like this. I would just try to point out the flaws in this approach, the same that we would for any other kind of programming fad - without making it seem like it’s an agenda, of course.
For example, any time teams are looking for feedback - code review, retrospectives, etc. - just point out the flaws on why vibe coding is a bad idea and bring it up casually when the time comes. It doesn’t hurt to be honest as long as you don’t come off as being an ass about it.
Our internal slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text on me and n other people. Now, n people need to extract the relevant information, so you are able to “save time” not writing the text yourself. Nice!!!
My last software dev employer did this, except with the “voice recording” feature. Instead of composing messages in text in a text chat (because that takes too long), he’d hit the record button and just start talking it out, then send the recording. Easy! Then the team had to download and listen to ~5 minutes of verbal diarrhea, pausing and rewinding for twice that long in an attempt to glean something useful from it. This particular kind of delusion existed before AI.
This is where AI could be useful. Transcribing and summarising the voice recordings.
Who’s going to be blamed when the summary got it wrong though?
Everyone. One person is too lazy to write a message, the others can’t be bothered to listen to the whole thing 🤷♂️
The transcription should be attached to the audio recording, so that if the sender cares about it being correct, they can comment or add corrections.
Remind them that copyright cannot be enforced on anything AI-written.
I try to push on the maintenance aspect. Developing something new is easy, and my company does do that, but the group I’m in is primarily doing maintenance on existing software. Bug fixes, feature additions, etc. If we generate applications entirely using LLMs, none of us will be experts on the applications we push to the customers.
They push corpo buzzwords like “responsibility”, but who takes responsibility when no one has done the work to begin with? It feels like a liability nightmare, and the idea of sitting there cleaning slopcode just isn’t very appealing to me.
That’s going to be a problem, almost like a money laundering scheme. AI can spit out content that’s 99% derived from copyrighted content but is itself free of copyright.
Any idiot can write code. “Vibe coding” is just the new pasting code from stack overflow. For that matter, a lot of LLM generated code probably came from stack overflow.
Your value as a developer is not in your ability to rapidly pump out code. Your value is in your ability to design and build complex systems using the tools at your disposal.
As an industry, software engineering has not yet been forced to reckon with the consequences of “vibe coding.” The consequences being A.) the increasing number of breaches that will occur due to poor security practices and B.) the completely unmanageable mountain of technical debt. A lot of us have been here before. Particularly on the tech debt front. If you’ve ever been on a project where the product team continually pushes to release features as fast as possible, everything else be damned, then you know what I mean. Creating new code is easy. Maintaining old code is hard.
Everything starts out great. The team keeps blowing through milestones. Everyone on the business side is happy. Then, a couple years into the project, strange things start happening. It’s kind of innocuous at first. Seemingly easy tickets take longer to complete than they used to. The PR change logs get longer and longer. Defect rates skyrocket. Eventually, new feature development grinds to a halt and the product team starts frantically asking, “what the hell is going on?”
A question to which maybe one or two of the more senior devs respond, “Well, uh, we have a lot of technical debt. I mean A LOT. We’re having to spend tons of time refactoring just to make minor changes. And of course, unplanned refactoring tends to introduce bugs.”
The product team gets an expression on their face like Wile E. Coyote as the shadow of a falling ACME anvil closes in around him. At this moment, they have two choices. Option A.) develop a plan to mitigate the existing tech debt and realign the dev team’s objectives to help prevent this situation again by focusing on quality over quantity. Option B.) ignore the problem and try to ram feature development back on track by sheer force of will.
Only one of these options will achieve a meaningful outcome, and it’s not “B”. Unfortunately, in my experience B is often the one chosen. The product team does not understand that while Option A impedes feature development, it’s only temporary. Option B impedes feature development permanently.
We’re going to see a very similar cycle with vibe coding. It just takes time to materialize. Personally, I think the tech debt for vibe-coded projects will be compounded due to the sheer verbosity of LLMs and the fact that no one actually understands a vibe-coded project well enough to fix it.
That said, these issues are rooted in hubris and ignorance. Failure to appreciate the “engineering” part of software engineering. This is not something you alone can change.
The AI hype is going to disappear. Probably sooner rather than later. Just like every other tech hype cycle before it. But LLMs are probably here to stay, so we have to make the best of it. I don’t usually use LLMs for code generation. There are better tools for that already. I do use them frequently for research. Honestly, using an LLM with search incorporated is often a lot faster than scouring dozens of websites to figure out how to do something. You still have to take the information with a grain of salt, as much as you would with anything on the Internet, because LLMs have no understanding of the text they spit out and will feed you incorrect information without missing a beat.
If I were you, I would focus on quality over quantity. Closing tickets faster is pointless if you’re introducing a bunch of new bugs. If your bosses don’t know that already, they will learn it soon enough.
Closing tickets faster is pointless if you’re introducing a bunch of new bugs.
Objectively true, but if my bonus reflects tickets rather than bugs, I’m gonna close so many tickets anyway, because I don’t own the place.
Which is also why wise companies grant their employees stock.
We’re using LLMs at the company I work at and it seems very useful in many cases but sometimes it still doesn’t work. I’m a bit worried about the aspect of the code rotting by LLMs generating stuff based on existing code.
My mindset has shifted a bit, now I’m more focused on making stuff easy to find and easy to figure out patterns to use so that the codebase becomes easier to work with. There’s some horrible code in the project and the LLM absolutely sucks balls at it but if it’s a clean routine job such as making a table with update dialogs and actions to manipulate the data the success rate is >95%.
So yeah, don’t trust it, treat it like a junior dev that got straight As in school and has never considered security. Code reviews are now where it’s at.
I am having similar experiences, but it is not as bad as you are describing it yet. We have a new member on the team who is not a developer himself, but he has been given the task of making our way of working more professional (we are mainly scientists and not primarily software engineers, so that’s a good thing).
His first task was to create programming guidelines and standards. He created 8 pages of LLM-generated text and nonsense example code. He honestly put a lot of effort into it, but of course there are a lot of things in it that are wrong. But the worst thing is the wall of text. You are nailing it: it is my task now to go through this whole thing and extract the relevant information. It sucks. And I am afraid that soon I will need to review more and more low-quality MRs generated by people who have little experience in programming.
Fixing vibe code is a specialty that contractors will be able to charge a premium for here pretty soon.
Soon? It’s been on my resume for over a year.
We had a dev drop a combined total of 8,300 lines of readme files into the code base over a weekend. I want to nuke all of them; my boss suggests reviewing and updating them.
8,300 lines
rookie numbers
I think my team is in the tens of thousands of AI generated “documentation”.
They claim the AI can use it to code better in the project.
Bullshit. The AI can’t load in a single one of these files without filling half the context.
I was recently instructed to have a gander at it.
I warned that it seemed inconsistent with the actual code.
Was told I’m right and they brushed it off.
“We should update this to reflect reality”
They brushed it off and we moved on. The misleading doc is still there, waiting for its next victim.
That last line belongs in a horror novel
“I don’t have time to read through that much bullshit.”
Maybe phrase it a little more kindly, but that’s what I’d try at the very least. “I have other priorities at the moment” could work too.
Can you promise to spend exactly as much time reviewing as it took to create?
I had a manager who pushed AI a lot. When he left, all the pressure to use it seemed to die down. So maybe it’s just a couple of people creating this environment and if you can get away with avoiding them it’s better.
The problem with AI code we saw is that often no human has actually looked at it. During reviews you won’t check every line and you’ll have to trust much of the code that seems to do obvious things. But that assumes it was written by a human you also trust. When that human hasn’t reviewed the code either, you end up with code no one in the company has seen (and may not even know how it works).
Your entire comment echoes my thoughts. Things aren’t exactly improved by the idea of adding LLMs to the review process either. Gods.
Our internal slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text on me and n other people. Now, n people need to extract the relevant information, so you are able to “save time” not writing the text yourself. Nice!!!
I think this is one of your best bets as far as getting a real policy change. Bring it up, mention that posts like that may take less time to “write”, but that they’re almost always obnoxiously verbose, contain paragraphs that say essentially nothing, and take far longer to read than a hand-typed message would. The argument that one person is saving time at the expense of dozens (?) of people losing time may carry a lot of weight, especially if these bosses are in and read the same Slack channel.
Past that I’d just let things go as they are, and take every opportunity to point out when AI made a problem, or made a problem more difficult to solve (while downplaying human-created problems).