What I have watched happen in my profession in the last two years, I am still struggling to describe. The first time I knew something was wrong, roughly a year and a quarter ago, I noticed a colleague replying to me using AI…
We (my company) are trying to create agents that read a user story, translate it into prompts, execute those prompts, and then review the output. The only piece missing is accepting the merge.
I’m not anti-AI, but a human needs to be involved at every step, because a minor mistake at the first step amplifies through the agentic pipeline.
A human should review every single thing that comes out of AI — especially if it is to be fed back into AI.
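The pipeline described above can be sketched as a chain of stages with a human review gate between each one. This is a minimal illustration, not the company's actual system: the stage names, the lambda stand-ins for the real agents, and the `review` callback are all hypothetical. The point it demonstrates is the amplification argument: a rejected output stops the chain, so a first-step mistake never reaches later stages.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class StageResult:
    stage: str
    output: str
    approved: bool = False


def run_pipeline(story: str,
                 stages: List[Tuple[str, Callable[[str], str]]],
                 review: Callable[[str, str], bool]) -> List[StageResult]:
    """Run each stage, gating its output on a human review callback.

    A rejected output stops the pipeline: later stages never see it,
    so an early mistake cannot amplify downstream.
    """
    results = []
    current = story
    for name, fn in stages:
        output = fn(current)
        ok = review(name, output)           # human-in-the-loop gate
        results.append(StageResult(name, output, ok))
        if not ok:
            break                           # stop before the error propagates
        current = output                    # approved output feeds the next stage
    return results


# Hypothetical stages standing in for the real agents.
stages = [
    ("story_to_prompts", lambda s: f"prompts({s})"),
    ("execute_prompts",  lambda p: f"code({p})"),
    ("review_output",    lambda c: f"reviewed({c})"),
]

# A toy reviewer: rejects anything derived from a malformed story.
reviewer = lambda stage, out: "malformed" not in out

results = run_pipeline("add login page", stages, reviewer)
print([r.stage for r in results if r.approved])
```

Note the design choice: every stage's output is reviewed before it becomes the next stage's input, which is exactly the "especially if it is to be fed back into AI" rule. Accepting the merge would simply be one more gate at the end, and the one this sketch deliberately leaves to a person.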