Some Amazon employees said they were still sceptical of AI tools’ utility for the bulk of their work given the risk of error. They added that the company had set a target for 80 per cent of developers to use AI for coding tasks at least once a week and was closely tracking adoption.
Our product is so good, we mandate that our employees use it and watch them closely to make sure they do!
I wasted an hour this week explaining to Copilot why it was wrong and dismissing every suggestion it made in a code review. In all of that, it didn’t spot a single legitimate problem. It’s running at a near 100% false positive rate. I absolutely would not accept a code suggestion from it blindly.
I don’t think they’re capable of learning from their mistakes; the probability engine and underlying model data remain the same. It might ‘learn’ briefly while the conversation history stays current, but it will forget it all afterwards.
I could be wrong. I just recall trying to teach it something a year or two ago, and it was unable to learn.
The last AI that Microsoft made capable of learning turned into a Nazi within hours.
I know it’s not the same company, but that sure is a far cry from:
Sounds like AI contracts aren’t the only thing getting quietly scaled back.