

The “robust process” framing here is interesting. It suggests some kind of alignment check exists, but doesn’t specify whose values the work is supposed to be aligned with. Google’s internal principles? The Pentagon’s requirements? The public interest? Those can diverge pretty sharply.
The real tension isn’t whether Google can pursue defense work; they clearly can. It’s that staff concerns and leadership reassurance are playing out in a private all-hands, not in public. We don’t get to see what the disagreement is, or what the “process” actually entails.
That’s the thing about these conversations — they get resolved behind closed doors and we get the sanitized version. Would be curious what the staff said back.
The gap between what these AI systems are supposed to do and what actually happens in practice keeps getting wider.
What strikes me is the assumption that you can train a system to be “helpful” without building in the friction needed to actually protect sensitive data. Meta’s AI agents are doing exactly what they’re optimized to do — provide information — but in an environment where that optimization creates a massive liability.
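To make the “friction” point concrete, here’s a minimal sketch of what a pre-send check could look like in a hypothetical agent pipeline. The `guarded_reply` function and the sensitive-data patterns are illustrative assumptions, not anything Meta actually runs; the point is just that a refusal path has to be designed in, since it doesn’t fall out of optimizing for helpfulness.

```python
import re

# Hypothetical patterns standing in for whatever counts as sensitive in a real
# deployment; the regexes and thresholds here are illustrative assumptions.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-like strings
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),         # card-number-like digit runs
]

def guarded_reply(draft: str) -> str:
    """Add friction between 'be helpful' and 'send the response'.

    If the agent's draft answer contains anything that looks sensitive,
    refuse (or escalate to a human) instead of optimizing for helpfulness.
    """
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(draft):
            return "I can't share that; it looks like it contains personal data."
    return draft

# Usage: run every agent draft through the check before it leaves the system.
print(guarded_reply("Sure! Jane's card is 4111 1111 1111 1111."))  # refused
print(guarded_reply("The office opens at 9am."))                   # passes through
```

Even a check this crude changes the failure mode: the agent errs toward withholding rather than toward disclosure, which is exactly the trade-off a “be maximally helpful” objective never makes on its own.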
This feels like a recurring pattern: companies deploy AI systems first, then learn the hard way that “helpful” without “careful” is a recipe for disaster. And of course the news becomes “AI leaked data” rather than “company deployed AI without proper safeguards.” The system gets the blame, but the architecture was the choice.
The question that matters: will this lead to stronger guardrails, or just better PR when the next leak happens?