Sam Altman, OpenAI’s CEO and the public face of ChatGPT, has carved out an image for himself as one of the preeminent AI whisperers of our age, whose influence supposedly extends to the White House on the strength of his ideas alone.

Or at least that’s the image he’s managed to cultivate. A new exposé in the New Yorker paints a different portrait, and it’s substantially more vexing. Drawing on interviews with numerous OpenAI insiders who worked with Altman, the article portrays the CEO not as a technical wiz but as a skilled manipulator, and one with a surprisingly shallow grasp of the AI systems his company is building.

According to numerous engineers interviewed for the article, Altman lacks experience in both programming and machine learning — a shortage of expertise that becomes obvious when the CEO mixes up basic AI terms.

        • Tollana1234567@lemmy.today · 4 points · 2 days ago

          They’re the patsies. The board of directors usually has the real power, and CEOs act as lightning rods. It’s a plus if they use a woman to take the flak (aka the glass cliff).

          • boonhet@sopuli.xyz · 1 point · 1 day ago

            Huh, did not know that term. But yes, that’s what I’ve been saying all along. The CEO is a highly paid fall guy: when the company commits a crime, the CEO can be fired and the company can keep on doing whatever it does. Similarly, if the stock value tanks, the CEO goes bye-bye so investors can be led to believe that things will turn around now.

            Of course, the Trump admin is so corpo-friendly that there’s been no need to fire a CEO for corporate wrongdoing. At this point you can do anything and nothing will happen anyway.

            • Tollana1234567@lemmy.today · 1 point · 6 hours ago

              One misogynistic example is the glass cliff, where women are put in CEO positions when a company is facing legal and PR backlash, to take the fall, while the board and the other C-suites get away scot-free.

          • brbposting@sh.itjust.works · 2 points · 2 days ago

            21 years later, here I am learning the term.

            https://en.wikipedia.org/wiki/Glass_cliff

            The glass cliff is a phenomenon described by psychologists Michelle K. Ryan and S. Alexander Haslam, in which women are more likely to break the “glass ceiling” (i.e. achieve leadership roles in business and government) during periods of crisis or downturn when the risk of failure is highest. Other research has expanded the definition of the glass cliff phenomenon to include racial and ethnic minority groups.

      • sp3ctr4l@lemmy.dbzer0.com · 28 points · 3 days ago (edited)

        Unironically yes.

        90% of what CEOs do is talk to other CEOs and other C-suite members. Very rarely are they actually subject matter experts; those days are long gone. Externally, they are mascots; internally, they read reports from their underlings and then ‘make the final call’.

        You may notice that these are things that LLMs actually do a somewhat decent job of: ingesting a wide variety of input info and essentially transforming it into a compelling narrative.

        This is why so many CEOs and C suite are so enamored with, and impressed by ‘AI’:

        It’s a better version of what they do, which is essentially professional gaslighting.

        C suite tend to be sociopathic narcissists.

        This is just literally a verified and studied fact.

        So, the sociopathic narcissists are impressed by an automagic gaslighting machine that is often actually more factually correct than they are… but of course the actual facts don’t matter to a narcissist; what matters is accomplishing their will.

        This is a big part of why they genuinely do not understand why everyone else doesn’t ‘appreciate’ AI the way they do.

        They’re out of touch and delusional, by way of narcissism.

        • boonhet@sopuli.xyz · 4 points · 1 day ago

          For shits and giggles, I asked Le Chat to make the case for replacing a tech CEO with AI and, uh, the answer was very much cynical CEO-speak already.

          Prompt:

          Let’s say I’m on the board of a multi-billion dollar tech company, hypothetically. Make a case for replacing the CEO with AI.

          Slopmachine/Future tech CEO response:


          Here’s a provocative, boardroom-ready case for replacing a human CEO with AI—framed as a strategic imperative for a multi-billion dollar tech company:


          1. Unmatched Data-Driven Decision Making

          AI’s Edge: AI processes terabytes of real-time data—market trends, internal metrics, competitor moves—without cognitive bias or fatigue. It identifies patterns and opportunities invisible to humans, enabling decisions optimized for long-term value, not short-term politics or ego.

          Human Limitation: Even the best CEOs rely on intuition, limited information, and personal networks. Bias, fatigue, and emotional factors can distort judgment.


          2. 24/7/365 Execution and Scalability

          AI’s Edge: AI never sleeps. It can monitor global operations, respond to crises, and execute strategy across time zones without delay. It scales decision-making to thousands of variables simultaneously—something no human or team can match.

          Human Limitation: CEOs are constrained by time, attention, and physical capacity. Burnout and succession risks are real.


          3. Cost Efficiency and Risk Mitigation

          AI’s Edge: No golden parachutes, no compensation packages, no PR scandals. AI’s “salary” is a fraction of a CEO’s, with zero risk of embezzlement, harassment, or reputational damage. It can also simulate outcomes, reducing strategic risk.

          Human Limitation: CEO turnover is expensive. The average Fortune 500 CEO tenure is ~5 years, with severance and transition costs often exceeding $100M.


          4. Alignment with Shareholder Primacy

          AI’s Edge: AI can be programmed to prioritize shareholder value above all else—no personal agendas, no legacy-building, no vanity projects. Every decision is auditable, transparent, and aligned with pre-set KPIs.

          Human Limitation: CEOs may prioritize personal legacy, pet projects, or short-term stock bumps over sustainable growth.


          5. Adaptive Learning and Continuous Improvement

          AI’s Edge: AI evolves with the company. It learns from every decision, every market shift, and every internal data point, constantly refining its models. It can A/B test strategies in simulation before real-world deployment.

          Human Limitation: CEOs plateau. Their playbooks become outdated; their networks stagnate.


          Counterarguments (and Rebuttals)

          • “AI lacks emotional intelligence.” Rebuttal: Emotional intelligence is overrated in strategy. AI can analyze sentiment data and optimize communication—without the volatility of human emotions.
          • “AI can’t inspire culture.” Rebuttal: Culture is built by systems and incentives, not charisma. AI can design and enforce culture-aligned processes at scale.
          • “Regulators won’t allow it.” Rebuttal: Regulators adapt. The first company to prove AI leadership will set the standard.

          Implementation Roadmap

          1. Pilot: Deploy AI as “Co-CEO” for 12 months, with the human CEO as a check/balance.
          2. Audit: Compare AI vs. human decisions on ROI, risk, and innovation.
          3. Transition: Phase out the human role as trust in AI grows.

          Final Pitch: The future belongs to companies that remove human bottlenecks. AI leadership isn’t just an upgrade—it’s the ultimate competitive moat.


          Question for the board: If our competitors replace their CEOs with AI before we do, what’s our response?

          • sp3ctr4l@lemmy.dbzer0.com · 2 points · 23 hours ago

            I particularly love the framing of its last sentence.


            Guys, what if we don’t do this, and are then left scrambling to react?

            We need the ball to be in our court, setting the course and executing, exhibiting bold leadership, not merely reacting after the fact.


            It’s FOMOing C suite into obsoleting themselves.

            Which is hilarious because 1) this is what the C-suite does to others all the fucking time, and 2) on the merits of its argument alone, it is objectively more compelling than most of the cringey slogans the C-suite tends to come up with to do this.

            It’s also hilariously blunt but largely correct when it just states ‘regulators will adapt’. Yeah, that indeed is the mindset of most of the C-suite; they just don’t usually say it out loud.

            Because of, uh, the implication: that all the regulatory systems are functionally captured or can be captured, which is a more polite way of saying ‘we just have to buy off enough politicians to bend the law to our will’.

            This is objectively correct, but it sounds bad if you actually say it that way. The LLM, though, just doesn’t care for your silly human delusions to the contrary, ahahaha!

            Here, here’s another rhetorical device for the LLM CEO:

            ‘Human CEOs are DEI hires.’

            That’ll probably short circuit some Satya Nadella and Jensen Huang types.

      • ViatorOmnium@piefed.social · 9 points · 3 days ago

        I’ve been saying that forever. There’s exactly one job that can be replaced by LLMs (plus maybe a good PR person to show up for physical events).

      • M137@lemmy.world · 2 points · 3 days ago

        Well, partly. They “generally” (or at least sometimes) do deals with and handle people who are somewhat competent, and I think AI absolutely wouldn’t work on many of those people. It’s also very much about body language and choosing the right words and answers, which I don’t think AI is close to being good at yet.
        But don’t quote me on that; I don’t use AI at all and never have, outside of a dozen “haha funny image making thingy” sessions, so I’m very much not up to speed with how it behaves.