From OpenAI to Anthropic: who's leading on AI governance?

Sam Altman, CEO of OpenAI and one of the most influential figures in the global AI race, recently published The Gentle Singularity, a wide-ranging reflection on the future of artificial intelligence. In it, he makes a series of striking predictions: superhuman reasoning capabilities by 2026, real-world robotics by 2027, and a transformation of human life within the next decade. But tucked among the techno-optimism is a simple yet powerful qualifier. This future will only work, he says, “with good governance.” The phrase is significant. Altman has been clear that the world’s most powerful AI systems should not be left to market forces alone. But what does he mean by good governance? And how does that compare to how other major AI labs, such as Anthropic, Google DeepMind and Meta, are approaching governance in practice?

Ethics by design: Anthropic’s built-in approach

Anthropic, founded in 2021 by former OpenAI employees including Dario and Daniela Amodei, has taken a distinctive stance from the outset. The company is a public benefit corporation, legally committed to long-term human-centred goals. It has developed “Constitutional AI”, a method for aligning large models with a written set of ethical principles that are transparent and auditable. It also created a Long-Term Benefit Trust, a governance structure that gives power over the company’s future direction to a group of trustees charged with representing the public interest. This is arguably the most formalised, internal approach to governance among the major players. Rather than relying on regulation or oversight from the outside, Anthropic tries to embed governance into its corporate structure, training process and decision-making from day one. It reflects a belief that responsible AI use must be hardwired into the development process, not bolted on as an afterthought.

OpenAI and the institutionalist model

OpenAI, under Altman’s leadership, takes a different route. It operates as a capped-profit company with a stated mission to ensure AGI benefits “all of humanity”. While originally set up as a non-profit, it has evolved into a hybrid structure to attract capital and talent at scale. OpenAI’s governance strategy is more external-facing, focused on partnership with regulators, engagement with governments and shaping global norms for AI deployment. Altman has repeatedly called for international institutions to oversee advanced AI, suggesting something akin to an IAEA-style body. This reflects an “institutionalist” approach, where governance is the domain of treaties, oversight bodies and multilateral cooperation. However, this vision is still aspirational, and OpenAI has faced criticism for a lack of transparency in its own practices, particularly regarding safety testing and model release decisions.

Google DeepMind: principles and process

Google’s DeepMind operates under a set of published AI Principles first introduced in 2018, which guide the responsible development and application of AI across Alphabet. These principles emphasise social benefit, safety, fairness, privacy and accountability. Google has invested heavily in formal governance infrastructure, including internal ethics reviews, fairness audits and the development of open-source governance tools such as model cards and explainability frameworks.
DeepMind also benefits from Google's broader ecosystem, including red-teaming procedures, responsible innovation guidelines and research into technical AI alignment. However, critics argue that corporate incentives and scale ambitions may sometimes outpace internal oversight. Governance at Google is structured and serious, but still ultimately controlled by the commercial priorities of one of the world’s largest tech firms.

Meta: ambition without guardrails?

Meta’s approach has so far been more commercially driven and less transparent. It has invested heavily in open-sourcing foundation models such as Llama, positioning itself as a champion of accessible AI development. At the same time, it has recently launched a new superintelligence research effort, with CEO Mark Zuckerberg expressing intent to build artificial general intelligence and integrate it into consumer-facing products. While Meta has AI ethics teams and has published commitments on fairness and bias, its strategic direction appears to prioritise scale, speed and competitive positioning. Unlike OpenAI and Anthropic, it has not released detailed frameworks for AI alignment or risk governance. Its openness with models has attracted both praise for transparency and criticism for enabling misuse. This relatively hands-off approach to governance raises questions about how risks will be mitigated as its AI ambitions scale.

Why it matters to governance professionals

Altman’s call for “good governance” is open to interpretation, but the differences between the major AI players show that approaches to governance are anything but uniform. Our own research at The Chartered Governance Institute UK & Ireland highlights the real-world implications of this: 74% of governance professionals are concerned about the accuracy of AI-generated corporate reporting, yet many organisations still have no formal AI policies in place.
Whether it is Anthropic's ethics-first model, Google’s internal governance processes or Altman’s aspiration for international oversight, these models all point to a future where governance will be as important as innovation. For governance professionals, that means taking the lead now: advising on board-level AI strategy, setting policies, training teams and evaluating risk across every layer of AI adoption.

The governance imperative

If AI is to become as foundational as energy or the internet, it must be governed with similar care. That governance will not come from tech companies alone, nor from regulators acting in isolation. It will come from the ecosystem of professionals who understand accountability, risk and the structures of good decision-making. Governance professionals have a unique opportunity and responsibility to help shape how AI is deployed within their organisations, ensuring that trust, transparency and fairness are not optional extras, but essential design principles.


Training and resources

At The Chartered Governance Institute UK & Ireland, we offer training in strategy, leadership and governance support, designed specifically for boards, executive teams and governance professionals across the corporate, not-for-profit and public sectors.

Find out more about our training and resources.