On the same day the U.K. gathered some of the world’s corporate and political leaders into the same room at Bletchley Park for the AI Safety Summit, more than 70 signatories put their name to a letter calling for a more open approach to AI development.
“We are at a critical juncture in AI governance,” the letter, published by Mozilla, notes. “To mitigate current and future harms from AI systems, we need to embrace openness, transparency, and broad access. This needs to be a global priority.”
Much like in the broader software sphere over the past few decades, a major backdrop to the burgeoning AI revolution has been the debate between open and proprietary development, and the pros and cons of each. Over the weekend, Facebook parent Meta’s chief AI scientist Yann LeCun took to X to decry efforts from some companies, including OpenAI and Google’s DeepMind, to secure “regulatory capture of the AI industry” by lobbying against open AI R&D.
“If your fear-mongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI,” LeCun wrote.
And this theme continues to permeate the growing governance efforts emerging from the likes of President Biden’s executive order and the AI Safety Summit hosted by the U.K. this week. On the one hand, the heads of large AI companies warn of the existential threats that AI poses, arguing that open source AI can be manipulated by bad actors to, for example, more easily create chemical weapons. On the other hand, counterarguments posit that such scaremongering serves mainly to concentrate control in the hands of a few protectionist companies.
The truth is probably somewhat more nuanced than that, but it’s against that backdrop that dozens of people put their name to an open letter today, calling for more openness.
“Yes, openly available models come with risks and vulnerabilities — AI models can be abused by malicious actors or deployed by ill-equipped developers,” the letter says. “However, we have seen time and time again that the same holds true for proprietary technologies — and that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.”
Esteemed AI researcher LeCun — who joined Meta 10 years ago — attached his name to the letter, alongside numerous other notable names including Google Brain and Coursera co-founder Andrew Ng; Hugging Face co-founder and CTO Julien Chaumond; and renowned technologist Brian Behlendorf from the Linux Foundation.
Specifically, the letter identifies three main areas where openness can help safe AI development: enabling greater independent research and collaboration; increasing public scrutiny and accountability; and lowering the barriers to entry for new entrants to the AI space.
“History shows us that quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation,” the letter notes. “Open models can inform an open debate and improve policy making. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there.”