Are some AGI systems too risky to release? Meta says so.
Since AI came into our world, its creators have kept a lead foot on the gas. But according to a new policy document, Meta may slow or stop the development of AGI systems it deems too "high risk" or "critical risk."
AGI is an AI system that can do anything a human can do, and Meta CEO Mark Zuckerberg has promised to make it openly available one day. But in the document, titled "Frontier AI Framework," Meta concedes that some highly capable AI systems won't be released publicly because they could be too risky.
The framework "focuses on the most critical risks in the areas of cybersecurity threats and risks from chemical and biological weapons."
"By prioritizing these areas, we can work to protect national security while promoting innovation. Our framework outlines a number of processes we follow to anticipate and mitigate risk when developing frontier AI systems," a press release about the document reads.
For example, the framework aims to identify "potential catastrophic outcomes related to cyber, chemical and biological risks that we strive to prevent." Meta says it conducts "threat modeling exercises to anticipate how different actors might seek to misuse frontier AI to produce those catastrophic outcomes" and has "processes in place to keep risks within acceptable levels."
If the company determines the risks are too high, it will keep the system internal instead of allowing public access.
"While the focus of this Framework is on our efforts to anticipate and mitigate risks of catastrophic outcomes, it is important to emphasize that the reason to develop advanced AI systems in the first place is because of the tremendous potential for benefits to society from those technologies," the document reads.
Still, Meta isn't denying that the risks exist.
Christianna Silva is a senior culture reporter covering social platforms and the creator economy, with a focus on the intersection of social media, politics, and the economic systems that govern us. Since joining Mashable in 2021, they have reported extensively on meme creators, content moderation, and the nature of online creation under capitalism.
Before joining Mashable, they worked as an editor at NPR and MTV News, a reporter at Teen Vogue and VICE News, and as a stablehand at a mini-horse farm. You can follow them on Bluesky @christiannaj.bsky.social and Instagram @christianna_j.