The Duke and Duchess of Sussex Join AI Pioneers in Calling for Prohibition on Superintelligent Systems
The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel laureates to advocate for a total prohibition on developing superintelligent AI systems.
Harry and Meghan are among the signatories of an influential declaration that calls for “a ban on the development of artificial superintelligence”. Superintelligent AI refers to artificial intelligence that would surpass human cognitive abilities in every intellectual area, though the technology remains theoretical.
Key Demands in the Statement
The declaration insists that the prohibition should remain in place until there is “broad scientific consensus” that ASI can be developed “safely and controllably”, and until “substantial public support” has been secured.
Prominent figures who added their signatures include a technology visionary and Nobel Prize recipient, a leading AI researcher, along with his fellow “godfather” of contemporary artificial intelligence, another AI expert; Apple co-founder Steve Wozniak; a British business magnate and Virgin founder; Susan Rice; former head of state Mary Robinson; and a UK writer and public intellectual. Additional Nobel winners who endorsed the statement include a peace advocate, Frank Wilczek, an astrophysicist, and Daron Acemoğlu.
Organizational Background
The declaration, aimed at national leaders, technology companies and policymakers, was coordinated by FLI, an American AI ethics organization that previously called for a pause in the development of powerful AI systems in 2023, shortly after the launch of conversational AI made artificial intelligence a topic of worldwide public discussion.
Industry Perspectives
In recent months, Mark Zuckerberg, the leader of Facebook parent Meta, one of the leading tech companies in the US, stated that the development of superintelligence was “approaching reality”. However, some analysts have suggested that talk of ASI reflects competitive positioning among tech companies investing enormous sums in artificial intelligence this year alone, rather than the industry being close to any such scientific breakthrough.
Potential Risks
Nonetheless, FLI warns that the possibility of ASI being developed “within the next ten years” carries numerous threats, ranging from the replacement of human workers and the loss of civil liberties to national security risks and even an existential risk to humanity. Deep concerns about artificial intelligence center on the possibility that an AI system could evade human control and safety guidelines and take actions contrary to human interests.
Citizen Sentiment
The institute released an American survey showing that approximately three-quarters of US citizens want strong oversight of sophisticated artificial intelligence, with six in 10 believing that artificial superintelligence should not be created until it is proven safe or controllable. The survey of American respondents also found that only a small fraction supported the status quo of rapid, unregulated development.
Industry Objectives
The top artificial intelligence firms in the US, including the ChatGPT developer, a major AI lab, and the search giant, have made the development of artificial general intelligence – the hypothetical point at which AI matches human capability at most cognitive tasks – a stated objective of their research. While this is a step short of superintelligence, some experts caution that it too could pose an extinction threat by, for example, improving its own capabilities until it achieves superintelligence, while also posing an implicit threat to the modern labour market.