Stalling At The Crossroads Of Artificial Intelligence – The Time To Act Is Now 

By S. Mona Sinha, Global Executive Director at Equality Now, and Ivana Bartoletti, Co-Founder of the Alliance for Universal Digital Rights 

Our world stands at a crucial juncture with artificial intelligence (AI). The initial steps towards a global consensus on AI safety and regulation are in place, but they are just that – initial steps. As technological advancements outpace policy, the urgency for decisive action grows. This is not just a matter of maintaining momentum; it’s a race against time.

We recently witnessed a rare alignment in the discourse surrounding our relationship with artificial intelligence. A crescendo of dialogues peaked with the US taking the lead through a decisive Executive Order, which directs tech companies and other stakeholders to establish safety and privacy standards and to respect human rights, and through the establishment of the US Artificial Intelligence Safety Institute (USAISI).

Across the Atlantic, the UK’s AI Safety Summit culminated in a declaration by 28 countries, the Bletchley Declaration, which echoed the necessity of AI safety considerations to ensure that the technology benefits all users.

This is not just success – it’s a testament to the potential for global consensus.

A Distraction from Real AI Issues 

Consensus is not the finish line; it is the starting block. Yet momentum seems to have slowed. The weeks following this unprecedented activity in AI regulation have been unsettlingly quiet on the policy front, with all-too-human power struggles and attention on the people at the top of the AI industry pulling focus away from the impact and consequences of the technology they have pioneered.

Even beyond the world of tech, people have been fascinated by the dramatic ousting and swift rehiring of OpenAI co-founder and CEO Sam Altman. The episode is a testament to just how powerful AI giants have become in this new landscape, and a timely reminder that all lofty digital ambitions rest on the shoulders of fallible individuals grappling with relationships, egos, and the bottom line.

This lull on the policy front is not just disappointing; it is dangerous. As policymakers idle, technology leaps forward. Before the company’s leadership drama made headlines, OpenAI announced its latest development: the ability for anyone to create, customize, and train their own chatbot. This shift in our interaction with technology demands immediate attention, yet the response from AI experts and policymakers has been tepid at best.

Elon Musk’s recent announcement of a chatbot with an “edgy” personality is particularly disconcerting. The hype was short-lived, but the idea underscored the necessity for swift, decisive governance in the realm of AI. The potential for harm, particularly to vulnerable groups, is not hypothetical; it is a present reality.

Speaking with a Unified Voice on Digital Regulation

With the EU on the cusp of passing the AI Act and the Council of Europe nearing the release of its own Convention, we see policymaking in motion. The US President’s Executive Order has complemented these efforts, creating a transatlantic synergy that even precedents such as the IEEE (Institute of Electrical and Electronics Engineers) standards have not fully captured. The G7’s Hiroshima Process and its guiding principles for AI safety further exemplify the international cooperation at hand.

This cross-sector alignment among governments, businesses, and civil society speaks with a unified voice: AI demands regulation. However, a pivotal question remains: how do we harness this moment? How do we transform dialogue into enforceable global governance?

We must now move from philosophical contemplation to pragmatic action. Our focus should shift to the Global Digital Compact (GDC) negotiations, taking cues from the Sustainable Development Goals (SDGs) model, which harmoniously blends human rights and development principles with quantifiable indicators. 

The GDC will see governments agreeing on principles that could, and should, influence tech development and regulation for decades to come. The Alliance for Universal Digital Rights (AUDRi) has made this a priority and is calling for the GDC negotiations to consult and take into account a wide variety of voices, including civil society, tech and human rights experts, vulnerable groups, and young people. This is crucial to ensuring a commitment to digital principles that guarantee a safe and equal digital future for all.

But promises, goals and commitments need monitoring and enforcement. Countries should not only ratify the GDC but also establish robust monitoring mechanisms and set a clear, actionable path forward. And, crucially, we need a dedicated agency, reminiscent of nuclear or aviation authorities, to oversee this.

Concrete Steps Towards a Sustainable and Safe Digital Future

The rational and sensible agreements we now need to act on exist, despite the hype and hyperbole surrounding AI. Even at the AI Safety Summit, merchants of doom dominated conversations with cataclysmic predictions, threatening to distract from the essential yet mundane work of regulating against more everyday risks.

Thankfully, Vice President Kamala Harris took to the stage to prioritize tackling known and familiar harms such as bias and discrimination, and rooting them out of the algorithmic processes that often cement societal inequalities across gender, race, and socio-economic lines.

National laws must evolve to address the nuances of algorithmic discrimination, which traditional non-discrimination laws fail to capture. This calls for immediate, practical measures – a litmus test for the sincerity behind global commitments.

Decision-makers must now forsake the comfort of roundtables and step into the arena of execution, supported by the expert voices of technologists and human rights advocates. We cannot afford a passive approach as the technology that shapes our world advances at a breakneck pace. The conversation has been set in motion; now is the time for decisive action. Let us seize this moment, not as an end but as an urgent beginning, to establish a framework for AI that is safe, equitable, and accountable. 

The future of our digital age depends on it.
