The AI Safety Summit hosted by the UK last November has faced criticism for its ineffectiveness, with a Labour MP calling it a ‘damp squib’.
- Exeter MP Steve Race underscored the lost opportunity for the UK to assert itself in global AI regulation.
- Race highlighted the previous government’s failure to make a substantial impact with the summit.
- Labour Digital Chair Casey Calista proposed a more inclusive regulatory approach.
- Brittany Smith identified flaws in the summit’s focus, spotlighting unaddressed immediate risks.
The AI Safety Summit, hosted by the United Kingdom in November last year, has been described as a ‘damp squib’ that ultimately ‘didn’t go anywhere’. The critique came from Labour MP Steve Race during a session at the 2024 Labour Party Conference. In his address, Race argued that the UK government of the day missed a significant opportunity to position the country as a leading authority in global AI regulation.
Race emphasised the UK’s unique position, given its historical capability and the public trust it commands in regulatory matters. The previous government, he suggested, rightly endeavoured to lead these global discussions, especially given that ‘Americans can’t really do it’, perhaps because of differing cultural or regulatory standards. Yet, according to Race, the summit failed to deliver tangible outcomes or to maintain momentum after the event.
Meanwhile, Casey Calista, Chair of Labour Digital, criticised the Conservative government’s approach, specifically its failure to actively incorporate civil society into the conversation. She stated that a ‘whole of society approach’ would be adopted under Labour, ensuring that a breadth of diverse voices is part of the decision-making process, extending beyond governmental and corporate representatives.
Another voice raised at the event was Brittany Smith, head of UK policy and partnerships at OpenAI. While acknowledging that the summit was ‘interesting’, she argued it overly concentrated on ‘existential risks’, potentially overlooking immediate dangers posed by AI technologies in use today. Smith cited examples such as flawed facial recognition technologies being utilised by law enforcement, which have the potential to ‘ruin lives’.
The panel discussion came as the UK prepared to host a follow-up event, intended to advance the discussions and safety protocols envisioned at the Bletchley Park AI Safety Summit and the subsequent AI Seoul Summit, illustrating the UK’s ongoing commitment to these critical issues.
The AI Safety Summit faced significant criticism for its ineffectiveness, highlighting the need for more inclusive and actionable approaches in AI regulation.
