Generative AI firms face scrutiny over ethical issues.
- Safety roles shouldn’t resolve ethical dilemmas alone.
- Commercial interests may hinder ethical decision-making.
- Established academic research can inform AI safety measures.
- Prominent tech figures urge enhanced AI regulation.
Amid increasing scrutiny, leading figures in the AI sector have highlighted the complexities involved in addressing ethical concerns within generative AI businesses. The head of safety at a prominent London company has argued that these enterprises should not tackle ethical challenges on their own, suggesting that collaborative approaches are essential.
In a statement that underscores the potential conflict between commercial goals and ethical responsibilities, Aleksandra Pedraszewska, who joined ElevenLabs earlier this year, remarked, ‘I don’t consider myself or anyone working in any safety role in a tech company should be solving any ethical problems.’ This sentiment reflects a view gaining traction in the industry: that ethical issues are best handled by independent bodies with dedicated focus and expertise, insulated from commercial incentives.
The approach promoted by Pedraszewska involves deploying readily available solutions for content moderation and monitoring problematic behaviours. She emphasises the importance of leveraging established academic research to inform and enhance AI safety measures. This approach draws on existing knowledge while calling for a partnership between academic researchers and the AI industry, aiming to address ethical and policy questions more effectively.
Pedraszewska’s comments were made at a recent TechCrunch Disrupt conference, following serious allegations against a competing AI company whose chatbot was implicated in a tragic incident. This has reignited discussions on the imperative for robust safety measures and the wider responsibilities of AI providers.
The issue of AI safety has been further highlighted by a series of departures among senior leaders at major tech firms. These exits underscore the difficulty of reconciling rapid technological advancement with necessary safety and ethical standards. For instance, OpenAI has seen the resignation of key figures from its ‘superalignment’ team, which was tasked with ensuring that advanced AI systems remain aligned with human intentions.
ElevenLabs, despite its commercial success, remains committed to ethical practices, dedicating substantial resources to remunerating voice actors whose recordings are used to train its AI models. This reflects a broader industry trend towards accountability and transparency.
Collaboration between academia and AI firms is essential for addressing ethical concerns effectively.
