ATxAI
31 May 2024, 9.00am - 12.00pm
Grand Ballroom, Capella Singapore
ATxAI, organised by IMDA, will take place on 31 May 2024 at Capella Singapore. This signature conference of ATxSummit will feature a refreshed lineup of visionaries and experts from industry, the research community and governments, engaging in thought-provoking conversations about global developments in AI governance, standards and safety, and their implications for society.
Agenda
Information accurate as of 6 March 2024
9.00am - 9.10am: Welcome Remarks
9.10am - 9.30am: Keynote Address by Guest-of-Honour
9.30am - 10.15am: That’s Not Taylor Swift! What the World Needs to do About Generative AI Governance
In a year of global elections, the proliferation of deepfakes on the Internet is putting the sanctity of truth under threat. Even as we take stock of the progress made on Gen AI governance, where along the spectrum of regulation should we position ourselves as we tackle the growing challenge of misinformation? This panel will also consider how we can mitigate societal risks, while maintaining a healthy innovation ecosystem.
10.30am - 11.15am: With Great Power Comes Great Responsibility – Discussing Gen AI Evaluation Tools and Standards
We need to take accountability for, and ownership of, responsible AI approaches. Mitigating the risks associated with Gen AI requires an ecosystem of regulators, businesses and individual users leaning into global efforts. This panel discusses some of today's best practices, as well as new tools and standards that can and should be developed to foster an ecosystem that supports interoperability and compliance.
11.15am - 11.55am: Full Speed Ahead – Accelerating the Science Behind Responsible AI
Imagine a future where AI benefits everyone. How do we keep up with its rapid advancements and ensure it is developed safely and used responsibly? In conversation with diverse voices from AI Safety Institute leadership, academia and industry, this panel will explore science-based approaches to AI safety. Their insights will illuminate how we can bridge the gap between AI's expanding capabilities and the necessary safeguards.