Art O Cathain
Senior Engineer - AI Safety Institute
AI Safety
30 min talk · Entry level · 🤖 Data + AI
I will cover the UK AI Safety Institute’s mission and progress so far, then give a taste of the work I do on the Platform team exploring agentic AI. The audience will get a sense of the capabilities of current AI agents and where they might be in the near future.
The AI Safety Institute is the world’s first government-run AI safety body. We’ve hit the ground running, starting with 2023’s AI Safety Summit. Since then we’ve been hard at work defining our mission, establishing international consensus on AI risk, and testing frontier AI models before and after they are publicly deployed.
ChatGPT and other generative models are great at producing text, but can they function effectively in the real world by taking actions? Would they do anything dangerous? What guardrails should we put in place? We’re trying to answer these questions through our own experimentation. We “scaffold” a large language model by giving it access to a restricted computer environment, then set it challenges to see whether it can solve them. These challenges include tasks such as cyber attacks, AI research and development, and autonomous replication.
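To make the idea of scaffolding concrete, here is a minimal, purely illustrative Python sketch of an agent loop. The `query_model` stub, the `run_in_sandbox` helper, and the `RUN:`/`DONE` protocol are assumptions made up for this example, not the Institute’s actual harness; a real setup would call the model under evaluation over an API and execute commands inside a properly isolated, resource-limited sandbox rather than on the host machine.

```python
import subprocess

def query_model(transcript: str) -> str:
    """Hypothetical placeholder for a call to a large language model.
    Canned behaviour so the sketch runs end to end: issue one command,
    then declare the task finished once some output has come back."""
    if "Output:" in transcript:
        return "DONE"
    return "RUN: ls"

def run_in_sandbox(command: str) -> str:
    """Run a shell command and capture its output. A real harness would use
    containers or VMs with strict network and resource restrictions."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=10
    )
    return result.stdout + result.stderr

def agent_loop(task: str, max_steps: int = 5) -> str:
    """Minimal scaffold: show the model a task, let it issue shell commands,
    feed the output back, and stop when it declares the task done."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = query_model(transcript)
        transcript += f"Model: {reply}\n"
        if reply.startswith("DONE"):
            break
        if reply.startswith("RUN:"):
            output = run_in_sandbox(reply[len("RUN:"):].strip())
            transcript += f"Output: {output}\n"
    return transcript

if __name__ == "__main__":
    print(agent_loop("List the files in the current directory."))
```

The loop structure is the point of the sketch: the model only ever sees text, and it is the scaffold that decides which of its replies get turned into real actions and what output is fed back.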
The field of AI is moving very quickly, so we’re also keeping abreast of what state-of-the-art AI agents can do. I’ll cover what might be on the horizon, based on recently published research papers.
Key Takeaways
- A better understanding of the UK Government's AI approach
- A chance to find out more about the world's first government AI Safety Institute
- A glimpse into the future of AI automation
The Institute's mission is to make advanced AI safe and beneficial for Britain and the world. Art works on the Platform team, providing support to the Institute's researchers. His current focus is exploring the capabilities of autonomous AI agents.