bakarebarley

The Road to Safe AI Systems: Reflections on the SAINTS Launch at the University of York

Updated: 1 day ago



It was an icy cold but invigorating day at the University of York’s East Campus for the launch of the UKRI AI Centre for Doctoral Training in Safe AI Systems (SAINTS).


I was delighted to attend in my new capacity as a member of the SAINTS Independent Advisory Board, joining colleagues from academia and industry to help shape this vital programme.


The event was introduced by Professor Ibrahim Habli, Centre Director and a leading figure in safety-critical systems. He set the tone by referencing a compelling statistic: while 3 out of 10 people believe AI has the potential to cause harm, 4 out of 10 see its potential benefits. These numbers highlight a key tension in the field of AI development: the balance between progress and ethical responsibility.


Having observed both the promise and pitfalls of AI, I couldn’t agree more with the need for safety to be at the heart of this programme. We know that without careful consideration of the societal contexts in which AI operates, the technology can inadvertently perpetuate or exacerbate existing biases, potentially causing harm. At the same time, when designed thoughtfully, AI can be a positive support.


AI in Practice


Both sides of AI’s potential can be seen in the practical applications below:


  • Using AI for Fairer Recruitment: AI can level the playing field in recruitment by analysing applications without the biases that humans might unconsciously introduce. For example, some companies have successfully used AI tools to ensure gender-neutral language in job descriptions, or to assess candidates solely on their qualifications and experience rather than their demographics. Such systems, when designed to avoid bias and discrimination, can help organisations build more inclusive workforces.


  • Biased Algorithms in Decision-Making: We must also recognise that poorly designed AI systems have been shown to reinforce existing inequalities. One example is a "sexist" recruitment algorithm that, after being trained on historical data, began to favour male candidates over equally qualified female applicants. The issue arose because the training data reflected past hiring biases, which the AI learned and perpetuated.
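The mechanism behind that second example can be illustrated with a deliberately simplified sketch (this is a toy model for illustration, not any real company's system): a "model" that does nothing more than learn historical hire rates per group will faithfully reproduce whatever bias the historical data contains.

```python
from collections import defaultdict

def fit_hire_rates(history):
    """Learn the historical hire rate for each group from (group, hired) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total applicants]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

# Hypothetical historical data: candidates were equally qualified,
# but past human decisions favoured men.
history = (
    [("male", True)] * 8 + [("male", False)] * 2
    + [("female", True)] * 3 + [("female", False)] * 7
)

rates = fit_hire_rates(history)
# The "trained" model now scores male candidates far higher than
# equally qualified female candidates -- bias learned straight from the data.
print(rates)  # {'male': 0.8, 'female': 0.3}
```

Real recruitment models are vastly more complex, but the failure mode is the same: if the training signal encodes past discrimination, optimising against that signal encodes it too.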


These contrasting examples show the importance of embedding ethical considerations and societal awareness into AI research and development—a principle that SAINTS is actively championing.


SAINTS: Building a Diverse, Inclusive, and Ethical AI Future


One of the highlights of the SAINTS launch event was a presentation by Professor Paul Wakeling, Dean of the York Graduate Research School (YGRS), who detailed the programme’s commitment to inclusive recruitment. Supported by the Yorkshire Consortium for Equity in Doctoral Education, SAINTS has cultivated a diverse cohort of PhD students, each bringing unique perspectives to the programme.


This approach is essential. Diversity in AI research teams is critical to identifying and mitigating biases, ensuring that the systems they develop are fair and representative.


Congratulations to SAINTS


The launch of SAINTS is a significant milestone, not just for the University of York but for the UK’s AI landscape as a whole. By prioritising safety, the programme sets a high standard for ethical AI development. Congratulations to the SAINTS team, and best of luck to the first cohort of PhD students as they embark on this exciting journey. Together, their work has the potential to ensure that the AI systems of the future are not only innovative but also safe, ethical, and inclusive. As we move forward, the true measure of AI’s success will lie not in its technical achievements alone but in its ability to benefit society without leaving anyone behind.
