“Ethical Challenges of AI”
By Professor Laurence Brooks
Abstract
Artificial Intelligence (AI) is fast becoming one of the defining issues of the first quarter of the 21st century. It is approaching ubiquity, with the technology that supports it developing at a remarkable rate and already embedded in almost every aspect of our lives. At the same time, AI is seen as having the potential to be both the liberating force that enables us to live better and more prosperous lives and the ‘terminator’-style overlord and oppressor that will subjugate humanity. The reality, of course, is that we are still a long way from AI being able to take on either of those roles. However, as a ubiquitous technology with its own failings (notably bias introduced by poor training data sets, or a lack of insight into the implications of using these technologies), it is clear that the use of any one AI system in any one specific context presents a number of ethical challenges. Stephen Hawking notably said in 2016, at the inauguration of the Cambridge Leverhulme Centre for the Future of Intelligence, “The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.” For me, this shows that we have choices to make, in both the development and use of AI technologies. The key issue is how we should be using AI, as opposed to how we can… to make the world a better place, while not making things worse.
Bio
Laurence Brooks is Professor of Information Systems in the Information School, Faculty of Social Sciences, at the University of Sheffield, UK, and holds a visiting position in the Centre for Computing and Social Responsibility (CCSR) at De Montfort University, UK. He is a Past President and current board member of the UK Academy of Information Systems (UKAIS), for which he has co-chaired and organised the national conference many times, a Past President of the UK Systems Society (UKSS) and a Past President of the AIS SIG Philosophy of IS.
His most recent research project was as Co-I on the UKRI-funded FRAIM project (https://sites.google.com/sheffield.ac.uk/fraim/home), which focused on framing responsible AI implementation and management. Previously, he was a PI and work package leader on the H2020 ‘TechEthos’ project (www.techethos.eu), focusing on the ethics of emerging technologies. He was a Co-I on the successful H2020 ‘SHERPA’ project, which focused on the ethics of smart information systems such as Artificial Intelligence (www.project-sherpa.eu). He was also a PI on the EPSRC Horizon Centre for Doctoral Training (in collaboration with the University of Nottingham, cdt.horizon.ac.uk), focusing on ‘our lives in data’, and led a work package on the successful FP7 network ‘eGovPoliNet’.
His main research interest lies in the interaction and integration of society/organisations and information systems (IS). He has published in major IS journals and is currently a Senior Editor for the Information Technology & People journal, and a member of the editorial advisory board for the Journal of Information, Communication and Ethics in Society (JICES). He has presented papers at the major international Information Systems conferences. He has researched and published in a wide range of Information Systems areas, including the ethics of IS, eGovernment, ICT for development, and policy informatics. He has also acted as an ethics advisor on several EU-funded projects and advises the European Commission as an ethics expert.
Professor Laurence Brooks l.brooks@sheffield.ac.uk