
AI and ethics: the future is unethical

There’s a lot of hype around artificial intelligence (AI). The discussions swing between AI curing cancer and AI killing us via autonomous robots. These technologies are not science fiction; they are here now – in retail, in finance, in those recommendations you get on Amazon and Netflix.

There is no single definition of AI, nor what we mean by ‘intelligence’.

Broadly, AI describes a set of advanced technologies that enable machines to carry out highly complex tasks effectively – tasks that would require intelligence equivalent to, or greater than, that of a person performing them. AI is enabled by three components: computational processing power, algorithms, and data.

For now, when we talk about AI, we’re talking about a relatively narrow scope of ‘applied intelligence’ – tasks that computers, if trained with the right data, perform better than humans. This ranges from recommending things to buy, to using smartphones and fitness trackers to monitor health.

The national picture for AI is busy, complex, and crosses many sectors – from business and finance, to manufacturing, to health and care – and covers a range of issues including education and training, data sharing and licensing, and ethics. The government’s ambition is to make the UK a global leader in AI.

In health, the government wants to transform how the NHS uses technology. There is evidence that AI can have a positive impact, from detecting some cancers in radiology, ophthalmology, and dermatology, to predicting some health events. It also has the potential to be used in ways that give staff more time to spend with patients and users of services.

There is real potential for sensor technologies embedded within our physical environments – in hospitals, clinics, and the home – to assist with care needs. However, alongside this, there is a real risk of AI being oversold as a solution to all problems.

So what’s the fuss? Why is there so much chatter about all of this? We don’t feel the need to have discussions about the introduction of MRIs. At the heart of it, AI poses ethical challenges in a way that an MRI doesn’t.

These come under a number of broad headings:

Transparency and explainability
It can be difficult or impossible to understand the underlying logic of outputs. How do decisions get made? Who is accountable in this brave new world?

Data bias and fairness
While AI has the potential to reduce bias, it can also reflect and reinforce bias in existing data. How do we ensure AI technologies are non-biased?

Data, privacy, and ownership
AI applications in health and care often require the use of sensitive information that is considered private. There are concerns about ownership of data as well as the AI systems that use and learn from the data, and the outputs. Do we need to care about this?

More broadly, there are increasing calls for health and care to realise the value of its data and to only enter into industry partnerships that also provide value to the public sector. How do we ensure that the sharing of health and care data with industry provides a value return?

Impact on health and care professionals
There’s a lot of talk about the importance of engaging health and care professionals throughout the development of AI applications, but it goes further than that. In health and care, to what extent should human decision-making be involved? How do we embed accountability in using AI technologies?

This leads to, what is for me, the ultimate question: are existing ethics frameworks, the law, and the regulatory environment fit for purpose in this brave new world? Is the future unethical?

Note: These issues are covered more fully in a briefing note I co-authored with John Kellas for Bristol Health Partners. You can read more here. And if you want to dig a little deeper, here are my top 5 podcasts on the ethics of AI, and for your viewing pleasure, some great short clips and longer discussions on AI in health and care.

Dr Sophie Taysom is an independent policy consultant at Keyah Consulting. She has a keen interest in bringing innovation and policy to life, and is passionate about AI in health and care, and the opportunities and challenges it brings. Sophie sits on the Health Panel at The IET. She is also a policy consultant in the property sector and is becoming increasingly fascinated with the potential for linking healthtech and proptech. She is a regular writer on Medium.


Sophie Taysom, 1 February 2019
