The more we rely on the complex algorithms that power AI, the more concerned some people become. How do we ensure these algorithms are fair, ethical and transparent? The UK government wants to set up a ground-breaking Centre for Data Ethics and Innovation, while India is getting a new research institute, Wadhwani AI, whose mission is to harness AI for social good.
How would you feel, for example, if your activity on social media was monitored by AI to look for signs of suicidal thoughts? Research from both China and the USA is being applied to do just this. Weibo, the Chinese equivalent of Twitter, has been using such a system for nine months and has identified 20,000 at-risk users and directed them to support. Facebook has also been using a similar system, which automatically highlights content that might contain suicidal thoughts and sends that information to human reviewers.
AI is being touted as one of the ways we can shift traditional healthcare systems from reacting to illness to detecting it in advance, ultimately reducing the economic burden of chronic conditions such as cancer, heart disease and diabetes. But how do we feel about an all-seeing AI that monitors our every move, sending us digital nudges to ensure that our choices are the optimal ones?
As investment in AI research, products and services grows, we all need to be part of the conversation about how it is developed, deployed and monitored. Listening to the voice of the patient is critical. Healthcare systems are straining under the pressure of increasing demand, in both the developed and developing world, and the world population is projected to grow from 7.6 billion today to 9.8 billion by 2050. We must ensure that policy, process and people move in tandem, otherwise we may not maximize the impact of this AI revolution.