Thu, Jun 27, 2019 – 5:50 AM
ARTIFICIAL intelligence (AI) is by turns terrifying, overhyped, hard to understand and just plain awesome.
For an example of the last, researchers at the University of California, San Francisco were able this year to hook people up to brain monitors and generate natural-sounding synthetic speech out of mere brain activity. The goal is to give people who have lost the ability to speak – because of a stroke, ALS, epilepsy or something else – the power to talk to others just by thinking.
That’s pretty awesome.
One area where AI may most immediately improve our lives is mental health. Unlike many illnesses, there's no simple physical test you can give someone to tell if he or she is suffering from depression.
Primary care physicians can be mediocre at recognising if a patient is depressed, or at predicting who is about to become depressed. Many people contemplate suicide, but it is very hard to tell who is really serious about it. Most people don’t seek treatment until their illness is well advanced.
Using AI, researchers can make better predictions about who is going to get depressed next week, and who is going to try to kill themselves.
The Crisis Text Line is a suicide-prevention hotline in which people communicate through texting instead of phone calls. Using AI technology, the organisation has analysed more than 100 million texts it has received. The idea is to help counsellors understand who is really in immediate need of emergency care.
You would think that the people most in danger of harming themselves would be the ones who use words like “suicide” or “die” most often. In fact, a person who uses terms like “ibuprofen” or “Advil” is 14 times more likely to need emergency services than a person who uses the word “suicide”. A person who uses the crying face emoticon is 11 times more likely to need an active rescue than a person who uses the word “suicide”. On its website, the Crisis Text Line posts the words that people who are seriously considering suicide frequently use in their texts. A lot of them seem to be all-or-nothing words – “never”, “everything”, “anymore”, “always”.
Many groups are using AI technology to diagnose and predict depression. For example, after listening to millions of conversations, machines can pick out depressed people based on their speaking patterns.
When people suffering from depression speak, the range and pitch of their voice tend to be lower. There are more pauses, starts and stops between words. People whose voice has a “breathy” quality are more likely to re-attempt suicide. Machines can detect this stuff better than humans.
There are also visual patterns. Depressed people move their heads less often. Their smiles don’t last as long. One research team led by Andrew Reece and Christopher Danforth analysed 43,950 Instagram photos from 166 people and recognised who was depressed with 70 per cent accuracy, which is better than general-practice doctors.
INVASION OF PRIVACY?
There are other ways to make these diagnoses. A company called Mindstrong is trying to measure mental health by how people use their smartphones – how they type and scroll, how frequently they delete characters.
In his book Deep Medicine, which is about how AI is changing medicine across all fields, Eric Topol describes a study in which a learning algorithm was given medical records to predict who was likely to attempt suicide. It accurately predicted attempts nearly 80 per cent of the time. By incorporating data of real-world interactions such as laughter and anger, an algorithm in a similar study was able to reach 93 per cent accuracy.
I had a chance to interview Mr Topol last weekend at the Aspen Ideas: Health conference. He emphasised how poor we are at diagnosing disease across specialties and figuring out when to test and how to treat. When you compare a doctor’s diagnosis to an actual cause of death as determined by an autopsy, you find that doctors are wrong a lot of the time. Three-quarters of patients taking one of the top 10 drugs by gross sales do not get the desired or expected benefit.
Medicine is hard because, as AI is teaching us, we are much more different from one another than we thought. There is no single diet approach that is best for all people because we all process food in our own distinct way. Diet, like other treatments, has to be customised.
You can be freaked out by the privacy-invading power of AI to know you this intimately, but only AI can gather and analyse data on the scale these predictions require.
The upshot is that we are entering a world in which people we don’t know will be able to understand the most intimate details of our emotional life by observing the ways we communicate. You can imagine how problematic this could be if the information gets used by employers or the state.
But if it’s a matter of life and death, I suspect we are going to go there. At some level, we are all strangers to ourselves. We are all about to know ourselves a lot more deeply. You tell me if that is good or bad. NYTIMES