A year ago, Facebook started using artificial intelligence to scan people’s accounts for danger signs of imminent self-harm. Now AI monitoring is leading Facebook to contact emergency responders an average of about 10 times a day to check on someone — and that doesn’t include Europe, where the system hasn’t been deployed.
Davis says the AI works by monitoring not just what a person writes online, but also how his or her friends respond. For instance, if someone starts streaming a live video, the AI might pick up on the tone of people’s replies.
“Maybe like, ‘Please don’t do this,’ ‘We really care about you.’ There are different types of signals like that that will give us a strong sense that someone may be posting self-harm content,” Davis says.
When the software flags someone, Facebook staffers decide whether to call the local police, and AI comes into play there, too.
Read More at NPR