
Can Social Media Predict Mass Shootings Before They Happen?


Mourners attend a memorial service in the Oregon District to recognize the victims of an early-morning mass shooting in the popular nightspot in Dayton, Ohio. Scott Olson / Getty Images

In the wake of two mass shootings over the weekend that left at least 31 people dead, President Donald Trump called on social media companies like Twitter and Facebook to “detect mass shooters before they strike.”

“I am directing the Department of Justice to work in partnership with local, state, and federal agencies, as well as social media companies, to develop tools that can detect mass shooters before they strike,” Trump said Monday morning in a White House speech responding to the shootings.

The weekend’s violence included a shooting at a Walmart in El Paso, Texas, on Saturday that left 22 people dead, and another early Sunday morning in Dayton, Ohio, that left 10 dead, including the shooter. The alleged El Paso shooter posted a racist manifesto full of white supremacist talking points on 8chan, a hate-filled online message board. As far as authorities can tell, neither shooter posted a warning on mainstream social networks.

Trump’s vague directive shifts some of the blame for gun violence onto social media companies, which run massive platforms and can sift through the personal data of billions of people. But there’s a difference between tipping authorities off when someone posts a concrete threat of violence and having social media companies use algorithms and their massive troves of data to identify who could potentially become a shooter.

Companies like Google, Facebook, Twitter, and Amazon already use algorithms to predict your interests, your behaviors, and, crucially, what you like to buy. Sometimes, an algorithm can get your personality right – like when Spotify somehow manages to put together a playlist full of new music you love. In theory, companies could use the same technology to flag potential shooters.

“To an algorithm, the scoring of your propensity [to] purchase a particular pair of shoes is not very different from the scoring of your propensity to become [a] mass murderer — the main difference is the data set being scored,” wrote technology and marketing consultant Shelly Palmer in a newsletter on Sunday.
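Palmer’s point is easy to see in code. The sketch below is a hypothetical illustration using scikit-learn, with invented features and labels – not any company’s actual model. The same pipeline produces a “propensity” score no matter what the rows and columns describe:

```python
# Hypothetical illustration of propensity scoring; features and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is a user, each column some behavioral signal (pages viewed,
# items carted, and so on). The label: did they buy the shoes (1) or not (0)?
X = rng.random((1000, 5))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# The "propensity" is just the predicted probability of the positive class.
new_users = rng.random((3, 5))
print(model.predict_proba(new_users)[:, 1])  # three scores between 0 and 1
```

Swap in a different feature matrix and a different label column and the identical code scores a different propensity. The math is indifferent to what the columns mean, which is why the choice of data set and label carries all the weight.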

But preventing mass shootings before they happen raises some thorny legal questions: How do you distinguish someone who is merely angry online from someone who could actually carry out a shooting? Can you arrest someone because a computer thinks they’ll eventually become a shooter?

A Twitter spokesperson wouldn’t say much directly about Trump’s proposal, but did tell Digital Trends that the company suspended 166,513 accounts connected to the promotion of terrorism during the second half of 2018. Twitter’s policy doesn’t allow specific threats of violence or wishes “for the serious physical harm, death, or disease of an individual or group of people.”

Twitter also frequently helps facilitate investigations when authorities request information – but the company largely avoids proactively flagging banned accounts (or the people behind them) to those same authorities. Even if it did, that would mean flagging 166,513 people to the FBI – far more than the agency could ever investigate.

We reached out to Facebook for details on how it might work with federal officials to prevent more mass shootings, but the company didn’t get back to us. That said, Facebook has a tricky history when it comes to hate speech and privacy. It maintains a detailed hate speech and violence policy, but the decision about whether to remove content or ban a user ultimately falls to human content moderators, whose judgments are subjective.

Even when someone does post to social media immediately before unleashing violence, the posts often aren’t anything that would trip either Twitter’s or Facebook’s policies. The man who killed three people at the Gilroy Garlic Festival in Northern California posted to Instagram from the event itself – one post calling the food served there “overpriced” and another telling people to read a 19th-century pro-fascist book that’s popular with white nationalists.

A simple search of both Twitter and Facebook turns up anti-immigrant and anti-Hispanic rhetoric similar to that found in the El Paso shooter’s manifesto. The alleged shooters behind the Pittsburgh synagogue attack in 2018 and the Christchurch mosque shootings in March also both expressed support for white nationalism online. Companies could use algorithms to detect and flag that sort of behavior as an indicator that someone might become a mass shooter, but doing so would require an extensive change to their existing policies. Essentially, they’d need to ban accounts (or flag them to authorities) before anyone makes a concrete threat against another person or group.
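Mechanically, that sort of flagging isn’t hard to build; the hard part is everything around it. Here’s a deliberately simplified, hypothetical sketch – the training posts, labels, and threshold are all made up – of how a platform might route high-scoring posts to human review:

```python
# Hypothetical flagging sketch, not any platform's real system.
# The two training posts stand in for a large moderator-labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "example post a moderator labeled as violent rhetoric",
    "example post a moderator labeled as benign",
]
train_labels = [1, 0]  # 1 = violates policy, 0 = does not

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_posts, train_labels)

THRESHOLD = 0.9  # an arbitrary cutoff; choosing it is the real policy decision

def flag_for_review(post: str) -> bool:
    """Send a post to human reviewers if its score crosses the threshold."""
    score = clf.predict_proba([post])[0, 1]
    return score >= THRESHOLD
```

Notice where the human judgment hides: in the labels, in the threshold, and in what happens to a flagged account. The policy change described above is about those choices, not about the model itself.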

There’s also the question of whether algorithms can get it right. The Partnership on AI, an organization looking at the future of artificial intelligence, conducted an intensive study on algorithmic tools that try to “predict” crime. Their conclusion? “These tools should not be used alone to make decisions to detain or to continue detention.”

“Although the use of these tools is in part motivated by the desire to mitigate existing human fallibility in the criminal justice system, it is a serious misunderstanding to view tools as objective or neutral simply because they are based on data,” the organization wrote in its report.
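One concrete reason such tools shouldn’t act alone is the base-rate problem: genuine mass shooters are vanishingly rare, so even an extremely accurate classifier applied to an entire user base yields almost nothing but false positives. The numbers below are hypothetical, chosen only to show the shape of the arithmetic:

```python
# Base-rate arithmetic with hypothetical numbers (Bayes' rule in long form).
population = 100_000_000    # users screened
base_rate = 1e-6            # assume 1 in a million is a genuine threat
sensitivity = 0.99          # the model flags 99% of genuine threats
false_positive_rate = 0.01  # and wrongly flags 1% of everyone else

true_threats = population * base_rate      # 100 people
flagged_true = sensitivity * true_threats  # 99 people
flagged_false = false_positive_rate * (population - true_threats)  # 999,999

precision = flagged_true / (flagged_true + flagged_false)
print(f"flagged: {flagged_true + flagged_false:,.0f}")  # flagged: 1,000,098
print(f"genuine threats among them: {precision:.4%}")   # about 0.0099%
```

Under those assumptions, about a million people get flagged in order to catch 99 genuine threats, which echoes the earlier point that 166,513 suspended accounts are already far more than the FBI could ever investigate.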

We’ve already seen what can happen when an algorithm gets it wrong. Sometimes, it’s innocuous, like when you see an ad for something you’d never buy. Third parties can exploit algorithms as well: some far-right extremists used the YouTube algorithm to spread an anti-immigrant, white supremacist message until YouTube changed its hate speech policy in June. Preventing radicalization is one thing – predicting potential future crimes is another.

We’ve reached out to the Department of Justice to see if it has any more details on how social media companies could prevent shootings before they happen. We’ll update this story if we hear back.







