Part of the difficulty in correcting misinformation is that the corrections often come much too late. Facts are sticky: once we’ve embedded them in our long-term memory, their very presence makes us resistant to digging them out. In essence, we use our memory of a fact as an internal signifier of its truth.
So how do we challenge misinformation at the very point that it enters people’s awareness? Research and development group Pacific Social could have one answer. The company is developing bots that will detect misinformation posted to social media – and challenge users to think again.
The group is tackling one of the most pressing misinformation problems we face today: the belief that vaccines cause autism. Pacific Social’s thinking on the problem has evolved, according to Tim Hwang, one of its three researchers. Hwang, one of Fortune’s 30 Under 30 for law and politics, is also head of special initiatives at Imgur and creator of a host of initiatives exploring the intersection between technology and society.
“In the last few years we’ve been looking at the use of bots – realistic online identities that are good at asking people to talk about certain things because they look like they’re human,” Hwang told me. “We held a series of competitions in which people would write bots and we would see if they could do certain things… So for a while we were building bots that were making introductions between people. A bot called, let’s say, Bob Hansen, would look like a human and might say, ‘You’d be interested in talking to this person.’”
Hwang and his team thought such a pseudo-person could float around social media spaces, talking about its lunch and its funny cats, until someone shared a link that promotes the vaccine-autism myth. Then the bot would counter the misinformation. But Hwang concedes that the use of subterfuge, while potentially effective, could also backfire by fueling conspiracy theories. So now the team is thinking that bots will be clearly labeled as such.
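The detection step Hwang describes could be sketched very simply. The code below is a hypothetical illustration, not Pacific Social's implementation: it assumes a blocklist of domains known to promote the vaccine-autism myth (the domain names here are invented), scans a post for links to those domains, and drafts a reply that is clearly labeled as coming from a bot, in line with the team's revised thinking.

```python
# Hypothetical sketch of a misinformation-countering bot's detection step.
# Domain names and message wording are illustrative assumptions only.
import re
from urllib.parse import urlparse

# Invented blocklist of domains that promote the vaccine-autism myth
MISINFO_DOMAINS = {"vaccine-myths.example", "autism-link.example"}

URL_PATTERN = re.compile(r"https?://\S+")

def flag_misinformation(post_text: str) -> list[str]:
    """Return any blocklisted domains linked in a post."""
    flagged = []
    for url in URL_PATTERN.findall(post_text):
        domain = urlparse(url).netloc.lower()
        if domain in MISINFO_DOMAINS:
            flagged.append(domain)
    return flagged

def draft_reply(flagged: list[str]) -> str:
    """Compose a correction that openly identifies itself as automated."""
    if not flagged:
        return ""
    return ("[Automated account] The link you shared repeats the claim "
            "that vaccines cause autism, which large studies have found "
            "no evidence for. Happy to share sources if you're interested.")
```

A real system would need far more than a domain blocklist – claims spread as screenshots, paraphrases, and reposts – but even this toy version shows where the two design questions in the article live: the matching logic decides *when* the bot intervenes, and the reply template decides *how*.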
Another design problem is how exactly the bots will counter misinformation. Will they present arguments? Refer users to a link with reliable information? Just engage users in conversation? Hwang says he’s been reading the latest research on the psychology of misinformation, and is using this to try to craft a method that will effectively counter the falsehoods – and not reinforce them. Pacific Social hopes to start trialing its bot in about six months.
Hwang is emphatic that the project “is not very sci-fi at all” – it employs fairly basic artificial intelligence techniques. But he can imagine a future that does sound somewhat fantastic, and maybe a little scary.
“Essentially it’s an arms race,” Hwang says. “We’re calling it cognitive security, where computer security will eventually apply to the social space. There’ll be a world where people are actively influencing and building tools against influence.”
It seems not too far-fetched to imagine Hwang’s army of “good” information bots fighting it out with more nefarious bots. As Hwang himself points out, bots have already been used for repressive or less-than-honest ends, such as drowning out the voices of dissidents and astroturfing with fake Twitter accounts. That’s part of what inspired him to use bots for good.
So I wonder whether this is the next frontier for journalists – or at least, the journalist-programmer hybrids that could be our information dealers of the future. Perhaps in 2030 or 2040, it will no longer be Fox News versus MSNBC. It will be Black Hat Bot versus White Hat Bot. The most worrying part is, there’s nothing to compel those bot creators to reveal themselves or their methods – or even that your good friend Bob Hansen, who’s taught you so much about politics and health, is nothing more than code.