Del Harvey is watching what you’re doing on Twitter. Head of the Trust and Safety Team at the social network, she develops ways to keep Twitter’s 240 million-plus users safe. With that many people — sending 500 million tweets every day — dangerous things are bound to happen, she says at TED2014.
For Harvey, the day-to-day is hardly boring. “My job is to ensure user trust, protect users’ rights, and keep users safe — both from each other and, at times, from themselves,” she says. “The vast majority of activity on Twitter puts no one in harm’s way; my job is to root out activity that might. [At Twitter], a one-in-a-million chance happens 500 times a day.”
A tweet with exactly the same wording can be, in one context, a threatening insult aimed at a stranger and, in another, a friendly greeting between friends. How should one interpret a tweet that says only "yo b*tch"? she asks. One user could have malicious intentions, she says, while another could merely be referencing the TV show Breaking Bad. (Or using Twitter in the persona of a dog. True story.)
It is Harvey’s job to develop ways to discern the difference — on a tremendous scale. “99.999% of tweets pose no risk to anyone,” she says. “After you take out that 99.999%, that tiny percentage remaining works out to roughly 150,000 per month. The sheer scale of what we’re dealing with makes things difficult.”
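The scale figures Harvey cites are internally consistent, which a quick sketch in Python can confirm. This is purely illustrative arithmetic based on the numbers in the talk (500 million tweets per day, 99.999% harmless, roughly 30 days per month); the variable names are my own, not Twitter's.

```python
# Figures from the talk: 500 million tweets per day,
# of which 99.999% pose no risk (i.e. 1 in 100,000 might).
tweets_per_day = 500_000_000

# The risky remainder: 0.001% of daily volume, using exact
# integer arithmetic (1 in 100,000) to avoid float rounding.
risky_per_day = tweets_per_day // 100_000   # 5,000 per day

# Over a ~30-day month, that matches the "roughly 150,000
# per month" Harvey mentions.
risky_per_month = risky_per_day * 30        # 150,000 per month

# And her earlier line: "a one-in-a-million chance
# happens 500 times a day."
one_in_a_million_per_day = tweets_per_day // 1_000_000  # 500 per day

print(risky_per_day, risky_per_month, one_in_a_million_per_day)
```

The takeaway is the same one Harvey draws: even a vanishingly small failure rate, multiplied by half a billion events a day, produces thousands of incidents.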
Take spam, for example. Common spam on Twitter takes the form of one user sending the same message to thousands of people, over and over. So why doesn’t Harvey, as head of Trust and Safety, simply set up an algorithm to ban every account that does this? Because tweets aren’t that simple, she says. That one message sent to thousands of users could be a bot sending spam, or a notification from NASA telling users when the International Space Station will pass over their town. If Twitter simply banned every account sending out mass tweets, thousands of users could lose access to important information, Harvey says.
Similarly, an account sending out the same link over and over again could be a user phishing for private information, or a bystander who captured a video of police brutality communicating with journalists. “We don’t want to gamble on potentially silencing that speech,” Harvey says. “We don’t want to gamble on classifying that as spam.”
“It’s crucial,” Harvey says, “that I not only predict, but also design protections for the unexpected.” One of these designs, she explains, is Twitter’s decision to strip all geo-data from photos uploaded to the system. “How could something as benign and innocuous as a picture of a cat lead to death?” she asks. The answer: When a picture of a cat contains data revealing the exact location of a user’s apartment — anything is possible. “If I start by assuming the worst and work backwards,” she says, “I can predict even for the unexpected.”
With a job dedicated to constantly imagining the worst things that could happen online, Del Harvey understands why you might assume her view of the world is grim. But that’s not the case, she says. “The vast majority of interactions that I see on Twitter are positive,” she says. “People reaching out to help, to connect, to share information with each other … [but] for those of us dealing with scale, tasked with keeping people safe, we have to assume the worst will happen. For us, the one-in-a-million chance is pretty good odds.”