As social networks continue to mature, they increasingly take on roles they may not have anticipated. Moderating graphic imagery and hate speech, working to address trolling and harassment, and dealing with dissemination of fake news puts companies like Facebook and Twitter in powerful societal positions. Now, Facebook has acknowledged yet another challenge: Keeping your data safe from surveillance.
That’s harder than it may sound. When you post something publicly on a social network, anyone can view it—including law enforcement or federal agencies. Those types of groups, particularly local police, have increasingly capitalized on social media as an investigatory resource. And those one-off cases hardly register compared to the mass surveillance tools that software companies can create by using a social network’s API—the set of tools that allow outside parties to develop interoperable software for a company’s product. In the case of a company like Facebook, those tools can surveil and collect data about millions of people. These products are then sold to police, advertisers, or anyone else willing to pay. Or at least, they could until this week.
“We are adding language to our Facebook and Instagram platform policies to more clearly explain that developers cannot ‘use data obtained from us to provide tools that are used for surveillance,’” Facebook said in a statement. “Over the past several months we have taken enforcement action against developers who created and marketed tools meant for surveillance, in violation of our existing policies.”
Facebook worked with the American Civil Liberties Union of California, Center for Media Justice, and Color of Change to implement the policy, prompted in part by ACLU research from September that demonstrated how law enforcement used third-party tools to track activists, particularly those from the Black Lives Matter movement.
“The clear public policy is important because it sends a very clear message to developers and to businesses about what is not allowed on Facebook,” says Nicole Ozer, the Technology and Civil Liberties Policy Director at the ACLU of California. “And if their business model is based on building tools for surveillance they need to get a new business model.”
Facebook’s not navigating these waters alone. Twitter grappled with enforcement of its anti-surveillance policies throughout 2016, quietly banning the company Dataminr from selling Twitter data reports to government intelligence agencies, and limiting the data service Geofeedia’s access after an ACLU of California investigation into its practices. Facebook and Instagram also curtailed Geofeedia’s data access after the report.
“These third-party data-mining companies that are just in it to make a buck are building relationships not just with law enforcement but with corporations and using our data in all these really harmful ways which disproportionately impact poor folks, communities of color, et cetera,” says Brandi Collins, a campaign director at Color of Change, which researches technology adoption.
Making It Work
Strengthening policies that prohibit this type of API use should put developers on notice, while also codifying Facebook’s position so that its own employees know what the expectations are. But the real question is how Facebook, Instagram, and other social networks like Twitter will actually enforce those policies. Users, after all, have no control over how developers use the APIs, and generally aren’t aware when they’re being surveilled.
Facebook says that it requires developers to submit statements about what they plan to do with the data they request access to, and that the company runs both automated and manual reviews to confirm that developers are actually using the data as described. Facebook also said it conducts broader investigations when it receives reports that a developer may not be compliant.
If social networks like Facebook and Twitter can create self-perpetuating internal systems that safeguard users against this type of third-party surveillance, all the better. But activists say that transparency will be crucial to confirming whether that has really happened, and whether enforcement is consistent across the board.
“It’s a great first step, but it is only a first step,” says Malkia A. Cyril, the founder and executive director of the Center for Media Justice. “We need these companies to tell us how they’re doing without activists having to work day in and day out to remind them of their obligations to their users and to society. We need the will to come from within to take transparency one step further, document your enforcement, and tell us how enforcement is going through independent audits.”
The groups are hopeful that Facebook has the tools and resources to build this internal structure. But for the industry as a whole, stepping up to address these problems may be daunting. “There’s a trifecta of protections that are needed. Protections against censorship, protections against harassment, and protections against surveillance,” Cyril says. “Social media companies, they’re no longer just connecting people. Now they have an extraordinary responsibility to also protect. I don’t think they’re ready for that responsibility.”