Facebook said Thursday that it would expand a fact-checking program to its Instagram image-sharing service. Instagram users in the US can now report content they believe is false, but it’s not clear that the system, which is already overwhelmed, can handle more suspect information.
“Facebook did not ever scale the fact checking program on Facebook to be able to reach all users and all information on Facebook,” says Robyn Caplan, a media and information policy scholar at Rutgers who studies social media governance. “I’m not quite certain how they’re going to scale to Instagram effectively.”
Instagram was once the land of golden filters, where positivity reigned supreme. More recently, though, the platform has fallen victim to the same hate speech, bullying, and misinformation that plagues just about every social media site. Systems that can respect free speech, and sensitively address complicated and culturally inflected conversations, at Instagram’s monstrous and growing scale, have proved elusive.
Facebook began its fact-checking initiative in the wake of the 2016 election. When users see content they think is suspicious or misleading, they can flag it. If posts are repeatedly flagged, Facebook sends them to fact checkers at organizations like PolitiFact, the Associated Press, and Factcheck.org. Those fact checkers aren’t obligated to review content, but can choose the posts they think are the most important or impactful to evaluate. On Instagram, posts that are deemed false aren’t taken down, but they are removed from the site’s “explore” and hashtag pages, which Stephanie Otway, a spokesperson for Facebook, says can significantly limit their reach. “We’re investing heavily in limiting the spread of misinformation across our apps,” she says.
Ben Nimmo, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab who studies disinformation campaigns on social media, sees this as a logical expansion for Facebook and a generally good policy. “Information operations don’t stick on one platform so fact checking shouldn’t stick on one platform either,” he says. Facebook was heavily criticized for its failure to counteract the disinformation campaign run by Russia’s Internet Research Agency (IRA) during the 2016 election. But those trolls were operating across multiple platforms. A report from the Senate Intelligence Committee concluded that Instagram, not Facebook, was probably the most effective platform for the IRA’s meme warfare.
Fact-checking alone won’t be enough to counteract the online tide of misinformation, says Nimmo. Groups like the IRA are highly organized, complex networks of linked accounts that like and reshare each other’s content. Checking if each meme is true—and flagging those that aren’t—isn’t a good strategy for dismantling those operations. To do that, Instagram and Facebook will still need teams to look more broadly at activity on those platforms and find connections between posts promoting false information to root out bad actors who may be running calculated campaigns. Nimmo says fact checking is an integral part of that process, though, and an important starting point to establish what kinds of language and lies are being spread. But the scale of disinformation on Facebook far outpaces the number of fact checkers working on the problem.
Facebook currently works with about 25 fact checking organizations around the world, sifting through content from its more than 1 billion daily active users globally. Expanding to include Instagram’s US market will add over 100 million more users and, as Nimmo notes, “fact checkers have to sleep.” Instagram hopes to use information gathered by fact checkers to understand how disinformation is spreading across the platform and to eventually train AI tools that will be able to proactively recognize misleading posts without requiring users to flag them. But those solutions are a long way off and will always be somewhat limited.
Caplan says determining if something is true or false means you have to know a lot of other, culturally specific, things, including which sources are reliable and what conspiracy theories are popular in different countries. She says there are simply “too many context factors that go into the fact checking process to fully automate that.” The system as it functions right now, with fact checkers verifying some, but not all posts, can cause other problems because users don’t always know what’s been checked and what hasn’t. One study found that when users see some headlines flagged as fake, they are more likely to perceive unflagged headlines as true because they believe they’ve all been verified.
Facebook does not disclose how much of its content is fact checked, but Aaron Sharockman, executive director of PolitiFact, a fact-checking nonprofit that works with Facebook, says that between checking the president, the nearly two dozen Democrats who are running for president, governors, senators, and social media content, “we simply can’t cover all the ground.”
Facebook pays PolitiFact to check a certain amount of content and, despite adding an entirely new platform to the deal, Sharockman says the two organizations haven’t discussed expanding the agreement. Without an “unlimited blank check, we’re always going to pick one piece of misinformation over fact checking another,” he says. But Sharockman says adding more content may still be a good idea. “I’d rather have more access to more information so I can hopefully pick the most important things for us to work on and debunk,” he says.
Sharockman says his staff of 10 full-time fact checkers tries to prioritize stories that are the most important or have the potential to be the most impactful. After the shootings in El Paso or Jeffrey Epstein’s suicide, they did their best to keep conspiracy theories from spreading unchecked. He says that while the total volume of checks won’t change for the time being, having more information from Instagram allows them to make better decisions about which fires need to be immediately put out and which can wait.
PolitiFact rates content on a “Truth-O-Meter” scale that ranges from “true,” to “mostly false,” to its most damning rating, “pants on fire!” But the organization gets no information about what happens after it flags content, or what happens to the users who posted it. Earlier this year, Snopes walked away from its fact checking contract with Facebook, frustrated by the narrowness of the project and the capacity it gobbled up. “It doesn’t seem like we’re striving to make third-party fact checking more practical for publishers—it seems like we’re striving to make it easier for Facebook,” Vinny Green, Snopes’ vice president of operations, told Poynter. “The work that fact-checkers are doing doesn’t need to be just for Facebook—we can build things for fact-checkers that benefit the whole web, and that can also help Facebook.”
Sharockman agrees that aspect of the work is frustrating, but he also says working with Facebook gives PolitiFact an immediate impact it doesn’t often achieve. While it can point out that a politician is making untrue statements, politicians rarely erase or retract them. On Facebook, if PolitiFact determines something is untrue, the post is flagged. Expanding to Instagram gives Sharockman’s fact checkers the opportunity to expand their impact and to reach a younger demographic. Sharockman says he’s excited to see what comes of the partnership. “There will be learning for all of us to do but we’re up for it,” he says.