Facebook was encountering election interference content as far back as 2006, some 10 years before Mark Zuckerberg first acknowledged the issue, the platform’s former head of global public policy has claimed.
Speaking at Sky News’ Big Ideas Live event, where experts and industry leaders discussed the biggest science and technology issues of our times, Paul Kelly said staff had to deal with it “all the time”.
“We saw the initial aspects of misinformation campaigns being built around elections as early as 2006 and in 2008,” Mr Kelly revealed at a panel on the future of big tech companies.
“We actually did a number of projects to try to increase civic engagement on the platform at that time. And we certainly saw people try to use misinformation to influence elections early on at that phase.”
Facebook founder Zuckerberg admitted in 2017 that he should have taken more seriously the concerns about fake news in the run-up to the 2016 presidential election, which Donald Trump won.
He had dismissed the notion as “crazy”, but then wrote in a public post in September 2017: “Calling that crazy was dismissive and I regret it.
“This is too important an issue to be dismissive.”
Mr Kelly was responding to an audience member’s question about the link between social media and increased divisiveness in US politics and elsewhere.
Challenged by Sky News’ technology correspondent Rowland Manthorpe about the gap between Facebook tackling misinformation and Zuckerberg acknowledging the issue, Mr Kelly said “the scale changed”.
“I had left by then,” he stressed.
“But we definitely had seen some attempts at electoral misinformation in the earlier races.”
A spokesperson for Facebook’s parent company Meta said it had “developed a comprehensive approach to how elections play out on our platform” – “reflecting years of work” and “billions of dollars in investments”.
They added that they had “dedicated teams working on elections”, including this month’s US midterms.
“Meta has hundreds of people working across more than 40 teams to combat election and voter interference, fight misinformation and find and remove violating content and accounts,” they said.
“We’ve also developed stronger policies to stop claims of delegitimisation or fraud on our services.”