Sharing on Facebook seems harmless. But leaked documents show how it may help spread misinformation


A video of House Speaker Nancy Pelosi seeming to slur her speech at an event tore through the internet, gaining steam on Facebook. Share after share, it spread to the point of going viral.  

The altered video from May 2019 was a slowed-down version of the actual speech the California Democrat gave but was being promoted as real. Even though Facebook acknowledged the video was fake, the company allowed it to stay on the platform, where it continued to be reshared. That exponential resharing was like rocket fuel to the manipulated video. 

In the run-up to the 2020 election, with additional traction coming from then-President Donald Trump sharing the video, the amplification of misinformation showed its real-world implications and the need for social media companies to take action to stem its spread.

YouTube, where it also appeared, took the video down. But Facebook said at the time that, because the company wanted to encourage free expression, it allowed the video to remain up while reducing its distribution, striking a balance between that priority and promoting authentic content.

The fake Pelosi video is an example of the power of something social media users do naturally – sharing.

Speaker Pelosi slams Facebook over altered video

House Speaker Nancy Pelosi is slamming Facebook for not removing a doctored video that has spread widely on the social network in which she appears to slur her words. (May 29)

AP

►Navigating Facebook: Facebook fed posts with violence and nudity to people with low digital literacy

It turns out, internal documents show, that a company researcher found Facebook could have flagged the source of that video, the Facebook page of Politics WatchDog, at least a week earlier based on a simple metric – how much traffic was coming from people sharing its content.

With its content surfacing almost exclusively through Facebook users resharing its posts, the page had gained a massive audience in the days leading up to the Pelosi video through a strategy one researcher dubbed “manufactured virality,” in which a group uses content that has already gone viral elsewhere to drive its Facebook page’s popularity.

►The story of Carol and Karen: Two experimental Facebook accounts show how the company helped divide America

►The Facebook Papers: Facebook says it’s stopping hate against Black Americans. Its own research shows otherwise

While not the exclusive domain of shady intent, the approach is commonly used by bad actors on Facebook, often to spread falsehoods. Facebook has allowed this type of content to flourish on its platform.

Sharing on Facebook isn’t inherently bad. It is, after all, a basic function of how social media works and why many of us go there.

What Facebook’s internal research shows about sharing

In documents released by whistleblower Frances Haugen, Facebook employees warned repeatedly that reshares like these were likely a main vector for spreading misinformation and of the harms that could come from that. They suggested myriad solutions – everything from demoting reshares to slowing them down – only to see their suggestions ignored.

►Who is Frances Haugen? Everything you need to know

Despite the red flags raised by some employees, Facebook made sharing easier during that time, choosing core engagement metrics critical to its business over measures that could have reduced the harmful content on the platform. Getting people to read, share and respond to Facebook content and spend more time on the platform is critical to what the company can charge advertisers, and it found misinformation in reshares to be particularly engaging.

In a whistleblower complaint Haugen filed with the Securities and Exchange Commission, she included reshares as one of the ways Facebook has failed to remove misinformation from the platform even as it touted its efforts to do so.   

While Facebook had publicized its efforts countering extremism and misinformation related to the 2020 U.S. elections and the Jan. 6 insurrection, it failed to adequately account for its role in the spread of misinformation, Haugen’s complaint states. 

“In reality, Facebook knew its algorithms and platforms promoted this type of harmful content,” her complaint says, “and it failed to deploy internally-recommended or lasting counter-measures.”  

►Inaction: Facebook still had Holocaust denial content 3 months after Zuckerberg pledged to remove it

►From the Facebook Papers: Black people use Facebook more than anyone. Now they’re leaving.

Attorneys for Haugen, a former Facebook product manager, disclosed more than 1,000 documents to the SEC and provided them to Congress in redacted form. USA TODAY was among a consortium of news organizations that received redacted versions. 

The documents have shed light on internal research showing Facebook’s knowledge of a variety of harms, many of which were first reported by The Wall Street Journal.   

Meta Platforms, Facebook’s parent company, declined to answer a list of detailed questions about misinformation spread through reshares, the solutions offered by its employees and the company’s incentives not to act on reshares because of the impact on its engagement metrics.  

“Our goal with features like sharing and resharing is to help people and communities stay connected with each other,” Aaron Simpson, a spokesman for Meta, wrote in an emailed statement. “As with all our features and products, we have systems in place to keep communities safe, like reducing the spread of potentially harmful content.”    

Why sharing on Facebook can be connected to misinformation

To be sure, sharing is not inherently bad and, indeed, is a bedrock of the platform. Users do it all the time to share news of a friend facing a medical issue, seek help finding a lost pet, announce a birth or just pass on something they found interesting.

But Facebook’s research found that misinformation in particular draws user engagement and has a high likelihood of being reshared, and that the company could use reshare signals to lessen the reach of harmful content.

Experts agreed that the key role of reshares in spreading misinformation, and Facebook’s inaction, have not been widely known. The documents show the company’s reluctance to reduce the spread of misinformation in reshares because doing so would cut into the kind of engagement Facebook profits from.

“One thing that we have seen consistently, not just in these documents but in other reports about actions that Facebook has taken, is that Facebook is not willing to sacrifice its business goals to improve the quality of content on its system and achieve integrity goals,” said Laura Edelson, co-director of Cybersecurity for Democracy at New York University.

Facebook disabled Edelson’s account after her research team created a browser extension that allows users to share information about which ads the site shows them. Other experts agreed with her assessment of Facebook’s incentives playing a role in its decisions about how, and whether, to address this type of misinformation on the platform.

Edelson added, “We do see Facebook is consistently willing to sacrifice its integrity goals for the sake of its overall business goals.”

The role of Facebook’s algorithm as accelerant

In a late 2018 note, Meta Platforms CEO Mark Zuckerberg explained Facebook’s efforts to combat misinformation and other content that borders on violating its policies. The closer a piece of content gets to that line, he wrote, the more people engage with it even as they say they don’t like it.

Zuckerberg said the company would work to reduce the distribution and virality of this type of content, specifically misinformation.  

Yet over and over in the documents, Facebook employees reiterated the likelihood that reshared content is misinformation and found that reshares are a key signal the company could use to reduce the distribution of likely harmful content.

How many layers of resharing a post goes through – its reshare depth – can also indicate its potential for harm. Facebook has a metric for what it calls “deep reshares.”

When you post a link or a video, for instance, according to Facebook’s measure, that originating post has a reshare depth of zero. Then one of your friends clicks the button to share your post, and that bumps it to a depth of one. If their friend or follower shares that, the depth is two. And so on, and so on. 
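For illustration only, here is a minimal Python sketch of that counting, assuming a hypothetical Post record that keeps a reference to the post it was shared from; the documents do not describe Facebook’s actual data model.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    author: str
    shared_from: Optional["Post"] = None  # None means this is the original post

def reshare_depth(post: Post) -> int:
    """Count how many share hops separate this post from the original."""
    depth = 0
    while post.shared_from is not None:
        depth += 1
        post = post.shared_from
    return depth

original = Post(author="you")                                             # depth 0
friend_share = Post(author="friend", shared_from=original)                # depth 1
second_share = Post(author="friend of friend", shared_from=friend_share)  # depth 2

print(reshare_depth(original))      # 0
print(reshare_depth(second_share))  # 2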

Facebook found that a reshare depth of two or greater for a link or photo indicated the content was four times as likely to be misinformation as links and photos in the news feed generally. At higher reshare depths, that could rise to as much as 10 times as likely.

That doesn’t mean everything reshared six steps from the original poster is misinformation. But, according to Facebook’s research, it is far more likely to be.
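One of the countermeasures employees floated was demoting deep reshares in the feed’s ranking. The toy Python sketch below shows what using reshare depth as a demotion signal could look like; the threshold and demotion factor are made-up values, not anything specified in the documents.

def adjusted_score(base_score: float, reshare_depth: int,
                   depth_threshold: int = 2, demotion_factor: float = 0.5) -> float:
    """Reduce a post's ranking score once its reshare depth passes a threshold (illustrative values)."""
    if reshare_depth >= depth_threshold:
        return base_score * demotion_factor
    return base_score

print(adjusted_score(100.0, reshare_depth=1))  # 100.0 – shallow reshare, left alone
print(adjusted_score(100.0, reshare_depth=4))  # 50.0 – deep reshare, demoted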

In a 2020 analysis, Facebook found group reshares are up to twice as likely to be flagged as problematic or potentially problematic. Another analysis that year found that, since 2018, content shared by groups had grown three times faster than content shared outside of groups.

According to one document, up to 70% of misinformation viewership comes from people sharing what others have shared.

“If we are talking about stuff that is misinformation or hate speech that (Facebook says) they do not want to tolerate on their platform and then they just let it run wild, I’d say yes there is also something that they could and should do about it,” said Matthias Spielkamp, executive director of Algorithm Watch, a research and advocacy organization. 

Facebook’s algorithm, optimized for engagement and virality, serves as an accelerant and further amplifies content that is gaining momentum on its own. 

While individual users can create misinformation that gets reshared, Facebook’s research focused on the particular harm of groups and pages – including those that use the company’s algorithms as a way to spread this type of content and grow their following.  

“These kind of actors who are trying to grow their celebrity status, to grow their follower networks, they understand that you make sensational content, you make stuff that really surprises people, captures their attention and trades on their already held beliefs and you keep working on that and pretty soon you’ve got a nice follower base,” said Jennifer Stromer-Galley, a Syracuse University professor who studies social media.

Facebook’s documents warn of the harms that could come from reshared misinformation. One 2019 experiment found adding friction to sharing in India reduced “particularly concerning” content that inflamed tensions about Kashmir. 

Another document from 2019 warned that “political operatives and publishers tell us that they rely more on negativity and sensationalism for distribution due to recent algorithmic changes that favor reshares.”

Citing…
