Frances Haugen leaked a cache of Facebook’s internal research to the media and the Securities and Exchange Commission in hopes of prompting better regulation of the company. In one complaint based on these documents, she alleged that Facebook’s own research shows the social network amplifies hate, misinformation, and political unrest, and that the company hides what it knows.
“What I have seen multiple times at Facebook is that there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, again and again, chose to put its own interests first: how to make more money,” Haugen told Scott Pelley, host of the show 60 Minutes.
Frances Haugen: So, you know, you have your phone. You may only see 100 pieces of content if you sit and scroll for, say, five minutes. But Facebook has thousands of options it could show you.
The algorithm chooses from these options based on the type of content you have engaged with the most in the past.
Frances Haugen: And one of the consequences of how Facebook curates this content today is that it optimizes for content that gets engagement, or a reaction. But its own research shows that content that is hateful, divisive, and polarizing elicits anger from people more easily than other emotions.
Scott Pelley: Misinformation and angry content entice people and keep them on the platform.
Frances Haugen: Yes. Facebook has realized that if it changes the algorithm to be safer, people will spend less time on the site, click on fewer ads, and the company will make less money.
Haugen notes that Facebook understood the danger to the 2020 election, so it put safety systems in place to reduce misinformation, but many of these changes, she says, were temporary.
Frances Haugen: And as soon as the elections were over, they either turned them off or returned the settings to what they were before, to prioritize growth over safety. And that really feels like a betrayal of democracy to me.
Facebook released a statement challenging Haugen’s points after the televised interview.
“We continue to make significant improvements to combat the spread of misinformation and harmful content,” said Facebook spokesperson Lena Pietsch. “To suggest we encourage bad content and do nothing is simply not true.”
But Facebook took the money without batting an eye
When Meta’s Mark Zuckerberg was called to testify before Congress in 2018, Senator Orrin Hatch asked him how Facebook makes money. Zuckerberg’s response has since become something of a meme: “Senator, we run ads.”
Between July 2018 and April 2022, Meta made at least $30.3 million in ad revenue from networks it removed from its platforms for engaging in coordinated inauthentic behavior (CIB), according to the data. Margarita Franklin, Meta’s head of security communications, confirmed that the company does not refund advertising money when a network is taken down. Franklin clarified that some of the money came from ads that did not violate the company’s rules but had been placed by the same PR or marketing organizations, which were later banned for taking part in the operations.
A Wall Street Journal report estimated that by the end of 2021, Meta had captured 17% of the global advertising market, earning $114 billion from advertising. At least some of that money came from ads purchased by networks that violated Meta’s policies and that the company itself flagged and removed.
The global advertising industry is estimated at $400 billion to $700 billion, said Claire Atkin, co-founder of the independent watchdog Check My Ads Institute. It’s a rough estimate, but nobody knows how big the sector really is. Nobody knows what’s going on inside.
But Atkin says that part of what makes information, including advertisements, seem legitimate on social media is the context in which it appears. Facebook, Instagram, WhatsApp, whichever network it is, within our internet experience it is a place where we connect with our closest friends and family. It’s the place on the internet where we share our innermost feelings about what’s happening in our lives, Atkin explains. It is our trusted place to connect.
For nearly four years, Meta has published periodic reports identifying CIB networks of fake accounts and pages that aim to deceive users and, in many cases, push propaganda or misinformation made to appear organic in order to shift public opinion. These networks can be operated by governments, independent groups, or public relations and marketing firms.
Last year, the company also began addressing what it calls “coordinated social harm,” in which networks use real accounts as part of their information operations. Nathaniel Gleicher, Meta’s head of security policy, announced the changes in a blog post, noting that threat actors deliberately blur the lines between authentic and inauthentic activity, making enforcement across the industry more difficult.
This change, however, shows how narrow the company’s criteria for CIB are, meaning Meta may not have documented networks that use other tactics at all. Information operations can sometimes use real accounts, or be conducted on behalf of a political action committee or an LLC, making it harder to classify their behavior as inauthentic.
“One tactic that has been used more frequently, at least since 2016, has not been bots, but real people who go out and post,” says Sarah Kay Wiley, a researcher at Columbia University’s Tow Center for Digital Journalism. Facebook’s CIB reports show that the company does identify some of these operations, but they are really hard to track down.
Russia was the country with the largest number of ads among the networks Meta identified as CIB and subsequently removed. The United States, Ukraine, and Mexico were targeted most often, although nearly all campaigns targeting Mexico were tied to domestic actors. (Meta’s public revenue statements break down the company’s revenue only by region, not by country.)
More than $22 million of the $30.3 million was spent by just seven networks, the largest of which was a $9.5 million global campaign linked to the anti-China right-wing media group behind The Epoch Times.
Of the 134 campaigns with paid ads that Meta identified and removed, 56% focused on domestic audiences. Only 31% focused exclusively on foreign audiences, i.e., users located outside the network’s country of origin (the remaining 12% targeted a mix of domestic and foreign audiences).
Many of the larger networks Meta removed were run by public relations or marketing firms, such as the Archimedes Group in Israel and Pragmatico in Ukraine. When this happens, Meta removes and bans all accounts and pages associated with the firm, regardless of whether a given account was involved in a particular CIB campaign, in an attempt to discourage companies from selling paid disinformation services.
CIB campaigns and disinformation aren’t limited to Facebook and Instagram. Twitter, which describes these activities as “information operations,” has identified and removed thousands of accounts on its platform. Although researchers have identified disinformation campaigns on TikTok, the company’s Community Guidelines enforcement reports do not indicate whether or how the platform handles artificially amplified content.
Wiley says Meta’s reports mask how little researchers and the public know about what’s going on inside the company and on its platforms. In a January report, Meta said that due to evolving threats against its teams, it would prioritize strengthening and protecting those teams over publishing its findings, which could affect corporate transparency.
“Is this the tip of the iceberg? Unfortunately, I think so,” says Wiley.
“Over the past five years, we’ve shared information on more than 150 covert influence operations that we removed for violating our coordinated inauthentic behavior (CIB) policy. Transparency is an important tool for countering this behavior, and we will continue to act and report publicly,” said Meta’s Gleicher.
Sources: Meta (1, 2), TikTok, Twitter, Wall Street Journal