Facebook is regulating its products before lawmakers force it to
Earlier this month, Facebook undertook an effort to recast the debate around the regulation of big tech companies on its own terms. Mark Zuckerberg wrote an op-ed; Sheryl Sandberg published a blog post; and their deputies gave interviews to outlets read closely by policymakers. The overall effect was of a company that, after two years on the defensive, is organizing around a core set of principles to advocate for: principles that will allow it to continue operating basically as is.
This week, we saw the second plank of Facebook’s strategy: self-regulation from its product teams. In a meeting with reporters in Menlo Park, myself included, the company announced a series of product updates organized around what the company calls “integrity.” The announcements touched most of Facebook’s biggest products: the News Feed, groups, stories, Messenger, and Instagram. (WhatsApp was a notable exception.) Collectively, the moves seek to strike a better balance between freedom of speech and the harms that come with it. And also, of course, to signal to lawmakers that the company is capable of regulating itself effectively.
Facebook says its strategy for problematic content has three parts: removing it, reducing it, and informing people about the actions that it’s taking. Its most interesting announcements on Wednesday were around reducing: moves that limit the viral promotion of some of the worst stuff on the platform.
“Click gap,” for example, is a new signal that attempts to identify sites that are popular on Facebook but not the rest of the web — a sign that they may be gaming the system somehow. Sites with a click gap will be ranked much lower in the News Feed. As Emily Dreyfuss and Issie Lapowsky describe it in Wired:
Click-Gap could be bad news for fringe sites that optimize their content to go viral on Facebook. Some of the most popular stories on Facebook come not from mainstream sites that also get lots of traffic from search or directly, but rather from small domains specifically designed to appeal to Facebook’s algorithms.
Experts like Jonathan Albright, director of the Digital Forensics Initiative at Columbia University’s Tow Center for Digital Journalism, have mapped out how social networks, including Facebook and YouTube, acted as amplification services for websites that would otherwise receive little attention online, allowing them to spread propaganda during the 2016 election.
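To make the idea concrete, here is a rough sketch of what a click-gap-style signal could look like. It is purely illustrative: the ratio, the threshold, and every function name below are my own assumptions, not anything Facebook has published about how the signal actually works.

```python
# Illustrative sketch of a "click gap"-style signal (hypothetical, not Facebook's code).
# A domain whose clicks come overwhelmingly from Facebook, relative to the
# rest of the web, gets a demotion factor applied to its News Feed ranking.

def click_gap_score(facebook_clicks: int, total_web_clicks: int) -> float:
    """Return the share of a domain's total clicks that come from Facebook.

    Values near 1.0 mean the domain is popular on Facebook but nearly
    invisible elsewhere -- the pattern the signal is meant to catch.
    """
    if total_web_clicks == 0:
        return 0.0
    return facebook_clicks / total_web_clicks


def feed_rank_multiplier(score: float, threshold: float = 0.9) -> float:
    """Sharply demote domains whose click-gap score exceeds the threshold."""
    return 0.2 if score > threshold else 1.0


# Example: a fringe domain that gets 95,000 of its 100,000 clicks from Facebook
score = click_gap_score(facebook_clicks=95_000, total_web_clicks=100_000)
print(score, feed_rank_multiplier(score))  # 0.95 0.2
```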
Another move aimed at reducing harm on Facebook involves cracking down on groups that become hubs for misinformation. As Jake Kastrenakes writes in The Verge:
Groups that “repeatedly share misinformation” will now be distributed to fewer people in the News Feed. That’s an important change, as it was frequently group pages that were used to distribute propaganda and misinformation around the 2016 US elections.
Facebook will also soon give moderators a better view of the bad posts in their groups. “In the coming weeks,” it said, it will introduce a feature called Group Quality, which collects all of the flagged and removed posts in a group in one place for moderators to review. It will also have a section for false news, Facebook said, and the company plans to take into account moderator actions on these posts when determining whether to remove a group.
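As a thought experiment, the policy described above might reduce to something like the following sketch. The strike counts and thresholds are invented for illustration; Facebook hasn’t said how it weights repeated misinformation or moderator actions.

```python
# Hypothetical sketch of the group policies described above (not Facebook's code):
# groups that repeatedly share content rated false see reduced News Feed
# distribution, and moderators who keep approving flagged posts push the
# group toward a removal review. All names and thresholds are invented.

from dataclasses import dataclass


@dataclass
class Group:
    name: str
    misinformation_strikes: int = 0          # posts rated false by fact-checkers
    moderator_approved_false_posts: int = 0  # flagged posts a moderator kept up


def distribution_multiplier(group: Group) -> float:
    """Shrink a group's News Feed reach as misinformation strikes pile up."""
    if group.misinformation_strikes >= 5:
        return 0.1
    if group.misinformation_strikes >= 2:
        return 0.5
    return 1.0


def should_review_for_removal(group: Group) -> bool:
    """Escalate groups whose moderators repeatedly approve posts rated false."""
    return group.moderator_approved_false_posts >= 3
```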
I like these moves: they take away “freedom of reach” from anti-vaccine zealots and other folks looking to cultivate troll armies by hijacking Facebook’s viral machinery. There are a lot of other common-sense changes in yesterday’s fine print: allowing moderators to turn posting permissions on and off for individual group members, for example; and bringing Facebook verified badges to Messenger, which should cut down on the number of fake Mark Zuckerbergs scamming poor rubes out of their money.
Still, I can’t shake the feeling that all these moves are a bit ... incremental. They’re fine, so far as they go. But how will we know that they’re working? What does “working” even mean in this context?
As Facebook has worked to right its ship since 2016, it has frequently fallen back on the line that while it’s “making progress,” it “still has a long way to go.” You can accept these statements as being true and still wonder what they mean in practice. When it comes to reducing the growth of anti-vaccine groups, for example, or groups that harass the survivors of the Sandy Hook shooting, how much more “progress” is needed? How far along are we? What is the goal line we’re expecting Facebook and the other tech platforms to move past?
Elsewhere, Mark Bergen and Lucas Shaw report that YouTube is wrangling with a similar set of questions. Would the company’s own problems with promoting harmful videos diminish if it focused on a different set of metrics? YouTube is actively exploring the idea:
The Google division introduced two new internal metrics in the past two years for gauging how well videos are performing, according to people familiar with the company’s plans. One tracks the total time people spend on YouTube, including comments they post and read (not just the clips they watch). The other is a measurement called “quality watch time,” a squishier statistic with a noble goal: To spot content that achieves something more constructive than just keeping users glued to their phones.
The changes are supposed to reward videos that are more palatable to advertisers and the broader public, and help YouTube ward off criticism that its service is addictive and socially corrosive.
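Bloomberg doesn’t say how “quality watch time” is computed, but the basic idea of weighting raw watch time by a per-video quality score can be illustrated with a toy example. Everything below, from the class names to the scores, is an assumption on my part, not YouTube’s actual metric.

```python
# Toy illustration of a "quality watch time"-style metric (hypothetical, not YouTube's):
# raw watch time is discounted by a per-video quality score, so engagement with
# borderline content counts for less than time spent on constructive videos.

from dataclasses import dataclass


@dataclass
class ViewSession:
    video_id: str
    watch_seconds: float
    quality_score: float  # 0.0 (borderline) to 1.0 (constructive), assigned elsewhere


def quality_watch_time(sessions: list[ViewSession]) -> float:
    """Sum watch time across sessions, discounted by each video's quality score."""
    return sum(s.watch_seconds * s.quality_score for s in sessions)


sessions = [
    ViewSession("howto-1", watch_seconds=600, quality_score=0.9),
    ViewSession("clickbait-7", watch_seconds=600, quality_score=0.2),
]
print(quality_watch_time(sessions))  # 660.0, versus 1,200 seconds of raw watch time
```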
But two years on, it’s unclear that new metrics have been of much help in that regard. When platforms reach planetary scale, individual changes like these have a limited effect. And as long as Facebook and YouTube struggle to articulate the destination they’re aiming for, there’s continuing reason to doubt that they’ll get there.
DEMOCRACY
Aoife White reports that the Netherlands is considering antitrust action against Apple based on a recent complaint from Spotify:
The Netherlands’ Authority for Consumers & Markets will examine whether Apple abuses a dominant market position “by giving preferential treatment to its own apps,” it said in a statement on Thursday. The probe will initially focus on Apple’s App Store, where regulators have received the most detailed complaints, and Dutch apps for news media, but is also calling on app providers to flag if they have any problems with Google’s Play Store.
The antitrust probe adds to a growing backlash against the tolls Apple and Google charge to developers using their app stores. The EU’s powerful antitrust arm is weighing Spotify’s complaint targeting Apple. This builds on concerns that technology platforms control the online ecosystem and may rig the game to their own advantage. Amazon.com Inc.’s potential use of data on rival sellers is also being probed by the EU to check if it copies products.
A popular internet archive is reporting that the European Union has been overzealous in its recent anti-terrorism enforcement. It’s this sort of thing that causes free-speech advocates to worry when regulations against “harmful content” are enacted:
In a blog post yesterday, the organization explained that it received more than 550 takedown notices from the European Union in the past week “falsely identifying hundreds of URLs on archive.org as ‘terrorist propaganda’.”
Here’s a story that shows how platforms are still struggling to prevent ban evasion:
A day after Facebook banned six Canadian individuals and groups for spreading hate, two made their way back onto the platform with new pages, while 11 pages with similar names and content remained online despite the ban.
Faith Goldy, the Canadian Nationalist Front, Wolves of Odin, and Canadian Infidels were all banned Monday, but more than 24 hours later BuzzFeed News and the Toronto Star found 12 pages, groups, and Instagram accounts using similar names and posting content similar to what had been on the banned accounts. After the outlets asked Facebook for comment, the pages were all taken down.
Interesting from Erica Orden and Shimon Prokupecz:
Amazon CEO Jeff Bezos is scheduled to meet with federal prosecutors in New York as soon as this week, according to people familiar with the matter. The meeting signals that the US attorney’s office is escalating its inquiry connected to Bezos’s suggestion that the kingdom of Saudi Arabia was behind a National Enquirer story that exposed his extramarital affair and his claim that the tabloid attempted to extort him.
Speaking of Bezos, he’s facing his largest wave of internal pressure yet on climate change, Karen Weise reports:
This week, more than 4,200 Amazon employees called on the company to rethink how it addresses and contributes to a warming planet. The action is the largest employee-driven movement on climate change to take place in the influential tech industry.
The workers say the company needs to make firm commitments to reduce its carbon footprint across its vast operations, not make piecemeal or vague announcements. And they say that Amazon should stop offering custom cloud-computing services that help the oil and gas industry find and extract more fossil fuels.
Today in Twitter whoopsies:
“This is a bug in our search typeahead system limited to desktop that we are working to fix,” a spokesperson said. “The issue is that for some search queries, the word ‘People’ is linked to ‘@NYTimes.’” So while we still don’t really know why the search system is working this way, we do know that it’s supposed to be working differently.
Source: https://www.theverge.com