Big Tech asks the Supreme Court to block lawsuits over algorithms in the US


A wide range of companies, internet users, academics and even human rights experts defended Big Tech’s liability shield on Thursday (19) in a pivotal Supreme Court case over YouTube’s algorithms, with some arguing that stripping AI-based recommendation engines of federal legal protections would radically change the open internet.

The diverse group weighing in at the Court ranged from large technology companies such as Meta, Twitter and Microsoft to some of Big Tech’s most vocal critics, including Yelp and the Electronic Frontier Foundation.


Even Reddit and a collection of volunteer Reddit moderators got involved.

In friend-of-the-court briefs, the companies, organizations and individuals said that the federal law whose scope the Court could narrow in the case, Section 230 of the Communications Decency Act, is vital to the web’s basic functioning.


Section 230 has been used to protect all kinds of websites, not just social media platforms, from lawsuits over third-party content.

The central issue of the case, Gonzalez v. Google, is whether Google can be sued for recommending pro-ISIS content to users via its YouTube algorithm; the company argued that Section 230 prevents such litigation.

But the plaintiffs in the case, family members of a person killed in a 2015 ISIS attack in Paris, have argued that YouTube can be held responsible for its recommendation algorithm under US anti-terrorism law.

In their brief, Reddit and the Reddit moderators argued that a ruling allowing litigation against tech industry algorithms could invite future lawsuits over non-algorithmic forms of recommendation, and potentially legal action against individual internet users.

“The entire Reddit platform is built around users ‘recommending’ content for the benefit of others, taking actions such as upvoting and pinning content,” the document read.

“There should be no doubt about the consequences of the petitioners’ claim in this case: their theory would drastically expand the potential for internet users to be sued over their online interactions.”

Yelp, a longtime antagonist of Google, argued that its business depends on providing relevant, non-fraudulent reviews to its users, and that a ruling creating liability for recommendation algorithms could disrupt Yelp’s core functions, effectively forcing it to stop screening reviews, even those that may be manipulative or untrue.

“If Yelp could not review and recommend reviews without facing liability, the costs of submitting fraudulent reviews would disappear,” Yelp wrote.

“If Yelp had to display all submitted reviews… business owners could submit hundreds of positive reviews for their own businesses with little effort or penalty risk.”

Section 230 ensures that platforms can moderate content to present the most relevant data to users from the massive amounts of information added to the internet daily, Twitter argued.

“It would take an average user approximately 181 million years to download all the data on the web today,” the company wrote.

If the Supreme Court were to adopt a new interpretation of Section 230 that protected platforms’ right to remove content but stripped protection from their right to recommend content, it would open up new questions about what it means to recommend something online, Meta argued.

“If simply displaying third-party content in a user’s feed qualifies as ‘recommending’ it, many services will face potential liability for virtually all third-party content they host,” Meta wrote, “because nearly every decision about ranking, sorting, organizing and displaying third-party content could be construed as ‘recommending’ such content.”

A court ruling that technology platforms can be sued for their recommendation algorithms would put GitHub, the vast repository of online code used by millions of programmers, at risk, Microsoft said.

“The feed uses algorithms to recommend software to users based on projects they have previously worked on or shown an interest in,” Microsoft wrote.

The company added that, for “a platform with 94 million developers, the consequences [of limiting Section 230] are potentially devastating to the world’s digital infrastructure.”

Microsoft’s search engine Bing and its social network LinkedIn also rely on Section 230’s protections for their algorithms, the company said.

According to the Stern Center for Business and Human Rights at New York University, it is virtually impossible to craft a rule that singles out algorithmic recommendation as a distinct category of liability; such a rule could even “result in the loss or obscuration of an enormous amount of valuable information”, particularly discourse pertaining to marginalized or minority groups.

“Websites use ‘targeted recommendations’ because those recommendations make their platforms usable and useful,” the NYU document said.

“Without a liability shield for recommendations, platforms will remove large categories of third-party content, remove all third-party content, or abandon their efforts to make the vast amount of user content on their platforms accessible. In any of these scenarios, valuable free expression will disappear, either because it has been removed or because it is hidden amid a poorly managed information dump.”

Source: CNN Brasil
