
Algorithms are everywhere. Here's why you should care

Every time you pick up your smartphone, you're summoning algorithms. They're used for everything from unlocking your phone with your face, to deciding which videos you see on TikTok, to updating your Google Maps route to avoid a highway accident on your way to work.

An algorithm is a set of rules or steps followed, usually by a computer, to produce a result. And algorithms aren't just on our phones: they're used in all sorts of processes, online and offline, from helping to value your home to teaching your robot vacuum to steer clear of your dog's poop.

Over the years, they have been entrusted with increasingly life-altering decisions, such as helping to decide who gets arrested, who should be released from jail before their court date, and who is approved for a home loan.

In recent weeks, scrutiny of algorithms has intensified, including over how tech companies should change the way they use them. This stems both from concerns raised in hearings with Facebook whistleblower Frances Haugen and from bipartisan legislation introduced in the House (a companion bill had previously been reintroduced in the Senate).

The legislation would force big tech companies to allow users to access a version of their platforms where what they see is not shaped by algorithms. These developments highlight the growing awareness of the central role that algorithms play in our society.

“At this point, they’re responsible for making decisions about virtually every aspect of our lives,” said Chris Gilliard, visiting research fellow at the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School.

However, the ways in which algorithms work and the conclusions they reach can be opaque, particularly as the use of artificial intelligence techniques makes them increasingly complex. Their results are not always understood or accurate, and the consequences can be disastrous. And the impact of potential new legislation to limit the influence of algorithms on our lives remains uncertain.

Algorithms, explained

Basically, an algorithm is a series of instructions. As Sasha Luccioni, a research scientist on the AI ethics team at AI model builder Hugging Face, pointed out, it can be coded, with fixed instructions for a computer to follow, such as putting a list of names in alphabetical order. Simple algorithms have been used for computer decision making for decades.
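The alphabetizing example Luccioni mentions is easy to spell out. A minimal sketch in Python (the names here are invented for illustration):

```python
# A fixed, explicit algorithm: put a list of names in alphabetical order.
# Selection sort makes every step the computer follows visible.
def alphabetize(names):
    names = list(names)  # copy, so the caller's list is untouched
    for i in range(len(names)):
        # Find the alphabetically smallest remaining name...
        smallest = min(range(i, len(names)), key=lambda j: names[j].lower())
        # ...and swap it into position i.
        names[i], names[smallest] = names[smallest], names[i]
    return names

print(alphabetize(["Chris", "Sasha", "Jevan", "Ana"]))
# → ['Ana', 'Chris', 'Jevan', 'Sasha']
```

In practice a programmer would just call Python's built-in `sorted`, which is itself a packaged-up algorithm, but writing the steps out is the point here.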

Today, algorithms constantly help to simplify processes that would otherwise be complicated, whether we know it or not. When you ask a clothing site to filter pajamas so you see the most popular or least expensive options, you're essentially using an algorithm to say, "Hey, follow the steps to show me the cheapest pajamas."
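That "cheapest pajamas" request boils down to a filter-then-sort algorithm. A toy sketch, with an invented catalog:

```python
# Filter a catalog down to one category, then sort by price — the
# algorithm behind a shop's "lowest price first" option.
catalog = [
    {"name": "Flannel pajamas", "category": "pajamas", "price": 39.99},
    {"name": "Wool socks",      "category": "socks",   "price": 9.99},
    {"name": "Cotton pajamas",  "category": "pajamas", "price": 24.50},
]

def cheapest_first(items, category):
    matching = [item for item in items if item["category"] == category]
    return sorted(matching, key=lambda item: item["price"])

for item in cheapest_first(catalog, "pajamas"):
    print(item["name"], item["price"])
# Cotton pajamas 24.5
# Flannel pajamas 39.99
```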

All sorts of things can be algorithms, and they’re not confined to computers: a recipe, for example, is a kind of algorithm, as is the weekday morning routine you sleepily shuffle through before leaving the house.
“We run our own personal algorithms every day,” said Jevan Hutson, a data security and privacy attorney at Hintze Law in Seattle, who has studied AI and surveillance.

But while we may interrogate our own decisions, those made by machines have become increasingly puzzling. This is because of the emergence of a form of AI known as deep learning, which is modeled on the way neurons function in the brain and gained prominence about a decade ago.

A deep learning algorithm can task a computer with watching thousands of videos of cats, for example, to learn how to identify what a cat looks like. (It was a big deal when Google figured out how to do this reliably in 2012.)

The result of this process of ingesting data and improving over time is, in essence, a computer-generated procedure that the computer then follows to decide whether there's a cat in any new photo it sees. This is often known as a model (although it is sometimes also called an algorithm itself).
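Deep learning itself is far more complex, but the "model as learned procedure" idea can be sketched with the simplest possible learner, a nearest-neighbor classifier. Everything below is invented toy data, not real image features:

```python
# A miniature stand-in for the cat-recognition idea: instead of writing
# rules by hand, we keep labeled examples and let the resulting "model"
# decide about new inputs. The two numbers per example are invented toy
# features, not real pixels.
training_data = [
    ((0.9, 0.8), "cat"),      # (ear pointiness, whisker score) — made up
    ((0.8, 0.9), "cat"),
    ((0.2, 0.1), "not cat"),
    ((0.1, 0.2), "not cat"),
]

def predict(example):
    """Classify a new example by its nearest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda pair: distance(pair[0], example))
    return label

print(predict((0.85, 0.75)))  # → cat
print(predict((0.15, 0.25)))  # → not cat
```

A real deep learning model replaces the stored examples with millions of learned numeric weights, but the shape is the same: data goes in, a learned procedure produces a label.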

These models can be incredibly complex. Facebook, Instagram and Twitter use them to help customize users' feeds based on each person's past interests and activity. Models can also be built on heaps of data collected over many years that no human being could sort through.
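The feed-customization idea can be cartooned as "score each post against the user's past interests, highest first." Real platform models weigh thousands of signals; the topics and weights below are invented:

```python
# Toy feed ranking: score posts by how well they match a user's past
# interests, then show the highest-scoring posts first.
user_interests = {"cooking": 0.9, "soccer": 0.4, "politics": 0.1}

posts = [
    {"id": 1, "topics": ["soccer"]},
    {"id": 2, "topics": ["cooking", "politics"]},
    {"id": 3, "topics": ["politics"]},
]

def rank_feed(posts, interests):
    def score(post):
        # Unknown topics contribute nothing to the score.
        return sum(interests.get(topic, 0.0) for topic in post["topics"])
    return sorted(posts, key=score, reverse=True)

print([post["id"] for post in rank_feed(posts, user_interests)])
# → [2, 1, 3]
```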

Zillow (a major US real estate company), for example, has used its trademarked machine-learning tool, the "Zestimate," to estimate home values since 2006, taking into account property and tax records, details submitted by the owner, such as the addition of a bathroom, and photos of the house.
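A cartoon version of such a valuation model is just a weighted sum of property features. The features and weights below are entirely invented for illustration; the real Zestimate is proprietary and its weights are learned from data:

```python
# A toy, hand-weighted home-value estimate — a cartoon of what a learned
# valuation model does at scale. All weights here are invented.
def estimate_value(square_feet, bedrooms, bathrooms, last_tax_assessment):
    return (
        0.5 * last_tax_assessment    # anchor on public tax records
        + 150 * square_feet          # invented per-square-foot weight
        + 10_000 * bedrooms          # invented per-bedroom bump
        + 7_500 * bathrooms          # e.g. that owner-submitted bathroom
    )

print(estimate_value(1_800, 3, 2, 250_000))  # → 440000.0
```

A machine-learning model differs in that the weights are fitted to millions of past sales rather than chosen by hand, which is also why they can drift badly when the market shifts.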

The risks of relying on algorithms

As the case of Zillow shows, however, shifting decision making to algorithmic systems can also go wrong in painful ways, and it’s not always clear why.

Zillow recently decided to shut down its home-flipping business, Zillow Offers, showing how hard it is to use AI to value real estate. In February, the company said its "Zestimate" would represent an initial cash offer from the company to buy a property through that business; in November, it took a $304 million write-down on its inventory, which it blamed on having recently bought homes at prices higher than it thinks it can sell them for.

Elsewhere online, Meta, the company formerly known as Facebook, has come under fire for tweaking its algorithms in a way that helped fuel more negative content on the world's biggest social network.

Algorithms can also have life-altering consequences, particularly in the hands of the police. We know, for example, that at least several Black men have been wrongfully arrested due to the use of facial-recognition systems.

Often there is little more than a tech company's basic explanation of how its algorithmic systems work and what they are used for. In addition, experts in technology and technology law told CNN Business that even those who build these systems do not always know why they reach their conclusions, which is why they are often called "black boxes."

“Computer scientists, data scientists, at this current stage, they seem like magicians to a lot of people because we don’t understand what they do,” said Gilliard. “And we think they always do it well, but that’s not always the case.”

Popping Filter Bubbles

The United States has no federal rules about how companies can and cannot use algorithms in general, or those that use AI in particular. Some states and cities have passed their own rules, which tend to address facial recognition software or biometrics more generally.

But Congress is currently considering legislation dubbed the Filter Bubble Transparency Act, which, if passed, would force big Internet companies like Google, Meta, TikTok and others to “give users the option to engage with a platform without being manipulated by algorithms driven by user-specific data”.

In a recent article for CNN Opinion, Republican Senator John Thune described the legislation he co-sponsored as “a bill that would essentially create a light switch for the secret algorithms of big tech – artificial intelligence (AI) designed to shape and manipulate user experiences – and give consumers the option to turn it on or off.”

Facebook, for example, already offers this, although users are effectively discouraged from flipping the so-called switch permanently. A well-hidden "Most Recent" button will show posts in reverse chronological order, but the Facebook news feed reverts to its usual, heavily curated state as soon as you leave the site or close the app. Meta stopped offering this option on Instagram, which it also owns, in 2016.
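The "light switch" the legislation describes is conceptually simple: the same posts, shown either algorithmically ranked or in plain reverse-chronological order. A sketch with invented post data and scores:

```python
# The Filter Bubble Transparency Act's "light switch," as a sketch:
# one feed, two orderings. Timestamps and engagement scores are invented.
posts = [
    {"id": 1, "timestamp": 100, "engagement_score": 0.2},
    {"id": 2, "timestamp": 200, "engagement_score": 0.9},
    {"id": 3, "timestamp": 300, "engagement_score": 0.5},
]

def build_feed(posts, algorithmic=True):
    if algorithmic:
        # Switch on: whatever the model predicts will keep you engaged.
        return sorted(posts, key=lambda p: p["engagement_score"], reverse=True)
    # Switch off: newest first, no personalization.
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

print([p["id"] for p in build_feed(posts, algorithmic=True)])   # → [2, 3, 1]
print([p["id"] for p in build_feed(posts, algorithmic=False)])  # → [3, 2, 1]
```

The engineering dispute is not about this toggle, which is trivial to build, but about business models that depend on the ranked ordering being the default.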

Hutson noted that while the Filter Bubble Transparency Act clearly focuses on large social platforms, it will inevitably affect others, such as Spotify and Netflix, that rely heavily on algorithmically-based curation. If it passes, he said, it will “fundamentally change” the business model of companies that are built entirely around algorithmic curation — a feature he suspects many users will appreciate in certain contexts.

“This will impact organizations far beyond those that are in the spotlight,” he said.

AI experts argue that more transparency from the companies that create and use algorithms is crucial. Luccioni believes algorithmic transparency laws are necessary before specific uses and applications of AI can be regulated.

“I see things definitely changing, but there’s a really frustrating gap between what AI is capable of and what’s legislated for,” Luccioni said.

Reference: CNN Brasil
