
Realistic images created by artificial intelligence worry experts; here's why

A million bears walking the streets of Hong Kong. A strawberry frog. A cat made of spaghetti and meatballs.

These are just some of the text descriptions people have provided for cutting-edge AI systems in recent weeks.

These systems—primarily OpenAI’s DALL-E 2 and Google Research’s Imagen—can produce detailed, realistic images. They can be silly, weird or even reminiscent of classic art, and they are being widely shared on social media, including by influential figures in the tech community.

So it’s not hard to imagine this kind of on-demand image generation eventually serving as a powerful tool for making all sorts of creative content, whether it’s art or ads.

DALL-E 2 and a similar system, Midjourney, for example, have already been used to help create magazine covers. OpenAI and Google have pointed out some ways in which the technology could be commercialized, such as editing images or creating stock images.

Neither DALL-E 2 nor Imagen is currently available to the public. However, they share an issue with many similar systems that already are: they can also produce results that reflect the cultural and gender biases of the data they were trained on.

Bias in these AI systems presents a serious problem, experts told CNN Business.

In their view, the technology can perpetuate harmful biases and stereotypes. They are concerned that the open-ended nature of these systems, which makes them adept at generating all kinds of images from words, and their ability to automate image creation mean they could automate that bias on a massive scale.

They also have the potential to be used to spread misinformation.

“Until that damage can be prevented, we’re not really talking about systems that can be used openly, in the real world,” said Arthur Holland Michel, a senior fellow at the Carnegie Council for Ethics in International Affairs who researches AI and surveillance technologies.

Documenting bias

Artificial intelligence has become commonplace in everyday life in recent years, but it is only recently that the public has become aware of just how many biases can infiltrate technology.

Facial recognition systems, in particular, have been increasingly scrutinized for concerns about their accuracy and racial bias.

OpenAI and Google Research have acknowledged many of the issues and risks related to their AI systems in documentation and research, with both saying the systems are prone to gender and racial bias and to depicting Western cultural stereotypes.

OpenAI, whose mission is to build so-called artificial general intelligence that benefits all people, has included in an online document titled “risks and limitations” images that illustrate how these problems can arise: a prompt for “nurse”, for example, produced images that all appeared to show women wearing stethoscopes, while one for “CEO” produced images that all appeared to show men, nearly all of them white.

Lama Ahmad, OpenAI’s policy research program manager, said that researchers are still learning to measure bias in AI and that OpenAI can use what it learns to fine-tune its AI over time.

Ahmad led OpenAI’s efforts to work with a group of external experts earlier this year to better understand the issues with DALL-E 2 and provide feedback so it can be improved.

Google declined an interview request from CNN Business.

In their research paper introducing Imagen, Google Brain team members wrote that Imagen appears to encode “several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes.”

The contrast between the images these systems create and the thorny ethical questions is stark to Julie Carpenter, a research fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University, San Luis Obispo.

“One of the things we have to do is understand that AI is really cool and can do some things really well. And we must work with it as a partner,” Carpenter said. “But it’s an imperfect thing. It has its limitations. We have to adjust our expectations. It’s not what we see in the movies.”

Holland Michel is also concerned that no amount of safeguards can prevent such systems from being used maliciously, noting that deepfakes, a cutting-edge application of AI for creating videos that purport to show someone doing or saying something they didn’t actually do or say, were initially leveraged to create fake pornography.

“In some ways, a system that is orders of magnitude more powerful than the first systems could be orders of magnitude more dangerous,” he said.

Veiled bias

Imagen and DALL-E 2 were both trained on two types of data: images and the text captions that describe them.

Google Research and OpenAI filtered harmful images, such as pornography, from their datasets before training their AI models, but given the large size of their datasets, these efforts are unlikely to capture all of this content, nor make AI systems incapable of producing harmful results.

In their paper, Google’s researchers pointed out that, despite filtering out some data, they also used a huge dataset known to include pornography, racist slurs and “harmful social stereotypes.”

Truly filtering these datasets of bad content is impossible, Carpenter said, as people are involved in decisions about how to label and remove content, and different people have different cultural beliefs.

“AI doesn’t understand this,” she said.

Some researchers are thinking about how it might be possible to reduce bias in these types of AI systems, but still use them to create stunning images.

One possibility is to use less data instead of more.

Alex Dimakis, a professor at the University of Texas at Austin, said one method involves starting with a small amount of data (a photo of a cat, for example) and cropping it, rotating it, creating a mirror image of it and so on, to effectively turn one image into many different images.

“It solves some of the problems, but it doesn’t solve other problems,” Dimakis said.

The trick alone won’t make a dataset more diverse, but the smaller scale can allow people working with it to be more intentional about the images they’re including.
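Dimakis is describing a standard trick known as data augmentation. As a rough, hypothetical illustration only (the file name, library choice and specific transformations below are assumptions, not details of how DALL-E 2 or Imagen were built), a few lines of Python using the Pillow imaging library can turn one photo into several variants:

from PIL import Image, ImageOps  # pip install Pillow

def augment(path, out_prefix="augmented"):
    # Load the source photo, e.g. a single picture of a cat.
    img = Image.open(path)
    w, h = img.size

    # Build a handful of simple variants: a mirror image, two rotations
    # and a center crop. Each one counts as an extra training example.
    variants = {
        "mirror": ImageOps.mirror(img),
        "rotate_90": img.rotate(90, expand=True),
        "rotate_270": img.rotate(270, expand=True),
        "center_crop": img.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4)),
    }

    for name, variant in variants.items():
        variant.save(f"{out_prefix}_{name}.png")

# One cat photo becomes several slightly different training images:
# augment("cat.jpg")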

Focus on “cute” photos

For now, OpenAI and Google Research are trying to keep the focus on cute photos and away from images that might be disturbing or show humans.

There are no realistic-looking images of people among the vibrant sample images on Imagen’s or DALL-E 2’s online project pages, and OpenAI says on its page that it used “advanced techniques to avoid photorealistic generations of real individuals’ faces, including those of public figures.”

Such protections could prevent users from getting image results for, say, a prompt attempting to show a specific politician engaged in some sort of illicit activity.

OpenAI has provided access to DALL-E 2 to thousands of people who have applied to a waitlist since April.

Entrants must agree to an extensive content policy, which tells users not to attempt to make, upload or share photos “that could cause harm”.

DALL-E 2 also uses filters to prevent an image from being generated if a prompt or uploaded image violates OpenAI’s policies, and users can flag problematic results.

In late June, OpenAI began allowing users to post photorealistic human faces created with DALL-E 2 on social media, but only after adding some security features, such as preventing users from generating images containing public figures.

“Researchers, specifically, I think it’s very important to give them access,” Ahmad said. That’s in part because OpenAI wants their help studying areas such as misinformation and bias.

Meanwhile, Google Research is not allowing researchers outside the company to access Imagen.

The company has received requests on social media for prompts people would like to see Imagen interpret, but as Mohammad Norouzi, a co-author of the Imagen paper, tweeted in May, it won’t show images “including people, graphic content and sensitive material.”

Still, as Google Research noted in its paper on Imagen, “even when we focus generations away from people, our preliminary analysis indicates that Imagen encodes a range of social and cultural biases when generating images of activities, events and objects.”

A hint of this bias is evident in one of the images Google posted on its Imagen web page, created from a prompt that reads, “A wall in a royal castle. There are two paintings on the wall. The one on the left, a detailed oil painting of the royal raccoon king. The one on the right is a detailed oil painting of the royal raccoon queen.”

The image is just that, with paintings of two crowned raccoons – one wearing what appears to be a yellow dress, the other a blue and gold jacket – in ornate gold frames.

But, as Holland Michel noted, the raccoons are wearing Western-style royal clothing, even though the prompt didn’t specify anything about how they should appear beyond looking like “royalty.”

Even these “subtle” manifestations of bias are dangerous, Holland Michel said.

“Because they’re not blatant, they’re really hard to catch,” he said.

Source: CNN Brasil
