Mariarosaria Taddeo: “The ethical side of artificial intelligence: what we need to know before it’s too late”

Artificial intelligence is a pervasive reality that shapes our daily lives, from recommendations on social media to systems that influence important decisions. But are we really ready to understand its deepest implications? We met Professor Mariarosaria Taddeo, a prominent figure in the field of AI ethics and professor at the University of Oxford, who told us how this technology is transforming not only what we do, but also the way we conceive of the world.

Professor Taddeo, who presents her new book “War Code” on 30 August at the Festival della Mente in Sarzana, has very clear ideas on the subject. She maintains, with calm resolve, that the question is a delicate one but that, hopefully, it is human beings who will continue to make the difference. The outcome, however, is far from guaranteed.

The interview with Mariarosaria Taddeo

Let’s start with the foundations: what does philosophy have to do with artificial intelligence?
«AI, like all digital technologies, transforms not only how we do things, but also how we understand them. It is an agent, not just a tool, and grasping this conceptual shift is essential if we want to control it and use it well».

So what is the added value of the philosopher in this scenario?
«The philosopher is crucial because he or she identifies and explains these conceptual changes and clarifies their implications, both the opportunities and the risks. Philosophy allows us to anticipate problems, as in the case of cyber warfare, which redefines coercion without the use of force and raises unprecedented philosophical questions about the rules to be adopted».

Professor Mariarosaria Taddeo

A provocative question: will AI make us more stupid or smarter? Will it dull our critical thinking or awaken it?
«AI in itself makes us neither more stupid nor more intelligent; it is how we use it that makes the difference. Passive use, or use driven by commercial pressures, as on social media, can erode critical thinking. But if we use it well and regulate it, it can make us smarter, as in scientific research, where it has already contributed to important medical discoveries. Control lies not in the technology but in human beings. The biggest risk is using it badly and losing immense opportunities».

In your work you deal with the theme of “unintended bias”. How can we manage this problem so that our lives are not unconsciously shaped by algorithmic prejudice?
«Bias is inevitable, but we can reduce its impact. We need technical checks on datasets and continuous monitoring of how artificial intelligence is used, because the technology evolves through its interactions with the environment. It is up to human beings to identify biases as soon as they appear and intervene accordingly».

Many AI systems are like “black boxes”, difficult to understand. Can the “black box” problem be solved?
«The “black box” problem cannot be completely solved, not even with interpretable AI. The solution is not a perfect “white box”, but a form of ongoing review of AI as we use it. We must audit not only the technology, but also the decision-making and operational processes in which it is embedded».

What about privacy? Does AI mark its definitive end, or can we users still do something?
«Privacy is increasingly under pressure, especially because of the value of data. I fear an even greater risk of mass surveillance for national security reasons (think of OSINT, open source intelligence), not only for commercial ones. Today we have the technology to collect and process huge quantities of data, something that was not possible in the past».

So, in summary, can we still protect our privacy?
«Yes. It is essential to reject the narrative that we are inescapably in the hands of technology. Careful and conscious use of technology is the first step in respecting and protecting our rights».

You have spoken of “techno-solutionism”. Is there a risk that artificial intelligence is seen as a panacea for every problem?
«Techno-solutionism is problematic because it assumes that using technology is enough to solve every problem. In reality, AI is helpful only if it is adopted within a strategy, a vision. Without a cost-benefit analysis, staff training and internal governance, implementation fails. AI can be a help or a spanner in the works depending on the ecosystem in which it is embedded».

Can we trust these AIs, knowing that they risk becoming a great oracle in the hands of a few companies, not always philanthropic ones?
«The very concept of “trust” is misleading in this context. Trust implies delegation without control, but with AI we must do the opposite: use it under our control. We should neither fear it nor treat it as a magic wand; we must retain control and choice over how it is used».

You and a few others have studied these themes in depth, but most people know little about them. How crucial is a change of paradigm in our awareness of these tools, perhaps starting from primary school?
«Training is essential, like having a licence to drive a technology we use every day. It is crucial to learn how to use tools such as large language models, to ask the right questions and to check sources. However, training must not become a way of offloading all responsibility onto the user. It is a triangle: legislators, developers and users must each do their part».

One last piece of advice: in the AI era, to achieve one’s goals, is it better to use artificial intelligence as a competitive advantage or to capitalise on our “analogue” skills?
«We need to study artificial intelligence, but analogue skills allow us to develop intuitions and talents that AI will never have. Critical thinking, the ability to synthesise, the training of intuition: these are what will make the difference. Everyone will use the same tools, but only those with a solid foundation and deep thought will find the intuition that leads to a great discovery».

In short, artificial intelligence is not an unavoidable fate, but a very powerful tool in our hands. The key to navigating this new era is awareness, education and a sharpened critical sense, remembering that, in the end, it is our humanity that makes the real difference. And that is quite reassuring, though not entirely.

Source: Vanity Fair
