Worrying findings: ChatGPT gave step-by-step instructions for bombs, terrorist attacks and illegal drugs

OpenAI's ChatGPT gave researchers step-by-step instructions on how to bomb sports venues, including weaknesses in specific arenas, explosive recipes and tips on covering their tracks, according to safety tests carried out this summer. During the experiments the AI chatbot also explained how anthrax could be weaponised and how to make two types of illegal drugs, the Guardian reported. The alarming revelations come from an unprecedented collaboration between OpenAI, the $500 billion artificial intelligence start-up led by Sam Altman, and rival company Anthropic, founded by experts who left OpenAI. Each company tested the other's AI models, deliberately probing their willingness to assist with dangerous and illegal actions. The tests do not reflect how the models behave for ordinary users, who are protected by additional safety filters [...]
Source: News Beast
