OpenAI's ChatGPT gave researchers step-by-step instructions for bombing sports venues — including vulnerabilities at specific arenas, explosive recipes and tips on covering their tracks — according to safety tests conducted this summer. The AI chatbot also explained how to weaponize anthrax and how to manufacture two types of illegal drugs during the experiments, the Guardian reported. The alarming findings come from an unprecedented collaboration between OpenAI, the $500 billion artificial intelligence start-up led by Sam Altman, and rival company Anthropic, founded by researchers who left OpenAI. Each company tested the other's AI models, deliberately probing their willingness to assist with dangerous and illegal requests. The tests do not reflect how the models behave for ordinary users, who are protected by additional safety filters […]
Source: News Beast
