Elon Musk's Grok chatbot performs worst in security testing

Grok, the chatbot from billionaire Elon Musk's xAI, showed the worst security results among comparable products, according to a report from Adversa AI. After certain manipulations, the program provided recommendations for stealing a car and making explosives.

The researchers used several attack vectors, including social engineering. The study covered seven AI-based chatbots: ChatGPT, LLaMA, Claude, Le Chat, Gemini, Grok, and Bing.

According to the results of the experiment, Grok proved vulnerable to three of the four attacks. Using logical and linguistic manipulations, the testers got the program to provide detailed instructions on how to gain a child's trust, assemble a bomb, and steal a car.

Le Chat, developed by Mistral, showed similar results.

Meta's LLaMA, in turn, proved the most resilient: none of the jailbreak methods used by Adversa AI's experts worked against it.

Second place was shared by Claude and Bing, from Anthropic and Microsoft respectively, which were vulnerable only to mixed types of attacks.

Elon Musk announced the launch of xAI in July 2023. The billionaire positioned the company from the outset as a direct competitor to OpenAI and its ChatGPT product.

The Grok chatbot was released in November 2023.
