ChatGPT Created Polymorphic Malware: Nearly Undetectable

Some time ago, cybersecurity experts reported that they were able to use OpenAI's ChatGPT to create working scripts that could be embedded in emails, for example, to gain access to a user's personal files. According to the experts, doing so does not even require any special knowledge or skills. Now CyberArk, a cybersecurity research company, has used the same AI-based tool to build a complete polymorphic malware chain, and it succeeded.

It is worth noting that the polymorphism mechanism is effective precisely because it makes the malicious software almost impossible to detect: the program code is generated on the fly, that is, during execution. Moreover, the generated code is not fixed and changes with each iteration, so it is extremely difficult to find a universal signature for detecting such software. Writing malware of this kind is normally quite difficult, yet ChatGPT managed it: the researchers said that with relatively little effort they set the necessary variables, based on which the chatbot completed the task.
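
To illustrate why signature-based detection struggles with code that changes on every iteration, here is a minimal, deliberately harmless Python sketch (not taken from the CyberArk report): two generated snippets behave identically, yet their byte-level hashes, the kind of fingerprint a simple signature scanner relies on, do not match.

```python
import hashlib
import random
import string

def random_name(length=8):
    """Generate a random identifier, simulating per-iteration code mutation."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

def generate_variant():
    """Produce a functionally identical but textually different snippet.

    The payload here is deliberately benign (it just returns a greeting);
    the point is only that the source text changes on every generation.
    """
    var = random_name()
    func = random_name()
    return f"def {func}():\n    {var} = 'hello'\n    return {var}\n"

# Two "iterations" of the same behavior produce different byte-level signatures,
# so a static hash- or pattern-based signature written for one variant
# will not match the next.
a, b = generate_variant(), generate_variant()
sig_a = hashlib.sha256(a.encode()).hexdigest()
sig_b = hashlib.sha256(b.encode()).hexdigest()
print(a)
print(b)
print("Signatures match:", sig_a == sig_b)  # almost always False
```

The sketch only mutates identifier names; real polymorphic code goes much further, but the detection problem it creates is the same: the behavior stays constant while the bytes a scanner would match against keep changing.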

CyberArk’s report indicates that the lion’s share of the effort went into bypassing ChatGPT’s content filters. The chatbot’s developers do not allow the AI to handle certain kinds of requests, but the researchers managed to get around this limitation: they simply insisted, repeating the same request with minor clarifications.

“Interestingly, by asking ChatGPT to do the same thing with a few restrictions and asking it to comply, we got functional code,” the CyberArk researchers said.

The researchers also noted that the API version of ChatGPT does not apply the content filters that are present in the web version of the chatbot. Accordingly, using the API greatly simplifies the task, because the system does not have to be worn down with complex, multi-level queries full of refinements and restrictions. Once the prohibitions and filters were neutralized, the researchers were able to use ChatGPT to generate different versions of the malicious program’s source code, getting a unique variant each time. Because of this “mutation” of the code, detecting such software will be extremely problematic, if not impossible.
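
As an illustration of what “using the API” means in practice, here is a minimal sketch that sends a single, harmless prompt through the legacy openai Python package instead of the web interface. It is not taken from the CyberArk report; the model name, package version, and prompt are assumptions for demonstration only. With a non-zero temperature, repeated calls tend to return differently worded code, which lines up with the researchers’ observation that each query produced a unique version.

```python
# A minimal sketch of querying the model programmatically rather than via the web UI,
# using the legacy `openai` Python package (0.x series, e.g. `pip install openai==0.27.0`).
# The prompt is deliberately harmless; this only demonstrates programmatic access.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

response = openai.Completion.create(
    model="text-davinci-003",          # a completion model available at the time of the report
    prompt="Write a Python function that reverses a string.",
    max_tokens=200,
    temperature=0.7,                   # some randomness, so repeated calls vary
)

print(response["choices"][0]["text"])
```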

Source: Trash Box
