OpenAI released its latest threat report, “Disrupting Malicious Uses of AI,” on Tuesday, revealing how hackers have been using AI for cyberattacks. Malicious actors have been using ChatGPT to assist their operations, applying a range of strategies.

According to OpenAI’s report, the threat analyses, which the startup began issuing in February, have helped it understand malicious actors’ campaigns and how their use of AI systems has evolved over the past few months. “Repeatedly, and across different types of operations, the threat actors we banned were building AI into their existing workflows, rather than building new workflows around AI,” the document states. “We found no evidence of new tactics or that our models provided threat actors with novel offensive capabilities.”

OpenAI highlighted several cases to demonstrate how threat actors use AI models. In one of the case studies, Russian-speaking cybercriminals attempted to develop malware, including features to evade d...