What is Offensive AI?

AI has given us the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, the same tools can be turned to malicious ends: cyber adversaries can use AI to enhance their attacks and expand their campaigns.

There are two forms of offensive AI: attacks using AI and attacks against AI. For example, an adversary can (1) use AI to improve the efficiency of an attack (e.g., information gathering, attack automation, and vulnerability discovery) or (2) use knowledge of AI to exploit the defender’s AI products and solutions (e.g., to evade a defense or to plant a trojan in a product). The latter form of offensive AI is commonly referred to as adversarial machine learning.
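To make the second form concrete, here is a minimal sketch of one well-known adversarial machine learning evasion technique, the Fast Gradient Sign Method (FGSM): the attacker nudges an input in the direction that increases the model's loss, flipping its decision. The tiny logistic-regression "defender" model and its weights are hypothetical, chosen purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "defender" model: a tiny logistic-regression classifier
# (weights chosen for illustration only).
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    """Return the model's class decision (0 or 1) for input x."""
    return int(sigmoid(w @ x + b) >= 0.5)

def fgsm(x, y_true, epsilon):
    """Fast Gradient Sign Method: step the input in the sign of the
    loss gradient to push the model toward a wrong decision."""
    # For logistic loss, the gradient w.r.t. the input x is (sigmoid(z) - y) * w.
    z = w @ x + b
    grad_x = (sigmoid(z) - y_true) * w
    return x + epsilon * np.sign(grad_x)

x = np.array([0.1, 0.05])          # a benign input, classified as 1
x_adv = fgsm(x, y_true=1, epsilon=0.2)
print(predict(x), predict(x_adv))  # -> 1 0 (the small perturbation flips the decision)
```

The same idea scales to deep networks, where the gradient is obtained by backpropagation; the point is that a model's own gradients hand the adversary a map of its blind spots.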

Over the last few years, adversaries have begun to use offensive AI. For example, attackers have poisoned models [1], evaded defenses [2], scammed companies [3] and individuals [4], ruined reputations [5][6], impersonated political leaders [7], and spread misinformation [8]. In the coming years, we can expect to see more attacks emerge as our adversaries become aware of how AI can improve the coverage, speed, and success of their attack campaigns, and as AI becomes even more accessible to novice users.

Many cybersecurity organizations feel that offensive AI is an imminent threat [9], and companies are investing to counter offensive AI [10]. However, although the threat has been acknowledged, we are grossly unprepared. According to the US National Security Commission on AI in 2021:

"The U.S. government is not prepared to defend the United States in the coming artificial intelligence (AI) era. …Because of AI, adversaries will be able to act with micro-precision, but at macro-scale and with greater speed. They will use AI to enhance cyber attacks and digital disinformation campaigns and to target individuals in new ways."

To counter the threat of offensive AI, we need to anticipate the adversary's next move and devise countermeasures in advance, rather than fall into a reactive arms race.

Our Lab

The Offensive AI Research Lab was founded in 2020 with the goal of producing research that can help the world prepare for the emergence of offensive AI. Instead of just 'patching' the problem, we are working on ways to give the defender the upper hand by putting the adversary at a disadvantage. We are a friendly group of researchers and hackers who share a passion for AI and cybersecurity. We strongly believe that AI should never be used to cause physical, financial, or psychological harm. However, we understand that cyber criminals do not abide by the same code of ethics as we do. Therefore, it is our mission to identify, counter, and mitigate the threat of offensive AI to help protect society.

Join Us!

We are currently seeking excellent students and post-docs who are eager to research offensive AI. Whether you are captivated by deepfakes, concerned about AI-powered attacks, or just want to exploit machine learning models, we would love to work with you in our labs! You can make a difference in the world, and we’d be honored to make that difference with you. Generous stipends are available for suitable candidates.

If you are interested in joining or have any questions, please reach out to Dr. Mirsky by email: yisroel(at)post.bgu.ac.il

References