Emerging Trends at the Nexus of Artificial Intelligence & Cybersecurity
In this blog post, we analyze three trends at the nexus of cybersecurity and AI that are fundamentally changing the digital landscape in 2023.
With generative artificial intelligence (AI) and other innovative applications drawing attention worldwide, the threats and opportunities posed by AI are rising to the top of every news feed. One area where AI has already made a tremendous impact is cybersecurity, as exemplified in past research from the Wilson Center. As more machine learning (ML) algorithms are leveraged in cybersecurity systems and cyber operations, we have seen the following trends emerge at the nexus of AI and cybersecurity:
- AI-informed defensive strategies show potential to become among the most effective cybersecurity measures against hacking operations.
- Explainable AI (XAI) models are making cybersecurity applications more secure.
- The democratization of AI inputs is lowering barriers to entry in automating cybersecurity practices.
AI-Informed Defensive Strategies
As new AI software is leveraged in cyber operations, AI-informed defensive strategies may become a determining factor in properly securing our networks. SecurityWeek recently highlighted, in its Cyber Insights conference coverage, how AI software is commonly used for anomaly detection in computer networks, such as identifying malware, active adversarial activity, or indicators of compromise on a system. What’s more, AI software is now also used to automate other cybersecurity processes, such as isolating a compromised device.
Additionally, a recent study in Electronics demonstrated that proactive measures leveraging ML models for threat detection are becoming essential to stopping cyberattacks. Neural networks, deep reinforcement learning, and autoencoders, for example, have been identified as critical to robust cybersecurity responses. Moreover, a comprehensive survey published in the UAE indicates that AI-enabled software may be one of the few applications that can detect when other AI systems have “poisoned” information or data on a computer.
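The core idea behind ML-driven anomaly detection, learning what "normal" network behavior looks like and flagging deviations from it, can be sketched in a few lines. The following is a minimal illustration, not any of the models the cited study evaluates; the traffic features and threshold are hypothetical, and a simple per-feature z-score stands in for the neural networks and autoencoders mentioned above.

```python
from statistics import mean, stdev

def fit_baseline(normal_samples):
    """Learn per-feature mean and standard deviation from traffic assumed benign."""
    features = list(zip(*normal_samples))
    return [(mean(f), stdev(f)) for f in features]

def anomaly_score(sample, baseline):
    """Largest per-feature z-score: how far the sample sits from the learned normal."""
    return max(abs(x - m) / s for x, (m, s) in zip(sample, baseline))

def is_anomalous(sample, baseline, threshold=3.0):
    """Flag the sample if any feature deviates beyond the threshold."""
    return anomaly_score(sample, baseline) > threshold

# Hypothetical per-host features: [bytes_sent, connections_per_min, distinct_ports]
normal = [[500, 4, 2], [520, 5, 3], [480, 4, 2], [510, 6, 3], [495, 5, 2]]
baseline = fit_baseline(normal)

print(is_anomalous([505, 5, 2], baseline))       # typical traffic -> False
print(is_anomalous([50000, 120, 40], baseline))  # exfiltration-like burst -> True
```

Production systems replace the z-score with a trained model (an autoencoder's reconstruction error plays the same role as `anomaly_score` here), but the detect-then-respond pattern, including automated responses like isolating the flagged device, follows the same shape.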
Explainable AI Models in Cybersecurity
Most human users do not understand the purpose of ML algorithms or how they function. In 2019, the Defense Advanced Research Projects Agency initiated groundbreaking research to address this issue and explore the importance of XAI in creating systems whose models and decisions can be understood by human users. However, according to a recent IEEE study, despite the widespread use of AI for cybersecurity anomaly detection, most ML algorithms are deployed in a “black box manner, meaning that security experts and customers are unable to explain how such procedures reach particular conclusions.” This opacity decreases user confidence in these models and obscures their limitations in addressing diverse cyber intrusions. Yet applying XAI in cybersecurity bolsters user trust and helps manage cyber defense mechanisms when they fall short by providing clear accounts of how the models reached their conclusions. For example:
- Malware Analysis – XAI models can extract rules applied in neural networks that identify mobile malware behaviors and present them with accuracy estimations to users. Additionally, these rules can be compiled by analysts to create hacker profiles and malware classification methodologies.
- Botnet Operations – XAI models can enhance human-to-machine interactions for botnet operation detection, which requires feature classification that can demand immense computational resources. By having the human user provide active feedback on the classifications used in XAI models, these systems can overcome resource demands more quickly and improve the transparency of the botnet prediction process.
- Distributed Denial of Service (DDoS) Attacks – XAI models can translate the flow of data from DDoS packets to track the origination of these attacks and assist with attribution.
As AI models are used more often for intrusion detection, software reverse engineering, website fingerprinting, and other applications, explainable models may prove more resilient and effective at blocking cyber threats by improving security flaw identification, error remediation for human users, and attribution capabilities.
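The per-decision attribution that makes a model "explainable" can be illustrated with a toy, fully transparent classifier. Everything below is hypothetical: the behavioral feature names, weights, and threshold are invented for illustration, and a linear score stands in for the neural networks from which real XAI systems extract rules, as in the malware-analysis example above.

```python
# Toy "malware classifier": a weighted linear score over hypothetical
# behavioral features observed in a mobile app. All names and weights
# are illustrative, not drawn from any real detection system.
WEIGHTS = {
    "reads_contacts": 0.4,
    "sends_sms": 0.5,
    "requests_admin": 0.6,
    "uses_https": -0.2,  # benign-leaning signal lowers the score
}
THRESHOLD = 0.7

def classify(sample):
    """Return (flagged?, score) for a dict of 0/1 feature observations."""
    score = sum(WEIGHTS[k] * v for k, v in sample.items())
    return score >= THRESHOLD, score

def explain(sample):
    """Rank each feature by its contribution to the final score: the
    per-decision attribution an XAI layer surfaces to human analysts."""
    contributions = {k: WEIGHTS[k] * v for k, v in sample.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

app = {"reads_contacts": 1, "sends_sms": 1, "requests_admin": 1, "uses_https": 1}
flagged, score = classify(app)
print(flagged)  # True: the combined score exceeds the threshold
for feature, contribution in explain(app):
    print(f"{feature}: {contribution:+.2f}")
```

Because the contributions are visible, an analyst can see that the admin-privilege request, not the HTTPS usage, drove the verdict; it is this kind of ranked, human-readable account, rather than the raw model internals, that builds the user trust discussed above.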
The Democratization of AI Inputs
Another trend emerging this year is the democratization of AI inputs, which is lowering barriers to the automation of cybersecurity practices. As Dr. Patrick Shafto writes, AI product developers and academic groups have published their algorithmic models, code, and data sources to move toward open source systems, which allow for constructive feedback, community-driven innovation, and increased transparency on functionality. Additionally, the Association of Information Systems has found that this movement to democratize AI and make AI components more accessible to groups with limited knowledge and experience can give such groups the ability to automate cybersecurity practices.
An example is the use of “no-code tools,” which enable users to develop AI-based platforms without advanced technological competencies or coding skills, according to one recent study from Umeå University in Sweden. According to forthcoming research from the Jamk University of Applied Sciences, these platforms are becoming more affordable for organizations that could benefit from automating cybersecurity practices but lack the resources to create and analyze complex algorithmic systems. While there are significant benefits to democratizing AI inputs for public use, SlashNext’s work tracking cybercrime tools has shown how malicious actors can exploit or weaponize these models to enhance their own operations. Additionally, if these models are uninterpretable to users, they may lack robust security over the long term. Therefore, the democratization of AI inputs may create further vulnerabilities that counterbalance the benefits organizations would reap from automating their cybersecurity practices.
These three trends at the nexus of cybersecurity and AI are fundamentally changing the digital landscape, and while their consequences remain uncertain, their development is worth watching in the coming months. AI-informed defensive strategies and XAI models may significantly improve cybersecurity protocols; however, the current movement to democratize AI will introduce vulnerabilities to our digital systems that will have tradeoffs for cybersecurity. As a common trope in security circles suggests, you are only as strong as your weakest point. As these trends progress, the cybersecurity community and AI practitioners will need to work together to balance innovative automation with comprehensive security responses.
About the Author
Science and Technology Innovation Program
The Science and Technology Innovation Program (STIP) brings foresight to the frontier. Our experts explore emerging technologies through vital conversations, making science policy accessible to everyone.