Whitepaper on
A COMPREHENSIVE SURVEY ON EXPLAINABLE AI IN CYBERSECURITY DOMAIN

Prepared by
Ms N Eswari Devi, Dr N Subramanian, Dr N Sarat Chandra Babu
Society for Electronic Transactions and Security (SETS)
Under the Office of the Principal Scientific Adviser to the Government of India
MGR Knowledge City, CIT Campus, Taramani, Chennai – 600 113

Understanding a model's decisions also addresses fairness issues and aids in debugging the model. Research on XAI began in 2004, with significant breakthroughs from around 2014; DARPA subsequently announced its XAI program. According to DARPA [3], the goal of XAI is to "produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners".

1.2 Objectives and Scope

This approach paper focuses on the work being carried out on Explainable Artificial Intelligence (XAI) for cybersecurity at both international and national levels.

1.3 Organization of the Paper

The approach paper is structured as follows. The Fundamentals of XAI section provides an overview of Explainable AI concepts, techniques, and the importance of interpretability in AI models. The XAI in Cybersecurity section explores the applications and benefits of XAI specifically in cybersecurity tasks. The Review of Related Work section summarizes existing research in the field, highlighting methodologies and findings related to XAI in cybersecurity from both national and international efforts. The Challenges of XAI in Cybersecurity section then discusses the limitations and potential adversarial threats associated with implementing XAI in cybersecurity systems. Finally, the paper concludes with insights into Future Directions, suggesting research directions and open issues in XAI for cybersecurity.

2. XAI in Cybersecurity

2.1 Role of AI in Cybersecurity

Artificial intelligence significantly enhances cybersecurity by improving threat detection, prevention, and response capabilities. According to a NASSCOM report from February 2024, India's AI market is growing at a CAGR of 25-35% and is projected to reach around $17 billion by 2027 [4]. AI systems analyze vast amounts of data to identify anomalies, detect malware, and predict potential cyber threats. They automate threat intelligence gathering and incident response, enabling rapid and effective countermeasures. AI also optimizes scanning and patching processes, enhancing vulnerability management, and strengthens user behavior analytics to detect insider threats and fraud. Furthermore, integrating AI into cybersecurity frameworks not only strengthens defense mechanisms but also enhances the ability to foresee and counteract sophisticated cyberattacks, ensuring a more robust and resilient digital environment.

2.2 Necessity of XAI in Cybersecurity

Explainable AI has become essential for security as it enhances transparency, trust, and accountability. One of the main reasons XAI is crucial in cybersecurity is its ability to foster transparency and trust in AI systems' decision-making processes. It helps cybersecurity professionals understand why an AI model flagged a particular activity as malicious or benign. By making AI systems' workings transparent, XAI allows for ongoing refinement and improvement.
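As a purely illustrative sketch (not drawn from any of the surveyed works), the snippet below shows how a per-alert feature attribution method such as SHAP can indicate which inputs pushed a classifier toward a "malicious" verdict. The random-forest model, the flow feature names, and the synthetic data are all assumptions made for this example, and SHAP output shapes can vary between library versions.

```python
# Minimal, hypothetical sketch: explain why one network flow was flagged as malicious.
# Feature names and data are synthetic placeholders, not from the whitepaper.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["bytes_sent", "bytes_received", "duration",
                 "failed_logins", "dst_port_entropy"]
X = rng.random((500, len(feature_names)))           # stand-in for network-flow features
y = (X[:, 3] + 0.5 * X[:, 4] > 0.9).astype(int)     # synthetic "malicious" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
values = explainer(X[:1]).values[0]                 # attributions for one flagged flow
if values.ndim == 2:                                # some shap versions add a class axis;
    values = values[:, 1]                           # keep the "malicious" class

# Features with the largest attributions are the ones that drove this alert.
for name, v in sorted(zip(feature_names, values), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {v:+.3f}")
```

In an operational setting, per-alert attributions of this kind give analysts a concrete rationale that can be checked against domain knowledge before acting on an alert.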
XAI provides insights into the model's decision-making process, enabling the identification and correction of biases. This ensures that cybersecurity measures are fair and unbiased, maintaining the integrity of security protocols. Statistics show that global attacks increased by 28% in the third quarter of 2022 compared to the same period in 2021 [5].

One survey [6] examines the use of explanation methods from the perspectives of three cybersecurity stakeholders: (1) designers, (2) model users, and (3) adversaries. It thoroughly covers various traditional and security-specific explanation methods and explores promising research directions. Gautam Srivastava [7] focused on the application of XAI for cybersecurity in specific technology verticals such as smart healthcare, smart banking, smart agriculture, smart cities, smart governance, smart transportation, Industry 4.0, and 5G and beyond. A short survey on XAI for cybersecurity lists several XAI toolkits and libraries that support the implementation of explainability [8,9]. An exhaustive literature review with 244 references highlights various cybersecurity applications using deep learning techniques, such as intrusion detection systems, malware detection, phishing and spam detection, botnet detection, fraud detection, zero-day vulnerabilities, digital forensics, and cryptojacking. That survey also examines the current use of explainability in these methods, identifies promising works and challenges, and provides future research directions; it emphasizes the need for more formalism, the importance of human-in-the-loop evaluations, and the impact of adversarial attacks on XAI [10].

The X_SPAM approach combines the machine learning technique Random Forest with the deep learning technique LSTM to detect spam, using the explainable AI technique LIME to enhance trustworthiness by explaining classification decisions [11]; a minimal illustrative sketch of this idea is shown below. Another survey classifies XAI applications in cybersecurity into three groups: defensive applications against cyber-attacks, potential uses of XAI for cybersecurity in various industries, and cyber adversarial threats targeting XAI applications along with defense approaches. It also highlights challenges in implementing XAI for cybersecurity and stresses the importance of standardized explainability evaluation metrics [12]. Work focusing on XAI applications in cybersecurity with respect to fairness, integrity, privacy, and robustness observes that real-world scenarios are often overlooked and that current countermeasures to defend XAI methods are limited [13]. The application of XAI in Cyber Threat Intelligence (CTI) spans three major themes: phishing analytics, attack vector analysis, and cyber-defense development [14]. Another study [15] discusses the strengths and concerns of existing methods in applications like security log analysis, presenting a pipeline for designing interpretable and privacy-preserving system monitoring tools.

XAI is becoming increasingly essential in cybersecurity applications, as a lack of explainability undermines trust in AI predictions. Beyond explainability, accuracy and performance must also be guaranteed in AI models used for cybersecurity. Training datasets play a crucial role in any machine learning application; XAI helps detect highly imbalanced datasets and correct biases, thereby improving system robustness. The concept of Explainable Security (XSec) [16] addresses the security of XAI systems, providing a thorough review of how to secure them.
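To illustrate the kind of explanation LIME produces in a setting like X_SPAM [11], the sketch below trains a toy TF-IDF plus random-forest spam classifier (a stand-in for the RF/LSTM ensemble described above, not the authors' code) and asks LIME which words pushed a message toward the "spam" class. The pipeline, messages, and labels are assumptions made purely for this example.

```python
# Hypothetical sketch of LIME explaining a spam classification decision.
from lime.lime_text import LimeTextExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

texts = [
    "Congratulations! You have won a free prize, click the link now",
    "Meeting moved to 3pm, see agenda attached",
    "Urgent: verify your account to claim your reward",
    "Lunch tomorrow? Let me know what works",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham (toy data)

# Simple text pipeline standing in for the RF/LSTM ensemble of X_SPAM.
pipeline = make_pipeline(TfidfVectorizer(),
                         RandomForestClassifier(n_estimators=50, random_state=0))
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["ham", "spam"])
explanation = explainer.explain_instance(
    "You won a free reward, click now to claim",
    pipeline.predict_proba,   # LIME perturbs the text and queries this function
    num_features=5,
)
# Words with positive weights pushed the prediction toward "spam".
print(explanation.as_list())
```

The word weights returned by explanation.as_list() give a human-readable justification for each classification, which is the trust-building role the surveyed work attributes to LIME.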
Side-channel analysis (SCA) extracts secret information from cryptographic devices by analysing physical emissions such as power consumption, electromagnetic leakage, or timing information. AI algorithms play a significant role in enhancing profiling SCA: they recognize patterns in large datasets and identify subtle correlations between side-channel emissions and the secret information [17,18]. In AI-assisted SCA, specific features, or Points of Interest (PoI), contribute to the retrieval of the secret information, and explainability techniques reveal which features or PoI contribute most to the model's decisions [19]. This helps in validating a design against SCA by identifying potentially vulnerable leakage points and implementing suitable countermeasures. An interpretable neural network known as the Truth Table Deep Convolutional Neural Network (TT-DCNN), which has an interpretable inner structure, has been used to perform SCA; interpretability is achieved by converting the neural network (NN) into SAT equations to understand what the NN model has learnt [20]. Masking and hiding are countermeasures used to protect secret information at the implementation level, but AI algorithms are effective in defeating them. The ExDL-SCA (explainability of deep-learning-based SCA) methodology is used to understand the effect of such countermeasures against AI-assisted SCA [21], helping developers evaluate the security of the implemented countermeasures. XAI also plays an important role in hardware trojan detection [81].

Figure 1 depicts the application of XAI in various domains of cybersecurity.

Figure 1: Application of Explainable AI in Cybersecurity. The figure shows explainable AI in security supporting phishing detection, botnet detection, ransomware and malware detection, hardware security, intrusion detection systems, spam detection, anomaly detection, and domain generation algorithm detection, and poses, for all stakeholders, the questions of who gives and receives the explanation on security, what is explained, when the explanation is given, why explainable security is needed, and how to explain security.

3. Review of Related Work

3.1 International Efforts

In 2017, John Launchbury, Director of DARPA's Information Innovation Office (I2O), discussed the "three waves of AI" to demystify the technology. The first wave, "handcrafted knowledge," involves encoding domain-specific knowledge into rules that computers follow. The second wave, "statistical learning," uses statistical models trained on specific data. The third wave, "contextual adaptation," features systems that can understand and explain the reasoning behind their decisions [22]. DARPA's XAI program focused on two main challenges: (1) solving machine learning problems to classify events of interest in heterogeneous, multimedia data, and (2) developing machine learning methods to construct decision policies for autonomous systems performing a variety of simulated missions [23].

In 2020, IBM launched "The Policy Lab," a platform designed to develop AI policies and recommendations, providing policymakers with a vision and practical guidance to harness the advantages of innovation while ensuring trust in a rapidly changing technology landscape [24]. In 2022, the IBM Institute for Business Value published a study on AI ethics in action, which states that building trustworthy AI is perceived as a strategic differentiator and that organizations are beginning to implement AI ethics mechanisms.
The study suggests addressing dimensions such as privacy, robustness, fairness, explainability, transparency, and other relevant principles to establish a governance approach for ethical AI implementation [25]. In 2020, Google also released a white paper titled "AI Explainability," a technical reference accompanying Google Cloud's AI Explanations product; it aims to leverage AI Explanations to simplify model development and explain the model's behavior to key stakeholders. The "Global AI Action Alliance" project, part of the World Economic Forum's Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning Platform, aims to harness the transformative potential of AI by accelerating the adoption of trusted, transparent, and inclusive AI systems globally.

By 2026, Gartner expects organizations that operationalize AI transparency, trust, and security to achieve a 50% improvement in their AI models in terms of adoption, business goals, and user acceptance [26]. Gartner notes that technology service providers are increasingly using explainable AI in their models, especially in security and in regulated industries like healthcare and financial services [26,27].

"AI privacy, security, and/or risk management starts with AI explainability, which is a required baseline." - Avivah Litan, Vice President and Distinguished Analyst at Gartner

In 2019, The Royal Society published "Explainable AI: the basics," a policy briefing summarizing the challenges and considerations for developers and policymakers when implementing explainable AI systems. It emphasizes that the explanation required of an AI model varies with its application and offers a view on how XAI can be implemented without compromising system performance, including accuracy, interpretability, and privacy [28]. In September 2022, NIST published a draft of the AI Risk Management Framework (AI RMF) aimed at the development and implementation of trustworthy AI, with explainability as one of the characteristics to consider in comprehensive approaches for identifying and managing AI system risks [29].

In May 2022, through the "IBM Global AI Adoption Index 2022," IBM provided insights into overall AI adoption worldwide, the barriers and challenges hindering AI from reaching its potential, and the use cases, industries, and countries where AI is most likely to thrive [30]. The statistics in Figure 2 reveal that most organizations have not taken essential steps toward trustworthy AI. Specifically, 61% of the organizations surveyed have not made significant efforts to explain AI-powered decisions. Security professionals are the fourth-largest group of AI users at organizations, representing 26%, and 29% of organizations use AI for security and threat detection.

Figure 2: Statistics of organizations that have not taken key steps towards trustworthy AI, as per the IBM Global AI Adoption Index [30]

Additionally, 84% of IT professionals acknowledge the importance of trust in AI and believe that explaining AI decisions is crucial for their business. IT professionals at organizations currently deploying AI are 17% more likely to report that their business values AI explainability than those only exploring AI. Furthermore, companies face several barriers in developing trustworthy and explainable AI; notably, 63% of companies struggle due to a lack of skills and training to develop and manage trustworthy AI.
IT professionals in government and healthcare sectors who are currently exploring or deploying AI are more likely to identify barriers to explainability and trust than those in other industries.

3.2 National Efforts

As one of the fastest-growing economies globally, India has a keen interest in the AI revolution reshaping the world. Recognizing AI's potential to drive economic transformation and the need for India to position itself strategically in this shift, the government has initiated the development of a national AI strategy. At the national level, NITI Aayog, MeitY (Ministry of Electronics & Information Technology), the Ministry of Commerce & Industry, and the Office of the Principal Scientific Adviser to the Government of India have formed various task forces comprising domain experts to craft