AI and Cybersecurity
Understanding the relationship between artificial intelligence (AI) and cybersecurity is vital for Dartmouth College employees to maintain strong digital defenses. This quick article covers the following:
History of AI
Benefits of AI for Cybersecurity
How AI is being utilized in cyber attacks
Generative AI and cybersecurity
Determining if AI tools are trustworthy
History of AI
The term “artificial intelligence” was coined in the 1950s, after Alan Turing, a British mathematician and logician, speculated about “thinking machines” that could reason like humans.
Benefits of AI for Cybersecurity
According to a recent research report, the global market for AI cybersecurity products is estimated to climb to about $135 billion by 2030.
AI can be used by cybersecurity organizations to supplement traditional tools like antivirus protection and data-loss prevention.
AI can analyze large sets of data and find patterns, making it useful in cybersecurity tasks, including the following:
Detecting attacks (and reducing false positives)
Identifying and flagging email phishing attacks
Simulating social engineering attacks to test an organization’s security awareness
AI-assisted penetration testing that targets an organization’s specific technology, identifying weaknesses before hackers can exploit them
Potential to stop security incidents before they occur, lowering IT costs
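The detection tasks above all rest on the same idea: learn what normal activity looks like, then flag deviations. Below is a minimal sketch of that idea using only Python's standard library and made-up failed-login counts; real AI detectors learn far richer patterns than this simple z-score check.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Flag values more than `threshold` standard deviations above the mean.

    AI-based detectors learn far more complex baselines; this sketch only
    illustrates the core idea of separating outliers from normal activity.
    """
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if sigma and (c - mu) / sigma > threshold]

# Hypothetical failed-login counts per hour; the spike at index 5 stands out.
logins = [3, 4, 2, 5, 3, 250, 4, 3, 2, 4, 3, 5]
print(flag_anomalies(logins))  # → [5] (the burst of failed logins is flagged)
```

A well-tuned threshold is also how false positives are reduced: raising it trades sensitivity for fewer spurious alerts.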
How AI is being utilized in cyber attacks
Social engineering schemes - automated by AI to generate more attacks in a shorter period of time
Password hacking - AI helps improve algorithms for password hacking, providing quicker and more accurate password-guessing
Deepfakes - AI can make and distribute fake audio and video content to realistically impersonate an individual
Data poisoning - altering the training data used by an AI algorithm, leading to bad output
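Data poisoning can be shown with a toy example: a nearest-centroid classifier whose training labels an attacker has tampered with. Everything here (the samples, the labels, and the classifier itself) is invented purely for illustration.

```python
def centroid_classifier(samples):
    """Build a classify(x) function from per-class means of (value, label) pairs."""
    by_label = {}
    for x, label in samples:
        by_label.setdefault(label, []).append(x)
    centroids = {label: sum(xs) / len(xs) for label, xs in by_label.items()}
    # Classify by whichever class centroid is nearest.
    return lambda x: min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

clean = [(1, "benign"), (2, "benign"), (3, "benign"),
         (10, "malicious"), (11, "malicious"), (12, "malicious")]
# Poisoned copy: an attacker relabels two malicious samples as benign,
# dragging the "benign" centroid toward the malicious cluster.
poisoned = [(x, "benign") if x in (10, 11) else (x, lbl) for x, lbl in clean]

print(centroid_classifier(clean)(8))     # → malicious
print(centroid_classifier(poisoned)(8))  # → benign (the poisoned model misses it)
```

The model's logic is unchanged; corrupting a few training labels is enough to make it misclassify the same input.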
Generative AI and cybersecurity
What is generative AI? - a branch of AI focused on using existing data to generate new data
Use cases
Data retrieval/analysis (e.g. assisting threat hunters with data retrieval for real-time insights into vulnerability management workflows)
Content generation and summarization (e.g. learning and summarizing security documentation)
Generating complex, unique passwords or encryption keys that are harder to crack, offering an additional layer of security
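The password-generation use case above ultimately depends on cryptographically secure randomness, whichever tool produces the password. A short sketch using Python's standard `secrets` module shows what "complex and unique" means in practice:

```python
import secrets
import string

def generate_password(length=16):
    """Build a random password from letters, digits, and punctuation.

    Cryptographically secure randomness (the `secrets` module) is what
    makes a password hard to guess, whether or not AI is involved.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 16-character password every run
```

Unlike the `random` module, `secrets` draws from the operating system's secure randomness source, so the output is unpredictable to an attacker.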
Pros of generative AI in cybersecurity
Efficiency
In-depth analysis and summarization - ability to quickly and accurately conduct previously time-intensive data analysis tasks
Proactive threat detection - shifting from reactive to proactive cybersecurity to detect threats before they occur
Cons of generative AI in cybersecurity
Requires high levels of computational resources (power, storage, etc.)
Widely available generative AI (e.g. GPT-based tools) is making it easier for cybercriminals to conduct attacks
Ethical questions related to privacy and control over data used in AI training datasets
Determining if AI tools are trustworthy
NIST has developed an AI Risk Management Framework (AI RMF) to better manage risks to individuals, organizations, and society associated with AI.
Decision-making should involve a broad set of stakeholders considering the relative risks, impacts, costs, and benefits.
Ongoing monitoring, testing, and adaptation are crucial for maintaining trustworthiness as AI systems evolve.
Human judgment is essential in determining trustworthiness metrics and thresholds.
Organizations must balance characteristics based on context, risks, and impacts.
Characteristics of Trustworthy AI Systems:
Valid and Reliable
Fulfillment of intended use
Consistent performance over time
Requires system functionality and structure
Safe
AI systems should not endanger human life, health, property, or the environment
Achieved through responsible design, deployment, and decision-making practices
Secure and Resilient
Allows systems to withstand adverse events
Prevents unauthorized access and use
Maintains system functionality and structure
Accountable and Transparent
Ensures access to information about the system and its outputs
Enables individuals to understand AI system operations and outcomes
Explainable and Interpretable
Reveals the mechanisms underlying AI systems
Provides meaning to system outputs
Privacy-Enhanced
Safeguards human autonomy, identity, and dignity
Privacy norms and practices guide AI system design and development
Fair with Harmful Bias Managed
Addresses harmful bias and discrimination
Bias categories include systemic, computational, and human-cognitive biases
Details
Article ID: 158622
Created: Wed 4/24/24 11:30 AM
Modified: Mon 5/13/24 12:32 PM