
An Artificial Intelligence Primer

Updated: Jun 4

From NVIDIA's stock price to increasing adoption rates to an ever-growing list of concerns, we are officially in the age of Artificial Intelligence (AI).





What is artificial intelligence?


Artificial Intelligence (AI) is a technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.


AI is a general purpose technology, which means it can affect the entire economy. The use of AI spans all companies, of all sizes, in all parts of the world. AI can be used to solve problems, automate tasks, and so much more.


People, organizations, and governments can use AI to benefit the world, improve the human condition, increase profits, and achieve many other positive outcomes. However, people, organizations, and governments can also use AI to destroy the world, cause harm, erode the human condition, and produce many other negative outcomes. In many cases, the use of AI will be interpreted differently by different parties.


AI is not perfect. AI technology may glitch, may contain bias, or may simply not operate as expected. Therefore, any user (or operator) of AI should be aware of the risks associated with such technologies.




What are the risks with AI?


Artificial Intelligence (AI) is a double-edged sword, a mixed blessing, a paradoxical situation. The use of AI can have both good and bad consequences, which means that we need to understand the profound set of associated risks.

To understand the risks, let's break AI risks into two categories:


1. AI adoption risks.

AI adoption risks cover the concept of people, organizations, and governments incorporating AI into their day-to-day routines.


The adoption of AI comes with risks. For ease, you can categorize AI risks into three attributes: confidentiality, integrity, and availability.


A situation where sensitive or confidential data is exposed via an AI technology is an AI-Confidentiality risk. This category includes exposure of customer data, employee data, trade secrets, government secrets, and others. In most cases, the exposure of this data is due to human error, system flaws, or system weaknesses. The severity of this risk category is based on the value of the exposed data.


A situation where an AI technology behaves inconsistently, in a false manner, or in a corrupted fashion is an AI-Integrity risk. This category could result in bad decisions, operational glitches, misappropriations of assets, and others. In most cases, integrity-based risks are due to technology flaws, development issues, or the results of a cyber-attack. The severity of this risk category varies based on the result of the bad decision, operational glitch, misappropriation incident, or other consequence.


A situation where an AI technology is unavailable is an AI-Availability risk. This category includes situations where the AI technology is the primary system and where the AI technology is a dependent component in a larger system. In most cases, availability-based risks are due to malfunctions, power and network outages, glitches, capacity issues, and human error. The severity of this risk category is based on the outage duration and impacted functions during the outage.
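The three categories above can be sketched as a simple classification exercise. The scenario descriptions and the grouping helper below are purely illustrative, not part of any standard taxonomy:

```python
from enum import Enum

class AIRiskAttribute(Enum):
    CONFIDENTIALITY = "confidentiality"  # sensitive data exposed via AI
    INTEGRITY = "integrity"              # AI behaves falsely or inconsistently
    AVAILABILITY = "availability"        # AI technology is unavailable

# Hypothetical scenarios mapped to the three attributes (illustrative only).
scenarios = {
    "Chatbot leaks customer records in a response": AIRiskAttribute.CONFIDENTIALITY,
    "Model returns fabricated figures in a report": AIRiskAttribute.INTEGRITY,
    "AI scheduling service goes down during peak hours": AIRiskAttribute.AVAILABILITY,
}

def risks_by_attribute(scenarios):
    """Group scenario descriptions under their CIA attribute."""
    grouped = {attr: [] for attr in AIRiskAttribute}
    for description, attr in scenarios.items():
        grouped[attr].append(description)
    return grouped

grouped = risks_by_attribute(scenarios)
print(grouped[AIRiskAttribute.INTEGRITY])
```

Grouping scenarios this way makes it easier to compare the severity drivers the text describes: data value for confidentiality, decision impact for integrity, and outage duration for availability.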


Even though AI risks present a challenge, people, organizations, and governments should not refrain from incorporating AI into their day-to-day routines. Instead, AI users and operators may want to adopt a set of responsibilities and practices to reduce confidentiality, integrity, and availability AI risks. See here for more information regarding AI Governance.



2. AI-based cyber attack risks.

AI-based cyber attack risks cover the concept of a motivated attacker using AI to exploit people, organizations, and governments. The attacker's motivation may be based on financial gain, espionage, fun, a grudge, ideology, or convenience.


AI helps the attacker exploit weaknesses by getting around or overwhelming known countermeasures.


As one example, an attacker can use AI deep fake technology to social engineer their way into a private system. Once in the private system, the attacker can steal information or alter the integrity of the system.


AI provides an advantage to the attacker. AI operates faster than a normal human: it can ascertain and apply knowledge faster, and it can learn and apply skills faster. With AI, the attacker gains the efficiency, knowledge, and skill to exploit victims.


Protection against AI-based or AI-assisted attacks ties back to normal cyber risk management concepts. In other words, the same rules apply. See here for more information.




AI Governance.


If you are a person, organization, or government that intends to use or is currently using an AI technology, then you may want to consider adopting an AI governance framework.


AI Governance:

AI governance is a set of overarching responsibilities and practices to protect trade secrets, customer data, reputation, shareholder value, and to manage many other liabilities related to the use of Artificial Intelligence (AI).


AI Governance Framework

An AI governance framework is a guide that expands AI governance into something that is practical and useful. An AI governance framework may include elements related to AI oversight, AI policy, AI third party vendor management, AI responsibilities, AI roles and authorities, an AI risk management strategy, and an understanding of AI use cases.


AI Rigor

The rigor of an AI governance framework may vary from one AI operator to the next. For organizations and governments, AI governance:

  • may be organization-wide,

  • may include regular monitoring of AI risks in comparison to other operational risks,

  • may include AI budget considerations,

  • may include an executive-sponsored AI vision,

  • may include AI training that leads to an improved AI culture,

  • and may include expectations that the organization can adapt its AI strategy due to changes in the organization and emerging AI technologies.


In certain cases, AI operators may want to use their AI governance framework to derive an implementation score that is compared to a defined AI governance target. AI operators can then use that target to direct their AI improvement actions.
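One minimal way to sketch this score-versus-target idea: score each governance element, average the scores, and rank the gaps. The element names, scoring scale, and target value below are all illustrative assumptions, not a prescribed methodology:

```python
# Hypothetical AI governance elements, each scored 0-100 for how fully
# the organization has implemented it (names are illustrative only).
element_scores = {
    "oversight": 80,
    "policy": 60,
    "third_party_vendor_management": 40,
    "roles_and_authorities": 70,
    "risk_management_strategy": 50,
}

GOVERNANCE_TARGET = 75  # illustrative target the operator defines

def implementation_score(scores):
    """Simple unweighted average of element scores."""
    return sum(scores.values()) / len(scores)

def improvement_gaps(scores, target):
    """Elements scoring below target, sorted by largest gap first."""
    gaps = {name: target - s for name, s in scores.items() if s < target}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

score = implementation_score(element_scores)  # 60.0
print(f"Implementation score: {score:.0f} (target: {GOVERNANCE_TARGET})")
for name, gap in improvement_gaps(element_scores, GOVERNANCE_TARGET):
    print(f"  {name}: {gap} points below target")
```

A real framework would likely weight elements by importance and score them against defined maturity criteria; the gap-ranking step is what turns the score into prioritized improvement actions.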




A practical AI risk enumeration structure.


The following abridged AI risk enumeration structure can be used as a guide to better understand your AI risks, to create an AI governance framework, and to prioritize AI actions.


AI Risk: Data loss, which includes loss or exposure of customer data, employee data, trade secrets, and government secrets.

Primary Attribute: Confidentiality, mainly related to human error, misuse, system flaws, or system weaknesses.

Secondary Attribute: None.

Possible Outcomes: Severity of outcome will vary based on data type and data volume. You may experience margin erosion due to incident costs and/or revenue loss due to reputation damage and loss of strategic advantage.


AI Risk: Bad decisions, which result from data bias and system inaccuracies.

Primary Attribute: Integrity, mainly related to limited or manipulated data, human error, and system flaws.

Secondary Attribute: None.

Possible Outcomes: Severity of outcome depends on the decision being made. You may experience margin erosion, loss of revenue, and/or devaluation.


AI Risk: Operational failure, which results from the failure of a dependent AI system.

Primary Attribute: Availability, mainly related to a dependent AI system becoming unavailable due to malfunction, power or network outages, glitches, capacity issues, or human error.

Secondary Attribute: Integrity, mainly related to system behavior issues, fraudulent activity, misappropriation, or a cybersecurity incident.

Possible Outcomes: Severity of outcome depends on the relation of the operational component to revenue generation. You may experience direct revenue loss and indirect revenue loss from reputation damage.


AI Risk: Human casualty, which results from an AI malfunction or failure. This risk is mostly limited to healthcare devices, onboard systems, and operational technology.

Primary Attribute: Integrity, mainly related to system behavior issues, fraudulent activity, misappropriation, human error, system flaws, or a cybersecurity incident.

Secondary Attribute: Availability, which results from the integrity issue. Either the AI technology fails or the AI operator disables the AI technology to prevent further damage.

Possible Outcomes: Severity of outcome depends on the degree of casualty and the number of victims. You may experience margin erosion, direct revenue loss, indirect revenue loss due to reputation damage, and devaluation.


AI Risk: Property damage, which results from an AI malfunction or failure. This risk is mostly limited to healthcare devices, onboard systems, and operational technology.

Primary Attribute: Integrity, mainly related to system behavior issues, fraudulent activity, misappropriation, human error, system flaws, or a cybersecurity incident.

Secondary Attribute: Availability, which results from the integrity issue. Either the AI technology fails or the AI operator disables the AI technology to prevent further damage. Additionally, property damage may lead to further availability issues.

Possible Outcomes: Severity of outcome depends on the degree of property damage. You may experience margin erosion, direct revenue loss, indirect revenue loss due to reputation damage, and devaluation.


AI Risk: Legal and regulatory, which results from a violation of contractual agreements, compliance requirements, and/or regulations due to use of AI technology.

Primary Attribute: Confidentiality, mainly related to misuse or exposure of customer or employee data.

Secondary Attribute: Integrity & Availability, mainly related to a failure to adhere to legal obligations and warranties due to a malfunction or behavior issue.

Possible Outcomes: Severity of outcome depends on the disputed contractual agreement and/or regulatory violation. You may experience margin erosion or indirect revenue loss due to brand damage.

Special Note: The above structure is abridged and does not represent all risks or all AI risk enumeration elements.
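An enumeration structure like the one above lends itself to a machine-readable form, which makes it easier to filter and prioritize risks. The record layout below is a sketch of one possible representation; the two entries are condensed from the abridged structure, not a complete or official schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRiskEntry:
    """One entry of an AI risk enumeration structure."""
    risk: str
    primary_attribute: str
    secondary_attribute: Optional[str]
    possible_outcomes: str

# Two illustrative entries condensed from the abridged structure above.
enumeration = [
    AIRiskEntry(
        risk="Data loss",
        primary_attribute="Confidentiality",
        secondary_attribute=None,
        possible_outcomes="Margin erosion and revenue loss; severity varies "
                          "with data type and volume.",
    ),
    AIRiskEntry(
        risk="Operational failure",
        primary_attribute="Availability",
        secondary_attribute="Integrity",
        possible_outcomes="Direct and indirect revenue loss.",
    ),
]

# Example query: all risks whose primary attribute is Availability.
availability_risks = [e.risk for e in enumeration
                      if e.primary_attribute == "Availability"]
print(availability_risks)
```

Queries like this one are how the enumeration supports prioritization: an operator can slice the entries by attribute, outcome, or third-party dependency and act on the highest-severity slice first.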



Since most AI operators are not the inventors of the AI technology they use, AI operators need to consider how each AI risk scenario relates to third-party AI providers. AI operators will have some control over how they use and configure the AI technology to reduce AI risks. However, AI operators will also have limited risk-reducing options based on the contractual relationship with the AI technology vendor.




Defending against AI-based cyber attacks.


If you are a person, organization, or government concerned about AI-based cyber attacks, then you may want to default to your traditional cyber risk management practices.


AI gives the attacker some extra advantages, such as efficiency and skill. However, AI-based attacks still rely on known attack methods. Therefore, you may want to use your cyber risk management practice to better understand emerging AI-based cyber attack frequency and methods in order to improve your risk mitigation and transfer strategy.


Additionally, you don't want to lose sight of the basics. Vulnerability management, account management, access management, secure configurations, data protections, awareness training, and incident response remain effective in reducing your cyber risk condition with or without AI-based cyber attacks.




In summary.


Artificial Intelligence (AI) is a general purpose technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. The use of AI will be interpreted differently by different parties. AI is not perfect and comes with risks. All AI operators should be aware of these risks.


We can categorize AI risks into two categories:


1. AI adoption risks.

2. AI-based cyber attack risks.

The definition and solution to each AI risk category is unique. AI governance helps with AI adoption risks, while traditional cyber risk management helps with AI-based cyber attack risks.


AI governance is a set of overarching responsibilities and practices to protect trade secrets, customer data, reputation, shareholder value, and to manage many other liabilities related to the use of AI, and an AI governance framework is a guide that expands AI governance into something that is practical and useful.


An AI risk enumeration structure can be used as a guide to better understand your AI risks, to create an AI governance framework, and to prioritize AI actions.




Next Step.


Check out the X-Analytics AI Governance Tool (beta version) to quickly and easily determine your AI Adoption Risks.




