
An Artificial Intelligence Primer

Updated: Jun 4

From NVIDIA's stock price to rising adoption rates to an ever-growing list of concerns, we are officially in the age of Artificial Intelligence (AI).


What is artificial intelligence?

Artificial Intelligence (AI) is a technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.

AI is a general purpose technology, which means it can affect the entire economy. The uses of AI span all companies, of all sizes, in all parts of the world. AI can be used to solve problems, automate tasks, and much more.

People, organizations, and governments can use AI to benefit the world, improve the human condition, increase profits, and achieve many other positive outcomes. However, they can also use AI to destroy the world, cause harm, erode the human condition, and produce many other negative outcomes. In many cases, the use of AI will be interpreted differently by different parties.

AI is not perfect. AI technology may glitch, may contain bias, or simply may not operate as expected. Therefore, any user (or operator) of AI should be aware of the risks associated with such technologies.


What are the risks with AI?

Artificial Intelligence (AI) is a double-edged sword: a mixed blessing. The use of AI can have both good and bad consequences, which means that we need to understand the profound set of associated risks.

In order to understand the risks, let's break AI risks into two categories:


1. AI adoption risks.

2. AI-based cyber attack risks.


AI governance.

If you are a person, organization, or government that intends to use or is currently using AI technology, then you may want to consider adopting an AI governance framework.

AI governance is a set of overarching responsibilities and practices to protect trade secrets, customer data, reputation, and shareholder value, and to manage many other liabilities related to the use of Artificial Intelligence (AI).

An AI governance framework is a guide that expands AI governance into something that is practical and useful. An AI governance framework may include elements related to AI oversight, AI policy, AI third party vendor management, AI responsibilities, AI roles and authorities, an AI risk management strategy, and an understanding of AI use cases.

The rigor of an AI governance framework may vary from one AI operator to the next. For organizations and governments, AI governance:

  1. may be organization-wide,

  2. may include regular monitoring of AI risks in comparison to other operational risks,

  3. may include AI budget considerations,

  4. may include an executive-sponsored AI vision,

  5. may include AI training that leads to an improved AI culture,

  6. may include the expectation that the organization can adapt its AI strategy as the organization changes and AI technologies emerge.


In certain cases, AI operators may want to use their AI governance framework to derive an implementation score that is compared against a defined AI governance target. AI operators can then use that target to direct their AI improvement actions.
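The scoring idea above can be sketched in a few lines of code. This is a minimal illustration, not a standard: the element names, weights, and target value below are hypothetical assumptions, and any real framework would define its own elements and scoring rubric.

```python
# Hypothetical sketch: scoring AI governance implementation against a target.
# Element names and values are illustrative assumptions, not a standard.

FRAMEWORK_ELEMENTS = {
    "oversight": 0.8,                 # each value is the assessed
    "policy": 0.6,                    # implementation level of that
    "vendor_management": 0.4,         # element, from 0.0 to 1.0
    "roles_and_authorities": 0.7,
    "risk_management_strategy": 0.5,
    "use_case_inventory": 0.3,
}

GOVERNANCE_TARGET = 0.7  # the defined AI governance target (assumed)

def implementation_score(elements: dict) -> float:
    """Average the per-element implementation levels into one score."""
    return sum(elements.values()) / len(elements)

score = implementation_score(FRAMEWORK_ELEMENTS)
gap = GOVERNANCE_TARGET - score  # positive gap = improvement needed
print(f"score={score:.2f}, target={GOVERNANCE_TARGET}, gap={gap:+.2f}")
```

A positive gap points at where improvement actions are needed; in practice an operator might weight elements by importance rather than taking a simple average.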


A practical AI risk enumeration structure.

The following abridged AI risk enumeration structure can be used as a guide to better understand your AI risks, to create an AI governance framework, and to prioritize AI actions.

AI Risk: Data loss, which includes loss or exposure of customer data, employee data, trade secrets, and government secrets.

  Primary Attribute: Confidentiality, mainly related to human error, misuse, system flaws, or system weakness.

  Secondary Attribute: None.

  Possible Outcomes: Severity of outcome will vary based on data type and data volume. You may experience margin erosion due to incident costs and/or revenue loss due to reputation damage and loss of strategic advantage.


AI Risk: Bad decisions, which result from data bias and system inaccuracies.

  Primary Attribute: Integrity, mainly related to limited or manipulated data, human error, and system flaws.

  Secondary Attribute: None.

  Possible Outcomes: Severity of outcome depends on the decision being made. You may experience margin erosion, loss of revenue, and/or devaluation.


AI Risk: Operational failure, which results from failure of a dependent AI system.

  Primary Attribute: Availability, mainly related to a dependent AI system becoming unavailable due to malfunction, power or network outage, glitches, capacity issues, and human error.

  Secondary Attribute: Integrity, mainly related to system behavior issues, fraudulent activity, misappropriation, and cybersecurity incidents.

  Possible Outcomes: Severity of outcome depends on how the operational component relates to revenue generation. You may experience direct revenue loss and indirect revenue loss from reputation damage.


AI Risk: Human casualty, which results from an AI malfunction or failure. This risk is mostly limited to healthcare devices, onboard systems, and operational technology.

  Primary Attribute: Integrity, mainly related to system behavior issues, fraudulent activity, misappropriation, human error, system flaws, and cybersecurity incidents.

  Secondary Attribute: Availability, which results from the integrity issue. Either the AI technology fails or the AI operator disables the AI technology to prevent further damage.

  Possible Outcomes: Severity of outcome depends on the degree of casualty and number of victims. You may experience margin erosion, direct revenue loss, indirect revenue loss due to reputation damage, and devaluation.


AI Risk: Property damage, which results from an AI malfunction or failure. This risk is mostly limited to healthcare devices, onboard systems, and operational technology.

  Primary Attribute: Integrity, mainly related to system behavior issues, fraudulent activity, misappropriation, human error, system flaws, and cybersecurity incidents.

  Secondary Attribute: Availability, which results from the integrity issue. Either the AI technology fails or the AI operator disables the AI technology to prevent further damage. Additionally, property damage may lead to further availability issues.

  Possible Outcomes: Severity of outcome depends on the degree of property damage. You may experience margin erosion, direct revenue loss, indirect revenue loss due to reputation damage, and devaluation.


AI Risk: Legal and regulatory, which results from a violation of contractual agreements, compliance requirements, and/or regulations due to use of AI technology.

  Primary Attribute: Confidentiality, mainly related to misuse or exposure of customer or employee data.

  Secondary Attribute: Integrity & Availability, mainly related to a failure to adhere to legal obligations and warranties due to malfunctions or behavior issues.

  Possible Outcomes: Severity of outcome depends on the disputed contractual agreement and/or regulatory violation. You may experience margin erosion or indirect revenue loss due to brand damage.

Special Note: The above table is abridged and does not represent all risk or all AI risk enumeration elements.
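One way to make an enumeration structure like the one above practical is to capture it as data so risks can be filtered and prioritized programmatically. The sketch below is a hypothetical illustration, not a standard schema; the field names simply mirror the columns above, and only a few abridged rows are shown.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: a few abridged rows of the risk enumeration
# structure captured as data. Field names mirror the columns above;
# this is not a standard schema.

@dataclass
class AIRisk:
    name: str
    primary_attribute: str             # Confidentiality, Integrity, or Availability
    secondary_attribute: Optional[str]
    possible_outcomes: str

RISKS = [
    AIRisk("Data loss", "Confidentiality", None,
           "Margin erosion from incident costs; revenue loss from reputation damage."),
    AIRisk("Bad decisions", "Integrity", None,
           "Margin erosion, loss of revenue, and/or devaluation."),
    AIRisk("Operational failure", "Availability", "Integrity",
           "Direct and indirect revenue loss."),
]

# Example: list every risk whose primary attribute is Integrity.
integrity_risks = [r.name for r in RISKS if r.primary_attribute == "Integrity"]
print(integrity_risks)  # → ['Bad decisions']
```

From here, an operator could attach likelihood and severity fields to each row and sort by them to prioritize AI actions.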


Since most operators of AI are not the inventors of the AI technology they use, AI operators need to consider how each AI risk scenario relates to third-party AI providers. AI operators have some control over how they use and configure the AI technology to reduce AI risks. However, their risk-reduction options are also limited by the contractual relationship with the AI technology vendor.


Defending against AI-based cyber attacks.

If you are a person, organization, or government concerned about AI-based cyber attacks, then you may want to default to your traditional cyber risk management practices.

AI gives the attacker extra advantages, such as efficiency and skill. However, AI-based attacks still rely largely on known attack methods. Therefore, you may want to use your cyber risk management practice to track the frequency and methods of emerging AI-based cyber attacks in order to improve your risk mitigation and transfer strategy.

Additionally, you don't want to lose sight of the basics. Vulnerability management, account management, access management, secure configurations, data protections, awareness training, and incident response remain effective in reducing your cyber risk condition with or without AI-based cyber attacks.


In summary.

Artificial Intelligence (AI) is a general purpose technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. The use of AI will be interpreted differently by different parties. AI is not perfect and comes with risks. All AI operators should be aware of these risks.

We can categorize AI risks into two categories: 1. AI adoption risks and 2. AI-based cyber attack risks. The definition and solution to each AI risk category is unique. AI governance helps with AI adoption risks, while traditional cyber risk management helps with AI-based cyber attack risks.

AI governance is a set of overarching responsibilities and practices to protect trade secrets, customer data, reputation, shareholder value, and many other liabilities as related to the use of AI, and an AI governance framework is a guide that expands AI governance into something that is practical and useful.

An AI risk enumeration structure can be used as a guide to better understand your AI risks, to create an AI governance framework, and to prioritize AI actions.

