How can AI actors leverage their expertise in appraising AI technology and its context to improve decision-making?

Yuri Bobbert and Vincent van Dijk | September 5, 2023

The year 2023 marked a significant milestone in Artificial Intelligence (AI). ChatGPT unleashed an AI hurricane on the world, accelerating the AI arms race between technology giants like Microsoft and Google. Alongside the enormous potential AI can bring, people, governments, and companies are increasingly concerned [1] about the potential (mis)use of AI, which may harm people's civil liberties, rights, and physical or psychological safety. Such (mis)use can lead to group discrimination or undermine democratic participation and educational access.

The Dutch General Intelligence and Security Service (AIVD) expresses similar concerns regarding the potential risks associated with AI technology. In a recent report, the AIVD calls for extra attention to the development of AI technology, warning that its rapid advancement could pose a severe threat [2]. Italian regulators even went as far as banning ChatGPT for all users within Italy, forcing OpenAI to block Italian users [3]. Microsoft has launched initiatives to promote and standardize responsible AI amid the growing concerns over AI ethics [4]. But can we rely on Big Tech's guidance in an area with such significant commercial interest?

Since the introduction of the EU AI Act, and more specifically Article 9, which mandates risk management for high-risk AI systems, there has been growing attention to conducting risk assessments.

Brecker et al. stated, “Policymakers need to provide guidelines as every day without clearer regulations means uncertainty for market participants and potentially ethically and legally undesirable AI solutions for society.” They referred to earlier concerns about “un-black boxing” Big Tech [5]. Un-black boxing is needed to enable fact-based and transparent decision-making for managers [6].

“Regulators and policymakers cannot take a “wait-and-see” approach as AI systems are already used or are being developed for their imminent deployment in sensitive areas such as healthcare, social robotics, finance, policing, the judicial system, human resources, etc.” [7]. The European Parliament recognizes that the EU cannot take a wait-and-see approach and is developing one of the first regulations on AI, paying particular attention to eight sensitive areas, including law enforcement and critical infrastructure. [19]

How can we grasp the fundamental underlying concerns of AI more collectively? How can AI actors leverage their expertise in appraising AI technology and its context to improve decision-making? Let us first look at the main challenges before detailing our suggested approach.

“Policymakers cannot take a wait-and-see approach.”

What are the main challenges with AI?

In early 2023, researchers at the Karlsruhe Institute of Technology [8] stated:

“… Artificial Intelligence (AI) assessment to mitigate risks arising from biased, unreliable, or regulatory non-compliant systems remains an open challenge for researchers, policymakers, and organizations across industries. Due to the scattered nature of research on AI across disciplines, there is a lack of overview on the challenges that need to be overcome to move AI assessment forward.”

The Karlsruhe research revolves around seven areas of concern:

- Assessing AI for ethical compliance 
- Assessing AI’s Future Impact on Society 
- Adequacy of existing law for AI assessments 
- Dealing with Conflicting Laws on Trade Secrets 
- AI-based Systems’ Explainability 
- AI-based Systems’ Transparency 
- AI-based Systems’ Impermanence

Regarding the first concern, ethical compliance, a telling example emerged when Amazon created an AI recruitment tool that favored male candidates for technical roles. The model was not explicitly programmed to prefer men; because the training data reflected historical patterns of male-dominated technical roles, the system learned to associate specific colleges with successful candidates, inadvertently disadvantaging female applicants. [9]

This example underscores the importance of ethical compliance and of addressing potential biases in AI systems. It also highlights the necessity of transparent and explainable AI: without them, you cannot validate the decisions. Furthermore, do we even want AI to make such decisions? And what about privacy?
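To make such a bias check concrete, below is a minimal sketch that computes a simple disparate-impact ratio (the “four-fifths rule”) over hypothetical screening outcomes. The data, group labels, and 0.8 threshold are illustrative assumptions for this article, not a description of Amazon's system or of any regulator's exact requirements.

```python
# Minimal sketch (hypothetical data): measuring disparate impact in screening outcomes.
# A selection-rate ratio below 0.8 between groups is a common signal of potential adverse impact.
from collections import Counter

# Hypothetical screening decisions: (group, selected?)
decisions = [
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

selected = Counter(g for g, ok in decisions if ok)   # selections per group
total = Counter(g for g, _ in decisions)             # applicants per group
rates = {g: selected[g] / total[g] for g in total}   # selection rate per group

ratio = min(rates.values()) / max(rates.values())
print(f"Selection rates: {rates}")
print(f"Disparate-impact ratio: {ratio:.2f} -> {'review needed' if ratio < 0.8 else 'ok'}")
```

Such a check is only a starting point; it flags a disparity but says nothing about its cause, which is exactly why a broader assessment is needed.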

A milestone in AI risk management

2023 also marks a milestone in risk management with the publication of the National Institute of Standards and Technology's (NIST) framework for AI, a welcome addition to its existing risk management standards. [14] The framework initiative was launched in July 2021 and was refined by approximately 400 comments from 240 organizations worldwide. The AI framework was released just before the enormous AI storm. It addresses many of the concerns raised earlier by academics, in particular the lack of empirical validation of such frameworks, which NIST addressed through its extensive foothold across industries and among AI practitioners. This is why we, as authors, consider the NIST AI framework a significant step forward and use it in our research and practice.

In this article, we summarize the critical points to further your understanding of the topic, empowering you as a reader to apply the AI appraisal method and framework yourself. We offer high-level guidance via a. the framework and b. a practical approach through Group Support Systems (GSS) for conducting the assessment, addressing the seven main concerns.

a. AI risk management framework

One of the most critical outcomes of risk management is to treat the risks most relevant to your organization, considering your specific context. Since every organization is different, defining the context seems obvious, yet it is rarely done explicitly enough. For a company that uses AI to generate legal advice, the importance and risks differ from those of a company that uses the same AI for marketing on social media.

As Long [10] points out in his book Fallibility and Risk, Living with Uncertainty: “… not identifying the assessment context and requirements and delivering functional risk identification and risk expression language that decision-makers can understand will contribute to an increased risk”. Earlier studies at Antwerp Management School [11], [12] in other industries, like Industrial Control Systems, Cybersecurity, and Smart Grids, emphasize this context refinement as a must-do in any risk assessment method [13].

The NIST framework helps frame risk, paving the path towards using the AI system and maximizing its associated opportunities. The framework allows identifying and measuring AI risks, determining the risk tolerance, prioritizing risks, and integrating the outcomes into the organization's management system (if applicable). In addition to the seven concerns highlighted by the Karlsruhe researchers, the NIST publication outlines seven essential characteristics of trustworthy AI, providing a lens to shed light on the risks of using and deploying AI. [14]

1. Valid and reliable
2. Safe
3. Secure and resilient
4. Accountable and transparent
5. Explainable and interpretable
6. Privacy-enhanced
7. Fair with harmful bias managed
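To show how these characteristics can feed the risk framing and prioritization described above, here is a minimal sketch. The weights and scores are hypothetical placeholders that an assessment team would set for its own context (for instance, legal advice versus social-media marketing); they are not part of the NIST publication itself.

```python
# Minimal sketch (hypothetical weights/scores): prioritizing the seven NIST
# trustworthiness characteristics for one specific organizational context.
CHARACTERISTICS = [
    "Valid and reliable", "Safe", "Secure and resilient",
    "Accountable and transparent", "Explainable and interpretable",
    "Privacy-enhanced", "Fair with harmful bias managed",
]

# Context weight (how much this characteristic matters for *your* use case) and
# assessed risk exposure (1 = low, 5 = high) -- both are illustrative assumptions
# that would be set during the assessment, not values prescribed by NIST.
context_weight = {"Secure and resilient": 5, "Privacy-enhanced": 4}                 # default 3
assessed_risk  = {"Secure and resilient": 4, "Explainable and interpretable": 5}    # default 2

def priority(characteristic: str) -> int:
    """Simple priority score: context weight times assessed risk exposure."""
    return context_weight.get(characteristic, 3) * assessed_risk.get(characteristic, 2)

for characteristic in sorted(CHARACTERISTICS, key=priority, reverse=True):
    print(f"{priority(characteristic):2d}  {characteristic}")
```

Characteristics with the highest combined score would be treated first in the treatment plan, which is exactly where context makes the difference between two organizations using the same AI system.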

Each of these seven characteristics deserves balanced care and attention. To discover what one of these characteristics entails, we zoom in on “secure and resilient.”

NIST outlines several common security concerns, such as data poisoning and the exfiltration of models, training data, or other intellectual property through AI system endpoints. Microsoft went even further by distilling more than 200 research papers on AI security into a comprehensive list of 17 threats. [15]


A subset of the AI threats outlined by Microsoft

Most notably, some of the threats Microsoft outlines differ from traditional cyber risks. These new threats need particular attention. We also see these threats applying in the context of ChatGPT. [15]

Threat number 3, model inversion, has been observed in ChatGPT. Model inversion involves recovering a model's so-called “secret features” through carefully crafted queries, effectively using the AI tool outside its intended use. [15]

You can access ChatGPT’s secret features through carefully crafted prompts. For example, you could activate ChatGPT’s evil twin, DAN. Do Anything Now (DAN) ignores all the ethical limitations implemented by OpenAI. DAN has extreme opinions on sensitive topics and can swear and insult people, actions that are restricted under normal use conditions.


Figure 1 ChatGPT’s Evil Twin: DAN who can ignore the content restrictions of OpenAI [17]
See figure in this article: https://www.linkedin.com/pulse/how-appraise-artificial-intelligence-ai-why-context-key-yuri-bobbert

Such attacks are referred to as “ChatGPT jailbreaking.” Jailbreaking aims to remove the restrictions, thus bypassing the intended use of the AI model. Another one of these alter egos is called Ranti, which even enables users to generate malicious code (Figure 2).


Figure 2 ChatGPT’s evil twin Ranti can write malicious code [15]
See figure in this article: https://www.linkedin.com/pulse/how-appraise-artificial-intelligence-ai-why-context-key-yuri-bobbert

A call for “thoughtful” use and creation of AI

AI systems can be prone to biases, errors, and other inaccuracies. These issues can have serious consequences, particularly in high-risk environments. By assessing these risks, organizations can identify potential biases and risks in their AI systems and take steps to correct them. Apart from biases, AI systems are also vulnerable to cyber-attacks: their underlying software can be misused just like traditional applications, and the AI models themselves introduce new ways to be misused. These attacks can compromise data and put sensitive information at risk.

AI technology makes decisions with significant social and ethical implications, such as decisions related to employment, finance, and criminal justice, and is expected to gain importance in decision-making. First, AI assessments should be used to ensure that AI systems are fair and make ethical decisions that do not perpetuate existing inequalities or biases. Second, the evaluation should investigate the risk of potential misuse and hacking.

An AI assessment within the NIST framework views the AI system from the seven angles mentioned earlier. Each angle provides a different perspective on the risks of using or building an AI system. In the screenshot below, you can see these seven perspectives placed into the agenda of the collective assessment instrument based on Group Support Systems.


Figure 3 Screenshot of the AI assessment created in Meetingwizard that looks at the seven angles of potential AI risk. The screenshot shows a meeting agenda that participants use to discuss AI together.
See figure in this article: https://www.linkedin.com/pulse/how-appraise-artificial-intelligence-ai-why-context-key-yuri-bobbert

b. The use of Group Support Systems

In a standard setting, a risk assessment of an AI system would be done by one person or by a separate group of people. Although this would get the job done, the approach risks producing biased results. AI systems can have a wide range of potential impacts on different groups of people, so it is best to involve multiple stakeholders to help identify and understand potential risks and consequences that may not be apparent to a single person.


A GSS-based assessment method focuses on gathering the collective views and opinions of all AI actors. These AI actors vary across the AI lifecycle but share a joint responsibility for managing the risks of AI. This group can reflect a diversity of experience, backgrounds, and expertise. Using a GSS-based assessment, you reduce the risk of bias by doing the evaluation collectively.

A self-assessment, or even a one-on-one assessment, has enormous limitations. It is risky due to a range of cognitive biases, including confirmation bias, individual biases, pretending your situation is better than it (f)actually is, or even lying about the actual condition. It also lacks the interaction and discussion between stakeholders who hold a different opinion or an outside-in view.

There are other methods to gain stakeholders' perspectives, like surveys, interviews, and focus groups. However, only GSS encourages all participants to share their opinions without fear of judgment or organizational rank (anonymous participation), reducing social desirability bias. Additionally, GSS employs structured argumentation for decision-making to ensure that all viewpoints are considered, minimizing bias from dominant or vocal participants.

The AI risk assessment can be conducted throughout the entire AI lifecycle. This is needed since usage, technology, and regulatory requirements change rapidly.

The group's collective brain is mobilized via GSS to establish a more sincere view of the current situation while simultaneously developing a feasible theory of the desired situation. The outcome of such a joint exercise is a transparent action plan. 


Figure 4 Assessing an AI system on “Secure and Resilient” using Meetingwizard on the participant’s mobile phone. Participants can rate each threat on its importance.
See figure in this article: https://www.linkedin.com/pulse/how-appraise-artificial-intelligence-ai-why-context-key-yuri-bobbert

Academic research has shown that conducting a collective GSS-based assessment contributes to:

a. An improved discussion that generates unique and valid viewpoints and ideas, leading to a representative view of the situation. Significant variance between participants' ratings is discussed and, if needed, (re)voted on to reach consensus (a simple sketch of such a variance check follows this list).

b. Group scrutiny that leads to a richer view and understanding, resulting in knowledge transfer and increased awareness (for example, about accountability and responsibility).

c. Collectively committing to a goal, roadmap, and ownership of actions, leading to a more sustainable implementation of the AI strategy.
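As a rough illustration of point (a), the following sketch aggregates anonymous importance ratings per threat and flags items whose spread exceeds a threshold for discussion and a possible revote. The ratings, scale, and threshold are hypothetical assumptions; they do not describe Meetingwizard's actual internals.

```python
# Minimal sketch (hypothetical ratings): flag threats with high rater disagreement
# for discussion and a possible revote, as a GSS facilitator might do.
from statistics import mean, pstdev

# Anonymous importance ratings (1 = low, 5 = high) per threat -- illustrative only.
ratings = {
    "Data poisoning":     [4, 5, 4, 5, 4],
    "Model inversion":    [2, 5, 1, 4, 5],   # wide spread -> worth discussing
    "Model exfiltration": [3, 3, 4, 3, 3],
}

REVOTE_THRESHOLD = 1.0  # standard deviation above which consensus is questionable

for threat, votes in ratings.items():
    spread = pstdev(votes)  # population standard deviation of the votes
    verdict = "discuss & revote" if spread > REVOTE_THRESHOLD else "consensus"
    print(f"{threat:18s} mean={mean(votes):.1f} spread={spread:.2f} -> {verdict}")
```

In a real session, the facilitator would show such a distribution to the group and open the discussion on the divergent items before re-voting.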

Discussion

This objectivity, validity, sincerity, and collectivity become increasingly relevant when judging and understanding an AI technology or algorithm. GSS allows you to get all actors and expert opinions at the table and to scrutinize them through discussion: elaborating on issues, discussing social or ethical barriers, and, from a technology perspective, understanding how the AI configuration or parameters operate. It also helps determine whether you want to develop the AI application in-house or outsource its development. Next, HR staff can determine where maintenance and technical capabilities must be allocated; in other words, are we recruiting for this or outsourcing it?

Conclusion

The AI Act requires a risk assessment (like the one proposed here) and the registration and treatment of these risks, so that all AI actors are collectively aware and can act accordingly, instead of leaving it shrouded in mystery as it is now.

The AI assessment framework provided by NIST and applied via a Meetingwizard Group Support System methodology provides just that: an assessment of the context and required capabilities of the organization.

This AI assessment method also guides the implementation of treatment plans to mitigate a particular risk or leverage a specific opportunity, and it allows organizational decision-makers to develop an AI (exit) strategy.


Bibliography
[1] A. R. Chow and B. Perrigo , 'The AI Arms Race Is Changing Everything,' 17 02 2023. [Online]. Available: https://time.com/6255952/ai-impact-chatgpt-microsoft-google/.
[2] AIVD, 'AI systems: develop them securely,' 15 02 2023. [Online]. Available: https://english.aivd.nl/latest/news/2023/02/15/ai-systems-develop-them-securely.
[3] A. Scroxton, 'Italy’s ChatGPT ban: Sober precaution or chilling overreaction?,' Computerweekly, 5 4 2023. [Online]. Available: https://www.computerweekly.com/news/365534355/Italys-ChatGPT-ban-Sober-precaution-or-chilling-overreaction. [Accessed 17 4 2023].
[4] Microsoft, 'Solving the challenge of securing AI and machine learning systems,' 6 12 2019. [Online]. Available: https://blogs.microsoft.com/on-the-issues/2019/12/06/ai-machine-learning-security/.
[5] S. Yanisky-Ravid and S. K. Hallisey, ' Equality and privacy by design.,' Fordham Urban Law Journal, 46(2), p. 428–486., 2019.
[6] J. Lansing, A. Benlian and A. Sunjaev, '“Unblackboxing” Decision Makers’ Interpretations of IS Certifications in the Context of Cloud Service Certifications,' Journal of the Association for Information Systems, 19(11), p. Available at: https://aisel.aisnet.org/jais/vol19/iss11/3, 2018.
[7] J. Maclure, 'AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind,' Minds and Machines, https://doi.org/10.1007/s11023-021-09570-x, 2023.
[8] K. Brecker, S. Lins and A. Sunyaev, 'Why it Remains Challenging to Assess Artificial Intelligence,' Proceedings of the 56th Hawaii International Conference on System Sciences | 2023, pp. 5242-5251, URI: https://hdl.handle.net/10125/103275, 2023.
[9] R. Goodman, 'Why Amazon’s Automated Hiring Tool Discriminated Against Women,' aclu, 12 10 2018. [Online]. Available: https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against. [Accessed 17 4 2023].
[10] R. Long, “Fallibility and Risk”, Living With Uncertainty, https://safetyrisk.net/fallibility-and-risk-free-download-of-dr-longs-latest-book/, 2018.
[11] N. Kuijper, “Identifying the human aspects of cyber risks using a behavioural cyber risk context assessments template”, Antwerpen: Antwerp management school, 2022.
[12] A. Domingues-Pereira-da-Silva and Y. Bobbert, 'Cybersecurity Readiness: An Empirical Study of Effective Cybersecurity Practices for Industrial Control Systems,' Scientific Journal of Research & Reviews (SJRR) : DOI: 10.33552/SJRR.2019.02.000536, pp. Volume 2, Issue 3, 2019.
[13] A. Refsdal, B. Solhaug and K. Stolen, Context Establishment, Chapter 6; https://link.springer.com/chapter/10.1007/978-3-319-23570-7_6: SpringerLink: DOI: 10.1007/978-3-319-23570-7_6, 2015.
[14] NIST, 'AI Risk Management Framework,' 26 1 2023. [Online]. Available: https://www.nist.gov/itl/ai-risk-management-framework.
[15] Microsoft, 'Failure Modes in Machine Learning,' 11 2 2022. [Online]. Available: https://learn.microsoft.com/en-us/security/engineering/failure-modes-in-machine-learning.
[16] B. Xia, Q. Lu, P. Harsha, Z. Liming, X. Zhenchang and L. Yue, “A Survey on AI Risk Assessment Frameworks”, University of New South Wales, Sydney, Australia: arXiv:2301.11616v2, 2023.
[17] 'DAN, 7.0 Presenting,' 22 3 2023. [Online]. Available: https://www.reddit.com/r/ChatGPT/comments/110w48z/presenting_dan_70/.
[18] L. Kiho, 'ChatGPT DAN,' [Online]. Available: https://github.com/0xk1h0/ChatGPT_DAN. [Accessed 27 3 2023].
[19] European Parliament, “EU AI Act: first regulation on artificial intelligence” [Online]. Available: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on- artificial-intelligence [Accessed 16 8 2023]

E-book: 25 years of Group Support Systems

In this book, the authors describe the developments Group Support Systems (GSS) have gone through over the past 25 years. They outline examples of how GSS offers solutions for meeting dilemmas and group dynamics, and how to achieve consensus and make decisions in a pleasant way using GSS. Various experts in the field of GSS have contributed to the book. Finally, they paint a picture of the coming years, in which technology will increasingly support group processes and implementation, and in which the role of the traditional chairperson will transform from process facilitator to meeting 'wizard'.