
AI is a rapidly evolving technology that has the potential to transform how we live and work. But with its promise comes the question of trust. How can we trust that AI will act in the best interest of humans and society? This is an important question to consider, as AI is being used in more and more areas of our lives, from healthcare to transportation to finance. In this article, we’ll explore the concept of trust in AI, the barriers to trustworthiness, and what’s next for AI in terms of building trust.

What are the Barriers to Trustworthiness?

Several barriers must be overcome before users can reasonably trust AI. The first is transparency and explainability. Today’s advanced AI systems are largely opaque: it is difficult to understand exactly how they reach their decisions. This opacity breeds mistrust, because users cannot see what the system is doing or why. Compounding the problem, most AI systems cannot explain their decisions, which is especially troubling when those decisions are wrong or have unintended consequences.
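One simple way to make a decision less opaque is to expose how much each input contributed to the output. The sketch below is purely illustrative: the weights and candidate features are hypothetical, and a plain linear scoring model is used because its decisions decompose exactly into per-feature contributions (real explainability methods approximate this idea for more complex models):

```python
# A minimal sketch of feature attribution for a transparent model.
# The weights and features here are hypothetical, chosen for illustration.
weights = {"experience_years": 0.5, "relevant_degree": 0.3, "typo_count": -0.8}
candidate = {"experience_years": 4, "relevant_degree": 1, "typo_count": 2}

# For a linear model, each feature's contribution to the score is just
# weight * value, so the decision can be explained term by term.
contributions = {f: weights[f] * candidate[f] for f in weights}
score = sum(contributions.values())

# Print contributions from most to least influential.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:16s} {c:+.1f}")
print(f"total score      {score:+.1f}")
```

Because the score is a sum of visible terms, a user can see at a glance which feature drove the outcome; opaque models offer no such decomposition, which is the gap explainability research tries to close.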

It is also important to remember that AI systems do exactly what they are programmed to do [1]. The instructions given to the system therefore matter enormously: any mistakes or oversights in the programming can lead to incorrect decisions [1]. On the other hand, AI systems are consistent: given the same input, they will always make the same decision [1]. This can be beneficial, as it helps to avoid human inconsistencies and snap judgments [1].

Take, for example, an AI system trained to identify the resumes of candidates most likely to succeed at a company [1]. The system is given a data set of resumes and trained to recognize the patterns that distinguish successful hires [1]. However, no data set is perfectly objective; each comes with baked-in biases, or assumptions and preferences [1]. If past hiring favored a particular group, the system will learn and reproduce that preference. The same limitation appears in other domains: an image classifier trained only on photos of North American birds taken in daytime will fail to identify birds from other parts of the world, or birds photographed at night. In every case, AI systems are limited by the data they are given.
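A toy sketch makes the point concrete. The data below is hypothetical and the "model" is deliberately naive: it scores candidates by historical hire rates, so it simply reproduces whatever bias the history contains:

```python
from collections import Counter

# Hypothetical historical hires, skewed toward school "A" by past bias.
historical_hires = ["A", "A", "A", "A", "B"]

def predict_success(school, data):
    """Naive model: predicted success = past hire rate for that school."""
    counts = Counter(data)
    return counts[school] / len(data)

print(predict_success("A", historical_hires))  # favoured by the biased history
print(predict_success("B", historical_hires))  # under-represented in the data
print(predict_success("C", historical_hires))  # never seen, so scored zero
```

Nothing in the code is "unfair" on its face; the bias lives entirely in the training data, which is exactly why curating that data matters so much.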

Uncertainty Measures

Another active area of research is designing AI systems that are aware of their own uncertainty and can give users accurate measures of confidence in their results [1]. This is a complicated technical task, as an AI system must assess the reliability of its own predictions and report a measure of certainty alongside them [1]. Such measures are important for trust, because they let users understand how confident the system is in each decision it makes.
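The most basic version of such a measure is the probability a classifier assigns to its own top prediction. The sketch below uses hypothetical scores and labels, converting raw model outputs into a label plus a confidence via softmax; note that raw softmax confidence is often poorly calibrated (models can be confidently wrong), which is precisely why this remains an active research area:

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(logits, labels):
    """Return the top label and the probability the model assigns to it."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical output scores from some image classifier.
label, confidence = predict_with_confidence([2.0, 0.5, 0.1],
                                            ["cat", "dog", "bird"])
print(f"{label}: {confidence:.2f}")
```

A well-designed system would refuse to act, or escalate to a human, when this confidence falls below a threshold; making the reported number actually match real-world accuracy is the hard part.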

Trust in Artificial Intelligence: A Foundational Trust Framework

Trust is an important factor when considering AI systems. Even in casual interactions, trust happens in the background, barely below conscious awareness; nonetheless, it remains critical [2]. The concept of trust in Luhmann’s works spans from personal trust towards individuals, as well as system trust towards social systems [2]. In complex social orders, “trust in systems” supports our ability to connect with the decisions taken by others in that social system [2].

Luhmann (Luhmann, 1995; Luhmann, 2018; Luhmann & Gilgen, 2012) developed a theory of trust and proposed conceptualizing trust as a mechanism for interacting with social systems [2]. For example, trust in a personal trainer occurs in the absence of full knowledge of the inner workings of the trainer’s brain [2]. Organizations, such as corporations or universities, are social systems composed of humans and other systems (e.g., human artifacts) [2]. Some contents of the human mind can be conceptualized as conceptual systems (Bunge, 1979); that is, interconnected ideas, thoughts, propositions, and theories [2]. Since we assume that all systems are open systems, system openness is not a matter of kind but of degree [2]. Likewise, when two people meet and like each other, they may decide to get married, thereby creating a new social system: a family [2].

Instilling Human Values in AI

As AI becomes more pervasive, so too does the concern over whether we can trust that it reflects human values [3]. To explore this question, we spoke to 30 AI scientists and leading thinkers [3]. They told us that building trust in AI will require a significant effort to instill in it a sense of morality, operate in full transparency, and provide education about the opportunities it will create for businesses and consumers [3].

One example of instilling human values in AI comes from a facial recognition company. The company selects a culturally diverse set of images from more than 75 countries to train its AI system to recognize emotion in faces [3]. It also hand-labels every image with its corresponding emotion and tests every AI algorithm to verify its accuracy [3]. This is an example of how AI can be built to reflect human values and earn trust.

What’s Next for AI – Building Trust

AI is no longer the future—it’s now here in our living rooms and cars and, often, our pockets [3]. As the technology has continued to expand its role in our lives, an important question has emerged: What level of trust can—and should—we place in these AI systems [3]?

The answer to this question is complex, as AI can be used for both good and ill. We are more than capable of harnessing AI for justice, security, and opportunity for all [4]. But it can also be used for other types of social impact in which one man’s good is another man’s evil [3]. To push the technology toward the good, the US government has issued an executive order on the safe, secure, and trustworthy development and use of artificial intelligence [4]. The order outlines steps agencies should take to reduce the risks associated with AI, such as building tools to evaluate AI capabilities and establishing model guardrails.

The order also encourages agencies to prioritize the allocation of grants to opportunities that will help to ensure trust in AI [4]. Additionally, the order calls for the development of fellowship programs and awards to support the development of AI that reflects human values and is trustworthy [4].

Conclusion

Building trust in AI is a complex task that requires collaboration across scientific disciplines, industries, and government. To ensure that AI is used for good, we must instill in it a sense of morality, operate in full transparency, and provide education about the opportunities it will create for businesses and consumers [3]. The US government has taken steps in this direction by issuing an executive order on the safe, secure, and trustworthy development and use of artificial intelligence [4], which directs agencies to reduce the risks associated with AI and to prioritize grants, fellowships, and awards that support trustworthy AI. With the right steps, we can ensure that AI is both used for good and worthy of our trust.

References:

  1. https://scienceexchange.caltech.edu/topics/artificial-intelligence-research/trustworthy-ai
  2. https://link.springer.com/article/10.1007/s12525-022-00605-4
  3. https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/building-trust-in-ai.html
  4. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

