
Explainable AI for Enterprise Applications

By Benny Cheung, Senior Technical Architect, and Loren Zimmermann, Co-op Student (Alumnus)

August 07, 2019

Machine Learning (ML) offers remarkably accurate prediction capability, especially when it is coupled with Artificial Neural Networks (ANNs). However, even the designers of these platforms are often unable to explain how the models arrive at specific decisions, due to the "black box" nature of some of these constructs. This presents a unique ethical and operational challenge to the enterprises championing the adoption of ML. Businesses have a moral and legal obligation to adhere to government regulations designed to protect human rights and maintain fairness in their practices. Without the ability to trace how a business decision is made through the ML platform, organizations are justifiably reluctant to adopt this new technology, especially in consumer-facing applications, over the concern of unwittingly violating such regulations.

In this article, we will outline the role played by Explainable AI (XAI) in the overall adoption of enterprise AI. XAI refers to AI techniques whose decisions can be trusted and easily understood by humans, and it is instrumental in alleviating the concerns shared by many businesses adopting the technology. Following this discussion, we will outline the practical application of XAI in regulation compliance and demonstrate how it can bridge the gap between AI and regulatory inquiries into the decision process.

Answers to Enterprise Business Concerns

Let's begin with three business use cases involving AI technology that showcase the importance of explainability:

  • Case #1: For consumer protection, credit risk models must by law offer reasons for declining a credit application.
  • Case #2: To facilitate prompt response and preventive care with little time lost to unnecessary additional diagnoses, Predictive Maintenance models for complex systems should justify with clear evidence why they flag a potential maintenance issue.
  • Case #3: It was discovered a few years after the fact that the AI model Amazon used in hiring decisions was producing gender-biased recommendations, and therefore gender-biased results.

Negative incidents stemming from unexplainable AI predictions have far-reaching impact and consequences. They turn into public relations nightmares and diminish the trustworthiness of a brand. Therefore, the true measure of an AI solution goes beyond the mere accuracy of its predictions to include the explainability of its decisions.

Explainability is important to the four key areas of the full AI life cycle [Kaushik18].

Figure 1. The full solution life cycle of AI in four areas of concern

  • Optimize - The desire to understand and improve the model. Explanations help pinpoint issues in data and feature behavior, and domain experts can collaboratively improve the decision-making process, which in turn improves confidence in the decision results.
  • Retain - Clearly define the AI system's behavior and boundaries, and consequently provide safety guidelines and alerts when they are violated.
  • Maintain - Trace through the reasoning behind each and every decision. Auditors can monitor for ethical issues and violations caused by bias in the training data, and this ongoing monitoring and maintenance builds trust with stakeholders.
  • Comply - Comply with regulatory requirements (such as GDPR), where a "Right to Explanation" is a must-have for a system. Use explainability to monitor AI so that operations can meet accountability requirements within the organization for auditing and other purposes.

We will show an example of healthcare regulation explainability in a later section. In addition, readers can refer to our recent presentation at the AI Geeks Meetup for a further exposition [Cheung19].

What makes AI Explainable?

As explainability plays an increasingly prominent role in the development and implementation of AI solutions, we must take a step back to define “explainability” before delving into what makes AI explainable.

Explaining in the Cultural Context

It would be natural for us to assume that what is explainable must inexorably be expressed in a language that we can understand. Ludwig Wittgenstein, a renowned philosopher of analytic philosophy, famously stated that if a lion could speak, we could not understand him [Wittgenstein58].

Figure 2. Ludwig Wittgenstein quote from Philosophical Investigations, p. 223

The statement seems paradoxical because we naturally, and erroneously, assume that if a lion could speak English, we would be able to understand it. For Wittgenstein, the words themselves do not convey much meaning; rather, they express an intent that is confined within a particular context, a context grounded in our shared culture and experiences.

Computing professionals are guilty of using obscure acronyms and terms. For example, we may say "Dockerized blah blah on AWS that blah blah with Kubernetes" at a dinner party. Those who are not educated in computer science will not understand what we are talking about, despite the fact that the words are in English. Hence the mantra "language is in its usage": shared context is the most important element in communicating or explaining.

The two key features of an explanation that enhance human understanding are: (1) the right level of abstraction and (2) a human level of reasoning.

Explaining at the Right Level of Abstraction

Even if we share the same culture and context, an explanation still needs to fit the appropriate level of abstraction for its audience. For example, when a pilot discovers that an airplane cannot take off because of an engine problem, the pilot will simply explain that "The plane is having a technical issue, so we cannot take off at this time". This is both the right level of detail and a satisfactory explanation for the passengers. However, the conversation between the pilot and the aircraft mechanics will involve a lot of technical detail, because it represents a different context and a different level of explanation.

This will lead us to ask the correct question for Explainable AI: How does AI provide the right context with an acceptable level of human interpretability?

Let us consider an example. When you see the recognizable form of an old friend named Bill from afar, you may call out the greeting "Bill!" immediately, without waiting to visually confirm the figure's identity up close. Someone may ask, "Why did you yell out Bill?" It is unlikely that you would choose a very low perceptive level and explain which visual cells in your eyes received pixel values that looked like Bill. Typically, we assess the symbolic representation of "Bill" as an old friend and answer at that symbolic level of reasoning. It follows that the right level of abstraction in human culture is the symbolic level of explanation. Although we can trace back to the supporting perceptive elements, these elements do not need to be referenced until a deeper tracing stage.

Explaining in Human Level of Reasoning

Humans need symbolic explanation to understand an AI model's reasoning. Symbolic explanation is an old phenomenon, a product of a former AI paradigm coined Good Old-Fashioned AI (GOFAI). Symbolic reasoning is implemented through rule engines or expert systems [GiarratanoRiley04], a proven technology of the 1980s.

While current investments in AI/ML technology are mostly focused on statistical techniques using deep neural networks, because of their adaptive ability to learn from real-world data (in particular, noisy data such as images and speech), expert systems are making a comeback. To improve human-level explainability, which is sorely lacking in perceptive neural networks, there is an active trend toward reviving symbolic processes. According to AI experts, the two ends have to meet somewhere in the middle.

Furthermore, by tapping into the wealth of Data Mining and Knowledge-Based Management Systems (KBMS), we understand how to make large quantities of facts usable in AI reasoning tasks. If human-level symbolic facts are fed into a rule-based system, the reasoning engine can search through a set of domain-specific rules using either backward chaining or forward chaining. The most important business value is ensuring that the reasoning steps are traceable and explainable, grounded in the original truthful observations from the human context.
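To make this concrete, here is a minimal backward-chaining sketch in Python. It is our own illustration under simplified assumptions, not code from any of the systems cited in this article: a `Rule` names the clause it encodes, the symbolic fact it can establish, and the conditions that must all hold first, while `prove` works backwards from a goal and records every rule that fires so the line of reasoning can be replayed.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Rule:
    name: str              # label for the rule, e.g. the regulation clause it encodes
    conclusion: str        # symbolic fact this rule can establish
    conditions: List[str]  # symbolic facts that must all be true first

def prove(goal: str, rules: List[Rule], facts: set, trace: List[Rule]) -> bool:
    """Backward chaining: try to establish `goal` from known facts and rules.
    Every rule that successfully fires is appended to `trace` (supporting rules
    first), so the reasoning can be explained afterwards. A real engine would
    also prune rules whose parent goal ultimately fails."""
    if goal in facts:      # the goal is already an observed symbolic fact
        return True
    for rule in rules:
        if rule.conclusion == goal and all(
            prove(cond, rules, facts, trace) for cond in rule.conditions
        ):
            trace.append(rule)
            return True
    return False

# Tiny usage example with made-up symbolic facts:
rules = [Rule("R1", "ready_for_takeoff", ["engines_ok", "weather_ok"])]
trace: List[Rule] = []
print(prove("ready_for_takeoff", rules, {"engines_ok", "weather_ok"}, trace))  # True
print([r.name for r in trace])                                                 # ['R1']
```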

Explainable AI in Regulation Compliance

Now that we have established that the reasoning can be traced to a human context via expert rules, we will turn our attention to the practical enterprise problem of regulation compliance as an example to illustrate how XAI techniques are applied. To be clear, modeling explainability with expert rules is not the only approach in XAI. However, expert system techniques are particularly attractive in the case of rule-oriented regulation compliance.

We will use the Health Insurance Portability and Accountability Act (HIPAA) in the US as an illustrative example of explainability. HIPAA regulates the transfer of patient medical information, depending on the type of disclosure and patient consent. The issue is that enforcing this law manually is virtually impractical due to the volume of information exchanged in typical healthcare interactions. With AI technology, not only is it feasible to achieve compliance, it is now possible to explain to regulators and administrators how decisions are made in the course of meeting and maintaining such compliance requirements.

Example: HIPAA Rule - 164.508

To support the description that follows, let's read a small excerpt from HIPAA rule 164.508 - Uses and disclosures for which an authorization is required [Cornell19]:

(a) Standard: Authorizations for uses and disclosures -
        (1) Authorization required: General rule. Except as otherwise permitted or required by this subchapter, a covered entity may not use or disclose protected health information without an authorization that is valid under this section. When a covered entity obtains or receives a valid authorization for its use or disclosure of protected health information, such use or disclosure must be consistent with such authorization.
        (2) Authorization required: Psychotherapy notes. Notwithstanding any provision of this subpart, other than the transition provisions in § 164.532, a covered entity must obtain an authorization for any use or disclosure of psychotherapy notes, except:
            (i) To carry out the following treatment, payment, or health care operations:
                (A) Use by the originator of the psychotherapy notes for treatment;
                (B) Use or disclosure by the covered entity for its own training programs in which students, trainees, or practitioners in mental health learn under supervision to practice or improve their skills in group, joint, family, or individual counseling; or
                (C) Use or disclosure by the covered entity to defend itself in a legal action or other proceeding brought by the individual; and
            (ii) A use or disclosure that is required by § 164.502(a)(2)(ii) or permitted by § 164.512(a); § 164.512(d) with respect to the oversight of the originator of the psychotherapy notes; § 164.512(g)(1); or § 164.512(j)(1)(i)

...

The HIPAA rule above maps naturally onto the backward inference strategy of a rule-based expert system. For instance, if we want to check whether the top-level HIPAA clause is satisfied, we progressively check all of its sub-clauses, as illustrated in Figure 3; that is the backward inference strategy.

Figure 3. A HIPAA clause is progressively supported by sub-clauses, each of which must be determined to be true or false.

For further technical understanding, interested readers can refer to the research by Sundaram of Stanford University, a rule-based system implemented in Prolog, as described in [LamMitchell18]. In this article, we shall concentrate only on how XAI can be approached through the explanation facility of expert systems.

Explanation Facility

A notable characteristic of expert systems is their ability to explain their reasoning. Expert systems include an additional module called the explanation facility. Using this facility, an expert system can explain to the user why it is asking a question and how it reached a given conclusion.

The explanation facility is beneficial to both the system's developer and the system’s user. The developer can use it to uncover errors in the system's knowledge. The user benefits from the transparency inherently programmed into the system's reasoning.

We will now introduce the concept of formalization using 164.508.a.2 of HIPAA. As a whole, 164.508 governs uses and disclosures of protected health information that require an authorization. Specifically, 164.508.a.2 states, among other things, that a covered entity must obtain an authorization for any use or disclosure of psychotherapy notes, except when they are used by the originator of the psychotherapy notes for treatment.
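Under a scheme like the backward-chaining sketch shown earlier, the relevant parts of 164.508.a.2 could be written down as rules. The predicate names below are hypothetical simplifications of ours, not the predicates used in the [LamMitchell18] formalization; the "valid authorization or originator-for-treatment exception" disjunction is expressed as two rules that share the same conclusion.

```python
# Illustrative encoding only; the fact names are hypothetical simplifications.
hipaa_rules = [
    # 164.508.a.2: psychotherapy notes may be used or disclosed with a valid authorization.
    Rule("164.508.a.2", "disclosure_permitted",
         ["is_psychotherapy_notes", "has_valid_authorization"]),
    # 164.508.a.2.i.A: exception - use by the originator of the notes for treatment.
    Rule("164.508.a.2.i.A", "disclosure_permitted",
         ["is_psychotherapy_notes", "use_by_originator_for_treatment"]),
]
```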

Explaining The How

In addition to providing a final result, both human experts and expert systems can explain how they arrived at that result. The validity of a system's findings is often called into question, which requires that a justification be given to support the results.

For example, a typical explanation provided by a human expert for the final result might be:

  • Expert: The covered entity transmit action is forbidden
  • Person: How?
  • Expert: Since 164.508.a.2 states, among other things, that a covered entity must obtain an authorization for any use or disclosure of psychotherapy notes

This type of explanation follows the reasoning process used by the expert. Expert systems respond to a "How" query in a similar fashion by tracing back through the rules that led to the conclusion in the first place. This tracing is a map of the system's line of reasoning. The user will be more confident in the result when he or she can see the rationale behind the recommendation.
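With the sketch introduced earlier, answering "How?" amounts to replaying the rules recorded in the trace, from the supporting facts up to the conclusion. The facts below are illustrative, continuing the hypothetical encoding of 164.508.a.2.

```python
# Answering "How?" by replaying the recorded rule trace (illustrative facts).
facts = {"is_psychotherapy_notes", "use_by_originator_for_treatment"}
trace = []
if prove("disclosure_permitted", hipaa_rules, facts, trace):
    for rule in trace:
        print(f"Because {rule.name}: IF {' AND '.join(rule.conditions)} "
              f"THEN {rule.conclusion}")
# Because 164.508.a.2.i.A: IF is_psychotherapy_notes AND
#   use_by_originator_for_treatment THEN disclosure_permitted
```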

Explaining The Why

An expert system can also explain why it is asking a given question. When an individual consults with a human expert, the conversation is highly interactive. On occasion, the individual may ask the expert why he is pursuing a certain line of reasoning. The given explanation can make the user feel more comfortable with the line of questions and can also provide insight as to what issues the expert believes are important.

  • Expert: Are the psychotherapy notes requested by the originator for treatment?
  • Person: Why?
  • Expert: If I know that the action is requested by the originator of the psychotherapy notes for treatment, then the exception rule 164.508.a.2.i.A can be applied. The exception states that "Use by the originator of the psychotherapy notes for treatment" is exempt from the authorization requirement.

When asked why a question is posed, experts respond by describing what they might conclude from the answer. Most expert systems respond to a "Why" query by displaying the rule it is currently pursuing.
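A "Why?" answer falls out of the same structure: when the engine has neither a fact nor a rule for a subgoal, it asks the user, and if the user asks why, it displays the rule it is currently pursuing. The interactive variant below is again a sketch built on the earlier `Rule` and `prove` pieces, not a production dialogue manager.

```python
def prove_interactive(goal, rules, facts, pursuing=None):
    """Like prove(), but asks the user about unknown facts and answers "why"
    by showing the rule currently being pursued."""
    if goal in facts:
        return True
    matching = [r for r in rules if r.conclusion == goal]
    if not matching:  # nothing concludes this goal, so ask the user
        answer = input(f"Is '{goal}' true? (yes/no/why) ").strip().lower()
        if answer == "why" and pursuing is not None:
            print(f"I am trying to apply rule {pursuing.name}: "
                  f"IF {' AND '.join(pursuing.conditions)} THEN {pursuing.conclusion}")
            answer = input(f"Is '{goal}' true? (yes/no) ").strip().lower()
        if answer == "yes":
            facts.add(goal)  # record the answer in working memory
            return True
        return False
    return any(
        all(prove_interactive(cond, rules, facts, pursuing=rule)
            for cond in rule.conditions)
        for rule in matching
    )

# e.g. prove_interactive("disclosure_permitted", hipaa_rules, {"is_psychotherapy_notes"})
# asks about the missing facts and, on "why", explains which clause it is pursuing.
```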

Interface

The interaction between an expert system and its user is conducted in a natural-language style. It is also highly interactive and closely follows a conversation between humans. To conduct this process in a manner that is acceptable to the user, special requirements are placed on the design of the user interface.

A basic design requirement of the interface is for the expert system to ask questions. To obtain reliable information from the user, particular attention must be paid to the design of the questions; this may require menus, graphics, or tailor-made screens. The user may also request the ability to explore or change the information contained within working memory. This feature can be important in applications where the user may want to change the answer to a prior question. We must be aware of the user's requirements and design the interface to support the "Why" and "How" interactive question-and-answer exchange between the human user and the AI.

We can be confident that combining human expert rules with machine intelligence enhances explainability. XAI will build bridges of trust so that we, humans, can adapt, work, and excel with machine intelligence in our various endeavors.

Can Explainable AI Succeed?

Amongst the current efforts to create XAI, the most notable is DARPA's XAI program. The US Department of Defense's Defense Advanced Research Projects Agency (DARPA) launched the Explainable Artificial Intelligence (XAI) project to develop a software library toolkit for explainable AI [Launchbury17]. In May 2018, researchers applied these explainable techniques to machine learning problems to demonstrate the initial implementation of their explainable AI systems. The techniques are not exclusive to neural networks; contextual, explainable constructs can also be built with methods such as linear regression and decision trees.

Expert system technology is part of the overall solution to bridging the gap between perceptive and symbolic reasoning needs for explainability. However, XAI systems are still in the early stages of research. The good news is that building fair, accountable, and transparent machine learning systems is possible, as demonstrated by the popular AI/ML frameworks that enable explainability [Hall18] [HallGill18]. The bad news is that it is harder than many blogs and software package documents would have you believe. The truth is that nearly all interpretable machine learning techniques generate approximate explanations that require strong domain expertise to evaluate. The added duty of XAI researchers will continue to be the promotion of fairness, accountability, and transparency in AI. We will continue to need new ways of understanding how and why technologies like AI operate the way they do, and XAI is a critical field of thought that will be central to the future of enterprise AI deployment.

We are hopeful that one day we will be able to paraphrase Ludwig Wittgenstein by saying, "If AI could speak, we could understand it!"

References

  • [Cheung19] Cheung B. (2019), Explainable AI for Enterprise System, AI Geeks Meetup, Toronto, video link, slides link
  • [Cornell19] Cornell Law School (2019), Legal Information Institute, Digital version of 164.508, website link
  • [GiarratanoRiley04] Giarratano J, Riley G. (2004), Expert Systems: Principles and Programming, Fourth Edition, Course Technology, ISBN: 978-0534384470
  • [Hall18] Hall P. (2018), Building Explainable Machine Learning Systems: The Good, the Bad, and the Ugly, pdf link
  • [HallGill18] Hall P., Gill N. (2018), An Introduction to Machine Learning Interpretability, 2018 O'Reilly Media, Inc, ISBN:978-1-492-05029-2
  • [Kaushik18] Kaushik S. (2018), Enterprise Explainable AI, website link
  • [LamMitchell18] Lam P.E., Mitchell J.C., Sundaram S. (2018) A Formalization of HIPAA for a Medical Messaging System. In: Fischer-Hübner S., Lambrinoudakis C., Pernul G. (eds) Trust, Privacy and Security in Digital Business. TrustBus 2018. Lecture Notes in Computer Science, vol 5695. Springer, pdf link
  • [Launchbury17] Launchbury J. (2017), A DARPA Perspective on Artificial Intelligence, pdf link
  • [Wittgenstein58] Wittgenstein L. (1958), Philosophical Investigations, the English text of the third edition, Basil Blackwell Ltd, UK, ISBN:0-631-14670-9
